The SQL Server documentation informs me that:
- When a field has the char datatype, every value stored takes up as many bytes as the defined maximum length;
- When a field has the varchar datatype, every value stored takes up only as many bytes as the value itself needs (see the quick check below).
It then goes on to tell me that:
"Use char when the data entries in a column are expected to be consistently close to the same size.
Use varchar when the data entries in a column are expected to vary considerably in size."
It doesn't mention why, however. What reason(s) do I have to use char over varchar? Do I gain performance by doing so?
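To make the storage difference concrete, here is a quick check with DATALENGTH (the table name dbo.CharVsVarcharDemo is just a throwaway name for illustration):

```sql
-- Throwaway table just to compare stored sizes (illustrative only)
CREATE TABLE dbo.CharVsVarcharDemo (
    FixedCol    char(10),
    VariableCol varchar(10)
);

INSERT INTO dbo.CharVsVarcharDemo (FixedCol, VariableCol)
VALUES ('abc', 'abc');

-- DATALENGTH returns the number of bytes actually stored:
-- FixedBytes comes back as 10 (char pads to the full declared length),
-- VariableBytes comes back as 3 (varchar stores only what's needed).
SELECT DATALENGTH(FixedCol)    AS FixedBytes,
       DATALENGTH(VariableCol) AS VariableBytes
FROM   dbo.CharVsVarcharDemo;
```

So the storage behaviour itself is clear; what I'm after is whether that fixed-width layout buys anything in practice, e.g. in terms of performance.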
"Much that I bound, I could not free. Much that I freed returned to me."
(Lee Wilson Dodd)