I think I’ve seen that guidance (at least 18) before, either on DB2-L or elsewhere. It seems like a reasonable guideline, but I would hesitate to make it a strict rule. The basic premise, that the column should be large enough to yield at least some savings, is sensible… and 18 is as good a number as any to get to that target (though it would be hard to argue forcefully that it is better than 20, or even 17). It might also make sense to play up the variation angle. Even if the text is at least 18 bytes, it wouldn’t make sense to make the column variable unless the actual values vary from a few bytes in length up to 18. For example, if 90% of the actual values are 15 bytes or more, I wouldn’t use VARCHAR.
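The arithmetic behind that judgment can be sketched quickly. This is just illustrative back-of-the-envelope math in Python (the length distributions are made up, and real DB2 row layout has other overheads), but it shows why wide variation matters more than the raw column width:

```python
def varchar_savings_per_row(lengths, char_width):
    """Average bytes saved per row by VARCHAR(char_width) versus
    CHAR(char_width). Each VARCHAR value carries a 2-byte length
    prefix, so the saving per value is (char_width - len) - 2,
    which can even go negative for values near the maximum."""
    return sum((char_width - n) - 2 for n in lengths) / len(lengths)

# 90% of the values are 15 bytes or more: savings are marginal.
mostly_long = [15] * 90 + [5] * 10
print(varchar_savings_per_row(mostly_long, 18))  # 2.0 bytes/row

# Values vary widely from short to the maximum: VARCHAR pays off.
varied = [3] * 50 + [18] * 50
print(varchar_savings_per_row(varied, 18))  # 5.5 bytes/row
```

On the skewed distribution the saving is about 2 bytes on an 18-byte column, which is hardly worth the bother; with real variation it nearly triples.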
Another thing you might consider adding is to use DB2 compression instead of variable-length columns (in some cases). With data compression built into DB2 now, and operating very efficiently, compression is frequently a better choice for conserving disk space than variable-length columns. This is because DB2 compresses automatically, behind the scenes. With variable-length columns, a two-byte prefix is required for each column to indicate the actual length of the data; this prefix must be set and maintained by application code, which is not necessary with compression. Also, DB2 compression compresses the entire row, possibly yielding extra savings, whereas VARCHAR potentially saves space only for that specific column. The overhead of compression can be problematic in some cases, though: DB2 must expend CPU to compress and decompress the data as it is modified and read. Even so, the I/O benefits can outweigh the additional CPU because the smaller rows mean more data is stored per page.
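That last trade-off, CPU spent compressing versus I/O saved, comes down to rows per page. A minimal sketch of the effect, with made-up numbers (a 4 KB page, a 50% compression ratio, and no page-header or row-overhead accounting, none of which reflects DB2's actual page layout):

```python
def rows_per_page(row_bytes, page_bytes=4096):
    """How many rows fit on a page of the given size
    (page header and per-row overhead ignored for simplicity)."""
    return page_bytes // row_bytes

uncompressed = rows_per_page(200)  # 200-byte rows -> 20 rows/page
compressed = rows_per_page(100)    # 50% compression -> 40 rows/page
print(uncompressed, compressed)
```

Halving the row size doubles the rows per page, so a scan touches half as many pages, which is where the I/O savings that offset the compression CPU come from.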