The hard way to do the calculation is to add up the byte sizes of all the columns in each table by datatype, plus some overhead for any variable-length columns. Then multiply that by the number of records in each table, and that will give you a vaguely "realistic" current size (sorta like genuine fake fur. @=). After that, guesstimate how much more data is likely to be added on a daily/weekly/monthly basis. That gives you an idea of what percentage you want your DB to grow by every time it grows, plus it tells you what size hard drive you'll need to accommodate everything.
Datatypes:
Char(n) - 1 byte of space for every character
Int - 4 bytes of space
Money - 8 bytes
SmallMoney - 4 bytes
Float - 8 bytes
SmallDateTime - 4 bytes
DateTime - 8 bytes
TinyInt - 1 byte
SmallInt - 2 bytes
Real - 4 bytes
Rows with no variable-length columns - 2 bytes of overhead
The amount of overhead added for variable-length columns depends on the datatypes and the number of columns. Always assume the variable-length types (Varchar, NVarchar, VarBinary, etc.) will use the maximum number of characters you give them, and add a little extra on top.
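To make that concrete, here's a rough worked example (just a sketch; the Orders table and its columns are made up for illustration). Say the table has two Ints, a DateTime, a Money, and a Varchar(200), and you expect about a million rows:

    -- Hypothetical Orders table:
    --   OrderID int (4) + CustomerID int (4) + OrderDate datetime (8) + Amount money (8) = 24 bytes fixed
    --   Notes varchar(200): assume the full 200 bytes, plus ~10 bytes of row/variable-length overhead
    SELECT 4 + 4 + 8 + 8 + 200 + 10                           AS EstRowBytes,  -- ~234 bytes per row
           1000000.0 * (4 + 4 + 8 + 8 + 200 + 10) / 1048576.0 AS EstDataMB;    -- ~223 MB of raw data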
Not sure off the top of my head how much the Memo, Text and Image datatypes hold, but they're huge (Text and Image top out around 2 GB apiece), so estimate those separately. Pages (which hold the data) are 8KB in size. If a row looks like it would "split" between pages, assume the whole thing goes onto the next page, because rows don't split across pages.
An extent has eight 8KB pages on it, and there are 16 extents to a megabyte. This is how you calculate data size. Then you have to figure space for indexes, and I've never been able to do that. @=)
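Carrying the same made-up example through pages and extents (again, only a sketch, and the 8,060 bytes is the approximate usable data space on an 8KB page):

    -- ~234-byte rows, 1,000,000 of them, roughly 8,060 usable bytes per page
    SELECT 8060 / 234                                     AS RowsPerPage,   -- 34 whole rows (rows don't split)
           CEILING(1000000.0 / (8060 / 234))              AS PagesNeeded,   -- ~29,412 pages
           CEILING(CEILING(1000000.0 / (8060 / 234)) / 8) AS ExtentsNeeded, -- ~3,677 extents
           CEILING(1000000.0 / (8060 / 234)) * 8 / 1024.0 AS ApproxDataMB;  -- ~230 MB, before indexes

It comes out a little bigger than the raw byte count because of the unused space left at the end of each page.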
The shorthand answer is to keep (slowly) increasing either the MB growth increment or the percentage growth setting on your data file until the DB stops slowing down every time it grows. You'll have to play with it, but don't set it too high, or you're wasting space and adding I/O time to find the data.
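For example (hypothetical database and logical file names), switching the data file from percentage growth to a fixed 100MB increment looks like this:

    -- Grow the data file in fixed 100 MB chunks instead of by a percentage
    ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_Data, FILEGROWTH = 100MB);

A fixed MB increment is usually easier to predict than a percentage, since a percentage grows by bigger and bigger chunks as the file gets larger.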
Catadmin - MCDBA, MCSA
Remember, "Work" is a four letter word. And you know what your mother told you about using four letter words!