
How to allocate database size in SQL Server?

Status
Not open for further replies.

varshan09

Programmer
Oct 28, 2003
45
IN
Hi,

Is there any way of finding out how much free space the database needs? Our production database faces performance problems as the data grows, and it stays slow until we shrink the database. Our problem is that we need to find the optimum value of target_size to give when shrinking the database.

Thanks
Varsha
 
Database growth, log growth and shrink jobs cost performance. Generally you want to increase all file sizes manually before you add data, so the database does not have to 'grow' before it can insert the data.
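As a sketch of that advice, assuming a database named Sales with logical file names Sales_Data and Sales_Log (all three names are hypothetical; check sp_helpfile for your actual logical names), you could pre-size the files ahead of a big load:

```sql
-- Hypothetical database and logical file names; replace NAME values
-- with the logical names reported by sp_helpfile.
ALTER DATABASE Sales
MODIFY FILE (NAME = Sales_Data, SIZE = 2000MB)  -- grow the data file up front

ALTER DATABASE Sales
MODIFY FILE (NAME = Sales_Log, SIZE = 500MB)    -- grow the log file up front
```

SIZE can only be increased this way, so pick a value comfortably above the current file size.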

You can estimate the data size by calculating how many records fit in a page (adding up all the column sizes), and multiplying that by the number of rows. This is not a simple calculation!
A quicker, rough method is to load a sample of x percent of the data, see how big it is, and extrapolate from there.
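For that rough method, sp_spaceused reports the database's current allocated and unallocated space, which gives you the baseline to extrapolate from:

```sql
-- Reports database_size and unallocated space for the current database,
-- plus reserved/data/index/unused totals.
EXEC sp_spaceused

-- For a single table (Orders is a hypothetical table name):
EXEC sp_spaceused 'Orders'
```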

Nils Bevaart
 
The hard way to do the calculation is to add up the sizes of all the columns in each table by datatype, plus some overhead for any variable-length columns. Then multiply that by the number of records in each table, and that gives you a vaguely "realistic" current size (sorta like genuine fake fur. @=). After that, guesstimate how much more data is likely to be added on a daily/weekly/monthly basis. This gives you an idea of what percentage you want your DB to grow by every time it grows, plus tells you what size hard drive you'll need to accommodate everything.

Datatypes:
Char() - 1 byte of space for every character
Int - 4 bytes of space
Money - 8 bytes
SmallMoney - 4 bytes
Float - 8 bytes
SmallDateTime - 4 bytes
DateTime - 8 bytes
TinyInt - 1 byte
SmallInt - 2 bytes
Real - 4 bytes

Rows with no variable length characters - 2 bytes overhead

The amount of overhead added for variable-length columns depends on the datatypes and the number of columns. Always assume the variable-length columns (Varchar, NVarchar, VarBinary, etc.) will use the maximum number of characters you give them, and add a little extra.

Not sure how much Memo, Text and Image datatypes hold, but I'm pretty sure they're huge. Pages (which hold the data) are 8KB in size. If you have a row that would seem to "split" between pages, assume the whole thing goes to the next page, because records don't split.

An extent has eight 8KB pages on it, and there are 16 extents to a megabyte. This is how you calculate data size. Then you have to figure space for indexes, and I've never been able to do that. @=)
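A worked version of that arithmetic, for a hypothetical table with an int (4), a datetime (8), a money (8), and a varchar(50) assumed full (50) = 70 bytes, plus 2 bytes of row overhead = 72 bytes per row (of the 8KB page, roughly 8060 bytes are usable for row data):

```sql
-- Hypothetical row size and row count; plug in your own numbers.
DECLARE @RowBytes int, @Rows int, @RowsPerPage int
SELECT @RowBytes = 72, @Rows = 500000
SELECT @RowsPerPage = 8060 / @RowBytes            -- ~8060 usable bytes per 8KB page
SELECT CEILING(1.0 * @Rows / @RowsPerPage) AS Pages,
       CEILING(1.0 * @Rows / @RowsPerPage) / 128.0 AS ApproxMB  -- 128 pages per MB
```

For 500,000 such rows that works out to about 4,505 pages, or roughly 35 MB, before indexes.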

The shorthand answer is to keep (slowly) increasing either the MB size or the percentage growth of your data file until it stops slowing down the DB every time it grows. You'll have to play with it, but don't set it too high, or you're wasting space and I/O time when the data has to be found.
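The growth increment is set per file; a sketch, again with a hypothetical logical file name, switching from percentage growth to a fixed MB increment:

```sql
-- Hypothetical logical file name; a fixed MB increment is usually more
-- predictable than the default 10% growth once the database gets large.
ALTER DATABASE Sales
MODIFY FILE (NAME = Sales_Data, FILEGROWTH = 100MB)
```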



Catadmin - MCDBA, MCSA
Remember, "Work" is a four letter word. And you know what your mother told you about using four letter words!
 