
Maximum Number Of Rows


TamedTech
IS-IT--Management
May 3, 2005
Hello Guys,

OK, so this is something I'm looking into to start planning for future application growth.

I have a large and rather thorough application that records statistical data from all sorts of equipment, and it's all backed by an MS SQL 2005 database.

Now the product has not launched yet, so we're still working with a comparatively small amount of test data, but I sat down with my business partners this morning (none of us are what you would call SQL experts) and we started to look at projected business growth and suchlike.

I thought I'd come and see what sort of thresholds we'd be looking at for the maximum amount of data that can be stored in the database. Is it simply down to the amount of disk space we can provide? What about performance and manageability?

None of this data is particularly massive, just 'int' values; the database simply records transactions. We'd be looking at several thousand per day, and once you start multiplying that by several thousand units you could be looking at a sizeable number of records inserted into the database every day.

Just thought I'd stick my foot into the forum to see your thoughts on planning for this kind of growth: clustering servers, archiving records and all that kind of stuff.

Like I say, this is all future planning, and when the time comes we'd employ an expert to come and consult for us, but for now I thought I'd just investigate it a little.

Thanks guys,

Rob
 
There is no limit to the size of the database; it will be limited by the size of your disks. However, you will want to manage the following things so performance does not degrade as the size of your DB grows.
1. Make sure all tables are properly indexed. That is up to the person who designed the DB; as a general rule of thumb, every table should have at least a unique clustered index.
2. Make sure you are backing up the transaction log as well as the database (only if the DB recovery model is set to full). If you don't back up the log, it will grow out of control.
3. Alter the indexes weekly to get rid of fragmentation. (A quick sketch of this and the log backup follows the list.)
4. Place your transaction log on a separate RAID array from the database.
5. Consider purchasing a backup tool like LiteSpeed to compress backups.

The list could go on, but if you keep up on these things you should be OK.
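Here's a rough sketch of items 2 and 3 in T-SQL; the database, table, and index names are made up, so substitute your own:

[code]
-- Item 2: back up the transaction log (FULL recovery model) so it gets
-- truncated and doesn't grow out of control. 'StatsDB' is a placeholder name.
BACKUP LOG StatsDB TO DISK = 'D:\Backups\StatsDB_log.trn';

-- Item 3: rebuild an index to remove fragmentation (SQL 2005 syntax).
ALTER INDEX IX_Transactions_RecordedAt ON dbo.Transactions REBUILD;
[/code]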

- Paul
- Database performance looks fine, it must be the Network!
 
ptheriault said:
There is no limit to the size of the database; it will be limited by the size of your disks.

While this is true for most people, there is actually a maximum database size. A database can be no larger than 1,048,516 terabytes, but even that is a misleading number: SQL Server supports up to 32,767 database files per database, with a maximum size of 16 terabytes per file, which gives you a maximum total size of 524,272 terabytes (give or take). Here are some of the maximums for SQL Server.
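The arithmetic behind that last figure is simply 32,767 files × 16 TB per file = 524,272 TB. If you want to see how far your own database is from any of those limits, the standard system objects will tell you:

[code]
-- Total size and unallocated space for the current database.
EXEC sp_spaceused;

-- Per-file sizes; size is stored in 8 KB pages, so convert to MB.
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.database_files;
[/code]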
ptheriault said:
As a general rule of thumb, every table should have at least a unique clustered index.

Um, no. If you have a table which is getting hundreds of thousands of inserts per day and is almost never read, a clustered index is something that you don't want, as it will slow down the inserts.

ptheriault said:
Alter the indexes weekly to get rid of fragmentation.

Indexes should be cleaned up when you have a fragmentation problem, not on a schedule. Properly set up indexes will rarely need to be defragged.
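A rough sketch of how you'd check whether fragmentation is actually a problem, using the sys.dm_db_index_physical_stats DMV that's new in SQL 2005 (the 30% threshold is only a common starting point, not a hard rule):

[code]
-- List indexes in the current database whose fragmentation crosses a
-- threshold; only those are worth rebuilding.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30;
[/code]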
ptheriault said:
Consider purchasing a backup tool like LiteSpeed to compress backups.

Software like LiteSpeed can cause more problems than it solves. Storage is cheap. If your backups are large, buy more drive space. Anything that modifies the way that SQL Server does a backup is probably a bad idea.

Rob,
SQL Server can handle billions of records per table. I've managed several databases with well over a billion records in the tables.

Databases of this size require a different approach than a smaller database does. When you are dealing with them, your drive configuration starts to become much more important, as does making sure that tables and indexes are built with enough white space in each 8 KB page to allow new data to be inserted without causing massive data movement.
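To give an idea of what I mean by white space, a fill factor is how you leave that room when building an index. The table and index names below are made up for illustration:

[code]
-- Leave ~10% free space in each 8 KB leaf page so new rows can slot in
-- without forcing page splits. Names are hypothetical.
CREATE NONCLUSTERED INDEX IX_Transactions_UnitID
ON dbo.Transactions (UnitID)
WITH (FILLFACTOR = 90);
[/code]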

Take a look at the FAQ on drive setups that I've written (faq962-5747). This covers some basic guidelines for drive setups depending on the size of the database and the number of users.

Assuming that your database is running 24x7 and the load is spread evenly across the day, you are looking at ~12 transactions per second. I would assume a peak of probably 40 transactions per second. That's probably a nice mid-range system.
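To show where that number comes from, assuming "several thousand a day" across "several thousand units" nets out to roughly a million transactions a day:

[code]
~1,000,000 transactions/day ÷ 86,400 seconds/day ≈ 12 transactions/second (average)
[/code]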

If you are planning on setting up a database in this range, a senior database administrator can be very useful in sizing the server and getting everything set up, so that not only is everything fast now, but when the database has grown to 500 gigs it's still fast and manageable, and most importantly is able to keep your business running.

When planning out the size of your database, don't forget to include the size of your indexes. They also take space, especially if you will be indexing wide columns.

Denny
MCSA (2003) / MCDBA (SQL 2000) / MCTS (SQL 2005) / MCITP Database Administrator (SQL 2005)

--Anything is possible. All it takes is a little research. (Me)
[noevil]
 
Denny,
I'm not sure I understand some of the things you pointed out.
Every index will become fragmented no matter how you set them up, through simple insert and delete statements. That doesn't take into consideration any autogrow events that might happen on the files. I've always found that it is better to keep up on fragmentation than to let it go.
The general rule of thumb about clustered indexes comes right out of the Microsoft MCTS study guide. Heap tables can become very difficult to manage. Of course, I am assuming that this is an OLTP database.
As far as LiteSpeed goes, I've been using it for a long time and never had a problem. In fact, if it weren't for LiteSpeed I would have no way to refresh one of my production databases, which is over 700 GB, to test.


- Paul
- Database performance looks fine, it must be the Network!
 
Thank you guys for your thorough responses; I'm glad to see that my transaction tables will be manageable.

Fragmentation on these particular tables is not going to be a massive issue, as they should be write-only; I don't plan on deleting records at any point, just writing them and then displaying them with some reporting applications.

This is what I would class as a 'real time' system; it isn't running any form of batch operation. The event happens, it gets logged straight away. This will be running 24/7 and the transactions will be reasonably distributed, but we do have 'peak times' where the system will be busier than usual.

I'm glad to see that this stuff is definitely doable. I'm no database expert, but the current setup is tidy enough for a while, I think; we'd be looking at a complete application rebuild before we get anywhere near this size of database anyway, so we'll be looking at getting an expert such as yourselves in to work on it for us.

What's the deal with books for SQL? I'm always a little wary of books, as I often buy the beginner ones and they never seem to support the level I'm trying to achieve, and I tend to outgrow the book within a week. I'm looking at a very steep learning curve at the moment, but it's important that I gain a fair understanding of how it all works. Do you have any suggestions?

As for planning server size and growth: as this will be the backbone of my company, it's something I won't be sparing the pennies on, so it'll be a nice dedicated/managed server setup from a good ISP. I'll be sure to insist on a large disk setup with backup facilities and suchlike.

Again, if you have a host you would recommend, I'd be happy to take that on board.

Thanks,

Rob
 
TamedTech,
My company is currently hosted at Qwest in VA. We are not very happy with the level of their support, and we are looking into Iron Mountain at this point.
As for books, I like the SQL Server 2005 Handbook by Microsoft.
Good luck.


- Paul
- Database performance looks fine, it must be the Network!
 
ptheriault said:
Every index will become fragmented no matter how you set them up, through simple insert and delete statements.
That's not actually the case. If you plan ahead and set the fill factor to the correct amount, your index fragmentation should be minimal, as SQL Server will use the white space that you left within the page. As that white space begins to fill, you will need to clear some of the records out to new pages.

In a system with few to no deletes you can usually assume that most of the inserts will be at the bottom of the index and therefore will not fragment the indexes.

Now, in an OLAP system this all changes, of course. In an OLAP system your inserts can land anywhere throughout the index, and more white space is needed to prevent fragmentation. OLAP systems need to be watched more carefully, as their padding will fill more often and the indexes will need to be reordered more often.
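To illustrate the insert-at-the-bottom pattern that Rob's write-only tables would follow, here's a sketch of such a table; all names are invented for the example:

[code]
-- Clustering on an ever-increasing IDENTITY key means every insert lands at
-- the logical end of the index, so pages fill in order and stay compact even
-- with little maintenance. Table and column names are hypothetical.
CREATE TABLE dbo.EquipmentReadings (
    ReadingID    BIGINT   IDENTITY(1,1) NOT NULL,
    UnitID       INT      NOT NULL,
    ReadingValue INT      NOT NULL,
    RecordedAt   DATETIME NOT NULL DEFAULT GETDATE(),
    CONSTRAINT PK_EquipmentReadings PRIMARY KEY CLUSTERED (ReadingID)
);
[/code]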

Denny
MCSA (2003) / MCDBA (SQL 2000) / MCTS (SQL 2005) / MCITP Database Administrator (SQL 2005)

--Anything is possible. All it takes is a little research. (Me)
[noevil]
 