
Server Capacity

Nov 7, 2006
Hello All,
I am in the process of upgrading our database hardware to a newer system. I have very little experience in capacity planning, so I was wondering if I can get a ballpark estimate of the server we need based on our DB size.

Number of Databases - 130 (all of varying size from 10MB to 1.5 GB)
Total Size of all Databases - 10GB
Number of Concurrent Connections to all the Databases - 50
Database Growth - 10% per month

The largest database has approximately 1.5 million records in a couple of tables. Complex queries need to be run on those tables. On the existing five-year-old server (4 CPUs / 4 GB memory), it takes 3 hours to process the tuned query.

Can you suggest the server capacity that is normally used to handle the amount of data mentioned above?

Thanks
 
Your basic dual-chip, dual-core server with 4 GB of RAM will probably do the trick.

Without knowing how many transactions per second, your disk I/O requirements, etc., a better recommendation probably isn't possible.

Denny
MCSA (2003) / MCDBA (SQL 2000) / MCTS (SQL 2005) / MCITP Database Administrator (SQL 2005)

--Anything is possible. All it takes is a little research. (Me)
[noevil]
 
Run PerfMon against your current database server and check your processor, memory, and I/O counters. This will give you a good feel for what is in desperate need of upgrading.
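One low-friction way to gather those numbers is to log the counters to CSV with Windows' built-in typeperf and summarize the log afterward. A minimal sketch; the counter names and values in the sample log below are made up for illustration:

```python
import csv
import io
import statistics

# Hypothetical excerpt of a typeperf-style CSV log: first column is the
# timestamp, remaining columns are counter samples.
SAMPLE_LOG = """\
"timestamp","Avg. Disk Queue Length","% Processor Time"
"11/07/2006 10:00:01","1.2","12.0"
"11/07/2006 10:00:02","17.0","35.0"
"11/07/2006 10:00:03","55.0","33.0"
"""

def summarize(log_text):
    """Return {counter_name: (mean, max)} for every counter column."""
    rows = list(csv.reader(io.StringIO(log_text)))
    header, data = rows[0], rows[1:]
    summary = {}
    for col, name in enumerate(header[1:], start=1):
        values = [float(r[col]) for r in data]
        summary[name] = (statistics.mean(values), max(values))
    return summary

for counter, (mean, peak) in summarize(SAMPLE_LOG).items():
    print(f"{counter}: avg={mean:.1f} peak={peak:.1f}")
```

Averages alone hide spikes, so keeping the peak alongside the mean makes it easier to spot the occasional queue-length blowouts mentioned later in this thread.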



Catadmin - MCDBA, MCSA
"No, no. Yes. No, I tried that. Yes, both ways. No, I don't know. No again. Are there any more questions?"
-- Xena, "Been There, Done That"
 
Thanks for the info. I ran PerfMon on our database server and found the following.

Under off-peak conditions:
Transactions/Sec - 10
Memory - Page Reads/Sec: 0.3
- Buffer Cache Hit Ratio: 99.XX
I/O Counter - Disk Reads/sec: 20
- Disk writes/sec: 5
- Avg. Disk Queue Length: 0-1.5 (occasional 55, 75)
- %Disk Time: 332
Processor - %Processor Time: 12

Under peak loads:
Transactions/Sec - 40
Memory - Page Reads/Sec: 0.4
- Buffer Cache Hit Ratio: 99.XX
I/O Counter - Disk Reads/sec: 82
- Disk writes/sec: 9
- Avg. Disk Queue Length: 17 (occasional 55, 75)
- %Disk Time: 1663
Processor - %Processor Time: 35

Also, I captured the counters while running a complex query. The query took 12 minutes to bring up some 800,000 records. The results follow.
Memory - Page Reads/Sec: 0.4
- Buffer Cache Hit Ratio: 99.XX
I/O Counter - Disk Reads/sec: 300-400
- Disk writes/sec: 8-20
- Avg. Disk Queue Length: 40-115 (occasional 55, 75)
- %Disk Time: 3000-6000
Processor - %Processor Time: 25

I found out from my system admin that the disks are in a RAID 1 config.
From the above numbers, I believe I/O is the issue here, so I should probably look at increasing our I/O capacity. Am I correct here? Please help.
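For what it's worth, the back-of-envelope math supports that read. Two common rules of thumb: a single 10k/15k spindle sustains very roughly 100-180 random IOPS, and disk queue length is usually judged per spindle. A sketch, where the per-disk IOPS figure is an assumption rather than anything measured on this hardware:

```python
IOPS_PER_DISK = 150  # assumed ballpark for one 10k/15k spindle

def read_ceiling(spindles, split_reads=True):
    """Rough random-read IOPS a mirror set can serve. Some controllers
    split reads across both RAID 1 members; others read from one disk."""
    return spindles * IOPS_PER_DISK if split_reads else IOPS_PER_DISK

def queue_per_spindle(avg_queue, spindles):
    """Rule of thumb: keep this at or below roughly 2."""
    return avg_queue / spindles

# Counters from this thread, on a 2-disk RAID 1:
print("peak queue/spindle:", queue_per_spindle(17, 2))
print("query reads/sec: 300-400 vs read ceiling:", read_ceiling(2))
```

A peak queue of 8.5 per spindle, and a query driving 300-400 reads/sec into an array that tops out around 150-300, both point at the disks rather than CPU or memory.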

Thanks,
sqltekmember
 
You definitely need a bigger hard drive. I recommend a SAN with multiple disk controllers, which give you more control over I/O. Also, is your SQL Server hyper-threaded?

Lastly, I recommend a minimum of RAID 5 for SQL Server. RAID 1 is basic mirroring, and that's part of what is taking so long to write. Essentially, Windows has to write twice to the hard drives, which slows down any updates / inserts you'll be making.

RAID 5 is faster on writing, a tad slower on reading (but usually unnoticeable), because it stripes across multiple volumes.

BTW, Microsoft said (as part of the Windows 2000 Server push) that the disk queue length should never be higher than 2. I've seen it as high as 10 without causing too much of a problem, but 55 and 75 are DEFINITELY not good.

There's nothing wrong with getting an entirely new Server setup if you can afford it, but if the company is looking to save money, concentrate on the RAID Array and increasing your hard drive space.



Catadmin - MCDBA, MCSA
"No, no. Yes. No, I tried that. Yes, both ways. No, I don't know. No again. Are there any more questions?"
-- Xena, "Been There, Done That"
 
I hate to disagree with the drive info that CatAdmin has provided, but I have to.

When doing hardware RAID, Windows doesn't know or care what RAID level you are using.

When you write to a RAID 1 array, the data is written to both disks at the same time. Because there is no parity to calculate, RAID 1 is the best option for a write-intensive workload (such as log files).

RAID 5 should be used for data files (mdf, ndf), where the application is doing mostly reads. Because the RAID card has to calculate parity in order to write to a RAID 5 array, it is slower to write to a RAID 5 array than to a RAID 1 array.

Writing to and reading from a RAID 1 array is about the same as reading from and writing to a single disk. RAID 1 provides no performance plus or minus, as it is a direct mirror of the other disk.
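The parity cost described here is often expressed as the standard RAID write penalty: each logical write costs 2 back-end I/Os on RAID 1 (one per mirror) and 4 on RAID 5 (read data, read parity, write data, write parity). A quick sketch using the peak counters posted earlier in this thread:

```python
# Standard rule-of-thumb write penalties per RAID level.
WRITE_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID10": 2, "RAID5": 4}

def backend_iops(reads_per_sec, writes_per_sec, raid_level):
    """Translate host-visible reads/writes into physical disk I/Os."""
    return reads_per_sec + WRITE_PENALTY[raid_level] * writes_per_sec

# Peak load from this thread: 82 reads/sec, 9 writes/sec.
for level in ("RAID1", "RAID5"):
    print(level, backend_iops(82, 9, level), "back-end IOPS")
```

The gap widens as the write fraction grows, which is why logs (almost all writes) favor RAID 1 while mostly-read data files tolerate RAID 5.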

You definitely do need to look at your disk layout. You will see some improvement when that is fixed. With your average disk queue above 2-4, that's a bottleneck. Check out my FAQ on disk setups, faq962-5747; it may be of some use.

What is the current disk setup that you have? Number of disks per array, number of controllers, number of arrays, what files are on which array, etc.

Denny
MCSA (2003) / MCDBA (SQL 2000) / MCTS (SQL 2005) / MCITP Database Administrator (SQL 2005)

--Anything is possible. All it takes is a little research. (Me)
[noevil]
 
Thanks CatAdmin and mrDenny for your help.
Yes, our server is hyper-threaded. It has 2 physical processors and a RAID 1 config with 26 GB of hard disk space. The database (mdf) and log files (ldf) are kept in the same location on the machine. Our application is more of an OLTP system and, as mentioned earlier, it is 10GB in size. Occasionally, reports with larger queries and datasets that might use tempdb will run on our system during peak time.

I think our company is willing to spend some money on a new server and keep this server as a backup. I read your article on disk configuration. Excellent! It gave me a perspective on what to choose for our new server. Our architecture is a little different, though: around 150 DBs of various sizes ranging from 4MB to 1GB, totaling 10GB, with 40 TPS and 80-90 users, so I am not sure which server-size category (small or medium) we fit in. Any ideas? Would a Dell PowerEdge 2950 server with a RAID 5 config, dual controllers, and 15k RPM hard disks work for us? At this point I was thinking of keeping all the data files and log files on the same drive. Would this be a major issue given the size of the DB?

Thanks,
sqltekmember
 
When in doubt, move up, not down. With 40 TPS you might be OK with the MDF and LDF on one drive. But the question becomes: what will the load be in 2-3 years? Will it still be where it is, or will the load have increased somewhat? If it's going to increase, you should be planning for the increased load; otherwise you will be underpowered in the future.

How many drives does a Dell 2950 hold? (I haven't used Dell in years; we are an HP shop.)

Denny
MCSA (2003) / MCDBA (SQL 2000) / MCTS (SQL 2005) / MCITP Database Administrator (SQL 2005)

--Anything is possible. All it takes is a little research. (Me)
[noevil]
 
If you have any stats on how big your DBs were a year ago, or several months ago, you can estimate how much your database grows per month. Use that information to project out the minimum hard drive space you'll need in 5 years, and then add X amount of gigs to it for good measure. Companies don't like to put out for hardware very often, and the finance people tend to get very annoyed if they just bought you space and you're telling them a year later you need more. @=)
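That projection is simple compounding. A sketch using placeholder figures (10 GB today, assuming, say, 35% annual growth and 20 GB of spare headroom; swap in whatever your own stats show):

```python
def projected_size_gb(current_gb, annual_growth, years, headroom_gb=0.0):
    """Compound the current size forward and pad with spare capacity."""
    return current_gb * (1 + annual_growth) ** years + headroom_gb

# 10 GB growing ~35%/year, projected 5 years out, plus 20 GB for good measure.
need = projected_size_gb(10, 0.35, 5, headroom_gb=20)
print(f"plan for at least {need:.0f} GB")
```

Even modest-sounding growth rates compound quickly over a five-year hardware lifetime, which is why the headroom figure matters as much as the projection itself.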

Don't hate to disagree with me, Denny. It's how I learn. @=)

Honestly, I don't have personal experience with a SAN kind of RAID. Most of my info comes from book-larnin', and at my current job we have a whole 'nother department in a different state that manages the hardware.



Catadmin - MCDBA, MCSA
"No, no. Yes. No, I tried that. Yes, both ways. No, I don't know. No again. Are there any more questions?"
-- Xena, "Been There, Done That"
 
The data growth is around 30-35% per year, which prompts us to take the medium-sized-server route.
To answer mrdenny's question, the Dell server holds up to 8 drives.
Thank you for your valuable suggestions and input. I will discuss this with our system admin and share these thoughts.
Hopefully the new server implementation will reduce the I/O problems with larger queries and load.

Lastly, can you please let me know (with some basic assumptions, like sargable where conditions, only indexed columns fetched, etc.) approximately how much time it would take to fetch a million records from a join of two tables with a million rows each plus two other small tables? As I mentioned earlier, with the current server it takes 17-20 minutes. I am pretty sure the time will go down drastically with the new server's I/O, but I have no idea by how much: will it be seconds or milliseconds? This would help me set expectations for the new servers, and it will also serve as a benchmark for process/query tuning.

Thanks Again,
sqltekmember
 
If I were setting up this machine, I'd set up the drives like this:

Disk 0,1 - RAID 1 C:\ and D:\. C:\ for the OS, and D:\ for the SQL Binaries, antivirus, Outlook, etc.
Disk 2,3,4,5 - RAID 5 E:\ - SQL Data Files
Disk 6,7 - RAID 1 L:\ Log files
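Assuming the 2950 is filled with eight identical drives (the 73 GB 15k disk size below is a guess, not something quoted in this thread), the usable space of that layout works out as:

```python
def usable_gb(disks, disk_gb, raid_level):
    """Usable capacity: RAID 1 keeps half the disks; RAID 5 loses one to parity."""
    if raid_level == "RAID1":
        return disks // 2 * disk_gb
    if raid_level == "RAID5":
        return (disks - 1) * disk_gb
    raise ValueError(f"unhandled RAID level: {raid_level}")

# The layout above, on hypothetical 73 GB drives:
for name, disks, level in [("OS + binaries", 2, "RAID1"),
                           ("data files", 4, "RAID5"),
                           ("log files", 2, "RAID1")]:
    print(f"{name}: {usable_gb(disks, 73, level)} GB usable")
```

Separating the sequential-write log array from the random-read data array is the point of the layout; the capacity math just confirms the 8 bays cover the projected data size.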

A query like that will probably still take several minutes to complete.

Denny
MCSA (2003) / MCDBA (SQL 2000) / MCTS (SQL 2005) / MCITP Database Administrator (SQL 2005)

--Anything is possible. All it takes is a little research. (Me)
[noevil]
 