RAID - buy card or use motherboard's?

Xaqte

I'm finally getting the opportunity to work with RAID. I want to get started on the right path, and reduce my chances of mistakes.

I have an Asus A8N SLI Deluxe Motherboard with NV RAID 0/1/0+1/5 JBOD.

Should I use the onboard RAID, or should I buy a card?
If you suggest a card, please include any suggested manufacturers as well.

Additional info:
This will be for SATA drives.
This will be a Windows 2000 server with critical data.
I am planning to use RAID 5.

Thanks,

X
 
Is cost an issue?

If not, you can get RAID cards that support a whole slew of features and pick one that matches your needs exactly...

--------------------
Procrastinate Now!
 
The answer is, it depends.

For most applications, the onboard RAID controller will be sufficient. However, if you are considering running RAID 5 on the onboard controller, you should first make sure that the controller performs parity calculations (XOR operations) in hardware rather than software. The XOR computations used in parity-based RAID are compute-intensive, so if the system has to do them in software it will a) eat up CPU cycles on the server's main processor, and b) not be as fast as it would be if the computations were done in hardware. So if you are going with RAID 5 AND your onboard controller doesn't do hardware parity/XOR, then I would recommend a dedicated RAID 5 card that does.
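To make the XOR work concrete: the parity for a RAID 5 stripe is just the byte-wise XOR of the data chunks on the other disks. Here is a minimal sketch in Python (illustrative only, not any controller's firmware) of what the software path has to do for every stripe written:

    from functools import reduce

    def parity_chunk(data_chunks):
        """XOR equal-sized data chunks together to produce the parity chunk."""
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data_chunks))

    # Any single lost chunk can be rebuilt by XOR-ing the survivors with the parity.
    chunks = [b"AAAA", b"BBBB", b"CCCC"]              # data chunks on three disks
    p = parity_chunk(chunks)                          # parity chunk on the fourth disk
    recovered = parity_chunk([p, chunks[1], chunks[2]])
    assert recovered == chunks[0]

A hardware XOR engine does exactly this kind of work, just without stealing cycles from the host CPU.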

Now, the part where I try to talk you out of using RAID 5 altogether. In the past, RAID 5 was the most popular form of RAID used on servers, but that is now changing. The main reason for RAID 5's popularity is that it gives you redundancy in the event of a disk failure while still letting you use nearly all of the disk space on your hard disks (you lose only the equivalent of one disk to parity data). When disk space was more expensive, this was an unbeatable deal. However, nowadays disk space is quite cheap. In most cases, a RAID 5 array will have far more disk space than the server applications are likely to use, for two simple reasons (a quick capacity comparison follows the list below):

1. In a RAID array, you get better performance the more disks you use (more disks equals more disk space).
2. The smallest hard disks that you can buy today are much larger than they used to be (larger disks equals more disk space).
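The capacity trade-off is easy to put in numbers. A quick illustration (disk count and size are hypothetical, chosen just for the arithmetic):

    disks, size_gb = 6, 320

    raid5_usable  = (disks - 1) * size_gb    # one disk's worth of space goes to parity
    raid10_usable = (disks // 2) * size_gb   # half the disks hold mirror copies

    print(raid5_usable)    # 1600 GB usable
    print(raid10_usable)   # 960 GB usable

So yes, RAID 5 gives you more usable space, but as argued above, that advantage matters less than it used to.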

Now, there is a performance problem with RAID 5 that a lot of people either don't know about or don't talk about, and it is related to the parity calculations mentioned above. When using a RAID 5 array, disk read performance is usually quite good. This is because instead of reading a single file from a single disk, you are able to read pieces of the file (stripes) simultaneously from multiple disks, so you get the entire file faster than if it had been stored on a single disk.

However, any time you write to or modify data on a RAID 5 array, the controller has to read the existing data and parity for the affected stripes, recompute the parity, and then write both the new data and the new parity back to disk. As you can imagine, this can take quite a long time.
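The read-modify-write cycle is easiest to see spelled out. A rough sketch (illustrative Python; the disk I/O steps are just comments) of a small RAID 5 update:

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def raid5_small_write(old_data, old_parity, new_data):
        # 1. read the old data chunk from disk
        # 2. read the old parity chunk from disk
        # 3. new parity = old parity XOR old data XOR new data
        new_parity = xor_bytes(xor_bytes(old_parity, old_data), new_data)
        # 4. write the new data chunk to disk
        # 5. write the new parity chunk to disk
        return new_data, new_parity

That is roughly four disk operations plus an XOR pass for what would be a single write on a plain disk (a mirror does two parallel writes, but needs no reads and no parity math).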

So in summary, RAID 5 gets its high read speeds from the data being striped across multiple disks, whereas it gets its slow write speeds from the parity calculations (which are what make it fault tolerant to begin with).

If you are building a server that is going to be primarily read-only, or has a very high ratio of reads to writes (like a web server), then RAID 5 is probably OK. But if your server will be doing a roughly equivalent number of reads and writes (like many database or mail servers, and even file servers) then RAID 5 can be a performance bottleneck.

Instead I recommend RAID 10 (also called 1+0). In RAID 10, your disks are first broken up into sets of mirrored pairs, and then data is striped across those mirror sets. Because it is striping the data, you will see a significant increase in performance over just a single disk in both read and write, and since you will not be doing parity calculations you will see a much higher level of write performance than you would with RAID 5. For even small numbers of disks (4-8) RAID 10 can offer read performance nearly on par with RAID 5 and much higher write performance.
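If it helps to picture the layout, here is a toy sketch (hypothetical Python, six disks assumed) of how RAID 10 places data: logical chunks are striped across the mirrored pairs, and a write simply goes to both members of one pair, with no parity to read or recompute:

    def raid10_targets(chunk_index, num_disks=6):
        """Return the two disks (one mirrored pair) that hold a given logical chunk."""
        pairs = num_disks // 2
        pair = chunk_index % pairs       # stripe the chunks across the pairs
        return (2 * pair, 2 * pair + 1)  # write both copies; read from either

    print(raid10_targets(0))  # (0, 1)
    print(raid10_targets(1))  # (2, 3)
    print(raid10_targets(2))  # (4, 5)
    print(raid10_targets(3))  # (0, 1) again, next stripe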

Not only that, RAID 5 is far less fault tolerant than RAID 10. For example, in RAID 5 you can lose a single disk without any problems. But if you lose two disks simultaneously (or lose a second disk before the first disk is replaced and the array is rebuilt) then you have lost all of the data on your array. In RAID 10, you can lose multiple disks without losing all of the data on the array, as long as you do not have both disks in the same mirrored set fail. You could theoretically lose half of your disks in a RAID 10 array and, depending on which disks failed, the array could still function. And the more disks you add to the array, the lower the probability of losing all of your data to multiple drive failures becomes.
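That difference is easy to quantify. With six disks, every possible two-disk failure kills a RAID 5 array, while RAID 10 only dies if both failures land in the same mirrored pair (a hypothetical six-disk layout with pairs (0,1), (2,3), (4,5) is assumed here):

    import itertools

    disks = range(6)
    pairs = [(0, 1), (2, 3), (4, 5)]

    def raid10_survives(failed):
        # Survives as long as no mirrored pair has lost both of its members.
        return all(not set(pair) <= failed for pair in pairs)

    two_disk_failures = list(itertools.combinations(disks, 2))
    fatal_raid10 = [f for f in two_disk_failures if not raid10_survives(set(f))]

    print(len(two_disk_failures))  # 15 possible two-disk failures; all 15 are fatal to RAID 5
    print(len(fatal_raid10))       # only 3 of them (a complete pair) are fatal to RAID 10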

Furthermore, when you are rebuilding a RAID 5 array after a disk failure, the performance of the array is severely degraded. This is because the RAID controller has to use all of that parity data in conjunction with the data that wasn't lost in order to recreate the missing data. This is not only compute intensive, it is extremely I/O intensive (much like writing to or modifying data in a RAID 5 array, only on a much larger scale, since it always involves all of the disks simultaneously). But in a RAID 10 array, the "missing" data doesn't have to be reconstructed. Instead it is merely copied from the surviving disk of the mirrored set to the replacement disk, which makes the rebuild far less I/O and compute intensive, which in turn means less performance degradation during the rebuild and potentially faster rebuilds.
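A back-of-the-envelope comparison of the rebuild I/O (again using a hypothetical six-disk, 320 GB-per-disk array):

    disks, size_gb = 6, 320

    # RAID 5: every surviving disk must be read in full to recompute the lost one.
    raid5_rebuild_read  = (disks - 1) * size_gb   # 1600 GB read, plus all the XOR work
    # RAID 10: only the surviving half of the broken mirror is read and copied.
    raid10_rebuild_read = size_gb                 # 320 GB read, no parity math

    print(raid5_rebuild_read, raid10_rebuild_read)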

So from my perspective, RAID 5 doesn't make a whole lot of sense for most applications. If you need to store large amounts of static data (archive systems or web sites) then it is a good option. But for storing and accessing dynamic data, RAID 10 is a much better option.
 
kmcferrin,
Thanks for the info! I'm beginning to rethink the RAID 5, but it is a difficult decision to make. This will primarily be a storage/file server. However, if files are created more often than they are read, I would have to say I need decent write speed... correct?

I can say though (thanks to both of you) that I bought an LSI Logic MegaRAID SATA 150-6 and six 320 GB SATA drives.

If anyone else has any more thoughts or info, please post.
 
File creation (and modification) results in lots of disk reads. If it is going to be a file server, it can go RAID 5 or RAID 10, depending on your needs. If you're going to be servicing lots of users writing simultaneously, then RAID 10 is probably going to perform better. But if this is a file server for a small office of 5-10 people, even if they are all writing simultaneously it's not going to cause an unbearable performance hit.
 
The real problem here is the Windows swap file. It is written to pretty much constantly. With this file on RAID 5 you are taking a noticeable performance hit.

My preferred arrangement is a small RAID 1 just for the OS, and everything else on RAID 5. Some of the RAID 10 and RAID 1+ options are definitely worth looking into.
 
I like the thought of two arrays. However, I'm afraid I won't be around as much as I would like for server maintenance.

How much more difficult would it be to maintain (create, restore, etc.) two arrays vs. one?
 
Another question:

Would it be possible to have a RAID 5 array of my six 320 GB drives for storage, and get another drive for the OS and connect it to the motherboard?
 
It takes a little more time to create 2 arrays, but there's no real maintenance at all. The OS will simply see two drives.

You could put the OS on a non-RAID controller, but you may have problems with the boot sequence. The BIOS may try to boot to the RAID first.
 