"It also (potentially!!!) offers much better survivability than other common RAID levels"... agreed.
Regarding your worry about RAID 5 for the data...
With the OS on the RAID 1 and regular backups of both arrays, you're in a good position. The RAID 1 is inherently safer: the chance of two physical disks failing at the same time is far lower than the chance of a RAID 5 failure, and with fewer disks than the RAID 5, the RAID 1 has a lower overall failure rate, since each additional disk adds another opportunity for read errors or physical failure... again, the larger the number of disks, the greater the chance of disk errors, as pointed out in the Intel document. There is also less data to read (no parity information), so less chance of hitting a read error.
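The disk-count point can be put in rough numbers. This is a back-of-the-envelope sketch; the unrecoverable-read-error (URE) rate and drive size below are illustrative assumptions on my part, not figures from the Intel document or any vendor spec:

```python
# Rough sketch of why more disks (more blocks) raise the risk of a read
# error during a rebuild. URE rate and drive size are assumed values.

URE_PER_BIT = 1e-14          # assumed unrecoverable-read-error rate
DRIVE_GB = 146               # assumed drive capacity

def p_read_error(drives_read):
    """Chance of at least one URE when reading `drives_read` full drives."""
    bits = drives_read * DRIVE_GB * 8e9
    return 1 - (1 - URE_PER_BIT) ** bits

# A RAID 1 rebuild reads 1 surviving drive; a 6-drive RAID 5 rebuild reads 5.
p_raid1 = p_read_error(1)
p_raid5 = p_read_error(5)
print(f"RAID 1 rebuild: {p_raid1:.1%} chance of hitting a URE")
print(f"RAID 5 rebuild: {p_raid5:.1%} chance of hitting a URE")
```

The exact percentages depend entirely on the assumed URE rate; the point is only that the risk grows with the number of drives that must be read cleanly.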
Again, I will stress the need to automate a consistency check. This procedure verifies that every block of an array is readable and corrects any errors it finds. Without regular checks, unreadable blocks are usually discovered only when the heads happen to service a read request for them; if too many errors surface at once, an array can fail, as in the classic case where an array running in degraded mode dies from a second disk failure.
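Roughly, a consistency check (patrol read / scrub) does the following. The `Disk` class and its error injection below are hypothetical stand-ins for illustration only; real controllers do this in firmware, scheduled from the management tools:

```python
# Sketch of a scheduled consistency check: touch every block of the array
# and, on a read error, rewrite the block from redundancy (mirror/parity).
# Disk and develop_errors are hypothetical stand-ins, not a real API.

import random

class Disk:
    def __init__(self, blocks):
        # True = readable, False = latent unreadable block
        self.blocks = [True] * blocks

    def develop_errors(self, n):
        # Simulate latent errors accumulating silently over time.
        for i in random.sample(range(len(self.blocks)), n):
            self.blocks[i] = False

def scrub(disk, rebuild_block):
    """Read every block; repair unreadable ones from redundancy."""
    fixed = 0
    for i, ok in enumerate(disk.blocks):
        if not ok:
            disk.blocks[i] = rebuild_block(i)  # rewrite good data
            fixed += 1
    return fixed

disk = Disk(blocks=10_000)
disk.develop_errors(7)                  # errors nobody has noticed yet
print(scrub(disk, lambda i: True))      # finds and repairs all 7
```

The danger the post describes is skipping the scrub: those 7 latent errors sit there until a rebuild forces a full read of the disk, at which point they all surface at once.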
The OS is super critical; if you lose it along with Active Directory, it can be a nightmare, so RAID 1 for the OS is a good move. If the RAID 5 data array should crash, the restore from tape will be simple and fairly fast, depending on your tape unit's speed... yes, you lose a day's work, but that is not terrible (if you do not have a redundant server).
If you have the resources....
Yes, the odds of failure with RAID 10, overall, are much lower than with RAID 5, as the chance of losing both drives of the same mirrored pair at the same time is low (always a possibility, but it is the worst-case scenario). With a minimum of one global spare, the danger is lower still. Performance of RAID 10 is greater than RAID 5 due to the lack of parity-calculation overhead and, to a smaller extent, because parity consumes extra disk space, requiring greater head movement to find data. To duplicate your capacity in RAID 10 would require 6 drives for the data array.
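The capacity arithmetic behind the 6-drive figure, assuming the existing data array is a 4-drive RAID 5 (that drive count is my assumption for illustration):

```python
# A 4-drive RAID 5 yields (n - 1) drives of usable space; RAID 10 yields
# n / 2, since every drive is mirrored.

def raid5_usable(n_drives):
    return n_drives - 1          # one drive's worth lost to parity

def raid10_drives_needed(usable_drives):
    return usable_drives * 2     # every drive needs a mirror

usable = raid5_usable(4)                 # 3 drives' worth of data
print(raid10_drives_needed(usable))      # -> 6 drives to match in RAID 10
```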
Considering the large-capacity drives and the greater number of drives involved (a greater number of blocks), I would still go for one more drive as a global hot spare. Which brings up the subject of bus saturation... on a U320 SCSI channel, 5 drives per channel is the maximum before the channel is saturated with data once the array is under load. Any more than 5 drives on a channel will not increase an array's speed; if anything it will lower it slightly, due to the SCSI overhead of the larger number of drives. So this would have an effect on a RAID 10 with all 6 drives (or 7, with a hot spare) on one channel, though probably not a large one.
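The 5-drive rule of thumb falls out of simple division; the per-drive sustained transfer rate below is an assumed ballpark for SCSI disks of that era, not a measured figure:

```python
# U320 bus bandwidth divided by an assumed per-drive sustained rate.

CHANNEL_MB_S = 320               # Ultra320 SCSI channel bandwidth
PER_DRIVE_MB_S = 60              # assumed sustained transfer per drive

drives_before_saturation = CHANNEL_MB_S // PER_DRIVE_MB_S
print(drives_before_saturation)  # -> 5
```

With a faster or slower per-drive figure the number shifts, which is why the saturation point is a rule of thumb rather than a hard limit.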
If I understand you correctly, you have the add-in card, not the embedded RAID controller. With the add-in card you can use the onboard SCSI interface for the tape backup; with the embedded RAID interface you will need an add-in SCSI card for a tape unit.
Last, once the server comes in, I would leave it on for a few days without data, and run the Dell diagnostics repeatedly on the array before transferring data.
"On a side note- the 2800's are great bargains, IMO. They expanded the drive carrier from the 6-drive bay the 2500/2600's had. They are truly a bargain."
Fully agree... I read a database server review, and the 2800s were in the same league as higher-priced competitors.
Can you explain why you need 73 GB for the OS? Remember: fewer blocks, less chance of read failures. If you install all programs to the data partition (95% will install there), the OS would take under 5 GB. Installed apps mostly generate reads, so apps on the RAID 5 would not hurt performance as long as the temp/log files are on the RAID 1.
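The fewer-blocks point in rough numbers, using an illustrative URE rate (my assumption, not a vendor figure):

```python
# Expected read-error risk scales with how many blocks you actually read,
# so a lean OS partition is exposed to far fewer chances of failure than
# a full 73 GB one. URE rate is an assumed, illustrative value.

URE_PER_BIT = 1e-14

def p_error(gigabytes):
    """Chance of at least one read error over `gigabytes` of reads."""
    bits = gigabytes * 8e9
    return 1 - (1 - URE_PER_BIT) ** bits

print(f"73 GB OS volume: {p_error(73):.2%} chance of a read error")
print(f" 5 GB OS volume: {p_error(5):.2%} chance of a read error")
```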