What's the best way of showing 4 hard disks as 1?

Status
Not open for further replies.

marcdoyle

Technical User
Dec 18, 2002
95
GB
I am building a fileserver at work using 4 identical 250GB SATA hard disks. Ideally I want these disks to appear as one 1TB drive in Windows, for ease of management, but I'm unsure of the best way to do this.

The motherboard (Supermicro X6DAE-G2) supports SATA RAID, but is it possible to use RAID to join the 2 drives connected to the motherboard and the 2 drives connected to a PCI card into a single drive? We're not too worried about failover, as we plan to back up the entire system to tape overnight, but I'm worried that RAID might not be recoverable if all 4 hard disks are not identical.

My network administrator also said he thinks it's possible to make a single partition spread over multiple disks using a tool that comes with Windows Server 2000/2003, but he doesn't know what it is. This would probably be ideal, as it wouldn't matter which hard drives are used.

What would be the best way to do this?

Thanks

Marc

_____________________________________________________________________________________________________________________
There are 10 kinds of people in the world; those who understand binary and those who don't.
 
I don't think you will get the "ease of management" you are looking for with this approach. You might get it to write to the whole thing, but your backup software may not see things your way, and what happens if one of the drives goes bad? There would be no way to tell which parts of which files were on that drive.
 
You could use RAID, but RAID takes up some space to keep track of what is on each disk, and this has to be stored somewhere, so it might take five 250GB hard drives to equal 1TB, or maybe four 300GB drives, or something like that.

If you do not like my post feel free to point out your opinion or my errors.
 
Actually, ceh4702, you are both right and wrong in your statement. Depending on the RAID level (0, 1, 5, 10, and 50 are the most common), up to 50% of the total drive space can be consumed as "overhead".

In Windows 2000, both Pro and Server flavors, and Windows XP, you can implement "software RAID" in the Disk Management console. With a minimum of two identical drives installed in these OSes, you can implement RAID 0 or 1. RAID 0 (striping) requires a minimum of two drives, and you can use all of the (formatted) capacity of both: two 250GB drives (approximately 233GB each, formatted) will give you 466GB worth of storage. Be careful, though. If one drive fails, EVERYTHING is gone... you will have no recoverable data! It matters not how many drives you have connected this way; all of the data is gone. With RAID 0 you will gain throughput... LOTS of it. The more drives, the better the gain.

RAID 1 (mirroring) also requires a minimum of two drives, but you cannot use three: it must be an even number of identical-capacity drives, arranged in pairs. With RAID 1 you will always have 50% overhead. Two 250GB drives will only store 250GB of data. Four 250GB drives will only store 500GB of data, but you PAID for 1TB. This is what buys you redundancy: if one drive fails, the other is still available to pick up where the first left off. Performance gains in RAID 1 are nominal, but present.
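
To put numbers on the arithmetic above, here is a quick Python sketch. It only restates the figures from this post: the ~233GB comes from re-expressing an advertised decimal 250GB (10^9-byte gigabytes) in the binary units Windows reports.

# Capacity arithmetic for the RAID 0 / RAID 1 examples above.
def formatted_gb(marketing_gb: int) -> float:
    """Advertised decimal GB re-expressed in binary (2^30-byte) GB."""
    return marketing_gb * 10**9 / 2**30

per_drive = formatted_gb(250)
print(f"One 250GB drive: ~{per_drive:.0f}GB formatted")      # ~233GB

# RAID 0 (striping): every drive's capacity is usable.
print(f"RAID 0, 2 drives: ~{2 * per_drive:.0f}GB usable")    # ~466GB

# RAID 1 (mirroring): half the drives hold copies - 50% overhead.
print(f"RAID 1, 2 drives: ~{per_drive:.0f}GB usable")        # ~233GB
print(f"RAID 1, 4 drives: ~{2 * per_drive:.0f}GB usable")    # ~466GB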

RAID 5 (striping with parity) is the best of both worlds above: you gain in both performance and security. RAID 5 requires a minimum of three drives to implement, and the overhead shrinks as you add drives: 3 drives give 33%, 4 drives 25%, 5 drives 20%, 6 drives about 16.7%, and so on. In RAID 5 the data is divided across the drives in a striped fashion (we'll use a three-drive scenario): databit, databit, paritybit. On the next write pass it is databit, paritybit, databit, and on the third pass it is paritybit, databit, databit. If you stack these on top of one another and read down, you can see that the parity moves across the disks. This is where your redundancy comes in: if DRIVEA fails, you simply replace it, and the controller (or the OS) reads the parity information from the other two disks and extrapolates the correct information to go back onto the rebuilt drive.
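
To see the parity trick in miniature, here is a short Python sketch. The byte values are made up purely for illustration, but the XOR relationship is exactly why one dead drive can be rebuilt from the survivors.

# RAID 5 in miniature: with parity = A XOR B, any one lost "drive"
# can be reconstructed from the other two.
drive_a = bytes([0x12, 0x34, 0x56])   # made-up data blocks
drive_b = bytes([0xAB, 0xCD, 0xEF])
parity  = bytes(a ^ b for a, b in zip(drive_a, drive_b))

# Simulate losing drive_a, then rebuild it from parity and drive_b.
rebuilt_a = bytes(p ^ b for p, b in zip(parity, drive_b))
assert rebuilt_a == drive_a
print("drive_a rebuilt:", rebuilt_a.hex())

# Overhead: one drive's worth of space holds parity, i.e. 1/n.
n, size_gb = 4, 250
print(f"RAID 5 on {n} x {size_gb}GB: {(n - 1) * size_gb}GB usable")   # 750GB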

Only Server 2K and 2K3 can use software-implemented RAID 5; the downlevel clients (2K Pro, XP Home, and XP Pro) can only use RAID 0 and RAID 1.

This is a pretty good website, with some pretty good explanations of the various implementations and uses of the RAID levels. I hope I have not confused anybody TOO much... RAID is kinda confusing anyway, but check out the above website, and it will explain it better.

e-mail me at ddraper at igalaxy dot net
 
Thanks Haemphyst, this was exactly the information I was after. RAID 0 is what I want: extra speed with no disk space loss. We will be backing up the entire contents to SuperDLT, so we can do without redundancy, but would you recommend buying a few spare hard drives of the same make/model in case there is a failure, or will software RAID 0 work across different models of hard disk?

Thanks for your help

Marc

_____________________________________________________________________________________________________________________
There are 10 kinds of people in the world; those who understand binary and those who don't.
 
marcdoyle
RAID arrays are best when matching drives are used.
RAID 0, as stated, will give you performance but at the cost of possible data security; as that is not an issue for you, you would benefit from spare drives of the same type.
Martin

We like members to GIVE and not just TAKE.
Participate and help others.
 
Thanks Martin, I shall have to persuade my boss to spring for another couple of drives. This is still costing a lot less than a pre-built server; I just hope it performs well!

One day soon I might actually get to the stage of setting up a RAID, but today I went to fit the CPUs (dual Xeons) to the motherboard, only to discover there are no mounting points in the Antec Performance II case for dual Xeons! So it's out with the drill tomorrow to fix that..

_____________________________________________________________________________________________________________________
There are 10 kinds of people in the world; those who understand binary and those who don't.
 
Yes... it is always a good idea to have extra drives on hand, but can I recommend against RAID 0, in favor of a proper RAID 5 implementation? The reason I ask is that as your drive count goes up, your failure rate (and therefore your potential for data loss) goes up at the same rate: with 2 drives, the chance of data (or monetary) loss is TWICE that of a single drive. I know you are backing up daily, but what happens WHEN one of those drives dies during the workday? Now everybody who relies on that data storage is out of work until you can replace the drive (30 to 45 minutes), rebuild the array (1 to 2 hours), and restore the data from tape (another 1 to 2 hours). LOTS of dollars! Also, if the array is on a server that everybody uses, the server will be out of commission while it is down for drive replacement.

With RAID 5 and a proper hot-swap backplane, all of the users can keep working: you replace the drive, the backplane rebuilds on the fly... and all without the users even knowing something is wrong. It would take only ONE RAID 0 drive crash to pay for a proper backplane and all of the associated drives (even including the overhead drive space). Just speaking from experience, and NOT, by any means, telling you what to do - but keep these facts in mind when pitching a storage-solution budget to the powers that be. RAID 5 WILL be fast enough! I assure you.
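
The risk-scaling point above can be put into rough numbers. A quick Python sketch, using an assumed 5% annual failure rate per drive (purely illustrative, not a spec for any real drive):

# RAID 0 dies if ANY member drive dies; the chance of that scales
# with the number of drives.
def any_drive_fails(p_single: float, n_drives: int) -> float:
    """Probability that at least one of n independent drives fails."""
    return 1 - (1 - p_single) ** n_drives

p = 0.05  # assumed annual failure probability for one drive
for n in (1, 2, 4):
    print(f"RAID 0, {n} drive(s): {any_drive_fails(p, n):.1%} chance of losing everything")

# RAID 5 with 4 drives survives any single failure; data is lost only
# if a second drive dies before the first is rebuilt. A crude upper
# bound is the chance of two or more failures in the same year:
p_two_plus = 1 - (1 - p)**4 - 4 * p * (1 - p)**3
print(f"RAID 5, 4 drives: at most {p_two_plus:.1%} chance of data loss")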

e-mail me at ddraper at igalaxy dot net
 
Thanks again haemphyst. I have been looking at the site you linked to and am also starting to think RAID 5 would be the better option. At the moment the company is running on a 50GB file server (astonishing, I know), so I think 750GB would be ample for quite some time to come, and I do think the employees would find it unacceptable to lose work that hasn't yet been backed up (even though I believe that is what would happen on the current server - it just hasn't ever happened).

One thing I don't understand from your post is hot-swapping. Can you remove SATA drives from their data and power cables while the PC is turned on, or is this something additional that the "backplane" you mention gives you? And if so, what is a backplane?

Sorry for knowing so little about this!

Thanks

Marc

_____________________________________________________________________________________________________________________
There are 10 kinds of people in the world; those who understand binary and those who don't.
 
Not a problem! Glad to share what I can.

"Hot-Swapping" should be supported by the backplane, which is simply the name of the external box that does the various interfacing and logic, i.e. SCSI to SCSI, SATA to SCSI, or parallel ATA to SCSI, and striping functions. It also is a housing for the multiple drives - I have seen up to 24 drives supported in a single array. (It does WAY more than that, but this is the general idea.) The manufacturer of the backplane will supply specifications for you, but you do need to look for the hot swap capability. This will minimize any downtime, should a drive in the array fail.

If you are actually going to go with an internal RAID, I cannot say with absolute certainty that you will be able to hot-swap a drive if it dies on you. (I am not terribly familiar with SATA, so for further information on the standard you might post somewhere else, for somebody better equipped to handle the question.) Aside from the hot-swap capabilities of SATA (?), if you are going to implement a software RAID, I am fairly positive it will not have proper rebuild-on-the-fly capabilities. The OS will recognize the new drive, but a reboot and a reinitialization of the array will be mandatory. While the drive is rebuilding, the data will be available to users, but it will be extremely slow. I know you mentioned that your MoBo supports SATA RAID, but be certain that it supports rebuilding functions without a reboot. This is likely information you will have to get from the MoBo manufacturer.

I firmly believe the hardware implementation, though more expensive (but future-thinking - read "upgradeable"), is the preferred method over an internal, software implementation.

e-mail me at ddraper at igalaxy dot net
 
Just remember the risks associated with RAID arrays:

If you have two drives in a RAID array then you are (approximately) doubling the chances of drive failure. As data stored in a RAID array can be split across multiple drives, if one drive fails you run the risk of losing far more data than you normally would - and data recovery from RAID arrays is far more difficult than from a single drive.

Ed Metcalfe.

Please do not feed the trolls.....
 
Been doing RAID since 1992...

RAID 0 should never have been invented. It was a possibility over 12 years ago, but no one ever implemented it until MS started hyping it. MS also hyped that you could create an array out of 32 disks on mixed controllers (MFM, SCSI, IDE, etc.), which is BS, unless < 5 minutes is your array life expectancy.

Do you realize how long it takes to get an OS back online, even from tape? And when the RAID 0 craps out, you are responsible for all the work lost the day it dies. If it dies in the late afternoon, you will be there all night. Also, when you move on to your next job, NEVER tell your interviewer you ran a production server on RAID 0 - you will not get the job; expect a chuckle or two from the interviewer if you do.

This year alone I had problems with two RAID 1 server arrays because the SCSI adapters were not exactly the same model (both Adaptec). I would not recommend running an array unless all drives are on the exact same model of controller. Mind you, I have servers with different controllers which work fine, but why take a chance?

Read the article on RAID 0.

PS: Supermicro makes some really fine server boards; I use them in all my server builds.
 
The software RAID built into Windows Server is very slow compared to a hardware controller. It is also not possible to boot from a software RAID volume, so you would need at least one extra disk or partition for the boot.
Why? Because the NT boot loader would be spread over the disks, the BIOS only knows how to boot from the first hard drive, and the software RAID driver can't be loaded until Windows starts loading - catch-22.

John
 