
Cluster without storage hardware?


MattSavage (IS-IT--Management), Aug 20, 2003
Is it possible to cluster two servers using ONLY the two servers themselves? I have two servers, each running Windows 2000 Advanced Server, and each has three SCSI hard drives set up as RAID 5. Can I partition the drives to create a shared disk system, so that the boot disk is kept separate from the others?
 
Hi,

I have a similar question.
I've been reading about clustering for about two days but I'm a little confused.
I want to buy two separate blade servers with 4 procs each.
I want to build a RAID array and connect both servers to it.
I know there is some hardware I need (interconnect, etc).
Can you build a cluster with just the storage hardware and two servers with Windows 2003 Adv Srv running on both or do you need more?
I guess I'm asking what is the bare minimum software and hardware required to pool the processor and ram from both servers?
I want to use both servers to run an Oracle data warehouse.
Server #1 would be 100% dedicated to Oracle but the other one would be used for multiple applications but would have a lot of downtime (idle processor capacity).
I see clustering solutions on both Dell's and IBM's websites that include a whole bunch of stuff I don't know if I need.
It must also be so incredibly expensive that they just have "call for pricing" at the bottom of the page.
Oracle also sells its Real Application Clusters software, which I'm also not sure I need.

Any help, ANY help would be appreciated.
Even if someone just pointed me towards a FAQ or some good sites that break some of this down.

OK
Thanks,
Mike
 
A twin-tailed external SCSI enclosure, two SCSI "Y" cables, and two terminators. The shared drives go in the enclosure. Since you'll have two RAID controllers, one in each server, on the same bus, you have to change the SCSI ID of one of them. Out of the box they will both be 7. You'll need a separate LUN for the quorum, and additional LUNs for your application's logs and data.
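To make that a little more concrete, here's a rough sketch (plain Python, used only as a worksheet, nothing Windows-specific) of one hypothetical way the shared bus and LUNs could be laid out. The adapter IDs and LUN names are illustrative assumptions on my part, not a prescription for your exact hardware.

```python
# Hypothetical worksheet for the shared-bus setup described above.
# Names, IDs, and LUN choices are illustrative assumptions only.

# Both host adapters sit on the same shared SCSI bus, so they cannot
# both keep the factory default ID of 7 -- change one of them.
controllers = {"server1_adapter": 7, "server2_adapter": 6}
assert len(set(controllers.values())) == len(controllers), \
    "SCSI ID conflict on the shared bus"

# Minimum LUN split for an MSCS cluster: one small LUN reserved for the
# quorum, plus separate LUNs for the application's logs and data.
luns = ["Quorum (small, cluster use only)", "App logs", "App data"]

for lun in luns:
    print("Shared LUN:", lun)
```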

Windows 2003 Enterprise installs the cluster service by default. All you have to do is configure it. Some applications do their own brand of partitioning, often referred to as application clustering. It's more of a network load balancing situation than a failover cluster in the MSCS sense. You'll have to check with your application vendor on that.

The hardware can be very inexpensive, or very expensive. It all depends on the needs of your application. You'll need to size memory, CPU, and storage [both quantity and throughput] based on those needs. On one end of the scale, you can create an MSCS cluster using VMware virtual machines; on the other, you can install MSCS on a Unisys E7000 with 32 processors. The storage can range from two SCSI disks in a simple enclosure to fibre-connected enterprise SAN storage solutions.



 

Thank you very much for the response.

I read elsewhere in this newsgroup after I posted my question that MSCS doesn't really pool processor and memory resources and that it's more for app failover and maybe network load balancing.
It makes me think I might need Oracle Real Application Clusters and its parallel execution (OPS) functionality.
I don't even mind paying for the Oracle software as long as I select just the minimum required hardware and OS config to make the cluster.

I have just a few more questions and I think I'm good to go.

What's a LUN?
What's Quorum?

and...

When I was going to build this with just one server, I wanted to build 3 RAID arrays in the storage enclosure: 3 RAID controllers in the server, with five 18GB drives connected to each card, for a total of 15 drives (rough capacity math sketched below).

Does this configuration change your recommendation above?

"A twin tailed external SCSI enclosure, two SCSI "Y" cables, and two terminators. The shared drives go in the enclosure. Since you'll have two RAID controllers, one in each server, on the same bus you have to change the SCSI ID of one of them"

I would think I would want 3 RAID controllers in each server.

Does that mean I would need six SCSI "Y" cables, three terminators, and a ?-tailed external SCSI enclosure?

Can I even set it up this way?
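For what it's worth, here's the raw capacity arithmetic for that 15-drive plan (a quick sketch assuming RAID 5 on each of the three 5-drive arrays; the RAID level is an assumption, since the original plan didn't say):

```python
# Capacity check for the 15-drive plan: three arrays, five 18GB drives each,
# assumed RAID 5 (usable capacity per array = (drives - 1) * drive size).
drives_per_array = 5
drive_size_gb = 18
arrays = 3

usable_per_array = (drives_per_array - 1) * drive_size_gb   # 4 * 18 = 72GB
total_usable = usable_per_array * arrays                     # 3 * 72 = 216GB

print(f"Usable per RAID 5 array: {usable_per_array}GB")
print(f"Total usable across {arrays} arrays: {total_usable}GB")
```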

Also, if you could recommend a vendor or website that sells the hardware pieces, that would be great.

I hate not understanding stuff and asking newbie questions but the more I read on the internet the more confused I get.

Trying to figure some of this out by reading vendors' websites is tricky because they make it sound like you can't set this up without their products.

Anyway, thanks again for the help.


 