
NetWare 6 clustered servers and NetBackup 4.5 FP3

Status
Not open for further replies.

vm146971

IS-IT--Management
Sep 18, 2002
39
US
Looking for feedback from anyone backing up clustered NetWare 6.x servers with Veritas NetBackup 4.5 FP3. I would like to hear about any issues encountered, tips, or other information. We are still in a lab with this but are going live shortly. I have read Veritas TID 246140, "Known limitations when backing up Netware Clusters with Netbackup". EMC storage with a StorageTek L180 library.
 
We are running NBU 4.5 MP3. We have a NetWare 6 GroupWise cluster. I was not happy with NBU's restore performance on the cluster, so we implemented Backup Express, which has performed perfectly. Our testing showed that NBU was not cluster aware and that restores either failed or took 10 hours. Backup Express is cluster aware, and the same 10-hour restore took 10 minutes.

 
We have a number of NetWare 6 clusters running on our network. Their performance has been typical of other clients on the network. We had to create a backup policy for each cluster member. We have seen no degradation of backup or restore performance related to the NetWare clusters. The one thing that has given us a problem is not having the switch ports and NICs locked at 100 Mbit Full-Duplex. Our clusters are currently at NBU DC 4.5 MP3, and we are getting ready to move to FP6.
 
MaineDCAdmin,
Please let me know more about your setup. Target or non-Target method? Running the master NBU server on Windows or UNIX? I got very poor restore times during my testing and it forced us to purchase another product.

 
Thanks for the feedback. I would also like to know the configuration MaineDCAdmin is using. We had very good throughput on the restores we tested here, though it was a relatively small amount of data. I have a Gb Ethernet network for backups and will be testing a 1.28 TB restore in the next week or so; I will post an update then. I am also interested in how Backup Express compares to NetBackup. We considered Backup Express but were concerned that it does not have a very large user community, and the phone support desk only running 9 to 5 also concerned us. The inability to recreate catalogs from backup tapes was another issue.
 
The NetWare 6 clusters are two-node clusters. Each cluster member is a dual-processor machine, fibre attached to an EMC Clariion SAN. We installed the NetWare Target on each cluster member. To do this we had to fail the cluster over to one node, create all the Targets, and then fail over to the other member. I cannot stress enough how critical it is that the NIC settings on each cluster member be locked at "1G - Full Duplex". The matching setting must also be applied on the switch port. Do not leave the settings on the NIC or the switch at "Auto".

When creating the policies for the cluster, we made a separate policy for each cluster member. The "path name or directives" section (replying from home without access to the Admin Console) of each policy included ALL of the Targets, regardless of which cluster member hosts them. By making each policy include ALL Targets, you guarantee that the Targets will be backed up regardless of the state of the cluster. We made a Target for each major group of users, in our case by department.
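A sketch of that layout (the policy and Target names here are hypothetical, invented for illustration, not from the poster):

```text
Policy GWCLUSTER_NODE1   -> client: cluster node 1
Policy GWCLUSTER_NODE2   -> client: cluster node 2

Backup selections (identical in BOTH policies):
  SALES_TARGET
  HR_TARGET
  ENGINEERING_TARGET
```

Because both policies list every Target, whichever node currently owns a volume after a failover still has a policy that covers it.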

Our typical backup performance is about 7,000 KB/sec per DLT drive. We multi-stream to multiple drives concurrently. Restore rates average between 6,000 and 7,000 KB/sec, depending on file sizes and quantities. We have seen this performance consistently across NBU DC 4.5 MP1, MP3, and now FP3->FP5.
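Those rates make the backup-window arithmetic easy to sketch. A minimal calculation, assuming perfect multi-streaming (an idealization) and an illustrative 500 GB data set; the 7,000 KB/sec per-drive rate is the figure reported above:

```python
def backup_hours(data_gb: float, kb_per_sec_per_drive: float, drives: int) -> float:
    """Wall-clock hours to move data_gb, assuming the streams keep
    every drive busy for the whole job (best case)."""
    total_kb = data_gb * 1024 * 1024              # GB -> KB
    aggregate_rate = kb_per_sec_per_drive * drives  # KB/sec across all drives
    return total_kb / aggregate_rate / 3600

# A hypothetical 500 GB backup across 4 DLT drives at 7,000 KB/sec each:
print(round(backup_hours(500, 7000, 4), 1))       # about 5.2 hours
```

Doubling the drive count halves the window only while there are enough concurrent streams to feed every drive.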

As an experiment we tried backups to a Disk Storage Unit (DSU). We saw transfer rates of 27,000 KB/sec per thread, up to 3 threads. I'm not sure whether the choke point was the Gig-E network or being spindle bound. I suspect I became spindle bound on the DSU, as I was using captive storage in a RAID.
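The Gig-E half of that question can be sanity-checked with rough arithmetic. The 27,000 KB/sec per thread and the 3-thread count are from the test above; the 80% link-efficiency figure is my assumption:

```python
GIGE_RAW_KBPS = 1_000_000_000 / 8 / 1000  # 1 Gbit/s as KB/sec, ~125,000
EFFICIENCY = 0.8                          # assumed practical TCP/IP efficiency

per_thread_kbps = 27_000                  # observed per-thread DSU rate
threads = 3
aggregate_kbps = per_thread_kbps * threads  # 81,000 KB/sec at 3 threads

usable_net_kbps = GIGE_RAW_KBPS * EFFICIENCY  # ~100,000 KB/sec
print(aggregate_kbps, usable_net_kbps, aggregate_kbps < usable_net_kbps)
```

Since the 3-thread aggregate (81,000 KB/sec) sits below even a conservatively derated Gig-E link, the spindle-bound suspicion looks plausible.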
 