
Your opinion on a major NetBackup project!

Status
Not open for further replies.

Vela

Technical User
Jun 16, 2004
220
US
OK, here's the issue at this company I started with about 4 months ago.

First off, we back up 300 TB per month. We have two EMC DMXs.

/usr/openv file system is about 450TB in size now and growing-

Catalog backups take forever now-

The question is..

How should we go about splitting this backup environment up so that it stays manageable as it grows larger and larger?

We are thinking about implementing another master server too.

Thanks

Ryan

 
You only need one master, because the master does very little work. It's the media servers that do the huffing and puffing. Without more details I'm not sure how you can split the load. Are you saying that the catalog and logs are 450 TB? If so, take a look at the retention periods on logs and reports, plus the compression options for the database.
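Before changing anything, it's worth confirming what is actually eating the space under /usr/openv. A minimal sketch, assuming the standard install layout (the images catalog normally sits under netbackup/db/images and debug logs under netbackup/logs); `nbu_space_report` is just an illustrative helper name:

```shell
#!/bin/sh
# Rough disk-usage survey of a NetBackup install tree.
# On the master, the root to pass would be /usr/openv.
nbu_space_report() {
    root="$1"
    # Largest subtrees first, sizes in KB.
    du -sk "$root"/* 2>/dev/null | sort -rn
}

# Example: nbu_space_report /usr/openv | head
```

The biggest entries tell you whether it's really the images catalog, the debug logs, or something else entirely that's growing.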
 
The images database under /usr/openv is 380 GB full-

I just had to add 64 more GB yesterday-

This environment has only one master server doing all the work: controlling the drives (max should be 16 per server) and backing everything up, which I know is bad-

NO media servers..

Should I push to make every server with over 80 GB of data a media server?

What kind of compression would work, and would NetBackup be able to interpret the compressed images?

Thanks

Ryan

 
Okay, what I think is that you could do with some media servers to take the load. They will reduce the network traffic and the processing load on the master. They will also take some of the database load, as the media volume info is kept on them rather than on the master.

Only use compression if you need to; otherwise it's best left alone. NetBackup can manage any image, but software compression may slow the transfer of data, and of course you will see no benefit in tape space. In fact, people have said that it can use more tapes. The tape drives have very good hardware compression, and it's best to leave that job to them, because speed-wise you will never beat them.

There is a compression setting for the database which you could use to reduce space. You can also look at the retention of logs and reports. Also, I only run verbose logging when there is an issue, because of the space the logs take up.
 
I have to agree with lenski. We only keep NetBackup logs for a maximum of 3 days, set at minimum verbosity. That way you can come in on Monday and still be able to look at everything from the weekend. We also gzip the log files that aren't from the current day, and we only keep information in the activity monitor for a five-day period. If I have to go back further, I just use Advanced Reporter for backup info. Last but not least, what version are you running? By default 4.5 saves catalog info as binary, but I believe that 3.4 does not; the binary format saves significant space. You back up more data in a month than I do in a year, so no matter what you do, /usr/openv will still be fairly large. Good luck!
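The rotation scheme described above (gzip anything older than the current day, delete after three days) can be sketched as a cron-able script. `prune_nbu_logs` is a hypothetical helper name; on a real master the directory to pass would be /usr/openv/netbackup/logs:

```shell
#!/bin/sh
# prune_nbu_logs: gzip yesterday's (and older) logs,
# then delete compressed logs older than 3 days.
prune_nbu_logs() {
    dir="$1"
    # Compress plain-text logs not modified in the last 24 hours.
    find "$dir" -type f -name 'log.*' ! -name '*.gz' -mtime +0 -exec gzip -f {} \;
    # Delete compressed logs older than 3 days.
    find "$dir" -type f -name '*.gz' -mtime +3 -exec rm -f {} \;
}

# Example crontab entry (runs nightly at 01:00):
#   0 1 * * * /usr/local/bin/prune_nbu_logs.sh /usr/openv/netbackup/logs
```

Note that GNU gzip preserves the original file's timestamp, so the age-based delete still works on the compressed copies.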
 
Well, unfortunately I walked into this job not knowing how messed up the environment was or how unmaintained it was. So I'm behind the mark on a lot of things.

The logs are gzipped after 3 days, so that's no problem now.
The catalog is not compressed at this point. We have also allocated another 300 GB, because they do not want to delete any of our catalog data, which means I can't run clean_in_background, so my catalog is out of sync!




 
We faced similar problems with our catalog backups, so I only back up the basics in the catalog and let the images directory get backed up daily with a full. That tape goes in conjunction with the catalog tape. If I need to recover something that would normally be on the catalog tape, I just do a regular restore from the full.

For instance, if I need to recover a system at a DR exercise, I bprecover the catalog tape, which includes the master's images folder, then use that information to perform a regular restore from the full tape to get the desired system's images folder back on disk.

Now my catalog backup takes about 20-40 minutes instead of 4 hours.
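For reference, the DR flow described above uses the 4.5-era catalog-recovery CLI. From memory (verify the exact flags against the bprecover man page for your release; the media ID and density below are placeholders), it looks roughly like:

```shell
# List what is on the catalog-backup tape before touching anything.
bprecover -l -m CAT001 -d dlt

# Recover the catalog from that tape, which brings back the
# master's /usr/openv/netbackup/db/images directory.
bprecover -r -m CAT001 -d dlt

# Then run an ordinary restore from the last full backup to pull
# the target client's images folder back onto disk, as described above.
```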
 
