
Large DBs, backups and remote sites


jcasetnl

Hello,

We have seven remote VPN sites running Microsoft RMS, which is a Point of Sale application, on top of SQL Server. For availability reasons the SQL DB was installed on the POS machine. This is the same machine that is used to actually ring up customers. The thinking was that if all connectivity were to be lost the POS machine would still function.

The problem is backing up the databases. Each DB holds a lot of customer info, and a full backup file is typically 300-500 megs. Each site has a DSL link of 512 Kbps to 768 Kbps, i.e. not much bandwidth. Right now they do a nightly full backup to a network share over the VPN link. As you can imagine, transferring 300-500 megs over a link like this is not only slow but fails quite often.

My first thought is to go to delta (differential) backups, but we expect these DBs to grow to 2 GB over the next few months, and we still need to do a weekly full backup at a minimum. At 2 GB, I expect the failure rate to be totally unacceptable.
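Roughly what I have in mind is something like this (just a sketch; the database name, the backup path, and the osql calls are my own placeholders, not necessarily how the RMS databases are actually set up):
Code:
REM Weekly full backup, run locally on the POS machine
osql -E -Q "BACKUP DATABASE RMSStore TO DISK = 'D:\Backup\RMSStore_full.bak' WITH INIT"
REM Nightly differential - only the changes since the last full
osql -E -Q "BACKUP DATABASE RMSStore TO DISK = 'D:\Backup\RMSStore_diff.bak' WITH DIFFERENTIAL, INIT"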

Can anyone suggest an architecture that will allow us to meet our backup needs while keeping the system as available as it is now?

Thanks in advance.
 
Do the backups to the local disk, then use robocopy to copy/move the backup file over the VPN to the network share.

Robocopy will be much faster at copying the data across the VPN connection. It also has settings to retry on failure and can resume where it left off.
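Something along these lines (a sketch only; the database name, paths, and share name are placeholders for whatever your nightly job already uses):
Code:
REM 1. Back up to local disk first, so the POS box always has a current copy
osql -E -Q "BACKUP DATABASE RMSStore TO DISK = 'D:\Backup\RMSStore_full.bak' WITH INIT"
REM 2. Ship the file over the VPN in restartable mode, retrying on failures
robocopy D:\Backup \\server\share\%COMPUTERNAME% *.bak /Z /R:20 /W:30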

Denny
MCSA (2003) / MCDBA (SQL 2000)

--Anything is possible. All it takes is a little research. (Me)
[noevil]
 
That was my initial thought as well. I looked into it and it's still a possibility, but robocopy adds quite a bit to the process.

1. I have to write a script for each location that my coworkers will not be able to understand or troubleshoot effectively.

2. In addition to robocopy, I need a compression tool and a renaming tool for the backup. If either one fails, the whole job fails, and I have no way of knowing without manually checking whether the file was copied.

3. At the destination, I need yet another script to clean up old files, or that, too, becomes a manual process.

4. Error reporting is either slim or non-existent.

Any other ideas?
 
You can write a single script that uses variables to keep each site's data separate.

1. This will copy everything into its own folder on the server. You can use the same command for every site.
Code:
robocopy d:\backup\folder\files \\server\share\%COMPUTERNAME%\ *.* /Z /S

2. Get a copy of gzip.exe. It compresses and renames each file in a single step (the .gz extension is added to the name). After gzip is done, have the script robocopy the compressed files.
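Something like this (a sketch; it assumes gzip.exe is in the path and the backups end in .bak):
Code:
REM Compress each backup in place: file.bak becomes file.bak.gz and the original is removed
for %f in (d:\backup\folder\files\*.bak) do gzip "%f"
REM Then ship only the compressed files
robocopy d:\backup\folder\files \\server\share\%COMPUTERNAME%\ *.gz /Z /S
(Use %%f instead of %f if you run the for loop from inside a .cmd file.)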

3. You can again use robocopy to handle the old files on the destination: move anything at least 3 days old into a temporary folder, then delete everything in the temporary folder. (This keeps only the files that are less than 3 days old; anything older gets moved and deleted.)
Code:
robocopy e:\folder\to\backups\ e:\temp\ /S /MOV /MINAGE:3
@for /f "usebackq delims=" %a IN (`dir /S /B e:\temp\`) do del "%a"
(Again, use %%a instead of %a if this goes into a .cmd file.)

4. The DOS commands will write status output that can be captured to a file by whatever you use to schedule them.
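For example, if you put the commands in a .cmd file, the Scheduled Task can run something like this (the script name and log path are just examples), and robocopy can also write its own log with /LOG+:
Code:
REM Append everything the script prints, including errors, to a per-site log file
cmd /c C:\Scripts\nightly_backup.cmd >> C:\Scripts\backup_%COMPUTERNAME%.log 2>&1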

Denny
MCSA (2003) / MCDBA (SQL 2000)

--Anything is possible. All it takes is a little research. (Me)
[noevil]
 
Is log shipping a possibility? If you're not using the MS version, write your own. We did log shipping over about the same type of line and it worked fine; the only killer was rebuilding indexes, but if you have a downtime window you should be OK.
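If you write your own, it is basically just two scheduled jobs; a rough sketch only (the database name, paths, and osql calls are placeholders, the database needs to be in full recovery, and the head-office copy has to stay in NORECOVERY or STANDBY until you need it):
Code:
REM At the store, every 15-30 minutes: back up the transaction log and ship it
osql -E -Q "BACKUP LOG RMSStore TO DISK = 'D:\LogShip\RMSStore.trn' WITH INIT"
robocopy D:\LogShip \\server\logship\%COMPUTERNAME% *.trn /Z
REM At head office: restore each shipped log in sequence, leaving the copy unrecovered
osql -E -Q "RESTORE LOG RMSStore FROM DISK = 'E:\LogShip\RMSStore.trn' WITH NORECOVERY"
(A real script would date-stamp each .trn file so none get skipped or overwritten.)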
 
