
backup rate crashes on zip files

Hi, we are running BE 9.0 on Windows 2000 Server (SP4) - local backups only - backing up to an ADIC Scalar 24 (LTO-2) drive over an Adaptec 29160LP SCSI bus. When our backups first start, they fly at over 1 GB/min. But when the job reaches a particular directory (approx. 15 GB of data, 90% of which is *.zip files), the backup slows to a crawl (200-300 MB/min). The drive statistics show over 7000 soft write errors - and this server/drive combo has only been up and running for a week! When we back up anything else on the server, the rates are fine - it appears to trip up only on this directory with all the zip files. Veritas is pointing the finger at the drive manufacturer (no big surprise there).

Could the errors we're getting have anything to do with the compression settings conflicting with the already-compressed zip files? FYI - we used to back up these files on a Windows NT 4.0 server with BE 8.6 to an M2 drive, and we did not have this problem (we recently migrated the data to the Win2k server and BE 9.0).
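For what it's worth, here's a rough sanity check we could run (just a Python sketch - the path is made up) to confirm the data really has nothing left to compress. If the ratio comes back near 1.0, the drive's hardware compression is simply churning on incompressible data:

```python
# Rough compressibility check for the problem directory (path is hypothetical).
# Already-zipped data should come back with a ratio close to 1.0, i.e. nothing
# left for the tape drive's hardware compression to squeeze.
import os
import zlib

SAMPLE_DIR = r"D:\Data\ZipFiles"   # hypothetical path to the 15 GB zip directory
CHUNK = 1024 * 1024                # sample the first 1 MB of each file

raw_bytes = 0
packed_bytes = 0
for root, _dirs, files in os.walk(SAMPLE_DIR):
    for name in files:
        with open(os.path.join(root, name), "rb") as fh:
            data = fh.read(CHUNK)
        raw_bytes += len(data)
        packed_bytes += len(zlib.compress(data, 6))

if raw_bytes:
    print("compressed/raw ratio: %.2f" % (packed_bytes / float(raw_bytes)))
```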

HELP!! We've tried replacing the SCSI card, and it made no difference. We've updated the Veritas device drivers AND the Adaptec card drivers, all to no avail.
 
Have you tried disabling compression on that job definition? You may want to consider setting up a test job, with hardware compression disabled, specifically for that directory.

There's an excellent article in the Veritas kb with some suggestions on this:

 
Well, we tried setting up a separate backup job to back up JUST the directory that has all the *.zip files in it - and we got the same result. The backup rate bombs to about 200 MB/min. All other backups on this media server are fine - they run at 1 GB/min or better. What is it with these @#$%@# zip files and BE!!!

grrrr!
 
causemaker

You could try testing this with ntbackup - that should give you a good idea whether the issue is Backup Exec or the tape device.

Also try doing a test backup using the backup-to-disk option - when you create the backup-to-disk folder, make sure you specify a partition big enough - try it with and without compression.

maybe do the backup to disk first.
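One more crude test you could script (a Python sketch - the directory path is a placeholder): just stream the problem directory off disk and time it. If the raw read alone runs well above 1 GB/min, the source disk isn't the bottleneck and the slowdown is on the compression/tape side:

```python
# Crude read-throughput test: stream every file in the problem directory and
# time it, with no compression or tape involved. The path below is a placeholder.
import os
import time

SOURCE_DIR = r"D:\Data\ZipFiles"   # hypothetical problem directory
CHUNK = 4 * 1024 * 1024

total = 0
start = time.time()
for root, _dirs, files in os.walk(SOURCE_DIR):
    for name in files:
        with open(os.path.join(root, name), "rb") as fh:
            while True:
                block = fh.read(CHUNK)
                if not block:
                    break
                total += len(block)

elapsed = max(time.time() - start, 0.001)
print("read %.1f GB in %.0f s  (~%.0f MB/min)"
      % (total / 1e9, elapsed, (total / 1e6) / (elapsed / 60.0)))
```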

I can understand why Veritas is pointing to the hardware, as soft write errors generally point to this - may as well try and prove them wrong.
 
Since you don't specifically mention it, I have to ask: did you disable compression? You can do this on the job definition in BE. You may also be able to do this with a setting on the tape drive. If you disable compression and have the same errors, this may be a drive problem.

You should then look for a drive testing utility from the manufacturer. There may be a firmware update for the drive that would help you. Justin2000's suggestions are also an excellent approach.

I don't know about the drive you're using, but SDLTs implement a version of the Lempel-Ziv algorithm. This is the same method used by zip utilities such as gzip.

Trying to re-compress compressed data frequently results in LARGER file sizes, and, I would suspect, leads to poor results when writing to tape, such as the drop in throughput you are seeing. I don't know whether that would result in a soft error being reported by BE.
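If you want to see it for yourself, here's a trivial Python sketch (nothing BE-specific): compress some text once to simulate the zip file, then compress the result again the way an LZ-based drive would - the second pass comes out slightly larger, not smaller:

```python
# Re-compressing already-compressed data: the second pass gains nothing, and the
# container overhead makes the output slightly larger. zlib's DEFLATE is an
# LZ77-based scheme, the same family used by zip/gzip and by LZ-based tape drives.
import zlib

payload = b"backup rate crashes on zip files " * 20000   # ~660 KB of repetitive text
once = zlib.compress(payload, 9)    # simulates the existing .zip data
twice = zlib.compress(once, 9)      # simulates the drive compressing it again

print("original       : %d bytes" % len(payload))
print("compressed once: %d bytes" % len(once))
print("compressed 2x  : %d bytes" % len(twice))   # >= len(once)
```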
 
This is an old thread but it reminds me of a problem I came across 3 years ago when rolling out Backup Exec 8.6 to 120 NT 4.0 servers. The rollout went fine for the most part, but two servers kept blue-screening during backup.

Eventually I tracked the problem down to a large file (compressed using NTFS compression) and a lack of disk space. What appeared to be happening was that either NT 4.0 or Backup Exec tried to uncompress the file prior to writing the data to tape. Due to the lack of disk space (it was a creaky old file server on its last legs), uncompressing the file swamped the drives/memory until the box fell over.

The system had previously been running ARCserve 6 and backing up without (significant) problems, but I don't think Backup Exec was the root cause of the crash - it may have been due to data growth (reducing the disk space available for file de-compression) or because the large compressed file was corrupt. In any case, removing the problem file from the backup stopped the server crashing (allowing backup and safe migration to a newer system...)

In my experience, all backup software slows down dramatically when it comes to a chunk of compressed data. Defragmenting the target data and ensuring there is a reasonable amount of free disk space for de-compressing files seems to help.
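If it's useful, a check like this could be dropped in front of the backup job (a Python sketch - the drive letter and threshold are just illustrative) to catch the low-disk-space case before it bites:

```python
# Pre-backup sanity check: warn when the volume holding NTFS-compressed data is
# nearly full, since decompressing files during backup needs working space.
# The drive letter and 10% threshold are illustrative, not anything BE mandates.
import shutil

VOLUME = "D:\\"            # hypothetical volume holding the compressed files
MIN_FREE_FRACTION = 0.10   # rule-of-thumb headroom

usage = shutil.disk_usage(VOLUME)
free_fraction = usage.free / float(usage.total)
print("free space on %s: %.1f%% (%.1f GB)"
      % (VOLUME, free_fraction * 100, usage.free / 1e9))
if free_fraction < MIN_FREE_FRACTION:
    print("WARNING: volume is nearly full - backing up compressed files may struggle")
```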
 
