I run an Ultrium drive in a UNIX environment. I am seeing over 13MB/sec.
What are your server specs? Memory/CPU/disk type & size?
What about your file layout? Look at 10 random files on the disk and check their sizes. Do you have lots of little files (< 1MB in size)?
You will not see higher throughput with small files because of the per-file open/close overhead.
If you have fewer, larger files (> 1MB in size), you'll see higher throughput because the system is not so busy opening and closing individual small files; it is just streaming data from files it has already opened. If this sounds confusing, the two word-picture scenarios below should help (there's also a short code sketch after them)...
Scenario #1:
10,000 files, average size = 0.5MB
# Begin Backup flow
Open file
copy data to tape
close file
read directory entry for next file
Return to Open file.
Scenario #2:
5,000 files, average size = 15MB
# Begin Backup flow
Open file
copy data to tape
close file
read directory entry for next file
Return to Open file.
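Here's a minimal sketch of that loop in C, just to make the overhead concrete. The device path /dev/nst0, the command-line file list, and the 64KB buffer are illustrative assumptions of mine, not anything from a particular backup product; real backup software adds buffering, tape blocking, and a catalog, but it pays the same per-file open/close cost.

/* Minimal sketch of the per-file backup loop described above.
 * Assumptions (mine, for illustration): tape device at /dev/nst0,
 * files to back up given on the command line, 64KB copy buffer.
 * Error handling is kept to a minimum. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    int tape = open("/dev/nst0", O_WRONLY);    /* hypothetical tape device */
    if (tape < 0) { perror("open tape"); return 1; }

    char buf[64 * 1024];
    for (int i = 1; i < argc; i++) {
        int fd = open(argv[i], O_RDONLY);      /* per-file overhead, part 1 */
        if (fd < 0) { perror(argv[i]); continue; }

        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            write(tape, buf, (size_t)n);       /* the streaming part */

        close(fd);                             /* per-file overhead, part 2 */
    }
    close(tape);
    return 0;
}

Every trip through the loop pays the open and close system calls plus a directory lookup no matter how small the file is; the inner read/write loop is the only part that actually moves data to the tape.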
In the first scenario, the system spends most of its processing time opening and closing files, but in the second scenario it is mostly just doing disk reads and tape writes.
In my environment, I have about 100 files, each around 1.8GB in size, so my system only opens and closes files 100 times. The majority of CPU cycles are spent streaming data to the tape drive, which is how the higher throughput is attained.
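To put rough numbers on it (illustrative assumptions, not measurements): suppose each open/close pair costs around 5ms of system time. Scenario #1 then pays 10,000 x 5ms = 50 seconds of pure overhead to move only about 5GB, while my 100-file layout pays 100 x 5ms = 0.5 seconds of overhead against roughly 180GB, so essentially all of the elapsed time goes to streaming.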
Feel free to write me directly at bjverzal@yahoo.com if you still have questions.
Bill.