Streams/Readers/Multiplexing - Balancing Act?!

hutchingsp (TechnicalUser) (OP)
2 May 09 10:34
I'm struggling to get what I think is the optimum throughput from my main file server.

The basic issue is that I have a file server with a twin-drive LTO3 library attached (so it's also a MediaAgent), and the data volumes on the file server are on a 4Gb/sec SAN, so they're reasonably quick.

A full backup is around 6.5TB and I average 320GB/hr, which isn't terrible - but what I want is an even spread/flow of data to both tape drives when running a full backup, which I think would saturate them both.
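As a sanity check on "saturate both drives", LTO3's native rate is about 80MB/sec per drive, so here's the back-of-the-envelope maths (Python; spec numbers rather than anything measured here):

    # Rough numbers: what saturating both LTO3 drives would look like.
    LTO3_NATIVE_MBS = 80                                # MB/s native, per drive (spec)
    per_drive_gb_hr = LTO3_NATIVE_MBS * 3600 / 1024     # ~281 GB/hr per drive
    both_gb_hr = 2 * per_drive_gb_hr                    # ~562 GB/hr for the pair
    full_backup_gb = 6.5 * 1024
    print(f"now:    {full_backup_gb / 320:.1f}h at 320GB/hr")              # ~20.8h
    print(f"target: {full_backup_gb / both_gb_hr:.1f}h if both saturated")  # ~11.8h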

My full backup storage policy is currently configured for 2 streams.

The primary copy (to the LTO library) has a multiplexing factor of 4.

The default backup client has 8 readers configured and "Allow multiple readers within a drive/mount point" is selected.

What seems to happen is that the backup doesn't "balance" across both drives. For argument's sake, 8 readers kick off and 4 go to each drive; as the backup progresses, 3 readers might finish, all on the same drive, so I'm left with one drive running at full speed with "excess" capacity (i.e. the readers are capable of pulling more data than the drive can accept) whilst the second drive has a single reader, with capacity sat doing nothing.
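To put rough numbers on the effect, here's a toy simulation (Python; all the rates and sizes are invented, purely to show the shape of the problem):

    # Toy model: readers are statically bound to a drive, so once the readers
    # on one drive finish, their capacity can't move over to help the other.
    def drain(assignment, drive_rate=80.0, reader_rate=30.0, dt=60.0):
        """Seconds until all data is on tape, readers fixed per drive."""
        remaining = {d: [gb * 1024.0 for gb in readers]        # GB -> MB
                     for d, readers in assignment.items()}
        t = 0.0
        while any(remaining.values()):
            for d, readers in remaining.items():
                if not readers:
                    continue
                # Drive absorbs at most drive_rate; each reader supplies reader_rate.
                rate = min(drive_rate, len(readers) * reader_rate)
                share = rate * dt / len(readers)   # MB written per reader this step
                remaining[d] = [r - share for r in readers if r > share]
            t += dt
        return t

    # 4 readers per drive; drive_a's readers finish early while one big reader
    # is left crawling on drive_b, underfeeding a drive that could take more.
    uneven = {"drive_a": [900.0, 850.0, 800.0, 750.0],
              "drive_b": [700.0, 700.0, 700.0, 3050.0]}
    total_mb = sum(gb * 1024.0 for rs in uneven.values() for gb in rs)
    ideal_h = total_mb / (2 * 80.0) / 3600.0   # both drives saturated throughout
    print(f"static: {drain(uneven) / 3600.0:.1f}h  vs  ideal: {ideal_h:.1f}h")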

I hope I've done a half-decent job of explaining that :)

Any suggestions appreciated.

Oh, this is Galaxy 7 with all SPs, and everything is Windows.
Psy053 (TechnicalUser)
3 May 09 22:09
You've noted that you have a multiplexing factor of 4 - is this indicative of the MediaAgent also backing up other clients and subclients?

If it is, are these backups occurring at the same time as the file server backups?
hutchingsp (TechnicalUser) (OP)
4 May 09 6:03
It does also back up other clients, though they're much, much smaller.

Basically I find the readers/streams issue "confusing" at best - I just know that when all my jobs are running, if I look in the Job Controller at the "Number Of Readers" column, it always shows, as a maximum, the result of "streams x multiplexing" i.e. "2 x 4 = 8".

If I lower multiplexing to 2 or 3, I see a maximum of 4 or 6 in this column, even though all those streams may come from the backup of 1 FS iDataAgent (the large file server).
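So as far as I can tell the ceiling is simply this (a hypothetical helper, just restating what the Job Controller shows):

    # The cap the Job Controller appears to apply to concurrent readers.
    def max_readers(device_streams, multiplexing_factor):
        return device_streams * multiplexing_factor

    print(max_readers(2, 4))  # 8 - what I see with MUX 4
    print(max_readers(2, 3))  # 6
    print(max_readers(2, 2))  # 4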
Psy053 (TechnicalUser)
4 May 09 19:10
Are you backing up Databases at the same time?

Apparently there can be performance issues when multiplexing Filesystem and Database jobs.
hutchingsp (TechnicalUser) (OP)
5 May 09 15:36
Not during the full backup window, no.
Psy053 (TechnicalUser)
13 May 09 21:38
I've been thinking about this a little, and I'm not sure I see how it's beneficial to have your file server set up with additional readers and multiplexing when it's going straight to tape.

From what you have explained of your environment, I would have thought that having multiple readers reading the same data, and multiple streams writing to the same tape, would cause contention somewhere in the process.

I may be way off, but is it possible that a straight 2 readers to 2 streams would be a better option for achieving a balanced, optimal transfer, depending on your SAN supplying the data at an acceptable rate?
hutchingsp (TechnicalUser) (OP)
15 May 09 9:06
I think some of it comes down to the sizes of the files - we don't have a file server full of large files; there are millions of small ones, where I believe the nature of the Windows file handling APIs means you need lots of parallel reads to get decent speed.
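Something along these lines (a rough Python sketch, nothing to do with Galaxy internals, just to show why parallel readers help with lots of small files):

    # With millions of small files, the per-file open/stat/close overhead
    # dominates, so one sequential reader leaves the tape drive starved;
    # several concurrent readers keep data flowing.
    import concurrent.futures
    import pathlib

    def read_one(path):
        return len(path.read_bytes())     # one open/read/close round trip

    def read_tree(root, workers=8):
        files = [p for p in pathlib.Path(root).rglob("*") if p.is_file()]
        with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
            return sum(pool.map(read_one, files))

    # Compare workers=1 vs workers=8 on a tree of small files to see the gap.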
