Ok. We are backing up about 20 servers in one job to an FSD and then to tape. Only one of the servers is a problem, because it is half a terabyte in size. Before we started staging the data, it took 5 hours to back it up straight to tape (locally attached, admittedly). Now it takes 10 hours to stage the data across a Gigabit LAN. All the other servers take 2 hours between them, but this one server, which runs as a single stream because it's an agent backup, is the problem.
10 hours is just about acceptable and fits in our backup window. The trouble is, we want to implement the same solution on a different site where the large server has a full terabyte of data, and I still need that to back up within 10 hours. By my rough maths, half a terabyte in 10 hours is only about 15 MB/s, so a terabyte in the same window needs roughly 30 MB/s, which is still well under what a Gigabit LAN should carry.
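For anyone who wants to check my sums, here's the back-of-envelope calculation I'm working from (my own numbers, nothing measured by the backup software itself):

```python
# Rough throughput needed to move a given amount of data in a given window.
# Sizes and window are from my own setup; adjust to taste.

def required_mib_per_s(size_gib: float, window_hours: float) -> float:
    """MiB/s needed to move size_gib within window_hours."""
    return (size_gib * 1024) / (window_hours * 3600)

print(f"current (0.5 TB / 10 h): {required_mib_per_s(512, 10):.1f} MiB/s")   # ~14.6
print(f"needed  (1 TB / 10 h):   {required_mib_per_s(1024, 10):.1f} MiB/s")  # ~29.1

# Gigabit Ethernet manages roughly 110 MiB/s in practice, so the wire
# itself shouldn't be the limit at either of these rates.
```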
I will see where I get with trial and error on this, but is the general theory that the smaller the chunk size, the better the performance?
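As part of the trial and error, I was thinking of benchmarking raw sequential writes to the staging volume at different chunk sizes, just to separate disk behaviour from the backup software's own setting. A minimal sketch of what I had in mind is below; the FSD_PATH, test size, and chunk sizes are all my own placeholders, and this only measures the disk, not the agent stream itself:

```python
# Sketch: time sequential writes to the FSD staging disk at several
# chunk sizes and report MiB/s for each.
import os
import time

FSD_PATH = r"E:\staging\chunk_test.bin"   # hypothetical staging volume path
TOTAL = 2 * 1024 ** 3                     # write 2 GiB per test
CHUNK_SIZES = [64 * 1024, 256 * 1024, 1024 * 1024, 4 * 1024 ** 2]

def bench(chunk_size: int) -> float:
    buf = os.urandom(chunk_size)          # incompressible data, one chunk
    written = 0
    start = time.perf_counter()
    with open(FSD_PATH, "wb") as f:
        while written < TOTAL:
            f.write(buf)
            written += chunk_size
        f.flush()
        os.fsync(f.fileno())              # force data to disk, not just cache
    elapsed = time.perf_counter() - start
    os.remove(FSD_PATH)
    return (written / 1024 ** 2) / elapsed  # MiB/s

for size in CHUNK_SIZES:
    print(f"{size // 1024:>5} KiB chunks: {bench(size):7.1f} MiB/s")
```

If the disk shows the same speed at every chunk size, I'd take that as a sign the bottleneck is the single agent stream rather than the staging device.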
Thanks
Ruth