@rrkano - With Exchange, if you do a brick-level or even doc-level backup then you're unlikely to be able to stream enough data to avoid what is described in the document, but if you multiplex this or use disk staging then you should be OK.
@BackupFanatic - Certainly there were a lot of issues with SDLT drive firmware early on, and IIRC there were a few issues with broken leaders too, which I believe they modified the firmware to at least partly address; or maybe it was an HP/Quantum hardware bulletin I saw it in, I can't remember off the top of my head.
The problem in more recent times is that Quantum did a typically HP thing and re-used firmware revision numbers that had already been used by an OEM, or didn't follow a hex increment that lets anyone with a modicum of common sense work out that one firmware is more recent than another, which just causes more confusion and is bloody annoying!
Anyway, we're getting way off topic here. As to your question of whether disk staging would run into the problem of not being able to stream an LTO3 drive sufficiently: it's unlikely this would be an issue, providing of course that the SCSI or Fibre Channel path supplying the drive can support a sustained transfer rate (not just burst) equal to or greater than the drive's minimum streaming threshold.
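As a rough back-of-envelope check, you can compare your measured sustained source rate against the drive's rates. The figures below are only illustrative assumptions (LTO3 native rate and lowest speed-matching rate vary by vendor), so substitute the numbers from your drive's datasheet:

```python
# Illustrative sanity check: can the disk + bus feed the drive fast enough to keep it streaming?
LTO3_NATIVE_MB_S = 80        # assumed native (uncompressed) LTO3 rate
MIN_STREAM_MB_S = 27         # assumed lowest speed-matching rate; check your vendor's spec
SUSTAINED_SOURCE_MB_S = 45   # your measured sustained (not burst) rate from disk over SCSI/FC

if SUSTAINED_SOURCE_MB_S >= LTO3_NATIVE_MB_S:
    print("Drive can stream at full native speed.")
elif SUSTAINED_SOURCE_MB_S >= MIN_STREAM_MB_S:
    print("Drive will speed-match and keep streaming, just below native rate.")
else:
    print("Below the minimum streaming threshold - expect shoe-shining.")
```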
When disk staging triggers what is, to all intents and purposes, the same concept as a customised tapecopy job, you are doing a direct block-by-block copy from disk to tape, so you don't run into the increased filesystem I/O overhead of reading many small files. So all things being equal, and assuming you have a dedicated backup server which isn't also a production app, email, or other shared/heavily used server, it should be fine.
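To make the distinction concrete, here's a rough sketch of the two read patterns (not how ARCserve implements it internally, just the concept): one large sequential read of a session file versus walking thousands of small files, where every open/close and directory lookup adds overhead on top of the data itself.

```python
import os

BLOCK = 256 * 1024  # 256 KB read size, an arbitrary choice for illustration

def copy_session_file(src_path, dst):
    """Disk-staging / tapecopy style: read one big session file in large sequential blocks."""
    with open(src_path, "rb") as src:
        while True:
            chunk = src.read(BLOCK)
            if not chunk:
                break
            dst.write(chunk)

def copy_file_tree(root, dst):
    """File-by-file style: every small file costs an open, a read, a close and directory lookups."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            with open(os.path.join(dirpath, name), "rb") as f:
                dst.write(f.read())
```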
The one thing to watch out for is filesystem fragmentation. Even though what you're doing when you write to disk is writing a huge file which represents the session, you are still at the mercy of the OS filesystem write daemon.
What I mean by this is that when ARCserve issues a system call to the OS to write a block of data to disk, ARCserve itself has no way of telling the OS that it wants the data written to contiguous blocks; that really is down to the OS and isn't under the control of any application, even though it would make obvious sense to do so.
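Conceptually the staging write path looks like the sketch below (my simplification, not ARCserve's actual code): the application just keeps appending blocks to one big session file, and nothing in that call lets it ask for contiguous placement - where the blocks land on disk is entirely the filesystem's decision.

```python
def write_session(dst_path, block_source):
    """Append incoming backup blocks to a single session file.

    There is no parameter here (or in the underlying OS write call) to
    request contiguous on-disk placement; the filesystem decides that.
    """
    with open(dst_path, "wb") as dst:
        for block in block_source:   # blocks arriving from the backup stream
            dst.write(block)         # OS/filesystem picks the on-disk location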
I did quite a bit of research into this some time ago, but didn't come up with anything concrete, mainly due to lack of time and inclination to work any further on it for a number of reasons I won't go into here.
What I will say is that I did look into the possibility that fragmentation could be caused in high-load situations (read this as high CPU, high bus/PCI load, or maxed-out disk bandwidth) where the system calls on what used to be known as the lazy write daemon in NT.
Historically what used to happen in these situations in NT4 was that when the system was under high stress/load and writes or other operations were stacking up, or being cached or delayed, the OS would get a bit 'slap-happy', for want of a better term, and would simply write the pending data at the very first free space it came to, regardless of whether that made good filesystem fragmentation sense or not.
Although I did a lot of research into this, trying to find any concrete info on whether the same thing happens in 2000 or 2003 was difficult. What I do know is that it's not a problem unique to ARCserve. In fact a while back there was a huge thread on Veritas' forums about this, which their forum mods seemed to make great efforts to avoid replying to with any really useful information.
Getting back to the fragmentation issue: if it does become a problem, the 'quick' fix is to drop and recreate the partition (assuming you are using a dedicated partition for disk staging). Defragging doesn't always fix the issue and can take an age to complete, if it ever does. Whilst it might not show up as a slowdown in tapecopy or in the disk-to-tape stage of disk staging, it may show as a slowdown backing up to disk for no apparent reason - say, for example, a backup of the same unchanged dataset takes 1 hour one week and 1 hour 30 the next (and it has nothing to do with the ASDB).
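One way to spot that drift without guessing is to run a quick sequential-write probe against the staging volume every so often and compare the figures week to week. The sketch below is a hypothetical example (the path and sizes are made up, adjust to suit); if the MB/s figure keeps dropping with no other change, fragmentation of the staging partition is a likely suspect.

```python
import os
import time

TEST_FILE = r"D:\staging\throughput_probe.tmp"  # hypothetical staging volume path
BLOCK = 1024 * 1024                              # 1 MB writes
TOTAL_MB = 1024                                  # write 1 GB in total

def probe():
    """Write a large file sequentially and report sustained MB/s to the staging volume."""
    buf = os.urandom(BLOCK)
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(TOTAL_MB):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())   # make sure the data actually hits the disk
    elapsed = time.time() - start
    os.remove(TEST_FILE)
    print(f"{TOTAL_MB / elapsed:.1f} MB/s sustained write")

if __name__ == "__main__":
    probe()
```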
In a small or medium-sized org you're unlikely to come across the waffle I just went through above, but in a larger organisation where you're pushing the limits of the hardware you may well hit this. There are lots of variables other than just CPU and disk bandwidth, but unless you're really running into this problem I won't waste any more internet bandwidth explaining it.
