Let's review:
Exchange executables - rarely written (on install and updates only), mostly read (albeit rarely; components are read into memory at startup). A read-mostly workload.
Exchange logs - sequential writes, sequential reads except during replay. Multiple threads read during replay, so it's a mixed workload. Because logs are written sequentially, if you have designed your system so that only one set of logs is on a drive, they never get fragmented in the first place.
Exchange databases - random reads and random writes
Working directory - mixed workload (inbound content conversion, STF files)
Temp directory - mixed workload (outbound content conversion, STF files)
SMTP directory - mixed workload. Multiple threads read, write, and update metadata in streams.
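To see why the database workload above is the least interesting case for a defragmenter, here is a toy sketch (my own illustration, not Exchange code; block counts and layouts are made up) that compares how much scattering a file's blocks inflates total head travel for a sequential, log-style access pattern versus a random, database-style one:

```python
# Toy model: a "file" is a mapping from logical block index to on-disk address.
# We sum the jump distance between consecutively accessed on-disk addresses.
import random

random.seed(42)

def total_seek_distance(file_blocks, access_order):
    """Sum of jumps between the on-disk addresses touched back to back."""
    return sum(abs(file_blocks[access_order[i]] - file_blocks[access_order[i - 1]])
               for i in range(1, len(access_order)))

n = 1000
contiguous = list(range(n))                    # defragmented layout
fragmented = random.sample(range(10 * n), n)   # blocks scattered across the disk

sequential = list(range(n))                    # log-style access pattern
rand_access = random.sample(range(n), n)       # database-style access pattern

# Sequential access suffers enormously from fragmentation...
seq_cost_ratio = (total_seek_distance(fragmented, sequential) /
                  total_seek_distance(contiguous, sequential))
# ...but random access is dominated by seeks whether the file is
# fragmented or not, so defragmenting buys comparatively little.
rnd_cost_ratio = (total_seek_distance(fragmented, rand_access) /
                  total_seek_distance(contiguous, rand_access))
print(seq_cost_ratio > rnd_cost_ratio)  # True
```

The same intuition applies above: the one file type that benefits most from contiguity (the logs) never fragments if it is isolated on its own drive, and the randomly accessed databases barely care.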
I don't see where a file-level defragmentation is going to have much positive impact. I do know that a file-level defragmentation will negatively impact performance while it is running. I also know that the MS Exchange Team (Nino) doesn't recommend it and says so in their blog.
If you are really serious about improving the performance of your system, see my comments about the allocation unit size of the boot volume in
You'll get a lot more mileage out of that, on many levels.