I need to defrag the harddrive (not the Exchange database) that the Exchange 2003 application is running on.
Is there anything special that needs to be done prior to running the defrag to avoid corrupting Exchange?
Back it up before you start and when you've finished.
Ideally you should never need to. There shouldn't be anything else on the disk/partition that has a store on it, and if there isn't, don't even bother defragging it. The pretty blocks you see in a defrag programme have nothing to do with how the disk is actually laid out.
If you have got other things on it then fine, but try and find a home for one thing or another so that the database is left alone.
I've been thinking about doing the same thing. Our Windows Server 2003 tells me two of my drives are nearly 40 percent fragmented, but like you, I'm afraid of screwing up the Information Store.
If you really wanted to do this, I would recommend that you do an offline Exchange defrag first and make sure you can delete all transaction logs. Clean up the logs, and once you've verified that Exchange mounts OK, shut down all Exchange services. Then temporarily move the Information Store files to another disk to free up that space, defrag the disk, move the database files back, and restart the Exchange services.
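The steps above boil down to a handful of commands. A rough sketch, assuming the default Exchange 2003 paths and an E:\Staging folder with enough free space (both are placeholders; adjust them, and take a backup, before running anything):

```shell
REM 1. Dismount the store in Exchange System Manager, then run the offline defrag
eseutil /d "D:\Exchsrvr\MDBDATA\priv1.edb"

REM 2. Stop the Exchange services so the files can be moved
net stop MSExchangeIS
net stop MSExchangeSA /y

REM 3. Temporarily relocate the database files to free up space on D:
move "D:\Exchsrvr\MDBDATA\priv1.edb" "E:\Staging\"
move "D:\Exchsrvr\MDBDATA\priv1.stm" "E:\Staging\"

REM 4. Defrag the now-emptier volume with the built-in defragmenter
defrag d:

REM 5. Put the files back and restart Exchange
move "E:\Staging\priv1.edb" "D:\Exchsrvr\MDBDATA\"
move "E:\Staging\priv1.stm" "D:\Exchsrvr\MDBDATA\"
net start MSExchangeSA
net start MSExchangeIS
```

Note that `eseutil /d` needs free space roughly equal to the database size for its temporary copy, which is another reason to stage on a second disk.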
Some of you need to go back and re-read the post. Gdanner doesn't want to defrag the partition with the store, but the one with the Exchange executables.
This isn't necessary, though, as even a fairly high level of fragmentation won't have much effect on system performance. If you do decide to go ahead, the first suggestion is right on (though I don't think a backup afterward is needed, as the data will be unchanged).
Exchange executables - rarely written (on install and updates only), mostly read (albeit rarely; components are read into memory at startup).
Exchange logs - sequential writes, and sequential reads except during replay. Multiple threads read during replay, so that's a mixed workload. Because the logs are written sequentially, if you've designed your system so that only one set of logs is on a drive, they never get fragmented in the first place.
Exchange databases - random reads and random writes
Working directory - mixed workload (inbound content conversion, .stf files)
smtp directory - mixed workload. Multiple threads read, write, and update metadata in streams.
I don't see where a file-level defragmentation is going to have much positive impact. I do know that a file-level defrag will negatively impact performance while it is running, and that the MS Exchange Team (Nino) doesn't recommend this and says so on their blog.
If you really are serious about improving the performance of your system, see my earlier comments about the allocation unit size of the boot volume.
Thanks to all who have posted so far.
Let me clarify a little more.
My main concern is that I am worried about corrupting the Exchange DB if I try and defrag the D: drive.
This server was built as an Exchange server that also hosts file and print sharing. (I didn't build it, I inherited it).
The D: drive, a 500 GB RAID 5 partition, hosts both shared files and the Exchange DB on the same partition. The Exchange database is ~70 GB (with only about 10% free space, which is another issue I am working to clear up). The rest of the space is taken up by departmental shares/files and employee home folders/files.
The D: Drive is reporting Total Fragmentation: 30% and File Fragmentation: 61%.
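Those figures come from the analysis pass, which can also be run on its own from the command line; it only reports and moves nothing. A minimal sketch using the Windows Server 2003 command-line defragmenter (substitute your own drive letter):

```shell
REM -a = analyze only: prints the fragmentation report without defragmenting
REM -v = verbose output, including the per-category fragmentation percentages
defrag d: -a -v
```

This is a cheap way to re-check the numbers after any cleanup, before deciding whether a full pass is worth it.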
Sounds like you are in the same boat as I am. I'm not going to defrag the drive at this point. I do quarterly IS defrags though. This creates a new store, so I figure if there was fragmentation on the store itself, it would correct itself when it gets re-written.
You would be better off taking the whole lot off somewhere, partitioning the disk up (size to suit, I don't care how) and then sliding all the data back.
The stores and run of the mill file data have no business being together.
That's a good point. From a performance perspective you generally gain nothing by creating multiple logical drives on the same physical volume. This assumes that the Exchange components have their own dedicated drives and are properly segregated from other files. Partitioning, or using multiple logical drives on the same set of physical spindles, does however provide some advantages in certain circumstances:
1. It can reduce file level fragmentation by segregating files.
2. It can reserve space, preventing an application or component from growing uncontrollably to the point that it impacts other applications or components.
From your comments, I take it that you are already painfully aware that RAID 5 is not optimal for Exchange, and that Exchange should have its own dedicated drives. Given what you inherited and where you are today, creating multiple logical drives is probably the best workaround. Create separate logical partitions for the print spooler, the file shares, the Exchange logs, and the Exchange database. You'll still have the issue of all the I/O hitting the same non-optimal spindle configuration, but at least your file-share data won't intrude on Exchange and vice versa (and you'll minimize file-level fragmentation). The real solution is obviously more disk, probably an external enclosure, but the workaround will buy you some time to budget and procure that.
"I do quarterly IS defrags though." You should not be running a last-ditch repair tool as part of your regularly scheduled maintenance. Unless the freed space permanently goes away, you are degrading your performance and taking unnecessary downtime (not to mention introducing a potential source of data loss). The database is internally defragmented by the online defragmentation process, and free space is reused before the physical file size is increased. Doing an offline defrag removes whitespace (a certain amount of which is necessary and normal) and clears indexes; performance is degraded while the indexes are rebuilt and repopulated and the necessary whitespace is recreated. There are far better ways to deal with file-level fragmentation.
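If the goal of those quarterly runs is just to see how much whitespace the store is carrying, there are cheaper ways to look. Online maintenance logs the reclaimable free space in Application event ID 1221, and eseutil has a report-only space-usage mode. A sketch (the path is an assumption, and the store must be dismounted first):

```shell
REM /ms = file dump in space usage mode: reports free and used pages
REM without modifying the database
eseutil /ms "D:\Exchsrvr\MDBDATA\priv1.edb"
```

Only if event 1221 consistently reports a large, permanent amount of free space does the downtime of an offline defrag start to pay for itself.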