For years, enterprise users of defragmentation software have commented upon an interesting phenomenon. In addition to better overall performance of servers and workstations, many also noted improved system reliability and uptime.
Interviews with dozens of system managers appear to validate the premise that a defragmented system runs far more stably than a fragmented one.
In this article, we discuss this issue and offer a technical explanation, along with supporting documentation.
Contiguity = Uptime
Having all program and data files stored in contiguous form seems to be a key factor in keeping a system stable and performing at peak efficiency. The moment a file is broken into pieces and scattered across a drive, it opens the door to a host of stability and reliability issues. According to a number of system managers, having even just a few key files fragmented can lead to significant performance reduction, system crashes, conflicts and errors.
“To maintain optimal system performance, companies need to schedule disk defragmentation on a regular basis for all their servers and workstations,” said Steve Widen, an IT analyst at International Data Corp (IDC). “Otherwise files can take 10 to 15 times longer to access, boot time can be tripled and nightly backups can take hours longer.”
The most common problems caused by file fragmentation are slow boot times and computers that won’t boot up at all, slow or aborted backups, file corruption and data loss, crashes and system hangs, errors in programs, memory problems and hard drive failures. Let’s take a look at a few of these:
Fragmentation is a major factor in slow boot times. This can deteriorate to the point where a server takes hours or won’t boot up at all. Once thought to affect only Windows 9X and NT, this situation is not uncommon on Windows 2000 and Windows XP systems.
The U.S. District Court in Maine, for example, experienced long delays with server shutdown and reboot.
“We had a couple of servers take 20 to 30 minutes to shut down,” says senior automation manager Kevin Beaulieu. “After defragmenting them, reboot time was reduced to 1 to 2 minutes.”
Various Microsoft Knowledge Base (MS KB) articles cover lengthy and failed boots. MS KB Q228734 and Q224526, for instance, describe Windows NT failing to boot because the Master File Table (MFT) or other key metadata files have become so fragmented that they cannot be located during the boot sequence.
Windows 2000 can also hang during startup if a large System “hive” file becomes fragmented. According to MS KB article Q265509, the System hive is usually the biggest file loaded during startup. (A hive is a group of keys, subkeys, and values in the registry that has a set of supporting files containing backups of the data; it is treated as a single unit and is saved and restored as one file.) Because the System hive is modified often, it typically suffers badly from fragmentation. As a result, it cannot be loaded and the computer hangs.
There are many documented cases of Windows errors and crashes caused by fragmentation. According to MS KB Q160451 and Q165456, for example, Windows may crash when you attempt to run CHKDSK on a heavily fragmented drive. Similarly, when the NTFS file system driver performs I/O to a fragmented file and does not correctly clear a required field, the process can stop responding; in other words, fragmentation can slow I/O to the point where programs and processes cease to function entirely. With a file scattered across the disk in many pieces, its data is unavailable when the system needs it, and a hang or crash results.
That’s what happened to the system at Southern Insurance. The intensive memory requirements of proprietary insurance software resulted in screens freezing and the need to repeatedly reboot.
“We were experiencing regular crashes, as well as slow system response,” said Don Ungaro of Southern Insurance.
Ungaro installed Executive Software’s Diskeeper (www.diskeeper.com) on servers, workstations and laptops. He explains that some of his company’s applications create an abundance of temporary files. This causes fragmentation, with its attendant slowdowns and instability, to set in very rapidly.
“Diskeeper provides system stability to business users,” he said. “The cost of continual shutdowns, reboots and crashes far outweighs the nominal cost of the program.”
It is widely accepted that fragmentation hastens the onset of hard drive failure by greatly increasing the amount of disk head movement over the life of the drive.
According to David Whitemore, a Help Desk specialist for tech support organization Protonics, “If files are fragmented, the hard drive has to work harder to round up all those files. However, if files are all perfectly organized, there is less searching for the files and that helps make the drives last longer.”
This is supported by an IDC study highlighting enhanced performance and lengthened machine lifespan as prime benefits of automatically defragmenting every server and workstation in the enterprise.
“It can be considered that defragmentation software can extend the life of a typical workstation,” said IDC’s Widen. “IDC estimates that enterprises can add up to two additional years to the normal three-year usable lifespan of workstations.”
While IT departments used to have 12 or more hours available for their backup processes, they are now expected to perform a backup in a much shorter period. At the same time, the amount of data to be backed up is growing exponentially. Lengthy backups mean no time for routine maintenance, and some backups may even have to be aborted.
If files are fragmented, though, the time needed to complete a backup multiplies — the drive head must waste time locating and gathering numerous fragments before they can be consolidated for backup. It is thus common for IT departments to report backup times shrinking significantly after instituting routine defragmentation.
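The arithmetic behind this multiplication is easy to sketch. The model below is purely illustrative: the seek time, throughput, file count and fragment counts are assumptions chosen for the example, not measurements from any of the sites quoted in this article.

```python
# Back-of-envelope model of backup read time. All drive parameters are
# illustrative assumptions, not figures from the article or any vendor.

def backup_read_hours(total_gb, n_files, frags_per_file,
                      seek_ms=12.0, throughput_mb_s=40.0):
    """Estimate hours a backup spends reading data from disk:
    sequential transfer time plus one head seek per file fragment."""
    transfer_s = (total_gb * 1024) / throughput_mb_s          # streaming read
    seek_s = n_files * frags_per_file * (seek_ms / 1000.0)    # one seek per fragment
    return (transfer_s + seek_s) / 3600.0

# 200 GB spread over a million files: contiguous vs. 12 fragments per file.
contiguous = backup_read_hours(200, 1_000_000, 1)    # ~4.8 hours
fragmented = backup_read_hours(200, 1_000_000, 12)   # ~41 hours
```

Even in this crude model, where sequential transfer takes well under two hours, seeks dominate once fragmentation sets in, which is consistent with the overnight backup windows described below.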
Reynolds Metals Co., for example, conducts daily backups on 80 Windows NT/2000 servers, using Backup Exec by Veritas to transfer the contents of 200 GB disks to DLT 7000 tape drives.
“In some cases, it took around 14 to 16 hours to back up servers each night,” said Bonnie Manley, open systems manager. “There was no time for maintenance, virus scan or anything else.”
To combat this situation, the company initiated a daily defragmentation schedule on all machines. According to Manley, “Now backups take a maximum of four hours to complete.”
Proactive System Management
If fragmentation is not addressed routinely, or if its role in causing reliability and stability problems goes unrecognized, IT workload can increase as staff misidentify the source of problems. This can lead to unnecessary software reinstallations, hard drive re-imaging and even needless equipment replacement. When IT staff are forced to work reactively, budgets rise and user productivity suffers from unacceptable levels of downtime.
On the other side of the coin, many of the system managers interviewed note that system management became easier, thanks to greater system availability, once they instituted regular defragmentation of their workstations and servers. They caution, however, that automation is the key to effective site-wide defragmentation, and that manual defragmentation tools should be avoided because they are labor-intensive. By setting aside a time slot each day for the defragmentation program to run automatically, they find that Help Desk calls are reduced, troubleshooting demands are lessened, and the need for reactive system maintenance is diminished.
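As a rough sketch of what such a daily time slot can look like without third-party software, later versions of Windows ship a command-line defragmenter that the built-in task scheduler can drive. The commands below are illustrative only: exact flags vary by Windows version, the task name is arbitrary, and the organizations quoted in this article used Diskeeper rather than the built-in tool.

```shell
rem Analyze fragmentation on the C: volume without actually defragmenting:
defrag C: /A

rem Create a daily scheduled task that defragments C: at 2:00 AM
rem under the SYSTEM account ("NightlyDefrag" is a made-up task name):
schtasks /Create /SC DAILY /TN "NightlyDefrag" /TR "defrag.exe C:" /ST 02:00 /RU SYSTEM
```

The point is less the specific tool than the pattern the managers describe: pick a low-traffic window, let the defragmenter run unattended, and leave manual runs out of the routine entirely.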
“Defragmentation helps us be proactive in the management of 88 servers and 520 desktops so that we provide the greatest performance and availability possible,” said Jim Roycroft of IT Support Center Inc.