By Ian Hamilton
In the aftermath of the September 11 terrorist attacks, increasing concern has been voiced about the security of both business and government computer systems. How safe is sensitive data? And when systems are destroyed in any kind of disaster, how do organizations go about the process of recovery?
Many businesses requiring disaster-tolerant systems maintain, at a remote facility, a duplicate of all data stored in the primary data center. Typically, data at the remote facility is synchronized in near real time. Disk storage systems map the information in files or databases into tracks and blocks; each time a track or block is written to the local disk, it is also written to the remote disk.
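The mirroring pattern described above can be sketched in a few lines. This is a minimal illustration, not a real storage driver: the dictionaries stand in for local and remote disks, and all names are hypothetical.

```python
# Sketch of synchronous block-level mirroring: every block written to
# the "local" store is also written to the "remote" store before the
# write is acknowledged. Dictionaries stand in for the two disks.

class MirroredBlockStore:
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.local = {}   # block number -> bytes (the primary disk)
        self.remote = {}  # the remote disk across the WAN link

    def write_block(self, block_no, data):
        # Write locally first.
        self.local[block_no] = data
        # Synchronous remote write: the operation is not complete until
        # the remote copy exists. This is what keeps data loss near
        # zero, and also what makes latency sensitive to distance.
        self.remote[block_no] = data
        return True

store = MirroredBlockStore()
store.write_block(0, b"payroll record")
```

Because every write crosses the wide-area link before it completes, the technique's cost scales with distance and write volume, which is the telecom-cost problem noted below.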
These storage systems ensure minimal data loss in the event of a disaster, but at substantial system and telecom cost. When the remote facility is a significant distance from the primary site, the link often requires DS3 connectivity, with associated costs in the range of $20,000 per month for a cross-country circuit.
In less critical situations, a commonly employed disaster recovery technique is offsite storage of backup media. In the event of a disaster, the backup media are used to load data onto a remote system. This approach requires substantial time to recover from a disaster, and all updates made between backups are lost.
Another approach, used when data replication is of medium criticality, is scripted FTP. One financial services firm reported running around 100 FTP scripts each night, collecting information from departmental file servers and transferring it back to corporate file servers for backup and disaster recovery. Supporting this system requires a substantial IT staff and considerable network bandwidth, since entire copies of the file system are transferred.
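A nightly script of this kind typically looks something like the following sketch, which uses Python's standard ftplib. The hostnames, paths, and credentials are purely illustrative; the point is the pattern, and its cost: every file moves in full each night, even if only a few bytes changed.

```python
# Sketch of the nightly scripted-FTP approach: pull whole files from
# each departmental server back to a central host for backup.
# All hostnames, directories, and credentials are hypothetical.
from ftplib import FTP

DEPARTMENTS = [
    ("files.sales.example.com", "/exports/daily"),
    ("files.hr.example.com", "/exports/daily"),
]

def pull_all(host, remote_dir, user="backup", passwd="secret"):
    """Transfer every file in remote_dir in full. The whole file moves
    each night whether or not it changed, which is why this approach
    consumes so much bandwidth."""
    ftp = FTP(host)
    ftp.login(user, passwd)
    ftp.cwd(remote_dir)
    for name in ftp.nlst():
        with open(name, "wb") as out:
            ftp.retrbinary(f"RETR {name}", out.write)
    ftp.quit()

def nightly_jobs(departments=DEPARTMENTS):
    # One full-copy job per departmental server; a firm with 100
    # scripts simply has a much longer list than this one.
    return [(host, path) for host, path in departments]
```

Multiply this by dozens of servers and the staffing and bandwidth burden described above becomes clear.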
So, how do mid-sized and smaller companies protect critical data without busting their IT budgets to do so? Using a trusted data transfer service, entire file systems can be replicated with fewer human resources and significantly less bandwidth.
Incremental transfers allow businesses to transfer only portions of the file systems that are different on the source and target hosts. Synchronization can be scheduled to occur as frequently as the underlying business requirements dictate. Synchronization can additionally be performed on portions of file systems, between heterogeneous systems and on any combination of source and target hosts. Public networks can be used to transfer data for synchronization without fear of data corruption or interception.
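The selection step behind incremental transfer can be illustrated with a simple checksum comparison: hash each file on the source and target and send only files whose digests differ. This is a sketch of the general idea, not any particular product's method; a production service would also detect changes within files and encrypt the channel.

```python
# Sketch of incremental synchronization: transfer only files that are
# new or changed between source and target. The dictionaries stand in
# for two file systems (relative path -> file contents).
import hashlib

def digest(data: bytes) -> str:
    # Content fingerprint; equal digests mean the copies match.
    return hashlib.sha256(data).hexdigest()

def files_to_transfer(source: dict, target: dict) -> list:
    """Return the paths that must cross the wire: files missing from
    the target or whose contents differ from the source."""
    changed = []
    for path, data in source.items():
        if path not in target or digest(target[path]) != digest(data):
            changed.append(path)
    return sorted(changed)

src = {"a.txt": b"v2", "b.txt": b"same", "c.txt": b"new"}
dst = {"a.txt": b"v1", "b.txt": b"same"}
# Only the changed a.txt and the new c.txt are transferred;
# the unchanged b.txt never crosses the network.
```

Compared with the scripted-FTP approach, where every file moves every night, only the changed fraction of the file system consumes bandwidth.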
Bottom line: All companies need a plan in place to protect organizational data. Disasters occur and they come in all shapes and sizes. Whatever might happen, no organization can afford to let it take the business down.
Ian Hamilton is vice president of research and development for Signiant, Inc., a provider of trusted data transfer services for businesses.