Monday, June 17, 2024

Disaster Recovery Methods Expand


The bill for any kind of man-made or natural disaster can be staggering. From 2003 to 2006, more than 16 weather-related disasters in the United States each caused more than $1 billion in damage, according to the U.S. Department of Commerce’s National Climatic Data Center.

Not surprisingly, organizations throughout the country are taking steps to minimize the risk and safeguard assets. But the risks now extend beyond the traditional boundaries of disaster recovery (DR).

“When it comes to DR planning, the challenges are mounting for CIOs,” says Fred Moore, an analyst at Horison Information Strategies in Boulder, Colo. “They’ve added concerns ranging from widespread loss of electrical power to the growing intrusion threats from hackers to employee sabotage and terrorist attacks.”

Take the case of Fidelity Bank of Edina, Minn. After analyzing its DR needs, the bank concluded that the only way to reach its recovery objectives was to pay for space at a remote location for replicating images and data. But Fidelity Bank soon found that establishing a dedicated DR facility wasn’t enough. Once it opened that site, it had to confront a wide range of networking and bandwidth issues to replicate business data efficiently.

“We started sending 40 to 45 GB of data per day between our main bank and colocation facility,” says Rick Erickson, assistant network administrator at Fidelity Bank. “As the volume of data increased, we realized that we would have to double or triple our bandwidth to accommodate demand.”
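A quick back-of-the-envelope calculation shows why those volumes strain a WAN link. The figures below use only the 45 GB/day quoted above; the transfer windows are assumptions for illustration:

```python
# Back-of-the-envelope: sustained link speed needed to replicate a daily
# data volume within a given transfer window. Illustrative figures only.

def required_mbps(gigabytes_per_day: float, window_hours: float) -> float:
    """Sustained bandwidth (megabits/s) to move the volume in the window."""
    bits = gigabytes_per_day * 8e9      # decimal GB -> bits
    seconds = window_hours * 3600
    return bits / seconds / 1e6

# 45 GB replicated around the clock vs. within an 8-hour overnight window.
print(round(required_mbps(45, 24), 1))   # ~4.2 Mbps sustained
print(round(required_mbps(45, 8), 1))    # ~12.5 Mbps sustained
# Tripling the daily volume, as Erickson anticipated, triples the need:
print(round(required_mbps(135, 8), 1))   # ~37.5 Mbps
```

Even modest daily volumes translate into substantial sustained bandwidth once the replication window shrinks, which is exactly the squeeze Fidelity Bank describes.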

Rather than replicating entire server images across the WAN, Fidelity Bank was forced to send only the data. If a disaster occurred, it would have to rebuild the lost servers from the recovered data, a process slow enough to defeat the purpose of rapid recovery.

Thus, the organization began to look at WAN optimization solutions. It ultimately selected NX appliances by Silver Peak Systems in Santa Clara, Calif. The appliances sit between network resources and the WAN infrastructure used to connect them to remote users. The idea is to reduce the amount of data traversing the WAN and improve application performance. In addition, the technology protects stored data using encryption.
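Silver Peak’s actual techniques are proprietary, but the core idea behind this kind of WAN data reduction can be sketched with generic block-level deduplication: fingerprint chunks of the stream and send only chunks the far side has not already seen. Everything below (chunk size, the 32-byte reference format) is an assumption for illustration:

```python
import hashlib

CHUNK = 4096  # assumed chunk size for this sketch

def dedup_send(data: bytes, seen: set) -> tuple[list, int]:
    """Return the wire payload (references or literal chunks) and bytes sent."""
    payload, sent = [], 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in seen:
            payload.append(("ref", digest))   # 32-byte reference, not the data
            sent += 32
        else:
            seen.add(digest)
            payload.append(("raw", chunk))    # first sight: send it literally
            sent += len(chunk)
    return payload, sent

# A repeat transfer of largely unchanged data shrinks to tiny references.
seen = set()
_, first = dedup_send(b"A" * CHUNK * 10, seen)   # 10 identical chunks
_, second = dedup_send(b"A" * CHUNK * 10, seen)  # same data again
print(first, second)  # 4384 320
```

On the first pass only one literal chunk crosses the wire (the other nine are already references); on the second pass the entire payload collapses to references, which is how daily replication of mostly unchanged data yields the dramatic reductions described next.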

Using these tools, the bank consistently sees a 94 percent data reduction across the WAN, and 1 GB transfers have been reduced from 70 minutes to 4 minutes.
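Those figures are roughly self-consistent, which is worth checking. The sketch below uses only the numbers quoted above (94 percent reduction, 70 minutes per 1 GB before optimization):

```python
# Sanity check: if 94% of each transfer is eliminated, the same link that
# needed 70 minutes for 1 GB should need roughly 6% of that time.

link_bps = 8e9 / (70 * 60)        # implied raw link speed: ~1.9 Mbit/s
wire_bits = 8e9 * (1 - 0.94)      # what 1 GB becomes after 94% reduction
minutes = wire_bits / link_bps / 60
print(round(minutes, 1))          # ~4.2, in line with the reported 4 minutes
```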

Plan Ahead

Planning, then, is a critical aspect of any DR endeavor. And it’s one of the hardest parts to get right.

“A wrong guess can cost significant time and money,” says Erickson. “You find out through the planning process what applications have to be up sooner than later.”

Fidelity found out, for example, that e-mail was not as necessary as the IS organization originally thought. But some simple applications it didn’t rate highly were found to be absolutely crucial to rapid recovery. Thus, there is no substitute for taking a fresh look at the entire infrastructure during DR planning to evaluate what is vital and separate that from what is in reality non-essential.

Chip Nickolett, principal of systems integration firm Comprehensive Consulting Solutions of Brookfield, Wis., has noticed SAN mirroring becoming more and more common as a part of the business continuity plans he’s helping to implement. The reason is simple: Few firms can afford to be down one to three days while they execute a DR plan.

But even with sound planning, says Nickolett, there is no substitute for a plan that has been tested so thoroughly that it actually performs when required. He tells the tale of a severe failure at a customer’s main computer system. A replacement part was needed, and waiting for it would have kept production down for 24 hours or more. Luckily, the data was on a SAN.

“We took a small development system, executed their DR plan (making changes on the fly to accommodate the smaller system), and had them up and running in approximately four hours,” says Nickolett. “The DR plan allowed us to walk through everything that was needed, without forgetting anything, and restoring production systems in a relatively short period of time.”

This example highlights the essence of modern recovery strategies: speed. There is no longer time to hunt through piles of tape or wait for backup tapes to be retrieved from offsite storage and driven to a recovery facility. Things must be ready almost instantly.

“The best prepared organizations recognize that speed is essential in recovering from whatever disaster may come to pass,” says Moore.

That, says Greg Schulz, an analyst at StorageIO in Stillwater, Minn., is why remote data replication has become so important. Such technology is now making its way down the food chain and is becoming affordable for small and mid-sized organizations.

“Many entry level storage systems are also supporting some form of remote replication built into the solution, as opposed to requiring external appliances or host based software,” says Schulz. “The reason why is that small and medium size business data is just as much exposed and at risk as larger environments.”

He also points to another trend: the rise of bandwidth and latency optimization techniques, such as those used by Fidelity.

“WAN optimization continues to be popular as people understand issues and requirements centered around data movement and effective bandwidth,” says Schulz. “Continuous data protection (CDP) has gotten a lot of hype; however, adoption has been limited. That should change with more organizations leveraging it with data backup as part of an overall data protection strategy.”

