After Katrina… Remapping Disaster Plans

In IT, there was an accepted rule of thumb that stated primary and secondary data storage facilities should be 25 kilometers apart. That was the rule until a few weeks ago, anyway.

The multi-state path of destruction of hurricanes Rita and Katrina has turned that rule on its head, forcing many IT managers to rethink their disaster recovery plans.

“We used to assume that disasters would not encompass 300 miles. Katrina was one of the broadest natural disasters to hit the U.S. and it blew that right out of the water,” says Andreas Antonopoulos, senior vice president at Nemertes Research in New York City.

Antonopoulos says a new rule of thumb is emerging: IT managers should plan their off-site storage to be in a different geographic region than their primary data center.

“What we learned from Katrina is that in a disaster that big, you have failures across many facilities. You have to take into consideration that you have no water, no electricity, structural damage and no personnel. Normally, you take into account some of these conditions, but not all of them at once,” he says.

For instance, he says companies located wholly in New Orleans were completely out of commission.

To avoid this scenario, experts say companies should redraw their plans to address remote storage and recovery facilities, as well as emergency staff to manage the process.

The first step to remapping disaster recovery plans is to decide what data has priority, according to Dave Kershen, practice manager for the data management group at Sun Microsystems, Inc. in Dallas.

“If you’re a customer-facing organization, you’re going to want to make sure your customer data is ready to go,” says Kershen. “It would cover the gamut from customer support, which is your most visible area, to backing up pieces of customer care, such as billing, crediting and invoicing. If you’re a Web portal, then you need to make sure your Web hosting system is online and running.”
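
Kershen’s triage maps naturally onto a simple inventory exercise. The Python sketch below is only an illustration of that prioritization step; the system names, tiers and recovery-time targets are hypothetical placeholders, not anything Sun prescribes.

```python
# A sketch of recovery-priority triage. The systems, tiers, and
# recovery-time objectives (RTOs) below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class System:
    name: str
    tier: int        # 1 = restore first
    rto_hours: int   # target hours until the system is back online

inventory = [
    System("customer-support-portal", tier=1, rto_hours=4),
    System("billing-and-invoicing", tier=1, rto_hours=8),
    System("web-hosting", tier=1, rto_hours=4),
    System("internal-wiki", tier=3, rto_hours=72),
]

# Restore order: lowest tier first, tightest RTO first within a tier.
for s in sorted(inventory, key=lambda s: (s.tier, s.rto_hours)):
    print(f"tier {s.tier}: {s.name} (RTO {s.rto_hours}h)")
```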

Taking the time to prioritize is key because of the high costs for remote backups. “It’s more expensive than doing it on your own locally,” Kershen admits.

What drives up the cost are the service costs for storing the data outside your own data center — whether you outsource to a remote backup service or colocate at a data center — and the expense of frequently sending gigabytes of data across the wide area.

But Antonopoulos says to look at remote backup costs as an insurance policy. “You have to look at the risk you’re willing to assume,” he says.

He also recommends using information life-cycle management tools as a guide for determining what data to back up remotely.

“Your storage strategy should match your disaster recovery strategy,” Antonopoulos explains. “Look at each piece of data and decide what level of freshness you need for it. Can it be on tape or does it have to be online and immediately available? Also, there are different backup frequencies that you can consider.

“Data can be stored in real time, daily, weekly or even monthly,” he adds. “The less frequent, the lower your cost. You need to make sure that the right data is matched to the right method.”
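
One way to picture that matching exercise is as a lookup table from data class to backup frequency and medium. The sketch below is a minimal, assumed example; the classes, intervals and media are illustrative choices, not a prescription from the article.

```python
# A sketch of matching data classes to backup frequency and medium.
# The classes, intervals, and media are illustrative assumptions only.
BACKUP_POLICY = {
    # data class:       (frequency,   medium,          rationale)
    "orders":           ("real-time", "online mirror", "cannot lose transactions"),
    "customer-records": ("daily",     "remote disk",   "a day of loss is tolerable"),
    "email":            ("daily",     "remote disk",   "retention requirements"),
    "archives":         ("monthly",   "off-site tape", "rarely changes; cheapest"),
}

def policy_for(data_class: str) -> str:
    """Describe how a given class of data should be protected."""
    freq, medium, why = BACKUP_POLICY[data_class]
    return f"{data_class}: back up {freq} to {medium} ({why})"

for cls in BACKUP_POLICY:
    print(policy_for(cls))
```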

Allen Gwinn, senior IT director at Southern Methodist University in Dallas, says, “I only back up what I can’t afford to lose. That’s my litmus test.”

Gwinn, who weathered the most recent hurricanes, as well as the great flood of 1995 that hit Dallas, is a big believer in off-site storage.

“You do have to have your data in at least two locations,” he says.

He makes sure that staff and faculty email and file server files are backed up regularly and stored in several locations. He not only mirrors the files at a data center more than 10 miles away, but he also keeps a current copy of the data on 400GB external hard drives wrapped in weatherproof casing with an IT staff member in case of a forced evacuation.
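
A setup like Gwinn’s can be automated with a short script that pushes the same backup set to every destination. The sketch below assumes rsync is available and SSH access to the remote data center is already configured; the paths and host names are hypothetical.

```python
# A sketch of pushing one backup set to several destinations.
# Paths and hosts are hypothetical; assumes rsync is installed and
# SSH access to the remote data center is already configured.
import subprocess

SOURCE = "/srv/backup/current/"       # hypothetical staging directory
DESTINATIONS = [
    "backup@dc-remote:/vol/mirror/",  # data center more than 10 miles away
    "/mnt/external-drive/mirror/",    # weatherproofed portable drive
]

for dest in DESTINATIONS:
    # --archive preserves permissions and timestamps; --delete keeps
    # each mirror an exact copy of the source.
    result = subprocess.run(
        ["rsync", "--archive", "--delete", SOURCE, dest],
        capture_output=True, text=True,
    )
    status = "ok" if result.returncode == 0 else f"failed: {result.stderr.strip()}"
    print(f"{dest} -> {status}")
```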

But Gwinn stops short of sending his data out of state.

“Are you going to spend $15,000 to $20,000 to move data to an area that you’ll probably never have to go to recover the data? That’s a knee-jerk reaction. You look at the resources available to you and what money you have available to mitigate your risk. For me, it’s the value versus the risk assessment to geographically diversify my storage,” he says.

People Power

Experts warn that many IT groups fail to address a critical part of disaster recovery: loss of personnel. “You may be able to recover all your data, but not all your people. For companies with a base that is geographically narrow, that is a tremendous risk,” Antonopoulos says.

He adds that a flu outbreak could wreak just as much havoc as a hurricane if personnel issues aren’t addressed in disaster recovery plans.

Antonopoulos recommends that companies with a geographically distributed base cross-train their staffs on disaster recovery so they can get the network up and running from an alternate location.

“For companies that have adopted encryption, key escrow is a key component of this strategy. Your key management system should have widely distributed access — it should not be dependent on a single person or a single region. There should be multiple copies of the keys,” he says.

Kershen agrees. “In New Orleans, there were companies that lost their data and their encryption keys and they were doing backflips to get their data restored.”

He encourages his clients during implementation and integration to “ensure there are a number of people who have authority and access passwords. We highly recommend they be from alternate data centers in alternate geographic locations.”
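
As a concrete illustration of that advice, the sketch below wraps one data-encryption key separately for several custodians, so any single surviving site can recover it on its own. It uses the Python cryptography package’s Fernet; the site names and keys are hypothetical assumptions, not a system either vendor describes.

```python
# A sketch of distributed key escrow: wrap one data-encryption key
# under several custodians' keys so any single surviving site can
# recover it. Site names are hypothetical; requires the
# `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

data_key = Fernet.generate_key()  # the key that actually protects the backups

# Each custodian, ideally in a different region, holds its own wrapping key.
custodians = {
    "dallas-dc": Fernet.generate_key(),
    "denver-dc": Fernet.generate_key(),
    "atlanta-dc": Fernet.generate_key(),
}

# Escrow a separately wrapped copy of the data key for every site.
escrow = {site: Fernet(k).encrypt(data_key) for site, k in custodians.items()}

def recover(site: str, wrapping_key: bytes) -> bytes:
    """Any one custodian can unwrap the data key from its own copy."""
    return Fernet(wrapping_key).decrypt(escrow[site])

assert recover("denver-dc", custodians["denver-dc"]) == data_key
```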

“I think some of the best disaster recovery plans involve duplicate job training exercises across geographically diverse locations,” says Kershen. “The staff has the ability to cross-train and while it may not be their primary function, they could limp along in case of a disaster.”

Kershen says companies need only look at the financial impact of a disaster to see the viability of remapping their off-site storage and recovery plans.

“Every hour of downtime in the energy industry, for instance, could equate to a million dollars in lost revenue opportunity,” says Kershen. “Those numbers are scaring people into action. And the scope of disasters is much larger. We’re seeing whole cities being taken out of service with prolonged outages and that changes things. A stand-by generator isn’t going to cut it.”
