Don’t Let Backup Take a Backseat

With storage requirements moving into the tera-, peta- and exabyte ranges, companies need to refine their backup strategies to ensure availability of their growing data stores.

“Many data centers still perform backup operations the same way they have for decades – and it no longer works,” says Lauren Whitehouse, analyst at Enterprise Strategy Group in Milford, Mass. “It is time to re-evaluate capabilities and requirements, and reset expectations – just because a 4GB Oracle database could be recovered in three hours in 1987 doesn’t mean it can be today, when the database is 4TB.”

Accordingly, Enterprise IT Planet interviewed several storage experts and gleaned the following tips for improving your own backup and restoration procedures.

1. Plan in reverse – figure out what needs to be restored, and how fast, and then devise an appropriate backup plan.

“What people should do, but often don’t, is start with the recovery requirements,” says W. Curtis Preston, vice president of Framingham, Mass.-based storage consultancy GlassHouse Technologies, Inc.

This means determining the Recovery Time Objective (RTO) – how quickly the data must be restored – and the Recovery Point Objective (RPO) – how current the restored data must be – for each class of data, and then creating a plan that meets those requirements.
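
To make that concrete, here is a minimal Python sketch of recovery-driven planning. The data classes, thresholds and method names are illustrative assumptions, not recommendations from the experts quoted here:

```python
from dataclasses import dataclass

@dataclass
class RecoveryRequirement:
    name: str
    rto_hours: float  # Recovery Time Objective: max tolerable restore time
    rpo_hours: float  # Recovery Point Objective: max tolerable data loss

def choose_backup_method(req: RecoveryRequirement) -> str:
    """Work backwards from RTO/RPO to a backup approach.
    Thresholds are hypothetical and would vary by site."""
    if req.rpo_hours < 1:
        return "continuous replication to a second site"
    if req.rto_hours < 4:
        return "disk-based backup with frequent snapshots"
    return "nightly backup to disk, staged to tape"

for req in (
    RecoveryRequirement("order database", rto_hours=1, rpo_hours=0.25),
    RecoveryRequirement("file shares", rto_hours=24, rpo_hours=24),
):
    print(f"{req.name}: {choose_backup_method(req)}")
```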

2. Save files to disk before migrating them to tape.

“Disk staging makes a huge difference, shrinking backup windows by as much as three quarters,” says Ramon Kagan, Manager of UNIX services at York University in Toronto. “We are able to do backups much faster from the server standpoint and then cycle it to tape during the day, saving people and servers a lot of time.”
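
A disk-staging (disk-to-disk-to-tape) job boils down to two phases, sketched below in Python. The staging path and tape device are assumptions; a real deployment would use the backup software’s own staging feature:

```python
import shutil
import subprocess
from pathlib import Path

STAGING = Path("/backup/staging")  # fast disk pool (hypothetical path)
TAPE_DEVICE = "/dev/nst0"          # non-rewinding tape device (assumed)

def stage_to_disk(source: Path) -> Path:
    """Phase 1 (backup window): copy to disk quickly to keep the window short."""
    dest = STAGING / source.name
    shutil.copytree(source, dest, dirs_exist_ok=True)
    return dest

def migrate_to_tape(staged: Path) -> None:
    """Phase 2 (during the day): stream the staged copy to tape at leisure."""
    subprocess.run(
        ["tar", "-cf", TAPE_DEVICE, "-C", str(staged.parent), staged.name],
        check=True,
    )

staged = stage_to_disk(Path("/srv/projects"))
migrate_to_tape(staged)  # typically run later from a daytime cron job
```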

3. Eliminate excess – do you need to store daily copies of a file that hasn’t changed in six months, or personal copies of an email the CEO sent to all employees? Deduplicating files reduces the amount of storage needed and speeds backup times.

“We have commonly seen 20-to-1 capacity reduction using data de-duplication,” says Whitehouse.
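
Commercial deduplication works at the block or segment level; the whole-file Python sketch below only illustrates the principle, grouping identical files by a content hash so each unique copy is stored once (the mail-store path is a made-up example):

```python
import hashlib
from pathlib import Path

def dedup_index(root: Path) -> dict[str, list[Path]]:
    """Group files by the SHA-256 of their contents; duplicates share a key."""
    index: dict[str, list[Path]] = {}
    for path in root.rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            index.setdefault(digest, []).append(path)
    return index

index = dedup_index(Path("/srv/mail"))  # hypothetical mail store
total = sum(len(paths) for paths in index.values())
unique = max(len(index), 1)
print(f"{total} files, {unique} unique -> {total / unique:.1f}:1 reduction")
```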

4. Store backups outside the disaster impact zone. At a minimum, backup tapes should be stored off site. Better yet, mirror all data to a disaster recovery facility far enough away that it remains online when the flood, hurricane, earthquake or blackout brings down the primary data center.
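
For the mirroring half of this tip, a scheduled incremental push to the remote facility is a common pattern. A minimal sketch using rsync over SSH; the host name and paths are placeholders:

```python
import subprocess

DR_HOST = "dr-site.example.com"  # hypothetical disaster recovery host

def mirror_offsite(local_dir: str, remote_dir: str) -> None:
    """Push an incremental mirror to the DR facility over SSH."""
    subprocess.run(
        ["rsync", "-az", "--delete", local_dir, f"backup@{DR_HOST}:{remote_dir}"],
        check=True,
    )

mirror_offsite("/srv/data/", "/mirrors/primary/")  # e.g., from an hourly cron job
```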

5. Track down and eliminate network bottlenecks, which slow down both backup and restoration. This is a particular issue with server virtualization, where multiple virtual servers share the same network interface card and network connection.

“Make sure that you walk through the whole chain from client, to network, to server, to tape drive to identify bottlenecks,” says Preston. “You may be surprised to find that the bottleneck is Gigabit Ethernet to the tape drive. Tape drives are often too fast for the network interface.”
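
Walking the chain means measuring throughput one stage at a time and comparing the numbers; the slowest stage is the bottleneck. A rough Python sketch of one such measurement (the test file path and sizes are placeholders):

```python
import time

def measure_mb_per_s(label: str, read_chunk, limit_mb: float = 512) -> float:
    """Time how fast read_chunk() delivers data, up to limit_mb megabytes."""
    done_mb, start = 0.0, time.monotonic()
    while done_mb < limit_mb:
        chunk = read_chunk()
        if not chunk:  # source exhausted early
            break
        done_mb += len(chunk) / (1024 * 1024)
    rate = done_mb / max(time.monotonic() - start, 1e-9)
    print(f"{label}: {rate:.0f} MB/s")
    return rate

# One stage of the chain: the client's local disk. Comparable tests would
# cover the network hop and the tape drive; compare the resulting rates.
with open("/srv/data/large_file.bin", "rb") as f:  # hypothetical test file
    measure_mb_per_s("client disk read", lambda: f.read(1 << 20))
```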

6. Minimize the number of backup products. There can be advantages to using a best-of-breed product for a particular type of server, but make sure those advantages outweigh the cost and time of supporting multiple products.

7. Use multiple layers of protection, where appropriate.

“Depending on the business value, time sensitivity, and criticality of the data involved, we apply different backup methods,” says Dan Funchion, senior manager of IT infrastructure/operations for SunGard Availability Services in Wayne, Pa., who is responsible for backing up or replicating 30TB of data daily. “In many cases we will implement multiple solutions for the same data sets (for example, remote replication combined with tape backup).”
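
One way to keep such layering explicit is a policy table that lists every method applied to each data class. The classes and methods below are made-up examples, not SunGard’s actual policies:

```python
# Hypothetical policy table: each data class gets every method listed,
# so the layers cover for one another if one fails.
PROTECTION_LAYERS = {
    "customer transactions": ["remote replication", "nightly tape backup"],
    "engineering file shares": ["disk snapshots", "weekly tape backup"],
    "scratch/build output": [],  # re-creatable; not worth protecting
}

def methods_for(data_class: str) -> list[str]:
    return PROTECTION_LAYERS.get(data_class, ["nightly tape backup"])

for cls in PROTECTION_LAYERS:
    print(f"{cls}: {', '.join(methods_for(cls)) or 'none'}")
```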

8. Store a copy of the recovery plan with the backup data. Particularly when there is a major disaster, those who normally handle backup/restoration may not be available. Storing a copy of the plan with the tapes allows someone else to take the necessary steps.
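
This can even be automated so the copy of the plan never goes stale. A short Python sketch (all paths hypothetical) that drops the current plan into each backup set:

```python
import shutil
from pathlib import Path

PLAN = Path("/docs/recovery_plan.pdf")  # hypothetical plan document

def include_plan(backup_set: Path) -> None:
    """Copy the current recovery plan into the backup set, so whoever
    picks up the tapes also gets the instructions for using them."""
    shutil.copy2(PLAN, backup_set / PLAN.name)

include_plan(Path("/backup/staging/nightly"))  # run as part of each backup job
```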

9. Test the restoration process before it is needed, and test it on the actual equipment that will be used. This is particularly critical when you are planning to use a disaster recovery site that contains different servers or a different network architecture. When restoring a multi-tiered service, it does no good to restore only one part. And if the application expects to find a piece of code or a file on a particular server in order to complete an operation, what happens if it’s on a different server? So test the entire system, not just whether the files restore properly.
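
Automation can cover at least the file-verification piece of such a test. The sketch below assumes a hypothetical site restore command (“restore-tool”) and only compares the restored tree against the source; the application-level checks this tip calls for would still be run on top of it:

```python
import filecmp
import subprocess
from pathlib import Path

def test_restore(restore_cmd: list[str], source: Path, restore_to: Path) -> bool:
    """Run the site's real restore tooling, then verify the trees match.
    Note: dircmp uses a shallow (metadata-based) file comparison."""
    subprocess.run(restore_cmd, check=True)
    cmp = filecmp.dircmp(source, restore_to)
    ok = not (cmp.left_only or cmp.right_only or cmp.diff_files)
    print("restore OK" if ok else f"mismatch: {cmp.diff_files}")
    return ok

# "restore-tool" stands in for whatever backup product the site uses.
test_restore(
    ["restore-tool", "--set", "nightly", "--to", "/restore/test"],
    Path("/srv/data"),
    Path("/restore/test"),
)
```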

10. Set up routine file restoration as a help desk function. It doesn’t take a high level of expertise to restore someone’s accidentally deleted Word file.

“It is important to push as much of the restoration function as possible to the help desk so storage professionals can work on improving levels of service,” says Robert L. Stevenson, managing director, storage, for TheInfoPro, Inc. in New York City. “That will give you more flexibility to handle growth and address areas where there are inadequate backups.”
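
The kind of single-file restore that can safely be handed to the help desk might look like the Python sketch below, which assumes nightly snapshots are kept in a dated directory tree (all paths and names hypothetical):

```python
import shutil
from pathlib import Path

SNAPSHOTS = Path("/backup/snapshots")  # hypothetical snapshot tree

def helpdesk_restore(user: str, filename: str, date: str) -> Path:
    """Single-file restore simple enough for the help desk: find the
    user's file in a dated snapshot and copy it back under a new name."""
    src = SNAPSHOTS / date / "home" / user / filename
    dest = Path("/home") / user / f"restored_{filename}"
    shutil.copy2(src, dest)
    return dest

print(helpdesk_restore("asmith", "budget.doc", "2007-06-01"))
```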

This article was first published on EnterpriseITPlanet.com.
