You don’t need a Nobel Prize in Economics to realize that the world’s economies are facing a slowdown or recession head-on. And it doesn’t take a genius, or a large leap of logic, to work out that your data center’s budget is likely to face a cut.
Whether you have an inkling a cut is coming or haven’t yet been warned of one, establishing a course of action to cut costs now would be a wise move, according to Ken McGee, a vice president and Fellow at Gartner.
As far back as last year, Gartner was warning of the need to prepare for a recession. Since then, things have obviously changed for the worse. “Since that time, the factors we based the research on — such as GDP growth projections and expert predictions for the likelihood of a recession — have worsened to a degree that convinces us it is now time for clients to prepare for cutting IT costs,” McGee said in January.
McGee recommends dedicating top staff exclusively to investigating IT cost-cutting measures and appointing a senior auditor or accountant to the team to provide an official record of its performance. He also recommends reporting progress to senior managers weekly and appointing a legal liaison to help work through issues that may crop up in connection with maintenance and other contracts or penalty clauses. This ensures cost-cutting measures don’t result in increased legal liabilities for your company.
So, having established that now is the time to take measures to help the data center weather a recession, the question is: Where should you look to cut costs?
Cost-Cutting Sweet Spots
One of the most significant data center costs is electricity, both for powering the computing equipment and for the systems that cool it. Virtualization can play a key role in reducing overall electricity consumption because it reduces the number of physical boxes that must be powered and cooled.
A single physical server hosting a number of virtual machines can replace two, three or sometimes many more underutilized physical servers. A physical server running at 80 percent utilization draws more electricity than one running at 20 percent, but it is still far more energy-efficient than four servers at 20 percent, along with the accompanying four disk drives, four inefficient power supplies, and so on.
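As a rough illustration of the scale involved, here is a back-of-the-envelope calculation. All the figures (wattages, electricity price and the cooling overhead factor) are assumptions chosen for the example, not measurements from any particular hardware:

```python
# Illustrative consolidation savings. Assumed figures: a server drawing
# ~200 W at 20% utilization and ~300 W at 80%, power at $0.10/kWh, and
# a PUE of 2.0 (every watt of IT load costs another watt of cooling).

HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10   # assumed electricity price, USD
PUE = 2.0              # assumed cooling/power overhead factor

def annual_cost(watts: float) -> float:
    """Annual electricity cost for a given IT load, cooling included."""
    return watts * PUE * HOURS_PER_YEAR / 1000 * PRICE_PER_KWH

before = 4 * annual_cost(200)   # four underutilized servers at ~20% load
after = annual_cost(300)        # one virtualization host at ~80% load

print(f"Four servers at 20% load: ${before:,.0f}/year")
print(f"One host at 80% load:     ${after:,.0f}/year")
print(f"Estimated saving:         ${before - after:,.0f}/year")
```

Even with these conservative assumptions, consolidating four lightly loaded servers onto one host cuts the electricity bill for those workloads by more than half.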
Virtualization also shrinks costs by reducing the amount of hardware that must be replaced: If you operate fewer servers, you have fewer to replace when they reach the end of their lives. And thanks to advanced virtual machine management software from the likes of Microsoft and VMware, the time (and thus cost) spent setting up and configuring virtual machines can be far less than that spent managing comparable physical servers.
And virtualization need not be restricted to servers. What’s true of servers is true of storage systems, too: Storage virtualization can cut costs by reducing over-provisioning, which means fewer disks and other storage media must be bought, powered, cooled and eventually replaced.
This leads to the concept of automation. Full data center automation can require a vast investment, but it also promises significant cost savings. In a recession, it’s prudent to look at initiatives that carry a modest price tag and offer a relatively fast payback period. These include patch management and security alerting (which in turn may enable lower-cost remote working practices) as well as labor-intensive tasks, such as password resets. Voice authentication systems, for example, can dramatically reduce password reset costs in organizations where large numbers of employees call the IT help desk with password problems: Such systems automatically authenticate the user and reset the relevant passwords.
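To make the payback-period point concrete, here is a minimal sketch. The reset volume, per-ticket cost, automation rate and system price are all hypothetical; plug in your own help desk numbers:

```python
# Back-of-the-envelope payback estimate for an automated password reset
# system (e.g., voice authentication). Every figure is an assumption
# for illustration, not a vendor quote.

resets_per_month = 800        # assumed help desk password reset volume
cost_per_manual_reset = 20.0  # assumed fully loaded cost per ticket, USD
automation_rate = 0.85        # assumed share of resets handled automatically
system_cost = 50_000.0        # assumed purchase and deployment cost, USD

monthly_saving = resets_per_month * automation_rate * cost_per_manual_reset
payback_months = system_cost / monthly_saving

print(f"Monthly saving: ${monthly_saving:,.0f}")
print(f"Payback period: {payback_months:.1f} months")
```

Under these assumed numbers the system pays for itself in under four months; a real evaluation would also factor in support contracts and integration effort.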
Any automation software worth its salt has an added benefit: When it reduces the number of man-hours spent dealing with a task, managers gain the flexibility to choose between reducing data center staffing costs and reassigning employees to other tasks, including implementing further cost-cutting systems, thereby creating a virtuous circle.
A more straightforward, but contentious, strategy is application consolidation. Clearly, the more applications your data center runs, the more complex and expensive they are to manage. Consolidating on as few applications as possible therefore makes good financial sense, assuming, of course, the remaining apps are up to the required task. If these are open source applications, which in practice probably means Linux-based ones, there is potential for significant savings on operating system and application license fees and client access licenses (CALs).
Bear in mind that significant support costs will remain, and Microsoft and other large vendors make the case that the total cost of ownership of open source software is no lower than that of closed source software. At the very least, though, you may be able to use open source alternatives as bargaining chips to get a better deal from your existing closed source vendors.
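To get a feel for the numbers, the sketch below compares annual license spending before and after consolidation. The application counts and prices are purely illustrative assumptions, not vendor list prices:

```python
# Illustrative license savings from consolidating overlapping
# applications. All counts and prices below are assumptions made up
# for the arithmetic; substitute your own contract figures.

users = 500
apps_before, apps_after = 4, 2   # assumed overlapping apps consolidated to two
server_license = 3_000.0         # assumed per-app server license, USD/year
cal_price = 40.0                 # assumed per-user CAL, USD/year

def annual_license_cost(apps: int) -> float:
    """Yearly spend: one server license plus one CAL per user, per app."""
    return apps * (server_license + users * cal_price)

saving = annual_license_cost(apps_before) - annual_license_cost(apps_after)
print(f"Annual license saving from consolidation: ${saving:,.0f}")
```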
As well as looking at changes that can be made at the micro level, it’s useful to look at the macro level: how your data center operations as a whole are structured. For example, you may have set yourself a target of “five nines” (99.999 percent) for system availability, but it’s worth evaluating whether this is really necessary. How much would it reduce your costs to ease this target to 99.9 percent? And what impact would that have on the profitability of the business as a whole?
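The arithmetic behind the nines is worth spelling out, since each extra nine buys surprisingly little downtime. This short calculation converts availability targets into the downtime each one allows per year:

```python
# Convert availability targets into allowable downtime per year.
# Pure arithmetic: useful when weighing the cost of "five nines"
# against a relaxed 99.9 percent target.

MINUTES_PER_YEAR = 365 * 24 * 60

for target in (0.99999, 0.9999, 0.999):
    downtime_min = (1 - target) * MINUTES_PER_YEAR
    print(f"{target * 100:.3f}% availability allows "
          f"{downtime_min:.1f} minutes ({downtime_min / 60:.2f} hours) "
          f"of downtime per year")
```

Easing the target from 99.999 to 99.9 percent moves the allowance from roughly five minutes to almost nine hours a year, which may let you drop an entire tier of redundant infrastructure.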
If you can identify only a few applications that genuinely require 99.999 percent uptime, consider whether your data center is the best place from which to provide them. A specialized application service provider may be able to deliver that level of reliability at lower cost for a fixed, per-user fee, with compensation if it falls below the agreed service level. It certainly doesn’t make sense to provide more redundancy than you need: That’s simply pouring money down the drain.
Also consider whether your data center is operating longer hours than necessary. Thanks to the power of remote management tools, you may find it makes more sense financially to leave it unmanned at certain times, while having a number of staff “on call” to sort out problems remotely, should the need arise.
Finally, it’s worth mentioning best-practice IT management frameworks such as the IT Infrastructure Library (ITIL) and the Microsoft Operations Framework (MOF). Aligning operations with these frameworks is a medium- to long-term project, but they are designed to ensure that all IT services, including those associated with the data center, are delivered as efficiently as possible.
If you can achieve that, you are a long way down the path to ensuring your data center can endure any slowdown the economy can throw at it — not just this time, but the next time, and the time after that.
This article was first published on ServerWatch.com.