
Data Center Users Speak Out


There’s no denying that data centers have changed in the past few years. Dual-core and quad-core processors have arrived, along with new generations of blades and racks. IS managers, however, are immersed in the havoc the latest wave of high-density servers is wreaking on an already struggling infrastructure.

Michael May, data center lead associate for the Midwest Independent Transmission System Operator (Midwest ISO) in Carmel, Indiana, has witnessed an ever-increasing need to add servers to satisfy business needs. He manages three facilities totaling around 10,000 square feet, occupied mainly by Windows servers from HP and Sun.

“The added servers are going into the existing spaces, and incoming blades are causing major power and cooling concerns about being able to continue supporting the needs of the business,” says May.

To cope, Midwest ISO has had to add six room power distribution units (PDUs) and four cooling units, replace a UPS system, and remediate or relocate more than 20 racks, all in the past two years. Fortunately, the company managed to accomplish all of that while maintaining around-the-clock (24/365) operations, and in most cases without any outages on existing equipment.

Despite that, May worries that something will get shut down inadvertently. So far, that has been averted thanks to dead-on documentation and attention to detail.

“The plus for the business is that it can collect data faster and more accurately with the additional servers,” says May. “But eventually, we will run out of the big three — space, power and cooling.”

How Dense Can You Get?

Density, with its power and cooling repercussions, is likewise the biggest concern of Donna Manley, IT senior director of Information Systems & Computing (ISC) at the University of Pennsylvania in Philadelphia. She manages a decentralized IT infrastructure, with numerous data centers serving the university’s different schools and centers. ISC runs the central data center, which serves the university as a whole. This 6,500-square-foot facility houses approximately 300 midrange servers, an IBM mainframe and an IBM SAN.

“The biggest change I’ve noticed in recent years is the increased capacity and processing power that manufacturers supply in a smaller footprint,” says Manley. “Although this presents an opportunity to house more equipment in a smaller space, it is a constant challenge to cool the environment and power the units.”

Manley’s current headache: analyzing the requirements necessary to sustain fully redundant UPS systems in light of consistent increases in power usage. She’s in the midst of determining a strategic course of action to resolve this issue. She believes water-cooled server systems may fundamentally change the data center in the next couple of years.

“If vendors are committed to take us in the direction of water-cooled systems, major infrastructure and facilities-based changes will present logistical and budgetary challenges to us,” says Manley. “In addition, I expect an ongoing escalation towards cost containment or cost avoidance.”

Scaling Out Goes Virtual

In the ongoing debate over scale-out vs. scale-up, the balance seems to be tipping in favor of the latter. Larger boxes containing multiple virtual systems from the likes of IBM, Unisys, HP and Sun appear to be gaining favor.

But Dell Chief Technology Officer (CTO) Kevin Kettler remains a firm believer that scale-out architectures, built on blades and racks of small-form-factor servers, will continue to dominate.

“Consolidation through virtualization is certainly an ongoing trend,” says Kettler. “But scaling out is a better way to achieve it.”

He is practicing what he preaches at Dell’s main data center in Round Rock, Texas. And that means tackling the associated heat-density issues head-on.

“Power and thermal demands are changing drastically within server rooms,” he says. “Any IT project these days has to consider heat density as an important factor.”

Massive utility bills motivated Kettler to analyze the specifics of data center energy consumption in a drive for maximum efficiency in all areas — clients, hardware, software, infrastructure and management. He found that only 41 percent of total power went to IT equipment, with servers being the principal load. Another 28 percent was consumed in power distribution, and 31 percent went to cooling.
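Those percentages translate into a rough facility-efficiency figure. The sketch below uses only the 41/28/31 split reported above; the PUE (power usage effectiveness) framing is layered on as an assumption, not something taken from Dell’s analysis.

```python
# Rough illustration of what the 41/28/31 breakdown implies for facility
# efficiency. Only the percentages come from the article; the PUE framing
# is an assumption layered on top.

it_share = 0.41            # servers, storage and network gear
distribution_share = 0.28  # UPS and power-distribution losses
cooling_share = 0.31       # CRAC units, chillers and fans

total = it_share + distribution_share + cooling_share   # sums to 1.0, the whole utility bill
pue = total / it_share                                  # power usage effectiveness

print(f"Implied PUE: {pue:.2f}")  # ~2.44: roughly $1.44 of overhead per $1.00 powering IT gear
```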

CPUs are, of course, the biggest energy hogs within servers. But Kettler’s analysis gave him an interesting perspective on the total energy picture.

“We realized that the processor takes up only 6 percent of the total power in our data center,” he says. “Power management is clearly not just about AMD vs. Intel.”
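Some back-of-the-envelope arithmetic shows why that 6 percent figure reframes the debate. The sketch below combines it with the 41 percent IT-equipment share quoted earlier; everything else in it is illustrative.

```python
# Back-of-the-envelope view of the 6 percent figure, combined with the
# 41 percent IT-equipment share quoted earlier. Only those two numbers
# come from the article; the interpretation is illustrative.

cpu_share_of_total = 0.06   # processors, per Kettler
it_share_of_total = 0.41    # all IT equipment (servers, storage, network)

cpu_share_of_it = cpu_share_of_total / it_share_of_total
print(f"CPUs as a share of IT-equipment power: {cpu_share_of_it:.0%}")  # ~15%

# The other ~85 percent of IT power goes to memory, disks, fans, power
# supplies and other components, which is why processor choice alone
# cannot fix the utility bill.
```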

Weathering the Storm

Both analysts and users confirm the degree to which power challenges now rule the roost in data centers. According to the research firm Gartner, half of all current data centers will lack adequate power and cooling capacity by 2008. That’s causing some organizations to cut back on expansion — adding more servers would require a complete overhaul of the entire power and cooling infrastructure.

“Power is becoming more of a concern,” says Dan Agronow, CTO of The Weather Channel Interactive in Atlanta. “We could put way more servers physically in a cabinet than we have power for those servers.”

He points out another facet of this problem: Data center virtualization and compaction are making it necessary for some organizations to upgrade to 10 Gigabit Ethernet (10GbE) to avoid network latency issues. The backbone needs more bandwidth to handle the traffic of a much larger aggregate number of servers.
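To see how quickly consolidated traffic outgrows a 1Gb backbone, consider a purely hypothetical calculation; the server count and per-server traffic below are assumptions, not figures from The Weather Channel.

```python
# Illustrative only: the host count and per-host traffic are assumptions,
# not numbers from the article. The point is how consolidation inflates
# the aggregate bandwidth a backbone link must carry.

consolidated_hosts = 40      # hypothetical virtualized servers behind one uplink
avg_mbps_per_host = 200      # hypothetical average traffic per host

aggregate_gbps = consolidated_hosts * avg_mbps_per_host / 1000
print(f"Aggregate demand: {aggregate_gbps:.1f} Gbps")  # 8.0 Gbps

# Far beyond a single 1GbE link, but comfortably within a 10GbE backbone.
```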

The good news is that the price is right. Two or three years ago, a 10GbE module ran about $20,000. Today, a module costs about $4,000, and some are as low as $1,000. As a result, a growing number of organizations can take advantage of the network speed improvements 10GbE offers.

The Weather Channel, for example, has been working with Verizon to beef up its network to accommodate the volume of data that must be replicated and synchronized between facilities.

“Before, we had 1Gb links, which limited the amount of data we could move between facilities,” says Agronow. “Now, we are connected by a 10Gb ring, which gives us plenty of room to grow.”

This article was first published on ServerWatch.com
