
Power and Cooling Savings From the Ground Up


Like everything else in life, there is a right way to go about reducing energy costs in the data center and a multitude of wrong ways.

How about attacking that inefficient Computer Room AC (CRAC) system first? Wrong. Reworking your power infrastructure with better PDUs or voltage control? Wrong. Removing all the underfloor cabling that is obviously severely restricting airflow? Wrong.

While these actions are fine in and of themselves, you get far less bang for your buck if you tackle them in the wrong sequence.

This data comes from an informative presentation I heard this week by Jack Pouchet, director of energy initiatives at Emerson Network Power. He separated out all the elements involved and laid them out in sequence. And the good news for data centers is that the individual server components, along with the servers themselves (and other IT equipment), are the top priority when it comes to energy savings.

Here’s the logic: Yes, the power and cooling infrastructure represents about 50 percent of total power consumption, and the IT equipment (servers, storage and communication gear) takes up the other half. But every watt saved at the server component level results in cumulative savings of 2.84 watts by the time it works its way back through the entire power and cooling infrastructure to where power enters the building. If you chase savings purely in the cooling or power equipment, you don’t realize the same cascading effect.
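To make the cascade concrete, here is a minimal sketch of the arithmetic. The stage efficiencies below are assumptions chosen purely for illustration, not Emerson’s published breakdown; the point is only that losses multiply through the chain.

# Illustrative cascade arithmetic: a watt delivered to the chip must be
# pushed through every lossy stage upstream, plus the cooling load it adds.
# All efficiency figures below are assumptions for illustration only.
STAGE_EFFICIENCY = {
    "dc_dc_conversion": 0.85,    # assumed: on-board voltage regulators
    "ac_dc_power_supply": 0.80,  # assumed: server power supply
    "power_distribution": 0.98,  # assumed: PDU and wiring losses
    "ups": 0.90,                 # assumed: UPS conversion losses
}
COOLING_WATTS_PER_IT_WATT = 0.7  # assumed cooling overhead per IT watt

def watts_at_building_entrance(watts_at_chip: float) -> float:
    """Grid watts needed to deliver watts_at_chip to the processor."""
    watts = watts_at_chip
    for efficiency in STAGE_EFFICIENCY.values():
        watts /= efficiency  # each upstream stage must supply more
    return watts * (1 + COOLING_WATTS_PER_IT_WATT)

print(watts_at_building_entrance(1.0))  # ~2.83, in line with the 2.84 figure

Run in reverse, the same chain explains why a watt saved at the processor is worth nearly three at the meter.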

The best strategy, then, is to first look at reducing losses at the component level, with the processor being the most important element. Other components, such as the disks and fans, can also bring about small savings. Slower disks draw less power, for example, so if you don’t really need super-fast disks, don’t order them.

The Emerson model is based on a 5,000 square foot data center containing 210 racks that draw an average of 2.8 kW each. The biggest single item of savings comes from installing lower-power processors. These consume around 10 percent less power, and the saving cascades upstream all the way to the grid connection. Even though such processors typically cost more, they pay for themselves rapidly in energy savings.

Next comes high-efficiency power supplies for servers, which can account for 11 percent in savings. Typical server power supplies, after all, are oversized to accommodate the maximum server configuration, even though most servers ship at much lower configurations (i.e., if you don’t buy fully loaded servers, you are overpowering them). Oversized power supplies spend their lives running well below their efficient operating range, so losses are higher.

Another surprisingly good move is to turn on the server power management features your vendor has been supplying. This saves about 8 percent in total power bills, as idled servers operate about 45 percent below full power.
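Where could that 8 percent come from? Here is a rough sketch. The 45 percent idle reduction is from the presentation, but the idle-time fraction is purely an assumed figure:

# Server-level effect of enabling power management.
IDLE_REDUCTION = 0.45  # idled servers run 45% below full power (from the article)
idle_fraction = 0.40   # assumption: servers sit idle 40% of the time

avg_draw = idle_fraction * (1 - IDLE_REDUCTION) + (1 - idle_fraction) * 1.0
print(f"average draw: {avg_draw:.0%} of full power")  # 82%
print(f"server-level saving: {1 - avg_draw:.0%}")     # 18%

Dilute that server-level saving across the whole facility (servers are only part of the total load) and a single-digit share of the overall bill, in the neighborhood the presentation cites, is plausible.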

With these three actions, then, you achieve 29 percent energy savings. And because they sit at the top of the food chain, the resulting lower loads mean you need less power and cooling downstream.

Those actions can be supplemented by virtualization and the use of blade servers. Virtualization saves approximately 8 percent in energy costs according to most estimates. Blades, on the other hand, reduce the electricity bill by around 1 percent.

Some claim blades and virtualization lead to out-of-control power and cooling. Unplanned, this can certainly be the case. But as part of an encompassing implementation, a high-density architecture substantially reduces power and cooling: because the space involved is far smaller, it is easier to get power and cooling to where they are needed and to control their effectiveness.

Now it is time to look at the power and cooling distribution side. A power distribution architecture that brings higher voltage (240V) to the server via power distribution units (PDUs) and uninterruptible power supplies (UPSes) can be responsible for an overall energy cut of 2 percent. Power supplies are 0.6 percent more efficient at 240V than at 208V; the use of more modern PDUs and UPSes adds the rest.

How about cooling? Various elements and strategies provide about 12 percent energy reduction. Best practices that eliminate leakage of cold air into hot aisles (and vice versa) and produce optimum airflow (such as removing cabling and other barriers from the underfloor space) can bring about a 1 percent gain.

Variable capacity cooling, though, can save 4 percent. The premise is that IT loads vary widely in their cooling and airflow requirements. Thus, it makes sense to match cooling capacity to the IT load, which eliminates overcooling and improves cooling efficiency by reducing fan speeds. A 20 percent drop in fan speed alone can cut the fan’s power consumption in half.
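That last claim tracks the fan affinity laws, under which fan power scales roughly with the cube of fan speed. A one-line check:

# Fan affinity law: fan power scales with roughly the cube of fan speed.
def fan_power_fraction(speed_fraction: float) -> float:
    return speed_fraction ** 3

print(fan_power_fraction(0.8))  # 0.512 -> a 20% slower fan draws about half the power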

High-density supplemental cooling does even better, at about 6 percent. The reason: most spaces have cooling overcapacity. There is more than enough cold air in the room, but it doesn’t get to where it is needed, so you end up with hot spots regardless of how high you crank up the CRAC units. Supplemental cooling sits in or above the rack and takes the cold air right to the blades or racks that need it most. It is also more efficient: it takes 30 percent less power to cool 1 kW of load using supplemental cooling than with a traditional chilled-water CRAC system.

Finally, monitoring cooling units can bring about a 1 percent drop in energy usage. Pre-set thresholds and a network of sensors allow CRAC units and supplemental boxes to cycle up or down according to need.

In the 5,000 square foot data center example above, an initial load of 1,127 kW was brought down to 585 kW using this approach. Just the change to low-power processors throughout the facility gives immediate savings of 40 kW at the servers, but that adds up to a total of 111 kW saved when you follow it all the way to the front door.
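Those figures hang together. A quick check, using only numbers quoted in the article:

# Sanity-checking the Emerson model's figures (all values from the article).
initial_load_kw = 1127
final_load_kw = 585
print(initial_load_kw - final_load_kw)  # 542 kW saved, roughly half the load

# Processor step: 40 kW saved at the servers becomes 111 kW at the meter,
# an effective multiplier of ~2.8, consistent with the 2.84 cited earlier.
print(111 / 40)  # 2.775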

Note that nothing here involves a change in the way the data center operates in terms of availability, performance and redundancy. Yet by applying this approach, a 5,000 square foot space with 210 racks operating at 2.8 kW per rack can be brought down to an 1,800 square foot space that contains only 60 racks running at 6.1 kW per rack. Instead of 350 tons of cooling capacity, you now need only 200 tons.
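The consolidation arithmetic, again using the article’s own figures:

# IT load before and after consolidation (figures from the article).
before_it_kw = 210 * 2.8  # 210 racks at 2.8 kW each in 5,000 sq ft
after_it_kw = 60 * 6.1    # 60 racks at 6.1 kW each in 1,800 sq ft
print(round(before_it_kw), round(after_it_kw))  # 588 366

Roughly a third of the floor space ends up carrying about 60 percent of the original IT load, which is why the cooling requirement drops from 350 tons to 200.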

Implementation of a higher density infrastructure, therefore, can help reduce energy consumption by about 50 percent.

“Even if you are one of those rare companies where energy efficiency is not your key concern, implementing these strategies will free up power, cooling and space,” said Pouchet.

This article was first published on ServerWatch.com.
