Like everything else in life, there is a right way to go about reducing energy costs in the data center and a multitude of wrong ways.
How about attacking that inefficient Computer Room AC (CRAC) system first? Wrong. Reworking your power infrastructure with better PDUs or voltage control? Wrong. Removing all the underfloor cabling that is obviously severely restricting airflow? Wrong.
While these actions are fine in and of themselves, you get far less bang for your buck if you tackle them in the wrong sequence.
This data comes from an informative presentation I heard this week by Jack Pouchet, director of energy initiatives at Emerson Network Power. He separated out all the elements involved and laid them out in sequence. The good news for data centers is that individual server components, along with the servers themselves (and other IT equipment), are the top priority when it comes to energy savings.
Here’s the logic: Yes, the power and cooling infrastructure represents about 50 percent of total power consumption and the IT equipment (servers, storage and communication gear) takes up the other half. But every watt saved at the server component level results in cumulative savings of 2.84 watts when it works its way back through the entire power and cooling infrastructure to where the power comes into the building. If you chase the savings based purely on cooling or power equipment savings, you don’t realize quite the same cascading effect.
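To make the multiplier concrete, here is a minimal sketch in Python (the 2.84 figure is Emerson's; the 25-watt saving in the example is purely illustrative):

```python
# Every watt saved at the server component avoids roughly 2.84 W of draw
# at the point where utility power enters the building (Emerson's figure).
CASCADE_MULTIPLIER = 2.84

def facility_savings_w(component_savings_w: float) -> float:
    """Translate a component-level saving into the facility-level saving."""
    return component_savings_w * CASCADE_MULTIPLIER

# Illustrative only: shaving 25 W off a single server's processor draw.
print(round(facility_savings_w(25), 2))  # -> 71.0 W avoided at the building entrance
```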
The best strategy, then, is to first look at reducing losses at the component level, with the processor being the most important element. Other components, such as disks and fans, can also bring about small savings. Slower disks draw less power, for example, so if you don’t really need super-fast disks, don’t order them.
The Emerson model is based on a 5,000 square foot data center containing 210 racks at an average of 2.8 kW per rack. The biggest single item of savings comes from installing lower-power processors. These draw around 10 percent less power, and the saving cascades upstream all the way to the grid connection. Even though such processors typically cost more, they pay for themselves rapidly in energy savings.
Next comes high-efficiency power supplies for servers, which can account for 11 percent in savings. Typical server power supplies, after all, are oversized to accommodate the maximum server configuration, even though most servers ship in much lower configurations (in other words, if you don’t buy fully loaded servers, you are overpowering them). Oversized power supplies naturally carry higher losses.
Another surprisingly good move is to turn on the server power management features your vendor already supplies. This saves about 8 percent on total power bills, as idled servers operate about 45 percent below full power.
With these three actions, then, you achieve 29 percent energy savings. And remember that by hitting these three items at the top of the food chain, you get the benefit of lower loads, so you need less power and cooling capacity downstream.
Those actions can be supplemented by virtualization and the use of blade servers. Virtualization saves approximately 8 percent in energy costs according to most estimates. Blades, on the other hand, reduce the electricity bill by around 1 percent.
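As a rough bookkeeping sketch, the IT-side percentages quoted so far can be tallied the way the presentation does, by simple addition (in practice each successive measure acts on an already-reduced base, so treat the running total as an approximation):

```python
# Percentages taken directly from the article; summed as the article sums them.
it_side_measures = {
    "low-power processors": 10,
    "high-efficiency server power supplies": 11,
    "server power management": 8,
    "virtualization": 8,
    "blade servers": 1,
}

running_total = 0
for measure, pct in it_side_measures.items():
    running_total += pct
    print(f"{measure:<40} +{pct:>2}%  (running total: {running_total}%)")

# The first three items alone account for the 29 percent cited above;
# all five together come to 38 percent before any power or cooling work.
```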
Some claim blades and virtualization lead to out-of-control power and cooling demands. Unplanned, this can certainly be the case. But as part of a well-planned implementation, a high-density architecture substantially reduces power and cooling requirements: because the space involved is far smaller, it is easier to get power and cooling to where they are needed and to control how effectively they are used.
Now it is time to look at the power and cooling distribution side. A power distribution architecture that brings higher voltage (240V) to the server via power distribution units (PDUs) and uninterruptible power supplies (UPSes) can be responsible for an overall energy cut of 2 percent. Power supplies are 0.6 percent more efficient at 240V than at 208V. Use of more modern PDUs and UPSes adds the rest.
How about cooling? Various elements and strategies provide about 12 percent energy reduction. Best practices that prevent cold air from leaking into hot air streams (and vice versa) and that produce optimum airflow (such as removing cabling and other barriers from under the floor) can bring about a 1 percent gain.
Variable capacity cooling, though, can save 4 percent. The premise is that IT loads vary widely in their cooling and airflow requirements. Thus, it makes sense to match cooling capacity with the IT load, which eliminates overcooling and improves cooling efficiency by reducing fan speeds and the like. A 20 percent drop in fan speed alone can cut the fan’s power consumption roughly in half.
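That "half the power" figure is consistent with the standard fan affinity laws, under which fan power scales roughly with the cube of fan speed. The presentation quotes only the end result, so the cube-law sketch below is the conventional explanation rather than Emerson's own math:

```python
# Fan affinity law approximation: power varies with the cube of fan speed.
def relative_fan_power(speed_fraction: float) -> float:
    """Fan power as a fraction of full-speed power, cube-law approximation."""
    return speed_fraction ** 3

# A 20 percent speed reduction leaves about 51 percent of the original power.
print(round(relative_fan_power(0.80), 3))  # -> 0.512
```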
High-density supplemental cooling does even better, at about 6 percent. The reason: most spaces have cooling overcapacity. They have more than enough cold air in the room, but it doesn’t get to where it is needed. You end up with hot spots regardless of how high you crank up the CRAC units. Supplemental cooling sits in or above the rack and takes the cold air right to the blades or racks that need it most. It is also more efficient: it takes 30 percent less power to cool 1 kW of load with supplemental cooling than with a traditional chilled-water CRAC system.
Finally, monitoring cooling units can bring about a 1 percent drop in energy usage. Pre-set thresholds and a network of sensors allow CRAC units and supplemental boxes to cycle up or down according to need.
In the 5,000 square foot data center example above, an initial load of 1,127 kW was brought down to 585 kW using this approach. Just the change to low-power processors throughout the facility gives immediate savings of 40 kW. But that adds up to a total of 111 kW saved when you follow it all the way to the front door.
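Following the example's own numbers through the cascade multiplier lands close to the quoted figure (the small gap between 113.6 kW and 111 kW presumably reflects rounding in Emerson's model):

```python
CASCADE_MULTIPLIER = 2.84          # Emerson's facility-level multiplier
processor_savings_kw = 40          # immediate saving from low-power processors

# Cascaded back to the building entrance.
print(round(processor_savings_kw * CASCADE_MULTIPLIER, 1))  # -> 113.6 kW (~111 kW cited)

# Overall reduction for the whole facility in this example.
print(f"{1 - 585 / 1127:.0%}")  # -> 48%, roughly the 50 percent claimed below
```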
Note that nothing here involves a change in the way the data center operates in terms of availability, performance and redundancy. Yet by applying this approach, a 5,000 square foot space with 210 racks operating at 2.8 kW per rack can be brought down to a 1,800 square foot space that contains only 60 racks running at 6.1 kW per rack. Instead of 350 tons of cooling capacity, you now need only 200 tons.
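In pure IT-load terms, the before-and-after rack figures given here work out as follows (the facility totals quoted above are higher because they also include power and cooling overhead):

```python
# Rack counts and per-rack densities from the example.
before_it_load_kw = 210 * 2.8   # original room: 5,000 sq ft
after_it_load_kw = 60 * 6.1     # consolidated room: 1,800 sq ft

print(round(before_it_load_kw), round(after_it_load_kw))   # -> 588 366
print(f"{1 - after_it_load_kw / before_it_load_kw:.0%}")   # -> 38% less IT load in about a third of the space
```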
Implementation of a higher density infrastructure, therefore, can help reduce energy consumption by about 50 percent.
“Even if you are one of those rare companies where energy efficiency is not your key concern, implementing these strategies will free up power, cooling and space,” said Pouchet.
This article was first published on ServerWatch.com.