Friday, March 29, 2024

Interop: What Are Your Datacenter Metrics?


NEW YORK — Datacenters. Every big enterprise has them, but how many
actually have solid metrics to determine the value of their datacenter?

In a session at Interop, Andreas Antonopoulos, senior vice president and founding partner at Nemertes Research, asked participants how they measure their datacenter metrics. He noted that the metrics people use tell you a lot about their
role or how they think of the datacenter. It can be thought of in terms of
servers, square footage, CPUs and the number of CPU cores. Yet there is
another key metric that must always be put into the equation and measured
against all the others — power.

“97 percent of people we surveyed had no clue about how much power they
used in terms of cost,” he said. “The problem is power was almost
free until five years ago, but that’s not the case anymore. Now power costs.”
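
To put that in concrete terms, a power bill can be roughed out from figures most datacenter managers already track: rack count, load per rack, a facility overhead multiplier and the local electricity rate. The sketch below is purely illustrative; every number in it is a hypothetical assumption, not a figure cited in the session.

```python
# Back-of-envelope estimate of a datacenter's annual power cost.
# Every figure here is a hypothetical assumption for illustration only.

racks = 200                # number of racks
kw_per_rack = 5.0          # average IT load per rack, in kilowatts
pue = 1.8                  # facility overhead multiplier (cooling, power distribution)
price_per_kwh = 0.12       # electricity price in dollars per kWh

facility_kw = racks * kw_per_rack * pue
annual_kwh = facility_kw * 24 * 365
annual_cost = annual_kwh * price_per_kwh

print(f"Facility draw:     {facility_kw:,.0f} kW")
print(f"Annual energy use: {annual_kwh:,.0f} kWh")
print(f"Annual power cost: ${annual_cost:,.0f}")
```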

So what’s a datacenter manager to do? Antonopoulos argued that we should
all follow Google, Yahoo and Microsoft (GYM) and build datacenters far away
from dense urban areas, which tend to have higher energy costs, though the
availability of IT staff can sometimes be an issue.

“Why is Google in South Carolina?” Antonopoulos asked. “Chinese T-shirts.
South Carolina used to be a world center for cotton mills, but China devastated
that industry, and so South Carolina has lots of power stations with spare
capacity.”

The other problem in measuring datacenter metrics is that most current
datacenters were built for peak levels of demand. It's a design that
Antonopoulos argued is based on an unpredictable target and makes the
datacenter inflexible and inefficient.

The solution is to move from a design architecture to a runtime
architecture using provisioning tools and virtualization where servers can
be repurposed and reallocated as needed.
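
In code terms, that shift might look something like the control loop sketched below: measure demand, compute how many servers are actually needed, and repurpose or release the rest. This is a minimal illustration, not anything demonstrated in the session; the capacity figure and the provision/retire actions are hypothetical placeholders for whatever tooling a given shop actually uses.

```python
import math

# Minimal sketch of "runtime architecture" provisioning: keep only as many
# servers active as current demand requires (plus headroom), rather than
# running a fixed, peak-sized pool around the clock.
# CAPACITY_PER_SERVER and the provision/retire print statements are
# hypothetical placeholders, not real tooling.

CAPACITY_PER_SERVER = 500      # requests/sec one server is assumed to handle
HEADROOM = 1.25                # keep a 25 percent buffer above measured demand
MIN_SERVERS = 2                # never scale below this floor

def servers_needed(current_rps: float) -> int:
    """Translate measured demand into a target number of active servers."""
    target = math.ceil(current_rps * HEADROOM / CAPACITY_PER_SERVER)
    return max(MIN_SERVERS, target)

def reconcile(active: int, current_rps: float) -> int:
    """Repurpose or release servers so the active pool tracks demand."""
    target = servers_needed(current_rps)
    if target > active:
        print(f"provisioning {target - active} server(s)")   # placeholder action
    elif target < active:
        print(f"retiring {active - target} server(s)")       # placeholder action
    return target

# Example: demand sampled over a day, in requests per second
active = MIN_SERVERS
for rps in [200, 1500, 4200, 9000, 3000, 800]:
    active = reconcile(active, rps)
    print(f"demand={rps} rps -> {active} active server(s)")
```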

Antonopoulos noted that he's seen datacenters waste power idling while waiting for peak loads to arrive. As a rough estimate, he said that by spooling up servers and resources as required, instead of merely provisioning for peak capacity, datacenters could cut their power requirements by as much as 30 percent.
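
How a figure in that range could arise is easy to see with a rough, purely hypothetical load profile; the fleet size, per-server draw and utilization curve below are illustrative assumptions, not data from the talk.

```python
# Illustrative arithmetic only: a hypothetical daily load profile showing how
# demand-driven provisioning can save on the order of 30 percent versus
# keeping a peak-sized fleet powered on around the clock.

peak_servers = 100
watts_per_server = 300          # assumed average draw per powered-on server

# Fraction of the peak-sized fleet actually needed in each 4-hour block of a day
utilization = [0.4, 0.4, 0.6, 0.9, 1.0, 0.9]

# Peak provisioning: every server powered on all 24 hours
peak_kwh = peak_servers * watts_per_server * 24 / 1000

# Runtime provisioning: only the needed share powered on in each block
runtime_kwh = sum(u * peak_servers * watts_per_server * 4 for u in utilization) / 1000

savings = 1 - runtime_kwh / peak_kwh
print(f"peak-provisioned: {peak_kwh:.0f} kWh/day")
print(f"demand-driven:    {runtime_kwh:.0f} kWh/day")
print(f"savings:          {savings:.0%}")
```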

Instead of tiered datacenter structures where Web, application and database servers all exist in separate silos, Antonopoulos strongly advocated a
more flexible approach there as well. He argued that network architecture
should be flat and simple: fewer layers, a simpler design, lower latency,
fewer hops and higher capacity.

The key is virtualization, which enables better utilization and the
ability to move servers around as needed. Virtualization is a particularly
hot topic this week as VMware, Cisco (NASDAQ: CSCO) and others roll out new initiatives.

This article was first published on InternetNews.com.
