Wednesday, April 21, 2021

Interop: What Are Your Datacenter Metrics?

NEW YORK — Datacenters. Every big enterprise has them, but how many
actually have solid metrics to determine the value of their datacenter?

In a session at Interop, Andreas Antonopoulos, senior vice president and founding partner at Nemertes Research, asked participants how they measure their datacenters. He noted that the metrics people choose say a lot about their
role and how they think about the datacenter: it can be measured in terms of
servers, square footage, CPUs or the number of CPU cores. Yet there is
another key metric that must always be put into the equation and measured
against all the others — power.
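As a rough illustration of what measuring against power looks like in practice, the Python sketch below normalizes the usual counts of servers, floor space and cores by power draw. The inventory figures are purely hypothetical and are not from the session.

```python
# Hypothetical inventory figures -- illustrative only, not from the session.
inventory = {
    "servers": 1200,
    "square_feet": 8000,
    "cpu_cores": 9600,
    "power_kw": 450,   # total IT power draw
}

# Express the traditional metrics against power, the metric Antonopoulos
# says everything else should be measured against.
servers_per_kw = inventory["servers"] / inventory["power_kw"]
cores_per_kw = inventory["cpu_cores"] / inventory["power_kw"]
watts_per_sq_ft = inventory["power_kw"] * 1000 / inventory["square_feet"]

print(f"Servers per kW:  {servers_per_kw:.1f}")
print(f"Cores per kW:    {cores_per_kw:.1f}")
print(f"Watts per sq ft: {watts_per_sq_ft:.0f}")
```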

“97 percent of people we surveyed had no clue about how much power they
used in terms of cost,” he said. “The problem is power was almost
free until five years ago, but that’s not the case anymore. Now power costs.”
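To see why that matters, a back-of-the-envelope calculation makes the point. The facility draw and utility rate below are assumptions chosen for illustration, not figures from the Nemertes survey.

```python
# Back-of-the-envelope annual power cost -- every input is an assumed value.
power_draw_kw = 450        # assumed average facility draw
hours_per_year = 24 * 365
rate_per_kwh = 0.10        # assumed utility rate, dollars per kWh

annual_kwh = power_draw_kw * hours_per_year
annual_cost = annual_kwh * rate_per_kwh

print(f"Annual consumption: {annual_kwh:,.0f} kWh")
print(f"Annual power bill:  ${annual_cost:,.0f}")
```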

So what’s a datacenter manager to do? Antonopoulos argued that we should
all follow Google, Yahoo and Microsoft (GYM) and build datacenters far away
from dense urban areas, which tend to have higher energy costs, though the
availability of IT staff can sometimes be an issue.

“Why is Google in South Carolina?” Antonopoulos asked. “Chinese T-shirts.
South Carolina used to be a world center for cotton mills, but China devastated
that industry, and so South Carolina has lots of power stations with spare
capacity.”

The other problem in measuring datacenter metrics is that most
current datacenters were built for peak levels of demand. It's a design
that Antonopoulos argued is built around unpredictable peaks and makes the
datacenter inflexible and inefficient.

The solution is to move from a design-time architecture to a runtime
architecture, using provisioning tools and virtualization so that servers can
be repurposed and reallocated as needed.
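In rough terms, a runtime architecture comes down to a control loop that sizes the active server pool to current demand rather than to the worst-case peak. The sketch below is a simplified illustration of that idea, with arbitrary capacity and headroom assumptions, not a description of any specific provisioning tool.

```python
import math

def hosts_needed(demand_units: int,
                 units_per_host: int = 100,
                 headroom: float = 0.2) -> int:
    """Hosts required to serve current demand plus a small safety margin.

    All parameters are arbitrary illustrative assumptions.
    """
    return max(math.ceil(demand_units * (1 + headroom) / units_per_host), 1)

# A peak-provisioned datacenter keeps enough hosts for the busiest hour
# running around the clock; a runtime architecture re-sizes the pool as
# demand moves during the day.
for hour, demand in [(2, 300), (10, 1800), (14, 2500), (22, 900)]:
    print(f"hour {hour:02d}: demand {demand:4d} -> {hosts_needed(demand):2d} hosts "
          f"(peak provisioning would keep {hosts_needed(2500)} running)")
```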

Antonopoulos said he has seen datacenters waste power idling while waiting for peak loads. As a rough estimate, he suggested that by spooling up servers and resources as required, instead of merely provisioning for peak capacity, datacenters could cut their power requirements by as much as 30 percent.
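That 30 percent figure is easy to sanity-check with a toy model. The hourly utilization curve and per-server draw below are assumed values, and the model assumes unneeded hosts are actually powered down rather than left idling.

```python
# Toy estimate of power saved by following demand instead of running at peak.
# The utilization profile and per-server wattage are assumed, and hosts that
# are not needed are assumed to be powered off entirely.
hourly_utilization = [0.50, 0.45, 0.40, 0.40, 0.45, 0.55, 0.65, 0.75,
                      0.85, 0.95, 1.00, 0.95, 0.90, 0.90, 0.85, 0.80,
                      0.80, 0.75, 0.70, 0.65, 0.60, 0.60, 0.55, 0.50]
peak_servers = 1000
watts_per_server = 400

peak_kwh = peak_servers * watts_per_server * 24 / 1000
runtime_kwh = sum(hourly_utilization) * peak_servers * watts_per_server / 1000

savings = 1 - runtime_kwh / peak_kwh
print(f"Peak-provisioned:  {peak_kwh:,.0f} kWh/day")
print(f"Demand-following:  {runtime_kwh:,.0f} kWh/day")
print(f"Estimated savings: {savings:.0%}")
```

With this assumed demand curve, following the load rather than the peak trims roughly 30 percent of the daily energy, in line with the rough estimate above.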

Instead of tiered datacenter structures where Web, application and database servers all exist in separate silos, Antonopoulos strongly advocates a
more flexible approach there as well. He argued that network architecture
should be flat and simple, with fewer layers, fewer hops, lower latency
and higher capacity.

The key is virtualization, which enables better utilization and the
ability to move servers around as needed. Virtualization is a particularly
hot topic this week as VMware, Cisco (NASDAQ: CSCO) and others roll out new initiatives.

This article was first published on InternetNews.com.
