Gartner: Too Many Chips Spoil the Server Broth


With the number of chips per server and the number of cores per chip both increasing, future generations of servers may end up with far more processing power than the software running on them can possibly utilize, even under virtualization, Gartner has found. The research firm issued a report on the issue earlier this week.

This doubling and doubling again of cores will push servers well past the peak levels for which software systems are engineered, including operating systems, middleware, virtualization tools and applications. The result could be a return to servers running at single-digit utilization levels.
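
To make that concrete, here is a back-of-the-envelope sketch (the socket and core counts are illustrative, not figures from the Gartner report) of why a fully serial application lands in single-digit utilization territory as core counts climb:

```python
# Illustrative only: best-case utilization for a fully serial application is
# one busy core out of everything the server exposes.
def peak_utilization(sockets: int, cores_per_chip: int, busy_threads: int = 1) -> float:
    total_cores = sockets * cores_per_chip
    return min(busy_threads, total_cores) / total_cores

for sockets, cores in [(2, 4), (4, 8), (8, 16)]:
    pct = peak_utilization(sockets, cores)
    print(f"{sockets} sockets x {cores} cores: {pct:.1%} peak utilization for serial code")
```

An eight-socket, 16-core-per-chip box running serial code tops out below one percent utilization by this arithmetic.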

The problem is that the computer industry is built on, and depends on, constant upgrades. It’s not like consumer electronics, where, for example, stereo technology remained unchanged for decades. The computer industry is driven by Moore’s Law, and that means Intel has to keep selling chips and OEMs have to keep selling servers.

“Their whole business model is driven on delivering more for the same price,” said Carl Claunch, vice president and distinguished analyst at Gartner. “They have to keep delivering on the refresh rate, and you have to be constantly delivering something new.”

Fast chips are also more glamorous than work on the surrounding subsystems, which have lagged behind processor performance. Memory and I/O buses are much slower than the CPU, creating bottlenecks even on a single PC. On a virtualized system, the problem can be even worse.
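
A rough way to see that memory bottleneck on any modern machine is to do the same amount of arithmetic on data that fits in cache versus data that must stream from main memory. This is only an illustrative sketch (it assumes NumPy is installed, and the exact ratio varies widely by machine):

```python
import time
import numpy as np

def repeated_sum(arr: np.ndarray, passes: int) -> float:
    """Sum the array `passes` times; total elements touched = len(arr) * passes."""
    total = 0.0
    for _ in range(passes):
        total += float(arr.sum())
    return total

small = np.ones(100_000)       # ~0.8 MB: comfortably cache-resident
large = np.ones(50_000_000)    # ~400 MB: must stream from main memory

t0 = time.perf_counter(); repeated_sum(small, 500); cache_time = time.perf_counter() - t0
t0 = time.perf_counter(); repeated_sum(large, 1);   ram_time   = time.perf_counter() - t0

# Same 50 million additions either way; the difference is where the data lives.
print(f"cache-resident: {cache_time:.3f}s   RAM-resident: {ram_time:.3f}s")
```

On typical hardware the RAM-resident pass is noticeably slower even though the arithmetic is identical, which is the CPU waiting on the memory bus.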

So with Intel (NASDAQ: INTC) flooring the gas pedal on new products, vendors like IBM (NYSE: IBM), Dell (NYSE: DELL) and HP (NYSE: HPQ) have no choice but to follow if they want revenue from product refresh sales. “When someone does take their foot off the gas it will be a train wreck, because so much is dependent on that rate of refresh and speed of improvement,” said Claunch.

Ed Turkel, manager of the Scalable Computing & Infrastructure unit at HP, seemed to concur. “Due to the more compute power available with multi-core systems, the applications may need to be re-implemented to fully take advantage of the compute power available to them,” he said in an e-mail to InternetNews.com.

“This issue is commonplace in high performance computing today, but we will start to see this as an issue in other segments. For instance, virtualization environments will also need to become more multi-core-aware, perhaps creating virtual machines that virtualize multiple cores into a single machine that hides this added complexity.”
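
Turkel’s point about re-implementing applications is essentially about splitting work across cores. As a hedged, generic sketch (not HP’s or any vendor’s actual code), the difference between a naive serial job and one that farms the same work out to one worker process per core looks roughly like this:

```python
import os
import time
from multiprocessing import Pool

def busy_work(n: int) -> int:
    """CPU-bound stand-in for a real workload."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    work = 20_000_000
    cores = os.cpu_count() or 1

    start = time.perf_counter()
    busy_work(work)                          # serial: one core busy, the rest idle
    serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool(processes=cores) as pool:      # one worker process per core
        pool.map(busy_work, [work // cores] * cores)
    parallel = time.perf_counter() - start

    print(f"{cores} cores -> serial {serial:.2f}s, parallel {parallel:.2f}s")
```

The parallel version keeps every core busy only because the job was restructured to be divisible, which is exactly the re-implementation work Turkel describes.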

Sockets, chips and cores, oh my!

Currently, the most popular server motherboards have two to four sockets, with dual-socket being the most common, according to Intel. Anything above four sockets is labeled a “multiprocessor” (MP) server, but those are rare, reserved for extremely high-end systems and accounting for only single-digit market share.

It gets even more confusing on the processor side, as the return of simultaneous multithreading in Intel’s Core i7 (“Nehalem”) means a single core appears as two logical processors when it runs two separate threads.

So far, Intel has launched a six-core Xeon and AMD has a six-core Opteron in the works. Intel plans an eight-core, server-class Core i7 (“Nehalem”) chip that will run two threads per core, and AMD is planning a 12-core server chip for 2011.

If motherboard makers start going to 8-, 16- or 32-socket motherboards, 256-core machines become possible. With 12- and 16-core processors, that count could hit 512 cores, and climb further in the coming years.
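
The arithmetic behind those figures is simply sockets times cores per chip, times hardware threads per core where SMT applies. A quick sketch using the configurations mentioned above (the threads-per-core value of 2 assumes SMT is enabled):

```python
def logical_cpus(sockets: int, cores_per_chip: int, threads_per_core: int = 1) -> int:
    return sockets * cores_per_chip * threads_per_core

# Configurations discussed in the article.
configs = [
    ("dual-socket six-core Xeon",        2,  6, 1),
    ("8-socket eight-core with SMT",     8,  8, 2),
    ("32 sockets of eight-core chips",  32,  8, 1),
    ("32 sockets of 16-core chips",     32, 16, 1),
]
for name, sockets, cores, threads in configs:
    print(f"{name}: {logical_cpus(sockets, cores, threads)} logical CPUs")
```

The last two rows reproduce the 256- and 512-core figures cited above.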

This article was first published on InternetNews.com.
