
Hardware Today: Supercomputing Gets Low-End Extension


It would seem an oxymoron to say “supercomputing” and “low end” in the same breath. Yet, the big news in the supercomputing arena this year has been its emergence as a force outside of its traditional spheres — big labs and big government.

Corporate America, in particular, is beginning to grasp the competitive edge supercomputing offers. General Motors, for example, was able to reduce the time it takes to design and build new vehicles from 60 months to 18 months. And DreamWorks has been using supercomputing for the many complex mathematical calculations involved in modern-day animation.

But perhaps the most surprising development has been the adoption of the technology by small businesses and start-ups.

“We service companies in the 20 to 70 employee range,” said Dave Turek, IBM’s vice president of supercomputing, and head of IBM’s Deep Computing Capacity On-Demand program, which offers ASP-like access to massive amounts of compute power. “The aggregate compute power we can offer is beyond what any Fortune 100 company can muster.”

For example, $1 million might buy you a Teraflop if you install your own hardware and software. For a much smaller amount, you can now rent 10 times the resources, complete your calculations and use that data to take a product to market.
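
To put rough numbers on that trade-off, here is a back-of-the-envelope sketch in Python. The $1-million-per-Teraflop figure comes from the example above; the rental budget is a hypothetical placeholder, not an actual on-demand price.

```python
# Back-of-the-envelope comparison of buying vs. renting compute capacity.
# The $1M-per-Tflop figure is the article's example; the rental budget is
# a hypothetical placeholder, not an actual IBM on-demand price.

BUY_COST_PER_TFLOP = 1_000_000   # dollars to install 1 Tflop/s of your own hardware/software
RENTAL_BUDGET = 100_000          # hypothetical spend on rented capacity
RENTED_TFLOPS = 10               # capacity rented for that budget in the example

owned_tflops = RENTAL_BUDGET / BUY_COST_PER_TFLOP
print(f"Buying:  ${RENTAL_BUDGET:,} installs roughly {owned_tflops:.1f} Tflop/s of your own capacity")
print(f"Renting: ${RENTAL_BUDGET:,} gives temporary access to {RENTED_TFLOPS} Tflop/s")
```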

At the other end of the spectrum, companies like Dell are coming out with products and architectures to reduce the price of high performance computing. The PowerEdge SC1425, for example, is a 1U dual processor server that can scale into a supercomputing platform. Similarly, the PowerEdge 1850 HPCC has been bundled with InfiniBand as an affordable high-performance clustering platform.

“The boundary is blurring between technical and commercial computing,” said Dr. Reza Rooholamini, director of enterprise solutions engineering at Dell. “Commercial entities, such as oil companies and financial institutions, are now asking for supercomputing clusters.”

Among the current Dell supercomputing customers, he lists Google for parallel searching, Fiat Research Centre for engineering and crash test simulation, and CGG for seismic analysis.

Ivory Tower Transition

While the price per Teraflop has come down sharply in recent years, the transition of supercomputing from the ivory tower to the shop floor began the better part of a decade ago, when massive Crays started to be displaced by cluster architectures that could take advantage of cheaper chipsets and innovations such as parallel processing.

This trend has continued, and the 11-year-old Top 500 list of supercomputers now looks very different from the lists of earlier years: 296 of its entries are clusters. The list is updated semiannually based on the LINPACK benchmark. Today, getting into the top 10 takes at least 10 Tflop/s, and 2 Tflop/s is the barrier to entry for a spot in the top 100. According to Top500.org, the total combined performance of all 500 systems on the list is now 1.127 Pflop/s, compared to 813 Tflop/s a mere six months ago.
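
For readers unfamiliar with how the list is assembled, the mechanics are simple: each system reports its measured LINPACK score (Rmax), the systems are sorted by that score, and the combined performance is the sum. The short Python sketch below illustrates the idea with made-up entries; none of the figures are actual Top500 data.

```python
# Sketch of how a Top500-style ranking is assembled: sort systems by their
# measured LINPACK score (Rmax, in Tflop/s) and sum the scores for combined
# performance. The entries below are invented; they are not real list data.

systems = [
    {"name": "LabClusterA", "rmax_tflops": 70.7},
    {"name": "LabClusterB", "rmax_tflops": 51.9},
    {"name": "VectorMachineC", "rmax_tflops": 35.9},
    {"name": "DeptClusterD", "rmax_tflops": 1.8},
]

ranked = sorted(systems, key=lambda s: s["rmax_tflops"], reverse=True)
for rank, s in enumerate(ranked, start=1):
    print(f"No. {rank}: {s['name']} at {s['rmax_tflops']} Tflop/s")

total = sum(s["rmax_tflops"] for s in ranked)
print(f"Combined performance: {total:.1f} Tflop/s ({total / 1000:.3f} Pflop/s)")

# Applying the article's entry thresholds to this toy list:
print("Would make the top 10: ", [s["name"] for s in ranked if s["rmax_tflops"] >= 10.0])
print("Would make the top 100:", [s["name"] for s in ranked if s["rmax_tflops"] >= 2.0])
```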

Of the 500 systems listed, 320 now use Intel processors. IBM POWER processors are found in 54 systems, HP PA-RISC processors in 48, and AMD brings up the rear with processors in 31 systems.

“IBM has done a good job engineering the POWER processor to keep it competitive with Intel’s processor lines,” said Gartner Group analyst John Enck. “Overall penetration of POWER in different devices continues to rise.”

Topping the list is the Department of Energy’s DOE/IBM BlueGene/L beta-System with its record Linpack benchmark performance of 70.72 Tflop/s. This system will soon be delivered to the Department of Energy’s Lawrence Livermore National Laboratory in Livermore, Calif.

IBM’s lead is due in part to a major shift in chipset architectures that began in 1999. Turek explains that BlueGene is built with 700 MHz embedded microprocessors. This was a conscious effort to get away from the Intel/AMD model of progressively faster processors, which consume more power and require lots of cooling. Building supercomputers on this foundation meant they took up too much space and cost too much to run, said Turek.

“You needed a nuclear reactor to power and cool using the traditional model,” he said. “Lower-powered IBM POWER processors offered better power management and greater efficiency.”
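
The logic behind that design choice can be shown with a toy calculation: to reach a given aggregate throughput, many slower but cooler chips can draw far less power than fewer fast, hot ones. The Python sketch below uses invented numbers purely for illustration; they are not BlueGene/L or Intel specifications.

```python
# Toy power/performance comparison: many slow, low-power chips vs. fewer
# fast, hot chips reaching the same aggregate throughput. All numbers are
# invented for illustration; they are not BlueGene/L or Intel specs.

def chips_and_power(target_gflops, gflops_per_chip, watts_per_chip):
    """Return how many chips are needed for the target and the power they draw."""
    chips = -(-target_gflops // gflops_per_chip)  # ceiling division
    return chips, chips * watts_per_chip

TARGET = 100_000  # desired aggregate Gflop/s for the whole machine

fast = chips_and_power(TARGET, gflops_per_chip=6, watts_per_chip=100)  # "fast and hot"
slow = chips_and_power(TARGET, gflops_per_chip=3, watts_per_chip=15)   # "slow and cool"

print(f"Fast, hot chips:  {fast[0]:,} chips drawing ~{fast[1] / 1000:,.0f} kW")
print(f"Slow, cool chips: {slow[0]:,} chips drawing ~{slow[1] / 1000:,.0f} kW")
```

The slower chips need roughly twice as many sockets in this toy example, but the machine draws only about a third of the power, which is the efficiency argument Turek makes for the embedded-processor approach.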

Top500 Top 10 List

No. 2 on the Top 500 list is a Linux-based cluster used at NASA Ames Research Center in Mountain View, Calif. Named Columbia, this SGI Altix system is driven by Linux and 10,240 Intel Itanium 2 processors.

“Linux is doing very well, and high performance computing buyers seem to like its advantages, such as being able to switch hardware over time,” said Earl Joseph, a supercomputing analyst at IDC. “Itanium 2 performs well on many codes, so the combination is a good fit, especially for customers who want high performance.”

Columbia achieved 51.87 teraflops (trillions of calculations per second) in recent tests, according to Top500.org. Previously, NASA Ames had been using SGI Origin computers running on IRIX, a Unix variant.

“IRIX was a very mature operating system and Linux doesn’t have all of its features quite yet,” said Bob Ciotti, Terascale Systems Lead at NASA. “But it has matured much more rapidly than IRIX and is getting there very fast.”

Formerly at the top of the list, NEC’s Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is now number three. Its Linpack benchmark performance is 35.86 Tflop/s.

The top 10 also includes the IBM-built MareNostrum cluster installed at the Barcelona Supercomputing Center (No. 4 with 20.53 Tflop/s), California Digital Corp.’s Thunder, an Intel Itanium 2 Tiger4 1.4 GHz Quadrics machine (No. 5 with 19.9 Tflop/s), HP’s ASCI Q AlphaServer SC45 (No. 6 with 13.9 Tflop/s), and the Virginia Tech X-system, sometimes referred to as ‘SuperMac’ due to its use of Apple’s Xserve servers (No. 7 with 12.25 Tflop/s).

“Apple’s use of the POWER processor gives them 64-bit performance at a value price,” said Gartner Group analyst John Enck.

How Super is Super?

The numbers above, however, will likely be overshadowed by developments during the course of 2005. IBM, for example, plans to install a 360 Teraflop Blue Gene/L supercomputer at the DOE’s National Nuclear Security Administration in the first half of 2005. And as the boundaries of compute power expand ever upward, what currently rates as super may soon be merely expected.

“Five years ago, the most powerful supercomputer in the world was 1 Tflop/s,” said Turek. “In 10 years, 10 Tflop/s will be ho hum.”
