For decades, computer companies have made lots of money selling big computers. The equation was simple: if you wanted big iron, you had to pay big bucks.
That rule is no longer true. Now you can build your own supercomputer, using commodity components, for a fraction of the cost of a Cray, IBM SP2, or SGI Origin 2000. Using the open source Unix variant Linux and off-the-shelf PCs, a growing number of people are doing just that.
The first Linux supercomputer appeared in 1994, when two researchers at NASA’s Goddard Space Flight Center in Greenbelt, MD, coupled 16 DX4 PCs running Linux into a single system. The scientists, Thomas Sterling and Don Becker, called the system Beowulf, after the hero who slew the monster Grendel in the Old English epic poem.
Beowulf describes a class of computers made up of a cluster of standard PCs running Linux. Most of the cluster consists of machines dedicated solely to number crunching, often running without keyboards or monitors. A server node feeds data to the rest of the cluster for processing and doubles as the administration system. The cluster is generally hooked together with off-the-shelf Ethernet cards, although some systems use higher-speed networking in an attempt to improve performance.
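The division of labor described above can be sketched with a short program. This is purely an illustration: Python's standard multiprocessing module stands in for the network, and the chunking scheme is hypothetical. Real Beowulf clusters distribute work across separate machines with message-passing libraries such as PVM or MPI.

```python
# Illustrative sketch of the master/worker pattern behind a Beowulf cluster.
# Worker processes stand in for the dedicated compute nodes; the main
# process plays the role of the server node.
from multiprocessing import Pool

def crunch(chunk):
    """A stand-in for the per-node number-crunching work."""
    return sum(x * x for x in chunk)

def main():
    data = list(range(1000))
    # The "server node" splits the data set into chunks...
    chunks = [data[i:i + 250] for i in range(0, len(data), 250)]
    # ...farms the chunks out to the "compute nodes" in parallel...
    with Pool(processes=4) as pool:
        partial_sums = pool.map(crunch, chunks)
    # ...then gathers and combines the partial results.
    print(sum(partial_sums))  # prints 332833500

if __name__ == "__main__":
    main()
```

The pattern is the same whether the workers are processes on one box or PCs on an Ethernet segment: split the job, scatter the pieces, gather the answers.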
Cost-cutting and customizing
That first Beowulf cluster at NASA was quickly followed by similar systems at other research labs and universities. To scientists on tight research budgets, Beowulf’s low cost is very attractive. The fact that Linux is open source is a big plus for them as well, because they can write their own device drivers for telescopes, magnetic imaging devices, and other specialized scientific instruments.
Access to the source code also lets them optimize Linux for cluster computing by stripping out unnecessary parts of the kernel when they compile it. In a cluster where only the server node has a monitor, for example, cutting the device drivers for monitors out of the kernel used by the rest of the cluster can free up memory for other uses.
Today, www.beowulf.org lists 55 universities and research laboratories with Beowulf clusters. The Los Alamos National Laboratory in Los Alamos, NM, for example, has a Beowulf cluster named Avalon, which is currently ranked number 160 on a list of the world’s 500 fastest supercomputers.
While many of these groups built their own Beowulf systems, you can now buy ready-to-go Beowulf systems from companies like Paralogic in Bethlehem, PA, Carrera Systems of Corona, CA, and even IBM. That has led to Beowulf clusters springing up in the computer rooms of major corporations: The Boeing Co.’s Applied Research and Technology group, in Seattle, for example, is experimenting with a 16-CPU Beowulf cluster for designing new airplanes. Pharmaceutical company Bristol-Myers Squibb has been running a 20-node cluster since February 1999. Procter & Gamble has a 32-node system running in a research facility near Cincinnati.
In the corporate world, as in universities, much of Beowulf’s appeal is its low cost. For specific applications that will run on a Beowulf cluster — and not all will — “there is no peer for price to performance,” says Nathan Siemers, a researcher who works with Bristol-Myers’ Beowulf cluster.
That point was made clear at the March 1999 LinuxWorld Expo in San Jose, CA, where IBM showed off a $150,000 cluster of 17 Netfinity servers with Pentium II chips running Red Hat Linux. The system matched the performance of a $5.5 million Cray T3E-900 AC64 supercomputer. (Benchmark results are available at www.haveland.com/povbench.)
One company that requires no convincing that Linux offers more bang for the buck is New York-based oil company Amerada Hess Corp. Hess, which owns gas stations up and down the East Coast, has been using Linux Beowulf clusters for the past year to do the heavy number-crunching required to look for oil and gas underground. In September 1998, Hess bought its first Beowulf cluster from Paralogic. That worked out so well that the company bought two more from Dell Computer Corp.
Hess’s three 32-node Beowulf systems each turn in about 95% of the performance of the 32-CPU IBM SP2 supercomputer Hess previously used for the same task. But each Linux system cost only about $120,000, roughly one-twentieth the price of the SP2, according to Vic Forsyth, manager of geophysical systems in Amerada Hess’s Houston office.
That’s not all. Hess was paying nearly $10,000 a month for hardware support for the IBM supercomputer. The cost to maintain the hardware for Linux systems, on the other hand, was essentially zero. That’s because the computers Hess bought from Dell come with a three-year warranty covering all hardware problems. The $120,000 Hess saves annually on hardware maintenance, said Forsyth, would pay for a new Linux supercomputer each year. It’s no surprise, then, that Hess is preparing to buy a fourth Linux system — and to unload the SP2.
But a Beowulf cluster is not for everyone. One of the downsides of Beowulf systems is that they don’t run a lot of commodity software. They are not necessarily well suited to running a database application, for instance. “Beowulf clusters are not very versatile machines,” says Siemers. “You have to tailor applications for them.”
Sometimes, however, that can actually pay off. Hess, for example, could use Beowulf systems because it was able to port its seismic mapping application, which it wrote in-house, from AIX on the IBM SP2 to Linux. The entire port – some two million lines of code – took Hess only about a week, said Forsyth.
That has given Hess a competitive advantage, according to Forsyth. The company’s competitors can’t take advantage of the cheap processing power of Beowulf clusters because they use commercial oil exploration software, he claimed. Those software packages have not been ported to Linux.
Hess’s quick port to Linux doesn’t surprise Doug Eadline, president of Paralogic. For companies like Hess, explains Eadline, Beowulf is pretty much plug and play. “If your code works on an IBM SP2,” he says, “you’re a good candidate for a Beowulf cluster.”
Resources
1. Beowulf home page (www.beowulf.org): ground zero for info on the Linux supercomputer
Dan Orzech is a Philadelphia-based technology writer whose work has appeared in the Los Angeles Times, The Philadelphia Inquirer, and computer publications such as EarthWeb’s Datamation.