Friday, March 29, 2024

Pay-As-You-Grow Supercomputing

IBM made its first on-demand computing play of the new year Thursday when it extended its vaunted ubiquitous computing services to supercomputing. As with many of its on-demand endeavors, Big Blue is offering customers the option of scaling out as much computing capacity as needed, without the fixed costs associated with most software licensing schemes. Specifically, IBM customers may now either buy POWER or Intel processor-based supercomputer clusters outright or access the power on demand, paying for processing power based on the required capacity and duration of use.
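
IBM did not publish its metering formula, but the model is simple: the bill scales with capacity used and how long it is used. As a rough illustration only, the Python sketch below uses entirely invented rates (actual pricing was negotiated per contract) to compare a metered burst against buying a cluster outright.

# Hypothetical illustration of capacity-and-duration pricing.
# Both figures below are invented for this example; they are
# not IBM's actual rates.
NODE_HOUR_RATE = 2.50        # assumed $/processor-hour on demand
CLUSTER_PURCHASE = 900_000   # assumed cost of buying 128 nodes outright

def on_demand_cost(nodes: int, hours: float) -> float:
    """Bill scales with capacity used and duration of use."""
    return NODE_HOUR_RATE * nodes * hours

# A three-month burst on 128 nodes, running around the clock:
burst = on_demand_cost(nodes=128, hours=90 * 24)
print(f"On-demand burst: ${burst:,.0f} vs. purchase: ${CLUSTER_PURCHASE:,.0f}")

Under these assumed numbers the 90-day burst comes to $691,200, cheaper than owning; the comparison flips as usage stretches toward year-round.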

IBM hopes to convince the enterprise that its flexible payment approach to scalability closely resembles the way the Internet works. Dave Jursik, vice president of Worldwide Linux Cluster Sales, told internetnews.com his company is providing a virtualized resource that customers can draw on as needed.

“Customers in some sectors want access to large scale computing power in short bursts,” Jursik said. “Supercomputing on demand promises to help turn fixed costs into variable costs, matching supercomputing power exactly to customer demand.”

Jursik noted that certain market segments, such as digital media and life sciences, require supercomputer-like power, but only at certain points in their product cycles. There is also downtime in these sectors, during which the supercomputers sit idle. Computing power also depreciates quickly; a machine that sits unused for a year or more has lost much of its value. IT managers may feel they are wasting money when they pay a high price for powerful machines that don't get used regularly. IBM's strategy, Jursik said, is to give IT managers the power to dial the processing power they need up or down.
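
Jursik's fixed-versus-variable argument comes down to utilization: below some break-even level of use, renting beats owning. A back-of-the-envelope check, again with purely hypothetical figures:

# Break-even utilization: the fraction of the year a purchased
# cluster must run before owning becomes cheaper than renting.
# All figures are assumptions for illustration, not IBM pricing.
NODE_HOUR_RATE = 2.50          # assumed on-demand $/processor-hour
ANNUAL_OWNERSHIP = 1_200_000   # assumed yearly cost of owning 128 nodes
NODES = 128
HOURS_PER_YEAR = 8760

break_even = ANNUAL_OWNERSHIP / (NODE_HOUR_RATE * NODES * HOURS_PER_YEAR)
print(f"Owning pays off above {break_even:.0%} utilization")

With these assumed numbers, ownership only wins above roughly 43 percent utilization; below that threshold, idle capacity is pure overhead, which is the case Jursik makes for bursty sectors like digital media and life sciences.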

IBM’s infrastructure of choice for this initiative? A grid of Intel and POWER processors, built from hundreds of IBM eServer p655 systems and a massive Linux cluster of IBM eServer x335 and x345 systems. The first IBM supercomputing hosting facility will be based in Poughkeepsie, N.Y., with other national and international facilities to follow.

Jursik said the philosophy is geared toward firms that undertake complex, computing-intensive projects on short timescales, months as opposed to years. IBM’s first customer for the service was PGS Data Processing, a division of Petroleum Geo-Services, which needed supercomputing on demand for a seismic imaging project to locate oil reservoirs.

“Seismic imaging services employ the latest numerically intensive applications, but are also highly cost competitive. PGS has been looking for a more flexible business model which addresses peak computing requirements, assures rapid response to our customers, but minimizes long term, incremental cost commitments to PGS,” said Chris Usher, president of Global Data Processing.

Usher, who estimated that contracting with IBM for supercomputing on demand could save his firm $1.5 million, said PGS can now scale in real time to handle requests for important deep-water imaging solutions.

“There is a fair amount of demand for this type of capacity — for large clusters — and there isn’t anywhere else a customer can go to get this type of power to execute a short-term job,” Jursik said.

In fierce competition with Sun Microsystems’ Sun ONE and Microsoft’s .NET, IBM’s on-demand strategy attempts to offer an easy path to integrating a company’s disparate computing applications and operations.
