Businesses can now tap the computing power traditionally reserved for organizations with hefty infrastructure investments as IBM’s “Deep Computing On Demand” program opens its virtual doors.
Nestled in a secure area within IBM’s Poughkeepsie plant is a cluster of 32-bit xSeries Intel-based Linux systems and 64-bit pSeries AIX servers. The hardware, which is accessible via a secure VPN connection, is available to customers that require the processing power of a supercomputer in doses that don’t warrant dedicated, on-site systems.
Storage is handled by a combination of integrated disks, tape libraries and smaller external SANs. Together, the hardware gives customers the benefits of parallel processing and secure, location-independent assets that consume neither the space nor the resources of already-taxed datacenters.
Beyond minimizing capital expenditures, Dave Turek, IBM’s VP of Deep Computing, notes that the program also spares companies the lengthy and complex process of selecting among a wide spectrum of technologies and deploying systems with steep maintenance and management requirements. “At the end of the day they don’t want to be an apologist for one technology over another,” says Turek.
According to Turek, the Deep Computing On Demand program is ideal for companies facing high-priority, short-term workloads measured in weeks or months.
One company is already harnessing the cluster’s power to speed its delivery of subsurface data to petroleum exploration companies. GX Technology is using the program’s Linux cluster to cut costs while crunching more data and reducing the risk of costly drilling at unsuitable sites.
Mick Lambert, GX Technology’s president and CEO, says, “IBM’s Deep Computing on demand gives us the power to dramatically reduce project cycle times and increase our project capacity, while reducing infrastructure and operating costs.”
Turek says that IBM will eventually expand the cluster to include blade technologies and AMD’s Opteron processors. For now, however, he feels customers can complete time-critical computing projects in a “fire and forget” manner without incurring the costs of inherently disruptive upgrades.