Spawned in the mainframe days of computing, grid today is being taken out of the realms of academia and research and put to work by enterprises trying to unify their heterogeneous, siloed compute environments.
Because grid computing puts a layer of virtualization, or abstraction, between applications and the operating systems (OS) those applications run on, it can be used to tie together all of a corporation's CPUs and devote them to compute-intensive application runs without the need for stacks and stacks of new hardware.
And because the grid simply looks for CPU cycles made available to it through Open Grid Services Architecture (OGSA) APIs, applications interact with the CPU via the grid's abstraction layer regardless of OS, said Tom Hawk, IBM's general manager of grid computing. In this way, Windows applications can run on Unix, Unix applications can run on Windows, and so on.
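The idea of separating application from infrastructure can be sketched in a few lines. This is a toy illustration only: the class and method names below are invented, and OGSA itself is a web-services standard, not this Python interface. The point is that callers see a single submit() call, whatever OS ends up serving the cycles.

```python
# Toy sketch of the grid abstraction layer: callers submit work to
# one API and never touch the underlying host or OS directly.
# All names here are invented for illustration.

class GridEndpoint:
    """One machine offering cycles to the grid."""
    def __init__(self, host, os_name):
        self.host, self.os_name = host, os_name

    def execute(self, task):
        # A real endpoint would translate the task for its local OS;
        # here we just record where the task ran.
        return f"{task} ran on {self.host} ({self.os_name})"

class Grid:
    """Callers see one submit(); the grid picks the endpoint."""
    def __init__(self, endpoints):
        self.endpoints = endpoints
        self._next = 0

    def submit(self, task):
        # Simple round-robin dispatch across heterogeneous hosts.
        ep = self.endpoints[self._next % len(self.endpoints)]
        self._next += 1
        return ep.execute(task)

grid = Grid([GridEndpoint("hostA", "Windows"), GridEndpoint("hostB", "Unix")])
```

Successive submissions land on different operating systems, but the calling code never changes, which is the separation Hawk describes.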
"We're exploiting existing infrastructure through some fairly sophisticated algorithmic scheduling functions -- knowing which machines are available, pooling machines into a broader grouping of capacity on our way towards exploiting those open APIs so that we really, truly do separate the application from the infrastructure," he said.
Basically, grid can be thought of as similar to the load balancing of a single server but extended to all the computers in the enterprise. Everything from the lowliest PC to the corporate mainframe can be tied together in a virtualized environment that allows applications to run on disparate operating systems, said Hawk.
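The "load balancing extended to the whole enterprise" idea amounts to placing each job on whichever machine currently has the most spare capacity. A minimal sketch, assuming a greedy least-loaded policy and invented machine names and capacities (real grid schedulers are far more sophisticated):

```python
# Toy grid-style scheduler: each task goes to whichever simulated
# machine currently has the most free capacity. Machine names and
# capacities are illustrative, not real hosts.

from dataclasses import dataclass, field

@dataclass
class Machine:
    name: str
    capacity: int                  # abstract "slots" of CPU offered
    assigned: list = field(default_factory=list)

    @property
    def free(self):
        return self.capacity - len(self.assigned)

def schedule(tasks, machines):
    """Greedy least-loaded placement, one slot per task."""
    for task in tasks:
        target = max(machines, key=lambda m: m.free)
        target.assigned.append(task)
    return {m.name: m.assigned for m in machines}

pool = [Machine("mainframe", 4), Machine("unix-server", 2), Machine("desktop-pc", 1)]
placement = schedule([f"job-{i}" for i in range(7)], pool)
```

With seven jobs and seven total slots, everything from the lowliest PC to the mainframe ends up carrying a share of the work, which is the pooling Hawk describes.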
"The way I like to think about it really simply is the internet and TCP/IP allow computers to communicate with each other over disparate networks," he said. "Grid computing allows those computers to work together on a common problem using a common open standards API."
Some companies in the insurance industry, for example, are using grid to cut the run-time of actuarial programs from hours to minutes, allowing actuaries to refresh risk analysis and exposure information many times a day versus just once. In one example, IBM was able to cut a 22-hour run-time down to just 20 minutes by grid-enabling the application, said Hawk.
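The arithmetic here is worth spelling out: 22 hours is 1,320 minutes, so 20 minutes implies roughly a 66x speedup, consistent with fanning the work out across dozens of machines under near-linear scaling. Actuarial runs grid-enable well because each scenario is independent. The sketch below is hedged: the scenario model is invented, and a thread pool stands in for the machines of a real grid, but the map-over-independent-work pattern is the same.

```python
# Sketch of an embarrassingly parallel workload: many independent
# risk scenarios divide cleanly across available workers. The
# scenario model is invented; a thread pool stands in for grid nodes.

from concurrent.futures import ThreadPoolExecutor
import random

def run_scenario(seed):
    """Stand-in for one independent risk simulation."""
    rng = random.Random(seed)
    return sum(rng.gauss(0, 1) for _ in range(1_000))

def run_portfolio(n_scenarios, workers):
    # Because scenarios share no state, wall-clock time shrinks
    # roughly in proportion to the number of workers the grid offers.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_scenario, range(n_scenarios)))

results = run_portfolio(100, workers=4)
```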
But any large, compute-intensive application, such as those used in aerospace and the auto industry for modeling or in the life sciences, can be (and is being) grid-enabled to take advantage of a company's unused CPU cycles, said Ed Ryan, vice president of products for Platform Computing, perhaps the oldest commercial grid company. By doing so, a company can reduce its hardware expenditures while raising productivity through faster analysis and retrieval of critical information.
By utilizing the compute resources of the entire enterprise, idle CPU time is put to productive use running programs that once had to wait until nightfall before enough capacity was available. Servers, which typically have very low CPU utilization rates, can be harnessed to run more applications, more frequently and faster. But this can get addictive, said Ryan.
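Harvesting idle cycles usually means a node volunteers for grid work only when its own load is low, the cycle-scavenging approach popularized by systems such as Condor. A minimal sketch, with an invented threshold and acceptance logic (real grid middleware also handles checkpointing, eviction, and priorities):

```python
# Minimal sketch of cycle scavenging: accept grid work only when the
# local machine looks idle. Threshold and logic are illustrative.

import os

def node_is_idle(threshold=0.25):
    """True when the 1-minute load average per CPU is under threshold."""
    try:
        load_1min, _, _ = os.getloadavg()
    except (AttributeError, OSError):
        return True  # platforms without load averages: assume idle
    cpus = os.cpu_count() or 1
    return (load_1min / cpus) < threshold

def maybe_accept(job):
    # A real scavenger would also checkpoint and evict the job the
    # moment the machine's owner needs it back.
    if node_is_idle():
        return f"running {job}"
    return f"deferring {job}"
```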
"Our biggest customers go into this to drive up their asset utilization and what ends up happening is their end-user customers get hooked on having more compute power to solve their problems," he said.
What this means to the average CIO, who typically has stacks of hardware requests waiting for attention in the inbox, is that they can provide this power while throwing most of those new hardware requests into the circular file.
Even data retrieval and integration are being targeted by at least one firm for grid enablement. Avaki is taking grid to a new level by using it as an enterprise information integration (EII) engine that can either work with or bypass current EII efforts altogether, said Craig Muzilla, vice president of strategic marketing for Avaki.
In fact, Avaki's founder is confident grid will become so pervasive in the coming years that it will be commoditized as a standard part of any operating system. That is why Dr. Andrew Grimshaw founded Avaki as an EII vendor.
"For the CPU cycles it's maybe a little bit more straightforward," said Muzilla. "Instead of having to go buy more servers to speed things up or do analysis faster, to run the application faster I can go harvest the untapped CPU cycles. We think eventually that kind of compute grid technology will be embedded in the operating system so we don't think long-term it's that attractive for ISVs."
Grid also plays right into the hands of companies looking to implement on-demand, utility or service-oriented architectures (SOA), since by its very nature it enables the integration of disparate, heterogeneous compute resources. On-demand environments can therefore piggyback on the grid to achieve the integration and productivity promises of those methodologies, said IBM's Hawk.
"Right now, I'd say the No. 1 reason customers are deploying this technology is to gain resolution or to fix specific business problems they're having around either computing throughput or customer service," he said. "The real cool thing here, long-term, is about integration and about collaboration and that's why I keep harping on this concept of productivity."