Spawned in the mainframe days of computing, grid is today moving out of the realms of academia and research and into the enterprise, where companies are using it to ease the job of homogenizing heterogeneous, siloed compute environments.
Because grid computing puts a layer of virtualization, or abstraction, between applications and the operating systems (OS) those applications run on, it can be used to tie together all of a corporation’s CPUs and apply them to compute-intensive application runs without the need for stacks and stacks of new hardware.
And because the grid simply looks for CPU cycles that are made available to it through Open Grid Services Architecture (OGSA) APIs, applications interact with the CPU via the grid’s abstraction layer regardless of OS, said Tom Hawk, IBM’s general manager of Grid Computing. In this way, Windows applications can run on Unix, Unix applications can run on Windows, and so on.
“We’re exploiting existing infrastructure through some fairly sophisticated algorithmic scheduling functions — knowing which machines are available, pooling machines into a broader grouping of capacity on our way towards exploiting those open APIs so that we really, truly do separate the application from the infrastructure,” he said.
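In rough terms, the kind of matchmaking Hawk describes looks something like the toy scheduler below: machines advertise spare capacity to a pool, applications submit jobs against the pool rather than against a specific host, and the scheduler places each job wherever there is room. This is a minimal Python sketch for illustration only; the class and method names are hypothetical stand-ins for what OGSA-style services and commercial schedulers actually provide.

```python
# Illustrative toy "grid" scheduler: machines report spare cycles into a
# pool, jobs are submitted against the pool, and dispatch() matches the
# two without the application knowing which OS runs where.
# All names here are hypothetical; real grids use OGSA/Globus services
# and schedulers such as Platform Computing's LSF.
from dataclasses import dataclass
from queue import Queue


@dataclass
class Machine:
    name: str
    os: str            # "windows", "linux", "z/os", ...
    free_cpus: int


@dataclass
class Job:
    name: str
    cpus_needed: int


class ToyGridScheduler:
    """Pools spare capacity and matches queued jobs to it."""

    def __init__(self):
        self.pool = []          # machines advertising spare cycles
        self.backlog = Queue()  # jobs waiting for capacity

    def register(self, machine):
        """A machine advertises its spare cycles to the grid."""
        self.pool.append(machine)

    def submit(self, job):
        """Applications submit work against the pool, not a specific host."""
        self.backlog.put(job)

    def dispatch(self):
        """Place each queued job on any machine with enough free CPUs."""
        while not self.backlog.empty():
            job = self.backlog.get()
            target = next((m for m in self.pool if m.free_cpus >= job.cpus_needed), None)
            if target is None:
                self.backlog.put(job)   # no capacity yet; try again later
                break
            target.free_cpus -= job.cpus_needed
            print(f"{job.name} -> {target.name} ({target.os})")


if __name__ == "__main__":
    grid = ToyGridScheduler()
    grid.register(Machine("mainframe-01", "z/os", free_cpus=8))
    grid.register(Machine("desktop-042", "windows", free_cpus=1))
    grid.submit(Job("risk-model", cpus_needed=4))
    grid.submit(Job("nightly-report", cpus_needed=1))
    grid.dispatch()
```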
Virtualized Environment
Basically, grid can be thought of as load balancing, but extended from a single server to all the computers in the enterprise. Everything from the lowliest PC to the corporate mainframe can be tied together in a virtualized environment that allows applications to run on disparate operating systems, said Hawk.
“The way I like to think about it really simply is the internet and TCP/IP allow computers to communicate with each other over disparate networks,” he said. “Grid computing allows those computers to work together on a common problem using a common open standards API.”
Some companies in the insurance industry, for example, are using grid to cut the run-time of actuarial programs from hours to minutes, allowing actuaries to run risk analysis and exposure calculations many times a day versus just once. In one example, IBM was able to cut a 22-hour run-time down to just 20 minutes by grid-enabling the application, said Hawk.
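The arithmetic behind that claim is straightforward: cutting a 22-hour serial run to roughly 20 minutes is about a 66-fold reduction in wall-clock time, which is what you would expect from fanning independent work units out across several dozen otherwise-idle machines, minus scheduling overhead. The sketch below shows the general pattern, with Python's standard process pool standing in for a grid scheduler; the scenario function and figures are hypothetical, not IBM's actual workload.

```python
# Hedged illustration of grid-enabling a batch job: if an actuarial run is
# really thousands of independent scenarios, they can be priced in parallel.
# On a real grid the "workers" would be idle desktops and servers managed by
# a scheduler, not local processes; this only shows the fan-out pattern.
from concurrent.futures import ProcessPoolExecutor


def price_scenario(scenario_id: int) -> float:
    """Stand-in for one independent actuarial scenario (hypothetical)."""
    # An expensive Monte Carlo or cash-flow projection would go here.
    return float(scenario_id) * 1.0001


if __name__ == "__main__":
    scenarios = range(10_000)
    with ProcessPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(price_scenario, scenarios, chunksize=250))
    print(f"priced {len(results)} scenarios")
```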
But any large, compute-intensive application, such as those used to model events in aerospace, the auto industry or the life sciences, can be (and is being) grid-enabled to take advantage of a company’s unused CPU cycles, said Ed Ryan, vice president of products for perhaps the oldest commercial grid company, Platform Computing. By doing so, a company can reduce its hardware expenditures while raising productivity through faster analysis and retrieval of critical information.
By tapping the compute resources of the entire enterprise, idle CPU time is put to productive work running programs that once had to wait until nightfall before enough CPU time was available. Servers, which typically have a very low CPU utilization rate, can be harnessed to run more applications, more frequently and faster. But this can get addictive, said Ryan.
“Our biggest customers go into this to drive up their asset utilization and what ends up happening is their end-user customers get hooked on having more compute power to solve their problems,” he said.
What this means to the average CIO, who typically has stacks of hardware requests waiting for attention in the inbox, is that they can provide this power while throwing most of those new hardware requests into the circular file.
Even data retrieval and integration are being targeted by at least one firm for grid enablement. Avaki is taking grid to a new level by using it as an enterprise information integration (EII) engine that can either work with or bypass current EII efforts altogether, said Craig Muzilla, vice president of strategic marketing for Avaki.
In fact, Avaki’s founder is confident grid will become so pervasive in the coming years that it will be commoditized as just a standard part of any operating system. That is why Dr. Andrew Grimshaw founded Avaki as an EII vendor.
“For the CPU cycles it’s maybe a little bit more straightforward,” said Muzilla. “Instead of having to go buy more servers to speed things up or do analysis faster, to run the application faster I can go harvest the untapped CPU cycles. We think eventually that kind of compute grid technology will be embedded in the operating system so we don’t think long-term it’s that attractive for ISVs.”
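The data-grid idea Muzilla describes can be pictured, very loosely, as a federation layer that presents many back-end silos as one logical source, so the caller never deals with each system's format or location directly. The sketch below is purely hypothetical; none of the class or method names come from Avaki's product.

```python
# Hypothetical sketch of the "data grid as EII engine" idea: one virtual
# query spans sources that live in different formats on different systems.
import csv
import io
import json


class CsvSource:
    def __init__(self, text):
        self.text = text

    def rows(self):
        return list(csv.DictReader(io.StringIO(self.text)))


class JsonSource:
    def __init__(self, text):
        self.text = text

    def rows(self):
        return json.loads(self.text)


class DataGrid:
    """Presents many heterogeneous sources as one logical table."""

    def __init__(self, sources):
        self.sources = sources

    def query(self, predicate):
        for source in self.sources:
            for row in source.rows():
                if predicate(row):
                    yield row


if __name__ == "__main__":
    grid = DataGrid([
        CsvSource("policy,region\nP-1,EMEA\nP-2,APAC"),
        JsonSource('[{"policy": "P-3", "region": "EMEA"}]'),
    ])
    print(list(grid.query(lambda r: r["region"] == "EMEA")))
```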
Grid also plays right into the hands of companies looking to implement on-demand, utility or service-oriented architectures (SOA), since by its very nature it enables the integration of disparate, heterogeneous compute resources. On-demand environments can therefore piggyback on the grid to achieve the integration and productivity promises of those methodologies, said IBM’s Hawk.
“Right now, I’d say the No. 1 reason customers are deploying this technology is to gain resolution or to fix specific business problems they’re having around either computing throughput or customer service,” he said. “The real cool thing here, long-term, is about integration and about collaboration and that’s why I keep harping on this concept of productivity.”