Until recently, grid computing was an academic tool, used as a cost-effective means for tackling computationally intensive problems. In disciplines like particle physics and meteorology, when problems involved computations that could run in parallel, grids provided what were essentially cheap supercomputers.
Most enterprise applications, though, are designed for centralized processing, and couldn’t easily be diced up and distributed. This partially explains why grids have been slow to take off, despite the hype. Even so, Steve Tuecke, CEO of Univa, a grid software startup, believes grid computing is a case of Internet history repeating itself.
“As with the Internet, it was first adopted by academics. Then, it moved into large, leading-edge companies before finally gaining wider adoption,” he said.
Wider adoption happened after compelling applications, namely email, took off.
Thus far, Tuecke's vision is panning out, with grids slowly making their way from academia into large enterprises. Financial institutions, energy companies, and insurance firms have found that they, like academic institutions, have large problems that can be broken down into smaller computational pieces and distributed.
But if grids stick to this supercomputer model, they will eventually hit an adoption wall. As adoption grows, and as the upstart grid industry prepares its assault on the mid-tier, the concept of grid computing is changing, evolving to encompass more than just distributed computation.