Brian Biles, vice president of marketing at Data Domain, suggests looking at history for clues. At what point would a decision to change underlying architectures have led to a more productive, responsive and cost-effective enterprise? At what point did it no longer make sense to stick with mainframes and ignore PCs and client/server architectures? At what point did ignoring Internet and networking technologies become foolish?
Predicting the future is largely a guessing game, says Biles, but the past offers illuminating examples.
Whenever a new IT architecture is introduced, a new set of server and storage building blocks emerges to optimize the deployment of that architecture, says David Scott, president and CEO of 3PAR. In mainframe computing, the storage building block that emerged was the monolithic, shared-cache array. In distributed computing, it was the dual-controller modular array.
According to Scott, the third wave of IT architecture, utility computing, is now gaining momentum.
"It allows customers to achieve more with less, on demand, by leveraging server, network and storage virtualization," he says.
Scott believes that a new building block for storage has emerged in the form of utility storage. Utility storage, he says, is built on a unique n-way clustered controller architecture with fine-grained virtualization and centralized volume management. As a result, he says, utility storage has been designed from the ground up to deliver a simple, efficient and massively scalable tiered-storage array for utility computing.
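As a rough sketch of the idea (not 3PAR's actual design), fine-grained virtualization can be pictured as carving a logical volume into small chunks and spreading those chunks across all controller nodes in the cluster, instead of binding the whole volume to a single controller. The chunk size and the round-robin placement policy below are illustrative assumptions:

```python
# Hypothetical illustration of fine-grained virtualization in an
# n-way clustered array: each small chunk of a logical volume is
# assigned to a controller node, so load spreads across the cluster.

CHUNK_MB = 256  # assumed chunk size; real products vary


def map_volume(volume_mb, controllers):
    """Map each chunk of a logical volume to a controller, round-robin."""
    chunks = (volume_mb + CHUNK_MB - 1) // CHUNK_MB  # round up
    return {i: controllers[i % len(controllers)] for i in range(chunks)}


# A 1 GB volume yields four 256 MB chunks, one per node in a
# four-node cluster.
layout = map_volume(1024, ["node0", "node1", "node2", "node3"])
```

Because no single controller owns a whole volume, adding a node to the cluster lets new chunks land on it immediately, which is the scaling property the quote alludes to.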