Load balancing and failover between systems are greatly simplified by a shared storage backend. It is a myth that shared storage is a hard requirement, but replicated local storage brings its own complexities and limitations.
But shared storage is far from a necessity of virtualization itself and, like everything, needs to be evaluated on its own merits. If virtualization makes sense for your environment but you need no features that require a SAN, then virtualize without shared storage.
There are many cases where local storage-backed virtualization is an ideal deployment scenario. There is no need to dismiss this approach without first giving it serious consideration.
The last major feature commonly assumed to be a necessary part of virtualization is system-level high availability, or instant failover, for your operating system. Without a doubt, high availability at the system layer is a phenomenal benefit that virtualization brings us. However, few companies needed high availability at this level prior to implementing virtualization, and the infrastructure and software required to achieve it with virtualization often carry a price tag too high to justify.
High availability systems are complex and often overkill. Very few business systems, even the most critical, truly require transparent failover, and those companies that genuinely have that requirement would almost certainly already have failover processes in place.
I regularly see companies moving toward high availability when evaluating virtualization simply because a vendor saw an opportunity to dramatically oversell the original requirements. The cost of high availability is seldom justified by the revenue that the associated reduction in downtime would preserve.
With non-highly available virtualization, downtime for a failed hardware device might be measured in minutes if backups are handled well. This means that high availability must justify its cost by eliminating just a few minutes of unplanned downtime per year, minus any additional risk assumed through the added system complexity.
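A quick back-of-envelope calculation shows how rarely this math works out. Every figure below is a hypothetical assumption chosen purely for illustration; substitute your own numbers when evaluating a real project.

```python
# Back-of-envelope check: does HA pay for itself?
# All figures are hypothetical assumptions for illustration only.

ha_extra_cost_per_year = 25_000   # assumed annual premium: licensing, SAN, support
revenue_loss_per_minute = 50      # assumed cost of one minute of unplanned downtime
downtime_minutes_avoided = 30     # assumed minutes of outage HA would prevent per year

avoided_loss = revenue_loss_per_minute * downtime_minutes_avoided

print(f"Avoided downtime loss per year: ${avoided_loss}")
print(f"Annual cost of HA:              ${ha_extra_cost_per_year}")
print("Justified" if avoided_loss > ha_extra_cost_per_year else "Not justified")
```

Under these assumed figures, HA would need downtime to cost hundreds of dollars per minute, or outages to last many hours, before the premium pays for itself.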
Even in the largest organizations this is seldom justified on any large scale, and in a moderately sized company it is uncommon altogether. Yet today we find many small businesses implementing high availability systems at extreme cost on systems that could easily suffer multi-day outages with minimal financial loss, simply because the marketing literature promoted the concept.
Like anything, virtualization and all the associated possibilities that it brings to the table need to be evaluated individually in the context of the organization considering them. If the individual feature does not make sense for your business, do not assume you have to purchase or implement that feature.
Many organizations virtualize but use only a few, if any, of these “assumed” features. Don’t look at virtualization as a black box. Look at the parts and consider them like you would consider any other technology project.
What often happens is a snowball effect where one feature, likely high availability, is assumed to be necessary without the proper business assessment being performed. Then a shared storage system, often assumed to be required for high availability, is added as another assumed cost.
Even if high availability features are not purchased, the decision to use a SAN may already have been made and never revisited after the plan changes. In my experience, it is very common to find projects of this nature in which more than fifty percent of the total expenditure went to products whose purpose the purchaser cannot even explain.
This concept does not stop at virtualization; extend it to everything that you do. Keep IT in perspective of the business, and do not assume that adopting one technology means you must also adopt the other technologies popularly associated with it.