Friday, March 29, 2024

Virtualization: Just Because You Can….


Clearly, virtualization is the “hot, new technology” facing many IT organizations. Yet it also seems to be the space where the “just because you can, doesn’t mean you should” problem rears its ugly head most often.

As with everything in IT, it is critical that all technical decisions be put into a business context so that we understand why we choose to do what we do. We shouldn’t blindly base our decisions on popular deployment methodologies or, worse, on myths.

Virtualization itself should be a default decision today for those working in the x64 computing space. Systems should be deployed sans virtualization only when a clear and obvious necessity exists, such as specific hardware needs or latency-sensitive applications. Barring any such need, virtualization is free to implement, with solutions available from many vendors, and it offers many benefits both today and in future-proofing the environment.

That being said, what I often see today are companies deploying virtualization not as a best practice but as a panacea for all perceived IT problems. This it certainly is not.

Virtualization is a very important tool to have in the IT toolbox. It’s one we will reach for very often, but it does not solve every problem. It should be treated like every other tool that we possess, and be deployed only when appropriate.

I see several things recur whenever virtualization comes up as a topic. Many companies today are moving toward virtualization not because they have identified a business need. No, instead it’s deployed because it’s the currently trending topic. People feel that if they don’t implement virtualization, they will somehow be left behind or miss out on some mythical functionality.

This is generally good in that it increases virtualization adoption, but it’s bad because good IT and business decision-making processes are being bypassed. What often happens in the wave of virtualization hype is that IT departments feel not only that they have to implement virtualization itself, but also that they must do so in ways inappropriate for their business.

Four Factors Tied to Virtualization

There are four things I often see tied to virtualization and accepted as requirements of it – whether or not they make sense in a given business environment. These are 1) server consolidation, 2) blade servers, 3) SAN storage and 4) high availability or live failover.

Consolidation is so often vaunted as the benefit of virtualization that I think most IT departments forget that there are other important reasons for implementing it.

Clearly, consolidation is a great benefit for nearly all deployments (mileage may vary, of course). It can nearly always be achieved simply through better utilization of existing resources. It is a pretty rare company running more than a single physical server that cannot shave some amount of cost through limited consolidation, and it’s not uncommon to see datacenter footprints decimated in larger organizations.
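As a back-of-the-envelope illustration, a rough consolidation estimate can be sketched in a few lines of Python. Every number here is hypothetical and would need to be replaced with measurements from your own environment:

```python
import math

# Hypothetical figures - substitute measurements from your own environment.
servers = 12                 # existing physical servers
avg_cpu_utilization = 0.15   # average CPU utilization per server
host_ceiling = 0.70          # target utilization ceiling per virtualization host
headroom = 1.25              # safety factor for usage peaks and host failure

# Total demand expressed in "fully utilized host" units, rounded up to whole hosts.
demand = servers * avg_cpu_utilization * headroom
hosts_needed = max(1, math.ceil(demand / host_ceiling))

print(f"{servers} physical servers -> roughly {hosts_needed} virtualization hosts")
```

With these sample numbers, a dozen lightly loaded servers collapse onto roughly four hosts; the point is only that the estimate is simple enough that there is no excuse for skipping it.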

In extreme cases, though, it is not necessary to abandon virtualization projects just because consolidation proves to be out of the question. These cases exist at companies with highly utilized systems and little budget for a preemptive consolidation investment.

But these shops can still virtualize “in place” systems on a one-to-one basis to gain other benefits of virtualization today. They can look to consolidate when hardware needs to be replaced tomorrow or when larger, more powerful servers become more cost effective in the future.

In short, it’s important not to rule out virtualization just because its most heralded benefit may not currently apply in your environment.

Blade servers are often seen as the obvious choice for virtualization environments. Blades may play better in a standard virtualization environment than they do with more traditional computational workloads, yet this claim is both highly disputable and not necessarily relevant.

Being a good scenario for blades themselves does not make it a good scenario for a business. Just because the blades perform better than normal when used in this way does not imply that they perform better than traditional servers – only that they have potentially closed the gap.

Blades need to be evaluated using the same harsh criteria when virtualizing as when not. And, very often, they will continue to fail to provide the long-term business value needed to choose them over more flexible alternatives. Blades remain far from a necessity for virtualization and are often, in my opinion, a very poor choice indeed.

One of the most common misconceptions is that by moving to virtualization one must also move to shared storage such as a SAN. This mindset is an understandable reaction to the desire to achieve other benefits of virtualization which, even if they don’t strictly require a SAN, benefit greatly from one.

The ability to load balance or failover between systems is heavily facilitated by having a shared storage backend. It is a myth that this is a hard requirement, but replicated local storage brings its own complexities and limitations.

But shared storage is far from a necessity of virtualization itself and, like everything, needs to be evaluated on its own. If virtualization makes sense for your environment but you need no features that require SAN, then virtualize without shared storage.

There are many cases where local storage-backed virtualization is an ideal deployment scenario. There is no need to dismiss this approach without first giving it serious consideration.

The last major assumed necessity of virtualization is system-level high availability, or instant failover, for your operating system. Without a doubt, high availability at the system layer is a phenomenal benefit that virtualization brings us. However, few companies needed high availability at this level prior to implementing virtualization, and the price tag of the infrastructure and software necessary to achieve it with virtualization is often high enough to be very difficult to justify.

High availability systems are complex and often overkill. It is a very rare business that requires transparent failover even for its most critical systems, and the companies with that requirement would almost certainly have had failover processes in place already.

I see companies moving toward high availability all of the time when looking at virtualization, simply because a vendor saw an opportunity to dramatically oversell the original requirements. The cost of high availability is seldom justified by the revenue that would otherwise be lost during the downtime it prevents.

With non-highly-available virtualization, downtime for a failed hardware device might be measured in minutes if backups are handled well. This means that high availability has to justify its cost by potentially eliminating just a few minutes of unplanned downtime per year, minus any additional risk assumed through the added system complexity.
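To put that trade-off in concrete terms, here is a minimal break-even sketch. Every figure in it is hypothetical and would need to be replaced with your own downtime history, revenue impact, and vendor quotes:

```python
# Hypothetical figures - replace with your own downtime history and vendor quotes.
ha_annual_cost = 20_000          # extra hardware, licensing, and support per year for HA
expected_downtime_hours = 0.5    # unplanned downtime per year without HA, given good backups
cost_per_downtime_hour = 5_000   # estimated business cost of one hour of downtime

avoided_loss = expected_downtime_hours * cost_per_downtime_hour

print(f"Downtime cost avoided per year: ${avoided_loss:,.0f}")
print(f"Annual cost of HA:              ${ha_annual_cost:,.0f}")
print("HA pays for itself." if avoided_loss > ha_annual_cost else "HA does not pay for itself.")
```

With these sample numbers the HA premium buys back only a fraction of its cost, and that is before accounting for the added complexity. The arithmetic changes only when downtime is genuinely expensive or frequent.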

Even in the biggest organizations this is seldom justified on any large scale, and in a more moderately sized company it is rarer still. Yet today we find many small businesses implementing high availability systems at extreme cost on systems that could easily suffer multi-day outages with minimal financial loss – simply because the marketing literature promoted the concept.

Like anything, virtualization and all the associated possibilities that it brings to the table need to be evaluated individually in the context of the organization considering them. If the individual feature does not make sense for your business, do not assume you have to purchase or implement that feature.

Many organizations virtualize but use only a few, if any, of these “assumed” features. Don’t look at virtualization as a black box. Look at the parts and consider them like you would consider any other technology project.

What often happens is a snowball effect where one feature, likely high availability, is assumed to be necessary without the proper business assessment being performed. Then a shared storage system, often assumed to be required for high availability, is added as another assumed cost.

Even if high availability features are not purchased, the decision to use a SAN may already have been made and never revisited after the plan changes. It is very common, in my experience, to find projects of this nature where more than fifty percent of the total expenditure goes to products whose purpose the purchaser cannot even explain.

This concept does not stop at virtualization. Extend it to everything that you do. Keep IT in perspective of the business, and don’t assume that choosing one technology automatically means you must adopt the other technologies popularly associated with it.
