Thursday, March 28, 2024

Considering Virtualization for Small Business


In the last year or two we have seen virtualization go from a poorly understood concept to a much-hyped industry buzzword bandied about constantly in every conversation involving technology. There is no doubt that virtualization is playing an important role in today’s IT landscape, but the question is whether virtualization applies to the small and medium business markets at this time.

The quick answer to this question is: absolutely.

Unlike many technologies that carry a great deal of technological risk and expense and may not be appropriate for a small business, virtualization is a mature technology (IBM CP/CMS, circa 1968) that is well understood. In short, it provides a layer of hardware abstraction that can benefit an IT organization of any size. It may apply even more to small business IT departments than to the enterprise space.

Virtualization: Seriously, What is It?

Before looking at how virtualization can benefit the SMB market I would like to provide some definitions. In today’s IT landscape it has become popular to re-label many common technologies as “virtualization” for marketing reasons, unnecessarily complicating the issue.

True virtualization refers to the virtualizing of entire operating systems. Wikipedia uses the term platform virtualization and I will as well. Technically we could refer to this as “System Virtualization” or “Operating System Virtualization” to distinguish it from loosely-related technologies.

The basic concept of platform virtualization involves running an abstraction layer on a computer that emulates the hardware itself. Through the combination of abstraction and emulation we get what is known as a virtual machine. This virtual machine is a completely working “computer” onto which we can install an operating system just as if we were installing onto the bare metal of a dedicated machine.

Instead of being limited to only installing one operating system image per computer we can now – with platform virtualization – install many copies of the same or disparate operating systems onto the same piece of hardware. A powerful concept indeed.
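
The concept is easy to see in practice. Here is a minimal sketch using the libvirt Python bindings, a common management API for KVM and other hypervisors; it assumes a Linux virtualization host reachable at the standard qemu:///system URI and simply lists every virtual machine sharing that single physical server:

```python
# Minimal sketch: enumerate the virtual machines on one physical host.
# Assumes a KVM/libvirt host and the libvirt-python package.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor

# Each libvirt "domain" is one complete virtual machine.
for dom in conn.listAllDomains():
    state, max_mem_kib, mem_kib, vcpus, cpu_time = dom.info()
    status = "running" if dom.isActive() else "stopped"
    print(f"{dom.name()}: {vcpus} vCPUs, {max_mem_kib // 1024} MB RAM, {status}")

conn.close()
```

Every entry printed is a complete, independent operating system installation, all sharing one piece of hardware.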

Why Has it Taken So Long?

The obvious utility of this technology raises an equally obvious question: “If platform virtualization has been available since 1968, why is it only becoming popular and important recently?” This is an excellent question. The answer is actually quite simple.

Traditional platform virtualization technologies require a lot of support within the computer hardware itself. IBM has been building this type of support into its mainframe systems for decades. Large UNIX vendors like Sun have been providing this in their high-end UNIX servers for years as well.

These systems are highly specialized and typically run their own custom operating system(s). Generally only large IT shops could afford servers of this magnitude. Small shops did not have ready access to these technologies.

For IT professionals who have worked with this type of equipment in the past, the idea of virtualization was so ingrained into the platform that it was discussed very little. It was seen as simply an aspect of these high-end server systems and not necessarily a concept in its own right.

What has changed recently is the move to bring platform virtualization to the commodity hardware space occupied by AMD and Intel (x86 and x86_64) processors.

The first move was to use software alone to make this possible on the x86 processor family. The early players in this space were VMWare and Microsoft, with products like VMWare Workstation, Virtual PC, VMWare GSX and MS Virtual Server.

These products showed that no special hardware was needed to effectively virtualize whole operating systems. Companies of all sizes began to experiment with the concept of virtualizing their existing commodity platforms. This form of virtualization is known as “host-based virtualization” as it requires a host operating system on which the virtualization environment will run.

Following on the heels of these software-only solutions, the big processor vendors in the commodity space, AMD and Intel, began building virtualization capabilities into the processor itself. This allowed for more flexibility, security and performance. It brought the commodity x64 hardware market much more in line with the traditional offerings from the other processor families common in big iron servers.

By doing so, the virtualization market has really exploded. This is true both from the vendor side, as more and more vendors begin offering virtualization related products, and from the customer side, as virtualization begins to be better understood and its use becomes more commonplace.

Through their latest rounds of purchasing, most small IT shops have already acquired servers, and often desktops, that support hardware-level virtualization, even without intending to prepare for a move to virtualization. This tips the equation naturally in that direction. The hardware-supported virtualization model is called “hypervisor-based virtualization” because all operating systems run on top of a tiny kernel called the hypervisor and no traditional operating system runs directly on the hardware.

Why is Virtualization Beneficial to SMBs?

There are two things that we can readily virtualize (without getting esoteric or starting to virtualize our routing and switching infrastructure): servers and desktops.

By far the easier and more obvious choice is the virtualization of servers.

Virtualizing the server infrastructure, or part of it, is the first place that most IT shops look today. Most companies find that the majority of their servers are extremely underutilized with excess CPU, memory and drive capacity sitting idle. Meanwhile, additional workloads fail to find a home due to budget constraints, space or implementation time. Virtualization to the rescue.

Through virtualization we have the opportunity to run several virtual servers on a single piece of server hardware. We could virtualize just a single server system but this would not gain us any utilization advantages. Or, in theory we could virtualize hundreds of servers if our hardware could handle it.

Typically, small businesses can virtualize several typical server roles onto a single physical server. Virtual machine density is, of course, determined by load characteristics as well as by the available hardware. Virtualization uses a lot of memory and storage, obviously, so careful planning is required.

Memory and storage are relatively inexpensive today and are certainly vastly less expensive than purchasing additional server hardware and paying to support it. It is not uncommon for a small business to virtualize at least half a dozen servers on a single piece of hardware, and twenty or more is not an unreasonable number to hope to achieve.
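
A quick back-of-the-envelope estimate shows how the density math works. Every figure below is an illustrative assumption, not a measurement; for lightly loaded small-business servers, memory is usually the binding constraint:

```python
# Rough consolidation estimate; replace these assumptions with
# real numbers from your own environment.
host_ram_gb = 64        # RAM in the physical server
host_cores = 16         # physical CPU cores
hypervisor_ram_gb = 4   # reserved for the hypervisor itself

avg_vm_ram_gb = 4       # typical small-business server role
avg_vm_vcpus = 2
vcpu_overcommit = 4     # vCPUs per core; reasonable for mostly idle workloads

by_ram = (host_ram_gb - hypervisor_ram_gb) // avg_vm_ram_gb
by_cpu = (host_cores * vcpu_overcommit) // avg_vm_vcpus

print(f"RAM allows {by_ram} VMs; CPU allows {by_cpu} VMs")
print(f"Plan for roughly {min(by_ram, by_cpu)} virtual machines")
```

With these sample numbers the host comfortably supports around fifteen modest virtual machines, squarely within the half-dozen-to-twenty range mentioned above.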

Many small shops instantly jump to the conclusion that virtualization requires expensive SAN storage. This is not the case at all. Virtualization provides a range of benefits even without using a SAN storage infrastructure.

There are, of course, some significant advantages available by using SAN in conjunction with virtualization and high availability or load balancing technologies. Often, though, these high availability and load balancing capabilities are additional features that did not exist prior to virtualization and are not necessary in order for a shop to see significant benefits from virtualization. But they do present an opportunity for future improvement when and if budgets allow.

Three Advantages

Small businesses will immediately see many advantages from virtualization, even doing so on a small scale. Some of these benefits are obvious and some are less so.

• Cost: Our first advantage is hardware cost. By eliminating the need to purchase and support expensive server hardware on a per-operating-system basis, we can now deploy more systems at a lower cost per system. In many cases this is not only a cost savings but also frees up the funds needed to move from spartan servers to fewer, more enterprise-class offerings with important performance, stability and support features. These features may include integrated power management and KVM over IP from an out-of-band management console.

• Reducing power consumption: It is very trendy, and for good reason, for companies to be concerned with how “green” they are today, and virtualization plays a key role in the greenification of the IT department.

The addition of virtual machines onto a single physical server typically represents a trivial, if even measurable, increase in power draw. Adding additional physical servers, of course, adds a significant amount of power consumption even for systems that are lightly used or used only occasionally.

• Reducing backup complexity: Virtualized servers can be backed up using completely traditional methods such as file system level backups from the operating system itself as made popular by traditional backup systems like NetBackup, BackupExec, Amanda, Bacula and others.

So if we desire to stick with current backup strategies we can do so without any additional complexity, but if we want to move to image-based backups we can do so quite easily. Using system images as backups is not necessarily new or unique to virtualization, but virtualization makes this far more obvious and accessible for many users.

In fact, with virtualization, system images (a copy of the entire system, not just of its individual files) can be taken using nothing but the regular filesystem – no special software needed. A complete system backup can be taken by simply shutting down the virtual server, making a copy of its virtual filesystem – often a single, large file – and starting the system up again, as the sketch below shows.

Restoring a system can be as simple as copying an image file from a backup storage device to the virtual server and starting it back up. Restore done. System back online.

This is as simple as it gets.
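
A minimal sketch of that cold-backup cycle, assuming a KVM/libvirt host and a guest with a single qcow2 disk file; the VM name and paths are illustrative:

```python
# Cold image backup: shut the guest down, copy its disk file, restart it.
# Assumes KVM/libvirt and the libvirt-python package; names are examples.
import shutil
import time
import libvirt

VM_NAME = "accounting-srv"                             # hypothetical guest
DISK = "/var/lib/libvirt/images/accounting-srv.qcow2"  # its one disk file
BACKUP = "/mnt/backup/accounting-srv.qcow2"            # backup destination

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName(VM_NAME)

dom.shutdown()              # ask the guest OS to power off cleanly
while dom.isActive():       # wait until the shutdown completes
    time.sleep(2)

shutil.copy2(DISK, BACKUP)  # the entire backup is one big file copy

dom.create()                # power the guest back on
conn.close()
```

Restoring is the same operation in reverse: copy the saved file back over the virtual disk and start the machine.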

• Ease of provisioning: Building a new server operating system directly on hardware is a time-consuming venture for most shops.

This is especially true if there are any surprises with a new hardware type that has not been used previously. There may be missing drivers or special operating system settings and parameters needed to support the hardware. With virtualization the target platform is always identical, removing many surprises from this process. This makes it both faster and more reliable.

In many cases deployment is also faster simply because the process of preparing the base machine is so much faster. To kick off a manual install of Linux on a traditional physical server I must purchase the server, install it into the rack, connect power and networking, provision networking, turn on the server, update the firmware, configure the out-of-band management system, burn in the hardware, insert the installation media and begin installing.

From some virtualization environments, by contrast, I can kick off the entire process with a single command at the command line, as the sketch below illustrates. Deploying a new server can go from hours or days to minutes. This does not even begin to address the simplicity of cloning existing systems within a virtual environment.
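
As one example of what such a single command might look like, here is a sketch using the virt-install tool found on KVM hosts, wrapped in Python; the server name, install URL and kickstart location are all illustrative assumptions:

```python
# One-command unattended server deployment on a KVM host via virt-install.
# All names and URLs below are illustrative placeholders.
import subprocess

subprocess.run([
    "virt-install",
    "--name", "web-02",                   # hypothetical new server
    "--memory", "4096",                   # MB of RAM
    "--vcpus", "2",
    "--disk", "size=20",                  # GB; disk image created for us
    "--location", "http://mirror.example.com/centos/",     # install tree
    "--os-variant", "centos7.0",
    "--graphics", "none",
    "--extra-args", "console=ttyS0 ks=http://deploy.example.com/ks.cfg",
], check=True)
```

No racking, cabling, firmware or burn-in steps remain; that physical preparation was done once, for the host.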

• Significant software cost savings: Some vendors, like Novell with Suse Linux, allow you to virtualize as many servers as you want on a single physical machine while paying for only a single machine license. Red Hat gives you multiple installs but not unlimited like Novell. Microsoft has a range of virtualization pricing options depending on your needs, including an unlimited per processor deployment license.

In a worst-case scenario you will need to pay for additional operating system and other software licenses exactly as if you were running the same machines physically, but in almost all cases there is more pricing flexibility and often a dramatic cost reduction for multiple virtualized hosts.

• The ability to “roll back” an entire operating system: Most virtualization platforms support taking a system snapshot, making changes to the active system and then restoring the system to its original state when done. This is great for software testing. It’s especially good for testing operating system patches or any critical update process where, if something went wrong, the system could become unresponsive and potentially unrepairable.

The ability to go “back in time” to the latest snapshot, taken seconds before the patch application or risky configuration change, can be a lifesaver. Of course an image backup could be used in the same way, but snapshots allow for even more rapid recovery due to their “proximity” to the original file system, as sketched below.
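
A sketch of the snapshot-then-rollback cycle, again assuming a KVM/libvirt host with a qcow2-backed guest (which supports internal snapshots); the guest and snapshot names are illustrative:

```python
# Snapshot before a risky change; revert if it goes wrong.
# Assumes KVM/libvirt with qcow2 storage; names are examples.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("file-srv")       # hypothetical guest

# Take a snapshot seconds before patching.
snap_xml = "<domainsnapshot><name>pre-patch</name></domainsnapshot>"
snap = dom.snapshotCreateXML(snap_xml, 0)

# ... apply the patch inside the guest; if it breaks the system:
dom.revertToSnapshot(snap, 0)             # back to the pre-patch state

conn.close()
```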

No Extra Software/Hardware Costs

All the benefits mentioned above come with a move to virtualization and do not require additional cost for software or hardware.

If our budget allows and the need exists, there is also the option of adding one or more virtualization servers and having these servers share a SAN for storage of virtual machine images. At a minimum this will roughly triple the hardware cost, but it provides double the processing power and some truly impressive features.

The main feature that makes this solution impressive is the concept of live migration. Live migration is when a virtual operating system can be moved, while running, from one physical virtualization server to another. This can be done for purposes of load balancing, disaster testing or to survive a disaster itself. With some live migration solutions, generally sold as high availability, this migration can happen so quickly that it provides effectively “zero downtime.” Even heavily used web servers can survive the loss of a physical server without customers ever knowing that one had gone down. The transition between virtual machine host nodes is completely transparent to the end users.
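
In libvirt terms, a live migration between two hosts sharing SAN storage can be as short as the sketch below; the hostnames and guest name are illustrative, and only memory and CPU state travel over the wire, since the disk image already lives on the shared SAN:

```python
# Live-migrate a running guest between two KVM hosts that share SAN storage.
# Hostnames and the guest name are illustrative placeholders.
import libvirt

src = libvirt.open("qemu+ssh://host-a.example.com/system")
dst = libvirt.open("qemu+ssh://host-b.example.com/system")

dom = src.lookupByName("web-01")          # hypothetical running guest

# VIR_MIGRATE_LIVE copies memory while the guest keeps running;
# VIR_MIGRATE_PERSIST_DEST defines the guest permanently on the target.
flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST
dom.migrate(dst, flags, None, None, 0)

src.close()
dst.close()
```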

There is one major caveat. Relying upon a SAN in a disaster recovery scenario, of course, creates another point of failure – the SAN system. So when planning to use a SAN to increase the reliability of your virtual machines, be sure to use a SAN that is at least as redundant as the servers themselves. Otherwise you may increase cost while accidentally lowering reliability and performance.

For the average small business it will often make sense not only to virtualize some of the server infrastructure but to virtualize all or nearly all of it. Virtualization’s advantages are so many and its downsides so few and minor that it is a rare workload in the small business space that would justify dedicated hardware servers.

Desktop Virtualization

Unlike virtualized servers, virtualized desktops often add a degree of complexity due to licensing requirements, especially with Microsoft Windows desktops.

Virtualizing desktops is also somewhat complicated because there are many modes for physically providing desktops. Obviously once we begin talking about virtualizing the desktop infrastructure we are actually talking about a range of solutions. This is because some device must always exist “on the desktop,” providing a keyboard, mouse and monitor which cannot be virtualized. And the desktop operating system itself must be running elsewhere.

Even without virtualization this is done (and sometimes marketed as virtualization when, in fact, it is simply remote access) very commonly through desktop blades, rackmount desktops or terminal servers. All of these solutions move the desktop into the datacenter and provide access to it either from thin client front ends or simply via software on remote users’ existing machines (such as users at home logging in to the office).

We will start with the concept of the terminal server, as this is the most easily virtualized and the most straightforward. Whether we are talking about virtualizing the server on which we run Microsoft Terminal Server (now known as Remote Desktop Services), Citrix XenApp or simply a standard Linux remote desktop terminal server, we need do nothing more than install that server into a virtual environment rather than into a physical one. It is really a question of server virtualization, not of desktop virtualization – it is only perceived by the end user as being related to their desktops.

The other method of desktop virtualization, “true desktop virtualization” as I will refer to it, is to actually run desktop operating system images on a virtual server just as if they were normal desktops dedicated to a user.

This means virtualizing operating systems like Windows XP, Windows Vista or Windows 7 with each image being dedicated to a single user just as if it was a physical desktop.

We could, theoretically, do the same thing with Linux or some other flavor of Unix. But since those systems do not have per-user licensing or desktop-specific versions, and since they always run their desktops in a server mode, we can only differentiate between a true virtualized desktop and a Unix-based terminal server by usage – not by any strict technological means, as they are one and the same.

Only Windows truly offers a dedicated desktop model that allows this to happen in this particular manner without the concept of shared access to a single image simultaneously.

Due to licensing restrictions from Microsoft, Windows desktops must be installed one image per user, even though technologies exist to make this technically unnecessary. Still, there are benefits to this model. The biggest benefits of virtualized desktops go to companies whose employees roam, either internally or externally.

Using virtualized desktops provides far more control to the company than does providing laptops. Laptops can be stolen, lost or damaged. Laptops wear out and need to be replaced regularly. A virtual desktop that is made accessible from the outside of the company can be secured and protected in ways that a laptop cannot be. Upgrades are much simpler and there is no concern of the virtual desktop becoming cut off from the corporate network and being unable to be supported by the IT staff.

Almost any worker who uses a computer in the office already has one at home for personal use, often has a laptop as well, and has high-speed Internet access. Providing remote access to a virtual desktop at the office therefore potentially incurs no additional hardware expense for the company or staff while easing administrative burdens, lowering power consumption and increasing security.

For workers still sitting at a traditional desk inside the company’s offices there is still a need for something physically sitting on the desk that will connect the keyboard, mouse and monitor to the newly virtualized desktop. This could be an old PC that was planned for retirement, a dedicated hardware thin client or even a laptop.

Internal staff can then move around the office or between offices and sit at any available desk with a thin client and log in to their own dedicated virtual desktop and work exactly as if they were at their own desk. They can then go home and work from there as well if this is allowed.

Like virtualized servers, desktops, if the need is warranted, can be easily backed up using either traditional means or by simply taking complete system images. The flexibility is there to do whatever makes the most sense in your environment.

Desktop vs. Server Virtualization

Desktop virtualization is hardly the no-brainer that server virtualization is. It’s less advantageous due to the complexity and surprise cost of licensing. And, except for remote users, hardware on the desktop must always remain.

Desktop virtualization will require careful analysis on a case-by-case basis to determine if it will meet the cost and usability needs of the individual organization. Most organizations that choose to go this route will likely opt to virtualize only partially. They’ll use it only in cases where it makes the most sense, such as roaming users and remote workers, while keeping traditional desktops for many staffers.

Using terminal server options will often be far more common than “true desktop virtualization,” which often makes sense only for power users, developers or to support certain applications that work poorly in a terminal server mode.

Another Virtualization Use: Run Additional OSes

There is a final usage of virtualization that warrants discussion if only because it is important to understand its use in the business environment. This final type of virtualization is not used to put operating systems into the datacenter on server hardware but instead is used to run additional operating system images on traditional desktops and laptops.

This is a common scenario for people who need to test multiple operating systems for support or development. It is not useful for production systems and is generally outside the scope of this discussion. It is a highly useful application of the technology, but rather a niche scenario primarily suited to compatibility testing.

Apple Lags in Virtualization

In all of this discussion there has been, somewhat conspicuously, no mention of Apple’s Mac OSX products. There is a reason for this. Apple does not license Mac OSX so that it may be virtualized on non-Apple hardware, nor does Apple have an enterprise-ready virtualization product for its own platform.

The only way to virtualize Mac OSX is to purchase full, additional licenses for each operating system instance, thereby eliminating most of the cost benefits of this approach. You would then need to run it on a host-based virtualization product such as VMWare Fusion or Parallels, which are designed for use on top of a desktop and not as server-class products.

This is a major gap in the Mac OSX portfolio and one of the ways in which Apple continues to lag behind the rest of the market in capability and in its understanding of its business customers’ needs. If Apple were to change its licensing strategy around virtualization, Mac OSX would prove to be an extremely popular and useful operating system to virtualize both from the server and desktop perspective.

Virtualization: Consider It

Virtualization is a great opportunity to lower cost and raise productivity while reducing risk for businesses of any size and with budgets as low as zero. Many technologies promise important improvements for businesses but most create questionable value while incurring real cost.

In contrast, virtualization brings real, measurable value while often costing nothing – and often reducing spending immediately. For many businesses virtualization is the technology that they have always dreamed of and is, in fact, available today.
