When you read about virtualization today, much of the content focuses on the relatively new concept of server virtualization. In this context, multiple operating system and application sets are virtualized on a single server, allowing it to be used more efficiently and cost-effectively.
While this drives much of the innovation (and revenue) around virtualization today, there are a multitude of virtualization schemes addressing a spectrum of applications. In this article, we'll explore many of the ideas around virtualization and identify their uses and advantages.
Let's begin with a quick introduction of virtualization to set the stage for the following exploration. Virtualization is nothing more than an abstraction over physical resources to make them shareable by a number of users (see Figure 1).
From this definition, we could categorize many things as virtualization that have historically not been so (such as early time sharing systems). But this is the essence of virtualization, and as we'll find, creates many interesting and unique technologies.
Figure 1: Virtualization as an Abstraction of an Environment.
Cloud computing is another technology generating a large amount of press these days. It's interesting to note that virtualization is the key enabler of cloud computing and cloud infrastructures in general (both public and private clouds).
Virtualization enables dynamically provisionable infrastructure (servers, storage, networking bandwidth, etc.) while also making it much simpler to manage. Without virtualization, resources would be more difficult to provision and share, making them less cost effective.
Let's start with the most prolific aspect of virtualization, called platform virtualization. Platform virtualization (and its sub-categories) is what enables both server and desktop virtualization. Platform in this context refers to the hardware platform and its various components. This includes not only the CPU, but also networking and storage and even bus attachments such as USB and serial ports.
The key technology that makes this possible is called the hypervisor (see Figure 2). The hypervisor is the component that virtualizes the platform, making the underlying physical resources shareable and implementing the policies for sharing among the multiple users.
The users here are called virtual machines, each of which is an aggregation of an operating system and application set. Note that the hypervisor can be implemented in one of two major styles. The bare-metal hypervisor (type-1) sits directly on the host server and serves as the platform. The hosted hypervisor (type-2) is an application that runs in the context of a host operating system.
Both are useful styles, but type-1 hypervisors are commonly used for server virtualization, while type-2 hypervisors are used for desktop- or laptop-based virtualization.
Additionally, the virtual machine (VM) uses a wrapper that specifies the requirements and constraints for the VM. This wrapper can use a number of formats, but one growing standard is called the Open Virtualization Format (or OVF).
Note here that the VM is really nothing more than a file in some format. The virtual disk used by the VM is just another file encapsulated within the VM. Packaged as a VM, the OS and application set can be simpler to manage (in some ways; more complicated in others) and can easily be copied to create a new instance.
The VM as a file in a host system (hypervisor) has some interesting benefits as well as some unique issues. As a file, it's easy to manage a VM as a template (a starting point for derivatives of the OS and application set). It's also simple to move a VM from one host system (hypervisor) to another, as the process is nothing more than a file copy.
The fundamental downside of VMs is due to their simplicity. It's simple to end up with a large number of VMs and not know exactly what each contains or why it was created (cloned, snapshotted, etc.). With VM sprawl come other issues, such as VM contents that become out of date (some of which may contain security exploits).
When a VM isn't running (inactive), it exists as a file in storage and can't actively be managed for patches or software updates. This generally creates a management issue and an additional area of research. One potential solution is to catalog the components of the VM within the wrapper metadata to make it simpler to manage all VMs (whether active or inactive).
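The cataloging idea can be sketched in code. The snippet below parses a simplified, OVF-inspired wrapper (the element names and packages are hypothetical, not the real OVF schema) to inventory the software inside each VM, which works whether the VM is active or sitting inactive on disk:

```python
import xml.etree.ElementTree as ET

# A simplified, OVF-inspired wrapper (not the full OVF schema) that
# catalogs the software installed inside a VM image, so even inactive
# VMs can be inventoried for stale or vulnerable components.
WRAPPER = """\
<Envelope>
  <VirtualSystem id="web-frontend">
    <Software name="ubuntu-kernel" version="2.6.32"/>
    <Software name="apache-httpd" version="2.2.14"/>
  </VirtualSystem>
</Envelope>
"""

def catalog(envelope_xml):
    """Return {vm_id: [(package, version), ...]} from wrapper metadata."""
    root = ET.fromstring(envelope_xml)
    result = {}
    for vs in root.iter("VirtualSystem"):
        result[vs.get("id")] = [(s.get("name"), s.get("version"))
                                for s in vs.iter("Software")]
    return result

print(catalog(WRAPPER))
```

A management tool could sweep such catalogs across a VM library and flag out-of-date components without booting a single VM.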
As you would expect with any popular technology, there are a variety of ways in which virtualization can be achieved. For platform virtualization, there are two primary models, called full virtualization and para-virtualization. Let's now explore these platform virtualization techniques and their relative benefits.
Full virtualization provides a sufficient emulation of the underlying platform that a guest operating system and application set can run unmodified and unaware that their platform is being virtualized.
While from a purist perspective this is ideal, it comes at a cost. Providing a full emulation of the platform means that all platform devices are emulated with enough detail to permit the guest OS to manipulate them at their native level (such as register-level interfaces).
The device emulation must also reproduce each device's idiosyncrasies, making this style very costly from a performance perspective. As you can imagine, the cost can be quite high: the OS manipulates the device as it normally would, and the hypervisor implements emulation at that level while bridging to a physical device on the server (which may be a different device entirely).
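A toy model makes the cost concrete. The hypothetical UART below (register offsets and behavior invented for illustration) shows what the hypervisor must do on every register access the guest makes; in full virtualization, each such access is a trap into emulation code like this:

```python
# Toy model of full virtualization's device emulation: a hypothetical
# serial UART with a status register and a data register. Every guest
# register access traps to the hypervisor, which emulates the device's
# behavior (including idiosyncrasies like "ready" status bits).
class EmulatedUART:
    STATUS, DATA = 0x0, 0x4        # register offsets (made up)
    TX_READY = 0x01                # status bit: transmitter ready

    def __init__(self):
        self.output = []           # bytes "sent" to the backing device

    def read(self, offset):
        if offset == self.STATUS:
            return self.TX_READY   # always ready in this toy model
        return 0

    def write(self, offset, value):
        if offset == self.DATA:
            self.output.append(value)

# A guest driver polls STATUS and then writes DATA; each of these
# accesses is a trap-and-emulate round trip in full virtualization.
uart = EmulatedUART()
if uart.read(EmulatedUART.STATUS) & EmulatedUART.TX_READY:
    uart.write(EmulatedUART.DATA, ord("H"))
print(uart.output)
```

Multiply this per-access overhead across disks and NICs moving real workloads, and the performance penalty of full emulation becomes clear.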
Figure 3: Full Virtualization of the Underlying Platform.
While full virtualization comes with a performance penalty, the technique permits running unmodified operating systems, which is ideal, particularly when source code is unavailable, as with proprietary operating systems.
Solutions that implement full virtualization today include VMware's family of hypervisors, Xen and XenServer from Citrix, VirtualBox from Oracle, QEMU from Fabrice Bellard, and KVM from Red Hat (among others).
Another technique, called para-virtualization, was created to recover the performance lost to platform virtualization. The fundamental issue with full virtualization is the emulation of devices within the hypervisor. A solution to this problem is to make the guest operating system aware that it's being virtualized. With this knowledge, the guest OS can short-circuit its drivers to minimize the overhead of communicating with physical devices.
In this way, the guest OS drivers and hypervisor drivers integrate with one another to efficiently enable and share physical device access. Low-level emulation of devices is removed, replaced with cooperating guest and hypervisor drivers. The downside of para-virtualization is that the guest must be modified to integrate hypervisor awareness, but this comes with a tremendous upside in overall performance.
Figure 4: Para-Virtualization to Improve Performance.
The Xen hypervisor popularized this approach, introducing the term para-virtualization. Today, most virtualization solutions support para-virtualization as the norm.
Using VMware hypervisors, you'll find the introduction of guest tools (which dynamically modify the guest OS). Using the Microsoft Hyper-V hypervisor, you'll find the term "enlightened," which is just another term for para-virtualization (the guest is enlightened to the fact that it's being virtualized). Solutions that implement para-virtualization include Red Hat's Xen, VMware's family of hypervisors, KVM, and others.
Para-virtualization implies an interface between the guest OS drivers and the hypervisor. This is an obvious area for standardization, but unfortunately no standard exists. Within the Linux world, the Virtio (virtualization I/O) project seeks to standardize this interface across the spectrum of possible devices, but it is currently used solely in Linux.
Emulation is the process by which a host emulates, or imitates, another platform.
Traditionally, emulators were viewed as inefficient because they mimicked the execution of each guest instruction through a number of instructions in the host. In addition to emulating the platform, the individual components within the platform require emulation as well. This includes the CPU (emulating the instructions and internal behaviors, such as registers and caches), a simplified version of the memory subsystem, device emulation (as discussed above), and any idiosyncrasies of the system.
This style of virtualization goes well beyond the simple abstraction for sharing, and instead creates new capabilities. For example, as shown in Figure 5, our host platform could utilize a given processor (an x86-based CPU), but the emulator could export platforms built around an entirely different processor (such as ARM or PowerPC).
Figure 5: Emulating Target Environments Different from the Host.
This is an extremely useful style of virtualization, particularly from a development perspective. As developers, we can emulate a new target environment for the purposes of target-based development without having that particular physical hardware environment at our disposal.
Given the powerful capabilities provided by emulation, it's not surprising that partial answers have been found for the performance issues that exist. To improve performance of the fully virtualized platform, solutions like QEMU implement dynamic binary translation with cached code blocks to minimize the number of translations that are required.
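The caching idea can be sketched simply. The toy model below (far simpler than QEMU's actual translator, and with an invented guest instruction format) translates each guest basic block once, caches the result by guest address, and reuses it on every later execution:

```python
# Toy sketch of dynamic binary translation with a code cache, in the
# spirit of (but far simpler than) QEMU's approach: each guest basic
# block is translated once, cached by guest address, and reused.
translations = 0
code_cache = {}   # guest block address -> translated (here: a Python fn)

# Hypothetical guest blocks: each is a list of (op, operand) pairs.
guest_blocks = {0x1000: [("add", 5), ("add", 2)],
                0x2000: [("mul", 3)]}

def translate(addr):
    """'Translate' a guest block into host code (a Python function)."""
    global translations
    translations += 1
    ops = guest_blocks[addr]
    def block(x):
        for op, n in ops:
            x = x + n if op == "add" else x * n
        return x
    return block

def execute(addr, x):
    if addr not in code_cache:        # translate only on first use
        code_cache[addr] = translate(addr)
    return code_cache[addr](x)

execute(0x1000, 0); execute(0x1000, 10); execute(0x2000, 4)
print(translations)   # 2: the block at 0x1000 was translated only once
```

Because hot blocks run many times, the one-time translation cost is amortized and the emulator approaches the speed of the translated code itself.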
QEMU also includes portions of a compiler to optimize code generation for a given platform.
Solutions that implement emulation include Bochs (specifically for x86-based platforms) and QEMU, which supports a number of host platforms and implements a variety of target platforms.
Emulation solves a variety of interesting problems outside of traditional platform virtualization. One problem comes from the field of digital preservation (maintaining the ability to execute historical software).
The Computer History Simulation Project has developed a multi-system simulator called SIMH, similar to QEMU but focused on historically significant computer hardware. SIMH implements simulators for the Data General Nova and Eclipse systems, various VAX and PDP systems, the IBM 1401 and IBM System/3, the MITS Altair 8800, and many others. This project is under active development, so more hardware is likely on the way. The project also maintains tools for cross-assembling code for some of the emulated platforms.
Another intellectually interesting project under digital preservation is the Virtual AGC (Apollo Guidance Computer). This project emulates the inner workings of the AGC and is capable of executing the original Apollo software (for a variety of Apollo missions), running on Linux, BSD, Windows, and Mac OS X. The software also contains an emulation of the Saturn rocket's Launch Vehicle Digital Computer (LVDC), which managed the firing of the rocket engines during the initial stages of the launch. The software can present a graphical view of the Display and Keyboard unit (DSKY), providing the same interface used by astronauts during those missions.
Today, you can find emulators making a comeback in commercial spaces. For example, the Wii Virtual Console provides emulation of many older Nintendo titles (from a spectrum of systems such as the SNK Neo Geo, Nintendo Entertainment System, Nintendo 64, and others). There also exist a number of emulators for older game hardware and software. One of the most interesting is MAME (Multiple Arcade Machine Emulator), which emulates a large number of hardware platforms based on a variety of processors and graphics hardware. The scale of emulation provided by MAME is very impressive.
Operating system virtualization is an interesting form of virtualization that is highly efficient, while providing the basic elements of virtualization.
The technique is used regularly in virtual hosting scenarios, primarily because of its lightweight nature. In OS virtualization, rather than emulating the hardware platform, the technique creates multiple isolated user-spaces on top of a single kernel (see Figure 6).
Note here that each of these is more than just a segregated set of processes; it is a complete user-space (with its own PID space and unique identifiers for all shareable elements). This technique is implemented within the host kernel, with applications for administration and provisioning.
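The separate-PID-space idea can be illustrated with a toy model (real implementations, such as Linux PID namespaces, do this inside the kernel; the classes and numbers here are invented for illustration):

```python
import itertools

# Toy model of OS virtualization's separate PID spaces: each container
# sees its own PIDs starting at 1, while the shared kernel tracks the
# real (global) process behind each one.
global_pids = itertools.count(100)   # kernel-wide process identifiers

class Container:
    def __init__(self, name):
        self.name = name
        self.next_pid = itertools.count(1)   # PID space starts fresh
        self.procs = {}                      # container PID -> (cmd, global PID)

    def spawn(self, cmd):
        pid = next(self.next_pid)
        self.procs[pid] = (cmd, next(global_pids))
        return pid

a, b = Container("a"), Container("b")
print(a.spawn("init"), b.spawn("init"))   # both containers see PID 1
```

Each container's view is self-consistent and starts at PID 1, while the kernel alone knows the global identity of every process, which is exactly the isolation property OS virtualization provides.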
Figure 6: Isolated User-Spaces in Operating System Virtualization.
While providing the necessary capabilities for virtualization, the technique also provides more advanced features that were typically found only in full platform virtualization. For example, OS virtualization can isolate and provide limits for memory, I/O, network, and CPU. Some solutions can even support live migration of a virtual host between physical hosts.
The downside of OS virtualization is that since the kernel is visible to the isolated user-spaces, all must rely on the same kernel (version, configuration, services available, etc.).
Application virtualization is a relatively new name in the virtualization space, but has been around in various forms for quite a while. In this lighter form of virtualization, an application is individually abstracted from the underlying platform (operating system) to extend capabilities to it (see Figure 7).
This can be done to make an application believe that a legacy platform is available to it (with underlying services which may no longer be available). The application is unaware of the abstraction, but the technique allows legacy applications to execute on newer platforms.
Figure 7: Virtualizing Applications on a Platform.
This technique of virtualization is becoming more popular and interesting because of its lightweight nature. Solutions in the application virtualization space include Sandboxie (for sandbox isolation of applications), Xenocode from Code Systems Corporation, and ThinApp from VMware. Other examples include Wine (which allows Microsoft Windows applications to run on Linux) and Cygwin (which provides a POSIX environment in which Linux applications can be recompiled to run on Microsoft Windows systems).
We can also think about application virtual machines (such as JVM, Adobe Flash, and CLR) as application virtualization. These approaches are different in that they typically abstract a platform for development in non-native languages (such as Java or C#), but they do isolate the physical platform through a runtime environment, and therefore provide virtualization of applications.
Desktop virtualization, similar to application virtualization, is another technology that has existed in some form for some time, but is now becoming more popular.
In this model of virtualization, a physical machine is virtualized through a client/server model (Figure 8). In other words, a desktop existing in one location is virtualized to another (possibly remote) machine over a network.
Figure 8: Desktop Virtualization as a Form of Client/Server Computing.
This model is growing in importance through a number of use models. One of the most popular is cloud computing, where a desktop can be embodied within a virtualized server infrastructure.
Since the desktop exists as a VM on a server in a rack of servers, the only way to interact with the desktop is by extending it across a network to the user. This extension is provided through a virtual desktop infrastructure (VDI) protocol, designed for the purpose of virtualizing a desktop and its interfaces to a remote user. Hardware thin clients have been developed (which contain a minimal amount of hardware), but a standard host and operating system can also run the VDI software.
What makes VDI interesting is that it's possible to mix operating systems. For example, you can virtualize a Solaris operating system on a Windows remote client.
Protocols that provide VDI include the traditional Virtual Network Computing (VNC) protocol, which is primarily used in X Window System environments; Remote Desktop Protocol (RDP) from Microsoft; Independent Computing Architecture (ICA) from Citrix; and PC-over-IP (PCoIP), developed by Teradici.
As virtual machines are aggregated onto a physical machine, it's likely that they will communicate with one another. For this reason, hypervisors include virtual networking capabilities to optimize network traffic. Note that this implies software-based networking, but it is commonly very efficient.
A virtual machine, as part of its virtualized platform, is provided with a virtual NIC. This virtual NIC attaches either to a physical NIC (on the physical platform) or to virtual networking infrastructure within the hypervisor (see Figure 9).
Figure 9: Virtual Networking Layer within the Hypervisor.
Virtual NICs can also be assigned to virtual switches to isolate traffic within the virtual networking layer of a hypervisor. These switches can be further connected to physical NICs for attachment to the external (physical) network.
Most hypervisors today implement virtual networking in some form. There even exists an open source, multi-platform virtual switch called Open vSwitch, which provides a virtual multilayer switch. Open vSwitch is developed under the Apache license and supports many different hypervisors (including Xen, XenServer, KVM, and VirtualBox). In addition to providing standard virtual networking, the offering also provides enterprise-level features such as VLANs, QoS and per-VM policing, NAT, and trunking, with future support for hardware-level acceleration (VMDq, SR-IOV). Open vSwitch is an open source offering similar to Cisco's Nexus 1000V or VMware's distributed vSwitch.
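At its core, a virtual switch does what a physical Ethernet switch does in software: it learns which port each source MAC address lives on, then forwards by destination MAC, flooding when the destination is unknown. The sketch below is a toy model of that core idea (production switches like Open vSwitch layer VLANs, QoS, and flow tables on top):

```python
# Toy sketch of the heart of a virtual switch: MAC learning plus
# forwarding between virtual NICs, with flooding for unknown MACs.
class VirtualSwitch:
    def __init__(self, ports):
        self.ports = ports                 # port name -> delivered frames
        self.mac_table = {}                # learned MAC -> port name

    def rx(self, in_port, src, dst, payload):
        self.mac_table[src] = in_port      # learn the sender's port
        targets = ([self.mac_table[dst]] if dst in self.mac_table
                   else [p for p in self.ports if p != in_port])  # flood
        for p in targets:
            self.ports[p].append((src, dst, payload))

sw = VirtualSwitch({"vm1": [], "vm2": [], "uplink": []})
sw.rx("vm1", "aa:aa", "bb:bb", "hello")    # dst unknown: flooded
sw.rx("vm2", "bb:bb", "aa:aa", "hi back")  # dst learned: sent to vm1 only
print(len(sw.ports["uplink"]))             # 1: only the flooded frame
```

Traffic between two VMs on the same host never leaves the hypervisor, which is why virtual networking can be so efficient despite being pure software.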
For best performance, integrating support for virtualization directly into the hardware is ideal.
The earliest example of this was in the early 1970s in the IBM System/370 mainframe (in support of the first VM operating system, VM/370). Today, you'll find virtualization support in desktop and laptop CPUs for processing and I/O from both Intel and AMD.
The x86 instruction set turned out not to be ideal for virtualization, because certain sensitive instructions did not trap when executed outside of privileged mode. To remedy this, the hardware was updated to make the environment more easily virtualized (in other words, to not require dynamic binary translation). With this hardware support, the processor can trap and emulate sensitive instructions directly in hardware instead of software.
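The trap-and-emulate pattern itself is simple to sketch. In the toy model below (the instruction set and "sensitive" list are invented for illustration), ordinary instructions run directly, while sensitive ones cause a "VM exit" into a hypervisor handler that emulates their effect on the virtual machine's state:

```python
# Toy sketch of trap-and-emulate: unprivileged guest instructions run
# directly, while sensitive ones trap to the hypervisor, which emulates
# their effect. Hardware assists such as VT-x and AMD-V make these
# traps and the VM/hypervisor transitions efficient.
SENSITIVE = {"out", "hlt"}   # hypothetical sensitive instructions

def hypervisor_emulate(instr, arg, vm_state):
    if instr == "out":
        vm_state["io_log"].append(arg)   # emulate the I/O port write
    elif instr == "hlt":
        vm_state["halted"] = True        # emulate halting the vCPU

def run_guest(program, vm_state):
    for instr, arg in program:
        if instr in SENSITIVE:
            hypervisor_emulate(instr, arg, vm_state)  # "VM exit"
        elif instr == "add":
            vm_state["acc"] += arg                    # runs natively

state = {"acc": 0, "io_log": [], "halted": False}
run_guest([("add", 2), ("out", 2), ("hlt", None)], state)
print(state)
```

The guest's non-sensitive work proceeds at full speed; only the sensitive instructions pay the cost of the exit into the hypervisor.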
Intel (VT-x) and AMD (AMD-V) provide hardware assists for virtualization, and can additionally more efficiently transition between VMs and the hypervisor using special instructions. It's also possible for VMs to bypass the hypervisor to deal directly with hardware, making I/O operations more efficient.
Hardware-assisted virtualization is useful in full and para-virtualized scenarios and is exploited by all major hypervisor solutions.
Much like virtualization of a platform, virtualization of storage provides equally interesting benefits. Virtualization of storage is an abstraction of storage from what is presented to the user (the logical form), which may be fundamentally different from the physical form (see Figure 10).
Let's explore some of the ways that storage is virtualized, and the benefits and services that it brings.
Figure 10: Storage Virtualization as a Means to Abstract Storage.
From a SAN perspective, we can think about the abstraction from the user perspective (LUNs presented to a user, and the respective block mappings) and from the physical storage perspective (enclosures of drives with front-end servers providing the virtualization).
From this perspective, we can see that the abstraction hides the details of the back-end storage from the front-end users, which provides a number of useful capabilities. First, since the back-end may be made up of a heterogeneous pool of drives with differing capabilities, different service-level agreements (SLAs) can be provided. For example, users requiring fast access can have their data placed predominantly on Fibre Channel drives, while users storing archive data may need only slower SATA drives.
A more advanced capability is a dynamic tiering of the storage, such that as the data ages, it can dynamically and transparently migrate from faster, more expensive storage (FC drives) to slower, less expensive storage (SATA drives). Data migration provides this capability, and depends upon virtualization to provide the transparent remapping as data moves.
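The remapping that makes tiering transparent can be sketched as a mapping table from logical blocks to physical placements (the class, tiers, and placement policy below are invented for illustration, not any vendor's implementation):

```python
# Toy sketch of storage virtualization with tiering: users address
# logical blocks, a mapping table hides the physical placement, and
# cold data can migrate from a fast tier (FC) to a slow tier (SATA)
# without the user-visible logical address ever changing.
class VirtualVolume:
    def __init__(self):
        self.mapping = {}    # logical block -> (tier, physical block)
        self.tiers = {"fc": {}, "sata": {}}   # tier -> phys block -> data

    def write(self, lba, data, tier="fc"):
        self.mapping[lba] = (tier, lba)       # trivial placement policy
        self.tiers[tier][lba] = data

    def read(self, lba):
        tier, pba = self.mapping[lba]
        return self.tiers[tier][pba]

    def migrate(self, lba, new_tier):
        tier, pba = self.mapping.pop(lba)
        self.tiers[new_tier][pba] = self.tiers[tier].pop(pba)
        self.mapping[lba] = (new_tier, pba)   # remap transparently

vol = VirtualVolume()
vol.write(7, "report.pdf")        # hot data lands on FC
vol.migrate(7, "sata")            # ages out to cheaper SATA storage
print(vol.read(7))                # unchanged from the user's view
```

Because the user only ever sees logical block 7, the migration is invisible; that indirection is the essence of storage virtualization.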
Finally, data protection technologies such as RAID implement a form of virtualization in that data is remapped (potentially with error correcting codes) to increase the reliability and availability of the storage system. By definition, this creates an abstraction of the back-end storage system to front-end users for the purpose of adding services and transparent data operations.
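The parity idea behind RAID (as used, for example, in RAID 5) can be shown in a few lines: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt by XORing the survivors. The stripe contents below are arbitrary illustration data:

```python
from functools import reduce

# Parity sketch: XOR the per-drive blocks byte by byte. The remapping
# of user data across drives plus parity is itself a form of storage
# virtualization: users see a plain volume, not the drive layout.
def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

stripe = [b"\x01\x02", b"\x0f\x00", b"\xaa\x55"]   # data on 3 drives
parity = xor_blocks(stripe)                        # stored on a 4th drive

lost = stripe[1]                                   # drive 1 fails
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
print(rebuilt == lost)   # True: the lost block is recovered
```

The front-end user never sees the parity drive or the reconstruction; the array presents the same logical volume before and after the failure.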
Hopefully this article has shown the wide and diverse uses for virtualization technologies. From early IBM systems to today, virtualization has emerged as one of the true paradigm-shifting technologies found in our daily lives.
Virtualization emerged as a mainframe technology, but today it is used in servers and desktops, in preserving historical or esoteric hardware, and even in mobile phones (where a hypervisor hosts the phone's high-level OS and applications). More applications of virtualization are certainly to come.
About the Author
M. Tim Jones is a senior Architect with Emulex Corporation in Longmont, CO. His background ranges from the development of software for geosynchronous satellites to the architecture and development of storage and virtualization solutions.