I've mentioned that Hyper-V is built on or into Windows Server 2008. This platform has a long history, stretching back to the original release of Windows NT 3.1 in July 1993. In fact, the latest versions of Microsoft's operating systems (Windows 7 & Windows Server 2008 R2) are designated as release 6.1 using the same numbering system.
The first releases of Windows NT were notoriously limited in the breadth of hardware they supported. Drivers for the desktop versions of Windows were not compatible, and consequently only a limited range of hardware would run under NT.
At the time this had the benefit of providing driver stability, a key requirement of any server platform. Since then, the range of supported devices and drivers has widened significantly. Consequently, Windows 2008 (and therefore Hyper-V) supports a huge range of hardware components.
This is in contrast to vSphere, where the Hardware Compatibility Guide is far more restrictive. Clearly parallels can be drawn with Microsoft's desire to ensure early versions of Windows NT were stable.
One good example of a supported feature is MPIO (multi-path I/O) for storage. MPIO is available natively within Windows 2008 for iSCSI and Fibre Channel devices. However, VMware was required to implement a new API (vStorage API for Multipathing) and Pluggable Storage Architecture to provide multipathing features.
In conjunction with the benefits of support are those of management. Companies looking to virtualize their x86 platforms will already be familiar with the deployment and maintenance of Windows Server, as this will already be their choice of platform today. Even in organizations where Unix variants are in the majority, Windows will likely be used on the desktop or for email services.
This means companies have a degree of skill and maturity in managing Windows environments. This extends past the simple installation process and covers patch management, security, upgrades, monitoring and reporting. For example, all of the Hyper-V features can be monitored and managed using WMI, the standard Microsoft Management Framework.
The vSphere hypervisor runs on its own kernel (classic ESX pairs it with a Linux-based service console) and is administered through a mixture of command-line tools and a GUI client. Deploying vSphere therefore requires training workers on a totally new platform, with new concepts and terminology.
vSphere will also require integration into existing management frameworks and the customization of maintenance and upgrade procedures. While the cost of training may seem low, for each person trained it can represent the equivalent of a single vSphere license.
As previously discussed, VMware is the clear leader in server virtualization. vSphere is more mature than Hyper-V in many respects, yet not all users need the advanced features provided by VMware. Where users are starting their virtualization journey, Hyper-V may provide a more logical choice because:
The product is offered free, or bundled within existing Windows 2008 purchases.
Hyper-V uses Windows 2008 Server, which already has wide support and an established skills base within many organizations.
Hyper-V leverages Windows 2008 Server components, providing support across a wide range of hardware.
Hyper-V can therefore be suited to many virtualization requirements. However, it is probably fair to say that it currently can't meet the needs of high-performance, large-scale deployments. VMware still has the edge in networking and security, with features such as vShield and the vNetwork Distributed Switch (vDS), but these advanced features are not needed by all clients.
Microsoft will continue to improve and add to Hyper-V. Based on previous history, they will probably continue to give the product away or bundle it for free within the existing Windows Server platform in order to gain market share.
Ultimately, VMware has to make money from vSphere, as virtualization is their core business. For Microsoft, virtualization today doesn't represent their core business but is a stepping-stone to moving customer computing workloads into the cloud. They can therefore continue to provide Hyper-V at no cost. And over time, as the feature differences disappear, the definition of "good enough" will meet most customers' requirements.