Friday, June 21, 2024

Optimizing Virtualization for the Hybrid Cloud


Since its inception, the promise of virtualization has been that it would make everything in the data center easier and less expensive. Few would argue with that broad point, but some areas have still not seen those benefits. One of these is data replication.

Data replication solutions originally designed for the physical world can, of course, replicate data from one physical site to another. Virtualization hasn't broken them. But it has added complexities and subtleties that legacy replication products simply cannot address.

Even as virtualization's widespread adoption continues to grow (more than 65% of all workloads are virtualized today, and growth is expected to continue for several more years), IT is finding that it's not enough. Virtualization within the local data center, the Private Cloud, has been a great source of cost savings, but there is a need to expand beyond it and to store and access data and workloads at remote locations, the Public Cloud. This combination of Private and Public Clouds is called the Hybrid Cloud.

To achieve a true Hybrid Cloud, users need to be able to easily and quickly replicate their data and workloads between hypervisors within their Private Cloud, and to be able to move them to Public Cloud providers and back again, with similar speed and ease, while managing the whole thing from a single interface.

There are at least two major areas where the Private Cloud comes up short in the modern data center. The first and most obvious is Business Continuity and Disaster Recovery (BC/DR). For an enterprise to recover after a true disaster, its data needs to be replicated offsite, and for most organizations it no longer makes sense to maintain an expensive dedicated second site for an event as unlikely as a disaster. The same protection can be achieved far less expensively by replicating data and workloads into a Public Cloud provider's environment.

The other, slightly less obvious, area is cloudbursting. Most IT organizations experience seasonal swings in their IT traffic, and as a result need to be able to expand their footprint onto reliable equipment that can be quickly put into service. They can either maintain their own supply of standby equipment that is ready to go, or contract with an outside provider to have equipment ready when it's needed. The first option requires capital investment in equipment that sits idle much of the time; the second is an operating expense that is generally incurred only when the additional capacity is actually needed.
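The capex-versus-opex trade-off above amounts to a simple break-even calculation. As a back-of-the-envelope sketch (all figures below are hypothetical assumptions, not numbers from this article):

```python
# Hypothetical break-even sketch: owning standby capacity (capex) vs.
# renting public-cloud capacity during seasonal bursts (opex).
# All numbers are illustrative assumptions.

standby_capex_per_year = 50_000.0   # amortized annual cost of idle standby gear
cloud_rate_per_hour = 25.0          # hourly rate for burst capacity from a provider

def cheaper_option(burst_hours_per_year: float) -> str:
    """Return which option costs less for a given yearly burst duration."""
    opex = burst_hours_per_year * cloud_rate_per_hour
    return "cloud (opex)" if opex < standby_capex_per_year else "standby (capex)"

# Break-even point here: 50,000 / 25 = 2,000 burst hours per year.
print(cheaper_option(500))    # short seasonal bursts favor the cloud
print(cheaper_option(3000))   # near-constant bursting favors owned equipment
```

With these assumed numbers, an organization that bursts for less than about 2,000 hours a year comes out ahead renting public-cloud capacity; the real decision of course also weighs data-transfer time, which is where replication comes in.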

Both BC/DR and cloudbursting require replicating large amounts of data to a remote location and keeping that data current. Mission-critical applications require data that is as current as possible, ideally up to the most recent transaction.

Designed for a Physical World

Many popular data replication products were designed for a physical world and have been force-fitted into the virtual one. Others began as backup products that had replication bolted on through additional rounds of coding. These products weren't designed for a virtualized world, and in some cases weren't even originally designed to be replication products.

For replication to work efficiently in a virtualized world, the replication tool needs to be purpose-built to understand virtualization and virtual machines. First and foremost, it needs to be hypervisor-aware so that it can manage data within virtual machines. As VMs are created, destroyed, resized, and migrated, the replication tool needs to detect these changes automatically and handle them gracefully. It also needs to differentiate between VMs and their individual requirements for RPO, RTO, and retention. What's more, by replicating at the virtual disk level (e.g., VMDK in a VMware environment), it avoids copying entire physical disks and LUNs. A lack of virtualization awareness results in inefficiencies in replication, storage, and network utilization, and potentially in a failure of legal compliance.
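A minimal sketch of what "hypervisor-aware" means in practice: the replication tool periodically reconciles its jobs against the hypervisor's VM inventory, automatically picking up created, deleted, and resized VMs, each with its own RPO target. (The inventory and job structures here are hypothetical placeholders, not any real product's interface.)

```python
from dataclasses import dataclass

@dataclass
class VM:
    vm_id: str
    size_gb: int

@dataclass
class ReplicationJob:
    vm_id: str
    size_gb: int
    rpo_seconds: int  # per-VM recovery point objective

def reconcile(inventory: list[VM], jobs: dict[str, ReplicationJob],
              default_rpo: int = 300) -> dict[str, ReplicationJob]:
    """Align replication jobs with the hypervisor's current VM inventory."""
    current = {vm.vm_id: vm for vm in inventory}
    # Drop jobs for VMs that no longer exist.
    jobs = {vid: job for vid, job in jobs.items() if vid in current}
    for vid, vm in current.items():
        if vid not in jobs:                       # newly created VM
            jobs[vid] = ReplicationJob(vid, vm.size_gb, default_rpo)
        elif jobs[vid].size_gb != vm.size_gb:     # VM was resized
            jobs[vid].size_gb = vm.size_gb
    return jobs
```

A legacy product replicating whole LUNs has no equivalent of this loop: it cannot see that a VM was created or destroyed, let alone give each one a different RPO.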

Snapshots, the most commonly implemented method of replication, introduce a great deal of overhead and in large and complex implementations can take a long time to complete.  Vendors often claim that they can complete snapshots in 10-15 minutes, but in many cases, they can take an hour or more. 

Rather than using snapshots, Continuous Data Protection (CDP) ensures that every transaction is recorded and replicated immediately. These transactions can be coordinated so that write-order is maintained over a complex set of VMs and storage devices. CDP ensures that RPO will approach zero in the event of a disaster, and as a side benefit, enables rollback and recovery to any point in time. 
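The core mechanism behind CDP can be sketched as a write journal shared across the VMs in an application group: every write receives a single, monotonically increasing sequence number, so replaying the journal up to any sequence number reconstructs a write-order-consistent point in time. (This is a simplified illustration, not any vendor's actual implementation.)

```python
import itertools

class CDPJournal:
    """Toy CDP journal: one global sequence preserves write order across VMs."""
    def __init__(self):
        self._seq = itertools.count(1)
        self._log = []  # entries of (seq, vm_id, block, data)

    def record(self, vm_id: str, block: int, data: bytes) -> int:
        """Journal one write and return its sequence number."""
        seq = next(self._seq)
        self._log.append((seq, vm_id, block, data))
        return seq

    def replay_to(self, point: int) -> dict:
        """Rebuild every VM's blocks as of sequence number `point`."""
        state: dict = {}
        for seq, vm_id, block, data in self._log:
            if seq > point:
                break
            state.setdefault(vm_id, {})[block] = data
        return state

journal = CDPJournal()
journal.record("db-vm", 0, b"begin txn")
mark = journal.record("app-vm", 0, b"txn committed")
journal.record("db-vm", 1, b"post-failure write")
# Rolling back to `mark` yields a consistent image across both VMs:
# the committed transaction is present, the later write is not.
image = journal.replay_to(mark)
```

Because recovery can target any sequence number rather than only the last completed snapshot, RPO approaches zero and any historical point in time remains reachable.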

Other requirements for replication to work efficiently in a virtualized world include:

· Storage vendor independence. The same replication tool should work across the entire IT estate, regardless of which storage vendors are in use; IT should not be locked into a single expensive storage vendor. The destination could be another vendor's storage or a Public Storage Cloud.

· An interface that integrates with the hypervisor’s management tools.

Traditional physical approaches offer some benefits in these critical areas, but not all. The established products fall into a few categories:

· Appliance-based array replication: EMC SRDF, NetApp SnapMirror, Hitachi TrueCopy

· Appliance-based CDP: EMC RecoverPoint

· Host-based replication: Veritas Volume Replicator, Vision DoubleTake

· Backup-derived replication: Actifio, Datto, Veeam, Commvault Simpana

Each category was compared against five criteria: Continuous Data Protection, virtualization awareness, the ability to manage VMs independently, flexible destinations, and multi-tier application support (which some products provide only with added software). None of these traditional approaches meets all five.

Optimizing Virtualization for the Cloud Environment

One product that does deliver these capabilities in the VMware space is Zerto Virtual Replication. Because it is hypervisor-based, in a VMware environment it provides:

· Continuous Data Protection for recovery with no lost data or transactions

· Consistency across complex and multi-tier applications

· Minimal impact on production applications

· Storage vendor agnosticism

· Support for cloud-based destinations

We have been unable to identify a product in the Hyper-V, KVM, or Xen marketplaces that can do all of these things.  However, as you’ll see, that may not be a long-term concern.

In the current marketplace, the variety of cloud and hypervisor providers has led to a situation where no two vendors (e.g. Amazon Web Services, VMware Cloud Hybrid Services) are compatible with one another, and in some cases a vendor’s Private Cloud products are not even compatible with their own Public Cloud offerings (e.g. Microsoft Hyper-V and Azure).  Being able to migrate between Private and Public clouds is important, but it’s also important that a chosen solution doesn’t become a Hotel California (where data checks in but it can’t leave).

IT will want to move data between clouds and hypervisors to maximize savings and agility, and will want to do so with a minimum of complexity, so it must be careful that a given choice doesn't lock it into a particular technology. Just as an IT organization would not want to be committed to a single storage platform, it should not commit to a single cloud platform or hypervisor. Different platforms offer different cost structures and feature sets, and these change with each new release.

Market Landscape

Momentum for technology that enables movement between clouds and hypervisors is increasing.  Companies like Telstra, Allstream, Logicalis Group, OnX Managed Services, Wipro, Zerto, VCE, NetApp, and Accenture have joined Cisco’s Intercloud Fabric Initiative, whose goal is to permit IT to place virtualized workloads in public or private clouds with the same level of security, access control and QoS.   

What’s more, Zerto recently announced plans for their own Cloud Fabric initiative, which will provide a new infrastructure layer between cloud providers and hypervisors, enabling easy movement of VMs and data between local hypervisors and public and private clouds without requiring application reconfiguration or retooling.  If they can deliver on this initiative, this will go a long way toward knocking down the barriers between different private and public clouds, and enabling a true Hybrid Cloud.

The parallels between basic virtualization and this Cloud Fabric are quite apparent.  Virtualization freed IT from compute and storage hardware lock-in, increasing user choice and flexibility and lowering costs.  A cloud fabric has the potential to free IT from lock-in to a particular cloud service or hypervisor, allowing users and IT to choose the service that makes the most sense at a particular time, and to move between services as conditions change. This cloud fabric model will permit IT to select the right platform for each project, based on SLA, cost, performance, features, or other criteria without having to worry about compatibility or lock-in. They’ll even be able to move to a different platform as needs change.

The result will be a federation of services all available interchangeably, which will lead to freedom of choice for users and IT, increased savings in both capital and operating expenses, and the agility that everyone wants.

We believe that today's incompatibility among cloud and hypervisor providers will likely be resolved, initially by a cloud fabric model that adds a layer of abstraction, effectively eliminating the incompatibilities between different hypervisors and clouds.

In order to be ready for this hypervisor- and cloud-agnostic model, IT needs data protection and migration that is fully virtualization and cloud ready today, and we believe that these features are the ones that will get IT there.  The incompatibilities and separations between hypervisors and cloud providers are important today, but we expect that they will be moderated through technology like cloud fabrics.  The result will be true freedom of choice for IT.

Christine Taylor is an analyst for The Taneja Group.

