Managing Data Storage Growth: Buyer's Guide

Storage virtualization offers major advantages yet also raises key concerns about cost, performance and data availability.
Posted November 9, 2011

Kavitha Nair

(Page 1 of 2)

Contrary to common belief, server virtualization does not consolidate all IT assets. As industry experts point out, it certainly does not consolidate storage.

In fact, the common practice of consolidating existing workloads onto virtual machines, combined with latent demand for applications, actually places increased stress on storage. This can result in performance drops and capacity shortfalls.

Mark Peters, senior analyst at Enterprise Strategy Group, says, “Companies face these issues, even if latent demands only accelerate storage needs rather than create new ones. Either way, there are often unanticipated storage costs, availability concerns and performance bottlenecks discovered along the server virtualization path that stall, or even stop, server consolidation and desktop virtualization projects from being started or completed.”

Companies have embraced storage virtualization in some form or another to tackle this, but concerns persist around containing costs, performance and data availability. To understand the implications and address those concerns, it is important to ask the following questions:

Will it address my business needs?

A Storage Area Network (SAN) was not a priority when food-labelling industry leader Sinclair International Ltd embarked on its virtualization journey. The end of the lifecycle for its Network Attached Storage (NAS) and Direct Attached Storage (DAS), coupled with explosive data growth, encouraged the company to consider a SAN.

A SAN allowed for a large amount of storage and flexibility in its infrastructure deployment. Barry Watts, IT manager at Sinclair, believes the approach to SAN has to be a strategic, business-driven decision. Watts considered his company’s forecasted growth, and the significance and impact of the changes on his physical server farm, applications, software and data going forward.

Consequently, in 2009 Watts charted out his company’s server virtualization and storage strategy in hard financial terms over six years. “As per my calculations, based on our company’s forecasted growth, the historic server and Direct Attached Storage (DAS) replacement cycle, and the projected growth in business data, by putting the EMC NS-120 Celerra with the integrated CLARiiON at the core of the virtual server architecture, we would save £144,000 [approx. US $230,000] on infrastructure asset purchases over six years,” he says.

This kind of holistic approach, believes Richard Flanders, director of product marketing at MTI, allows for an effective utility model that ensures flexibility and economies of scale.
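The structure of such a projection is simple enough to express as a back-of-the-envelope model. The Python sketch below is purely illustrative: every cost figure is a placeholder rather than Sinclair’s actual numbers, and a real analysis would also factor in servers, applications, software and support as Watts describes.

    # Back-of-the-envelope six-year cost comparison; all figures are
    # illustrative placeholders, not Sinclair's actual numbers.
    YEARS = 6

    # Option A: keep replacing servers and direct-attached storage on the historic cycle.
    das_server_refresh_per_year = 40_000   # GBP per year, placeholder
    das_capacity_growth_per_year = 15_000  # GBP per year for extra DAS as data grows, placeholder

    # Option B: put a shared SAN at the core of the virtual server architecture.
    san_upfront = 120_000                  # GBP for the initial array, placeholder
    san_expansion_per_year = 20_000        # GBP per year for additional shelves/disks, placeholder

    cost_das = YEARS * (das_server_refresh_per_year + das_capacity_growth_per_year)
    cost_san = san_upfront + YEARS * san_expansion_per_year

    print(f"Six-year cost, DAS replacement cycle: £{cost_das:,}")
    print(f"Six-year cost, SAN-centred design:    £{cost_san:,}")
    print(f"Projected saving:                     £{cost_das - cost_san:,}")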

Will it ensure higher performance and availability?

Vincent Boragina, manager of system administration for W. P. Carey School of Business IT at Arizona State University, aimed to reach 100% server virtualization. Performance from IT assets was imperative. Advances in server virtualization over the years, alongside desktop virtualization, led the school to take on workloads with high-end storage I/O needs, such as SQL databases and file servers, which had initially been kept off the server virtualization layer while the products matured. But when the school started to virtualize these platforms, it faced a higher degree of latency. The need for I/O had grown.

Boragina explains, “The issues with virtualization rest not so much with storage capacity as with how fast, and at how low a latency, you can get the data on and off the disk. What is key are the controllers and the fiber connectivity that run the disk, which impact the IOPS (input/output operations per second) and the latency of that disk. This is where complexity rises, as latency is harder to measure. Performance was my key criterion.”
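Latency and IOPS of this kind can be sampled from the host side before and after an infrastructure change. The Python sketch below is an illustrative micro-benchmark, not the tooling the school used: it times random 4 KB reads against a test file and reports average latency and effective IOPS. The file path and read count are assumptions, and filesystem caching will flatter the results unless the test file is much larger than RAM.

    import os
    import random
    import time

    # Illustrative micro-benchmark (POSIX-only, uses os.pread): time random
    # 4 KB reads against a test file and report average latency and IOPS.
    PATH = "testfile.bin"   # assumed pre-created test file
    BLOCK = 4096            # 4 KB per read
    READS = 10_000

    size = os.path.getsize(PATH)
    fd = os.open(PATH, os.O_RDONLY)
    latencies = []
    try:
        for _ in range(READS):
            # pick a block-aligned offset at random
            offset = (random.randrange(0, size - BLOCK) // BLOCK) * BLOCK
            start = time.perf_counter()
            os.pread(fd, BLOCK, offset)
            latencies.append(time.perf_counter() - start)
    finally:
        os.close(fd)

    avg_ms = sum(latencies) / len(latencies) * 1000
    iops = len(latencies) / sum(latencies)
    print(f"average read latency: {avg_ms:.3f} ms, effective IOPS: {iops:,.0f}")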

The school implemented DataCore’s SANsymphony-V with XIO storage, where XIO provided the disk subsystem and DataCore served as the storage hypervisor and storage I/O controller. As a result, the school achieved a 50% reduction in latency and a 25-30% increase in overall I/O. With its redundancy and I/O requirements met, the school was able to virtualize any platform.

Importantly, to address issues like performance, one need not overhaul the existing storage stack, added George Teixeira, CEO of DataCore. DataCore’s SANsymphony-V Storage Hypervisor, for instance, uses existing storage assets to boost performance through adaptive caching, while its auto-tiering enables optimal use of SSDs/flash and provides high availability for business continuity. “This precludes the investment of purchasing additional IT assets and premature hardware obsolescence,” says Teixeira.
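The idea behind auto-tiering can be illustrated with a toy model; the Python sketch below is not DataCore’s actual algorithm, just the general pattern of counting block accesses and keeping the hottest blocks on the fast (SSD/flash) tier.

    from collections import Counter

    # Toy auto-tiering model (not DataCore's actual algorithm): count accesses
    # per block and periodically promote the hottest blocks to the fast tier.
    FAST_TIER_SLOTS = 4        # assumed capacity of the fast tier, in blocks

    access_counts = Counter()
    fast_tier = set()

    def record_access(block_id):
        """Record a block access and return which tier served it."""
        tier = "fast" if block_id in fast_tier else "slow"
        access_counts[block_id] += 1
        return tier

    def rebalance():
        """Promote the most-accessed blocks to the fast tier, demote the rest."""
        hottest = {blk for blk, _ in access_counts.most_common(FAST_TIER_SLOTS)}
        fast_tier.clear()
        fast_tier.update(hottest)

    # Simulated workload: block 7 is hot, the others are touched once.
    for blk in [7, 1, 7, 2, 7, 3, 7, 4, 7, 5]:
        record_access(blk)
    rebalance()
    print("blocks now on the fast tier:", sorted(fast_tier))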

Business continuity was an added benefit for the school, as it came built into the DataCore solution. Another effect of the implementation: speedier backups due to faster I/O.


