Managing Data Storage Growth: Buyer’s Guide


Contrary to common belief, server virtualization does not consolidate all IT assets. As industry experts point out, it certainly does not consolidate storage.

In fact, the common practice of consolidating existing workloads onto virtual machines, combined with latent demand for applications, actually puts added stress on storage. The result can be performance drops and capacity shortfalls.

Mark Peters, senior analyst at Enterprise Strategy Group, says, “Companies face these issues even if latent demands only accelerate storage needs rather than create new ones. Either way, there are often unanticipated storage costs, availability concerns and performance bottlenecks discovered along the server virtualization path that either stall or even stop server consolidation and desktop virtualization projects from being started or completed.”

Companies have embraced storage virtualization in some form or another to tackle this, but concerns persist around containing costs, performance and data availability. To understand the implications and address these concerns, it is important to ask the following questions:

Will it address my business needs?

A Storage Area Network (SAN) was not a priority when Sinclair International Ltd, the leader in the food labelling industry, embarked on its virtualization journey. The end of life of its Network Attached Storage (NAS) and Direct Attached Storage (DAS), coupled with rapid data growth, prompted the company to consider a SAN.

A SAN offered a large amount of storage and flexibility in infrastructure deployment. Barry Watts, IT Manager at Sinclair, believes the approach to SAN has to be a strategic, business-driven decision. Watts considered his company’s forecasted growth, and the significance and impact of the changes on his physical server farm, applications, software and data going forward.

Consequently, in 2009 Watts charted out his company’s server virtualization and storage strategy in hard financial terms over six years. “As per my calculations, based on our company’s forecasted growth, the historic server and Direct Attached Storage (DAS) replacement cycle, and the projected growth in business data, by putting the EMC NS-120 Celerra with the integrated Clariion at the core of the virtual server architecture, we would save £144,000 [approx. $230,000 US] on infrastructure asset purchases over six years,” he says.

This kind of holistic approach, believes Richard Flanders, Director of Product Marketing at MTI, allows for an effective utility model that ensures flexibility and economies of scale.

Will it ensure higher performance and availability?

Vincent Boragina, Manager of System Administration at the W. P. Carey School of Business at Arizona State University, aimed to reach 100% server virtualization, so performance from IT assets was imperative. Advances in server virtualization over the years, alongside desktop virtualization, led the school to take on workloads with high-end storage I/O needs, such as SQL databases and file servers (initially kept off the server virtualization layer while the products matured). But when the school started to virtualize these platforms, it ran into higher latency. Its I/O requirements had grown.

Boragina explains, “The issue with virtualization rests not so much with storage capacity as with how fast, and at how low a latency, you can get the data on and off the disk. What is key are the controllers, the fiber connectivity and so on that run the disk, which impact the IOPS (Input/Output Operations Per Second) and the latency of that disk. This is where complexity rises, as latency is harder to measure. Performance was my key criterion.”
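The relationship Boragina describes can be seen with a rough back-of-the-envelope calculation. The Python sketch below uses Little's Law to show how average latency caps achievable IOPS for a given number of outstanding I/Os; the queue depth and latency figures are hypothetical, not measurements from the school's storage.

```python
# Rough illustration of the latency/IOPS trade-off described above.
# All figures are hypothetical.

def max_iops(queue_depth: int, avg_latency_ms: float) -> float:
    """Little's Law: achievable IOPS ~= outstanding I/Os / average latency (seconds)."""
    return queue_depth / (avg_latency_ms / 1000.0)

if __name__ == "__main__":
    for latency_ms in (10.0, 5.0, 1.0):   # roughly: spinning disk -> well-cached/SSD territory
        print(f"queue depth 32 at {latency_ms:4.1f} ms -> {max_iops(32, latency_ms):,.0f} IOPS")
```

The point of the arithmetic is that halving latency roughly doubles the I/O a given disk path can deliver, which is why controllers and connectivity matter as much as raw capacity.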

The school implemented DataCore’s SANsymphony-V with XIO storage, where XIO provided the disk subsystem and DataCore acted as the hypervisor for the storage and the storage I/O controllers. As a result, the school achieved a 50% reduction in latency and a 25-30% increase in overall I/O. With its redundancy and I/O requirements met, the school was able to virtualize any platform.

Importantly, to address issues like performance, one need not overhaul the existing storage stack, adds George Teixeira, CEO at DataCore. DataCore’s SANsymphony-V Storage Hypervisor, for instance, uses existing storage assets to boost performance through adaptive caching, while its auto-tiering enables optimal use of SSDs/flash and provides high availability for business continuity. “This precludes the expense of purchasing additional IT assets and premature hardware obsolescence,” says Teixeira.
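DataCore's adaptive caching is considerably more sophisticated than this, but the core idea of serving repeat reads from server RAM instead of disk can be sketched generically. The snippet below is an illustrative Python model only; the `backing_reads` callback standing in for the disk is a made-up name, not a DataCore API.

```python
# Generic sketch of a read cache in front of slower disk: keep recently read
# blocks in RAM so repeat reads never touch the spindle. Not DataCore's
# implementation, just the principle.
from collections import OrderedDict

class ReadCache:
    def __init__(self, backing_reads, capacity_blocks=1024):
        self._read = backing_reads          # hypothetical function: block number -> bytes
        self._cap = capacity_blocks
        self._cache = OrderedDict()         # block number -> data, kept in LRU order

    def read(self, block: int) -> bytes:
        if block in self._cache:
            self._cache.move_to_end(block)  # cache hit: served from RAM
            return self._cache[block]
        data = self._read(block)            # cache miss: fetch from disk
        self._cache[block] = data
        if len(self._cache) > self._cap:
            self._cache.popitem(last=False) # evict the least recently used block
        return data
```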

Business continuity was an added benefit for the school, as it came built into the DataCore solution. Another side effect of the implementation: speedier backups thanks to the faster I/O.

Will it help optimize cost?

Thin provisioning, deduplication and automated tiering are some of the proven technologies that help optimize cost and simplify day-to-day management of storage systems. These technologies, observes Forrester Research in its Market Overview: Midrange Storage, Q3 2011, have been on the market for some time but have yet to see mass adoption.

a) Thin provisioning: Alastair Cairns, Network Manager at St Mary’s Redcliffe & Temple School, was initially nervous about thin provisioning. Though his FAS 2020 SAN from NetApp, together with VMware, came equipped with the technology, Cairns only switched the feature on much later, as he did not yet understand its implications.

He realized the importance of thin provisioning when he set out to reconfigure the storage in VMware, which required him to move a number of his servers from one Logical Unit Number (LUN) to another. In doing so, he found it difficult to move a 30 GB server drive that was not thin provisioned.

“Had I turned on this feature, the servers would have all moved before the end of a working day. Since then, I have thin provisioned all our drives. Consequently, I now get more out of our storage and have boosted the utilization rates of my IT assets by probably 50%,” he says.
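The mechanics behind Cairns' experience are easy to demonstrate in miniature. The Python sketch below is illustrative only: a SAN thin-provisions at the array level rather than with files, and the file name is invented. It uses a sparse file on a Unix-like filesystem to show how a thinly provisioned volume advertises its full logical size while consuming physical blocks only as data is actually written.

```python
# Thin provisioning in miniature: logical size is reserved up front, physical
# blocks are consumed only as data is written. Requires a filesystem that
# supports sparse files (ext4, XFS, APFS, ...).
import os

path = "thin_volume.img"                    # hypothetical file standing in for a LUN
with open(path, "wb") as f:
    f.truncate(30 * 1024**3)                # "provision" a 30 GB volume up front
    f.write(b"\x01" * 4096)                 # only 4 KB has actually been written

st = os.stat(path)
print(f"logical size : {st.st_size / 1024**3:.1f} GB")
print(f"space used   : {st.st_blocks * 512 / 1024:.0f} KB")  # st_blocks counts 512-byte units

os.remove(path)
```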

b) Storage tiering: Tiering, a key function, was a manual and tedious procedure for a long time, points out Forrester. Auto-tiering is now gathering momentum with a slew of launches from industry heavyweights such as DataCore and Dell.

Dell’s Fluid Data architecture, which allows for intelligent data management, is a key feature of its multiprotocol virtualized storage solution, Dell Compellent. “It allows data to be moved automatically from tier to tier depending on its usage. Accordingly, mission-critical applications move to the most expensive disks while infrequently used and archived data gravitates to the lower-cost disks. This frees up space in the higher tier for more data,” says Antonio Gallardo, Storage Business Manager, Dell EMEA.
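The policy Gallardo describes can be sketched in a few lines. The snippet below is a toy Python model, not Dell's Data Progression algorithm; the tier names and access thresholds are invented purely for illustration.

```python
# Toy auto-tiering policy: promote frequently accessed extents to fast,
# expensive disks and demote cold ones to cheap capacity storage.
from dataclasses import dataclass

@dataclass
class Extent:
    extent_id: int
    reads_last_day: int = 0
    tier: str = "capacity"      # "ssd", "performance" or "capacity" (hypothetical names)

def retier(extents, hot_threshold=1000, warm_threshold=100):
    for e in extents:
        if e.reads_last_day >= hot_threshold:
            e.tier = "ssd"              # hottest, mission-critical data
        elif e.reads_last_day >= warm_threshold:
            e.tier = "performance"      # middle tier, e.g. fast SAS disks
        else:
            e.tier = "capacity"         # archives and infrequently used data
    return extents

# Example: one hot, one warm and one cold extent
for e in retier([Extent(1, 5000), Extent(2, 250), Extent(3, 3)]):
    print(e.extent_id, e.tier)
```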

CMA, a leading provider of intraday CDS data and OTC market solutions, deployed Dell’s Fluid Data technology along with VMware virtualization. The auto-tiering feature, initially called Data Progression, allows 60% of the company’s data to sit on the fastest, most accessible storage, while virtual application servers sit in the middle tier and all historical data sits on the lowest tier. “This simplified my storage management, ensured high performance and precluded the need to purchase additional equipment,” says Ryan Sclanders, Infrastructure Manager, CMA Vision.

c) Deduplication: Deduplication retains the original copy, eliminates redundant data and frees up capacity. Jeremy Wallis, Systems Engineering Director at NetApp UK, says the feature has helped most of his customers save 80-90% of capacity in a virtualized environment. St Mary’s Redcliffe & Temple School, for instance, sees a great deal of file similarity across its drives: a teacher creates a worksheet for students, which leads to 30-odd copies, and another faculty member then replicates it. “As a result, we get about 50% deduplication across the board. With NetApp we save approximately 50% of storage space,” adds Cairns.
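The principle is straightforward to sketch. The Python snippet below is a generic, fixed-size-chunk illustration rather than how NetApp implements block-level deduplication: each unique chunk is stored once, and the function reports how much capacity that saves when 30 identical worksheet copies are written.

```python
# Fixed-size-chunk deduplication in miniature: a chunk is stored the first
# time it is seen; identical chunks elsewhere only add a reference.
import hashlib

def capacity_saved(files: dict, chunk_size: int = 4096) -> float:
    seen = set()
    logical = physical = 0
    for data in files.values():
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            logical += len(chunk)
            digest = hashlib.sha256(chunk).digest()
            if digest not in seen:        # first copy of this chunk: store it
                seen.add(digest)
                physical += len(chunk)    # duplicates add no physical space
    return 1 - physical / logical         # fraction of raw capacity saved

# 30 identical copies of one worksheet, echoing the school's example
worksheet = b"Question 1: label the diagram below...\n" * 500
copies = {f"worksheet_copy_{i}.doc": worksheet for i in range(30)}
print(f"capacity saved by dedup: {capacity_saved(copies):.0%}")
```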

Will it maximize returns from installed IT assets and provide future ROI?

As previously noted, based on the six-year forecast period from 2009, Sinclair anticipated capital savings of £144,000 on infrastructure asset purchases. “Our ROI is ahead of forecast. We are now in 2011, three years down the line, and have already saved £60,000 [approx. $95,000]. This includes the cost of purchasing the SAN outright,” adds Watts.

This year, the company also brought forward a major upgrade to its ERP systems. Due to go live in November 2011, the upgrade required 18 additional servers for hosting and testing, a project that was not on the drawing board when the plan was put in place in 2009.

Had the company stayed on its older machines rather than virtualizing, it would have incurred a cost of £115,000 [approx. $184,000] on physical hardware and operating system software. But the virtual servers in the SAN enabled Watts to create the entire ERP server infrastructure at no cost. “With the £115,000 from this project, by the end of the year we will have saved a staggering £175,000 [approx. $280,000] of capital solely by adopting a combined virtualization and EMC SAN strategy.” Watts now anticipates a total CapEx saving of £250,000 [approx. $400,000] by the end of 2014.
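For reference, the year-end figure Watts quotes is simply the sum of the two sterling amounts reported above; the short Python tally below uses only those reported numbers, and the dollar figures in the article are approximate conversions rather than anything computed here.

```python
# Tally of the capital savings quoted above (GBP only).
saved_by_2011 = 60_000          # realised savings three years into the plan
erp_hardware_avoided = 115_000  # physical servers and OS software not purchased
print(f"projected saving by year end: £{saved_by_2011 + erp_hardware_avoided:,}")  # £175,000
```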
