The storage industry continues to grow at an unprecedented rate, fueled by demand for faster and more efficient computing. According to industry analysts, this growth shows no signs of slowing down.
This is certainly good news for storage vendors, but what does it mean for the future of the industry? Let’s take a look at several key areas that are predicted to shape the storage industry for 2004 and beyond.
Information Lifecycle Management
Although many industry analysts believe that through 2005 information lifecycle management (ILM) will be approximately 80 percent vision and 20 percent products, Diamond Lauffin, senior executive vice president of Nexsan Technologies, predicts it will be more like 95 percent vision and 5 percent implemented technologies.
Lauffin says that even five years ago the cost differences between the various types of storage were dramatic, and those extremes of cost and performance led manufacturers to explore concepts such as Hierarchical Storage Management (HSM). The premise, he says, is that end users would like to have all of their data online all of the time. “Primary storage was too expensive, so we developed products like HSM to migrate data from high-priced disk to a near-line device like tape or optical,” says Lauffin.
However, according to Lauffin, today some storage vendors are supplying disk systems that are being used as primary, secondary, near-line, backup, and archive. “It’s the same exact system, no difference,” says Lauffin. Lauffin explains that there is no difference in cost to the end user regardless of the use.
“When you can provide a disk solution for backup and archive that is equal to or less in cost than tape and that same system is operating at speeds that allow it to be used as primary storage, why would an end user need a software application to migrate data to tiered storage?” he asks. However, he continues, “I do see a use for software that eliminates duplicate files so that end users are not keeping duplicate copies of files that are not going to change.”
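Lauffin doesn’t describe how such duplicate-elimination software would work, but the common approach is to identify files with identical content by hashing. The sketch below is purely illustrative (the function name and chunk size are our own assumptions, not anything Lauffin or Nexsan specifies):

```python
import hashlib
import os

def find_duplicate_files(root):
    """Group files under `root` by SHA-256 content hash and return
    only the groups containing more than one file (the duplicates)."""
    by_hash = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                # Hash in 64KB chunks so large files are never fully in memory.
                for chunk in iter(lambda: f.read(65536), b""):
                    digest.update(chunk)
            by_hash.setdefault(digest.hexdigest(), []).append(path)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}
```

A production deduplicator would typically pre-filter by file size before hashing, since files of different sizes cannot be duplicates.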
John C. Lallier, vice president of technology, FalconStor, predicts that those figures may be closer to a 70/30 split because ILM is such a broad category. “It isn’t a lack of products as much as it is the need to define the processes these products will be used to automate,” says Lallier.
Others agree that while ILM is a good idea for larger organizations, it may be difficult to justify for the small to mid-sized enterprise (SME) segment, which is still wrestling with basic backup window and storage consolidation issues. The problems that ILM solves, according to Zophar Sante, vice president of marketing for SANRAD, are still not at the top of the list for the SME market. But he does believe that ILM solutions can be deployed at the same time as storage consolidation solutions are delivered to SMEs.
Sante believes that ILM suppliers who partner with IP SAN suppliers and disaster recovery (DR) solution providers could find that ILM capabilities layer nicely over the IP-SAN infrastructure. “Within a true IP SAN, there can exist multiple classes of storage systems — ranging from high-end $20K per TB RAID solutions to $3K per TB disk solutions to removable media systems,” says Sante.
Sante also explains that any ILM solution can use all or part of an IP SAN infrastructure to seamlessly migrate files between all three classes of storage in a manner that is invisible to the application server. According to Sante, another way to use ILM in conjunction with an IP SAN is to use the IP SAN as a stage two repository for files located on the internal disk drives of the application server.
“For example,” he says, “an organization could have an email server with 1TB of internal RAID and 2TB of storage resources from an IP SAN. As needed, older files will be transferred between the internal RAID and the IP-SAN storage.” In this case, Sante explains, the ILM solution has 3TB of total storage capacity broken into two classes of storage — the precious and limited internal RAID of the server and the easy to expand IP SAN infrastructure.
“By the way,” Sante continues, “a true IP SAN infrastructure can easily have 500TB of capacity and can increase volumes on the fly.”
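Sante’s email-server example — older files migrating from the server’s internal RAID out to IP-SAN storage as needed — amounts to an age-based migration policy. The sketch below illustrates the idea under our own simplifying assumptions (a flat directory, modification time as the age criterion, and invented names like `migrate_old_files`; SANRAD’s actual products are not described at this level):

```python
import os
import shutil
import time

def migrate_old_files(primary_dir, secondary_dir, max_age_days):
    """Move files in `primary_dir` whose last modification is older than
    `max_age_days` into `secondary_dir` (the cheaper storage tier).
    Returns the list of migrated file names."""
    cutoff = time.time() - max_age_days * 86400
    os.makedirs(secondary_dir, exist_ok=True)
    moved = []
    for name in os.listdir(primary_dir):
        src = os.path.join(primary_dir, name)
        if os.path.isfile(src) and os.path.getmtime(src) < cutoff:
            shutil.move(src, os.path.join(secondary_dir, name))
            moved.append(name)
    return moved
```

A real ILM product would also leave a stub or link behind so the migration stays invisible to the application server, as Sante describes.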
Enhancing a system’s ability to handle innumerable transactions per second
Gartner has predicted that the seamless integration of scalable technologies that can enhance a system’s ability to handle innumerable transactions per second will become more crucial to the success of any storage strategy.
FalconStor’s Lallier agrees that scalability is crucial to any storage strategy, although he says he’s not sure that the integration always needs to be seamless. Nor are transactions per second always the crucial factor, he continues. “It depends on what the storage system is used for — as ideally a storage strategy would take into account the data requirements of the various users/systems and have different categories with different characteristics.”
Nexsan’s Lauffin says it’s a simple question of logic. “Seamless integration has always been and will only become more of a decision point for purchase by all end users,” says Lauffin. He also believes that manufacturers that continue to produce solutions that cannot operate transparently will lose business to manufacturers of solutions that can. He points out that “the winner by default is the end user.”
Sante says he agrees 100 percent because 90 percent of today’s storage is still internal disk within a single server or is directly attached (DAS). But he adds that in the future the majority of storage systems will be SAN connected, and instead of taking I/O from a single application server, the system will be responsible for servicing I/O for 4, 10, or even 20 application servers.
“If a storage system cannot maintain a minimum of 15K I/O operations per second (IOPS) and 100MBps of throughput, then it will become the bottleneck — no-one buys bottlenecks,” says Sante.
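Sante doesn’t break his 15K IOPS floor down, but the arithmetic behind the bottleneck claim is straightforward. As a rough illustration (the 750-IOPS-per-server demand figure is our own assumption, not Sante’s), twenty application servers would exactly consume such an array, and a twenty-first would push it past its limit:

```python
def is_bottleneck(servers, iops_per_server, system_iops_limit=15_000):
    """True when aggregate demand from the attached application
    servers exceeds the storage system's IOPS ceiling."""
    return servers * iops_per_server > system_iops_limit

# 20 servers x 750 IOPS = 15,000 IOPS: at the limit, not over it.
# A 21st server at the same load would make the array the bottleneck.
```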
Archiving’s biggest impact will be on application performance
Other industry analysts believe the most significant results of archiving will be in application performance improvements. Lallier believes this is very likely as it can be clearly defined and implemented. Archiving older, rarely used data can free up valuable, fast storage devices for database and messaging applications, he says.
Lauffin disagrees entirely with this concept. He says there is very little room for improvement in this area beyond the direction and cost reductions of technologies already in place, and that what room remains doesn’t matter, because this would not be the right approach to providing a genuinely better solution for the end user.
“Archiving in its most basic definition defines that these are going to be files that are seldom if ever accessed, so speed is of little concern. And just putting a faster engine in a car is not the answer to significantly impact an end user’s life,” says Lauffin.
“If the cost is low enough,” he continues, “then once I write my data, it is already archived. So if we want to talk about the opportunity for a software application to more effectively move data to a secondary location and then use that data with version control, etc. to be used as both DR and archive, I would agree.”
Sante also disagrees and says that he’s not sure he understands why this would be the most significant result, since most applications are slow because of client load, LAN traffic, security policies, and storage I/O performance. “I’ve never heard a user tell me their application would run faster if they could only archive more,” says Sante.
This is Part I of a two-part article. Part II will address the following predictions:
- Users will not be able to automate storage management until they build dedicated storage management teams
- Through 2006, e-mail archiving products will dominate the overall archiving market
- Through 2005, storage virtualization will not improve storage utilization
- By 2006, storage area network (SAN) management functions will be embedded as part of storage element managers and storage resource management tools