Storage consolidation and server consolidation sound great in theory. Reduce the number of servers, cut down on management complexity, and improve return on investment.
But does this work out in reality?
Should every organization adopt networked storage? Are there parameters that must be met before organizations implement a Storage Area Network (SAN)?
“Networked storage doesn’t come cheap,” says Lynn Neal, senior systems integrator for Sprint. “It takes a strong business case to justify it — one based on specific needs that address a real issue.”
Opinions vary concerning how big you have to grow before a SAN becomes viable. Tom Major, vice president of IP Storage at Boulder, Colo.-based LeftHand Networks, says anyone with fewer than 20 servers or so would be better off sticking with direct-attached storage (DAS).
“Up to 20 servers should be reasonably manageable unless you have large growth expectations,” he explains. “Beyond that, shared storage becomes more viable.”
Rather than setting the bar based on servers, Joe DeRosa of Maxtor estimates that a 3TB minimum is probably needed before a SAN becomes viable.
“A Storage Area Network incurs big upfront costs,” says DeRosa. “You need to have a tremendous amount of storage in use to justify the expense.”
In support of this opinion, he cites Gartner Group figures showing that 75 percent of the disks currently deployed in the enterprise remain direct-attached. For most organizations, it seems, the price tag of a SAN, with its attendant Fibre Channel (FC) switches, storage arrays, and host bus adapters (HBAs), is a little hefty.
You don’t have to be a Fortune 500 company to adopt a Fibre Channel (FC) SAN, though.
Take the case of Denver Health Hospital and Medical Center. This organization handles 20,000 admissions and 600,000 outpatient visits annually. During the late 1990s, it experienced an explosion in its server population, from six servers in 1996 to 97 in 1999. All storage was direct-attached at that point, spread across servers running a wide range of operating systems.
“We were running out of data center space, couldn’t manage our storage effectively, and suffered from severe underutilization on many servers,” says Jeff Pelot, CTO of Denver Health. Denver Health’s SAN consists of EMC CLARiiON boxes (3TB) along with Brocade switches. Despite the high cost, the hospital felt the investment paid off in the form of a far more manageable environment.
But determining which IT investments are worthwhile and which are not can be a difficult proposition. Vendors are accused of doctoring, if not conjuring, total cost of ownership (TCO) and return on investment (ROI) figures out of thin air.
“Nobody believes vendor ROI calculations because most are not worth believing,” says Michael Karp, senior analyst at Enterprise Management Associates (EMA). “You have to understand and challenge their underlying assumptions.”
Karp suggests a more workable approach: combine ROI and TCO metrics and take all possible expenses into account, including acquisition, training, personnel, and management costs. Viewing the entire picture over a three-year time frame yields a more realistic impression and helps reduce the likelihood of rash decisions.
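Karp’s method boils down to simple arithmetic. The sketch below compares a hypothetical DAS deployment against a hypothetical SAN over a three-year horizon; every dollar figure is an invented placeholder, not vendor pricing, and the cost categories are simply the ones Karp names.

```python
# Hypothetical three-year TCO comparison, DAS vs. SAN.
# All figures are illustrative placeholders, not real vendor pricing.

def three_year_tco(acquisition, training, annual_personnel, annual_management):
    """One-time costs plus three years of recurring costs."""
    return acquisition + training + 3 * (annual_personnel + annual_management)

# Assumed profile: DAS is cheap to buy but costly to run and manage.
das = three_year_tco(acquisition=150_000, training=5_000,
                     annual_personnel=120_000, annual_management=40_000)

# Assumed profile: the SAN costs far more up front but less per year.
san = three_year_tco(acquisition=400_000, training=30_000,
                     annual_personnel=80_000, annual_management=20_000)

print(f"DAS 3-year TCO: ${das:,}")
print(f"SAN 3-year TCO: ${san:,}")
print(f"SAN saves ${das - san:,}" if san < das else f"DAS saves ${san - das:,}")
```

With these particular assumptions the SAN’s upfront costs still outweigh its lower running costs after three years; nudge the personnel savings upward or extend the horizon and the result flips, which is exactly why the underlying assumptions deserve scrutiny.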
One possible way to reduce the considerable upfront infrastructure costs that accompany an FC SAN is to opt for an IP-based SAN. This route takes expensive switches, HBAs, and the underlying FC cabling out of the equation. Some vendors charge that performance suffers badly on IP compared to FC, but the latest breed of IP SANs seems largely to have overcome this hurdle.
Denver Health, for instance, has experience with both FC and IP SANs. When its server count surged to 166 and it discovered 1,100 Access databases scattered throughout its campus, IT staff embarked on yet another round of server, storage, and database consolidation. This time the hospital chose an IP SAN from LeftHand Networks.
“We found FC SANs to be expensive to implement, and they required specialized training and technicians to operate,” says Pelot. “An IP SAN is so much more affordable, and implementing one is like building something with Legos.”
In Pelot’s estimation, his facility doesn’t require an FC build-out, especially now that IP technology has achieved a level of maturity. Denver Health fully intends to maintain its existing FC SAN; with the infrastructure already in place, it makes no sense to eliminate it. But with its switch ports already at capacity, future expansion will follow the IP path. And what about the thorny performance question?
“Initially, we met with some resistance in adopting an IP SAN, especially with regard to utilizing IDE drives,” says Pelot. “But although I/O is very slightly down compared to the FC SAN, the users have never noticed the difference.”
Obviously, however, IP SANs aren’t the solution in every instance.
Just as Denver Health will continue using its FC infrastructure for years to come, there are many organizations around that will maintain and add to their switch fabrics. Further, higher-end applications with the most stringent performance expectations — such as some research labs or financial institutions — will continue to mandate FC for years to come.
All Eggs in One SAN
Regardless of IP or FC, though, there is one other major cost aspect that deserves careful attention: the single point of failure. If you house data on five systems, one failure means a 20 percent loss. House everything on a single large storage array within a SAN architecture and there are fewer components to fail, but the one failure that does occur results in a 100 percent loss. It’s the old adage about putting all your eggs in one basket.
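The eggs-in-one-basket trade-off is easy to put in numbers. Assuming, purely for illustration, a 2 percent annual failure probability per system or array, the expected yearly data loss is identical in both layouts; what consolidation changes is how concentrated the damage is when a failure actually happens.

```python
# Illustrative single-point-of-failure arithmetic. The 2% annual failure
# probability is an assumed placeholder, not a measured figure.

p_fail = 0.02  # assumed chance that any one system or array fails in a year

# Five separate systems: a failure of any one loses 20% of the data.
expected_loss_five = 5 * p_fail * 0.20

# One consolidated array: a single failure loses 100% of the data.
expected_loss_one = 1 * p_fail * 1.00

# Same expected loss either way; consolidation concentrates the risk.
print(expected_loss_five, expected_loss_one)
```

Identical expected loss, very different worst case: the consolidated array turns a 2 percent chance of a minor incident into a 2 percent chance of losing everything, and that concentrated exposure is what spending on redundancy is meant to buy down.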
“Anyone implementing a SAN had better take disaster recovery (DR) and business continuance (BC) into account,” says Mark Bradley, storage strategist at Computer Associates. “You have to add DR, BC, backup, and fail-over into the equation and realize that these redundancy features will increase your overall costs.”