The idea of putting storage systems on their own network offers flexibility, scalability, and rapid shared access to large hierarchical storage arrays.
Analysts currently predict a SAN market of $11 billion by 2002, up from virtually nothing in 1999. Surveys show that managers want SANs as a way to improve disaster recovery, availability, and scalability with less emphasis on improved data sharing, performance, protection and administration. Many large installations now have SANs or are considering them as a future investment.
In theory, SANs allow users to access information from any platform (particularly important in mixed Windows/Unix installations), share common storage devices and files, and readily increase storage capacity to meet rapidly mounting needs. This should all happen while actually reducing the burden on clogged LANs and WANs.
Backup has become a key issue in most installations. Not only is there more data these days, but the window traditionally used for backup (typically late at night) is no longer available, thanks to 24-by-7-by-365 e-commerce, Internet, and data warehousing applications.
Thus LAN-free, serverless backup becomes essential, and a SAN provides a natural way to accomplish it. Such a SAN should include an entire storage hierarchy, with NAS, disks, tapes, tape libraries, and other archiving facilities.
What is the downside? One problem is that the SAN introduces another network. Like LANs or WANs, a SAN requires management software, utilities, operating system support, security, maintenance, and training. The added complexity could easily outweigh the advantages. Doubling the network staff's workload is not feasible when there is already plenty to do and no one available to hire.
At best, SANs represent a major infrastructure change that requires a compelling justification. They also may require higher-bandwidth cabling and aren't currently extensible over long distances.
One technique calls for using the same protocols for all networks: run storage traffic over IP. There are several alternatives here, including storage over Gigabit Ethernet, iSCSI, and Fibre Channel over IP.
The Gigabit Ethernet alternative looks attractive on the surface since it would avoid duplication of training, test equipment and software. This is particularly important to smaller installations, which value ease of use over maximum performance.
The iSCSI alternative would preserve current investments in storage devices, software, and management methods. Still another possibility is the emerging InfiniBand standard, which could provide a migration path from all past approaches.
Most current SANs offer only limited interoperability. Equipment from different manufacturers often won't work together despite the use of common software and hardware interfaces. Third-party testing has become necessary to ensure interoperability.
What does the future hold? Clearly, software will account for a growing share of the cost and complexity of SANs. Hardware costs will decrease and performance will improve, but software, unfortunately, doesn't readily allow a silicon solution. Thus integrated software companies will play a major role in the emergence of SANs.
SANs would serve the needs of global enterprises, network service providers, and the emerging storage service providers. Users could then tap into services and storage as needed, and pay only for what they require, with no long-term capital investment.
Elizabeth M. Ferrarini is a free-lance writer from Boston.
This article was first published on Network Storage Forum, an internet.com site.