Well, two things are true when it comes to SAN integration: SANs are faster and more reliable than ever before, and enterprises are finding more and more sound reasons to integrate them. But integration can be a difficult undertaking, with several critical challenges along the way, all of which can frustrate both the business side and the technology side of operations.
Some industry experts feel that the issue is further exacerbated by claims from some storage vendors that only they can offer the needed functionality, both unifying the administration of multiple SANs into one console and integrating the storage resources between the SANs. In other words, some vendors boast that they alone have found the Holy Grail of enterprise storage: true integration.
True Integration Equals Industry Standards
Quite a few storage industry experts, on the other hand, believe that true integration will never be achievable by any one vendor. Wayne Lam, vice president at FalconStor, says that while several vendors have tried to create true integration, none of these efforts has been successful. Lam believes common standards are needed for true integration and that it will take an industry organization such as SNIA, FCIA, etc. to come up with a standard like CIM (Common Information Model) and create a common platform for storage vendors to integrate and build on.
“The challenge and issue with true integration is that not all storage vendors stick to the standard and instead sometimes implement their conforming solution in a way that may cause other compatibility issues,” he says. These issues have to be resolved before something can truly become a standard. However, Lam does point to a brighter side: the CIM initiative at least offers some hope for integration, since SNIA's CIM can serve as the common platform for storage vendors and, when implemented properly, makes true integration a possibility.
Others feel that as an industry, storage vendors are making real progress in this regard. According to Paul Ross, director of storage network marketing at EMC, some vendors are having more success than others. However, he adds, the task becomes much easier when you start with a storage architecture that is modular and that is transport- and protocol-transparent.
Ross believes that the key issue for IT organizations addressing this challenge is implementing a centralized SRM (storage resource management) infrastructure. “The challenge isn’t whether two SANs are physically separate or connected, but rather whether these SANs can be managed as a single resource,” he explains.
The Mystery of Integrating Multiple SANs
There are other challenges and issues facing storage customers when it comes to integrating multiple SANs, but Lam says it all boils down to three little words: compatibility, compatibility, and compatibility. He believes that the biggest reason for this is that even though there are standards out there such as CIM, some vendors either choose not to follow the standards at all or they follow them in ways that suit their individual proprietary methods.
“When customers try to integrate multiple SANs, they run into many difficulties because some vendors have requirements that conflict when islands of SANs are integrated, such as the switch settings to the storage, or when there’s an HBA from a host that is supposed to access storage from both SANs, and one vendor requires the timing or loop count/loop down time be a certain value, while the other vendor requires a completely different setting. The question the customer then faces is: ‘Which value do I choose?’” says Lam. “And, the answer is,” he continues, “it doesn’t matter, because whichever setting the customer chooses will violate the other vendor’s recommended setting.”
Lam points out that one of the ways to get around this dilemma is to have middleware (such as a storage appliance) in the fabric. “From the host’s point of view, the customer only has one storage platform to deal with — the storage appliance platform.” And, he adds, because the appliance is in the data path and can translate the differences between storage devices and can access different vendor’s storage, the host does not have to be specific to any storage vendor’s recommendations or specifications.
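The appliance-in-the-fabric approach Lam describes can be sketched as a simple translation layer. The following Python sketch is purely illustrative; all class names, settings, and values are invented, and the conflicting `link_down_timeout` values stand in for the kind of vendor-specific requirements discussed above:

```python
# Hypothetical sketch of a storage appliance acting as middleware: hosts
# talk to one platform, while per-vendor quirks (such as conflicting
# timeout settings) are handled behind it. All names/values are invented.

class VendorABackend:
    link_down_timeout = 30  # vendor A's required setting (invented)

    def read(self, lun, block):
        return f"vendor-A data from LUN {lun}, block {block}"

class VendorBBackend:
    link_down_timeout = 10  # vendor B requires a conflicting value (invented)

    def read(self, lun, block):
        return f"vendor-B data from LUN {lun}, block {block}"

class StorageAppliance:
    """The single storage platform the host sees. It routes each request
    to the right backend and honors that vendor's settings itself, so the
    host never has to pick between conflicting recommendations."""

    def __init__(self):
        self.backends = {}  # virtual LUN -> vendor backend

    def map_lun(self, lun, backend):
        self.backends[lun] = backend

    def read(self, lun, block):
        # The appliance, not the host, deals with vendor differences.
        return self.backends[lun].read(lun, block)

appliance = StorageAppliance()
appliance.map_lun(0, VendorABackend())
appliance.map_lun(1, VendorBBackend())
print(appliance.read(0, 512))  # host issues one uniform call
```

Because the appliance sits in the data path, the host-side configuration stays uniform no matter which vendor's array actually serves the request.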
Other industry experts believe that storage customers must carefully investigate the ability to have storage subsystems from different vendors share the same SAN, especially if the same host is expected to access both storage systems. “What’s most important to the customer is to prove that the end-to-end solution will work in a real environment,” says Ross. “This means not only qualifying storage and a switch, but also including HBAs, switch vendor interoperability, software, SAN extensions such as DWDM and FC over SONET, and support for multiple storage vendors at the same time.”
Fabric Instability: Another Major Challenge
Ross adds that fabric stability is another challenge facing storage customers. He believes that the FC SAN’s mechanism of logging in and out of a network provides a graceful way to add and remove devices in the SAN. “Zoning is a mechanism for controlling access among devices in a SAN, and when a customer tries to deploy a large SAN, these changes generate a ‘State Change Notification’ to all the other participants in the SAN.” The problem is that the larger the SAN gets, the more of this traffic gets generated, potentially hindering performance.
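The scaling problem Ross points to can be seen with simple arithmetic: if each change notifies every other device registered in the affected zone, the notification load grows roughly linearly with zone size. The sketch below is a back-of-the-envelope illustration only; the rate of twenty changes per day is an invented figure, not a measured one:

```python
# Back-of-the-envelope sketch of how state-change traffic grows with SAN
# size: each device login/logout can trigger a notification to every
# other registered device in the zone. The change rate is illustrative.

def notifications_per_change(devices_in_zone: int) -> int:
    """One change notifies every other zone member."""
    return max(devices_in_zone - 1, 0)

def daily_scn_load(devices_in_zone: int, changes_per_day: int) -> int:
    """Total notifications generated per day in one zone."""
    return notifications_per_change(devices_in_zone) * changes_per_day

for n in (16, 64, 256):
    print(f"{n:4d} devices -> {daily_scn_load(n, changes_per_day=20)} notifications/day")
```

A 16-device zone generates 300 notifications a day at that rate; a 256-device zone generates 5,100, which is the kind of overhead growth that virtual SANs and sub-nets are meant to contain.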
Another issue, according to Ross, is that if a mistake is made in a very large SAN, such as a zoning change, this mistake could potentially take down the entire SAN. “New mechanisms that allow for building large domain count SANs that minimize the propagation of overhead traffic are beginning to be delivered,” says Ross. “These new features create virtual SANs or sub-nets to isolate portions of SANs from one another.”
Complexity Among the Switches
Another major challenge facing storage customers is the sheer complexity of today’s SANs. For example, says Ross, if SAN islands are made up of mid-range switches, which are typically of the 16- or 32-port variety, merging them into a large SAN means that you have to consume many ports just to provide inter-switch connectivity. And there are practical limits to the number of hops and domains.
“Exceeding the domain and hop count can result in fabric timeouts, which will affect performance,” says Ross. He believes the best way to build large SANs is to use large SAN switches, usually referred to as director-class switches. “These devices offer high port capacity, which helps keep domain count low and, at the same time, offers a high degree of availability through the use of redundant sub-systems such as power, cooling, and control processors.”
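The port arithmetic behind Ross's point can be made concrete. Assuming, purely for illustration, a full-mesh topology with one ISL between every pair of switches, the ports left over for hosts and storage shrink quickly as islands are merged:

```python
# Worked port arithmetic for merging SAN islands, assuming (for
# illustration) a full mesh with one ISL between every pair of switches.

def usable_ports(num_switches: int, ports_per_switch: int) -> int:
    """Ports left for hosts and storage after full-mesh ISL cabling.
    Each switch dedicates one port per peer switch to an ISL."""
    isl_ports_per_switch = num_switches - 1
    free_per_switch = ports_per_switch - isl_ports_per_switch
    return num_switches * max(free_per_switch, 0)

# Eight 16-port midrange switches: 8 * (16 - 7) = 72 of 128 ports usable,
# with 8 domains in the fabric.
print(usable_ports(8, 16))

# One hypothetical 128-port director: all 128 ports usable, one domain.
print(usable_ports(1, 128))
```

In this sketch, merging eight 16-port switches burns 56 of 128 ports on inter-switch links alone, while a single director-class switch of equivalent capacity keeps every port available and the domain count at one.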
This is Part I of a two-part article on True Integration: Fact or Fiction. Part II of the article will cover the following:
- Do major storage challenges exist because storage vendors are not always aware of the latest ‘arrays’ offered by their competitors and may not have the drivers to support the various hardware? If so, what can be done to change this problem?
- Alternatively, do these challenges instead exist because many storage vendors manage their storage arrays using proprietary protocols that make interoperability a challenge by requiring either continual cross-licensing or reverse engineering by competitors?
- Has storage technology reached the point where true integration is even achievable?
- Will a common set of protocols, such as the CIM-SAN1 protocol being developed by the SNIA, change the future of integration?
- Will more cooperation between storage vendors change the future of integration? Is this realistic?
Feature courtesy of Enterprise Storage Forum.