SAN services are an increasingly popular technology used by businesses seeking greater flexibility in their data storage. A SAN, or storage area network, provides raw storage devices across a high-speed network, and is typically sold as a service of some kind to customers who also purchase other services.
A customer might have a server or a rack of servers provisioned for use in a data center, and then also purchase a certain amount of SAN storage in conjunction with it. SAN services may also be installed on a client's existing, local fiber network and administered through a service plan.
A customer may want to purchase SAN services -- SAN provided as a service -- for any number of reasons:
If a customer's storage needs change a great deal -- for instance, with a seasonal business -- and the provider needs to rapidly provision and de-provision storage based on demand, it makes sense to go with SAN services, or a SAN system in general, instead of more conventional fixed, in-house storage. The customer worries less about individual disks or even arrays, and can simply think about what they need and when.
Robust availability is another standard selling point for SAN services and SANs in general: SANs are designed to stay up. SAN services also extend the availability of a customer's servers, since storage and processing power can be heavily decoupled from each other. A faulty server can be decommissioned and then a new one provisioned and booted from the SAN itself, since SANs typically do support booting.
The storage can be geographically separate from the processing -- not just in a separate room, but in another building entirely. This is handy if you have branch offices where various hardware installations have arisen or taken root, and it's easier to move data or processing around than to move people or whole offices. The limits of the SAN service's capabilities will of course depend on the robustness of the SAN network at hand.
A SAN's performance is best measured in IOPS, or I/O operations per second -- an index of how much throughput can be performed on the SAN in question. In the case of SAN services, this is one of the biggest indexes of the tier of service being bought or sold, since a SAN services customer will typically have a guarantee of no less than a certain number of IOPS for their service tier.
IOPS is kept distinct from storage capacity. The two metrics will mean radically different things to customers in this context, since they stem from different needs.
If a customer needs to store a lot of data but only access a little of it at a time, they could buy 1 TB of SAN storage but a very small IOPS allotment. Likewise, a customer that doesn't need a lot of data but needs the fastest possible access to all of it could do the inverse: buy a small amount of storage (say, 200 GB) but opt for the highest possible tier of access speed.
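To make the decoupling concrete, here is a minimal sketch of a pricing model where capacity and IOPS are billed as independent dimensions. All tier names and prices are hypothetical, invented purely for illustration:

```python
# Sketch: storage capacity and IOPS priced as independent dimensions.
# Tier names and monthly prices below are hypothetical assumptions.
STORAGE_TIERS = {"200GB": 20, "1TB": 80}               # price per capacity tier
IOPS_TIERS = {"low": 10, "standard": 40, "high": 150}  # price per IOPS tier

def monthly_cost(storage_tier: str, iops_tier: str) -> int:
    """Capacity and throughput are billed separately, so each customer
    can mix a large/small capacity with a large/small IOPS allotment."""
    return STORAGE_TIERS[storage_tier] + IOPS_TIERS[iops_tier]

# The archival customer: lots of storage, little throughput.
archival = monthly_cost("1TB", "low")     # 80 + 10 = 90
# The latency-sensitive customer: little storage, maximum throughput.
hot_data = monthly_cost("200GB", "high")  # 20 + 150 = 170
```

The point of keeping the two dimensions orthogonal is exactly the one made above: the archival customer and the hot-data customer pay for what they actually consume, rather than being forced into a single bundled tier.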
A good SAN services vendor will want to offer flexibility in both storage amounts and IOPS -- or, at the very least, tiered plans that reflect realistic usage scenarios the vendor has seen firsthand.
Another important issue is response time, which should remain uniform across service tiers whenever possible, but which could also theoretically be tiered as a service offering within reasonable limits.
A SAN service that, for instance, harvests data across a large set and compiles a report to be downloaded at leisure could be hosted on a service with a slightly slower response time as a cost-saving measure. That said, there should be a reasonable floor for response time across the board. And most people will generally be interested only in IOPS, storage capacity, replication capacity or similar attributes as the defining factors for the SAN service tier they're buying.
Most anyone in the business of selling SAN services (or purchasing them) should be mindful of how the physical makeup of a SAN will affect a customer.
The makeup of a disk-based SAN -- the number of disks, the size and RPM factor of each disk, the RAID level of the arrays -- will have a huge impact on performance, and often in unexpected ways.
Anjo Kolk of Symantec has talked about how disks remain the biggest bottleneck in any SAN, and how adding disks to any SAN is generally the best way to speed it up. His discussion is highly Oracle-centric, but Oracle apps give SANs some of the heaviest workouts and make a good test case for how responsive a given SAN is.
The workload is also a major factor, since not all SAN customers are created equal. Yonah Russ, a systems architect with a good deal of SAN experience, did a little math and found that a SAN with an average workload under 70% did better with RAID 10 than RAID 5 -- a strong argument for tiering customers by workload and expected throughput whenever possible. (Having good vendor-specific data to work from as a reference, such as Storage Performance Council tests, helps all the more.)
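The RAID 10 versus RAID 5 comparison comes down to standard write-penalty arithmetic: RAID 10 mirrors, so each logical write costs two back-end writes, while RAID 5 costs four back-end operations per write (read data, read parity, write data, write parity). A rough sketch of that math, with the disk count and per-disk IOPS figures being illustrative assumptions:

```python
# Back-of-envelope RAID write-penalty math. The penalties are the
# standard ones (RAID 10 = 2, RAID 5 = 4); disk figures are assumptions.
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4}

def effective_iops(disks: int, iops_per_disk: int,
                   write_fraction: float, raid_level: str) -> float:
    """Front-end IOPS an array can sustain, given that each logical
    write costs WRITE_PENALTY back-end operations at this RAID level."""
    raw = disks * iops_per_disk  # total back-end IOPS across all spindles
    penalty = WRITE_PENALTY[raid_level]
    return raw / ((1 - write_fraction) + write_fraction * penalty)

# Eight disks at ~180 IOPS each, with a 30% write mix:
for level in ("RAID10", "RAID5"):
    print(level, round(effective_iops(8, 180, 0.30, level)))
# RAID 10 sustains roughly 1108 front-end IOPS here vs. roughly 758
# for RAID 5 -- the gap that makes workload-based tiering worthwhile.
```

The heavier the write mix, the wider the gap, which is why an average-workload figure like the 70% threshold mentioned above matters when choosing a RAID level for a customer tier.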
Another major element for those selling SAN services is anticipating collisions in capacity and utilization. A SAN whitepaper by James Morle of Scale Abilities, Ltd., talks about how to perform such planning (again, in an Oracle-centric fashion), and recommends a volume-partitioning scheme that sacrifices space for IOPS. His conclusions are similar to Kolk's above: more disks equals more IOPS, not just more storage, and IOPS is the most desirable metric. Even if you don't implement his specific advice, the gist is clear: those selling SAN services can guarantee, much more reliably, the behavior of a given customer's service tier by thinking ahead.
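The "sacrifice space for IOPS" idea reduces to a simple sizing rule: provision whichever disk count is larger -- the one the capacity guarantee demands, or the one the IOPS guarantee demands. A minimal sketch, with per-disk capacity and IOPS figures assumed for illustration:

```python
import math

# Sketch of the "more disks equals more IOPS" sizing rule. A service
# tier must be provisioned for whichever guarantee -- capacity or
# IOPS -- needs more spindles. Disk specs here are assumptions.
def disks_needed(capacity_gb: int, iops_guarantee: int,
                 disk_gb: int = 600, disk_iops: int = 180) -> int:
    for_space = math.ceil(capacity_gb / disk_gb)
    for_iops = math.ceil(iops_guarantee / disk_iops)
    return max(for_space, for_iops)

# 1 TB at 3,000 guaranteed IOPS: capacity alone needs 2 disks, but the
# IOPS guarantee needs 17 -- the unused space on those extra spindles
# is the "sacrifice" that buys the throughput.
print(disks_needed(1000, 3000))  # → 17
```

Seen this way, Morle's recommendation is just capacity planning done against the binding constraint: for throughput-heavy tiers, spindle count, not gigabytes, drives the purchase.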
It's worth mentioning Amazon's Elastic Block Store (EBS) service if only as a footnote to conventional SAN services. EBS is conceptually quite similar to a SAN: it works like a giant block storage device (with sizes varying as needed); it supports volume snapshotting and replication; and customers only pay for what they use.
That said, the usage model is different from most other SAN services. For one, throughput is flat: there is as yet no tiering of I/O priority across different account levels. Also, EBS only works with Amazon's own EC2 computing services. It can't be hitched to an arbitrary machine, at least not without building some kind of interface between EC2 and the machine in question. Finally, it doesn't allow the same level of granular control over storage volumes that you have with a conventional SAN, so it shouldn't be considered a one-to-one replacement for such things.