SAN services are an increasingly popular option for businesses seeking greater flexibility in their data storage. A SAN, or storage area network, provides raw storage devices across a high-speed network, and is typically sold as a service of some kind to customers who also purchase other services.
A customer might have a server or a rack of servers provisioned for use in a data center, and then also purchase a certain amount of SAN storage in conjunction with it. SAN services may also be installed over a client’s existing, local fiber network and administered through a service plan.
Why use SAN services?
A customer may want to purchase SAN services — SAN provided as a service — for any number of reasons:
Mutable storage demands
If a customer’s storage needs change a great deal (for instance, with a seasonal business) and you want to rapidly provision and de-provision storage for them based on demand, it makes sense to go with SAN services, or a SAN system in general, instead of more conventional fixed, in-house storage. The customer worries less about individual disks or even arrays, and can simply think about what they need and when.
High availability
This is one of the standard selling points for SAN services and SANs in general: SANs are designed to remain robustly available. Another advantage is that SAN services provide extended availability for a customer’s servers as well, since storage and processing power can be heavily decoupled from each other. A faulty server can be decommissioned, and a new one provisioned and booted from the SAN itself, since SANs typically do support booting.
Geographical separation of processing and data
The storage can be geographically separate from the processing—not just in a separate room, but in another building entirely. This is handy if you have branch offices where various hardware installations have taken root, and it’s easier to move data or processing around than to move people or whole offices. The limits of the SAN services’ capabilities will of course depend on the robustness of the SAN network at hand.
Performance as a sales metric
A SAN’s performance is best measured in IOPS, or I/O operations per second—a measure of how many operations the SAN in question can sustain. In the case of SAN services, this is one of the biggest indexes of the tier of service being bought or sold, since a SAN services customer will typically be guaranteed no fewer than a certain number of IOPS for their service tier.
IOPS is priced and provisioned separately from storage capacity. The two metrics will mean radically different things to customers in this context, since they stem from different needs.
If a customer needs to store a lot of data but only access a little of it at a time, they could buy 1 TB of SAN storage but a very small IOPS allotment. Likewise, a customer that doesn’t need a lot of data but needs the fastest possible access to all of it could do the inverse: buy a small amount of storage (say, 200 GB) but opt for the highest possible tier of access speed.
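To make the decoupling concrete, here is a small sketch of capacity and IOPS billed as independent line items. The tier names and all dollar figures are invented for the illustration, not drawn from any real vendor’s price list:

```python
# Hypothetical illustration: capacity and guaranteed IOPS are billed
# as independent line items. All prices here are invented examples.

PRICE_PER_GB_MONTH = 0.10      # flat monthly rate per provisioned GB
IOPS_TIERS = {                 # tier name -> (guaranteed IOPS, monthly price)
    "basic":    (500,    25.0),
    "standard": (2_000,  80.0),
    "premium":  (10_000, 300.0),
}

def monthly_cost(capacity_gb, iops_tier):
    """Return (guaranteed IOPS, monthly cost) for a capacity/tier combination."""
    guaranteed_iops, tier_price = IOPS_TIERS[iops_tier]
    return guaranteed_iops, capacity_gb * PRICE_PER_GB_MONTH + tier_price

# Archive-heavy customer: lots of data, rarely touched.
print(monthly_cost(1_000, "basic"))    # (500, 125.0)
# Latency-sensitive customer: little data, fastest access.
print(monthly_cost(200, "premium"))    # (10000, 320.0)
```

The point of the sketch is simply that the two line items vary independently: the archive customer pays mostly for gigabytes, the latency-sensitive one mostly for guaranteed IOPS.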
A good SAN services vendor will want to offer flexibility in both storage amounts and IOPS, or at the very least tiered plans that reflect realistic usage based on customer scenarios they may have personal experience with.
Another important issue is response time, which should remain uniform across service tiers whenever possible, but which could also theoretically be tiered as a service offering within reasonable limits.
A SAN service that, for instance, harvests data across a large set and compiles a report to be downloaded at leisure could be hosted on a service with a slightly slower response time as a cost-saving measure. That said, there should be a reasonable floor for response time across the board. And most people will generally be interested only in IOPS, storage capacity, replication capacity or similar attributes as the defining factors for the SAN service tier they’re buying.
Performance as a physical metric
Most anyone in the business of selling SAN services (or purchasing them) should be mindful of how the physical makeup of a SAN will affect a customer.
The makeup of a disk-based SAN—the number of disks, the size and RPM factor of each disk, the RAID level of the arrays—will have a huge impact on performance, and often in unexpected ways.
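A rough rule of thumb shows why disk count and RPM matter so much: a rotating disk’s random-I/O ceiling is set by its average seek time plus rotational latency (half a revolution). The seek times below are typical ballpark figures for illustration, not any vendor’s specifications:

```python
# Back-of-the-envelope random IOPS for a single rotating disk.
# Rotational latency = half a revolution = 60 / (rpm * 2) seconds.
# Seek times are typical ballpark figures, not vendor specs.

def disk_iops(rpm, avg_seek_ms):
    latency_ms = 60_000.0 / (rpm * 2)            # half-revolution, in ms
    return 1_000.0 / (avg_seek_ms + latency_ms)  # operations per second

print(round(disk_iops(7_200, 8.5)))   # 79  -- ballpark for a 7.2k SATA drive
print(round(disk_iops(15_000, 3.5)))  # 182 -- ballpark for a 15k drive
# Raw random IOPS scales roughly with spindle count:
print(round(24 * disk_iops(15_000, 3.5)))  # a 24-disk shelf: ~4,400 IOPS
```

This is why adding spindles, and not just capacity, is the usual lever for speeding up a disk-based SAN.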
Anjo Kolk of Symantec has talked about how disks remain the biggest bottleneck in any SAN, and how adding disks to any SAN is generally the best way to speed it up. His discussion is highly Oracle-centric, but Oracle apps give SANs some of the heaviest workouts and make a good test case for how responsive a given SAN is.
The workload is also a major factor, since not all SAN customers are created equal. Yonah Russ, a systems architect with a good deal of SAN experience, did a little math and found that a SAN with an average workload under 70% did better with RAID 10 than RAID 5—a strong argument for tiering customers by workload and expected throughput whenever possible. (Having good vendor-specific data to work from as a reference, such as Storage Performance Council tests, helps all the more.)
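The RAID comparison comes down to write penalties: in the standard back-of-the-envelope model, each host write costs RAID 5 four disk operations (read data, read parity, write data, write parity) but RAID 10 only two. A sketch of that formula, with an assumed raw-IOPS figure and a 70/30 read/write mix as the example workload:

```python
# Effective host IOPS for an array, given a read/write mix and the
# per-RAID-level write penalty (disk operations per host write).
WRITE_PENALTY = {"raid10": 2, "raid5": 4}

def host_iops(raw_disk_iops, read_fraction, raid_level):
    penalty = WRITE_PENALTY[raid_level]
    write_fraction = 1.0 - read_fraction
    return raw_disk_iops / (read_fraction + write_fraction * penalty)

RAW = 4_000  # assumed raw random IOPS for the example array
# 70% reads / 30% writes: RAID 10 leaves far more headroom than RAID 5.
print(round(host_iops(RAW, 0.70, "raid10")))  # 3077
print(round(host_iops(RAW, 0.70, "raid5")))   # 2105
```

The heavier the write fraction, the wider the gap—which is exactly why tiering customers by workload, not just capacity, pays off.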
Another major element for those selling SAN services is anticipating collisions in capacity and utilization. A SAN whitepaper by James Morle of Scale Abilities, Ltd., talks about how to perform such planning (again, in an Oracle-centric fashion), and recommends a volume-partitioning scheme that sacrifices space for IOPS. His conclusions are similar to Kolk’s above: more disks equals more IOPS, not just more storage, and IOPS is the most desirable metric. Even if you don’t implement his specific advice, the gist is clear: those selling SAN services can guarantee, much more reliably, the behavior of a given customer’s service tier by thinking ahead.
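The space-for-IOPS trade can be shown in one line of arithmetic: the number of spindles a volume must span is set by its IOPS target, not its size, so a fast volume ends up backed by far more raw capacity than it uses. The per-disk figures below are assumed ballpark values for the sake of the sketch:

```python
import math

# How many spindles does a volume need? The IOPS target decides, not the size.
# Per-disk figures are assumed ballpark values, not vendor specs.
DISK_IOPS = 180
DISK_CAPACITY_GB = 300

def spindles_needed(capacity_gb, iops_target):
    by_capacity = math.ceil(capacity_gb / DISK_CAPACITY_GB)
    by_iops = math.ceil(iops_target / DISK_IOPS)
    return max(by_capacity, by_iops)

# A 1 TB volume that must sustain 5,000 IOPS:
n = spindles_needed(1_000, 5_000)
print(n)                     # 28 disks -- driven by the IOPS target, not size
print(n * DISK_CAPACITY_GB)  # 8,400 GB of raw capacity behind a 1 TB volume
```

That surplus raw capacity is the space being "sacrificed" for IOPS—and it is exactly the kind of collision in capacity and utilization a vendor has to plan for in advance.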
Amazon EBS
It’s worth mentioning Amazon’s Elastic Block Store (EBS) service, if only as a footnote to conventional SAN services. EBS is conceptually quite similar to a SAN: it works like a giant block storage device (with sizes varying as needed); it supports volume snapshotting and replication; and customers only pay for what they use.
That said, the usage model is different from most other SAN services. For one, throughput is flat: there is as yet no tiering of I/O priority across different account levels. Also, EBS only works with Amazon’s own EC2 computing services. It can’t be hitched to an arbitrary machine, at least not without building some kind of interface between EC2 and the machine in question. Finally, it doesn’t allow the same level of granular control over storage volumes that you have with a conventional SAN, so it shouldn’t be considered a one-to-one replacement for such things.