Monday, May 27, 2024

The Real Cost of Storage


When I started writing this article, my premise was that the cost advantage of SATA drives over Fibre Channel (FC) drives is not that great in environments with performance requirements. Often when you write an article you are trying to prove a point, but since this is more of an exploration, I will try to be as objective as possible. In the next article, we will see whether my assumption holds up once the facts are analyzed. Keep in mind that I am talking about SATA and FC drives in a performance environment, not one where density is the primary consideration. Let's see if my hunch is correct.

SATA Obsession

Configurations using SATA drives are all the rage, given the lower cost of these drives. I initially saw SATA drives used in the enterprise for D2D backup, but now I am starting to hear about and see sites using SATA drives for all storage requirements because of the cost savings. SATA is starting to be used as mainline storage for large databases and for just about everything FC drives are typically used for, but that comes with some reliability concerns.

To respond to those valid concerns, RAID vendors had to develop controllers that support RAID-6, which tolerates multiple drive failures by using two parity drives instead of one. Because of the additional drive, you need more bandwidth between the controller and the drives than with RAID-5. For example, using the information from the Seagate Web site, the average transfer rate for a 750 GB SATA drive is 78 MB/sec. With RAID-5 8+1 you need 702 MB/sec to run the RAID set at full rate, but with RAID-6 8+2 you need 780 MB/sec, about 11 percent more bandwidth, to complete a rebuild at full rate. The same full-rate issues apply to streaming I/O for video, audio or any application that requires bulk reads or writes. One area that people seem to forget requires bulk rates is database re-indexing.
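The arithmetic above can be sketched in a few lines. This is a simple model, assuming the 78 MB/sec average transfer rate Seagate quotes for a 750 GB SATA drive and counting every drive in the set, parity included:

```python
# Sketch: controller-to-drive bandwidth needed to run a RAID set at
# full drive rate. Assumes 78 MB/sec average sustained transfer rate
# per drive (Seagate's figure for a 750 GB SATA drive).

DRIVE_RATE_MB_S = 78

def raid_set_bandwidth(data_drives: int, parity_drives: int) -> int:
    """Bandwidth (MB/sec) required to stream every drive in the set at rate."""
    return (data_drives + parity_drives) * DRIVE_RATE_MB_S

raid5 = raid_set_bandwidth(8, 1)  # RAID-5 8+1 -> 702 MB/sec
raid6 = raid_set_bandwidth(8, 2)  # RAID-6 8+2 -> 780 MB/sec
print(raid5, raid6, f"{raid6 / raid5 - 1:.0%} more")  # 702 780 11% more
```

The extra parity drive is what drives the bandwidth delta: the rebuild has to stream all ten drives, not nine.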

Almost everyone is at least looking at SATA drives. But given their bit error rate compared to FC drives (see Restoring Data That Matters), and the fact that they are much slower for IOPS because of the lower RPM and longer seek times, sites are still buying FC drives. The trend is changing, though, and I am not sure that is a good idea.

Meeting Performance

Let’s get to the details of my conjecture. I assume that there are environments that require a certain level of performance for streaming I/O, documented in MB/sec or GB/sec. To achieve this performance you need the following hardware components:

  • Memory bandwidth with enough performance to meet the I/O requirements
  • PCI bus with enough performance to meet I/O requirements (see A Historic Moment for Storage I/O)
  • Storage connections such as FC HBAs (the most common connection in the enterprise by far), USCSI, SATA, HCAs, or a NIC
  • Switch ports such as a FC switch (the most common), InfiniBand switches, or Ethernet switches
  • RAID controllers which have either an FC, IB, SATA, or Ethernet interface
  • Disk drives, which today have an FC, SATA or the up-and-coming SAS interface

You need enough of each of these components to meet the performance requirement, since any single component can become the bottleneck. For example, if you have 4 Gbit FC and a RAID controller capable of transferring data at 4 GB/sec, but only two HBAs with two ports each, you can only transfer at 3.2 GB/sec, assuming full duplex transfers, since each HBA port is capable of running at 800 MB/sec (400 MB/sec each for reads and writes). Even if enough disk drives are configured on the controller to run at 4 GB/sec, and none of the other hardware components is a bottleneck, the HBAs will be the bottleneck.
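The bottleneck example above can be worked through in code. This is a minimal sketch, assuming 4 Gbit FC ports at 400 MB/sec per direction (800 MB/sec full duplex); the slowest component in the path sets the achievable rate:

```python
# Sketch: the achievable rate is the minimum across the data path.
# Assumes 4 Gbit FC: 400 MB/sec per direction, 800 MB/sec full duplex per port.

PORT_FULL_DUPLEX_MB_S = 800  # 400 MB/sec read + 400 MB/sec write

def hba_bandwidth(hbas: int, ports_per_hba: int) -> int:
    """Aggregate full-duplex HBA bandwidth in MB/sec."""
    return hbas * ports_per_hba * PORT_FULL_DUPLEX_MB_S

controller_mb_s = 4000               # RAID controller: 4 GB/sec
hba_mb_s = hba_bandwidth(2, 2)       # 2 dual-port HBAs -> 3200 MB/sec
achievable = min(controller_mb_s, hba_mb_s)
print(achievable)  # 3200 MB/sec: the HBAs, not the controller, cap the system
```

Extending the `min()` to cover memory, PCI bus, switch ports and drives gives the same balanced-system check the next paragraph describes.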

So what is needed is a system that is balanced to meet the performance requirements. That type of system requires adequate hardware in each of these areas: memory, PCI bus bandwidth, HBAs, switch ports, RAID controllers and, of course, disk drives. Each of these components requires power and cooling, which cost money, but this O&M cost will not be considered in my assessment since I do not have cost numbers for this area.
