Friday, March 29, 2024

Solid State Drives Preparing to Dominate


A revolution that will see solid state drives (SSDs) sweep away a large percentage of conventional hard drives in servers and enterprise storage devices is about to begin.

The onslaught of solid state drives in the consumer gadget marketplace has already become unstoppable, with manufacturers ditching spinning hard disks in everything from iPods to removable storage media to netbooks as fast as they can. What happens in the consumer market usually makes it to business devices a year or so later, which puts the server market directly in the firing line.

The reason SSDs will begin to take over is that they offer some outstanding benefits to server administrators. Many of these stem from the fact that an SSD has no moving parts, unlike a conventional hard drive, which has rapidly spinning platters and complex moving read/write heads. Because it is solid state, an SSD is not prone to mechanical failure and is thus more reliable and able to withstand extreme shocks, vibrations and wide variations in temperature. This also means it uses far less power (perhaps 0.05W when idle and 2.4W when in use, compared to 4.5W when idle and 6W when in use for a conventional hard drive) and generates much less heat that then has to be dissipated.
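To get a feel for what those power figures mean at scale, here is a back-of-envelope sketch using the wattages quoted above. The duty cycle, fleet size and electricity price are purely illustrative assumptions, not measurements, and cooling savings are left out entirely.

```python
# Back-of-envelope power comparison using the figures quoted above.
# The duty cycle, drive count, and electricity price are illustrative
# assumptions, not measurements.

SSD_ACTIVE_W, SSD_IDLE_W = 2.4, 0.05   # per-drive figures from the article
HDD_ACTIVE_W, HDD_IDLE_W = 6.0, 4.5    # per-drive figures from the article

ACTIVE_FRACTION = 0.30      # assume drives are busy 30% of the time
DRIVES = 100                # hypothetical server fleet
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.12        # assumed electricity cost in dollars

def avg_watts(active_w, idle_w, active_fraction):
    """Time-weighted average power draw for one drive."""
    return active_w * active_fraction + idle_w * (1 - active_fraction)

ssd_kwh = avg_watts(SSD_ACTIVE_W, SSD_IDLE_W, ACTIVE_FRACTION) * DRIVES * HOURS_PER_YEAR / 1000
hdd_kwh = avg_watts(HDD_ACTIVE_W, HDD_IDLE_W, ACTIVE_FRACTION) * DRIVES * HOURS_PER_YEAR / 1000

print(f"SSD fleet: {ssd_kwh:.0f} kWh/yr, HDD fleet: {hdd_kwh:.0f} kWh/yr")
print(f"Estimated saving: ${(hdd_kwh - ssd_kwh) * PRICE_PER_KWH:.0f}/yr (before cooling)")
```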

But perhaps the biggest benefit of an SSD is the very rapid random read access speed it can offer. Accessing a given piece of data on a spinning hard disk involves moving a read head to a particular location and waiting for the right part of the disk to spin under the read head. These mechanical maneuvers take time, and the exact amount of time depends on the relative locations of the read heads and the data on the disk when a read request comes in. Typical read access times are of the order of 5-10ms.

By contrast, an SSD can access any given piece of data very rapidly, no matter where it is stored. Access times are constant (even when the disk has been idle for some time, as there is no “spin-up” time) and are of the order of 0.075ms — some 100 times faster than a hard drive. And fragmentation has no effect because access time is independent of the physical location of data.
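A quick bit of arithmetic shows what those access times imply for random read throughput. This ignores transfer time, queuing and controller overhead; it is simply the reciprocal of the latencies given above.

```python
# What the quoted access times imply for random reads per second.
# Transfer time, queuing, and controller overhead are ignored.

hdd_access_s = 7.5e-3    # midpoint of the 5-10 ms range quoted above
ssd_access_s = 0.075e-3  # 0.075 ms as quoted above

hdd_iops = 1 / hdd_access_s
ssd_iops = 1 / ssd_access_s

print(f"HDD: ~{hdd_iops:.0f} random reads/s")
print(f"SSD: ~{ssd_iops:.0f} random reads/s ({ssd_iops / hdd_iops:.0f}x)")
```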

As an indication of the performance increase you might expect by replacing a hard drive with an SSD, a quick test using a $99 32GB Crucial Internal 2.5″ Solid State Drive reduced the time required to boot Ubuntu 8.04 to 18 seconds on a test system, compared to the 32 seconds required to boot the operating system using a fast SATA II hard drive on the same system. (This is not intended to be a rigorous test but goes some way to illustrating the significant improvements in read times that an SSD can offer.)

The high read speeds that SSDs offer make them very attractive for use with certain applications such as web servers and some databases where high performance is key and I/O is a bottleneck. The drawback is their price: an OCZ Vertex 250GB SSD will set you back around $750, about 10 times the price of a conventional Seagate Barracuda drive with the same capacity. But as prices fall (which they are currently doing at a rate of about 60% per year) it will make financial sense to use SSDs in an increasingly large number of servers running speed-critical applications. And if prices continue to fall at the current rate then, it has been predicted, SSDs will start to make high-performance Fibre Channel (FC) drives in storage arrays obsolete within three years.
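The sketch below turns those figures into a rough cost-per-gigabyte comparison and a naive parity projection. Treating the hard-drive price as flat is a simplifying assumption, and the projection is illustrative only.

```python
import math

# Rough price trajectory using the figures quoted above: a 250 GB SSD at
# about $750 versus roughly one tenth of that for a hard drive of the
# same capacity, with SSD prices falling about 60% per year.  Holding
# the hard-drive price flat is a simplifying assumption.

ssd_per_gb = 750 / 250        # $3.00/GB today
hdd_per_gb = 75 / 250         # ~$0.30/GB today (assumed ~1/10 the SSD price)
annual_drop = 0.60            # SSD prices fall ~60% per year

years_to_parity = math.log(hdd_per_gb / ssd_per_gb) / math.log(1 - annual_drop)
print(f"SSD: ${ssd_per_gb:.2f}/GB, HDD: ${hdd_per_gb:.2f}/GB")
print(f"Naive projection to price parity: ~{years_to_parity:.1f} years")
```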

While SSDs offer superior access times to conventional drives, when it comes to writing data it's a whole different story. Writing is a slower process on an SSD than on a hard drive, and a phenomenon known as "write amplification" exacerbates this. Essentially, an SSD writes a whole block of data at once, so to write 4KB of data it may be necessary to read an entire block of 256KB or more of data from the SSD into memory, add the new 4KB to the existing data, and then rewrite the 256KB block back to the SSD. Advances in SSD technology are making write amplification and write speeds in general less of a problem than has been the case, but writing to an SSD is currently significantly slower than writing to a hard drive.
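The toy simulation below makes that read-modify-write cycle concrete, using the 4KB write and 256KB block sizes from the example above. Real SSD controllers are far more sophisticated; this just illustrates the arithmetic.

```python
# Toy illustration of the read-modify-write cycle described above: to
# change 4 KB inside a 256 KB erase block, the whole block is read,
# patched in memory, and written back.

BLOCK_SIZE = 256 * 1024   # erase-block size from the example above
WRITE_SIZE = 4 * 1024     # logical write issued by the host

def rewrite_block(block: bytearray, offset: int, data: bytes) -> bytearray:
    """Read-modify-write: patch `data` into a copy of the whole block."""
    new_block = bytearray(block)               # "read" the existing block
    new_block[offset:offset + len(data)] = data  # modify it in memory
    return new_block                           # the entire block is rewritten

block = bytearray(BLOCK_SIZE)
block = rewrite_block(block, 0, b"x" * WRITE_SIZE)

write_amplification = BLOCK_SIZE / WRITE_SIZE
print(f"{WRITE_SIZE // 1024} KB logical write -> {BLOCK_SIZE // 1024} KB "
      f"physical write (amplification ~{write_amplification:.0f}x)")
```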

There’s one more very big potential problem with SSDs, and that is to do with the limited number of times individual memory cells can be written to. Depending on the quality of the memory cells, this may be as few as 10,000 writes, or as many as 1 million or more. To an extent this problem is mitigated by a technique called wear leveling, which ensures that all parts of the SSD are used equally frequently, rather than burning out parts of it from over-use while other parts are never used. Typically an SSD will have a wear leveling efficiency of about 3 percent — meaning that there is a 3 percent differential between the most and least used blocks.
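Here is a minimal conceptual sketch of the idea behind wear leveling: redirect each write to the least-worn block so that erase counts stay close together, instead of letting a "hot" logical block burn out a single physical block. It is not a model of how any particular controller works.

```python
# Conceptual model of wear leveling, not a real controller algorithm.

NUM_BLOCKS = 100
WRITES = 10_000

# Without wear leveling: a hot logical block always maps to the same
# physical block, so that one block absorbs every erase.
no_leveling = [0] * NUM_BLOCKS
no_leveling[0] = WRITES

# With wear leveling: each write is redirected to the least-worn block.
leveled = [0] * NUM_BLOCKS
for _ in range(WRITES):
    target = min(range(NUM_BLOCKS), key=lambda b: leveled[b])
    leveled[target] += 1

print(f"No leveling  : hottest block erased {max(no_leveling)} times")
print(f"With leveling: hottest block erased {max(leveled)} times "
      f"(most and least used differ by {max(leveled) - min(leveled)})")
```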

Still, that raises a very difficult question: How long will an SSD last before it is worn out and unusable? The answer to this depends on how it is used, which in turn depends on the types of applications or data that it stores. Because of wear leveling, the size of an SSD also has a part to play in dictating how long it lasts. That’s because for a given amount of data stored and usage pattern, the bigger the SSD the longer it will take before all the individual memory cells are worn out — simply because there are more of them to wear out.
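A hedged back-of-envelope estimate can still be useful. Every input below is an illustrative assumption (drive size, cycle rating, daily host writes, write amplification), and it presumes idealized wear leveling that spreads writes evenly across the whole drive.

```python
# Hedged back-of-envelope endurance estimate.  All inputs are
# illustrative assumptions, with idealized wear leveling assumed.

capacity_gb = 32          # drive size (e.g. the 32 GB Crucial drive above)
write_cycles = 100_000    # assumed program/erase cycles per cell
host_writes_gb_day = 20   # assumed data written by the host per day
write_amplification = 10  # assumed physical writes per logical write

total_writable_gb = capacity_gb * write_cycles
physical_gb_day = host_writes_gb_day * write_amplification

lifetime_years = total_writable_gb / physical_gb_day / 365
print(f"Estimated wear-out horizon: ~{lifetime_years:.0f} years")
```

Under these assumptions the drive wears out only after several decades, which is why read-intensive workloads are a comfortable fit; a write-heavy workload or a much higher write amplification shortens the figure proportionally.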

So while it’s not possible to say in precise terms how long an SSD will last, it’s likely that for read-intensive applications, an SSD of appropriate capacity will not wear out for many years — almost certainly longer than a conventional hard drive that would prudently need to be replaced after just two to three years.

(As an aside, many people recommend avoiding journaling file systems such as Ext3 with SSDs, on the grounds that the extra write activity caused by the journaling could wear out the memory excessively quickly. But some recent quantitative research suggests that the impact of a journaling file system on SSD longevity is actually quite small: In most cases it results in no more than an additional 4 percent to 12 percent of data being written to the drive.)
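Applied to the endurance sketch above, that 4 to 12 percent of extra write traffic shrinks the estimated lifetime roughly in proportion, which is a small effect at this scale. The starting figure below simply carries over the result of the earlier sketch and inherits all of its assumptions.

```python
# Scale the earlier (assumed) lifetime estimate by the 4-12 percent of
# extra writes attributed to journaling in the research cited above.

base_lifetime_years = 44          # carried over from the sketch above
for overhead in (0.04, 0.12):
    adjusted = base_lifetime_years / (1 + overhead)
    print(f"With {overhead:.0%} journaling overhead: ~{adjusted:.0f} years")
```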

How practical is it to install an SSD in place of a conventional spinning hard drive? All popular SSDs come with a SATA interface and power connector, so in terms of connection it’s simply a matter of unplugging and removing an existing SATA drive and replacing it with the SSD. Most SSDs have a 2.5″ form factor rather than the 3.5″ form factor found in conventional drives, so you’ll probably need an adapter kit to mount an SSD in a 3.5″ drive bay. These can be picked up for about $5.

SSD prices are dropping rapidly, so by holding off purchasing you will undoubtedly save yourself some money. But for server applications that are read-intensive, where hard disk I/O is the bottleneck, and where high performance is key, moving to SSD storage today could give you a significant performance boost at a reasonable cost, without any significant change to the rest of your infrastructure. And as prices continue to drop, there'll be an increasing number of servers for which the speed and power advantages of an SSD make it preferable to old-fashioned spinning magnetic media.

Article courtesy of ServerWatch.
