The ZFS Story: Clearing Up the Confusion: Page 2

The ZFS file system has much to recommend it, yet a good deal of confusion exists about it as well.


(Page 2 of 3)

To handle very large filesystems gracefully, ZFS was built with data integrity features: every block carries a checksum (hash), and when a check fails, the integrated software RAID layer can repair the corrupted data from a redundant copy. This was seen as necessary given the anticipated size of future ZFS filesystems. Filesystem corruption is a rarely seen phenomenon, but as filesystems grow in size, the risk increases. This lesser-known feature of ZFS is possibly its greatest.
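The idea can be illustrated with a toy model (this is a conceptual sketch, not ZFS' actual on-disk format): each block is stored on two mirror "devices" alongside a checksum of the correct contents, and a read verifies the copy it returns, repairing any copy that has silently gone bad.

```python
import hashlib

class MirroredBlock:
    """Toy model of ZFS-style self-healing: each block is stored on two
    mirror copies alongside a checksum of its correct contents."""

    def __init__(self, data: bytes):
        self.checksum = hashlib.sha256(data).hexdigest()
        self.copies = [data, data]  # two mirror copies

    def read(self) -> bytes:
        # Verify each copy against the stored checksum; return the first
        # good one and overwrite any corrupted copy with it (self-healing).
        good = next(c for c in self.copies
                    if hashlib.sha256(c).hexdigest() == self.checksum)
        self.copies = [good, good]
        return good

block = MirroredBlock(b"important data")
block.copies[0] = b"bit rot!"                 # silently corrupt one copy
assert block.read() == b"important data"      # read still returns good data
assert block.copies[0] == b"important data"   # corrupted copy was repaired
```

Note that a plain mirror without checksums cannot do this: it would have no way to know which of two differing copies is the good one.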

ZFS also changed how filesystem checks are handled. Because ZFS was expected to be used on very large filesystems, there was a genuine fear that a filesystem check at boot time could take impossibly long to complete, so an alternative strategy was adopted. Instead of waiting to run a check at reboot, the system performs a similar check through a "scrubbing" process that runs while the system is live. This adds overhead during normal operation, but the system recovers from an unexpected restart far more quickly. It is a trade-off, but one widely seen as very positive.
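On a real system an administrator kicks this off with `zpool scrub <poolname>` and watches progress with `zpool status`. Conceptually, a scrub is just a proactive walk over every block while the pool stays online, verifying and repairing as it goes. A minimal sketch (a toy model, not ZFS internals), reusing the checksum-plus-mirror idea from above:

```python
import hashlib

def make_block(data: bytes):
    # A block: checksum of the correct data plus two mirror copies.
    return {"checksum": hashlib.sha256(data).hexdigest(),
            "copies": [data, data]}

def scrub(pool):
    """Toy model of an online scrub: walk every block in the pool while it
    is live, verify each mirror copy against its checksum, and repair any
    corrupted copies from a good one. Returns the number of repairs."""
    repaired = 0
    for block in pool:
        good = next(c for c in block["copies"]
                    if hashlib.sha256(c).hexdigest() == block["checksum"])
        for i, c in enumerate(block["copies"]):
            if c != good:
                block["copies"][i] = good
                repaired += 1
    return repaired

pool = [make_block(b"alpha"), make_block(b"beta")]
pool[1]["copies"][0] = b"corrupt"   # simulate silent corruption on one disk
assert scrub(pool) == 1             # scrub finds and repairs one bad copy
assert pool[1]["copies"][0] == b"beta"
```

The contrast with a traditional fsck is that nothing here requires the pool to be offline; the check is amortized into normal operation rather than paid all at once at boot.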

ZFS has powerful snapshotting capabilities in its logical volume layer and very robust caching mechanisms in its RAID layer, making it an excellent choice for many use cases. These features are not unique to ZFS; they are widely available in systems older than ZFS. They are, however, very good implementations of each, and very well integrated thanks to ZFS' unified design.
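The snapshotting builds on copy-on-write: a snapshot merely records the current block map, and subsequent writes go to new blocks, so the snapshot keeps pointing at the old data at almost no cost. A minimal sketch of the principle (hypothetical names, not ZFS' actual data structures):

```python
class COWVolume:
    """Toy copy-on-write volume: a snapshot records the current block map;
    a later write builds a new map, leaving the snapshot's map untouched."""

    def __init__(self):
        self.blocks = {}       # block number -> bytes (live block map)
        self.snapshots = {}    # snapshot name -> frozen block map

    def write(self, blkno: int, data: bytes):
        # Copy-on-write: build a new map rather than mutating the old one,
        # so any snapshot holding the old map is unaffected.
        self.blocks = {**self.blocks, blkno: data}

    def snapshot(self, name: str):
        self.snapshots[name] = self.blocks  # O(1): just keep a reference

    def read(self, blkno: int, snap: str = None) -> bytes:
        src = self.snapshots[snap] if snap else self.blocks
        return src[blkno]

vol = COWVolume()
vol.write(0, b"v1")
vol.snapshot("before")
vol.write(0, b"v2")
assert vol.read(0) == b"v2"             # live volume sees the new data
assert vol.read(0, "before") == b"v1"   # snapshot still sees the old data
```

This is why ZFS snapshots are nearly instantaneous to create: taking one is bookkeeping, not copying.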

At one time, ZFS was open source, and during that era its code became part of Apple's Mac OS X and FreeBSD operating systems, whose licenses were compatible with the ZFS license. Linux did not get ZFS at that time due to licensing challenges. Had the ZFS license allowed Linux to use it unencumbered, the Linux landscape would likely be very different today.

Mac OS X eventually dropped ZFS, as it was not seen as offering enough advantages to justify it in that environment. FreeBSD held onto ZFS and, over time, it became the most popular filesystem on the platform, although UFS is still heavily used as well. Oracle closed the ZFS source after the Sun acquisition, leaving FreeBSD without continuing updates to its version of ZFS while Oracle continued developing ZFS internally for Solaris.

Today Solaris continues to use the original ZFS implementation, now with several updates since its split from the open source community. FreeBSD and others kept using ZFS in the state it was in when the code was closed, no longer having access to Oracle's latest updates. Eventually, work to update the abandoned open source ZFS codebase was taken up under the name OpenZFS. OpenZFS is still fledgling and has not yet really made its mark, but it has the potential to revitalize the ZFS platform in the open source space; at this time, however, OpenZFS still lags Oracle's ZFS.

Still a Leader

ZFS, without a doubt, was an amazing filesystem in its heyday and remains a leader today. I was a proponent of it in 2005 and I still believe heavily in it. But it has saddened me to see the community around ZFS take on a fervor and zealotry that does it no service and makes the mention of ZFS almost seem a negative. ZFS is so often chosen for the wrong reasons: primarily a belief that its features exist nowhere else, that its RAID is not subject to the risks and limitations those RAID levels are always subject to, or that it was designed for a purpose (primarily performance) other than the one it was actually designed for. And when ZFS is a good choice, it is often implemented poorly based on untrue assumptions.

ZFS, of course, is not to blame. Nor, as far as I can tell, are its corporate supporters or its open source developers. Where ZFS seems to have gone awry is in a loose, unofficial community that has only recently come to know ZFS, often believing it to be new or "next generation" because they have only recently discovered it.

From what I have seen, this almost never happens via Solaris or FreeBSD channels, but almost exclusively among smaller businesses looking to use a packaged "NAS OS" such as FreeNAS or NAS4Free without being familiar with UNIX operating systems. These packaged NAS OSes are used primarily by IT shops that possess neither deep UNIX nor storage skills and, consequently, have had little exposure to the broader world of filesystems outside of Windows and often little to no exposure to logical volume management or RAID, especially software RAID. This appears to lead to a "myth" culture around ZFS, with ZFS taking on an almost unquestionable, infallible status.

Problems with ZFS' Cult-Like Following

This cult-like following and general misunderstanding of ZFS often leads to misapplications of ZFS, or to a chain of decision-making built on bad assumptions that can lead one very much astray.

One of the most amazing changes in this space is the shift in allegiance from hardware RAID to software RAID. Traditionally, software RAID was a pariah in Windows administration circles without good cause: Windows administrators and small businesses, often unfamiliar with larger UNIX servers, believed that hardware RAID was ubiquitous when, in fact, larger-scale systems have always used software RAID.

Hardware RAID was, almost industry-wide, considered a necessity and software RAID completely eschewed. That same audience, now faced with the "Cult of ZFS" movement, reacts in exactly the opposite way, believing that hardware RAID is bad and that ZFS' software RAID is the only viable option. The shift is dramatic, and neither extreme is valid: both hardware and software RAID, across many implementations, are perfectly valid options, and even with ZFS the use of hardware RAID can easily be appropriate.

ZFS is often chosen because it is believed to be the highest-performance filesystem option, but performance was never a key design goal of ZFS. The features that allow it to scale so large and handle so many different aspects of storage actually make high performance very difficult to achieve. At the time of its creation, ZFS was not even expected to be as fast as the venerable UFS, which ran on the same systems. This is largely moot in any case: all modern filesystems are extremely fast, and filesystem speed is rarely an important factor, especially outside of massive, high-end storage systems at very large scale.


Tags: networking, ZFS, filesystem
