Thursday, March 28, 2024

The Agony of File Systems


Will the data storage industry never learn? Decades-old mistakes keep getting repeated, opines storage expert Henry Newman.

The storage industry continues to make the same mistakes over and over again, and enterprises continue to take vendors’ bold statements as facts. Previously, we introduced our two-part series, “The Evolution of Stupidity,” explaining how issues seemingly resolved more than 20 years ago are again rearing their heads. Clearly, the more things change, the more they stay the same.

This time I ask: why do we continue to believe that the current evolutionary file system path will meet our needs today and in the future, and cost nothing? Let’s go back and review a bit of history for free and non-free file systems.

Time Machine — Back to the Early 1980s

My experiences go back only to the early 1980s, but we have repeated history a few times since then. Why can we not seem to remember history, learn from it or even learn about it? It never ceases to amaze me. I talk to younger people, and more often than not, they say that they do not want to hear about history, just about the present and how they are going to make the future better. I coined a saying (at least I think I coined it) in the late 1990s: There are no new engineering problems, just new engineers solving old problems. I said this when I was helping someone develop a new file system using technology and ideas whose design I had helped optimize around 10 years earlier.

In the mid-1980s, most of the open system file systems came as part of a standard Unix release from USL. A few vendors, such as Cray and Amdahl, wrote their own file systems. These vendors generally did so because the standard UNIX file system did not meet the requirements of the day. UFS on Solaris came from another operating system, written in the 1960s, called Multics. That brings us to the late 1980s, and by this time we had a number of high-performance file systems from companies such as Convex, Multiflow and Thinking Machines. Everyone who had larger systems had their own file system, and everyone was trying to address many, if not all, of the same issues. In my opinion, those issues were the scalability of (a rough illustration of the first and fourth items follows the list):

  1. Metadata performance
  2. Recovery performance
  3. Small block performance
  4. Large block performance
  5. Storage management
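
A minimal sketch of how one might contrast the first and fourth axes on a given file system. This is illustrative only; the file counts, block sizes, and helper names are my own assumptions, not anything from the original column:

```python
# Contrast a metadata-heavy workload (many tiny file creates/removes)
# with a large-block streaming write. File counts and sizes are arbitrary.
import os
import time
import tempfile

def time_metadata(n_files=1000):
    """Create and remove many tiny files; dominated by metadata operations."""
    with tempfile.TemporaryDirectory() as d:
        start = time.perf_counter()
        for i in range(n_files):
            with open(os.path.join(d, f"f{i}"), "wb") as f:
                f.write(b"x")
        for i in range(n_files):
            os.remove(os.path.join(d, f"f{i}"))
        return time.perf_counter() - start

def time_large_blocks(total_mb=256, block_mb=8):
    """Stream one large file in big blocks; dominated by data throughput."""
    block = b"\0" * (block_mb * 1024 * 1024)
    with tempfile.NamedTemporaryFile() as f:
        start = time.perf_counter()
        for _ in range(total_mb // block_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
        return time.perf_counter() - start

if __name__ == "__main__":
    print(f"metadata-heavy workload: {time_metadata():.2f}s")
    print(f"large-block workload:    {time_large_blocks():.2f}s")
```

On most file systems the first loop spends its time in create and remove operations rather than moving data, which is exactly why metadata performance scales so differently from large-block streaming performance.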

The keyword here is scalability. Remember, during this time disk drive density was growing very rapidly and performance was scaling far better than it is today. Some of the vendors began the process of looking at parallel systems, and some began charging for file systems that were once free. Does any of this sound like what I said in a recent blog post, “It’s like déjà vu all over again” (Yogi Berra)? But since this article is about stupidity, let’s also remember the quote from another Yogi, Yogi Bear the cartoon character, “I’m smarter than the average bear!” and ask the question: Is the industry any smarter?

Read the rest about wrong-headed thinking with file systems at Enterprise Storage Forum.
