Monday, December 9, 2024

NFS Overhaul Promises Big Payoff


The Network File System (NFS) protocol is getting its biggest overhaul in more than a decade, and the results could be profound for end users.

Version 4.1 of NFS, developed by a team of veterans from various storage interests, promises to unlock new performance and security capabilities, particularly for enterprise data centers.

NFS was originally designed to provide remote access to home directories and to support diskless workstations and servers over local area networks. With the advent of cheaper high-performance computing in the form of Linux compute clusters, multi-core processors, and blades, demand for higher-performance file access has risen sharply. It’s no wonder that a protocol designed for 1984-era network speeds struggles to cope.

“NFS is getting pressure from clustered file systems like Lustre and GPFS, as well as custom file systems produced by Web 2.0 service providers such as Google GFS,” said Mike Eisler, senior technical director at NetApp (NASDAQ: NTAP).

The latest makeover to this time-honored distributed file system protocol provides all the same features as before: straightforward design, simplified error recovery, and independence of transport protocols and operating systems for file access. Unlike earlier versions of NFS, however, it now integrates file locking, has stronger security, and includes delegation capabilities to enhance client performance for data sharing applications on high-bandwidth networks.

pNFS Changes the Storage World

pNFS is a key feature of NFS 4.1. The “p” stands for parallel: pNFS provides parallel I/O to file systems accessible over NFS, letting a storage administrator do things like stripe a single file across multiple NFS servers. This is analogous to RAID 0, which boosts performance by allowing multiple disk drives to serve up data in parallel; pNFS extends the concept to multiple storage devices connected to the NFS client over a network.
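To make the RAID 0 analogy concrete, here is a minimal sketch of round-robin striping, the kind of mapping a pNFS layout describes to a client. The server names and the 64 KiB stripe size are illustrative assumptions, not values taken from the protocol:

```python
# Illustrative RAID-0-style striping, as a pNFS layout might describe it.
# STRIPE_SIZE and SERVERS are hypothetical values chosen for the example.

STRIPE_SIZE = 64 * 1024                          # 64 KiB stripe unit
SERVERS = ["nfs-a", "nfs-b", "nfs-c", "nfs-d"]   # hypothetical data servers

def locate(offset: int) -> tuple[str, int]:
    """Map a byte offset in a striped file to (server, offset on that server)."""
    stripe = offset // STRIPE_SIZE               # which stripe unit holds the byte
    server = SERVERS[stripe % len(SERVERS)]      # stripe units rotate round-robin
    row = stripe // len(SERVERS)                 # full rotations before this one
    return server, row * STRIPE_SIZE + offset % STRIPE_SIZE
```

Because consecutive stripe units land on different servers, a large sequential read can be issued to all four servers at once, which is where the parallel speedup comes from.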

“Even for files too small to stripe, those files can be distributed across multiple NFS servers, which provides statistical load balancing,” said Eisler. “With a capable cluster of NFS servers and a back-end file system, files or ranges within files can be relocated transparent to the applications accessing data over pNFS.”

This article was first published on InternetNews.com.
