Datamation

NFS Overhaul Promises Big Payoff

August 11, 2008

The network file system (NFS) protocol is getting its biggest overhaul in more than a decade, and the results could be profound for end users.

Version 4.1 of NFS, developed by a team of veterans from various storage interests, promises to unlock new performance and security capabilities, particularly for enterprise data centers.

NFS was originally designed to provide remote access to home directories and to support diskless workstations and servers over local area networks. With the advent of cheaper high-performance computing in the form of Linux compute clusters, multi-core processors, and blades, demand for higher-performance file access has risen sharply. It's no wonder that a protocol designed for 1984-era network speeds struggles to cope.

“NFS is getting pressure from clustered file systems like Lustre and GPFS, as well as custom file systems produced by Web 2.0 service providers such as Google GFS,” said Mike Eisler, senior technical director at NetApp (NASDAQ: NTAP).

The latest makeover to this time-honored distributed file system protocol provides all the same features as before: straightforward design, simplified error recovery, and independence of transport protocols and operating systems for file access. Unlike earlier versions of NFS, however, it now integrates file locking, has stronger security, and includes delegation capabilities to enhance client performance for data sharing applications on high-bandwidth networks.
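On a Linux client, requesting the new protocol revision is a mount-time option. A minimal sketch, assuming a server exporting `/export` (the server name, export path, and mount point below are placeholders, not from the article):

```shell
# Ask for NFS version 4, minor version 1 explicitly
mount -t nfs -o vers=4.1 nfs-server:/export /mnt/data

# Older nfs-utils releases spell the same request differently
mount -t nfs4 -o minorversion=1 nfs-server:/export /mnt/data

# Confirm which version was actually negotiated
nfsstat -m
```

If the server does not speak 4.1, the mount either falls back or fails depending on client configuration, so checking the negotiated version with `nfsstat -m` is worthwhile.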

pNFS Changes the Storage World

pNFS is a key feature of NFS 4.1. The p in pNFS stands for parallel, and pNFS will provide parallel I/O to file systems accessible over NFS. It enables the storage administrator to do things like stripe a single file across multiple NFS servers. This is equivalent to RAID 0, which boosts performance by allowing multiple disk drives to serve up data in parallel. pNFS takes the concept and extends it to multiple storage devices connected to the NFS client over a network.
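The RAID 0 analogy can be made concrete with a small sketch. The toy functions below (illustrative only, not pNFS client code) split a byte stream round-robin into fixed-size stripe units across N "servers" and reassemble it, which is the layout idea pNFS applies across NFS data servers:

```python
import itertools

STRIPE_UNIT = 4  # bytes per stripe unit; tiny here purely for illustration


def stripe(data: bytes, n_servers: int) -> list[bytes]:
    """Split data round-robin into stripe units, RAID 0 style."""
    stripes = [bytearray() for _ in range(n_servers)]
    for i in range(0, len(data), STRIPE_UNIT):
        stripes[(i // STRIPE_UNIT) % n_servers].extend(data[i:i + STRIPE_UNIT])
    return [bytes(s) for s in stripes]


def reassemble(stripes: list[bytes]) -> bytes:
    """Interleave the per-server stripe units back into the original stream."""
    chunks = [[s[i:i + STRIPE_UNIT] for i in range(0, len(s), STRIPE_UNIT)]
              for s in stripes]
    out = bytearray()
    for units in itertools.zip_longest(*chunks, fillvalue=b""):
        for unit in units:
            out.extend(unit)
    return bytes(out)
```

With two servers, `stripe(b"abcdefghij", 2)` places units 0 and 2 on the first server and unit 1 on the second; because the units live on different devices, a real client can read them in parallel.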

“Even for files too small to stripe, those files can be distributed across multiple NFS servers, which provides statistical load balancing,” said Eisler. “With a capable cluster of NFS servers and a back-end file system, files or ranges within files can be relocated transparent to the applications accessing data over pNFS.”
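The statistical load balancing Eisler describes can be sketched with whole-file placement: hash each path to pick a server, so that across many small files the load spreads roughly evenly. The server names and placement function are hypothetical, not part of any NFS implementation:

```python
import hashlib

SERVERS = ["nfs1", "nfs2", "nfs3"]  # hypothetical data-server names


def place(path: str) -> str:
    """Pick a server by hashing the file path.

    Any single file lives on one server, but over many files the
    hash spreads placement approximately uniformly across SERVERS.
    """
    digest = hashlib.sha256(path.encode()).digest()
    return SERVERS[int.from_bytes(digest[:8], "big") % len(SERVERS)]
```

A layout-driven protocol like pNFS goes further: because the client learns file locations from layouts handed out by the metadata server, the back end can also migrate files between servers without the application noticing.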

This article was first published on InternetNews.com.
