Saturday, June 19, 2021

NFS Overhaul Promises Big Payoff

The Network File System (NFS) protocol is getting its biggest overhaul in more than a decade, and the results could be profound for end users.

Version 4.1 of NFS, developed by a team of veterans from various storage interests, promises to unlock new performance and security capabilities, particularly for enterprise data centers.

NFS was originally designed to provide remote access to home directories and to support diskless workstations and servers over local area networks. With the advent of cheaper high-performance computing in the form of Linux compute clusters, multi-core processors and blades, demand for higher-performance file access has risen sharply. It’s no wonder that a protocol designed for 1984-era speeds struggles to cope.

“NFS is getting pressure from clustered file systems like Lustre and GPFS, as well as custom file systems produced by Web 2.0 service providers, such as Google’s GFS,” said Mike Eisler, senior technical director at NetApp (NASDAQ: NTAP).

The latest makeover of this time-honored distributed file system protocol preserves all the same qualities as before: straightforward design, simplified error recovery, and independence from transport protocols and operating systems for file access. Unlike earlier versions of NFS, however, it now integrates file locking, offers stronger security, and includes delegation capabilities to enhance client performance for data-sharing applications on high-bandwidth networks.

pNFS Changes the Storage World

pNFS is a key feature of NFS 4.1. The p in pNFS stands for parallel, and pNFS provides parallel I/O to file systems accessible over NFS. It enables the storage administrator to do things like stripe a single file across multiple NFS servers. This is analogous to RAID 0, which boosts performance by allowing multiple disk drives to serve up data in parallel; pNFS takes that concept and extends it to multiple storage devices connected to the NFS client over a network.
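To make the striping idea concrete, here is a minimal sketch of the RAID 0-style offset arithmetic involved. The stripe size, server names, and round-robin layout are illustrative assumptions, not values from the NFS 4.1 specification; in real pNFS, the client obtains the actual layout from a metadata server.

```python
# Sketch of RAID 0-style striping as pNFS applies it across servers:
# map a byte offset in a file to (data server, offset on that server).
STRIPE_SIZE = 64 * 1024                      # 64 KiB stripe unit (assumed)
SERVERS = ["nfs1", "nfs2", "nfs3", "nfs4"]   # hypothetical data servers

def locate(offset: int) -> tuple[str, int]:
    """Return the server and server-local offset holding a file offset."""
    stripe = offset // STRIPE_SIZE            # which stripe unit overall
    server = SERVERS[stripe % len(SERVERS)]   # round-robin placement
    local = (stripe // len(SERVERS)) * STRIPE_SIZE + offset % STRIPE_SIZE
    return server, local

# A 1 MiB read spans 16 stripe units, so all four servers are hit in
# parallel rather than one server handling the entire transfer.
for off in range(0, 1024 * 1024, STRIPE_SIZE):
    server, local_off = locate(off)
    # ...issue the read for this stripe unit to `server` at `local_off`
```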

“Even for files too small to stripe, those files can be distributed across multiple NFS servers, which provides statistical load balancing,” said Eisler. “With a capable cluster of NFS servers and a back-end file system, files or ranges within files can be relocated transparent to the applications accessing data over pNFS.”
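Eisler’s point about small files can be illustrated with a simple placement rule: hash each file’s path to pick a server, so a large population of small files spreads roughly evenly across the cluster. The hashing scheme below is a hypothetical illustration of statistical load balancing, not a mechanism pNFS itself mandates; pNFS leaves placement to the server cluster’s layout.

```python
import hashlib

SERVERS = ["nfs1", "nfs2", "nfs3", "nfs4"]   # hypothetical server cluster

def place(path: str) -> str:
    """Pick a server by hashing the file path; a uniform hash spreads
    many small files evenly, giving statistical load balancing."""
    digest = hashlib.sha1(path.encode()).digest()
    return SERVERS[int.from_bytes(digest[:4], "big") % len(SERVERS)]

# Example: each path deterministically maps to one server.
print(place("/home/alice/notes.txt"))
```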

This article was first published on InternetNews.com.
