
NFS Server on Linux: Complete Setup, Exports, and Client Configuration Guide


NFS (Network File System) is the standard protocol for sharing directories across Linux servers. Unlike block storage or object storage, NFS mounts appear as regular directories to any client — no special API, no client SDK, just a mount point that behaves much like local storage. It is the backbone of shared home directories, application data volumes, CI/CD artifact caches, and Kubernetes persistent storage in on-premises environments. This guide covers setting up an NFS server on Linux, configuring exports with access controls, mounting on clients, tuning performance, and securing the deployment.


NFS Versions: NFSv3 vs NFSv4

| Feature | NFSv3 | NFSv4 / NFSv4.1 |
|---|---|---|
| Ports | Multiple (111, 2049, dynamic) | 2049 only (firewall-friendly) |
| Authentication | IP-based only | IP-based + Kerberos (sec=krb5) |
| File locking | Separate NLM protocol | Built-in |
| ACL support | No | Yes (NFSv4 ACLs, modeled on Windows ACLs) |
| State model | Stateless (simpler recovery) | Stateful (better caching and locking) |
| pNFS (parallel NFS) | No | Yes (v4.1+) |

Use NFSv4 for all new deployments. It uses only port 2049, supports Kerberos authentication, has built-in file locking, and is the default for Kubernetes CSI NFS drivers. Only fall back to NFSv3 for legacy clients that don't support v4.

Installing and Configuring the NFS Server

# RHEL / Rocky Linux / AlmaLinux
dnf install -y nfs-utils

# Ubuntu / Debian
apt install -y nfs-kernel-server

# Enable and start NFS services
systemctl enable --now nfs-server    # RHEL/Rocky
# or
systemctl enable --now nfs-kernel-server   # Ubuntu/Debian

# Verify NFS is running and listening
systemctl status nfs-server
ss -tlnp | grep 2049

# Check supported NFS versions on the server
cat /proc/fs/nfsd/versions
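The versions file flags each protocol version with `+` (enabled) or `-` (disabled). As a minimal sketch of reading that format, run here against a sample string since the real file only exists on a running NFS server:

```shell
# /proc/fs/nfsd/versions looks like "-2 +3 +4 +4.1 +4.2".
# Parse a sample of that format and list only the enabled versions.
versions='-2 +3 +4 +4.1 +4.2'   # on a real server: versions=$(cat /proc/fs/nfsd/versions)
enabled=''
for v in $versions; do
  case $v in
    +*) enabled="$enabled ${v#+}" ;;   # strip the leading + and collect
  esac
done
echo "Enabled NFS versions:$enabled"   # Enabled NFS versions: 3 4 4.1 4.2
```

If version 2 or 3 shows as enabled and you have no legacy clients, it can be disabled in the `[nfsd]` section of /etc/nfs.conf.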

Create the Export Directories

# Create directories to share
mkdir -p /exports/data
mkdir -p /exports/homes
mkdir -p /exports/media

# Set ownership — for shared data volumes, a common practice is to use
# a dedicated UID/GID that maps consistently across all clients
chown -R nobody:nobody /exports/data    # nobody:nogroup on Debian/Ubuntu
chmod 755 /exports/data

# For home directories: owned by individual users (UIDs must match across systems)
# For anonymous access: nobody:nobody is the safe default
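With the default `sec=sys` security flavor, NFS identifies users purely by numeric UID/GID, so the same account must carry the same numbers on every host. A minimal sketch of the check, run here against a sample passwd-format line (the `appdata` user and UID/GID 5000 are illustrative; on a real host you would use `getent passwd appdata`):

```shell
# Sample passwd-format entry; on a real host: entry=$(getent passwd appdata)
entry='appdata:x:5000:5000::/srv/appdata:/sbin/nologin'
uid=$(printf '%s\n' "$entry" | awk -F: '{print $3}')
gid=$(printf '%s\n' "$entry" | awk -F: '{print $4}')
echo "appdata uid=$uid gid=$gid"   # these numbers must match on server and clients
```

A mismatch does not block the mount; it silently makes files appear owned by the wrong user, which is much harder to debug later.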

Configuring /etc/exports: Access Control and Options

# /etc/exports — each line: directory  client(options)
# Multiple clients can be listed on the same line (space-separated)

cat > /etc/exports << 'EXPORTS'
# Share /exports/data to a specific subnet — read/write
/exports/data    192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

# Share /exports/homes — read/write, root squash (safer for user home dirs)
/exports/homes   192.168.1.0/24(rw,sync,no_subtree_check,root_squash)

# Share /exports/media read-only to all clients on the LAN
/exports/media   192.168.1.0/24(ro,sync,no_subtree_check)

# Share to a specific host only
/exports/data    192.168.1.50(rw,sync,no_subtree_check,no_root_squash)

# NFSv4 pseudo-filesystem root. Note: with fsid=0, v4 clients mount paths
# relative to it (server:/data rather than server:/exports/data). Modern
# Linux servers export a default pseudo-root at /, so this line is optional.
/exports         *(ro,fsid=0,no_subtree_check)
EXPORTS

# Apply changes without restarting the NFS server
exportfs -arv

# List currently active exports
exportfs -v

Key Export Options Explained

| Option | Meaning | Recommendation |
|---|---|---|
| rw | Read/write access | Use for writable shares |
| ro | Read-only access | Use for software/media repos |
| sync | Write to disk before replying | Always use in production (safer) |
| async | Buffer writes (faster, risk of data loss) | Scratch/cache only |
| no_subtree_check | Disable subtree checking (faster, slightly less secure) | Recommended for most exports |
| root_squash | Map root (UID 0) on client to nobody | Default; use for user shares |
| no_root_squash | Preserve root UID from client | Use for trusted infra clients |
| all_squash | Map all UIDs to anonymous | Public/untrusted shares |
| fsid=0 | Marks the NFSv4 pseudo-root | Optional on modern Linux servers |
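Since async and no_root_squash are the two options most likely to cause surprises later, it is worth grepping the exports file for them periodically. A minimal sketch, run here against a throwaway sample file rather than the live /etc/exports:

```shell
# Flag exports using async (data-loss risk on server crash) or
# no_root_squash (client root becomes root on the export)
cat > /tmp/exports.sample << 'EOF'
/exports/data    192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
/exports/scratch 192.168.1.0/24(rw,async,no_subtree_check)
/exports/media   192.168.1.0/24(ro,sync,no_subtree_check)
EOF
grep -nE 'async|no_root_squash' /tmp/exports.sample   # prints the two risky lines
```

On the real server, point the grep at /etc/exports and review each hit against the table above.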

Firewall Configuration

# NFSv4 — only port 2049/TCP required
# UFW (Ubuntu)
ufw allow from 192.168.1.0/24 to any port 2049 proto tcp

# firewalld (RHEL/Rocky) — use a rich rule to restrict the source subnet;
# a plain --add-service=nfs would open port 2049 to the entire zone
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" service name="nfs" accept'
firewall-cmd --reload

# NFSv3 requires additional ports (portmapper, mountd, lockd)
# If you must use NFSv3, pin the ports in /etc/nfs.conf:
cat >> /etc/nfs.conf << 'CONF'
[mountd]
port=20048

[lockd]
port=32803
udp-port=32769
CONF

# Then open 111 (rpcbind, tcp+udp), 2049/tcp, 20048/tcp (mountd),
# and 32803/tcp + 32769/udp (lockd) in your firewall

Mounting NFS Shares on Linux Clients

# Install NFS client utilities
dnf install -y nfs-utils          # RHEL/Rocky
apt install -y nfs-common         # Ubuntu/Debian

# Show exports available from a server
showmount -e 192.168.1.10

# Mount an NFS share (NFSv4)
mkdir -p /mnt/nfs-data
mount -t nfs4 192.168.1.10:/exports/data /mnt/nfs-data

# Mount with specific options
mount -t nfs4 \
  -o rw,hard,intr,timeo=600,retrans=2,rsize=131072,wsize=131072 \
  192.168.1.10:/exports/data /mnt/nfs-data

# Verify the mount
df -hT /mnt/nfs-data
mount | grep nfs

# Unmount
umount /mnt/nfs-data

Client Mount Options

# Key mount options for production NFS clients:
# hard     — retry indefinitely if server is unavailable (recommended)
# soft     — return errors to application if server unavailable (use only for read-only)
# intr     — allow signals to interrupt hung NFS operations (a no-op since kernel 2.6.25, but harmless)
# timeo    — timeout in tenths of a second before retry (TCP default: 600 = 60s)
# retrans  — number of retransmissions before error (default: 2)
# rsize    — read block size (negotiated with the server, up to 1048576 = 1MB)
# wsize    — write block size (negotiated like rsize)
# vers=4.1 — force NFSv4.1 (supports sessions and parallel NFS)
# noatime  — don't update access times (reduces write traffic for read-heavy shares)
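The tenths-of-a-second unit for timeo regularly trips people up, so a tiny sketch of the conversion (values match the mount example above):

```shell
# timeo is in tenths of a second; with hard mounts the client keeps
# retrying forever, logging "server not responding" along the way
timeo=600; retrans=2
echo "retransmit interval: $((timeo / 10))s, retransmissions per cycle: $retrans"
```

So `timeo=600,retrans=2` means the client waits 60 seconds per attempt, not 600.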

Persistent Mounts via /etc/fstab and systemd

# Add NFS mount to /etc/fstab
# Format: server:/export  mountpoint  fstype  options  dump  pass
echo "192.168.1.10:/exports/data  /mnt/nfs-data  nfs4  hard,intr,rsize=131072,wsize=131072,_netdev  0  0" >> /etc/fstab

# _netdev is critical — tells systemd to mount only after network is up
# Without it, the system may hang at boot waiting for NFS when network isn't ready

# Test fstab without rebooting
mount -a
df -h /mnt/nfs-data

# systemd automount unit (alternative to fstab — mounts on first access)
# Quote the unit path — the \x2d escape is literal and must survive the shell
cat > '/etc/systemd/system/mnt-nfs\x2ddata.automount' << 'UNIT'
[Unit]
Description=NFS data automount

[Automount]
Where=/mnt/nfs-data
TimeoutIdleSec=600

[Install]
WantedBy=multi-user.target
UNIT

cat > '/etc/systemd/system/mnt-nfs\x2ddata.mount' << 'UNIT'
[Unit]
Description=NFS data share
After=network-online.target
Wants=network-online.target

[Mount]
What=192.168.1.10:/exports/data
Where=/mnt/nfs-data
Type=nfs4
Options=hard,intr,rsize=131072,wsize=131072

[Install]
WantedBy=multi-user.target
UNIT

systemctl daemon-reload
systemctl enable --now 'mnt-nfs\x2ddata.automount'
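The `\x2d` in those unit names comes from systemd's path escaping: the leading `/` is dropped, literal dashes become `\x2d`, and remaining slashes become dashes. `systemd-escape -p --suffix=mount /mnt/nfs-data` prints the name directly; the rule itself can be sketched in plain shell (assuming the path contains no spaces or other characters needing escapes):

```shell
# Reproduce systemd's unit-name escaping for a simple mount path
path=/mnt/nfs-data
unit=$(printf '%s' "${path#/}" | sed -e 's/-/\\x2d/g' -e 's,/,-,g').mount
printf '%s\n' "$unit"   # mnt-nfs\x2ddata.mount
```

If the unit name does not match the mount path exactly, systemd refuses to start it, so it is worth checking before writing the file.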

Automounting with autofs

autofs mounts NFS shares on demand and unmounts them after a timeout — ideal for home directories where you don't want all shares mounted on all clients constantly.

# Install autofs
dnf install -y autofs      # RHEL/Rocky
apt install -y autofs      # Ubuntu/Debian

# /etc/auto.master — master map file
echo "/mnt/nfs  /etc/auto.nfs  --timeout=600" >> /etc/auto.master

# /etc/auto.nfs — specific mounts (relative to /mnt/nfs)
cat > /etc/auto.nfs << 'MAP'
data     -rw,hard,intr  192.168.1.10:/exports/data
homes    -rw,hard,intr  192.168.1.10:/exports/homes
media    -ro            192.168.1.10:/exports/media
MAP

systemctl enable --now autofs

# Test — accessing the path triggers the mount
ls /mnt/nfs/data
# Mount appears, disappears after 10 minutes of no access
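Each map line is key, options, location. A quick way to sanity-check a map before reloading autofs is to parse it; a minimal sketch against a sample copy rather than the live map:

```shell
# List key -> server:path pairs from an autofs map file
cat > /tmp/auto.nfs.sample << 'MAP'
data     -rw,hard,intr  192.168.1.10:/exports/data
media    -ro            192.168.1.10:/exports/media
MAP
awk 'NF==3 {print $1 " -> " $3}' /tmp/auto.nfs.sample
```

Any line that does not print (wrong field count) is a malformed entry autofs will ignore or mis-handle.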

NFSv4 with Kerberos Authentication

# Kerberos (sec=krb5) adds strong authentication — clients prove identity
# before mounting. Requires a Kerberos KDC (MIT Kerberos, FreeIPA, or AD)

# On the NFS server: obtain a host principal
kadmin -q "addprinc -randkey nfs/nfs-server.example.com@EXAMPLE.COM"
kadmin -q "ktadd -k /etc/krb5.keytab nfs/nfs-server.example.com@EXAMPLE.COM"

# Enable gssd (Kerberos GSS-API daemon)
systemctl enable --now rpc-gssd

# Update /etc/exports to require Kerberos
/exports/data  192.168.1.0/24(rw,sync,no_subtree_check,sec=krb5p)
# sec options:
# krb5  — authentication only
# krb5i — authentication + integrity checking
# krb5p — authentication + integrity + encryption (most secure, most overhead)

exportfs -arv

# On clients: obtain host keytab and enable gssd
kadmin -q "ktadd -k /etc/krb5.keytab nfs/client.example.com@EXAMPLE.COM"
systemctl enable --now rpc-gssd

# Mount with Kerberos
mount -t nfs4 -o sec=krb5p nfs-server.example.com:/exports/data /mnt/secure

Performance Tuning

# Increase NFS server threads (default 8, increase for busy servers)
# /etc/nfs.conf:
cat >> /etc/nfs.conf << 'CONF'
[nfsd]
threads=32
CONF
systemctl restart nfs-server

# Tune client rsize/wsize — larger sizes suit faster networks and sequential workloads
# For 1GbE: rsize=131072,wsize=131072 (128KB)
# For 10GbE: rsize=1048576,wsize=1048576 (1MB)

# Check current NFS performance
nfsstat -c   # Client-side statistics
nfsstat -s   # Server-side statistics
nfsiostat    # Per-mount I/O statistics (from nfs-utils)

# Increase read-ahead on the device backing the export (value in KB)
echo 16384 > /sys/class/bdi/$(mountpoint -d /exports/data)/read_ahead_kb

# When multiple clients share files and need tight cache coherence:
mount -o actimeo=0 ...   # No attribute caching (slower, but always fresh)
# For mostly-read workloads (software repos), increase attribute cache timeout:
mount -o acregmax=60,acdirmax=60 ...   # Cache attributes for up to 60 seconds

Monitoring NFS with nfsstat and iostat

# Server: monitor RPC call rates
watch -n 5 nfsstat -s

# Client: monitor per-mount I/O
nfsiostat 5   # Refresh every 5 seconds

# Key metrics to watch (server):
# getattr — high count is normal (stat() calls)
# read/write — core data transfer operations
# commit — high commit rate = clients using async writes (performance risk)

# Check for stale NFS mounts (hung processes)
grep nfs /proc/mounts
# If a mount hangs, check for processes stuck in D state (uninterruptible sleep):
ps aux | awk '$8 ~ /^D/'

# Prometheus: node_exporter exposes NFS metrics
# node_nfs_requests_total — tagged by operation (read, write, getattr, etc.)
# Alert on: sudden drop in write ops or spike in error responses

Troubleshooting Common NFS Problems

# Mount hangs or times out
# 1. Verify server is reachable
ping 192.168.1.10
nc -zv 192.168.1.10 2049

# 2. Check server exports are active
showmount -e 192.168.1.10

# 3. Check NFS server service
systemctl status nfs-server

# "Permission denied" on mount
# Check /etc/exports — client IP must match the allowed range
exportfs -v | grep your-client-ip
# Check firewall is open
# Check that no_root_squash or proper UID mapping is configured

# "Stale file handle" errors
# The server's export was unexported/remounted. Fix:
umount -l /mnt/nfs-data    # Lazy unmount
mount /mnt/nfs-data        # Remount

# Files visible on server but not client (caching issue)
# Force attribute cache refresh:
ls -la /mnt/nfs-data       # Stat forces a revalidation
# Or mount with: -o actimeo=0

# NFSv4 ID mapping issues (files show as "nobody")
# Check /etc/idmapd.conf — Domain must match on server and client
grep Domain /etc/idmapd.conf
# Set Domain = example.com on all hosts, restart idmapd on the server,
# and clear the client's cached mappings
systemctl restart nfs-idmapd
nfsidmap -c
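Because the Domain value has to be compared across machines, it helps to extract it in a script; a minimal sketch against a sample idmapd.conf-format file (point the awk at /etc/idmapd.conf on real hosts):

```shell
# Extract the Domain value from an idmapd.conf-style file
cat > /tmp/idmapd.sample << 'EOF'
[General]
Verbosity = 0
Domain = example.com
EOF
awk -F'=' '/^Domain/ {gsub(/ /, "", $2); print $2}' /tmp/idmapd.sample
```

Run the same extraction on server and clients; any host printing a different value is the one mapping users to nobody.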

Conclusion

NFS remains the most operationally straightforward solution for shared file access between Linux servers. NFSv4 reduces firewall complexity to a single port (2049), adds proper file locking, and supports Kerberos authentication for environments that need it. For most infrastructure use cases — shared application data volumes, centralized home directories, CI/CD artifact storage — a well-tuned NFS server with sync exports, appropriate rsize/wsize values, and hard mounts on clients delivers reliable, low-maintenance shared storage. Pair it with autofs for home directories and systemd mount units for always-on application volumes, and you have a production-ready shared storage layer that requires minimal ongoing administration.

🏷️ Tags: autofs, exports, Linux file sharing, linux storage, network file system, NFS, NFS client, NFS server, NFSv4, shared storage

About Ramesh Sundararamaiah

Red Hat Certified Architect

Expert in Linux system administration, DevOps automation, and cloud infrastructure. Specializing in Red Hat Enterprise Linux, CentOS, Ubuntu, Docker, Ansible, and enterprise IT solutions.
