Btrfs Filesystem on Linux: Snapshots, Compression, and RAID Explained for Sysadmins (2026 Guide)
📑 Table of Contents
- 1. What Is Btrfs and Why Does It Matter?
- 2. Installing and Formatting a Btrfs Filesystem
- 3. Subvolumes — The Killer Feature
- 4. Snapshots — Instant Rollback
- 5. Compression — Transparent and Powerful
- 6. Btrfs RAID Modes
- 7. Send/Receive — Incremental Backups
- 8. Scrub and Balance — Routine Maintenance
- 9. Btrfs vs ext4 vs XFS — Comparison Table
- 10. Key Commands Quick Reference
- Conclusion
If you’ve been managing Linux systems for any length of time, you’ve likely spent most of that time on ext4. It’s reliable, well-understood, and universally supported. But ext4 was designed in an era before snapshots, transparent compression, and built-in RAID management were considered essential features for a production filesystem. Enter Btrfs (pronounced “butter-FS” or “B-tree FS”) — a modern, copy-on-write filesystem that has shipped in the mainline Linux kernel for well over a decade, is the default on openSUSE and Fedora Workstation, and is an increasingly popular choice for NAS and enterprise workloads.
This guide is written for working sysadmins. We will skip the theory where possible and focus on real commands, real decisions, and real trade-offs. By the end, you will know how to format, mount, snapshot, compress, replicate, and maintain a Btrfs filesystem — and when to use it instead of ext4 or XFS.
1. What Is Btrfs and Why Does It Matter?
Btrfs is a Copy-on-Write (CoW) filesystem built into the Linux kernel. Instead of overwriting data in place, it writes new data to a new location and then updates the metadata to point to it. This single architectural decision is what enables snapshots, checksumming, and many other features — they all fall out naturally from CoW semantics.
Key Advantages Over ext4
- Built-in snapshots: Instantaneous, space-efficient point-in-time copies of any subvolume. No LVM required.
- Transparent compression: Compress data on-the-fly with zstd, lzo, or zlib — often saving 30–60% on typical server workloads.
- Integrated RAID: Span and mirror multiple devices natively without md-raid.
- Online operations: Resize, add/remove devices, rebalance, and scrub — all without unmounting.
- Checksumming: All data and metadata are checksummed with CRC32C (or xxHash) by default, catching silent data corruption that ext4 misses entirely.
- Subvolumes: Logical partitions within a single filesystem, each with their own snapshot and quota policies.
Where ext4 and XFS Still Win
- Raw sequential write throughput under heavy load (XFS is still king here).
- Workloads with millions of small random writes — databases like MySQL with InnoDB should use XFS or ext4, or run on Btrfs with CoW disabled for the data files (chattr +C).
- Maximum compatibility — ext4 is bootable from every Linux live environment and rescue tool without extra drivers.
- Very large filesystems (hundreds of TB) — XFS scales more predictably at extreme sizes.
For general-purpose servers, NAS systems, development environments, and any workload that benefits from point-in-time rollback, Btrfs in 2026 is a compelling and production-ready choice.
2. Installing and Formatting a Btrfs Filesystem
The btrfs-progs package provides all userspace tools. Install it with your distribution's package manager:
# Debian/Ubuntu
sudo apt install btrfs-progs
# RHEL/Rocky Linux 9+
sudo dnf install btrfs-progs
# Arch Linux
sudo pacman -S btrfs-progs
Formatting a device is straightforward. Here we format a single disk:
# Format /dev/sdb as Btrfs
sudo mkfs.btrfs /dev/sdb
# Format with a label (recommended — makes fstab entries more readable)
sudo mkfs.btrfs -L datastore /dev/sdb
# Format with mixed-mode for small volumes under 1GB (dev/testing use)
sudo mkfs.btrfs -M /dev/sdb
# Format multiple devices as RAID 1 (covered in detail later)
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
Mount the filesystem with recommended options for a typical server workload:
# Basic mount
sudo mount /dev/sdb /mnt/data
# Mount with zstd compression and SSD optimizations
sudo mount -o compress=zstd,ssd,noatime /dev/sdb /mnt/data
# Verify
sudo btrfs filesystem show /mnt/data
To make the mount persistent, add it to /etc/fstab. Use the filesystem UUID rather than the device path to survive disk reordering:
# Get the UUID
sudo blkid /dev/sdb
# Add to /etc/fstab
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/data btrfs defaults,compress=zstd,noatime 0 0
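For scripting, the fstab line can be assembled from its parts; a minimal sketch (the make_fstab_entry helper name and the default options are illustrative, not a standard tool — substitute the real UUID from blkid):

```shell
#!/bin/bash
# Sketch: build a Btrfs /etc/fstab entry from a UUID and mount point.
# The defaults here mirror the recommended options above.
make_fstab_entry() {
    local uuid="$1" mountpoint="$2" options="${3:-defaults,compress=zstd,noatime}"
    printf 'UUID=%s %s btrfs %s 0 0\n' "$uuid" "$mountpoint" "$options"
}

# Example (placeholder UUID) -- review the output before appending to /etc/fstab
make_fstab_entry "1b2c3d4e-0000-0000-0000-000000000000" /mnt/data
```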
Recommended Mount Options Reference
- compress=zstd — Enable transparent compression using the zstd algorithm (best ratio/speed balance).
- noatime — Disable access-time updates; a significant performance improvement on read-heavy workloads.
- ssd — Hint to Btrfs that the underlying device is an SSD; adjusts allocation strategies.
- autodefrag — Automatically defragment small random writes in the background (helpful for light database workloads; for heavy database use, disable CoW per-file instead).
- space_cache=v2 — Use the improved free space cache (the default on modern kernels; still worth specifying explicitly).
3. Subvolumes — The Killer Feature
A subvolume in Btrfs is a separately addressable, independently snapshotable namespace within a single Btrfs filesystem. Think of it as a lightweight partition — but one that shares the same pool of free space and can be created or deleted in milliseconds without repartitioning.
The canonical layout used by most distributions (openSUSE Tumbleweed, Ubuntu with Btrfs, Fedora) is to put the root filesystem and other directories into separate subvolumes so they can be snapshotted and rolled back independently:
# Mount the top-level Btrfs volume (subvolume ID 5)
sudo mount -o subvolid=5 /dev/sdb /mnt/top
# Create a conventional subvolume layout
sudo btrfs subvolume create /mnt/top/@ # root
sudo btrfs subvolume create /mnt/top/@home # home
sudo btrfs subvolume create /mnt/top/@var # var (logs etc.)
sudo btrfs subvolume create /mnt/top/@snapshots # snapshot storage
# List all subvolumes
sudo btrfs subvolume list /mnt/top
Once created, you mount each subvolume individually using the subvol= option:
# Mount the @ subvolume as root
sudo mount -o subvol=@,compress=zstd,noatime /dev/sdb /
# Mount @home
sudo mount -o subvol=@home,compress=zstd,noatime /dev/sdb /home
Working with Subvolumes Day-to-Day
# Create a new subvolume for a project
sudo btrfs subvolume create /mnt/data/project_alpha
# Show detailed information about a subvolume
sudo btrfs subvolume show /mnt/data/project_alpha
# Delete a subvolume (must be empty of child subvolumes)
sudo btrfs subvolume delete /mnt/data/project_alpha
# Set default subvolume (the one mounted when no subvol= is specified)
sudo btrfs subvolume set-default 256 /mnt/data
4. Snapshots — Instant Rollback
Snapshots are where Btrfs earns its place on production systems. Because Btrfs uses Copy-on-Write, taking a snapshot is an instantaneous, zero-cost operation — it simply creates a new subvolume that shares all the same data blocks as the original. Space is only consumed as the two diverge over time.
Creating Snapshots
# Read-write snapshot (can be modified — useful as a testing clone)
sudo btrfs subvolume snapshot /mnt/data/@ /mnt/data/@snapshots/root-$(date +%Y%m%d-%H%M%S)
# Read-only snapshot (cannot be modified — preferred for backups and send/receive)
sudo btrfs subvolume snapshot -r /mnt/data/@ /mnt/data/@snapshots/root-$(date +%Y%m%d-%H%M%S)
# List snapshots
sudo btrfs subvolume list -s /mnt/data
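Timestamped snapshots accumulate quickly, so pair them with a retention policy. A minimal pruning sketch — the directory layout, keep count, and dry-run default are assumptions, not a standard tool:

```shell
#!/bin/bash
# Sketch: delete all but the newest N snapshots in a directory.
# Relies on names embedding sortable timestamps (YYYYmmdd-HHMMSS),
# so lexicographic order equals chronological order.
prune_snapshots() {
    local snap_dir="$1" keep="$2" dry_run="${3:-yes}"
    ls -1 "$snap_dir" | sort | head -n -"$keep" | while read -r snap; do
        if [ "$dry_run" = "yes" ]; then
            echo "would delete: $snap_dir/$snap"
        else
            btrfs subvolume delete "$snap_dir/$snap"
        fi
    done
}

# Dry run first: show what would go, keeping the newest 7
# prune_snapshots /mnt/data/@snapshots 7 yes
```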
Restoring from a Snapshot
Rolling back to a snapshot is a two-step process: rename the broken subvolume out of the way, then rename the snapshot into its place.
# Step 1: Boot into a live environment or use a separate mount point
# Mount the top-level Btrfs volume
sudo mount -o subvolid=5 /dev/sdb /mnt/recovery
# Step 2: Rename the current (broken) root subvolume
sudo mv /mnt/recovery/@ /mnt/recovery/@broken-$(date +%Y%m%d)
# Step 3: Make a read-write snapshot of the restore point
sudo btrfs subvolume snapshot \
/mnt/recovery/@snapshots/root-20260430-0200 \
/mnt/recovery/@
# Step 4: Reboot into the restored system
sudo reboot
Automating Snapshots with Snapper
For production systems, automate snapshot management with Snapper:
# Install snapper
sudo apt install snapper # Debian/Ubuntu
sudo dnf install snapper # RHEL/Fedora
# Create a snapper configuration for root
sudo snapper -c root create-config /
# Take a manual snapshot
sudo snapper -c root create --description "Before nginx upgrade"
# List snapshots
sudo snapper -c root list
# Diff between two snapshots
sudo snapper -c root diff 3..4
# Rollback to snapshot #3
sudo snapper -c root rollback 3
5. Compression — Transparent and Powerful
Btrfs compression happens transparently at the filesystem level — applications write uncompressed data, and Btrfs compresses it before writing to disk. There is no application-level change required.
Compression Algorithms
- zstd (recommended) — Best overall ratio and speed. Level 1 (default) is extremely fast; levels 3–6 offer better compression at modest CPU cost.
- lzo — Fastest compression, lowest ratio. Good for CPU-constrained systems or fast NVMe where compression is barely needed.
- zlib — Highest compression ratio, most CPU overhead. Good for archival or cold storage.
Enabling Compression
# Mount with zstd compression (applies to new writes)
sudo mount -o compress=zstd /dev/sdb /mnt/data
# Mount with zstd at a specific level (level 3 — good for text/logs)
sudo mount -o compress=zstd:3 /dev/sdb /mnt/data
# Force compression even for files that Btrfs would skip (like already-compressed files)
sudo mount -o compress-force=zstd /dev/sdb /mnt/data
# Retroactively compress existing files (note: defragmenting breaks
# reflinks with existing snapshots, which can increase total space usage)
sudo btrfs filesystem defragment -r -czstd /mnt/data
Measuring Compression Effectiveness
# Show filesystem space usage by chunk type (allocation, not compression ratio)
sudo btrfs filesystem df /mnt/data
# Detailed compression statistics (requires the separate compsize package)
sudo compsize /mnt/data
# Example compsize output:
# Processed 48200 files, 312000 regular extents (312000 refs), 0 inline.
# Type       Perc     Disk Usage   Uncompressed Referenced
# TOTAL       42%          14G          33G          33G
# none       100%         7.5G         7.5G         7.5G
# zstd        25%         6.5G        25.5G        25.5G
In typical mixed server environments (logs, configs, source code, VM images), zstd compression commonly yields a 40–60% reduction in disk space with negligible CPU overhead on modern hardware.
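That headline figure is simple arithmetic on the disk-usage and uncompressed columns; a quick sketch (pure calculation, no Btrfs calls — the helper name is illustrative):

```shell
#!/bin/bash
# Sketch: percent of disk space saved, given disk usage and uncompressed
# size in the same unit (e.g. the GiB values compsize reports).
compression_savings() {
    local disk="$1" uncompressed="$2"
    # saved% = (1 - disk/uncompressed) * 100, rounded to the nearest integer
    awk -v d="$disk" -v u="$uncompressed" 'BEGIN { printf "%.0f\n", (1 - d / u) * 100 }'
}

compression_savings 14 33   # prints 58 (i.e. 58% saved)
```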
6. Btrfs RAID Modes
Btrfs supports software RAID natively — no mdadm required. You specify the RAID mode for both data and metadata separately at format time, or you can convert existing filesystems.
Supported RAID Profiles
- RAID 0 — Striping across devices. Better performance, no redundancy. All devices must survive.
- RAID 1 — Mirroring. Data exists on at least 2 devices. Can survive one device failure.
- RAID 10 — Striped mirror. Requires 4 devices minimum. Performance + redundancy.
- RAID 5/6 — Parity-based. Note: still considered unstable upstream in 2026 (the write-hole problem remains) — do not use for production data.
Setting Up RAID 1 (Mirror)
# Format two drives as RAID 1 (data mirrored, metadata mirrored)
sudo mkfs.btrfs -d raid1 -m raid1 -L mirror_pool /dev/sdb /dev/sdc
# Mount
sudo mount /dev/sdb /mnt/raid1
# Show the RAID topology
sudo btrfs filesystem show /mnt/raid1
# Check device stats (error counters)
sudo btrfs device stats /mnt/raid1
Setting Up RAID 0 (Stripe)
# Format four drives as RAID 0
sudo mkfs.btrfs -d raid0 -m raid1 -L stripe_pool /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Note: Keep metadata as raid1 even when data is raid0 — protects filesystem structure
Adding a Device to an Existing Btrfs Filesystem
# Add a new device to the pool
sudo btrfs device add /dev/sdd /mnt/data
# Convert data and metadata to RAID 1 and spread chunks across all devices
# (add --bg to run in the background; can take hours on large filesystems)
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/data
# Monitor balance progress
sudo btrfs balance status /mnt/data
# Remove a device (data will be migrated off first)
sudo btrfs device remove /dev/sdc /mnt/data
7. Send/Receive — Incremental Backups
Btrfs send and receive are arguably the most powerful backup primitives in the Linux storage ecosystem. btrfs send serializes the contents of a read-only snapshot into a stream; btrfs receive applies that stream to a Btrfs filesystem on another machine or disk. Incremental sends only transmit the difference between two snapshots — making efficient remote backups trivial.
Full Backup Send
# On the source machine — take a read-only snapshot
sudo btrfs subvolume snapshot -r /mnt/data/@ \
/mnt/data/@snapshots/root-20260501
# Send the full snapshot to a backup server via SSH
sudo btrfs send /mnt/data/@snapshots/root-20260501 | \
ssh backup@192.168.1.100 \
"sudo btrfs receive /mnt/backup/"
Incremental Backup Send
# Take a new snapshot (parent is the previous one sent)
sudo btrfs subvolume snapshot -r /mnt/data/@ \
/mnt/data/@snapshots/root-20260502
# Send only the delta between the two snapshots
sudo btrfs send \
-p /mnt/data/@snapshots/root-20260501 \
/mnt/data/@snapshots/root-20260502 | \
ssh backup@192.168.1.100 \
"sudo btrfs receive /mnt/backup/"
This approach can reduce backup transfer sizes by 95%+ for daily snapshots of systems that change incrementally. It forms the basis of tools like btrbk and btrfs-backup-ng, which automate snapshot rotation and remote send/receive.
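The parent-selection step is the only fiddly part of automating this. A sketch that only prints the command it would run, assuming timestamped snapshot names (the helper name and the remote backup path are placeholders):

```shell
#!/bin/bash
# Sketch: build a `btrfs send` command using the two newest snapshots in a
# directory as parent and child. Prints the command instead of running it.
build_incremental_send() {
    local snap_dir="$1" remote="$2"
    local child parent
    child=$(ls -1 "$snap_dir" | sort | tail -n 1)
    parent=$(ls -1 "$snap_dir" | sort | tail -n 2 | head -n 1)
    if [ -z "$child" ]; then
        echo "no snapshots found in $snap_dir" >&2
        return 1
    elif [ "$parent" = "$child" ]; then
        # Only one snapshot exists: fall back to a full send
        echo "btrfs send $snap_dir/$child | ssh $remote 'btrfs receive /mnt/backup/'"
    else
        echo "btrfs send -p $snap_dir/$parent $snap_dir/$child | ssh $remote 'btrfs receive /mnt/backup/'"
    fi
}
```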
Local Backup Example (to a Second Drive)
# Send snapshot to a local backup Btrfs filesystem
sudo btrfs send /mnt/data/@snapshots/root-20260501 | \
sudo btrfs receive /mnt/external_backup/
# Verify the received subvolume
sudo btrfs subvolume list /mnt/external_backup/
8. Scrub and Balance — Routine Maintenance
Two commands form the core of Btrfs maintenance: scrub and balance. Both run online against a mounted filesystem without requiring downtime.
Btrfs Scrub
Scrub reads all data and metadata on the filesystem, verifies checksums, and automatically repairs corruption where redundancy exists (RAID 1/10). It is the Btrfs equivalent of a disk check — run it monthly on production systems.
# Start a scrub in the background
sudo btrfs scrub start /mnt/data
# Check scrub status
sudo btrfs scrub status /mnt/data
# Start scrub and wait for completion (useful in scripts)
sudo btrfs scrub start -B /mnt/data
# Example output:
# scrub status for xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
# scrub started at Mon May 4 02:00:01 2026 and finished after 00:14:33
# total bytes scrubbed: 512.00GiB with 0 errors
Add scrub to a monthly cron job or systemd timer:
# /etc/cron.monthly/btrfs-scrub
#!/bin/bash
/sbin/btrfs scrub start -B /mnt/data >> /var/log/btrfs-scrub.log 2>&1
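On systemd hosts, a timer pair is the more idiomatic scheduler than cron. A sketch that writes the two unit files — the unit names and mount point are assumptions, and the unit directory is parameterized rather than hard-coded to /etc/systemd/system:

```shell
#!/bin/bash
# Sketch: write a monthly scrub service + timer pair into a unit directory.
install_scrub_timer() {
    local unit_dir="$1" mountpoint="$2"
    printf '%s\n' \
        '[Unit]' \
        "Description=Btrfs scrub on $mountpoint" \
        '' \
        '[Service]' \
        'Type=oneshot' \
        "ExecStart=/usr/sbin/btrfs scrub start -B $mountpoint" \
        > "$unit_dir/btrfs-scrub.service"
    printf '%s\n' \
        '[Unit]' \
        'Description=Monthly Btrfs scrub timer' \
        '' \
        '[Timer]' \
        'OnCalendar=monthly' \
        'Persistent=true' \
        '' \
        '[Install]' \
        'WantedBy=timers.target' \
        > "$unit_dir/btrfs-scrub.timer"
}

# Usage (as root):
# install_scrub_timer /etc/systemd/system /mnt/data
# systemctl daemon-reload && systemctl enable --now btrfs-scrub.timer
```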
Btrfs Balance
Balance redistributes data across chunks and devices. It is necessary after adding or removing devices, and periodically to clean up unbalanced chunk allocation.
# Start a full balance (can take hours — limit with filters in production)
sudo btrfs balance start /mnt/data
# Balance only chunks that are less than 10% utilized (efficient cleanup)
sudo btrfs balance start -dusage=10 -musage=10 /mnt/data
# Check balance status
sudo btrfs balance status /mnt/data
# Pause and resume a balance
sudo btrfs balance pause /mnt/data
sudo btrfs balance resume /mnt/data
# Cancel a running balance
sudo btrfs balance cancel /mnt/data
Filesystem Usage and Health
# Show overall space usage by type (data, metadata, system)
sudo btrfs filesystem df /mnt/data
# Show usage with human-readable allocation details
sudo btrfs filesystem usage /mnt/data
# Show all Btrfs filesystems on the system
sudo btrfs filesystem show
# Show device error counters (check these after any hardware issues)
sudo btrfs device stats /mnt/data
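Those error counters are easy to check from a monitoring script; a sketch that parses the `btrfs device stats` output format, reading from stdin so it can be exercised without a mounted Btrfs filesystem (the helper name is illustrative):

```shell
#!/bin/bash
# Sketch: exit nonzero if any error counter on stdin is above zero.
# Input lines look like: [/dev/sdb].write_io_errs   0
check_device_stats() {
    awk '$2 != 0 { bad = 1 } END { exit bad }'
}

# Usage:
# btrfs device stats /mnt/data | check_device_stats || echo "device errors detected"
```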
9. Btrfs vs ext4 vs XFS — Comparison Table
| Feature | Btrfs | ext4 | XFS |
|---|---|---|---|
| Copy-on-Write | Yes (native) | No | No (reflinks only) |
| Built-in Snapshots | Yes | No (requires LVM) | No (requires LVM) |
| Transparent Compression | Yes (zstd/lzo/zlib) | No | No |
| Data Checksumming | Yes (data + metadata) | Metadata only | Metadata only |
| Multi-Device / RAID | Yes (native) | No (requires mdadm) | No (requires mdadm) |
| Online Resize (grow) | Yes | Yes | Yes |
| Online Resize (shrink) | Yes | Offline only | No |
| Subvolumes | Yes | No | No |
| Send/Receive Backup | Yes | No | No |
| Max Filesystem Size | 16 EiB | 1 EiB | 8 EiB |
| Max File Size | 16 EiB | 16 TiB | 8 EiB |
| Production Maturity | Stable (since ~2020) | Extremely mature | Very mature |
| Best For | NAS, desktops, general servers, snapshot-heavy workloads | Boot volumes, legacy compatibility, simplicity | High-throughput I/O, large files, databases |
| Avoid For | High-IOPS databases (InnoDB/PostgreSQL direct) | Multi-device pools, space-constrained systems | Small filesystems (<1GB), shrinking volumes |
10. Key Commands Quick Reference
Filesystem Operations
# Format
mkfs.btrfs -L label /dev/sdX
mkfs.btrfs -d raid1 -m raid1 /dev/sdX /dev/sdY
# Mount
mount -o compress=zstd,noatime,ssd /dev/sdX /mnt/point
mount -o subvol=@home /dev/sdX /mnt/home
# Filesystem info
btrfs filesystem show
btrfs filesystem df /mnt/data
btrfs filesystem usage /mnt/data
Subvolume Operations
btrfs subvolume create /mnt/data/mysubvol
btrfs subvolume list /mnt/data
btrfs subvolume show /mnt/data/mysubvol
btrfs subvolume delete /mnt/data/mysubvol
btrfs subvolume set-default ID /mnt/data
Snapshot Operations
# Read-write snapshot
btrfs subvolume snapshot /mnt/data/@ /mnt/data/@snapshots/snap-$(date +%Y%m%d)
# Read-only snapshot (for backups/send)
btrfs subvolume snapshot -r /mnt/data/@ /mnt/data/@snapshots/snap-ro-$(date +%Y%m%d)
Send/Receive
# Full send
btrfs send /mnt/data/@snapshots/snap-ro-20260501 | btrfs receive /mnt/backup/
# Incremental send
btrfs send -p /mnt/data/@snapshots/snap-ro-20260501 \
/mnt/data/@snapshots/snap-ro-20260502 | btrfs receive /mnt/backup/
Compression
mount -o compress=zstd /dev/sdX /mnt/data
mount -o compress=zstd:3 /dev/sdX /mnt/data
btrfs filesystem defragment -r -czstd /mnt/data
Maintenance
# Scrub
btrfs scrub start /mnt/data
btrfs scrub status /mnt/data
# Balance
btrfs balance start -dusage=10 /mnt/data
btrfs balance status /mnt/data
btrfs balance pause /mnt/data
# Device management
btrfs device add /dev/sdZ /mnt/data
btrfs device remove /dev/sdZ /mnt/data
btrfs device stats /mnt/data
Conclusion
Btrfs has crossed the threshold from “interesting experiment” to “production-ready choice” for a wide range of Linux workloads. Its combination of native snapshots, transparent compression, built-in checksumming, and efficient send/receive backups gives sysadmins capabilities that previously required multiple layers of tooling (LVM, mdadm, rsync, and custom snapshot scripts) in a single, unified filesystem.
The key points to take away from this guide:
- Use subvolumes from day one — they cost nothing and enable everything else.
- Enable zstd compression on all general-purpose volumes — it almost always saves space with minimal CPU cost.
- Take read-only snapshots before any major system change — rollback is seconds, not hours.
- Use send/receive for efficient, incremental off-site backups — it beats rsync for Btrfs-to-Btrfs replication.
- Run scrub monthly — silent data corruption is real, and Btrfs is the only standard Linux filesystem that catches and repairs it automatically.
- Avoid Btrfs RAID 5/6 in production until further notice — RAID 1 and RAID 10 are solid.
Whether you’re building a home NAS, a fleet of application servers, or a development workstation, Btrfs in 2026 deserves a place in your storage toolkit. Start with a non-critical volume, get comfortable with the snapshot and send/receive workflow, and you will quickly wonder how you managed without it.
About Ramesh Sundararamaiah
Red Hat Certified Architect
Expert in Linux system administration, DevOps automation, and cloud infrastructure. Specializing in Red Hat Enterprise Linux, CentOS, Ubuntu, Docker, Ansible, and enterprise IT solutions.