
The Disk Space Ghost: Why df Shows Your Disk is Full But You Can't Find the Files

🎯 Key Takeaways

  • The Ghost Files Taking Up Your Disk Space
  • Why df and du Often Disagree
  • Cause 1: Deleted Files Still Held Open by Processes
  • Cause 2: Inode Exhaustion
  • Cause 3: Files Hidden Under Mount Points
  • Cause 4: The /proc and tmpfs Confusion

The Ghost Files Taking Up Your Disk Space

It is one of the most frustrating experiences in Linux administration. The df -h command screams that your disk is 100% full. Your application is crashing, log writes are failing, and everything is breaking. You start frantically searching for large files using du, find, and every trick you know, and you find nothing. The disk appears full, but the files have vanished like ghosts.

This is not a mystery, and it is not data corruption. It is a predictable, explainable situation that has several distinct causes, each with a specific fix. Once you understand the mechanics behind why df and du disagree, you will be able to solve disk space ghost problems in minutes rather than hours of confused searching.

Why df and du Often Disagree

Before diving into solutions, you need to understand the fundamental difference between these two commands. df asks the filesystem itself how much space is used and available; it talks directly to the kernel's filesystem accounting. du walks the directory tree and adds up the sizes of files it can see. These two approaches can produce wildly different results, and each difference points to a specific underlying cause.

The key insight is this: the filesystem can have space allocated that is not visible in the directory tree. When that happens, df and du will disagree, and you have ghost space consuming your disk.
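
A quick way to see the two viewpoints side by side; the mount point here is just an example, so point it at whichever filesystem looks full:

```shell
# Compare the kernel's accounting (df) with the directory-tree sum (du)
# for one filesystem. The -x flag keeps du from crossing into other mounts.
fs=/                                 # example mount point; adjust to yours
df -B1 --output=used "$fs"           # bytes the filesystem says are allocated
du -sxB1 "$fs" 2>/dev/null || true   # bytes visible in the directory tree
                                     # (permission errors are expected on
                                     #  some trees when not running as root)
```

When the first number is dramatically larger than the second, one of the causes below is in play.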

Cause 1: Deleted Files Still Held Open by Processes

This is by far the most common cause of the ghost disk space problem, and it catches almost every Linux beginner off guard. Here is how it works:

When a process opens a file (like a web server opening a log file), the Linux kernel maintains a file handle for that process. If someone deletes the file while the process still has it open, something interesting happens: the file disappears from the directory (you cannot see it with ls, and du will not count it), but the actual disk space is not freed. The kernel keeps the file's data on disk until every process that has the file open closes it. Only then does the space get released back to the filesystem.

A classic real-world example: your application has been writing to /var/log/app/debug.log for months. The file grew to 15GB. Someone ran a log rotation script that deleted the file, but did not restart the application. The application still has the old file handle open and is still writing to what it thinks is the same file. From df's perspective: 15GB is still allocated. From du's perspective: the file is gone.
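
You can reproduce the whole effect safely in a shell, with this shell standing in for the long-running daemon (the descriptor number 3 is arbitrary):

```shell
# Recreate the deleted-but-open situation on a throwaway temp file.
tmp=$(mktemp)
exec 3>"$tmp"                  # open fd 3 on the file, like a daemon would
head -c 1048576 /dev/zero >&3  # write 1 MiB through the open descriptor
rm "$tmp"                      # "delete" it: ls and du no longer see the file
ls -l /proc/$$/fd/3            # the symlink target now ends in "(deleted)"
exec 3>&-                      # closing the last descriptor frees the space
```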

How to Find Deleted Files Held Open

The lsof command (list open files) is your weapon here. Run this:

lsof | grep deleted

This lists every file that is currently open by a process but has been deleted from the filesystem. The output looks like this:

nginx    1234  www-data  3w   REG   8,1  15728640000  /var/log/nginx/access.log (deleted)
mysql    5678  mysql     5w   REG   8,1   8589934592  /var/lib/mysql/binlog.000001 (deleted)

Those numbers in the middle are file sizes in bytes. The example above shows a deleted nginx log still consuming 15GB and a deleted MySQL binary log consuming 8GB. Combined, you found your ghost space.
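
To get a total of reclaimable bytes rather than eyeballing rows, you can sum the SIZE/OFF column (field 7 of lsof's default output); a small sketch, assuming lsof is installed:

```shell
# Sum the sizes of deleted-but-open files, in GiB. Matching the literal
# "(deleted)" marker avoids false hits on paths that merely contain "deleted".
lsof -nP 2>/dev/null | awk '/\(deleted\)/ {sum += $7}
    END {printf "%.1f GiB held by deleted open files\n", sum/2^30}'
```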

How to Fix It

The cleanest fix is to restart the process that holds the file open. This closes the file handle, and the kernel immediately releases the disk space:

systemctl restart nginx
systemctl restart mysql

If you cannot restart the service (for example, a production database with active connections), you have an alternative: you can truncate the file through the process's file descriptor without restarting it. Find the process ID and file descriptor number from the lsof output, then:

# PID is 1234, fd is 3 from the lsof output
> /proc/1234/fd/3

The > redirect truncates the file to zero bytes. The process continues writing, but you immediately reclaim the space. This works because the process still has a valid file handle; it just now points to an empty file.
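
The same trick, demonstrated end to end on a throwaway file so you can verify the behavior before trying it on a real production descriptor:

```shell
# Truncate a file through /proc/<pid>/fd/<fd> while it is held open.
tmp=$(mktemp)
exec 4>"$tmp"                  # this shell plays the part of the daemon
head -c 4096 /dev/zero >&4     # the "log" now holds 4096 bytes
: > /proc/$$/fd/4              # same move as "> /proc/1234/fd/3" above
                               # (": >" is the script-safe spelling of ">")
stat -c %s "$tmp"              # prints 0: space reclaimed, descriptor intact
exec 4>&-; rm -f "$tmp"
```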

Cause 2: Inode Exhaustion

Here is a scenario that genuinely confuses people: df -h shows you have gigabytes of space available, but you still cannot create new files. You try to write a file and get "No space left on device." What is happening?

You have run out of inodes, not disk blocks. Every filesystem has two separate resources: disk blocks (where file data is stored) and inodes (where file metadata is stored: permissions, ownership, timestamps, and pointers to the data blocks; the name itself lives in the directory entry). Each file and directory consumes exactly one inode regardless of its size. A zero-byte empty file uses the same single inode as a 10GB file.
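
The one-inode-per-file rule is easy to verify yourself:

```shell
# A 0-byte file and a 10 MiB file each occupy exactly one inode.
d=$(mktemp -d)
touch "$d/empty"                          # 0 bytes
head -c 10485760 /dev/zero > "$d/big"     # 10 MiB
stat -c '%n: %s bytes, inode %i' "$d"/*   # one inode number per file
rm -rf "$d"
```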

Inode exhaustion typically happens when you have an enormous number of tiny files. Common culprits include mail spools with millions of individual email files, PHP session file directories, cache directories with one file per cache entry, and temporary directories filled with small work files.

How to Check Inode Usage

Add the -i flag to df to see inode usage:

df -i

The output shows:

Filesystem      Inodes  IUsed   IFree IUse% Mounted on
/dev/sda1      6553600  6553598     2  100% /

That 100% in the IUse% column, with only 2 inodes free, is your problem. You have run out of inodes despite having disk space available.

Finding Which Directory Is Consuming All Inodes

You cannot simply look at file sizes; you need to count files per directory. This command finds the directories with the most files:

find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head -20

This prints the parent directory of every file, counts occurrences, and sorts by count. The directory at the top is your inode hog. Go there and investigate what is creating so many tiny files, then clean them out.

Fixing Inode Exhaustion Long-Term

Once you identify and clean the directory flooding inodes, also fix the root cause. Set up proper rotation or cleanup for whatever is generating the tiny files. For PHP sessions, configure session garbage collection properly. For mail spools, configure mail expiration. For cache directories, implement a TTL-based cleanup cron job.
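
As a concrete sketch, a TTL-based cleanup job of the kind described might look like this; the cache path and the 7-day TTL are assumptions, not values from any particular application:

```shell
# Hypothetical nightly cache cleanup -- run from cron, e.g.:
#   0 3 * * * /usr/local/sbin/cache-clean.sh
CACHE_DIR=/var/cache/myapp   # hypothetical path: point at your real cache
if [ -d "$CACHE_DIR" ]; then
    # Delete files untouched for more than 7 days, then prune empty dirs.
    find "$CACHE_DIR" -xdev -type f -mtime +7 -delete
    find "$CACHE_DIR" -xdev -mindepth 1 -type d -empty -delete
fi
```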

Cause 3: Files Hidden Under Mount Points

This one is subtle but important. When you mount a filesystem over a directory, any files that already exist in that directory become hidden: they are invisible and inaccessible as long as something is mounted on top of them. But they still exist on the underlying filesystem and consume space there.

For example, if you created files in /mnt/data/ before mounting an external drive there, those original files are now hidden under the mount. The external drive's df output looks fine, and a du of /mnt/data/ shows the mounted drive's contents. But the underlying partition (usually your root filesystem) is still reserving space for those hidden files.

How to Find Hidden Mount Files

Temporarily unmount the filesystem and check the directory:

umount /mnt/data
ls -la /mnt/data/
# If you see files here, these are your hidden space consumers
du -sh /mnt/data/

If you find files there, either move them out or delete them if they are not needed, then remount:

rm -rf /mnt/data/old_hidden_files
mount /dev/sdb1 /mnt/data

Cause 4: The /proc and tmpfs Confusion

Sometimes beginners see a tmpfs filesystem in their df output consuming gigabytes and panic. The /dev/shm mount, for example, is a shared memory filesystem that lives entirely in RAM. If a process allocates a lot of shared memory, it shows up in df for /dev/shm.

Similarly, /tmp on many modern systems is mounted as tmpfs and lives in RAM. Large temporary files end up consuming memory, not disk space, but they still appear in df output. To find what is in these locations:

du -sh /dev/shm/*
du -sh /tmp/*
du -sh /run/*

For /dev/shm, the consuming process will hold the file open. Use lsof /dev/shm/ to find which process owns the shared memory segments.

The Complete Disk Audit Workflow

When you encounter a "disk full but I can't find the files" situation, run through these steps in order:

  1. Confirm the problem: df -h to see which filesystem is full, df -i to check if it is inode exhaustion instead of block exhaustion.
  2. Check for deleted open files first (this resolves the majority of cases): lsof | grep deleted | awk '{print $7, $9}' | sort -rn | head -20
  3. Find large directories with du: du -h --max-depth=2 / 2>/dev/null | sort -rh | head -30
  4. Check for hidden mount files: Review your /etc/fstab and temporarily unmount non-critical filesystems to inspect what lies beneath.
  5. Check tmpfs and shared memory: du -sh /dev/shm/* /tmp/* /run/*
  6. Look for inode hogs if inode-exhausted: find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head -20
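
The whole checklist condenses into a small read-only audit script. One extra convenience used here, assuming lsof is available: its +L1 flag lists open files with a link count below 1 (i.e. deleted), which is tidier than grepping:

```shell
# One-shot disk audit for a mount point (default /). Read-only.
mp=${1:-/}
echo "== block usage ==";  df -h "$mp"
echo "== inode usage ==";  df -i "$mp"
echo "== deleted-but-open files (largest first) =="
lsof +L1 2>/dev/null | sort -k7 -rn | head -10
echo "== biggest directories =="
du -xh --max-depth=2 "$mp" 2>/dev/null | sort -rh | head -15
```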

How to Recover Space Safely

Once you have identified your ghost space, recover it safely:

  • For deleted open files: Restart the owning service or truncate via /proc/PID/fd/FD. Never kill -9 a database process without taking precautions.
  • For log files: Use truncate -s 0 /path/to/logfile rather than rm if the process is still running. This empties the file without removing it, so the file handle remains valid.
  • For inode exhaustion: Delete the small files in bulk: find /path/to/dir -type f -mtime +30 -delete removes files older than 30 days.
  • For temp files: find /tmp -mindepth 1 -mtime +7 -exec rm -rf {} \; 2>/dev/null cleans out entries older than a week (the -mindepth 1 keeps find from ever matching /tmp itself).
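
The difference between truncate -s 0 and rm is easy to see on a throwaway file: truncation keeps the inode (and therefore any open descriptors) valid while dropping the data:

```shell
f=$(mktemp)
head -c 8192 /dev/zero > "$f"
before=$(stat -c %i "$f")            # remember the inode number
truncate -s 0 "$f"                   # empty the file in place
stat -c 'inode %i, %s bytes' "$f"    # same inode, now 0 bytes
rm -f "$f"
```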

Prevention: Stop Ghost Disk Space Before It Happens

Set up proper log rotation using logrotate. For applications that cannot be restarted easily, either use the copytruncate option or configure a postrotate script that signals the application so it reopens its log file cleanly.

Monitor inode usage alongside block usage in your monitoring system; most monitoring tools check disk space but forget to check inodes. Set up alerts at 80% inode utilization, not just 80% block utilization, and implement disk space trending so you can predict when you will run out rather than discovering it when everything breaks.

Understanding why disk space disappears without visible files is a superpower in Linux administration. Armed with these tools and concepts, what used to be an hour-long mystery becomes a five-minute diagnosis.
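
For instance, a logrotate stanza along these lines; the path, the unit name, and the options here are illustrative, not taken from any real deployment:

```
# /etc/logrotate.d/myapp -- hypothetical application log rotation
/var/log/app/*.log {
    daily
    rotate 14
    compress
    missingok
    notifempty
    postrotate
        # ask the (hypothetical) service to reopen its log file
        systemctl kill -s HUP myapp.service 2>/dev/null || true
    endscript
}
```

For daemons that cannot be signaled at all, replace the postrotate block with the copytruncate option, accepting the small window in which lines written during the copy can be lost.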




🏷️ Tags: beginners disk filesystem storage troubleshooting

About Ramesh Sundararamaiah

Red Hat Certified Architect

Expert in Linux system administration, DevOps automation, and cloud infrastructure. Specializing in Red Hat Enterprise Linux, CentOS, Ubuntu, Docker, Ansible, and enterprise IT solutions.

