Adding a New Disk to a Linux Server: The Complete Admin Walkthrough
Table of Contents
- How Linux Detects a New Disk
- Two Main Approaches: Direct Mount vs LVM
- Partitioning: fdisk vs parted vs gdisk
- Formatting: ext4 vs XFS β Which to Choose
- Mounting: Temporary Testing vs Permanent Configuration
- The LVM Approach: Adding the Disk to a Volume Group
- Cloud VMs: Attaching and Configuring Block Storage
- Common Mistakes That Cause Boot Failures
How Linux Detects a New Disk
When a new physical disk or virtual disk is connected to a running Linux server, the kernel detects it through the SCSI subsystem (for traditional SATA and SAS drives), the NVMe subsystem (for modern SSDs), or the virtio driver (for virtual disks in cloud VMs). The kernel creates device nodes in /dev automatically.
Verify that the OS has detected the new disk:
lsblk
Expected output showing a new unpartitioned disk alongside the existing system disk:
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                         8:0    0   50G  0 disk
├─sda1                      8:1    0    1G  0 part /boot
└─sda2                      8:2    0   49G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0   49G  0 lvm  /
sdb                         8:16   0  100G  0 disk
The new disk is sdb: listed with no partitions beneath it and no mount point. It is completely raw and unformatted.
Also check the disk’s partition table (or lack thereof):
fdisk -l /dev/sdb
And look at recent kernel messages to confirm the disk was recognized:
dmesg | grep -iE "sd[a-z]|nvme" | tail -20
On physical servers with hot-swap bays, if the new disk does not appear automatically after being inserted, trigger a rescan of all SCSI hosts:
for host in /sys/class/scsi_host/host*/scan; do echo "- - -" > "$host"; done
Two Main Approaches: Direct Mount vs LVM
Before partitioning, you need to decide how this disk will be used. This decision has long-term implications for how flexible and manageable the storage will be:
- Direct mount: Partition the disk, format it with a filesystem, and mount it at a directory. Simple and low-overhead, but the size is fixed to the partition size; resizing later is harder.
- LVM (Logical Volume Manager): Add the disk as a physical volume to an existing or new volume group. More setup initially, but provides online resizing, point-in-time snapshots, and the ability to pool multiple physical disks into a single logical storage pool.
For production storage that will grow over time, LVM is almost always the right choice. For simple, single-purpose storage (a dedicated media archive disk, a fixed-size database replica disk) where you know the size will never need to change, direct mount is perfectly reasonable.
Partitioning: fdisk vs parted vs gdisk
The partitioning tool you use depends on the disk size and partition table type you want to create:
- fdisk: The classic tool, works with MBR (Master Boot Record) partition tables. Familiar and widely documented, but MBR has a hard limit of 2 TB. Any disk larger than 2 TB requires GPT.
- gdisk: The GPT counterpart of fdisk, with the same interactive workflow. It creates GPT partition tables, making it the interactive choice for disks over 2 TB.
- parted: Supports both MBR and GPT partition tables, is scriptable (non-interactive mode), and is preferred when you need to automate partitioning. This is what most provisioning tools use.
As a best practice: use GPT for all new disks, regardless of size. GPT is a modern standard with no 2 TB limit, supports up to 128 partitions, and includes redundant partition table headers (at the beginning and end of the disk) for better resilience to data corruption.
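To confirm which partition table (if any) a disk already carries, lsblk can print the PTTYPE column. A quick sketch, with /dev/sdb as the example device name:

```shell
# PTTYPE shows "gpt", "dos" (MBR), or nothing at all for a raw disk
lsblk -o NAME,SIZE,PTTYPE /dev/sdb
```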
Partitioning with parted (GPT – recommended approach)
# Create a GPT partition table on the new disk
parted /dev/sdb --script mklabel gpt
# Create a single partition using the full disk
parted /dev/sdb --script mkpart primary ext4 0% 100%
# Tell the kernel to re-read the partition table
partprobe /dev/sdb
Verify the partition was created:
lsblk /dev/sdb
Expected output:
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sdb      8:16   0  100G  0 disk
└─sdb1   8:17   0  100G  0 part
The partprobe command is important: without it, the kernel may not see the new partition until the next reboot.
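If you want to confirm the new partition is aligned to the disk's optimal I/O boundaries (misalignment can hurt performance on SSDs and RAID arrays), parted has a built-in check; the device name below is the example from above:

```shell
# Prints "1 aligned" when partition 1 starts on an optimally aligned boundary
parted /dev/sdb align-check optimal 1
```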
Partitioning with fdisk (for MBR, disks under 2 TB)
fdisk /dev/sdb
Inside the fdisk interactive session:
- Press n to create a new partition
- Press p for a primary partition
- Press 1 for partition number 1
- Press Enter twice to accept the default first and last sectors (full disk)
- Press w to write the partition table to disk and exit
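For automation, the same result can be produced non-interactively. A sketch using sfdisk, assuming the whole disk should become a single Linux partition (/dev/sdb is an example device name):

```shell
# "type=83" (Linux) with no start/size makes sfdisk create one MBR
# partition spanning the entire disk
echo 'type=83' | sfdisk /dev/sdb
```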
Formatting: ext4 vs XFS – Which to Choose
Both ext4 and XFS are mature, production-grade journaling filesystems. For most workloads the practical performance difference is negligible, but there are meaningful differences in behavior and tooling:
- ext4: The default filesystem on Ubuntu and Debian. Excellent general-purpose performance. Handles small files and mixed workloads well. Supports both online growing and offline shrinking. Well-understood recovery tools (e2fsck, debugfs). Good choice when in doubt.
- XFS: Default on RHEL, CentOS, Rocky Linux, and Amazon Linux. Excellent performance with large files, high-concurrency writes, and database workloads (parallel I/O patterns). Can only be grown, not shrunk; this is an architectural constraint. Good choice for database storage, media files, and large-file workloads.
For a general-purpose data disk on Ubuntu/Debian:
mkfs.ext4 /dev/sdb1
Expected output:
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 26214400 4k blocks and 6553600 inodes
Filesystem UUID: 7a8b9c0d-1e2f-3a4b-5c6d-7e8f90123456
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: done
For XFS (with a filesystem label for easier identification):
mkfs.xfs -L datavolume /dev/sdb1
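One optional tweak worth knowing for ext4: by default it reserves 5% of blocks for root, which is 5 GB on a 100 GB disk. On a dedicated data volume that reservation can safely be reduced; a sketch, assuming the partition created above:

```shell
# Lower the root-reserved block percentage from the 5% default to 1%
tune2fs -m 1 /dev/sdb1
```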
Mounting: Temporary Testing vs Permanent Configuration
Create the Mount Point Directory
mkdir -p /data
The mount point directory must exist before you can mount anything to it. The -p flag creates any necessary parent directories.
Temporary Mount – Always Test First
mount /dev/sdb1 /data
df -h /data
Expected output:
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       100G   24K   95G   1% /data
This mount does not survive a reboot. It is useful for verifying that the disk is formatted correctly and accessible before committing it to fstab.
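Beyond df output, it is worth confirming the filesystem is actually writable before committing it to fstab. A minimal sanity check, using the /data mount point from above:

```shell
# Write a small test file, read it back, then clean up
echo "disk test" > /data/.write-test
cat /data/.write-test
rm /data/.write-test
```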
Permanent Mount via /etc/fstab – Use UUID, Never Device Name
This is where many admins make a critical and painful mistake. Never use device names like /dev/sdb1 in /etc/fstab.
Here is why: Linux assigns device names (sda, sdb, sdc) based on the order the kernel detects disks at boot time. If you add another disk, change the boot order, move disks to different ports, or if the hardware presents disks in a different order after a reboot, /dev/sdb might become /dev/sdc. Your fstab entry then either mounts the wrong disk entirely, or fails to mount and prevents the system from booting cleanly.
UUIDs are generated at filesystem creation time and are globally unique. They never change regardless of how many disks you add or remove, or what order they are detected. Always use UUID in fstab.
Get the UUID of the new partition:
blkid /dev/sdb1
Expected output:
/dev/sdb1: UUID="7a8b9c0d-1e2f-3a4b-5c6d-7e8f90123456" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="abc123de-01"
Add this line to /etc/fstab:
UUID=7a8b9c0d-1e2f-3a4b-5c6d-7e8f90123456 /data ext4 defaults 0 2
The fstab fields explained:
- Device: UUID=… (identifies the filesystem)
- Mount point: /data
- Filesystem type: ext4
- Options: defaults (rw, suid, dev, exec, auto, nouser, async)
- Dump: 0 (skip for backup purposes)
- fsck order: 2 for data disks (1 is for root, 0 to skip)
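To avoid UUID copy-paste errors entirely, the entry can be generated from blkid output. A sketch, assuming root privileges; the optional nofail option lets the system finish booting even if the disk is missing, at the cost of silently skipping the mount:

```shell
# Keep a backup of fstab before touching it
cp /etc/fstab /etc/fstab.bak

# Read the UUID straight from the filesystem, no manual typing
uuid=$(blkid -s UUID -o value /dev/sdb1)

# Append the entry; drop "nofail" if a missing disk should halt the boot
echo "UUID=${uuid} /data ext4 defaults,nofail 0 2" >> /etc/fstab
```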
Testing fstab Before Rebooting – Always Do This
A bad fstab entry can prevent the server from booting cleanly: the system drops into emergency mode, and fixing it requires console access. Always test before rebooting:
# First unmount the disk if currently mounted
umount /data
# Test all fstab entries without rebooting
mount -a
If mount -a returns no errors, and df -h /data shows the disk mounted correctly, your fstab entry is valid. If mount -a returns any error message, fix the fstab entry before proceeding. Do not reboot until mount -a succeeds cleanly.
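On systems with a recent util-linux (2.29 or later), findmnt can additionally lint the entire fstab, catching unknown filesystem types, malformed options, and missing mount point directories:

```shell
# Parse and sanity-check every /etc/fstab entry; --verbose explains each finding
findmnt --verify --verbose
```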
The LVM Approach: Adding the Disk to a Volume Group
If you want to use the new disk with LVM, either to extend an existing volume group or to create a new one, the process is different from direct mount. With LVM, you typically do not need to partition the disk first; LVM can work with the raw device directly.
# Step 1: Initialize as an LVM physical volume (no partition needed)
pvcreate /dev/sdb
# Verify it was created
pvs
Expected output from pvs:
PV         VG        Fmt  Attr PSize   PFree
/dev/sda2  ubuntu-vg lvm2 a--  <49.00g       0
/dev/sdb             lvm2 ---  100.00g 100.00g
# Step 2a: Add to an existing volume group
vgextend ubuntu-vg /dev/sdb
# Step 2b: OR create a brand-new volume group
vgcreate data-vg /dev/sdb
# Step 3: Create a logical volume using all available space
lvcreate -l 100%FREE -n data-lv data-vg
# Step 4: Format the logical volume
mkfs.ext4 /dev/data-vg/data-lv
# Step 5: Create mount point and mount
mkdir -p /data
mount /dev/data-vg/data-lv /data
# Step 6: Add to fstab using UUID
blkid /dev/data-vg/data-lv
LVM logical volumes exposed through device mapper have stable paths that do not change between reboots (/dev/VGname/LVname), so using the device mapper path in fstab is also acceptable for LVM volumes. However, using UUID is still preferred for consistency.
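The payoff of the LVM route comes later, when the volume needs to grow. After adding another physical volume with vgextend, the logical volume and its filesystem can be extended online in one step; a sketch using the volume names from above:

```shell
# -r (--resizefs) grows the filesystem together with the logical volume,
# so no separate resize2fs/xfs_growfs step is needed
lvextend -l +100%FREE -r /dev/data-vg/data-lv
```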
Cloud VMs: Attaching and Configuring Block Storage
On AWS EC2, attaching an EBS volume through the AWS console or CLI makes the volume appear as a new block device within seconds. The kernel detects it automatically via the hot-plug mechanism.
# After attaching at the provider console, verify it appeared
lsblk
# On modern (Nitro-based) AWS instance types, NVMe devices appear as:
# nvme1n1, nvme2n1, etc.
# On older Xen-based instances:
# xvdb, xvdc, etc.
# Partition, format, and mount as described above
# For AWS specifically, use ext4 or XFS (both work, XFS is default on Amazon Linux)
On Google Cloud, new Persistent Disks appear immediately after attachment. On Azure, newly attached Managed Disks appear within a few seconds. The partition and format steps are identical regardless of cloud provider.
Common Mistakes That Cause Boot Failures
Boot failures caused by incorrect fstab entries are one of the most stressful situations in system administration because they require console or recovery access to fix. Avoid these mistakes:
- Using /dev/sdb1 instead of UUID: The most common mistake. Device names change; UUIDs never do.
- Wrong filesystem type in fstab: Writing ext4 for an XFS-formatted partition causes a mount failure at boot.
- Typo in the UUID: UUIDs are long and error-prone to type manually. Always copy-paste from blkid output.
- Mount point directory does not exist: Create the directory before adding the fstab entry.
- Not running mount -a to test: Always test before rebooting. A reboot with an untested bad fstab is entirely avoidable.
If you do end up in emergency mode due to a bad fstab, the recovery procedure is: remount the root filesystem as read-write (mount -o remount,rw /), edit /etc/fstab to fix the error, then exit the emergency shell to continue booting normally. Having this procedure memorized before you need it is worth the few minutes it takes.
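The recovery steps above, as they would be typed at the emergency shell:

```shell
# Root is typically mounted read-only in emergency mode; make it writable
mount -o remount,rw /

# Fix or comment out the bad entry (use whatever editor is available)
vi /etc/fstab

# Leave the emergency shell; the system continues the normal boot
exit
```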
About Ramesh Sundararamaiah
Red Hat Certified Architect
Expert in Linux system administration, DevOps automation, and cloud infrastructure. Specializing in Red Hat Enterprise Linux, CentOS, Ubuntu, Docker, Ansible, and enterprise IT solutions.