
KVM and libvirt on Linux: Complete Virtualization Guide for Sysadmins



KVM (Kernel-based Virtual Machine) is Linux’s built-in hypervisor — it ships with every modern kernel and turns your server into a full Type-1 hypervisor without any additional licensing cost. Combined with libvirt and its tooling, you get a production-grade virtualization platform that powers everything from home labs to cloud infrastructure. This guide walks through everything: installation, VM creation, networking, storage, snapshots, and live migration.


KVM Architecture and How It Works

KVM is implemented as a loadable kernel module (kvm.ko and either kvm-intel.ko or kvm-amd.ko) that exposes the /dev/kvm device. QEMU acts as the userspace component, emulating hardware devices and using the /dev/kvm interface to run guest code at near-native speed on real CPU hardware. libvirt provides a management API on top of QEMU/KVM, and tools like virsh, virt-install, and virt-manager sit on top of libvirt.

This layered architecture means guest VMs execute most instructions directly on physical CPU hardware through hardware-assisted virtualization (Intel VT-x or AMD-V). Only privileged instructions, device I/O, and memory management require intervention from the hypervisor, keeping overhead minimal — typically 2–8% for CPU-bound workloads.
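Each layer above maps to something you can check from a shell. A minimal sketch, with illustrative labels, that reports which pieces of the stack are present without aborting when one is missing:

```shell
#!/bin/sh
# Report whether each layer of the KVM stack is present on this host.
# Purely informational: prints "present" or "missing" per check.
check() {
  label=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "$label: present"
  else
    echo "$label: missing"
  fi
}
check "CPU virt extensions (vmx/svm)" grep -Eq 'vmx|svm' /proc/cpuinfo
check "kvm kernel module"             sh -c 'lsmod | grep -q "^kvm"'
check "/dev/kvm device node"          test -c /dev/kvm
check "libvirt daemon binary"         sh -c 'command -v libvirtd'
```

On a fully working host all four lines read "present"; anything "missing" points you at the corresponding section below.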

Prerequisites and CPU Virtualization Check

Verify Hardware Virtualization Support

# Check for Intel VT-x or AMD-V support
grep -E 'vmx|svm' /proc/cpuinfo | head -5

# Count CPU threads supporting virtualization
grep -c 'vmx\|svm' /proc/cpuinfo

# Verify KVM kernel module can load
lsmod | grep kvm

If grep returns output, your CPU supports hardware virtualization. If lsmod shows nothing for kvm, load the modules manually:

# Intel CPUs (kvm_intel pulls in the core kvm module automatically)
modprobe kvm_intel

# AMD CPUs
modprobe kvm_amd

# Confirm /dev/kvm exists
ls -la /dev/kvm

If virtualization is not in /proc/cpuinfo, enable it in BIOS/UEFI. Look for “Intel Virtualization Technology”, “VT-x”, “AMD-V”, or “SVM” in the CPU or security settings.

Installing KVM, QEMU, and libvirt

RHEL / Rocky Linux / AlmaLinux / Fedora

# Install the full virtualization stack
dnf install -y @virtualization

# Or install individual packages
dnf install -y \
    qemu-kvm \
    libvirt \
    libvirt-client \
    virt-install \
    virt-manager \
    virt-viewer \
    bridge-utils

# Start and enable the libvirt daemon
systemctl enable --now libvirtd

# Add your user to the libvirt group (avoids needing sudo for virsh);
# in a root shell, replace $(whoami) with your actual username
usermod -aG libvirt $(whoami)
newgrp libvirt

# Verify installation
virsh version
virt-host-validate

Ubuntu / Debian

apt update
apt install -y \
    qemu-kvm \
    libvirt-daemon-system \
    libvirt-clients \
    virtinst \
    virt-manager \
    bridge-utils \
    cpu-checker

# Check KVM readiness
kvm-ok

# Add user to groups (in a root shell, substitute your actual username)
adduser $(whoami) libvirt
adduser $(whoami) kvm

# Start libvirt
systemctl enable --now libvirtd

Post-Install Validation

# Run the host validation tool — all checks should pass
virt-host-validate

# Expected output:
# QEMU: Checking for hardware virtualization                  : PASS
# QEMU: Checking if device /dev/kvm exists                    : PASS
# QEMU: Checking if device /dev/kvm is accessible             : PASS
# QEMU: Checking if device /dev/vhost-net exists              : PASS
# LXC:  Checking for Linux >= 2.6.26                          : PASS

# List default network (should show 'default' active)
virsh net-list --all

Configuring VM Networking (Bridge and NAT)

Default NAT Network (Easiest — Works Out of the Box)

libvirt creates a default NAT network (virbr0) automatically. VMs on this network get IPs in the 192.168.122.0/24 range and can reach the internet through the host’s NAT, but are not directly reachable from the LAN. This is ideal for lab environments.

# Confirm the default network is active
virsh net-list --all
virsh net-info default

# If not started, start it
virsh net-start default
virsh net-autostart default

# View DHCP leases for running VMs
virsh net-dhcp-leases default
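If the default 192.168.122.0/24 range collides with your LAN, a second NAT network can sit alongside it. A sketch, assuming the name labnet, bridge virbr10, and the 10.10.10.0/24 range are free to use on your host; it registers the network only when virsh is available:

```shell
#!/bin/sh
# Write a custom NAT network definition; name, bridge, and subnet are examples.
cat > /tmp/labnet.xml << 'EOF'
<network>
  <name>labnet</name>
  <forward mode="nat"/>
  <bridge name="virbr10" stp="on" delay="0"/>
  <ip address="10.10.10.1" netmask="255.255.255.0">
    <dhcp>
      <range start="10.10.10.100" end="10.10.10.200"/>
    </dhcp>
  </ip>
</network>
EOF

# Register and start it where libvirt is present
if command -v virsh >/dev/null 2>&1; then
  virsh net-define /tmp/labnet.xml &&
    virsh net-start labnet &&
    virsh net-autostart labnet ||
    echo "net-define failed; check that libvirtd is running"
else
  echo "virsh not found; definition written to /tmp/labnet.xml"
fi
```

VMs join it with --network network=labnet at install time.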

Bridged Network (Production — VMs Get LAN IPs)

For production use where VMs need to be reachable on your LAN, create a Linux bridge attached to a physical interface.

# Method 1: NetworkManager (RHEL/Rocky/Ubuntu 20.04+)
nmcli connection add type bridge ifname br0 con-name br0
nmcli connection add type bridge-slave ifname eth0 master br0 con-name br0-eth0
nmcli connection modify br0 bridge.stp no
nmcli connection up br0
nmcli connection up br0-eth0

# Verify bridge is up
ip addr show br0
bridge link show

# Method 2: Define bridge network in libvirt XML
cat > /tmp/bridge-net.xml << 'EOF'
<network>
  <name>bridged-lan</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
EOF

virsh net-define /tmp/bridge-net.xml
virsh net-start bridged-lan
virsh net-autostart bridged-lan

Storage Pools and Disk Images

Default Storage Pool

# View existing storage pools
virsh pool-list --all

# Default pool is at /var/lib/libvirt/images
virsh pool-info default

# Create a new pool on a separate disk or partition
virsh pool-define-as fastpool dir --target /data/vms
virsh pool-build fastpool
virsh pool-start fastpool
virsh pool-autostart fastpool

Create Disk Images

# Create a thin-provisioned qcow2 image (preferred format — supports snapshots)
qemu-img create -f qcow2 /var/lib/libvirt/images/vm-disk.qcow2 50G

# Create a fully pre-allocated raw image (slightly faster I/O, no internal snapshots)
qemu-img create -f raw -o preallocation=full /var/lib/libvirt/images/vm-raw.img 50G

# Inspect an existing image
qemu-img info /var/lib/libvirt/images/vm-disk.qcow2

# Resize an existing image (VM must be shut down)
qemu-img resize /var/lib/libvirt/images/vm-disk.qcow2 +20G

Creating Virtual Machines with virt-install

Install from ISO

# Download an ISO first (example: Rocky Linux 9)
wget -P /var/lib/libvirt/images/ \
  https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.4-x86_64-minimal.iso

# Create VM from ISO with console access
virt-install \
  --name rocky9-server \
  --ram 4096 \
  --vcpus 2 \
  --cpu host-passthrough \
  --disk path=/var/lib/libvirt/images/rocky9-server.qcow2,size=40,format=qcow2 \
  --cdrom /var/lib/libvirt/images/Rocky-9.4-x86_64-minimal.iso \
  --network network=default \
  --os-variant rockylinux9 \
  --graphics none \
  --console pty,target_type=serial \
  --extra-args 'console=ttyS0,115200n8'

Unattended Install with Kickstart

virt-install \
  --name rocky9-auto \
  --ram 2048 \
  --vcpus 2 \
  --cpu host-passthrough \
  --disk path=/var/lib/libvirt/images/rocky9-auto.qcow2,size=30,format=qcow2 \
  --location https://dl.rockylinux.org/pub/rocky/9/BaseOS/x86_64/os/ \
  --network network=default \
  --os-variant rockylinux9 \
  --graphics none \
  --console pty,target_type=serial \
  --initrd-inject /path/to/ks.cfg \
  --extra-args 'inst.ks=file:/ks.cfg console=ttyS0,115200n8'

Import an Existing Cloud Image

# Download a cloud image (pre-built, no OS installer needed)
wget -P /var/lib/libvirt/images/ \
  https://cloud.centos.org/centos/9-stream/x86_64/images/CentOS-Stream-GenericCloud-9-latest.x86_64.qcow2

# Create a cloud-init config disk for first-boot configuration
cat > /tmp/user-data << 'EOF'
#cloud-config
hostname: myvm
manage_etc_hosts: true
users:
  - name: admin
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5... your-public-key
password: changeme
chpasswd: {expire: false}
ssh_pwauth: true
EOF

cloud-localds /var/lib/libvirt/images/cloud-init.iso /tmp/user-data
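cloud-localds typically ships in the cloud-image-utils or cloud-utils package; where it is unavailable, the same NoCloud seed can be built with genisoimage. A hedged sketch, the key details being the cidata volume label and a meta-data file next to user-data (the instance-id and hostname values are illustrative):

```shell
#!/bin/sh
# Build a NoCloud seed ISO without cloud-localds. The "cidata" volume label
# is what cloud-init looks for; instance-id is an arbitrary unique string.
seed=/tmp/seed
mkdir -p "$seed"
# Reuse the user-data written earlier if present, else drop in a stub
cp /tmp/user-data "$seed/user-data" 2>/dev/null || echo '#cloud-config' > "$seed/user-data"
cat > "$seed/meta-data" << 'EOF'
instance-id: myvm-001
local-hostname: myvm
EOF

if command -v genisoimage >/dev/null 2>&1; then
  # Build the ISO in the staging dir; copy it into the images pool afterwards
  genisoimage -output "$seed/cloud-init.iso" \
    -volid cidata -joliet -rock "$seed/user-data" "$seed/meta-data" \
    || echo "genisoimage failed; check the staged files in $seed"
else
  echo "genisoimage not found; seed files staged in $seed"
fi
```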

virt-install \
  --name centos-stream9 \
  --ram 2048 \
  --vcpus 2 \
  --cpu host-passthrough \
  --disk /var/lib/libvirt/images/CentOS-Stream-GenericCloud-9-latest.x86_64.qcow2 \
  --disk /var/lib/libvirt/images/cloud-init.iso,device=cdrom \
  --network network=default \
  --os-variant centos-stream9 \
  --import \
  --graphics none

Managing VMs with virsh

Essential Daily Commands

# List all VMs (running and stopped)
virsh list --all

# Start, stop, restart
virsh start rocky9-server
virsh shutdown rocky9-server   # Graceful (sends ACPI signal)
virsh destroy rocky9-server    # Force off (like pulling power)
virsh reboot rocky9-server

# Connect to VM console
virsh console rocky9-server
# Escape: Ctrl+]

# Get VM IP address
virsh domifaddr rocky9-server

# Show VM details
virsh dominfo rocky9-server
virsh vcpuinfo rocky9-server

# Edit VM XML configuration
virsh edit rocky9-server

# Suspend and resume (save CPU state to RAM)
virsh suspend rocky9-server
virsh resume rocky9-server

# Save VM state to disk (like hibernate)
virsh save rocky9-server /tmp/rocky9-saved.state
virsh restore /tmp/rocky9-saved.state
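For maintenance windows you often want to stop every guest in one pass. A sketch that defaults to a dry run so the commands can be reviewed first; web01 and db01 are placeholder names, and in real use the list would come from virsh list --name:

```shell
#!/bin/sh
# Gracefully stop a list of VMs. DRY_RUN=1 (the default here) only prints
# each command; set DRY_RUN=0 on a real host to execute them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

# Real use: iterate over $(virsh list --name) instead of these placeholders
for vm in web01 db01; do
  run virsh shutdown "$vm"
done
```

Guests that ignore the ACPI signal can be forced off afterwards with virsh destroy.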

Modify Running VMs

# Add vCPUs while running (guest must support hotplug; the count cannot
# exceed the maximum vCPUs in the VM definition)
virsh setvcpus rocky9-server 4 --live

# Change memory while running (cannot exceed the configured maximum)
virsh setmem rocky9-server 8388608 --live  # 8 GiB in KiB

# Attach a new disk to running VM
virsh attach-disk rocky9-server \
  /var/lib/libvirt/images/extra-disk.qcow2 \
  vdb --driver qemu --subdriver qcow2 --live --persistent
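virsh setmem interprets a bare number as KiB, so the 8 GiB figure above is 8 x 1024 x 1024. A quick shell sanity check of that conversion:

```shell
# 8 GiB expressed in KiB, the unit virsh setmem expects by default
echo $((8 * 1024 * 1024))   # prints 8388608
```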

VM Snapshots and Cloning

Internal Snapshots (qcow2 only)

# Create a snapshot (VM can be running)
virsh snapshot-create-as rocky9-server \
  --name "before-upgrade" \
  --description "Clean state before dnf upgrade" \
  --atomic

# List snapshots
virsh snapshot-list rocky9-server

# Revert to a snapshot
virsh snapshot-revert rocky9-server before-upgrade

# Delete a snapshot
virsh snapshot-delete rocky9-server before-upgrade
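Internal snapshots are convenient but live inside the qcow2 file and can be slow on large disks. libvirt also supports external (disk-only) snapshots, where new writes go to an overlay file and virsh blockcommit later merges them back. A sketch that stages the sequence in a file for review, reusing the VM and disk names from the examples above:

```shell
#!/bin/sh
# Stage the external-snapshot workflow in a script for review rather than
# executing it blindly; run it on the hypervisor once it looks right.
cat > /tmp/ext-snapshot.sh << 'EOF'
#!/bin/sh
# Take a disk-only (external) snapshot; new writes go to an overlay file
virsh snapshot-create-as rocky9-server pre-change --disk-only --atomic
# ...do the risky work, then fold the overlay back into the base image:
virsh blockcommit rocky9-server vda --active --pivot
# External snapshot metadata must be removed separately
virsh snapshot-delete rocky9-server pre-change --metadata
EOF
chmod +x /tmp/ext-snapshot.sh
echo "review /tmp/ext-snapshot.sh before running it on the hypervisor"
```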

Clone a VM

# Shut down source VM first
virsh shutdown rocky9-server

# Clone it
virt-clone \
  --original rocky9-server \
  --name rocky9-clone \
  --auto-clone

# Start the clone
virsh start rocky9-clone

Performance Tuning for KVM Guests

CPU and Memory Optimization

# Use host-passthrough to expose full CPU capabilities to guests
# Add to VM XML via virsh edit:
# <cpu mode='host-passthrough' check='none'/>

# Enable huge pages for a VM (reduces TLB pressure)
# On host: reserve 2048 x 2 MiB pages = 4 GiB
echo 2048 > /proc/sys/vm/nr_hugepages

# In VM XML:
# <memoryBacking>
#   <hugepages/>
# </memoryBacking>

# Pin vCPUs to physical cores (reduces NUMA jitter)
virsh vcpupin rocky9-server 0 2
virsh vcpupin rocky9-server 1 3
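Pinning many vCPUs by hand gets tedious. A small helper that prints the vcpupin commands for a VM, mapping vCPU i to host core HOST_OFFSET + i; the values mirror the two-line example above:

```shell
#!/bin/sh
# Emit the vcpupin commands that map each vCPU to a dedicated host core,
# starting at HOST_OFFSET. VM name and values are illustrative.
VM=rocky9-server
VCPUS=2
HOST_OFFSET=2
i=0
while [ "$i" -lt "$VCPUS" ]; do
  echo "virsh vcpupin $VM $i $((HOST_OFFSET + i))"
  i=$((i + 1))
done
```

Pipe the output to sh once it matches your host's core layout.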

Disk I/O Optimization

# Use virtio disk driver (faster than emulated IDE/SATA)
# Ensure your virt-install uses --disk bus=virtio (it's the default)

# Set I/O mode to native for lower latency
# In VM XML, inside <disk>:
# <driver name='qemu' type='qcow2' cache='none' io='native'/>

# For SSDs: enable discard passthrough (TRIM support in guest)
# <driver name='qemu' type='qcow2' discard='unmap'/>

Network Optimization

# Use virtio-net driver (default) and enable multiqueue for high-throughput VMs
# In VM XML:
# <interface type='network'>
#   <model type='virtio'/>
#   <driver name='vhost' queues='4'/>
# </interface>
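The queues value in the XML only sets what the host offers; inside the guest, the extra queues usually still need to be enabled with ethtool. A guarded sketch, where eth0 and the queue count of 4 are assumptions to adjust:

```shell
#!/bin/sh
# Enable multiqueue on the guest NIC where ethtool is available.
IFACE=${IFACE:-eth0}
QUEUES=${QUEUES:-4}
if command -v ethtool >/dev/null 2>&1; then
  ethtool -L "$IFACE" combined "$QUEUES" 2>/dev/null \
    || echo "could not adjust queues on $IFACE (check name and permissions)"
else
  echo "ethtool not found; would run: ethtool -L $IFACE combined $QUEUES"
fi
```

ethtool -l (lowercase) shows the current and maximum queue counts for the interface.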

Live Migration Between Hosts

# Requirements:
# - Shared storage visible to both hosts (NFS, Ceph, iSCSI) OR copy migration
# - Compatible CPUs (host-passthrough guests need near-identical hosts)
# - Same libvirt version (ideally)
# - SSH trust between hosts

# Migrate to another host (shared storage)
virsh migrate --live rocky9-server qemu+ssh://hypervisor2.example.com/system

# Migrate with disk copy (no shared storage required — slower)
virsh migrate --live --copy-storage-all rocky9-server \
  qemu+ssh://hypervisor2.example.com/system

# Monitor migration progress
virsh domjobinfo rocky9-server

Using virt-manager GUI

For administrators who prefer a graphical interface, virt-manager provides full VM lifecycle management over a local or remote libvirt connection.

# Launch on the KVM host (requires display)
virt-manager

# Connect to a remote KVM host via SSH
virt-manager --connect qemu+ssh://user@hypervisor.example.com/system

virt-manager supports all the same operations as virsh — creating, starting, stopping, cloning, and snapshotting VMs — with a graphical interface including a VNC/SPICE console viewer.

Troubleshooting Common Issues

Permission Denied on /dev/kvm

ls -la /dev/kvm
# crw-rw----+ 1 root kvm 10, 232 ...

# Add your user to the kvm group (in a root shell, substitute your username)
usermod -aG kvm $(whoami)
# Log out and back in for group membership to take effect

VM Fails to Start — Check Logs

# Primary log for VM startup failures
journalctl -u libvirtd -f

# QEMU-specific log (per VM)
tail -n 50 /var/log/libvirt/qemu/rocky9-server.log

Network Not Working in Guest

# Ensure IP forwarding is enabled on the host
sysctl net.ipv4.ip_forward
echo 1 > /proc/sys/net/ipv4/ip_forward

# Make it persistent
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/99-kvm.conf
sysctl -p /etc/sysctl.d/99-kvm.conf

# Verify virbr0 has firewall rules (chain/table names vary by libvirt version)
nft list ruleset | grep -iA5 libvirt

Poor Disk Performance

# Check if virtio drivers are loaded in the guest
virsh domblklist rocky9-server
# Should show vda/vdb (virtio), not sda/hda (emulated)

# If using emulated disk, change to virtio by editing VM XML
virsh edit rocky9-server
# Change: <target dev='sda' bus='sata'/>
# To:     <target dev='vda' bus='virtio'/>

Conclusion

KVM with libvirt gives you an enterprise-grade hypervisor that is already built into every Linux kernel — no extra licenses, no vendor lock-in, and no agents to install. The combination of virsh for scripted management, virt-install for VM provisioning, and virt-manager for graphical oversight covers every use case from a home lab to a production cloud. Once you master the basics of storage pools, networking modes, and snapshots covered in this guide, you have a platform that scales from one host to multi-node clusters with live migration.
