Incus: The Open Source LXD Fork Taking Linux Containers to the Next Level
Table of Contents
- The LXD Fork: What Happened and Why It Matters
- What Incus Does: The Core Concept
- Incus vs. LXD vs. Docker vs. KVM
- Installing Incus on Ubuntu/Debian
- Creating and Managing Containers
- Creating Virtual Machines
- Networking
- Storage Pools
- Migrating from LXD
- Practical Use Cases
- Home Lab Environments
- Development Environments
- Testing Ansible Playbooks
- Key Takeaways
If you’ve been following the Linux container ecosystem closely, you’ve probably noticed that LXD, the system container manager that’s been a favorite in home labs and enterprise environments alike, went through a significant transition in 2023. Canonical moved LXD under the Ubuntu Pro umbrella, changed the licensing terms, and significantly slowed contributions from the community. The Linux Containers project, which had originally developed LXD, responded by forking it. The fork is called Incus, and it’s been moving fast. For anyone running LXD today or looking for a clean, open-source way to manage system containers and virtual machines on Linux, Incus is the story you need to follow.
The LXD Fork: What Happened and Why It Matters
LXD started life as an open-source project under the Linux Containers (linuxcontainers.org) umbrella. Canonical employed the primary developers and did the heavy lifting, but the project was community-focused and contributions were welcome from outside Canonical. In June 2023, Canonical announced that LXD would be moving exclusively to Canonical-controlled infrastructure and that it would become primarily a component of Ubuntu Pro, Canonical’s paid enterprise subscription.
The Linux Containers team, led by Stéphane Graber (who originally created LXD while at Canonical), announced the Incus fork shortly afterward. The project is explicitly community-governed, hosted on GitHub, and committed to remaining open source under the Apache 2.0 license without enterprise paywalls. Major Linux distributions, including Debian, Alpine, and Gentoo, adopted Incus as their preferred system container manager within months of the fork.
For practical purposes: if you’re on Ubuntu and relying on the lxd snap package, you’re on Canonical’s version with tighter licensing. If you want the open, community-driven version of the same technology, you want Incus.
What Incus Does: The Core Concept
Incus manages two types of workloads from a single unified interface:
- System containers: A full Linux environment in a container, using LXC under the hood. They boot an init system (typically systemd), run multiple processes, and behave much like a lightweight VM, but they share the host kernel, making them faster to start and cheaper to run.
- Virtual machines: Full hardware virtualization using QEMU. They run their own kernel, completely isolated from the host. Incus communicates with a guest agent over virtio-vsock, enabling command execution, file transfer, and console interaction without traditional SSH tunneling.
Both types are managed with the same incus command-line tool, the same API, and the same configuration syntax. You can mix containers and VMs in the same cluster.
This is fundamentally different from Docker or Podman, which are designed for application containers: single processes in isolated namespaces. Incus system containers are designed to look and behave like full servers. You SSH into them, install software with apt or dnf, run systemd services, and treat them like isolated VMs that just happen to start in under a second.
Incus vs. LXD vs. Docker vs. KVM
Understanding where Incus fits in the landscape saves a lot of confusion:
- Incus vs. LXD: Incus is a direct fork. It is API-compatible for most operations, but uses a different socket path (`/var/lib/incus/unix.socket` instead of `/var/snap/lxd/common/lxd/unix.socket`) and a new `incus` binary instead of `lxc`, with ongoing divergence in features. If you’re starting fresh, use Incus.
- Incus vs. Docker: Different use cases. Docker/Podman are for running single-process application containers (microservices, CI jobs). Incus system containers are for full system environments: dev boxes, test servers, isolated lab environments.
- Incus vs. KVM/libvirt: Incus can run KVM-backed VMs, but it abstracts the management layer significantly. libvirt/virt-manager gives you more raw control over VM hardware configuration; Incus gives you a simpler, unified interface that also handles containers. For home labs and dev environments, Incus is typically easier.
Installing Incus on Ubuntu/Debian
Incus is packaged in Debian 13 (Trixie) and available via the Linux Containers apt repository for Ubuntu and older Debian releases.
On Debian 13 or Ubuntu 24.04 using the upstream repository:
apt install -y curl gpg
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc
cat > /etc/apt/sources.list.d/zabbly-incus-stable.sources << EOF
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc
EOF
apt update
apt install -y incus
Initialize Incus after installation:
incus admin init
The init wizard asks about storage backend (btrfs, zfs, lvm, or directory), networking (bridge setup), and clustering. For a single-node setup, accepting the defaults is fine. ZFS or btrfs are recommended over the plain directory backend for snapshot performance.
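If you automate host setup, the wizard can be skipped entirely by feeding `incus admin init` a preseed file on stdin. A minimal single-node sketch, assuming a ZFS-capable host; the pool and bridge names here mirror the defaults but are only examples:

```shell
# Non-interactive initialization via a preseed document.
# Names (default pool, incusbr0 bridge) are illustrative examples.
cat <<'EOF' | incus admin init --preseed
networks:
- name: incusbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
storage_pools:
- name: default
  driver: zfs
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      network: incusbr0
      type: nic
EOF
```

This is handy for configuration management or repeatable lab rebuilds; running the interactive wizard once and dumping its result with `incus admin init --dump` gives you a known-good preseed to start from.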
Add your user to the incus-admin group to manage Incus without sudo:
usermod -aG incus-admin $USER
newgrp incus-admin
Creating and Managing Containers
Launch an Ubuntu 24.04 container:
incus launch images:ubuntu/24.04 myserver
Launch a Debian 12 container:
incus launch images:debian/12 debbox
List available images:
incus image list images: | grep -E "ubuntu|debian|fedora|rocky"
List running instances:
incus list
Get a shell in a running container:
incus exec myserver -- bash
Run a single command:
incus exec myserver -- apt update
incus exec myserver -- systemctl status nginx
Copy files to/from a container:
incus file push /etc/myconfig myserver/etc/myconfig
incus file pull myserver/var/log/syslog ./syslog
Stop, start, and delete:
incus stop myserver
incus start myserver
incus restart myserver
incus delete myserver --force
Creating Virtual Machines
Adding the --vm flag to incus launch creates a full VM instead of a container. Everything else works the same:
incus launch images:ubuntu/24.04 myvm --vm
incus exec myvm -- uname -r # shows VM's kernel, different from host
incus exec myvm -- systemd-detect-virt # outputs "kvm"
For VMs, Incus installs a guest agent (incus-agent) inside the VM using a shared filesystem mount. This gives you file push/pull, exec, and console access without needing SSH. If you want SSH as well, just install and configure it inside the VM normally.
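If the agent is not up yet (for example, during first boot or on an image without it), you can still reach the VM through its console:

```shell
# Attach to the VM's serial console (detach with Ctrl+a q)
incus console myvm

# Or request a graphical console instead (needs a local SPICE-capable viewer)
incus console myvm --type=vga
```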
Networking
Incus creates a default bridge network called incusbr0 during initialization. Containers and VMs on this bridge get DHCP addresses in the 10.x.x.x range and can reach the internet through NAT on the host.
List networks:
incus network list
incus network show incusbr0
Create an isolated network (no outbound access):
incus network create isolated-net ipv4.nat=false ipv6.nat=false
Attach an instance to a specific network:
incus network attach isolated-net myserver eth1
To expose a container's service to the outside world, you can set up a static IP and port-forward on the host, or use the macvlan or bridged network driver to put containers directly on your LAN.
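For the port-forwarding approach on the default NAT bridge, a convenient option is a proxy device, which forwards a host port into the container. A sketch; the device name and port numbers are examples:

```shell
# Forward host port 8080 to port 80 inside the container.
# "web" is an arbitrary device name chosen for this example.
incus config device add myserver web proxy \
  listen=tcp:0.0.0.0:8080 \
  connect=tcp:127.0.0.1:80
```

Remove it again with `incus config device remove myserver web` when it is no longer needed.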
Give a container a static IP:
incus config device override myserver eth0 ipv4.address=10.94.118.50
incus restart myserver
Storage Pools
Incus supports multiple storage backends. The most capable are ZFS and btrfs, which enable instant snapshots and clones.
List storage pools:
incus storage list
incus storage info default
Create a ZFS pool on a dedicated block device:
incus storage create fast-pool zfs source=/dev/sdb
Launch an instance on a specific pool:
incus launch images:ubuntu/24.04 myserver --storage fast-pool
Take a snapshot:
incus snapshot create myserver snap1
incus snapshot list myserver
incus snapshot restore myserver snap1
Snapshots on ZFS or btrfs are near-instant and consume only the space of blocks changed since the snapshot was taken. This makes them practical for checkpointing before risky operations: snapshot first, upgrade the package, and restore if it goes wrong.
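Snapshots can also be automated per instance through the snapshots.* configuration keys. For example, to take one every day and expire each after a week:

```shell
# Schedule a daily snapshot and keep each one for 7 days
incus config set myserver snapshots.schedule="@daily"
incus config set myserver snapshots.expiry="7d"
```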
Migrating from LXD
Incus provides a dedicated tool, lxd-to-incus, to move an existing LXD installation to Incus on the same machine. The process is straightforward but requires a maintenance window, since LXD is stopped and instances are briefly offline during the handover. On Debian, the tool ships in the incus-tools package:
apt install -y incus-tools
lxd-to-incus
The migration tool will:
- Stop the LXD daemon
- Move all container and VM data over to the Incus storage layout
- Import profiles, networks, and storage pools
- Start Incus with the migrated configuration
After migration, verify your instances are running:
incus list
incus exec firstcontainer -- systemctl status
If you were using lxc commands in scripts, you'll need to update them to incus. The command structure is identical; only the binary name changes.
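Because only the binary name changes, updating scripts is usually a mechanical search-and-replace. A self-contained sketch, using a throwaway demo directory so you can see the rewrite before touching real scripts:

```shell
# Demo: a throwaway script that still calls the old lxc binary
mkdir -p demo-scripts
printf 'lxc launch images:alpine/3.20 test1\nlxc exec test1 -- uptime\n' > demo-scripts/up.sh

# Rewrite whole-word occurrences of "lxc" to "incus" across the tree
# (-r recursive, -l list files, -w whole word, -Z/-0 null-safe filenames)
grep -rlwZ lxc demo-scripts/ | xargs -0 sed -i 's/\blxc\b/incus/g'

cat demo-scripts/up.sh   # now calls incus
```

Review the diff before running this on anything important; the word-boundary match avoids mangling strings like `lxc.cgroup` in raw LXC configs, but a quick `grep -rnw lxc` first is cheap insurance.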
Practical Use Cases
Home Lab Environments
Incus is excellent for home labs because it lets you run multiple isolated Linux environments on a single machine without the overhead of full virtualization. Spin up a Rocky Linux container to test RHEL-specific configurations, a Debian container for package testing, and an Ubuntu VM for kernel testing, all on the same host, all manageable with one tool.
Development Environments
Instead of polluting your workstation with every development dependency, create a dedicated container per project. Each container gets its own packages, services, and configurations. Destroy and recreate in seconds. Snapshot before major dependency upgrades.
incus launch images:ubuntu/24.04 project-api
incus exec project-api -- bash
# install everything you need inside the container
incus snapshot create project-api clean-baseline
Testing Ansible Playbooks
Incus containers make excellent Ansible targets. Launch a fresh container, point your inventory at it, run the playbook, check the result, destroy and repeat. Much faster than provisioning a real VM and more isolated than testing directly on your workstation.
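A minimal test loop might look like the following. The community.general Ansible collection ships an incus connection plugin, which avoids setting up SSH in the container at all; the instance and playbook names here are examples:

```shell
# Fresh, disposable target for every test run
incus launch images:ubuntu/24.04 ansible-test

# Run the playbook against it via the incus connection plugin
# (community.general collection; plain SSH works too if you prefer)
ansible-playbook -i ansible-test, -c community.general.incus site.yml

# Throw the instance away and repeat as needed
incus delete ansible-test --force
```

The trailing comma in `-i ansible-test,` tells Ansible this is an inline host list rather than an inventory file.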
Key Takeaways
- Incus is the community fork of LXD, created after Canonical restricted LXD to the Ubuntu Pro ecosystem
- It manages both system containers (LXC-based) and full VMs (QEMU-based) with a single CLI and API
- System containers share the host kernel and start in under a second; VMs provide full kernel isolation
- Incus is now packaged in Debian 13 and available via the Zabbly repository for Ubuntu
- ZFS and btrfs storage backends enable instant, space-efficient snapshots and clones
- The migration tool can import your existing LXD containers and configuration in a single step
- Incus is best suited for system environments, lab setups, and development boxes, not single-process application containers
- The project is community-governed, Apache 2.0 licensed, and moving fast with active development
About Ramesh Sundararamaiah
Red Hat Certified Architect
Expert in Linux system administration, DevOps automation, and cloud infrastructure. Specializing in Red Hat Enterprise Linux, CentOS, Ubuntu, Docker, Ansible, and enterprise IT solutions.