
Restic Encrypted Backups to Backblaze B2: Enterprise Disaster Recovery Guide 2026


The unromantic truth about backups is that most teams have them and almost none test restores. When ransomware hits or a storage array fails, the first question is not whether a backup exists but whether you can bring it back within the RTO your business expects. Restic is the backup tool that has quietly taken over Linux server environments because it is a single Go binary, encrypts everything client-side, deduplicates aggressively, and speaks natively to every major object storage provider — including the low-cost leader, Backblaze B2. This guide walks through a production-grade Restic deployment for an enterprise Linux fleet, using B2 as the primary backend and a second repository for the 3-2-1 rule.

## Why Backblaze B2 in 2026

B2 costs roughly one-quarter of AWS S3 Standard with zero egress fees to Cloudflare (via the Bandwidth Alliance). For a fleet backing up a few terabytes per month, the cost difference is real money. Its S3-compatible API works with Restic’s `s3` backend directly, so nothing custom is required.

## Installing Restic

On AlmaLinux 9 or Ubuntu 24.04:

```bash
# AlmaLinux 9 (restic ships in EPEL)
sudo dnf install -y epel-release
sudo dnf install -y restic

# Ubuntu 24.04
sudo apt update
sudo apt install -y restic
```

Create a Backblaze B2 application key scoped to the backup bucket, then store credentials in `/etc/restic/env`:

```bash
sudo mkdir -p /etc/restic
sudo tee /etc/restic/env <<'EOF'
export RESTIC_REPOSITORY="s3:s3.us-west-002.backblazeb2.com/acme-restic-prod"
export AWS_ACCESS_KEY_ID="005abc..."
export AWS_SECRET_ACCESS_KEY="K005..."
export RESTIC_PASSWORD_FILE="/etc/restic/password"
EOF
sudo chmod 600 /etc/restic/env
```

Generate a strong repository password and save it to `/etc/restic/password`:

```bash
openssl rand -base64 48 | sudo tee /etc/restic/password > /dev/null
sudo chmod 600 /etc/restic/password
```

Critical: back up this password to a password manager and to the CEO’s laptop. Without it, the data in B2 is unrecoverable.

Initialize the repository:

```bash
source /etc/restic/env
restic init
```

## Your First Backup

Back up the usual suspects — application data, database dumps, configuration — with exclusions for noisy directories:

```bash
restic backup \
  --one-file-system \
  --exclude='/var/cache' \
  --exclude='/var/tmp' \
  --exclude='/tmp' \
  --exclude='/proc' \
  --exclude='/sys' \
  --tag nightly \
  /etc /var/www /var/lib/app /home
```
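If several jobs share the same exclusions, it can be tidier to keep them in one file and pass `--exclude-file` (the path here is a suggestion, not a Restic convention):

```shell
# Shared exclusion list, one pattern per line
sudo tee /etc/restic/excludes <<'EOF'
/var/cache
/var/tmp
/tmp
EOF

restic backup --one-file-system --exclude-file=/etc/restic/excludes --tag nightly /etc /var/www
```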

Inspect snapshots:

```bash
restic snapshots
restic stats latest
```

## Automating with Systemd Timers

Cron still works, but systemd timers give you better logging and automatic retries. Create `/etc/systemd/system/restic-backup.service`:

```ini
[Unit]
Description=Restic nightly backup
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
EnvironmentFile=/etc/restic/env
ExecStart=/usr/bin/restic backup --one-file-system --tag nightly /etc /var/www /var/lib/app
ExecStart=/usr/bin/restic forget --keep-daily 14 --keep-weekly 8 --keep-monthly 12 --prune
Nice=19
IOSchedulingClass=best-effort
IOSchedulingPriority=7
```

And `/etc/systemd/system/restic-backup.timer`:

```ini
[Unit]
Description=Nightly restic backup

[Timer]
OnCalendar=*-*-* 02:30:00
RandomizedDelaySec=30m
Persistent=true

[Install]
WantedBy=timers.target
```

Enable:

```bash
sudo systemctl daemon-reload
sudo systemctl enable --now restic-backup.timer
```

The `forget --prune` call enforces retention: 14 daily, 8 weekly, and 12 monthly snapshots. After each backup, snapshots outside the policy are expired and chunks no longer referenced are deleted from B2.
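Retention mistakes are easiest to catch before anything is deleted; `restic forget` supports a dry-run mode that only reports what the policy would remove:

```shell
# Show which snapshots the policy would remove, without touching the repository
restic forget --dry-run --keep-daily 14 --keep-weekly 8 --keep-monthly 12
```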

## Database Dumps Before Backup

File-level backups of a live database are unsafe. Dump first, back up the dump:

```bash
#!/bin/bash
set -euo pipefail
mkdir -p /var/backups/db
pg_dumpall -U postgres | gzip > /var/backups/db/postgres-$(date +%F).sql.gz
# Keep only the last three days of local dumps (-type f avoids touching the directory itself)
find /var/backups/db -type f -mtime +3 -delete
```

Run this in the Restic service as an `ExecStartPre`:

```ini
ExecStartPre=/usr/local/bin/db-dump.sh
```

## 3-2-1: Adding a Second Repository

The 3-2-1 rule says: three copies, two media, one off-site. Add a second Restic repository on a different provider and replicate using `restic copy`:

```bash
source /etc/restic/env
export DR_REPO="s3:s3.wasabisys.com/acme-restic-dr"
# The destination password is read from RESTIC_PASSWORD_FILE in the env file;
# the source repository's password is passed via --from-password-file.
restic -r "$DR_REPO" init --copy-chunker-params \
  --from-repo "$RESTIC_REPOSITORY" --from-password-file /etc/restic/password
restic -r "$DR_REPO" copy \
  --from-repo "$RESTIC_REPOSITORY" --from-password-file /etc/restic/password
```

Schedule this as a weekly job so B2 is primary and Wasabi is your cold backup.
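One way to schedule it is a systemd pair mirroring the backup units. A sketch, assuming the env file and password path from earlier and a Wasabi DR bucket (unit names and the schedule are examples):

```ini
# /etc/systemd/system/restic-copy.service
[Unit]
Description=Weekly restic copy to DR repository
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
EnvironmentFile=/etc/restic/env
ExecStart=/usr/bin/restic -r s3:s3.wasabisys.com/acme-restic-dr copy \
  --from-repo ${RESTIC_REPOSITORY} --from-password-file /etc/restic/password

# /etc/systemd/system/restic-copy.timer
[Unit]
Description=Weekly restic copy

[Timer]
OnCalendar=Sun *-*-* 04:00:00
Persistent=true

[Install]
WantedBy=timers.target
```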

## Verifying Backups Work

`restic check --read-data-subset=5%` reads and verifies a random 5% of repository data each run. Schedule a full `--read-data` check monthly. A verified backup is the only kind that counts:

```bash
restic check --read-data-subset=5%
```

Automate restores too. A monthly restore test to a scratch directory, with a diff against live data, catches silent corruption:

```bash
restic restore latest --target /restore-test --include /etc/nginx
diff -r /etc/nginx /restore-test/etc/nginx
```
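The drill can be wrapped in a script that fails loudly, so monitoring treats a bad restore the same as a failed backup (the restored path and sentinel location are assumptions):

```shell
#!/bin/bash
# Monthly restore drill: restore one tree to scratch space and diff it against live data.
set -euo pipefail
source /etc/restic/env
scratch=$(mktemp -d)
trap 'rm -rf "$scratch"' EXIT
restic restore latest --target "$scratch" --include /etc/nginx
diff -r /etc/nginx "$scratch/etc/nginx"   # any mismatch exits non-zero and fails the script
date > /var/log/restic/last-restore-test
```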

## Restoring Quickly

When the real event happens, you want the command in muscle memory. Restore a single file:

```bash
restic restore latest --target /tmp/restore --include /etc/nginx/nginx.conf
```

Restore an entire snapshot:

```bash
restic restore abc12345 --target /
```

Mount a snapshot read-only to browse:

```bash
mkdir /mnt/restic
restic mount /mnt/restic
```

## Monitoring Backup Success

Every backup should leave a heartbeat your monitoring system watches. The simplest approach is a success sentinel:

```bash
date > /var/log/restic/last-success
```

A Prometheus `node_exporter` textfile-collector metric or a Nagios-style file-age check can then alert when that file goes stale.
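As a sketch, a small shell function can turn the sentinel's mtime into a textfile-collector metric (the metric name and default paths are assumptions, not a Restic or node_exporter convention):

```shell
# Write the sentinel's timestamp as a Prometheus metric for node_exporter's
# textfile collector. Arguments default to the paths used above; a missing
# sentinel is reported as 0 so the alert still fires.
emit_backup_metric() {
    local sentinel="${1:-/var/log/restic/last-success}"
    local out="${2:-/var/lib/node_exporter/restic.prom}"
    local ts=0
    [ -f "$sentinel" ] && ts=$(stat -c %Y "$sentinel")
    echo "restic_last_success_timestamp_seconds $ts" > "$out"
}
```

Alert when `time() - restic_last_success_timestamp_seconds` exceeds your backup interval.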

## Securing the B2 Bucket

Enable Object Lock (B2 calls it File Lock) on the bucket with a 30-day retention period. That makes backup objects immutable, protecting them from a compromised server running `restic forget --prune --keep-last 0`. Be aware that `prune` itself deletes and rewrites pack files, so it will fail against still-locked objects; either run prune from a trusted host after the lock window has passed, or skip prune on the locked repository and accept the extra storage.

Also scope the application key to read-write but not delete if you can tolerate manual cleanup — attackers with server access will try to destroy backups first.
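With the B2 CLI, a scoped key looks roughly like this (the capability names follow B2's application-key model, but verify the exact command syntax against your CLI version; the key name is an example):

```shell
# Create an application key limited to one bucket, deliberately without deleteFiles
b2 create-key --bucket acme-restic-prod restic-backup \
  listBuckets,listFiles,readFiles,writeFiles
```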

## FAQ

**How much does Restic on B2 cost for 1 TB?** Roughly $6 per month plus modest egress if you ever restore from non-Cloudflare networks.

**Does Restic support Windows and macOS?** Yes. The binary is cross-platform and repositories are portable across OSes.

**Can I deduplicate across hosts?** Yes. Point all hosts at the same repository and deduplication works across them. Guard the shared password carefully; Restic's built-in repository locks coordinate concurrent access, though heavy parallel use of one repository can cause lock contention.

**What about very large files?** Restic streams data in variable-size chunks, so multi-hundred-GB files work fine.

**Is there a GUI?** Yes, Backrest and resticUI wrap Restic in a web interface for teams who want less CLI.

**How does Restic compare to Borg?** Borg is excellent but doesn’t natively speak S3 — you need rclone or a remote shell. Restic is cloud-native by design and runs more easily in container environments. Both use content-defined chunking and strong encryption.

**Can I back up databases live?** Not safely from the file system. Always dump first or use a tool like XtraBackup, then back up the dump file with Restic.

**What is the RAM cost of Restic?** Roughly 2 GB per terabyte of repository for the index. Very large repositories (10+ TB) on small backup hosts can run out of RAM during prune operations — schedule prune on a beefier dedicated node.

**Does Restic support snapshots from LVM or ZFS?** Not directly, but you can wrap it in a script that takes the snapshot, mounts it read-only, runs Restic against the mount, and unmounts. This gives you crash-consistent backups of databases without stopping the service.
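A minimal sketch of that wrapper, assuming a volume group `vg0` with a logical volume `data` (names, snapshot size, and mount point are all examples):

```shell
#!/bin/bash
# Crash-consistent backup from an LVM snapshot.
set -euo pipefail
source /etc/restic/env
lvcreate --snapshot --size 5G --name data-snap /dev/vg0/data
mkdir -p /mnt/data-snap
mount -o ro /dev/vg0/data-snap /mnt/data-snap
# Clean up the snapshot whether or not the backup succeeds
trap 'umount /mnt/data-snap; lvremove -f /dev/vg0/data-snap' EXIT
restic backup --tag lvm-snapshot /mnt/data-snap
```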

## Restic for Application Data

A common pattern is using Restic to back up the data directories of containerized applications. For Docker Compose stacks, run a sidecar container that mounts the same volumes read-only and ships them with Restic on a schedule:

```yaml
backup:
  image: restic/restic:latest
  restart: unless-stopped
  environment:
    RESTIC_REPOSITORY: s3:s3.us-west-002.backblazeb2.com/acme-app
    RESTIC_PASSWORD_FILE: /run/secrets/restic
  volumes:
    - app-data:/data:ro
    - ./scripts/backup.sh:/usr/local/bin/backup.sh
  entrypoint: ["/bin/sh", "-c", "while true; do backup.sh; sleep 86400; done"]
```

This decouples backup from the application’s lifecycle and survives image upgrades.
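The `backup.sh` the sidecar loops over can stay small. A sketch, assuming the volume and env vars from the compose file; the tag and retention values are examples:

```shell
#!/bin/sh
# Runs inside the restic/restic container; /data and the RESTIC_* env vars
# come from the compose file. Initialize the repo on first run only.
set -eu
restic snapshots >/dev/null 2>&1 || restic init
restic backup --tag compose /data
restic forget --keep-daily 14 --keep-weekly 8 --prune
```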

## Disaster Recovery Drills

A backup is unproven until it has been restored. Schedule a quarterly drill: pick a non-trivial host, wipe a scratch VM, restore from the latest snapshot, bring services online, and verify functionality. Time the entire process. Compare against your business RTO. The first drill always reveals something — a missing dependency, an out-of-date password, an undocumented manual step. Fix it, document it, and repeat.

For added rigor, test the worst case: restore from the off-site secondary repository (Wasabi in our example) without using the primary at all. This proves the secondary works and that you have not silently lost the off-site copy.

## Monitoring Repository Health

Beyond the success sentinel, monitor repository statistics over time:

```bash
restic stats --mode raw-data
restic stats --mode restore-size
```

Sudden growth signals a runaway log file or a misconfigured exclude. Sudden shrinkage signals a forget-prune that removed more than expected. Plot both metrics weekly in Grafana and alert on anomalies. The earliest sign of trouble is usually unexpected size, not an outright failure.

## Hardening the Backup Account

The backup credentials are a juicy target — anyone with them can encrypt or destroy your backups. Use B2 application keys scoped to a single bucket, not master keys. Disable list-and-delete capabilities if you only need write access. Rotate the keys yearly via Ansible and document the rotation. Store them in HashiCorp Vault or your cloud provider’s secrets manager, never in plain text in `/etc`.

Better still, use bucket-level immutability (object lock) so even compromised credentials cannot destroy historical backups. B2’s File Lock and S3 Object Lock both support this; with a 30-day governance period, an attacker who gains write access still cannot delete recent backups for 30 days, giving you ample detection time.

## Optimizing Backup Throughput

For large initial backups, raise the pack size and read concurrency so Restic keeps the upload pipeline full:

```bash
restic backup --pack-size 64 --read-concurrency 8 /data
```

For nightly incrementals, the bottleneck is usually CPU on the source host because of chunking and encryption. Pin Restic to specific cores to avoid disrupting application workloads:

```bash
taskset -c 6,7 restic backup …
```

Network upload bandwidth is rarely a problem for incremental backups because deduplication means you ship only changed chunks — typically a few percent of the dataset per night.

## Restoring at Scale

When the disaster is real and you are restoring 500 GB to a fresh host, pre-warm the Restic index by running `restic snapshots` once before starting the restore — this downloads the index and avoids round-trips during the actual file restore. For multi-host parallel restores from the same repository, run them on separate machines to spread the network load. A single restore typically saturates a 1 Gbps link.

## Monitoring with Prometheus

A small wrapper script can export Restic metrics to Prometheus’s textfile collector:

```bash
restic stats --json > /tmp/restic-stats.json
```

Convert the JSON into Prometheus exposition format and the metrics show up in your existing dashboards. Alert on `time() – restic_last_success_seconds > 86400` to catch silent failures.
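One way to do the conversion is a jq filter mapping stats fields to metrics (the metric names are assumptions; `jq` must be installed; `total_size` and `total_file_count` are fields of the default restore-size stats output):

```shell
# Map `restic stats --json` output to Prometheus exposition format.
stats_to_prom() {
    jq -r '"restic_repo_size_bytes \(.total_size)\nrestic_repo_file_count \(.total_file_count)"'
}
# usage: restic stats --json | stats_to_prom > /var/lib/node_exporter/restic.prom
```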
