MinIO on Linux: Self-Hosted S3-Compatible Object Storage Complete Setup Guide
Table of Contents
- Why Self-Host Object Storage with MinIO
- MinIO Architecture: Single-Node vs Distributed
- Installing MinIO Single-Node (Development/Small Production)
- Distributed MinIO Cluster (High Availability)
- Configuring TLS with Let's Encrypt
- Nginx Reverse Proxy Setup
- Creating Buckets and Access Policies
- Using AWS CLI with MinIO
- Object Lifecycle Rules and Versioning
- Integrating MinIO with Restic, Rclone, and Velero
- Monitoring with Prometheus and Grafana
- Performance Tuning
- Security Hardening
- Conclusion
MinIO is a high-performance, S3-compatible object storage server you can run on any Linux host. It exposes the same API as Amazon S3, which means every tool that works with S3 (AWS CLI, Terraform, Rclone, Restic, Velero, application SDKs) works with MinIO without code changes. This guide covers deploying MinIO for production use, configuring TLS, setting up distributed mode for high availability, and integrating it with common tools.
Why Self-Host Object Storage with MinIO
S3-compatible object storage is foundational to modern infrastructure: application backups, container image layers, log archival, AI/ML training datasets, static assets. AWS S3 billing adds up fast at scale, and egress costs create lock-in. MinIO gives you S3-compatible storage on your own hardware or VPS with no egress fees, complete data control, and the ability to serve storage from regions AWS doesn't reach.
MinIO is written in Go and ships as a single binary. In its published NVMe benchmarks it reaches over 325 GiB/s read and 165 GiB/s write throughput, comparable to or exceeding commercial offerings. The open-source edition (AGPL-3.0) covers the vast majority of use cases.
MinIO Architecture: Single-Node vs Distributed
Single-node single-drive (SNSD): One server, one drive. For development and testing only; no redundancy.
Single-node multi-drive (SNMD): One server, 4+ drives with erasure coding. Production-suitable with drive-level redundancy. Tolerates drive failures without data loss.
Multi-node multi-drive (MNMD): 4+ servers, each with drives. Full high availability β tolerates server and drive failures simultaneously.
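As a rough capacity sketch: with erasure coding, each erasure set of N drives reserves K drives' worth of parity, so usable capacity is about (N - K)/N of raw. The numbers below (16 drives, EC:4 parity, 4 TB drives) are illustrative assumptions, not MinIO defaults for every pool size:

```shell
# Hypothetical capacity estimate for a 4-node x 4-drive pool with EC:4 parity
DRIVES=16          # total drives in the pool
PARITY=4           # parity drives per erasure set (example value)
DRIVE_TB=4         # capacity per drive in TB (assumption for this example)

RAW=$((DRIVES * DRIVE_TB))
USABLE=$(( (DRIVES - PARITY) * DRIVE_TB ))
echo "raw: ${RAW} TB, usable: ~${USABLE} TB"   # raw: 64 TB, usable: ~48 TB
```

The trade-off is direct: higher parity tolerates more simultaneous drive failures at the cost of usable space.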
Installing MinIO Single-Node (Development/Small Production)
# Download the MinIO server binary
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
mv minio /usr/local/bin/
# Download the MinIO Client (mc)
wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
mv mc /usr/local/bin/
# Create system user and data directories
useradd --system --shell /usr/sbin/nologin minio-user
mkdir -p /data/minio/{disk1,disk2,disk3,disk4}
chown -R minio-user:minio-user /data/minio
# Create environment file (keep credentials secure)
cat > /etc/default/minio << 'ENV'
MINIO_VOLUMES="/data/minio/disk1 /data/minio/disk2 /data/minio/disk3 /data/minio/disk4"
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=ChangeThisStrongPassword123!
MINIO_SITE_NAME=my-minio
MINIO_CONSOLE_ADDRESS=":9001"
ENV
chmod 600 /etc/default/minio
# Create systemd service
cat > /etc/systemd/system/minio.service << 'UNIT'
[Unit]
Description=MinIO Object Storage Server
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
User=minio-user
Group=minio-user
EnvironmentFile=/etc/default/minio
ExecStart=/usr/local/bin/minio server $MINIO_VOLUMES \
--console-address $MINIO_CONSOLE_ADDRESS
Restart=always
RestartSec=5s
LimitNOFILE=65536
TasksMax=infinity
TimeoutStartSec=infinity
SendSIGKILL=no
[Install]
WantedBy=multi-user.target
UNIT
systemctl daemon-reload
systemctl enable --now minio
systemctl status minio
Verify the Installation
# Access MinIO Console at http://your-server:9001
# API endpoint: http://your-server:9000
# Configure mc alias
mc alias set local http://localhost:9000 minioadmin 'ChangeThisStrongPassword123!'
# Check cluster status
mc admin info local
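A quick end-to-end smoke test confirms reads and writes work (the bucket name is just a scratch name for this check):

```shell
# Create a scratch bucket, round-trip a file, and verify integrity
mc mb local/smoke-test
echo "hello minio" > /tmp/smoke.txt
mc cp /tmp/smoke.txt local/smoke-test/
mc cp local/smoke-test/smoke.txt /tmp/smoke-out.txt
diff /tmp/smoke.txt /tmp/smoke-out.txt && echo "round-trip OK"

# Clean up
mc rb --force local/smoke-test
```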
Distributed MinIO Cluster (High Availability)
# Example: 4 nodes, 4 drives each = 16 drives total
# Run this on ALL 4 nodes with identical configuration
# Each node must be resolvable by hostname
cat > /etc/default/minio << 'ENV'
MINIO_VOLUMES="https://minio{1...4}.example.com/data/minio/disk{1...4}"
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=ChangeThisStrongPassword!
MINIO_SITE_NAME=prod-minio-cluster
# Use a load balancer hostname for the API endpoint
MINIO_SERVER_URL=https://minio.example.com
MINIO_CONSOLE_ADDRESS=":9001"
# Erasure set size (default auto, can pin to specific value)
# MINIO_ERASURE_SET_DRIVE_COUNT=4
ENV
# Start MinIO on all 4 nodes simultaneously
# MinIO will wait for all nodes to be available before starting
systemctl start minio
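Once all four nodes are up, MinIO's health endpoints and `mc admin info` can confirm quorum (hostnames match the example cluster above):

```shell
# Liveness: 200 means this node is serving requests
curl -fsS -o /dev/null -w '%{http_code}\n' https://minio1.example.com:9000/minio/health/live

# Cluster health: 200 means enough nodes/drives are online for write quorum
curl -fsS -o /dev/null -w '%{http_code}\n' https://minio.example.com/minio/health/cluster

# Per-node and per-drive status
mc alias set prod https://minio.example.com minioadmin 'ChangeThisStrongPassword!'
mc admin info prod
```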
Configuring TLS with Let's Encrypt
# Option 1: MinIO native TLS (place certs in MinIO's cert directory)
mkdir -p /home/minio-user/.minio/certs/CAs
# Get certificate
certbot certonly --standalone -d minio.example.com -d console.example.com \
--email admin@example.com --agree-tos -n
# Copy certificates to MinIO's expected location
cp /etc/letsencrypt/live/minio.example.com/fullchain.pem \
/home/minio-user/.minio/certs/public.crt
cp /etc/letsencrypt/live/minio.example.com/privkey.pem \
/home/minio-user/.minio/certs/private.key
chown minio-user:minio-user /home/minio-user/.minio/certs/{public.crt,private.key}
chmod 600 /home/minio-user/.minio/certs/private.key
# Point clients at the HTTPS endpoint
# (MINIO_SERVER_URL was not set in the single-node env file, so append it
# rather than trying to rewrite an existing value)
echo 'MINIO_SERVER_URL=https://minio.example.com' >> /etc/default/minio
systemctl restart minio
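Let's Encrypt certificates renew every 60-90 days, so a certbot deploy hook can keep MinIO's copies current. This is a sketch; the paths match the setup above:

```shell
# Install a deploy hook that runs after each successful renewal
cat > /etc/letsencrypt/renewal-hooks/deploy/minio-certs.sh << 'HOOK'
#!/bin/bash
# Copy renewed certs into MinIO's cert directory and restart the service
set -euo pipefail
CERT_DIR=/home/minio-user/.minio/certs
cp /etc/letsencrypt/live/minio.example.com/fullchain.pem "$CERT_DIR/public.crt"
cp /etc/letsencrypt/live/minio.example.com/privkey.pem "$CERT_DIR/private.key"
chown minio-user:minio-user "$CERT_DIR"/public.crt "$CERT_DIR"/private.key
chmod 600 "$CERT_DIR/private.key"
systemctl restart minio
HOOK
chmod +x /etc/letsencrypt/renewal-hooks/deploy/minio-certs.sh
```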
Nginx Reverse Proxy Setup
# Recommended: use nginx as the TLS termination point
# This allows MinIO to run on HTTP internally with nginx handling HTTPS
cat > /etc/nginx/sites-available/minio << 'NGINX'
# MinIO API
server {
listen 443 ssl http2;
server_name minio.example.com;
ssl_certificate /etc/letsencrypt/live/minio.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/minio.example.com/privkey.pem;
# Required for large object uploads
client_max_body_size 0;
proxy_request_buffering off;
location / {
proxy_pass http://127.0.0.1:9000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
chunked_transfer_encoding off;
}
}
# MinIO Console (web UI)
server {
listen 443 ssl http2;
server_name console.example.com;
ssl_certificate /etc/letsencrypt/live/minio.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/minio.example.com/privkey.pem;
location / {
proxy_pass http://127.0.0.1:9001;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;
}
}
NGINX
ln -s /etc/nginx/sites-available/minio /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx
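To confirm the proxy is passing traffic, MinIO's unauthenticated health endpoint works through nginx:

```shell
# Expect HTTP/2 200 from the API vhost
curl -I https://minio.example.com/minio/health/live

# Console should answer on its own vhost
curl -I https://console.example.com/
```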
Creating Buckets and Access Policies
# Create buckets using mc
mc mb local/backups # For Restic/Borg backups
mc mb local/app-uploads # For application file uploads
mc mb local/logs-archive # For log archival
mc mb local/terraform-state # For Terraform state files
# Enable versioning on a bucket (protects against accidental deletes)
mc version enable local/backups
# Create a read-only policy for a specific bucket
cat > /tmp/readonly-backups.json << 'POLICY'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::backups/*", "arn:aws:s3:::backups"]
  }]
}
POLICY
mc admin policy create local readonly-backups /tmp/readonly-backups.json
# Create a service account (non-root access key)
mc admin user add local backup-agent 'SecretKey123456!'
mc admin policy attach local readonly-backups --user backup-agent
# Or generate a temporary access key with expiry
mc admin user svcacct add local minioadmin --expiry 2026-12-31
Using AWS CLI with MinIO
# Configure AWS CLI to use MinIO as the endpoint
aws configure --profile minio
# AWS Access Key ID: minioadmin (or your service account key)
# AWS Secret Access Key: your-secret
# Default region: us-east-1 (MinIO accepts any region string)
# Default output format: json
# Use --endpoint-url for every command
aws --profile minio --endpoint-url https://minio.example.com \
s3 ls
# Upload a file
aws --profile minio --endpoint-url https://minio.example.com \
s3 cp /path/to/file s3://backups/myfile.tar.gz
# Sync a directory
aws --profile minio --endpoint-url https://minio.example.com \
s3 sync /var/log/app/ s3://logs-archive/$(hostname)/
# Set an alias in ~/.aws/config for convenience
cat >> ~/.aws/config << 'CONFIG'
[profile minio]
endpoint_url = https://minio.example.com
CONFIG
# Then omit --endpoint-url:
aws --profile minio s3 ls s3://backups/
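Presigned URLs also work against MinIO, which is handy for sharing an object without distributing credentials:

```shell
# Generate a download link valid for 1 hour (3600 seconds)
aws --profile minio s3 presign s3://backups/myfile.tar.gz --expires-in 3600

# mc equivalent
mc share download --expire 1h local/backups/myfile.tar.gz
```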
Object Lifecycle Rules and Versioning
# Set lifecycle rule: delete objects older than 90 days in logs-archive
mc ilm rule add \
--expire-days 90 \
local/logs-archive
# Set lifecycle rule: transition objects to a cold-storage tier after 30 days
# (the GLACIER tier name must first be configured with `mc ilm tier add`,
# pointing MinIO at another S3-compatible backend)
mc ilm rule add \
--transition-days 30 \
--transition-tier GLACIER \
local/backups
# List lifecycle rules
mc ilm rule list local/logs-archive
# Set object lock (WORM, Write Once Read Many) on a bucket
mc mb --with-lock local/compliance-logs
mc retention set --default GOVERNANCE 365d local/compliance-logs
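To confirm versioning and retention took effect (flags as in recent mc releases; treat the exact output format as version-dependent):

```shell
# Show versioning status for the bucket
mc version info local/backups

# List all versions of the objects in the bucket
mc ls --versions local/backups

# Show the default retention mode on the locked bucket
mc retention info --default local/compliance-logs
```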
Integrating MinIO with Restic, Rclone, and Velero
Restic Backup to MinIO
# Initialize a Restic repository on MinIO
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY='ChangeThisStrongPassword123!'
export RESTIC_REPOSITORY=s3:https://minio.example.com/backups/server1
export RESTIC_PASSWORD=YourEncryptionPassword
restic init
restic backup /etc /home /var/lib
restic snapshots
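A retention pass keeps the repository from growing unbounded (the policy numbers below are an example, not a recommendation):

```shell
# Keep 7 daily, 4 weekly, and 6 monthly snapshots; prune everything else
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune

# Periodically verify repository integrity
restic check
```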
Rclone with MinIO
# Add MinIO as an rclone remote
rclone config create minio s3 \
  provider=Minio \
  endpoint=https://minio.example.com \
  access_key_id=minioadmin \
  secret_access_key='ChangeThisStrongPassword123!' \
  force_path_style=true
rclone ls minio:backups
rclone sync /data/important minio:backups/important
Velero Kubernetes Backup to MinIO
# Create credentials file
cat > /tmp/minio-credentials << 'CREDS'
[default]
aws_access_key_id=minioadmin
aws_secret_access_key=ChangeThisStrongPassword123!
CREDS
# Install Velero with MinIO backend
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.10.0 \
--bucket velero-backups \
--secret-file /tmp/minio-credentials \
--use-volume-snapshots=false \
--backup-location-config \
region=us-east-1,s3ForcePathStyle=true,s3Url=https://minio.example.com
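With the backup location configured, an on-demand backup and a nightly schedule look like this (the namespace name is an example):

```shell
# One-off backup of a namespace
velero backup create app-backup --include-namespaces my-app

# Nightly backup at 02:00, retained for 30 days (720h)
velero schedule create nightly-app --schedule "0 2 * * *" \
  --include-namespaces my-app --ttl 720h

# Check status
velero backup get
```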
Monitoring with Prometheus and Grafana
# MinIO exports Prometheus metrics natively
# Generate a scrape configuration
mc admin prometheus generate local > /tmp/minio-prometheus.yaml
# Merge the generated job into /etc/prometheus/prometheus.yml under the
# EXISTING scrape_configs: key (appending a second scrape_configs key
# produces invalid YAML). The generated file also contains the bearer_token
# MinIO requires for authenticated scrapes:
#
#   - job_name: minio
#     metrics_path: /minio/v2/metrics/cluster
#     scheme: https
#     static_configs:
#       - targets: ['minio.example.com']
#     bearer_token: <copy from /tmp/minio-prometheus.yaml>
# Import the MinIO Grafana dashboard
# Dashboard ID: 13502 (MinIO Dashboard)
# Import at: https://grafana.example.com > Dashboards > Import > 13502
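A couple of starter alert rules can be dropped into Prometheus's rules directory. The metric names come from MinIO's v2 cluster endpoint; the thresholds are arbitrary examples:

```shell
# Alert on offline drives and low free space (adjust thresholds to taste)
cat > /etc/prometheus/rules/minio.yml << 'RULES'
groups:
  - name: minio
    rules:
      - alert: MinioDriveOffline
        expr: minio_cluster_drive_offline_total > 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "MinIO reports one or more offline drives"
      - alert: MinioLowFreeSpace
        expr: minio_cluster_capacity_usable_free_bytes / minio_cluster_capacity_usable_total_bytes < 0.10
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "MinIO usable capacity below 10%"
RULES

# Validate before reloading Prometheus
promtool check rules /etc/prometheus/rules/minio.yml
```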
Performance Tuning
# Increase file descriptor limits for MinIO
cat >> /etc/security/limits.conf << 'LIMITS'
minio-user soft nofile 65536
minio-user hard nofile 65536
LIMITS
# Use XFS filesystem on data drives (best performance with MinIO)
mkfs.xfs -f /dev/sdb
mount -o noatime /dev/sdb /data/minio/disk1   # noatime implies nodiratime
# Add to /etc/fstab with performance options
echo "UUID=$(blkid -s UUID -o value /dev/sdb) /data/minio/disk1 xfs defaults,noatime 0 0" >> /etc/fstab
# Benchmark with MinIO's own tool
mc admin speedtest local
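For high-throughput workloads, a few common kernel settings are often raised. The values below are starting points to benchmark against, not tuned recommendations:

```shell
# Network and memory-map tuning for busy object storage hosts
cat > /etc/sysctl.d/99-minio.conf << 'SYSCTL'
# Larger socket buffers for high-bandwidth transfers
net.core.rmem_max = 268435456
net.core.wmem_max = 268435456
# More connection backlog for bursty S3 traffic
net.core.somaxconn = 4096
# Allow more memory-mapped areas
vm.max_map_count = 262144
SYSCTL
sysctl --system
```

Re-run `mc admin speedtest` after each change to confirm it actually helps on your hardware.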
Security Hardening
# Disable root credential API access after creating service accounts.
# The root user is defined by environment variables, not IAM, so use
# MINIO_API_ROOT_ACCESS rather than `mc admin user disable`:
echo 'MINIO_API_ROOT_ACCESS=off' >> /etc/default/minio
systemctl restart minio
# Enable audit logging (ship events to a SIEM or log collector)
mc admin config set local audit_webhook \
  enable=on endpoint=http://your-siem:8080 auth_token=secret
mc admin service restart local   # config changes take effect after a restart
# Enable server-side encryption (SSE-KMS or SSE-S3)
cat >> /etc/default/minio << 'ENV'
MINIO_KMS_SECRET_KEY=mykey:bXltaW5pa2V5MTIzNDU2Nzg5MDEyMzQ1Njc4OTAxMjM=
ENV
systemctl restart minio
# Enable SSE on a bucket
mc encrypt set sse-kms mykey local/sensitive-data
# Restrict Console access to specific IPs using firewall
ufw allow from 192.168.1.0/24 to any port 9001 # Console: internal network only
ufw allow from any to any port 9000 # API: accessible externally
Conclusion
MinIO eliminates the operational complexity and cost of managed S3-compatible storage for teams that control their own infrastructure. Its drop-in API compatibility means no application code changes: just point your S3 endpoint at MinIO and every existing integration works. Start with a single-node multi-drive setup for development or small production workloads, then graduate to a distributed cluster as needs grow. With Prometheus monitoring, bucket lifecycle rules, and service accounts handling per-application access control, MinIO is production-ready on day one.
About Ramesh Sundararamaiah
Red Hat Certified Architect
Expert in Linux system administration, DevOps automation, and cloud infrastructure. Specializing in Red Hat Enterprise Linux, CentOS, Ubuntu, Docker, Ansible, and enterprise IT solutions.