MinIO: S3-Compatible Object Storage for Your Homelab

What You're Building

You need S3-compatible object storage on your homelab without cloud vendor lock-in or monthly bills—MinIO is the answer. This post walks you through deploying MinIO 2024.12.13 in Docker, securing it with TLS, configuring bucket policies, and integrating it with other self-hosted services.

You'll have a production-ready object storage cluster that handles backups, media files, and application data exactly like AWS S3, but running entirely on your hardware.

Prerequisites

  • Docker Engine: 26.1.3 or newer (tested on Docker 26.1.3 on Ubuntu 24.04.1 LTS)
  • Docker Compose: 2.28.0+
  • Storage: Dedicated volumes for MinIO data (not /tmp or system partitions)
  • Network: Static IP for your MinIO host, open ports 9000 (API) and 9001 (console)
  • TLS certificates: Self-signed or from your CA (we'll generate both)
  • RAM: Minimum 2GB; 4GB+ recommended for concurrent workloads

Gotcha #1: MinIO data is persistent but not encrypted at rest by default. If you're storing sensitive backups, you'll want filesystem-level encryption (dm-crypt/LUKS) on your data volume before deploying.

Docker Deployment with Docker Compose

Create a dedicated directory for MinIO and define the stack in docker-compose.yml. This setup uses environment variables for credentials and exposes both the S3 API and web console.

version: '3.8'

services:
  minio:
    image: minio/minio:RELEASE.2024-12-13T22-52-09Z
    hostname: minio
    container_name: minio
    restart: unless-stopped
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: "${MINIO_ROOT_PASSWORD}"
    volumes:
      - minio_data:/data
      - ./certs:/etc/minio/certs:ro
    # MINIO_VOLUMES/MINIO_OPTS are systemd conventions and are ignored when an
    # explicit command is given, so pass the flags directly here
    command: minio server /data --certs-dir /etc/minio/certs --console-address ":9001"
    networks:
      - minio_net

networks:
  minio_net:
    driver: bridge

volumes:
  minio_data:
    driver: local
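Optionally, add a healthcheck under the minio: service above so dependent containers can wait for a ready API. /minio/health/live is MinIO's documented liveness endpoint, and recent minio/minio images ship curl; switch the scheme to https once the TLS certificates from the next section are in place:

```yaml
    healthcheck:
      # Liveness probe against the S3 API port; use https after certs are mounted
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 5s
      retries: 3
```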

Before deploying, generate a strong password and store it in your environment file:

echo "MINIO_ROOT_PASSWORD=$(openssl rand -base64 32)" > .env
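As a quick sanity check: openssl rand -base64 32 always yields a 44-character string, comfortably above MinIO's 8-character minimum for root credentials:

```shell
# Generate a candidate secret and confirm its length before writing it to .env
SECRET="$(openssl rand -base64 32)"
echo "length=${#SECRET}"
```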

Pull the image and start the container:

docker compose pull
docker compose up -d
docker compose logs -f minio

You'll see startup logs confirming the S3 API is listening on 9000 and the console on 9001. On my T5810 with 24GB RAM, startup takes 3–5 seconds. Access the console at http://localhost:9001 with username minioadmin and your password.

Gotcha #2: The default root user is minioadmin—change this immediately in production. The console doesn't enforce password complexity, so use your randomly generated 32-character password.

TLS Configuration and Reverse Proxy

Running MinIO over HTTP on your local network is fine for testing, but any external access or use with Kubernetes/container orchestration requires TLS. You have two options: self-signed certificates, or integration with your existing CA (Vault, an OpenSSL CA, or Let's Encrypt via a reverse proxy).

Generate Self-Signed Certificates

mkdir -p certs

# Generate CA key and certificate
openssl genrsa -out certs/ca.key 4096
openssl req -new -x509 -days 3650 -key certs/ca.key -out certs/ca.crt \
  -subj "/C=US/ST=HomeLab/L=Local/O=Homelab/CN=minio-ca"

# Generate MinIO server certificate
openssl genrsa -out certs/private.key 4096

# Create certificate signing request (the SAN is added at signing time below)
openssl req -new \
  -key certs/private.key \
  -out certs/server.csr \
  -subj "/C=US/ST=HomeLab/L=Local/O=Homelab/CN=minio.local"

# Sign with CA (valid for 1 year)
openssl x509 -req -days 365 \
  -in certs/server.csr \
  -CA certs/ca.crt \
  -CAkey certs/ca.key \
  -CAcreateserial \
  -out certs/public.crt \
  -extfile <(printf "subjectAltName=DNS:minio.local,DNS:*.minio.local,IP:192.168.1.100")

# Verify certificate
openssl x509 -in certs/public.crt -text -noout

The signing step above writes private.key and public.crt into the certs/ directory; MinIO automatically loads them at startup. The docker-compose.yml above already mounts this directory read-only.
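If you just want to confirm SAN handling works on your OpenSSL build, this throwaway one-command variant (no separate CA, /tmp paths, same placeholder names) produces a certificate with the same SAN entries; -addext requires OpenSSL 1.1.1 or newer:

```shell
# Throwaway self-signed cert with SANs baked in at request time (no CA step)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /tmp/minio-test.key -out /tmp/minio-test.crt \
  -subj "/CN=minio.local" \
  -addext "subjectAltName=DNS:minio.local,IP:192.168.1.100"

# The SAN block should list both entries
openssl x509 -in /tmp/minio-test.crt -noout -text | grep -A1 "Subject Alternative Name"
```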

Reverse Proxy with Nginx

For Let's Encrypt integration or cleaner routing, run MinIO behind Nginx. Create nginx.conf:

upstream minio_api {
    server minio:9000;
}

upstream minio_console {
    server minio:9001;
}

server {
    listen 443 ssl http2;
    server_name minio.local;

    ssl_certificate /etc/nginx/certs/public.crt;
    ssl_certificate_key /etc/nginx/certs/private.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    # Allow arbitrarily large uploads (nginx defaults to a 1 MB body limit)
    client_max_body_size 0;

    # S3 API endpoint
    location / {
        proxy_pass https://minio_api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}

server {
    listen 443 ssl http2;
    server_name console.minio.local;

    ssl_certificate /etc/nginx/certs/public.crt;
    ssl_certificate_key /etc/nginx/certs/private.key;

    # MinIO Console (also served over TLS, since MinIO loads certs for both ports)
    location / {
        proxy_pass https://minio_console;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        # The console uses WebSockets for live updates
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name minio.local console.minio.local;
    return 301 https://$server_name$request_uri;
}

Add this to your existing Docker Compose or run it in a separate reverse proxy container. Update your hosts file:

echo "192.168.1.100 minio.local console.minio.local" | sudo tee -a /etc/hosts

Creating Buckets and Access Policies

Buckets in MinIO work exactly like S3 buckets. Create them via the console or CLI. Using the MinIO Client (mc) is faster for automation:

# Install MinIO Client (writing to /usr/local/bin requires root)
sudo curl -fsSL https://dl.min.io/client/mc/release/linux-amd64/mc -o /usr/local/bin/mc
sudo chmod +x /usr/local/bin/mc

# Configure MinIO alias
mc alias set minio https://minio.local:9000 minioadmin YOUR_PASSWORD --api S3v4

# Create buckets
mc mb minio/backups
mc mb minio/media
mc mb minio/archives

# List buckets
mc ls minio

Now create an access policy for specific services. This policy allows a backup service read-write access only to the backups/ bucket:

cat > backup-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::backups/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::backups"
    }
  ]
}
EOF
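The split into two statements matters: the object actions (GetObject, PutObject, DeleteObject) apply to the object ARN backups/*, while ListBucket applies to the bucket ARN itself. Before uploading, a quick local syntax check catches malformed JSON (assumes python3 is on the host):

```shell
# Validate the policy file's JSON syntax; run from the directory containing
# the backup-policy.json created above (mc gives terse errors on bad policies)
if python3 -m json.tool backup-policy.json > /dev/null; then
  echo "policy JSON is valid"
else
  echo "policy JSON is missing or malformed"
fi
```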

Attach this policy and create a service user:

mc admin policy create minio backup-policy backup-policy.json

# Create service user with the policy attached
mc admin user add minio backup_svc backup_password
mc admin policy attach minio backup-policy --user backup_svc

# Verify
mc admin user info minio backup_svc

Your backup service now has limited credentials it can use without exposing root access.

Integration: Docker Backup Example

Here's a real-world example: backing up a database to MinIO every night. This uses the backup_svc user created above:

version: '3.8'

services:
  postgres:
    image: postgres:16-alpine
    container_name: postgres
    environment:
      POSTGRES_PASSWORD: dbpass
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app_net

  backup:
    image: postgres:16-alpine
    depends_on:
      - postgres
    entrypoint: /backup.sh
    volumes:
      - ./backup.sh:/backup.sh:ro
      - backup_tmp:/tmp/backups
    environment:
      PGPASSWORD: dbpass
      MINIO_HOST: https://minio.local:9000
      MINIO_ACCESS_KEY: backup_svc
      MINIO_SECRET_KEY: backup_password
    networks:
      - app_net

networks:
  app_net:
    driver: bridge

volumes:
  postgres_data:
  backup_tmp:

The backup script (backup.sh) runs every 24 hours:

#!/bin/sh
set -e

mkdir -p /tmp/backups

while true; do
  BACKUP_FILE="/tmp/backups/postgres-$(date +%Y%m%d-%H%M%S).sql"
  
  # Dump database
  pg_dump -h postgres -U postgres > "$BACKUP_FILE"
  
  # Upload to MinIO (the AWS CLI has no --access-key/--secret-key flags;
  # it reads credentials from the environment)
  AWS_ACCESS_KEY_ID="$MINIO_ACCESS_KEY" \
  AWS_SECRET_ACCESS_KEY="$MINIO_SECRET_KEY" \
    aws s3 cp "$BACKUP_FILE" s3://backups/ \
    --endpoint-url "$MINIO_HOST"
  
  # Cleanup local copy
  rm "$BACKUP_FILE"
  
  # Sleep 24 hours
  sleep 86400
done

Install awscli in the backup container by building a small custom image on top of postgres:16-alpine, or use the official amazon/aws-cli image and adjust paths.
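A minimal sketch of that custom image, assuming Alpine's community repository still ships the aws-cli package:

```dockerfile
FROM postgres:16-alpine

# aws-cli lives in Alpine's community repository (package name assumed stable)
RUN apk add --no-cache aws-cli

COPY backup.sh /backup.sh
RUN chmod +x /backup.sh
ENTRYPOINT ["/backup.sh"]
```

Point the backup service at this image with build: instead of image:, and drop its entrypoint: and backup.sh volume lines since the script is baked in.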

Common Issues and Troubleshooting

Certificate Verification Failures

If clients reject the self-signed certificate, either add it to your system CA store or disable verification in dev:

# Linux: Copy CA certificate
sudo cp certs/ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates

# AWS CLI: Disable verification (dev only)
aws s3 ls s3://backups \
  --endpoint-url https://minio.local:9000 \
  --no-verify-ssl

Port Conflicts

If 9000 or 9001 are already in use, change the port mappings in docker-compose.yml and recreate the container:

docker compose down
# Edit ports in docker-compose.yml
docker compose up -d

Disk Space and Data Corruption

MinIO doesn't gracefully handle full disks. Monitor your data volume and set up alerts. If corruption occurs, stop the container, unmount the filesystem if possible, run fsck against the block device backing your data volume (fsck operates on devices, not mounted directories), and restart:

docker compose stop
sudo fsck /dev/sdX   # the block device backing your MinIO data volume
docker compose start

Console Login Issues After Credential Changes

If you change the root password via mc admin user passwd, the console session may cache the old credentials. Log out, clear your browser cache, or use an incognito window.

Next Steps

You now have a production-capable S3-compatible object storage system running on your homelab. What to do next:

  • Enable versioning: mc version enable minio/backups for accidental deletion protection
  • Set bucket lifecycle policies: Auto-delete old objects or move to archive tiers after 30 days
  • Configure monitoring: Use Prometheus metrics endpoint at /minio/v2/metrics/cluster with your existing monitoring stack
  • Multi-node deployment: For redundancy, deploy MinIO in distributed mode across 4+ nodes with locally attached drives (MinIO recommends direct-attached storage; NFS and other shared filesystems are discouraged)
  • Integrate with Kubernetes: Use MinIO as the default storage backend for your homelab Kubernetes cluster via S3 CSI driver

MinIO is rock-solid for homelab workloads—I've run it for 3+ years on a T5810 with no data loss, handling 50GB+ backups and daily media ingestion without incident.
