Uptime Kuma: Self-Hosted Monitoring for Your Homelab

The Problem: You Need Visibility Into Your Homelab Without Sending Data to SaaS

You've got services scattered across your homelab: Plex, Home Assistant, a NAS, maybe a Kubernetes node. When something goes down at 2 AM, you either find out from a frustrated family member or you're manually SSHing into boxes checking logs. You need real-time monitoring with alerting, but you're not shipping ten years of uptime data to Datadog.

This post walks you through deploying Uptime Kuma on Docker, wiring up multi-channel notifications, building a status page your family can actually use, and securing it with reverse proxy SSL. I'm assuming you've already got Docker running and understand basic networking.

Prerequisites and Versions

Before you start, confirm you have:

  • Docker 24.0+ and Docker Compose 2.20+ (test with docker --version and docker compose version)
  • Uptime Kuma 1.23.11 (current stable as of early 2025)
  • A reverse proxy: I'm using Caddy 2.8, but Nginx 1.26+ or Traefik work just as well
  • At least one notification channel configured (Telegram, Slack, Discord, or email)
  • Ubuntu 24.04 LTS or equivalent (tested on a Proxmox LXC container with 2 CPU cores, 1GB RAM minimum)

This setup assumes you have a functioning Docker host and basic familiarity with docker-compose. If you're new to Docker networking, the official compose networking docs are essential reading before you proceed.
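If you'd rather script the version check than eyeball it, here's a minimal pre-flight sketch. version_ge is a local helper (not a Docker command), and the 24.0/2.20 minimums are the ones listed above:

```shell
#!/usr/bin/env bash
# Pre-flight sketch: warn if Docker or Compose is older than the minimums above.
version_ge() {
  # true (exit 0) if version $1 >= version $2, using GNU sort's -V comparison
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

docker_ver=$(docker --version 2>/dev/null | grep -oE '[0-9]+(\.[0-9]+)+' | head -n1)
compose_ver=$(docker compose version --short 2>/dev/null)

version_ge "${docker_ver:-0}" "24.0"  || echo "Docker ${docker_ver:-missing} is older than 24.0"
version_ge "${compose_ver:-0}" "2.20" || echo "Compose ${compose_ver:-missing} is older than 2.20"
```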

Deploy Uptime Kuma with Docker Compose

Create a dedicated directory for Uptime Kuma and set up your compose file:


mkdir -p ~/docker/uptime-kuma
cd ~/docker/uptime-kuma

Create docker-compose.yml:


services:
  uptime-kuma:
    image: louislam/uptime-kuma:1.23.11
    container_name: uptime-kuma
    restart: always
    ports:
      - "3001:3001"
    volumes:
      - uptime-kuma-data:/app/data
    networks:
      - homelab
    environment:
      - TZ=America/New_York

networks:
  homelab:
    driver: bridge

volumes:
  uptime-kuma-data:

Start the container:


docker compose up -d

Verify it's running:


docker compose logs -f uptime-kuma

You should see output like listening on 0.0.0.0:3001. Access it at http://your-lab-ip:3001. The first load takes ~10 seconds—don't refresh yet.

Gotcha #1: Uptime Kuma stores its SQLite database in the volume. If you don't mount uptime-kuma-data, you'll lose all configuration on container restart. I've seen this bite people who copy a quick docker run command without volumes.
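Since all state lives in that single volume, backups are cheap too. Here's a sketch of a helper that tars the volume's contents out through a throwaway Alpine container. Note that Compose prefixes named volumes with the project name, so check docker volume ls for the exact name (likely uptime-kuma_uptime-kuma-data):

```shell
#!/usr/bin/env bash
# Sketch: dump a named Docker volume to a tarball via a throwaway container.
backup_volume() {
  local vol="$1" dest="$2"
  docker run --rm \
    -v "$vol":/data:ro \
    -v "$(cd "$(dirname "$dest")" && pwd)":/backup \
    alpine tar czf "/backup/$(basename "$dest")" -C /data .
}

# Usage (volume name is an assumption; verify with: docker volume ls):
#   backup_volume uptime-kuma_uptime-kuma-data ./kuma-$(date +%F).tar.gz
```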

Set Up Your First Monitors and Notification Channels

On first login, you'll create an admin account. Set a strong password—this box is internet-facing if you expose it through a reverse proxy.

Add a Notification Channel

Go to Settings → Notifications. I'll use Telegram as an example (Discord and Slack are similarly straightforward):

  1. Click Add Notification
  2. Choose Telegram from the dropdown
  3. Follow the in-app guide to get your bot token and chat ID (you'll start a Telegram bot with @BotFather, then message it to capture the chat ID)
  4. Send a test notification

Gotcha #2: Some notification providers need extra setup beyond a username and password. If you're using a personal Gmail account for SMTP email, you'll need an app-specific password, not your actual Gmail password. Uptime Kuma's UI hints at this but doesn't spell it out; check your notification provider's docs, not Uptime Kuma's.

Add Your First Monitor

Click Add New Monitor and select HTTP(s) for your first test. Point it at something reliable—I use my Home Assistant instance:

  • Monitor Type: HTTP(s)
  • URL: https://homeassistant.lab.local:8123
  • Interval: 60 seconds (fine for homelab, don't go below 30 unless you need sub-minute response detection)
  • Timeout: 10 seconds
  • Retries: 1 (retry once before marking down)
  • Notification: Select your Telegram channel

Save it. You'll see a green status within the next minute. Add more monitors for critical services: Plex, Jellyfin, your reverse proxy, or any HTTP endpoint you care about.

You can also monitor TCP ports (for SSH), DNS queries, or Docker container status. The TCP option is useful for non-HTTP services: for example, confirming your TrueNAS box still accepts connections on port 22. Uptime Kuma just opens the socket and closes it; it never speaks the SSH protocol.
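Under the hood, a TCP monitor is just "can I open a socket to host:port". If you want to reproduce a check from the shell, here's a rough equivalent using bash's /dev/tcp pseudo-device (host and port are whatever you're probing):

```shell
#!/usr/bin/env bash
# Roughly what a TCP monitor does: attempt the connection, report up/down.
check_tcp() {
  local host="$1" port="$2"
  timeout 5 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null \
    && echo up || echo down
}

check_tcp 127.0.0.1 22  # probe the local sshd, if one is running
```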

Build a Public Status Page

One of Uptime Kuma's best features is its status page. Family members and guests can check service status without needing credentials.

Go to Status Pages and click Create:

  • Slug: status (becomes /status/status in Uptime Kuma)
  • Title: Lab Status
  • Description: Real-time uptime for homelab services
  • Add Monitors to Page: Select all monitors you want public-facing
  • Public Access: Toggle on

The status page is now accessible at http://your-lab-ip:3001/status/status. This is perfect to share internally or embed behind your reverse proxy with basic auth.

Secure with Reverse Proxy and SSL

Uptime Kuma serves HTTP internally, but you want HTTPS externally. I'll show you the Caddy setup (simplest); Nginx and Traefik follow the same pattern, and their docs cover the equivalent configuration.

Update your docker-compose.yml to remove the external port exposure:


services:
  uptime-kuma:
    image: louislam/uptime-kuma:1.23.11
    container_name: uptime-kuma
    restart: always
    # Remove: ports:
    #   - "3001:3001"
    # Instead, only expose on the internal network
    expose:
      - 3001
    volumes:
      - uptime-kuma-data:/app/data
    networks:
      - homelab
    environment:
      - TZ=America/New_York

networks:
  homelab:
    driver: bridge

volumes:
  uptime-kuma-data:

Recreate the container:


docker compose down
docker compose up -d

Now set up Caddy. Add this to your Caddyfile (typically /etc/caddy/Caddyfile or mounted into a Caddy container):


uptime.lab.local {
    # Caddy's reverse_proxy handles WebSocket upgrades and sets
    # X-Forwarded-For, X-Forwarded-Proto, and X-Forwarded-Host
    # automatically, so no extra header directives are needed.
    reverse_proxy uptime-kuma:3001
}

Reload Caddy:


caddy reload --config /etc/caddy/Caddyfile

If you're running Caddy in Docker on the same network, use the container name directly, as above. For an internal-only name like uptime.lab.local, Caddy can't obtain a publicly trusted ACME certificate, so add tls internal to the site block; Caddy then issues (and renews) a certificate from its own local CA.

Access https://uptime.lab.local. A certificate warning is expected with the internal CA; install Caddy's root certificate (or your own CA's) in your browser or OS trust store to silence it.

Important: Uptime Kuma relies on the X-Forwarded-Proto and X-Forwarded-Host headers (and on WebSocket support) to detect its own URL behind a reverse proxy. Caddy sets all of these automatically; with Nginx you must add the proxy_set_header lines and the Upgrade/Connection headers yourself. Without them, the app may redirect to http:// or misdetect its own URL, breaking API calls and status page rendering.
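If you want the setup mentioned earlier (dashboard behind basic auth, status page public), here's a hedged Caddyfile sketch. The basic_auth directive name is current as of Caddy 2.8; the whitelisted paths are assumptions to verify against your browser's network tab, and the password hash comes from caddy hash-password:

```caddyfile
uptime.lab.local {
    # Everything except the public status page (and the assets/API it
    # loads) requires credentials. Paths are assumptions; adjust after
    # testing with your browser's network tab.
    @private {
        not path /status/* /api/status-page/* /assets/* /upload/*
    }
    basic_auth @private {
        # Replace the hash with output from: caddy hash-password
        admin $2a$14$REPLACE_WITH_YOUR_BCRYPT_HASH
    }
    reverse_proxy uptime-kuma:3001
}
```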

Fine-Tune and Monitor Long-Term

Now that you've got basic monitoring running, optimize for your homelab:

  • Adjust check intervals: Critical services (Home Assistant, Plex) every 30-60 seconds; less critical stuff every 5 minutes. Don't obsess over sub-30-second intervals—network noise creates false positives.
  • Set up incident acknowledgment: When a service goes down, acknowledge the alert in Uptime Kuma to prevent notification spam while you're fixing it.
  • Use status page groups: Organize monitors by category (Media, Automation, Infrastructure) so your status page is scannable at a glance.
  • Monitor Uptime Kuma itself: Add a monitor pointing to http://uptime-kuma:3001/ from another service (or externally via your proxy). This catches container crashes.

Uptime Kuma's SQLite database grows slowly. On my T5810 with 24GB RAM running Ubuntu 24.04, a year of monitoring ~40 services takes about 50MB. No cleanup needed unless you're paranoid about storage.

Common Issues

Status page shows "No data": You haven't given monitors enough time to collect history. Wait 2-3 cycles and refresh. If it persists, check docker logs for errors.

Notifications spam, then stop: Your notification provider (Telegram, Slack) might be rate-limiting repeated messages. Bump the monitor's retry count so brief blips don't alert at all, and use the resend-notification setting in the monitor config to thin out repeats during a long outage. For testing, use a longer interval than your actual production setup.

HTTPS shows certificate errors even with a valid proxy: Ensure your reverse proxy is forwarding X-Forwarded-Proto: https to the container. Note that this is a request header sent to the upstream, so you won't see it in curl output; a quicker symptom check is curl -sI https://uptime.lab.local/ and watching for a Location: http://... redirect. If you get one, the proxy isn't passing the header and your proxy config is wrong.

Monitor keeps failing despite service being up: Check timeout settings. HTTPS handshakes on slower systems can take 8+ seconds; bump the timeout to 15 seconds. If it's a self-signed cert, make sure Uptime Kuma isn't rejecting it: enable the "Ignore TLS/SSL error" option in the monitor settings if needed (only for internal services, obviously).

Docker volume permissions error on startup: If the uptime-kuma-data volume is owned by root and your container runs as a non-root user, you'll get permission denied. Fix the ownership from inside the container as root: docker compose exec -u root uptime-kuma chown -R node:node /app/data (node is the unprivileged user in the image), or recreate the volume with correct ownership.

What You Now Have

You've deployed Uptime Kuma on Docker with persistent storage, wired up notifications to Telegram (or your preferred channel), built a public status page, and secured it behind a reverse proxy with HTTPS. You're now getting alerts when services go down instead of discovering it retroactively.

Next steps: add monitors for everything that matters (database uptime, backup job success via cron webhooks, UPS status if you have one), integrate Uptime Kuma's API into your own dashboards if you're building custom monitoring UIs, or layer in Grafana for detailed historical analysis beyond uptime tracking.
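The backup-job idea maps directly onto Uptime Kuma's Push monitor type: create a Push monitor with a heartbeat interval slightly longer than the job's schedule, then have the job report in only on success. A sketch; the script path and token are placeholders (Uptime Kuma shows the real push URL when you create the monitor):

```cron
# Run the nightly backup, then ping the Push monitor only on success.
# If the ping doesn't arrive within the monitor's interval, you get alerted.
30 2 * * * /usr/local/bin/backup.sh && curl -fsS -m 10 "https://uptime.lab.local/api/push/PLACEHOLDER_TOKEN?status=up&msg=backup-ok" >/dev/null
```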

Related reading: Uptime Kuma official docs cover advanced features like maintenance windows and webhook integrations. For reverse proxy patterns, see the Caddy documentation and Nginx docs if you prefer that path.
