TrueNAS Scale: Complete Homelab NAS Setup Guide
Stop Running Separate Containers and Storage Silos: Why TrueNAS Scale Consolidates Your Homelab
If you're managing NAS storage, Docker containers, and automated backups across multiple machines in your homelab, you're adding operational complexity you don't need. TrueNAS Scale unifies ZFS storage, container orchestration, and scheduled snapshots into a single appliance—and it's purpose-built for this exact workflow.
This guide walks you through a production-grade TrueNAS Scale homelab setup on minimal hardware, with working examples for ZFS pool creation, SMB share configuration, Docker app deployment, and automated snapshot policies.
Prerequisites: Versions and Hardware Requirements
This guide assumes TrueNAS Scale 24.10.1 (current stable release as of Q1 2025) on modest but capable homelab hardware. You'll need:
- TrueNAS Scale 24.10.1 — download the installer ISO from truenas.com/download-truenas-scale/
- Minimum 2 CPU cores, 8GB RAM (I run it on an Intel i3 with 16GB; that's comfortable)
- Minimum 4 drives for the striped-mirror layout in this guide (a single 2-disk mirror works for testing; don't run production data without redundancy)
- Static IP address (DHCP is fine for initial setup, but lock it down immediately)
- Existing home network with IPv4 connectivity
- SSH client on your workstation (Windows 10+, macOS, Linux — built-in everywhere now)
Gotcha #1: TrueNAS Scale installer will wipe the target drive completely. Back up any existing data, and ensure you're installing to the right disk. I've seen people accidentally format the USB drive they booted from by selecting the wrong device.
Installation and Initial Web UI Access
Write the TrueNAS Scale ISO to a USB drive using dd or Balena Etcher, then boot from USB. The installer is straightforward—select your target drive, confirm, and wait roughly 3–5 minutes.
# On your workstation, if using dd (Linux; on macOS the device is /dev/rdiskN and the flag is bs=1m)
sudo dd if=TrueNAS-SCALE-24.10.1.iso of=/dev/sdX bs=1M conv=fsync status=progress
# Replace /dev/sdX with your USB device; double-check with lsblk first
After the installer completes, TrueNAS boots and you'll see the console menu with the web UI URL. Find your NAS on your network:
# Scan your network for hosts serving the web UI (assumes 192.168.1.0/24)
nmap -p 80 --open 192.168.1.0/24
# nmap only shows the "truenas" hostname if your router resolves it;
# otherwise check your router's DHCP client list
Navigate to http://<your-nas-ip> in a browser and sign in with the admin account. Recent SCALE releases (24.10 included) have you set the admin password during installation rather than shipping a default login, and root web logins are deprecated. If you never set one, reset the web UI credentials from the console menu.
Gotcha #2: If you can't reach the web UI, first confirm the NAS actually obtained an IPv4 address—some homelab networks are IPv6-first. From the console menu, choose the network configuration option and set a static IPv4 address manually.
Creating Your First ZFS Pool
A ZFS pool is your storage foundation. For a homelab with 4 drives, two striped mirrors (the ZFS equivalent of RAID10) give you fault tolerance plus good read and write performance; each mirror can survive one drive failure.
In the web UI, open the Storage page and click Create Pool.
# Your pool configuration (example: 4x 4TB drives)
Pool Name: tank
Encryption: ON (strongly recommended)
Vdev Configuration: 2x Mirror (pairs disk 0+1, disk 2+3)
Disk Layout:
- vdev-0: mirror
- disk-0: sda
- disk-1: sdb
- vdev-1: mirror
- disk-2: sdc
- disk-3: sdd
Select all four drives, choose Mirror topology (this creates two 2-disk mirrors), enable encryption with a strong passphrase, and confirm. ZFS initialization takes 5–15 minutes depending on drive size.
Once creation finishes, you'll have roughly 8TB of usable capacity from the two 4TB mirrors, which ZFS reports as about 7.2TiB (decimal-versus-binary units, plus a little metadata overhead).
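Before loading data, it's worth confirming the layout from the shell. A quick check over SSH (assuming you've enabled the SSH service on the NAS):

```shell
# Show pool topology and health — expect state: ONLINE with two mirror vdevs
zpool status tank

# One-line capacity and health summary
zpool list -o name,size,alloc,free,health tank
```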
Configuring SMB Shares for Your Homelab Clients
Your other machines (Proxmox, Media Server, Windows box) need access to this storage. SMB (Server Message Block) is the standard. Navigate to Sharing → SMB and create a share:
# SMB Share Configuration
Share Name: media
Dataset: tank/media
Purpose: Read-write access for all clients
Permissions:
- Owner (user): the SMB account you created under Credentials → Local Users
- Owner (group): that account's primary group
- Mode: 0775 (owner and group read-write; 0755 would leave the share read-only for group members)
Browseable: ON
Guests Allowed: OFF (turn ON only if you want anonymous access; I don't)
First, create the dataset from the web UI under Storage → Pools → tank, then add a child dataset called media. Then create the SMB share pointing to it.
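If you prefer the shell, the same dataset setup can be sketched over SSH. The property choices below are common homelab defaults, not the UI's exact output, and smbuser is a placeholder for your SMB account:

```shell
# Create the child dataset with media-friendly properties
zfs create tank/media
zfs set compression=lz4 atime=off tank/media

# Match whatever owner and mode you configured on the SMB share
chown smbuser:smbuser /mnt/tank/media
chmod 0775 /mnt/tank/media
```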
On your client machine, mount the share:
# Linux client — use a dedicated SMB user; TrueNAS blocks SMB logins as root by default
sudo mount -t cifs //192.168.1.100/media /mnt/truenas \
-o username=smbuser,password=<your-password>,uid=1000,gid=1000
# smbuser is a placeholder for the account you created on the NAS
# macOS client (use Finder → Go → Connect to Server)
smb://192.168.1.100/media
# Windows client (prompts for the password)
net use Z: \\192.168.1.100\media /user:smbuser
Verify the mount with df -h (Linux/macOS) or net use (Windows); the share should report the pool's available space.
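To survive client reboots on Linux, move the mount into /etc/fstab with a root-only credentials file instead of a password on the command line. A sketch (smbuser and the paths are placeholders):

```shell
# Store SMB credentials where only root can read them
sudo tee /etc/cifs-credentials >/dev/null <<'EOF'
username=smbuser
password=your-password
EOF
sudo chmod 600 /etc/cifs-credentials

# fstab entry (one line) — then run 'sudo mount -a' to activate it:
# //192.168.1.100/media /mnt/truenas cifs credentials=/etc/cifs-credentials,vers=3.0,uid=1000,gid=1000 0 0
```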
Deploying Apps with the Built-in Docker Backend
As of 24.10 (Electric Eel), TrueNAS Scale replaced its former k3s Kubernetes backend with native Docker. You can deploy applications directly from the web UI without ever touching the Docker CLI.
Navigate to Apps and click Discover Apps. The catalog includes maintained apps for common homelab services: Home Assistant, Plex, Nextcloud, Pi-hole, and others.
For example, deploying Home Assistant:
# Home Assistant via TrueNAS App
App: Home Assistant
Version: 2025.1.0
Configuration:
  Storage:
    config_data: tank/apps/home-assistant   (host path on your pool)
    backup_path: tank/backups
  Networking:
    Hostname: home-assistant
    Port: 8123
  Environment:
    TZ: America/New_York
    PYTHONUNBUFFERED: 1
The UI walks you through storage allocation, environment variables, and networking. Persistence is handled automatically—Home Assistant data lives on your ZFS pool, so snapshots protect your configuration.
Pro tip: In 24.10, internal app state lives in a hidden ix-apps dataset on whichever pool you designate for apps. Point host-path storage at an explicit dataset like tank/apps and set up automatic snapshots on it (see next section) to protect your app state.
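Because 24.10's apps run on plain Docker, you can inspect them over SSH with standard Docker commands when the UI isn't enough:

```shell
# List app containers with their state and published ports
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'

# Tail an app's logs; the name filter matches whatever the catalog named
# the container (check 'docker ps' output first)
docker logs --tail 50 "$(docker ps -q --filter name=home-assistant)"
```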
Automating Snapshots for Data Protection
ZFS snapshots are your insurance policy. They're lightweight point-in-time copies that cost almost nothing until you modify data. Create a snapshot policy on your main dataset:
Navigate to Data Protection → Periodic Snapshot Tasks and create a new task for each tier:
# Snapshot Policies
Policy 1 — Hourly:
  Dataset: tank (enable Recursive so child datasets are included)
  Frequency: Hourly
  Keep: 24 (rolling 24 hours)
  Naming Pattern: auto-%Y%m%d_%H%M
Policy 2 — Daily:
  Frequency: Daily (runs at 2 AM)
  Keep: 30 (30 days of dailies)
  Naming Pattern: auto-%Y%m%d
Policy 3 — Weekly:
  Frequency: Weekly (runs Sundays at 3 AM)
  Keep: 12 (12 weeks)
  Naming Pattern: auto-%Y-%U
This gives you a tiered snapshot schedule: hourly snapshots for quick recovery, dailies for longer-term rollback, and weeklies as a base for off-site replication. Remember that snapshots on the same pool are versioning, not backups—pair them with replication. Storage overhead is typically 5–10%, depending on how much data changes between snapshots.
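The same naming pattern works for ad-hoc snapshots from the shell, which is handy before a risky change:

```shell
# Recursive manual snapshot of tank and all child datasets
zfs snapshot -r "tank@manual-$(date +%Y%m%d_%H%M)"

# Confirm it exists
zfs list -t snapshot -o name,used,creation | tail -5
```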
View existing snapshots:
# SSH into your TrueNAS box and inspect snapshots directly
ssh [email protected]
zfs list -t snapshot | head -20
# Output shows all snapshots with their size and creation time
To roll back to a snapshot (discarding all changes made since that point):
zfs rollback tank@auto-20250115_1200
# Reverts tank to that point in time. Plain rollback only works to the most
# recent snapshot; add -r to also destroy any newer snapshots in the way.
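Rollback is all-or-nothing; for a single file, browse the hidden .zfs directory instead. This assumes a recursive snapshot task, so the child dataset tank/media has its own snapshots:

```shell
# Every snapshot of a dataset is read-only browsable under .zfs/snapshot
ls /mnt/tank/media/.zfs/snapshot/

# Copy one file back (movie.mkv is a hypothetical filename)
cp /mnt/tank/media/.zfs/snapshot/auto-20250115_1200/movie.mkv /mnt/tank/media/
```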
Common Issues and Real Troubleshooting
Slow SMB Performance Over the Network
SMB throughput suffers when clients negotiate an old protocol dialect. Make sure SMB3 is in use on both ends:
# On TrueNAS, via SSH — SMB settings live in Samba's config, not sysctl
testparm -s 2>/dev/null | grep -i protocol
# No output means Samba's defaults apply; to enforce SMB3, go to
# Services → SMB and set the minimum protocol to SMB3
# On Linux client
mount -t cifs //192.168.1.100/media /mnt/truenas \
-o vers=3.0,username=root,password=<pwd>
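To confirm what dialect connected clients actually negotiated, check live sessions on the server side (smbstatus ships with Samba on TrueNAS):

```shell
# Each connected session lists its negotiated protocol version —
# you want SMB3_11 (or at least SMB3_00) in that column
smbstatus --brief
```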
Apps Won't Start After Reboot
The app service (Docker, as of 24.10) can come up before its pool is fully imported after an unclean reboot. Restart it:
ssh [email protected]
systemctl restart docker
# Wait 30–60 seconds for containers to come back
docker ps   # Verify your app containers are running
Out of Disk Space Despite Free Space Showing
ZFS reserves space for metadata and snapshots. Check actual available space:
zfs list tank
# Column 'AVAIL' shows true available space
# If it's low, reduce snapshot retention or add more drives
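To see exactly where the space went, ZFS can split usage into live data, snapshots, and child datasets:

```shell
# USEDSNAP = space pinned by snapshots; USEDDS = live dataset data
zfs list -o space -r tank

# Reclaim space by destroying an old snapshot (example name)
zfs destroy tank@auto-20250101_0300
```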
Next Steps: Backup Strategy and Monitoring
You now have a fully functional TrueNAS Scale homelab NAS with redundant storage, network shares, containerized apps, and automatic snapshots. Your next moves:
- Set up off-site backup: Use Data Protection → Replication Tasks to push snapshots to a remote ZFS host over SSH, or a Cloud Sync Task for object storage.
- Enable alerts: Navigate to System → Alerts and configure email notifications for disk failures or pool degradation.
- Plan capacity: You can add new vdevs at any time, and a mirror grows once you replace both of its disks with larger ones; 24.10 also introduces RAIDZ expansion. You still can't remove or shrink a RAIDZ vdev, so plan the layout up front.
- Back up your configuration: Download the system config file under System Settings → General → Manage Configuration and store it offline. It makes rebuilding the system after a failure far faster.
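The off-site backup step above is what the Replication Tasks UI automates; under the hood it's zfs send/receive. A minimal manual sketch, assuming SSH key auth and an existing pool named backup on the remote host:

```shell
# Pick today's daily snapshot (name follows the daily policy's pattern)
SNAP="tank@auto-$(date +%Y%m%d)"

# -R replicates child datasets and their snapshots; -F on the receiving
# side forces the target dataset to roll back to match before receiving
zfs send -R "$SNAP" | ssh [email protected] zfs receive -F backup/tank
```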
Your homelab now has enterprise-grade storage reliability without the enterprise price tag. Snapshots protect against ransomware and accidental deletion, ZFS redundancy handles drive failures, and Kubernetes apps run with persistent data on your pool.