I work with Kubernetes clusters at my day job, managing bare metal vSphere setups, Talos, multi-node deployments, datacenter migrations. And then I come home to a 15-year-old Lenovo running a second-gen Intel Core i3 with 4GB of RAM.
The Hardware #
ElysianLab is a laptop that should have been retired years ago. Second-gen i3, 4GB of RAM, running Debian. The battery is dead so it's permanently plugged in, which makes it a desktop that happens to have a keyboard attached. By any reasonable measure it should not be running production workloads.
And yet.
It’s been running for 10 months. The constraint of limited RAM turned out to be a feature, as it forced me to actually think about what I was running and why, instead of just throwing containers at it.
What’s Running #
Everything runs in Docker, organized into logical stacks, each with its own docker-compose.yaml:
- Nextcloud: personal cloud storage, the thing I miss most when it’s down
- Gitea: self-hosted Git, because I don’t want all my personal repos on GitHub
- VaultWarden: Bitwarden-compatible password manager
- Linkding: bookmark manager
- Memos: quick markdown notes
- MariaDB: shared database backend for several of the above
- Redis: key-value store, shared cache
- Traefik: reverse proxy, handles routing and TLS for everything
- Glance: home dashboard showing server stats and container health
- Beszel + Beszel Agent: lightweight monitoring, tracks CPU/RAM/disk across nodes
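A single stack's compose file looks roughly like this. This is a hedged sketch, not the real file: the image tags, volume names, network name, and the `nextcloud.example.ts.net` hostname are all assumptions for illustration.

```yaml
# docker-compose.yaml for one stack (illustrative; real values differ)
services:
  nextcloud:
    image: nextcloud:stable
    restart: unless-stopped
    environment:
      MYSQL_HOST: mariadb          # shared MariaDB from another stack
      REDIS_HOST: redis            # shared Redis cache
    volumes:
      - nextcloud-data:/var/www/html
    networks:
      - backend
    labels:
      traefik.enable: "true"
      traefik.http.routers.nextcloud.rule: Host(`nextcloud.example.ts.net`)
      traefik.http.routers.nextcloud.tls: "true"

networks:
  backend:
    external: true                 # pre-created, shared with the MariaDB/Redis stack

volumes:
  nextcloud-data:
```

The external network is what lets separate compose files share one MariaDB and one Redis instead of each stack spawning its own, which matters a lot at 4GB of RAM.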
And then there’s the music stack, which deserves its own mention.
Navidrome + Lidarr + Lidatube is a fully self-hosted music pipeline. Lidarr monitors artists and manages the collection. Lidatube handles the actual downloading via yt-dlp. Navidrome sits on top of all of it as the streaming server, with a clean web UI and Subsonic API compatibility, so any Subsonic client just works.
The result is something that behaves like Spotify but the library is mine, the data stays local, and there are no ads, no algorithm, no “this song is no longer available in your region.” I’ve been building the library for a while now and it’s the service I’d miss most if the machine died. Even more than Nextcloud, if I’m honest.
Networking is handled through Tailscale: everything is reachable from anywhere without exposing ports to the public internet.
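The Traefik-plus-Tailscale combination can even handle TLS without a public certificate authority: recent Traefik versions can pull certs for tailnet hostnames from the local tailscaled. A hedged sketch of that static config (the resolver name is made up, and the actual setup here may do this differently):

```yaml
# traefik.yaml (static config) — illustrative, not the exact file in use
entryPoints:
  websecure:
    address: ":443"

certificatesResolvers:
  tailnet:
    tailscale: {}   # mint certs for *.ts.net names via the local tailscaled
```

Routers then reference `certResolver: tailnet` and get valid HTTPS on the tailnet with no ports forwarded and no Let's Encrypt DNS challenges.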
Backups are a set of shell scripts: backup-all.sh orchestrates the rest, mariadb-backup.sh and nextcloud-backup.sh handle the stateful bits, vaultwarden-backup.sh for the passwords (obviously can’t lose those).
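The scripts are mostly `docker exec` dumps piped to gzip, plus rotation so old archives don't fill the disk. Here is a sketch of the rotation piece; `prune_backups` is a hypothetical helper written for this post, not lifted from the real scripts:

```shell
#!/bin/sh
# A typical dump step might look like (illustrative command, not verified here):
#   docker exec mariadb mariadb-dump --all-databases | gzip > "backups/mariadb-$(date +%F).sql.gz"

# prune_backups DIR N: keep only the N newest files in DIR, delete the rest.
# Hypothetical helper; the real scripts' names and layout may differ.
prune_backups() {
    dir="$1"
    keep="$2"
    # List files newest-first by mtime, skip the first $keep, remove the rest.
    ls -1t "$dir" | tail -n +"$((keep + 1))" | while read -r f; do
        rm -- "$dir/$f"
    done
}
```

Simple, inspectable, and restorable with nothing but `gunzip` and a shell, which is the property that actually matters at 11pm.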
The Kubernetes Problem #
I wanted to run Kubernetes on this. Mainly because managing a dozen compose files manually gets old. Service discovery, proper health checks, rolling updates, the whole thing.
The i3 laughed at me. 4GB of RAM is not enough for a Kubernetes control plane plus actual workloads. I got a single-node cluster running once, watched it eat 2.5GB at idle, and killed it.
So the homelab stays on Docker Compose for now. It’s not glamorous but it works, and the uptime is honest.
Enter the Mini PC #
SeleneBox is a mini PC with a sixth-gen Intel Core i5.
The obvious move is to migrate everything to SeleneBox and retire the laptop. But that feels like a waste of two machines. The more interesting question is: what do you do with two nodes?
Kubernetes is the real answer, but there’s a practical problem: if SeleneBox crashes, someone has to physically press the power button to bring it back. There’s no IPMI, no remote power management. That rules it out as a reliable Kubernetes worker node. A control plane that can lose a node and never recover because nobody’s home to press a button isn’t really high-availability, it’s just optimistic.
So I’m thinking about something softer. Maybe SeleneBox runs the heavier stateful workloads (Nextcloud, the databases) while ElysianLab keeps running the lightweight services. Connected over Tailscale, with some level of awareness between them beyond just being on the same network. Not a full cluster, but not two completely independent boxes either.
Haven’t figured out the exact shape of it yet. But the constraint (unreliable power on one node) is actually an interesting design problem.
What I’d Do Differently #
Running infrastructure at home teaches you things production environments paper over with money. When Nextcloud goes down at 11pm because MariaDB ran out of connections, you fix it yourself. When Traefik drops a route after a config change, you debug it. There’s no on-call rotation. There’s no senior engineer to ask.
It’s made me a more careful operator. I write the backup scripts. I check the monitoring dashboard. I think about what happens when a container restarts unexpectedly.
The 15-year-old laptop was the right machine for this. A more capable box would have let me be lazier.