Hacker News
Ask HN: Please recommend how to manage personal servers
45 points by scott01 8 months ago | 66 comments
Hey guys,

I'm not an infrastructure engineer, nor do I work in web, but I'm pretty comfortable with Linux. I realised I need to spin up a couple of home servers and VPSs to simplify and localise my digital life. I have an RPi and an x86 NAS in my home network, and a VPS in the cloud. They run different hardware and distros, so I have to set them up a bit differently, which is a pain in itself, but what makes matters worse is when I mess something up really badly, or when something else essentially forces me to reinstall.

I tried Ansible and found it hard to use. E.g. at some point I decided to redeploy my server to a different VPS type in the same cloud, but I had to patch my Ansible scripts to do so, even though it was the same Rocky Linux distro (and it failed at some random docker compose networking config, IIRC). I guess Ansible scripts aren't reproducible and require constant work to keep them working. But I very much like them vs just SSH-ing into servers.

That leads to my question. Is there anything I can do to write config once and just deploy it more or less reliably? NixOS looks interesting, but learning another programming language just for this feels a bit too much for me. Or maybe there's another way to do stuff like this which I'm overlooking, as I'm in a different industry?




Docker-compose and be done with it. Kubernetes and NixOS are great and more powerful than docker compose, but the learning curve is steeper. Feel free to graduate to k8s or NixOS once you are up and running with compose. Docker compose has the most tutorials and YouTube videos, the widest support among projects you'll want to self-host, and the most people who can readily advise you on troubleshooting. Check out r/selfhosted to get a feeling.

You can use portainer if you need a GUI but command line is not that complicated if you are comfortable with CLIs.
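For a feel of it, a compose file for a small stack might look like this (the images, tags, and paths are illustrative, not a recommendation). Keeping each service's volumes at well-known host paths also makes backups a simple tar of those directories:

```yaml
# docker-compose.yml -- hypothetical two-service stack
services:
  pihole:
    image: pihole/pihole:2024.07.0    # pin a tag rather than :latest for reproducibility
    ports:
      - "53:53/udp"
      - "8080:80"
    volumes:
      - ./pihole/etc:/etc/pihole      # config lives next to the compose file
    restart: unless-stopped

  jellyfin:
    image: jellyfin/jellyfin:10.9.7
    ports:
      - "8096:8096"
    volumes:
      - ./jellyfin/config:/config
      - /mnt/media:/media:ro
    restart: unless-stopped
```

Bring the whole thing up with `docker compose up -d`; the same file works on the RPi, the NAS, and the VPS as long as docker is installed.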


I completely agree.

I did a disaster recovery test of my services (all on docker) from scratch and without documentation.

I started by downloading the Debian ISO and went from there.

It took me about 60 minutes to be up and running, this includes DHCP from pihole and getting everything from borg backups.

I took notes to be faster/better next time but I lost them somehow :-|

Docker everything and you will be fine.

I used docker compose for years and recently moved to portainer because it is easier to keep track of the docker compose files and you have a lot of things readily available


I have “runbooks” for all my servers and software in Obsidian.

Every time I bootstrap it, I go through the instructions I wrote and improve/update if necessary.

Much simpler than Ansible


My last test was pre-Obsidian time (now I put everything in there). I will have to run one more time and, this time, make sure the steps are saved somewhere.

Since I basically install the OS, configure networking, install docker, install borg, recover from backup, start portainer, and start the stacks - it would be overkill to use Ansible for that (not to mention that I am not a big fan of Ansible after having tried for a year to make it work reliably on lossy/unstable networks -- IMHO salt is much better)


+1 to all these recommendations, and the ordering in terms of (time/money) investment, inclination, and synergy with projects at work. Compose is just the right fit for a small to medium setup and a great place to start.


Opinionated take: "a couple of home servers" are probably the wrong solution to your problem. Almost everything you do as a private person works better, faster, more reliably, and with much less time investment if you use a single machine for it.


This.

Also, why run different OSes on your machines? And why the need to reinstall stuff?

I just run Arch on everything, and I haven't had to reinstall a machine in many years.


I can't install Arch on OVH Cloud. It's by far my favourite distro, tbh.


There is one solution I've seen being used to solve this issue.

It is to overwrite the current Linux OS with the one you want. I came across this idea here [0]. I researched and got Alpine Linux running on Hetzner (even though they don't support custom images) using a similar method [1].

This seems to be the guide to do the same with Arch Linux [2], I'm not sure though.

Once you do create a successful Arch image on OVH, take a snapshot of the machine before installing anything else, in case you want to start from a fresh Arch image in the future.

[0] https://github.com/elitak/nixos-infect

[1] https://wiki.alpinelinux.org/wiki/Replacing_non-Alpine_Linux...

[2] https://wiki.archlinux.org/title/Install_Arch_Linux_from_exi...


Only reason I can think of is running native ZFS on the BSDs.


The ZFS support in Ubuntu has been totally fine for me on my home server. (Boots off of a different ext4 ssd and I have two big spinning disks for my zfs pool)


Totally.

And whilst I've used most provisioning tools (quite liking packer and terraform), for my own stuff I have a file with notes and snippets. Follow that file and within less than 10 minutes I've got a working server with Postgres, Nginx, and LetsEncrypt, and it's ready for Git push-to-deploy.

Simplicity is what you need, for as long as you can get away with it. Simplicity and backups.


Do you happen to keep your script public? I would enjoy taking a look at it to learn something.


And this single machine could run Qubes OS, which runs many VMs with a great UI and strong isolation. My daily driver, can't recommend it enough.


Yeah I use ansible (in pull mode) for my home automation for just such a purpose. I have the router open up port 443 to my one box, then have haproxy read the header for the incoming traffic and route it to the backend on the box whether a docker container or a local service. Foundry VTT, jellyfin, etc.
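A fragment in the spirit of that setup might look like this (hostnames, certs, and ports are made up):

```
# /etc/haproxy/haproxy.cfg (fragment)
frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/
    use_backend jellyfin  if { hdr(host) -i jellyfin.example.com }
    use_backend foundry   if { hdr(host) -i vtt.example.com }
    default_backend jellyfin

backend jellyfin
    server local 127.0.0.1:8096    # docker container or local service

backend foundry
    server local 127.0.0.1:30000
```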

All from Ansible, all to a private github that I just SSH into the machine long enough to run ansible pull.

That way, when I run into errors, it's just rinse and repeat to iterate my ansible playbook code until it is perfectly fit for this exact situation. Is my code maintainable and enterprisey like my work ansible? NOPE. But it's okay, it's my private code that nobody but me uses. I even do direct commits on master because I can, it's my tiny naughty guilty pleasure.
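For anyone curious, a minimal pull-mode layout might look like this (the repo URL and task are made up). ansible-pull clones the repo and, absent a hostname-specific playbook, falls back to running local.yml from its root:

```yaml
# local.yml at the root of the repo
- hosts: localhost
  connection: local
  become: true
  tasks:
    - name: Ensure haproxy is installed
      ansible.builtin.package:
        name: haproxy
        state: present
```

Then on the box: `ansible-pull -U git@github.com:you/homelab.git`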


Are you routing traffic from a VPS to your home? I've considered a setup like this where my home PC would be permanently VPN-connected to the VPS, but I think the problem was making it secure, e.g. creating a virtual network at home to separate my personal stuff from what's potentially accessible from the internet.


Use Tailscale for this


When you follow an online guide, dump a copy of the text on the page and a link in a single text file called “readme.txt”.

When you create backups of the state of the machine (even if the backups are just tarballs sent over ssh to the other machines), include a copy of that file.
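A sketch of that habit as a script, using scratch directories instead of real paths (everything here is illustrative):

```shell
#!/bin/sh
# Bundle a service's config plus its readme.txt into one dated tarball,
# then restore into a fresh directory to prove the notes came along.
set -eu

SRC=$(mktemp -d)                      # stand-in for e.g. /opt/myservice
mkdir -p "$SRC/config"
echo "example setting" > "$SRC/config/app.conf"
echo "followed guide: https://example.com/guide" > "$SRC/readme.txt"

OUT=$(mktemp -d)
tar -C "$SRC" -czf "$OUT/backup-$(date +%F).tar.gz" .

RESTORE=$(mktemp -d)
tar -C "$RESTORE" -xzf "$OUT"/backup-*.tar.gz
cat "$RESTORE/readme.txt"
```

Shipping it to another machine is then just the same tar piped over ssh, e.g. `tar -C /opt/myservice -czf - . | ssh nas 'cat > backup.tar.gz'`.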

Learning another DSL or desired-state config system is going to be a pain, because they lack a lot of things programmers like: breakpoints, good LSPs, and, crucially, reproducible environments.

Worse still, the DSLs shift around. I know cfengine, puppet, chef, salt and ansible, because they keep getting replaced with cleaner abstractions over time, or the community's eye falls more kindly on a newer one.

Do the simple thing: document what you do to your machines. It's not sexy, but you don't have to unlearn patterns or spend time fixing your environment just to make your docs (which are now code) work automatically.


Check out Proxmox + https://tteck.github.io/Proxmox/ + lxc container snapshots on the NAS and set up Proxmox backup server on the Pi. I find such a setup to be "all benefit, no giving up anything", contrary to NixOS.


Yeah, I love that stack. I then rsync those backups to backblaze. No issues, single machine, no fucking around with stuff like kubernetes (and I also got rid of docker, so that annoyance is also gone)


I was going to suggest NixOS. It's a bit of a climb to learn it, but having a modular setup that works with all my devices is absolutely a killer-app. My desktop, laptop, VPS and Raspberry Pi all share the same terminal configuration from the same Git repo.

Waxing poetic about NixOS on HN is a horse well-beaten. Just try it, if you've got an extra machine lying around and a few hours to spare. I think it's a great halfway option for people who want complex server composition without the full Kubernetes buy-in.


I strongly agree. NixOS is a league above things like Ansible that are highly mutable. Also getting started with NixOS really isn't that bad. Just read the wiki and set a few options. https://search.nixos.org/options

The hard part about NixOS is when you need to package something yourself. That can have a bit of a learning curve but since nixpkgs is the largest package repository you rarely need to do it.

If you are running custom stuff you can always start by just using NixOS to run a Docker container. At least that will be a reproducible OS, and if you pin a specific image it will be fully reproducible. Then when you want to, you can dip your toes into native Nix packages. (It really isn't that bad, just an extra thing to learn that you can defer at the start.)
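A sketch of that approach; the image, tag, and paths are an arbitrary example, not a recommendation:

```nix
# configuration.nix fragment -- NixOS runs a pinned container under systemd
virtualisation.oci-containers = {
  backend = "docker";
  containers.uptime-kuma = {
    image = "louislam/uptime-kuma:1.23.13";   # pinned tag, so rebuilds are stable
    ports = [ "127.0.0.1:3001:3001" ];
    volumes = [ "/var/lib/uptime-kuma:/app/data" ];
  };
};
```

`nixos-rebuild switch` then creates and supervises the container like any other service.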


Agreed fully! Just got finished packaging up a few desktop apps for the first time on NixOS and it wasn't as bad as I expected.

My biggest issue with Nix (as is everyone's, I hear) is the mediocre documentation. It takes a lot of wheel-spinning to get up to speed, even with PKGBUILD or makefile knowledge. A lot of that difficulty curve could be remedied with better, flake-focused packaging tutorials. Surely though, that too will be Coming Soon™.


>Just read the wiki

Note: there are two wikis now.


I will echo you. I use nix and kubernetes and have achieved great success. I have my configuration as nixos modules. Making a host a kubernetes controller is just a false -> true flip.

All my images are built, stored, and served via tftp and a remote nix store for the servers to boot. Very easy to build the system and then do atomic upgrades. Best part is: rollbacks, if the config is bad, are easy and foolproof.

I would never use mutable distributions in production. Too scary.

My nix boxes need very little minding.


Oh and if you don't want to go full nixos, you can just install whatever the base image for your server from the provider and run nix as a separate install for your applications.


This is quite interesting, thanks for recommendation, I'll investigate this. Are you aware of any edge cases or inconveniences of this approach I can keep in mind while experimenting?


Well, nixos is really great if your packages and options are already thought through by the maintainers, although you might have to read through nixpkgs for options. It's more painful if you need X compiled with Y enabled and it needs to rebuild (which might be slow on a VPS), and if your package isn't included at all, it can be a pain to add a new one and get all the dependencies included.

Once it works though, it basically never breaks - you can always pick an older working version.

Oh and it eats disk space like there's no tomorrow if you have multiple generations retained.


Late to the party but this is my configuration that I use for my home servers:

https://github.com/eh8/chenglab

I was a complete Nix beginner three months ago and thought Nix was terribly complicated and unnecessary. Glad to say I was wholly wrong and the transition was not that bad.

NixOS lets me provision my servers from scratch to a functional file/media/home automation server in about 15 minutes, using an entirely automated Nix installation process. It's a beautiful OS for servers.


Most cloud providers don't provide NixOS, e.g. I use OVH Cloud. IIUC, there is a way to run a script that will delete my OS and install Nix, but I think there might be side effects like cloud-specific network setup which will probably be too much for me :)

Edit: But NixOS looks really good, I have to agree. I guess 'immutability' will let me just install and forget about it.


> IIUC, there is a way to run a script that will delete my OS and install Nix

There is! It's a fucking nightmare: https://xiaoyehua.dev/posts/nixos-on-oracle-arm-machine

It works though, and thanks to the genius that wrote this script I've got a 4 core 24gb Oracle Always Free instance loaded up with NixOS at all hours of the day. I feel spoiled.


I think they're referring to this implementation: https://github.com/nix-community/nixos-anywhere

but cleverca22's work is solid as well

there's also the earlier https://github.com/elitak/nixos-infect

and the built-in NIXOS_LUSTRATE: https://nixos.org/manual/nixos/stable/#sec-installing-from-o...

I'd go with nixos-anywhere, personally


It looks to me like you've been poisoned by the modern scaling approach. You're not going to run N web servers and M load balancers and P application servers, deploying them automatically from a CI system. You're going to run nginx and six single applications behind it and one database, right? So when you adopt ansible or puppet or nix to run a config, you are adding complexity, not simplifying your life. Even docker may be overkill.

The points to consider:

- architecture. You have three boxes. One of them has lots of storage. One of them is cheap. One costs you monthly. I don't know that this is what you actually want. You probably need a main box that can do anything, a backup facility for that, and a proxy to expose services to the outside world.

- a common operating system on all nodes. I like Debian stable. Not everybody does. Being happy with it is more important than being the "best". But you should only have one.

- automatic backup of config and data. Snapshots are nice.

- if you can't have perfect snapshots, you can at least check your config into git. Use etckeeper.

- set up a common approach to running things. Make everything grab TLS certs from Lets Encrypt through nginx. Make a new user for each service. Make a new database user for each service that needs that, make a new PHP worker pool, whatever. Be consistent.

- document your policy and your exceptions. This can be a text file or a wiki or something weird.

- know where you are getting things, how to upgrade them, and how to get announcements of available updates.
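The "check your config into git" bullet, sketched with plain git on a scratch directory (this is roughly what etckeeper automates for the real /etc; the file and settings are made up):

```shell
#!/bin/sh
# Keep config under git so every change is diffable and recoverable.
set -eu

ETC=$(mktemp -d)                     # stand-in for /etc
echo "PermitRootLogin no" > "$ETC/sshd_config"

git -C "$ETC" init -q
git -C "$ETC" add -A
git -C "$ETC" -c user.email=me@example.com -c user.name=me \
    commit -qm "initial state"

# A later edit shows up as an ordinary diff:
echo "PasswordAuthentication no" >> "$ETC/sshd_config"
git -C "$ETC" diff --stat
```

On a real Debian-family box, `apt install etckeeper && etckeeper init` sets this up for /etc and auto-commits on every package operation.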


I like the pragmatism in this comment, I think I can unify an RPi with a NAS. My concern was keeping important data backups separate from other experiments, though maybe I'm overthinking it.

Thanks for etckeeper, it looks really interesting, I didn't know about it.


For two publicly-reachable services I personally run, I decided the way that involved the least work and was most likely to be low-drama -- initially, and ongoing -- was to just put them on separate $5/month Linodes running Debian Stable.

My personal wiki has very short notes on how to rebuild each from scratch. (Pretty much: push Linode Web site buttons to make a new Debian Stable instance, get a shell and do this `apt install` command line, and edit config file like so). Data gets pushed/pulled via simple shell scripts run on laptop (usually using SSH and rsync).

Separate from those services, my GPU server is a separate box at home, frequently changing at a low level, so blasting it away entirely a few times has made pragmatic sense, and I'm glad it's not sharing config complexity with any other resources. Setting up the large ML stacks, down to proprietary drivers, is sometimes very experimental at first, and I need to do it manually first anyway; I'm not ready to make a Dockerfile or set up passthrough for containers, and after the experiment works, there's no reason to. Were I making a production setup, or something reproducible by others, I'd do more after the initial experimental setup.

Wrangling much more complex layers atop (e.g., K8s, Docker, Terraform, Ansible, NixOS, etc.) sometimes means more things that can go wrong, and sometimes more time spent learning someone else's bureaucracy. Most of tech work now is learning piles of other people's bureaucracy. That makes sense for businesses that actually need that complexity, and for people who just want to copy&paste cargo-cult command lines and hopefully it works, and for people who want to homelab it for experience (which is perfectly valid). But the way I run my important services and my experimental box seemed to be easier overall.

Of course, for curiosity/resume/masochism purposes, I do have a separate K8s cluster at home, which runs nothing important, and which I can obliterate and change and experiment with at will, without being encumbered by it running services I actually need.


The hard part here is idempotency. Ansible is great for a programmer because the learning is fun, and you just have to spar with your machine to get good.

But for a non-programmer, it's understandable that you don't want to be bothered with the inner workings of your OS and with maintaining Ansible script idempotency.

And with every piece of software you add to your server, the idempotency task grows more difficult.

My honest opinion? Tolerate the learning curve for docker-compose. Each application you need can be managed and tweaked in isolation. Troubleshooting "works on my machine" problems will cost you more time in the long-run. You can't anticipate all the weird interactions between your programs and the os. Being able to nuke the setup and rebuild from scratch is your most valuable tool.

- thin base os (install just enough to run docker-compose)

- maintain images for each of your apps you need.

- mount the essential volumes of each image to well known location on your hard drive to make manual backups easy.


Kubernetes. No, seriously.

It's an orchestration tool that's common in the real world, and also notoriously hard to learn and "get right". Downtime due to obvious, important mistakes is common, and it leaves both engineers and lower management wondering if it was a good idea to adopt.

The thing is, in your home environment, you have no (or hopefully significantly lower) uptime requirements. If you break the entire cluster for a few days, because you ran into a network problem or upgraded it wrong, who cares? That's a potentially hundred-thousand-dollar learning opportunity at a large organization, for just the cost of electricity in your home.

For what it's worth, I run Kubernetes both in my day job and in my home lab. I've learned more about networking from running my own cluster on bare metal (HP DL360 boxes) than I have from ten years of managing infrastructure for bigcorp's, and it also gives me a safe place to play with functionality that I might want to adopt at work.


I've always struggled with how to get started. I get lost in picking which CNI(?) I need and how to manage stateful storage.

Do you have any pointers towards how to get going?


I agree that k3s is a great way to get started. You can easily bring new nodes online and upgrade them.

If you want your Services to behave a little bit more like physical boxes or VMs, where each service gets its own IP address (instead of using an ingress controller or service mesh, which are different beasts entirely), have a look at MetalLB. MetalLB allows associating an IP address on your home network with a Service, which is more or less exactly what you'd do with a VM or a Raspberry Pi.
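A minimal MetalLB setup along those lines might look like this (the address range and app name are made up; carve the range out of your router's DHCP pool first):

```yaml
# metallb-pool.yaml -- hand MetalLB a slice of the home LAN
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: home-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: home-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - home-pool
---
# Any Service of type LoadBalancer now gets an IP from the pool
apiVersion: v1
kind: Service
metadata:
  name: jellyfin
spec:
  type: LoadBalancer
  selector:
    app: jellyfin
  ports:
    - port: 8096
```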


Look up k3s and start there


This. K8s in the small is easy to set up and a great way to abstract your setup differences away from your apps.


+1 to this. I run Ubuntu 22.04 with microk8s. Keep all my yaml files in a local git repo, figured out how to hook up my NAS to provide storage via nfs.

It's definitely gone down a few times, but I've learned a TON tinkering with it. super easy to spin up a new hobby project, a nice web UI for seeing what the heck is going on.

I've completely borked it a couple times and survived one micro pc migration. Can't recommend it more


Depends on what you are doing. But you can take the path of app / os images.

My home network is just openwrt, and I use make plus a few scripts and imagebuilder to create images that I flash, including configs.

For rpi I actually liked cloud-init, but it is too flaky for complicated stuff. In that case I would nowadays rather dockerize it and use systemd + podman or a kubelet in standalone mode. Secrets on a mount point. Make it so that the boot partition of the rpi is the main config folder. That way you can locally flash a golden image.

Anything that mutates a server is brittle as the starting point is a moving target. Building images (or fancy tarballs like docker) makes it way more likely that you get consistent results.


I use a single old PC at home. Put debian on it, install docker and unattended upgrades. Create docker compose files for all services. Make sure to use 'latest' everywhere and run watchtower to update all images regularly. While i expose a select few services to the internet I connect to most of them via VPN. In the local network I'm using pihole for local DNS and since I use a wildcard let's encrypt certificate I have ssl for everything which makes it nice to use.
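The watchtower piece of that setup, sketched as one more compose service (the interval and flags shown are one reasonable choice, not the only one):

```yaml
# docker-compose.yml fragment -- watchtower keeps the :latest containers fresh
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # needed to manage siblings
    command: --cleanup --interval 86400             # check daily, prune old images
    restart: unless-stopped
```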

Haven't had to touch the system more than once a year or so when I got an alert that unattended upgrades couldn't install something.


Agreed (including the old PC).

The only thing I do not upgrade automatically is Home Assistant (I pin the major tag; the minors are automatic). I had one failed update and it created tension at home: lights stopped working, people stuck in fridges, aliens landing and whatnot. It did not help that I was 1500 km away.


Maybe containerize most things since distros change a lot between releases. That way you can keep your distro on the latest version and even switch distros without too much impact.


what does "your distro" mean here? the distro in the container, or the container host, or the client host?

containers are just a packaging/isolation technique. you can keep using an obsolete stack in a container, regardless of what changes outside it. rebuilding containers from scratch is certainly not easier than rebuilding an install via ansible.


> what does "your distro" mean here? the distro in the container, or the container host, or the client host?

Container host.

> rebuilding containers from scratch is certainly not easier than rebuilding an install via ansible.

How so? The OP is giving an example of ansible scripts breaking because of OS version change, and having to fix them. With containers, the container OS is very slim, so fewer things to break with upgrades, and you can upgrade the host OS easily since docker is quite stable across OS versions.


I use nixos on my laptop, but never learnt enough to make it my everywhere-OS.

Might I suggest a different route that I took - use the base image from whatever vps and modify as little as possible of it. Then run everything else in docker.

That's how I migrated my placeholder website and my gogs install across to a new provider: I copied my data across and ran the original commands to launch docker containers that I used on the first server. These are now happily running on the new server.


Ansible has been better than Chef and Puppet for small environments. I looked at cdist but it wasn't faster for my use case. Also, Ansible executes rules in order, unlike Chef and Puppet, which helps reduce your state space. If you incrementally maintain servers, by definition your only tested configuration is the one you last executed. The way to improve reliability is to start from the same (container) state, so the only maintenance ought to be changes (OS upgrades have been it for me). Ansible across 1 server and 2 desktops with no changes takes ~3m17s and I wish it was way faster. As of now I manage by tagging things and only running a subset of them. Consider standardizing on a single distro (I use Debian and it's served me extremely well over the years). +1 on centralizing, too, till you have a use case that requires more servers. servers <= containers. Simplify. Kubernetes is complicated.


It'll take some time to set up, but NixOS with nixops and maybe disko can do a lot depending on your use case.

I just use NixOS flakes with a syncthing'ed flake repo across 5 hosts (desktop, laptop, a media device (NUC7), a home server and a VPS). It has its problems, but I'll iron them out eventually.

As always start small...


I can't recommend nixops anymore. It is basically unmaintained. But I switched over to https://colmena.cli.rs/ and it is basically a drop-in replacement if you are just managing machines over SSH. (It lacks some of the tools to provision resources like VMs and DNS records)


NixOS or Guix System are the less archaic way civilized people (Lisp vignette here) have in 2024 to manage their digital life. Learning Nix is a pain, but learning enough to be able to run a PERSONAL infra isn't so challenging.

Trying to replicate "the cloud" at home is a nice way to tie your own genitals, hang some loads, then start jumping.

That said: do not use a Raspi or NAS; assemble a small desktop. It can be a NAS, a router, a server for any kind of service, and it's just common commodity hw: the best supported in the FLOSS world, the quickest to replace, the cheapest for spare parts. Desktop iron today does not eat that much electricity and has enough power for most common needs. And using NixOS or Guix System you do not need to run a gazillion of stuff just to show a damn hello world, so you can milk your hw as needed.


Yeah, my 'NAS' is going to be a small N100 or 5700U PC, or something along these lines. Have you had any luck running NixOS or Guix on a VPS?


Never tried; I have static IPv4 and v6 so I feel no need for a VPS, but various guides exist for OVH and Hetzner, so I imagine it's not that hard, at least for NixOS.


>I realised I need to spin up a couple of home servers and VPSs to simplify

Presumably you're trying to replace some paid services with local self-hosting? Consider that paying for a service _is_ the simpler option.


You can't avoid learning another programming language if you want to describe your setup in such a way that a computer can recreate it.

But you can easily fall into the trap of having a bazillion underspecified informal languages if you try cobbling together bash scripts, dockerfiles, and whatever other thing you need ad-hoc.

Nix is probably a good investment in that light. My personal concern is that it moves rather fast, and some things should run themselves and stay secure without being touched more than once a year.


Take a look at Vagrant! https://www.vagrantup.com/ In my admittedly limited understanding, I believe it offers something closer to Nix-like reproducible rather than merely repeatable deployments. Like Nix, I believe you can also hash-verify each VM to be confident you have the same image.


I used to have multiple RPis and different physical servers (old PCs). I tried docker and other things because I thought I was cool. Until I decided to just use one modern PC (actually an off-lease work computer) and run docker for each of my server stacks. I can't tell you how much easier my life has become when it comes to admin.


My home NAS/server just runs Unraid. It’s drop dead simple and works.

For cloud/vps stuff I use a bunch of docker-compose files + configs that do pretty much everything. The underlying os is usually Debian because it’s what I’m used to and it doesn’t break stuff by going too fast.


I wouldn't go that route and use a VPS for personal stuff, ever. Or a cloud provider, for that matter.

Find a hosting provider that offers you a shell login and manages close to all the services you need, including backups, security updates and so on.

That should massively simplify your setup.


Funny you should mention it: I explored this route and found one 'normal' web host that provides an SVN repo. But in my case I need Git hosting to collaborate with friends on private stuff (I don't like the idea of my code being fed into an LLM by GitHub) and a VPN for travel, so a VPS is a requirement, unfortunately.


this might help with the VPN needs: https://github.com/lattice0/true_libopenvpn3 git also doesn't need a special server.


I'd recommend just using cloud-init.

If you're running a server in the cloud it's already available.

It takes no effort to set up yourself .. and it's just a basic script that is run that sets up a server exactly how you want it.
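For reference, a minimal user-data file might look like this (the package names, user, and key are illustrative; exact package names vary by distro):

```yaml
#cloud-config
# First boot: install docker, create a user, start the daemon.
package_update: true
packages:
  - docker.io
users:
  - name: me
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...example me@laptop
    sudo: ALL=(ALL) NOPASSWD:ALL
runcmd:
  - systemctl enable --now docker
```

Most providers accept this as "user data" when creating the instance, so a rebuilt VPS comes up already configured.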


cloud-init is nice if your VM is throwaway. Is that feasible in a home-setting?


  cd /
  dpkg --get-selections > installed_packages
  git init
  git add installed_packages /etc /home/*/.* /root /whateverneeded
  git commit -m "system init"

on a new system just copy over the .git folder, reinstall the packages (dpkg --set-selections < installed_packages && apt-get dselect-upgrade), then git checkout -- . and reboot

that's all :D

or there is chef, puppet



