
Very cool read. Not least because that setup is very close to what I am doing right now, and the infrastructure I have (DS218+ and NUC represent).

Please help me understand your choices. You went for reproducible VMs on your fourth, custom machine. The three NUCs are compute-only (stateless?) nodes for a k8s cluster.

What kept you from making your custom machine a node as well? That would get you Infrastructure-as-Code too, since your Nextcloud (which I'm setting up right now as well) and whatnot would be defined in code, using containers.

By the way, since my set-up is a single NUC, I ended up going with docker-compose. That has just plainly worked. Single-node k8s with outside access (I only need HTTP/HTTPS for Nextcloud and the like, with reverse-proxying through subdomains for more apps, like CodiMD) didn't prove friendly to me (not my field of expertise). I had tried multiple CNIs, Traefik and Contour as IngressControllers, and MetalLB as a LoadBalancer, but could not get it to work.
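For what it's worth, a minimal sketch of that reverse-proxied single-node compose setup might look like this (hostnames, ports, and volume paths are hypothetical, and TLS/ACME config is omitted for brevity):

```yaml
version: "3"
services:
  traefik:
    image: traefik:v2.4
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      # lets Traefik discover containers via labels
      - /var/run/docker.sock:/var/run/docker.sock:ro
  nextcloud:
    image: nextcloud
    labels:
      # route a subdomain to this container
      - traefik.http.routers.nextcloud.rule=Host(`cloud.example.com`)
      - traefik.http.routers.nextcloud.entrypoints=web
    volumes:
      - ./nextcloud:/var/www/html
```

Each additional app (CodiMD etc.) is then just another service with its own `Host(...)` label.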

And even if it ended up working, it would have to be a stateful k8s cluster, and I imagine that adds a whole layer of problems, too. Do you then go for NFS to the Synology NAS? Seems like the best idea, but that will be much slower than the nodes' local SSDs. Do the SSDs then sit idly, without work to do? Seems like a waste. And if the deployments/PVCs/PVs are pinned (via taints or node affinity) such that they are bound to specific nodes, you don't have much of a cluster, more like a spread-out docker-compose (which still beats docker-compose, I guess, if it gets running).
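For reference, pinning a local-SSD volume to one node looks roughly like this (a sketch with hypothetical names, illustrating why such PVs effectively tie a workload to that machine):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ssd-pv-nuc1
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-ssd
  local:
    path: /mnt/ssd/data          # directory on that node's SSD
  nodeAffinity:                  # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["nuc1"]   # pod using this PV always lands here
```

Any pod bound to this PV can only ever be scheduled on `nuc1`, which is exactly the "spread-out docker-compose" effect described above.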

These are the questions I couldn't answer, or whose answer drove me to docker-compose.

Since k8s has so much more steam than docker-compose, I imagine in 1-2 years, k8s will be the go-to for single-node homelabs as well. What do you think?



Hey, thanks for all the good ideas.

About the custom "VM-only" machine: currently I'm a little lacking in my "k8s ops" skills. So, until I am more comfortable managing the k8s environment, I have decided to continue using VMs for my "production" workloads. Until then, the K8s cluster will be treated more like an experimental zone where I can worry less about breaking things.

As for the K8s storage options -- I don't really know! Haven't quite figured that one out. Like I mentioned in the blog post, I haven't really placed any serious workload on the K8s cluster yet. However, I have heard good things about OpenEBS (https://openebs.io/) and am considering that as a potential option in order to make use of the local SSDs on each NUC.

For single-node K8s, there are lots of good options. I haven't used it personally, but I know that MicroK8s (https://microk8s.io/) works well even in a single-node environment. It might be worth checking out.
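If anyone wants to try it, single-node MicroK8s bring-up is roughly this (addon names can vary a bit between releases):

```shell
# install and start a single-node cluster
sudo snap install microk8s --classic

# enable the addons a small homelab typically needs
microk8s enable dns storage ingress

# verify the node is Ready
microk8s kubectl get nodes
```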


I'm running around 90% of the things in my homelab in a kubernetes cluster. I love it, but storage has been the biggest challenge. From my experience, OpenEBS and Rook both seem to be good options when you have multiple physical machines with disks on each one. Portworx is also very good, but the licensing costs are way too high.

I'm running my kubernetes nodes as VMs, and I didn't want to have to back up the virtual machine disks. I just wanted to back up my FreeNAS server, so I went with trying to use NFS.

I wanted to use the NFS client provisioner, since I thought I'd want to add and remove volumes automatically. However, that makes redeploying onto a new cluster from scratch hard. Eventually I tore it out and just mounted each NFS volume individually, because it turns out I actually only need a handful of volumes.
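For anyone curious, statically mounting an NFS share like that is just a hand-written PV/PVC pair (server IP, export path, and sizes here are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-nfs
spec:
  capacity:
    storage: 500Gi
  accessModes: ["ReadWriteMany"]
  nfs:
    server: 192.168.1.10        # the NAS/FreeNAS box
    path: /mnt/tank/media       # exported share
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-nfs
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: ""          # bind to the static PV, not a provisioner
  volumeName: media-nfs
  resources:
    requests:
      storage: 500Gi
```

One such pair per share, and redeploying onto a fresh cluster is just re-applying these manifests.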


Maybe useful information, I'm not sure: I was at Dell EMC World in 2017 and they showed off this really cool k8s storage tech called RexRay, which was obviously being demo'd with ScaleIO in mind.

However, I see that it has multiple backends (including Ceph), so it might be worth looking into, as it seemed very fault tolerant and fast. :)

https://github.com/rexray/rexray


For simple storage, Rancher Longhorn is also an option.


+1 for Longhorn. I started using it in my homelab as an easy replicated/distributed storage.

It was a lot easier to set up and had more of an "it just works" out-of-the-box experience compared to OpenEBS/Rook/etc.


You can use an NFS provisioner so volumes are set up automatically over NFS, no need to set up taints.

https://github.com/kubernetes-incubator/external-storage/tre...

MetalLB is dead simple to set up, and you can hook it up with a DNS server with TSIG to get automatic name entries for your ingress hosts.
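For reference, MetalLB's Layer 2 setup of that era was just a ConfigMap like the one below (address range hypothetical; newer MetalLB releases configure this through CRDs such as `IPAddressPool` instead):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # LAN IPs MetalLB may hand out to LoadBalancer Services
      - 192.168.1.240-192.168.1.250
```

Any `Service` of type `LoadBalancer` then automatically gets an IP from that pool.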


> k8s will be the go-to for single-node homelabs as well. What do you think?

Not gonna happen, because it takes 3 master nodes to have a quorum plus 1 separate worker node. The requirements are also pretty high, at 2 GB of memory per virtual machine. I don't think people have 4 extra machines or 10 GB of free memory to run VMs.


> Not gonna happen because it takes 3 master nodes to have a quorum and 1 separate worker node.

You don't, strictly speaking, need separate worker nodes. You just need to untaint the masters.
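Concretely, for clusters of that vintage it's a one-liner (newer Kubernetes releases use the `node-role.kubernetes.io/control-plane` taint key instead):

```shell
# remove the master taint so regular pods can schedule on the master
kubectl taint nodes --all node-role.kubernetes.io/master-
```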

Or you can just run the masters on raspberry pis - under $200 for a properly redundant cluster. (Masters and workers do not have to share a CPU architecture)


Ok but I'm still hung up on the 2GB.

Management processes shouldn't take 2GB. What are they doing in there?


Kubernetes is actually using the memory. I just checked one master node on a fresh cluster running nothing: it is sitting at 950 MB used, with another 800 MB in buffers. Usage will jump quite a bit higher when running anything.

Very important thing to know about memory: Kubernetes nodes have no swap. Kubernetes will refuse to install if the system has swap enabled; you've gotta remove it.

This means nodes better have a safe margin of memory, because there is no swap to use when under memory pressure (things will crash). Hence the minimum requirement of 2 GB.
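For reference, disabling swap before installing typically looks like this (the `sed` line assumes a conventional `/etc/fstab` layout; kubelet also has a `--fail-swap-on=false` escape hatch, though running with swap was unsupported at the time):

```shell
# turn off swap now
sudo swapoff -a

# comment out swap entries so it stays off after reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```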

I tried to run complete clusters with 5+ machines in VMware with as little memory as possible, because I don't have that much RAM on my desktop, and all I got was virtual machines crashing under memory pressure.


You could get older server hardware pretty cheap, and just run everything on a single physical machine, no?

For example, I picked up about 96 GB of DDR3 ECC RAM for around 75 euros. A quick check on eBay, and the same amount of DDR4 is selling for at least _twice_ as much. I imagine it's pretty economical to buy this older hardware and assemble a single beefy server, instead of buying multiple physical machines.

The added benefit is that this older hardware doesn't end up in a landfill, and even though older CPUs generally consume more power than their current gen equivalent, I reckon a single machine would consume about the same, or less, than multiple NUCs (I have no source to back that up though, it's just my assumption).


Server hardware is designed to run in server environments, not home environments. Maybe if you had a basement or something, but the noise on those fans is going to drive you nuts. Off the shelf home desktop hardware is better, but amazingly inconsistent. The NUC platform isn’t a bad way to go if you have the cash.


You're absolutely correct about the noise. Those 1U servers sound like a jet taking off pretty much the entire time they run. However, there are plenty of server motherboards in ATX (or E-ATX) form factors, so they fit in regular cases with little to no modification. (Though I just keep mine in my attic.)

I agree the NUC is a great platform, but if you can spend less cash, get more bang for your buck, and perhaps gain a platform with ECC memory (not sure if the NUC supports ECC; I'm assuming it doesn't), then I think the latter is what most people would go for (or well, at least what I would go for :p).

We're also talking about home _servers_, so it doesn't seem that odd to me to use actual server hardware. The homelab[0] subreddit has a bunch of folks running actual server hardware for example.

[0]: https://old.reddit.com/r/homelab/


Or drive your family nuts, who will take you with them.

I had plans to build a noise isolated data closet in the basement (tied into the furnace air return) but I never ended up with the right sort of basement.


A closet or the garage also works. I ran a couple of Dell 1Us in a closet for years; you could barely hear them through the (full-size, non-sliding) door.


Sure, but desktop hardware is also likely to fail sooner, as it's not designed to be run like a server. So there's a trade-off.


I use Dell/HP/Lenovo workstations that are off lease for this very purpose. Dual CPU, large RAM capacity, and quiet.


He wants the fun homelab, not savings. His particular NUCs are ~$600 apiece, and the NAS was a few $k as well. You can definitely get more compute on a single node, but I think you'd be missing the point.


I finished setting up my homelab Kubernetes setup this week. It's running two K3s[0] (a lightweight Kubernetes) "clusters": one single node on an HP MicroServer for media, storage, etc., and one multi-node on Raspberry Pis for IoT. For my homelab I don't need pretty autoscaling, so a single node is enough. For the Pis I'll probably try a K3s HA cluster, as they can be unreliable when the SD card eventually fails.

Previously I used hand-crafted, Ansible, Salt, Puppet, and Docker setups (that last one died two weeks ago, prompting me to start the K3s setup). But they all ended up becoming snowflakes, making them too much hassle to maintain. What I like best about the K3s setup is that I can just flash K3OS[1] to an SD card with near-zero configuration and apply the Kubernetes resources (through Terraform in this case, but Helm is also a great timesaver).

I still have to figure out a nice way to do persistent storage. But for now since it's one node (the IoT cluster does not have state) the local-path Persistent Volumes work well enough. Might have a look into Rook.
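For anyone following along, a claim against K3s's bundled local-path provisioner is about as simple as persistent storage gets (name and size hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path   # K3s's default dynamic provisioner
  resources:
    requests:
      storage: 10Gi
```

The trade-off, as noted above, is that the backing directory lives on one node's disk, so this only really suits single-node (or node-pinned) workloads.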

I will admit it's not trivial to get started with Kubernetes, but since I already needed to study it for work, this provides me with a nice training ground. Alternatively, HashiCorp offers nice solutions in this space as well with Consul and Nomad. But those need a little more assembly, whereas K3s comes with batteries included (built-in Traefik reverse proxy, load balancer, and DNS).

[0] https://github.com/rancher/k3s#what-is-this

[1] https://github.com/rancher/k3os


> Not gonna happen because it takes 3 master nodes to have a quorum and 1 separate worker node.

Can you explain that more? I had a single-node cluster running: the master needs to be untainted so work gets scheduled on it, and then it runs. There is no need for a quorum because there is nothing to decide on.


Assuming one is doing a kubernetes homelab to train for work or future job interviews, they should probably get a full cluster going.

Kubernetes depends on etcd, which requires an odd number of instances. There might be ways to run a one- or two-node cluster for testing (see minikube), but that's not how it's going to run in a company.


You can run a one-node etcd cluster for testing, but it's not recommended for any real usage.

Most companies won't be running their own etcd clusters anyway, except for on-prem clusters. Cloud users will generally use a managed Kubernetes cluster. If you do need to learn how to run etcd, you can learn the basics in about a day. In our org, the etcd portion of the training is less than half an hour long thanks to good documentation.

For a homelab, consider running something like k3s which can use a SQL database instead of etcd.
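For example, pointing k3s at an external MySQL instead of etcd looks roughly like this (credentials and address are hypothetical; with no datastore flag at all, k3s defaults to embedded SQLite):

```shell
# run the k3s server against an external SQL datastore
k3s server \
  --datastore-endpoint="mysql://user:pass@tcp(192.168.1.20:3306)/k3s"
```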


I've seen plenty of single-node K8s setups that are actively used and maintained. If you're more after the IaC and K8s API and less concerned about raw HA, then it's a workable solution if you have enough RAM and CPU.

Some Kubernetes-based distributions, like OpenShift, require 3 masters at minimum, but that's not the norm across K8s installations.



