Stripped-down Kubernetes on the Raspberry Pi (alexellis.io)
171 points by alexellisuk on June 1, 2019 | 57 comments



Why use an SD card when USB boot is available for the Raspberry Pi 3B/3B+ and RPi 2B v1.2 [1]? SD cards are the bane of single-board computers. I understand it's a choice made to limit cost, and I'm grateful for that, but SD cards are not designed to run an OS.

One might argue that the RPi's USB 2.0 interface limits what a USB SSD can deliver, but benchmarks show it still at least doubles performance [2].
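
For what it's worth, enabling USB boot on a 3B is a one-time OTP change. This is from memory, so double-check against [1] before running it, since the OTP bit is permanent:

    # on a Pi 3B still booting from SD:
    echo program_usb_boot_mode=1 | sudo tee -a /boot/config.txt
    sudo reboot
    # after the reboot, verify the OTP bit is set (expect 17:3020000a):
    vcgencmd otp_dump | grep 17:
    # then remove the line from /boot/config.txt, flash your USB drive,
    # pull the SD card, and boot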

[1]:https://www.raspberrypi.org/documentation/hardware/raspberry...

[2]:https://jamesachambers.com/raspberry-pi-storage-benchmarks/


I stumbled on that second link a while ago when I was curious about the best SD card for Raspberry Pis. There's a newer post now: https://jamesachambers.com/raspberry-pi-storage-benchmarks-2...

At first I thought the recommendation to use an SSD with a Raspberry Pi was crazy, but the benchmarks do indeed show an improvement. And, as that article points out, since the SSD's raw performance doesn't matter (you hit the USB 2.0 limit anyway), you can grab the cheapest decent SSD, which comes in at just $20!

Though I still like SD cards for the form factor. There are the newer A1-class SD cards with better minimum IOPS, which I'd be totally fine with. But reliability is still such a big issue with SD cards; I had one die a few days ago, which is always annoying.


Just to clarify: by USB SSD, I mean a 2.5" SSD with a USB adapter, not an expensive portable USB SSD like the Samsung T5.

Regarding the cheap SSD: yes, the benchmarks do show the cheapest SSD (the Crucial BX500) on top. I'm not certain why. It may be that people choose the cheapest SSD because of the RPi's limitations, but then again the Samsung SSDs score lower in those benchmarks despite scoring high in actual PC SSD benchmarks.

I think the conditions on the RPi don't bring out the best in Samsung's controller but do for the Crucial/JMicron one. As for SD cards, they're supposed to be much more reliable than SSDs; it's our improper use of them that causes failures, especially on overclocked RPis.


I've been playing around with k3s myself on Azure:

https://github.com/rcarmo/azure-k3s-cluster

...as well as on my own ARM cluster, with a private registry:

https://taoofmac.com/space/blog/2019/05/18/2034
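
Getting a cluster up really is just the documented two-liner (quoting the k3s README from memory, so double-check there):

    # on the server
    curl -sfL https://get.k3s.io | sh -
    sudo cat /var/lib/rancher/k3s/server/node-token   # join token
    # on each agent
    curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -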

I find it refreshingly straightforward for personal and testing setups (and more practical than microk8s for me right now), and am waiting for rio to hit a couple of stable milestones:

https://github.com/rancher/rio

(I try openFaaS now and then, but after contributing a deployment template early on, I lost my enthusiasm for it. It also ran the gateway and the admin UI in the same process, which I considered a design flaw.)


Have you tried minikube? If you have, how does it compare to k3s/microk8s? I ask because k3s is not supported on Windows.

Also, Rio sounds like "host your own Google App Engine" — am I right?


Rio is a wrapper around Knative and Istio, from what I can tell. The thing I don't see (and I haven't tried Rio, so maybe someone who is using it can answer this better) is how it builds your apps. Since it wraps Knative, I assume it uses Knative's build.

I don't know if that means I'm responsible for writing Dockerfiles, or if I can swap in something like Buildpacks.io v3 buildpacks. But I do think it means the system can scale to zero replicas when traffic dies down: say, no requests for ten minutes and no telling when or whether the next one will arrive.

I have been wondering about Rio but so far not enough to break down and try it.


It's been a while since I last used minikube, but it was a bit slow then. There is a new alternative called kind: https://github.com/kubernetes-sigs/kind I only tested it briefly (on Linux), but it seemed faster than minikube. In contrast to minikube, kind does not use a VM but instead implements a cluster node as a single Docker container.
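
Trying it out is about as minimal as it gets (a sketch; assumes Docker is running, and the install command depends on your kind version):

    GO111MODULE=on go get sigs.k8s.io/kind   # or grab a release binary
    kind create cluster                      # node runs as a Docker container
    export KUBECONFIG="$(kind get kubeconfig-path)"
    kubectl cluster-info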


I've never seen microk8s work. It depends very heavily on iptables rules, and I suspect that if you have routes to anything in 172.16.0.0/12 it will behave unpredictably. (I had a similar problem with a VPC whose subnets conflicted with what Docker chose to use.) Obviously microk8s works for someone, but it's never worked for me; then again, I work at an ISP and our route table on the corp network is excessively large.

One of my coworkers tried to use microk8s instead of minikube, and we debugged it extensively for a couple of days but ended up baffled. We had to set up some rules to forward localhost:5000 into the cluster for docker push; instead we got a random nginx instance that we could never figure out where it was running. Even after uninstalling microk8s, we still had a ton of random iptables rules, and localhost:5000 was still nginx... It was weird.

Minikube works great, however. You will still need some infrastructure to push to its Docker container registry in order to run locally developed code. Out of the box, you can persuade your local machine to use minikube's Docker for building, but it runs in a VM, and unless you use non-default minikube provisioning settings it doesn't have access to all the host machine's cores, which makes builds kind of slow. I ended up making minikube's container registry a NodePort so that every node (all one of them) can get at localhost:5000 to pull things. I then added some iptables rules to make localhost:5000 port-forward to $MINIKUBE_IP:5000 so that "docker push localhost:5000/my-container" works. It's kind of a disaster.
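
Roughly the shape of the forwarding piece, as an untested from-memory sketch (socat here instead of the iptables rules, since it's easier to show; minikube's registry addon also covers part of this):

    minikube addons enable registry                     # in-cluster registry on port 5000
    socat TCP-LISTEN:5000,fork,reuseaddr TCP:$(minikube ip):5000 &
    docker push localhost:5000/my-container            # now lands in the cluster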

I also had to write an HTTP proxy that produces a proxy.pac that says "direct *.kube.local at $MINIKUBE_IP" so that you can visit stuff in your k8s cluster in a web browser and test your ingress controller's routing.
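
The proxy.pac itself is tiny; something like this, with 192.168.99.100 standing in for $MINIKUBE_IP:

    cat > proxy.pac <<'EOF'
    function FindProxyForURL(url, host) {
      // send cluster hostnames at the minikube VM; everything else goes direct
      if (dnsDomainIs(host, ".kube.local"))
        return "PROXY 192.168.99.100:80";
      return "DIRECT";
    }
    EOF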

After those two things, I quite like it.

I still don't think minikube is a good platform for developing microservices, though. The build/deploy times are too long (and things like ksync don't work reliably, even once you build a Docker container that can hot-reload your app, which involves a lot of setup). I once again wrote something that takes a service description and a list of its dependent services, allocates internal and external ports, puts them in environment variables, starts Envoy for incoming and service-to-service traffic, and then runs the apps wired up to receive requests from Envoy and make requests to other services through Envoy. It took a while, but now that I have it, it's great. I can work on a copy of our entire stack locally; it starts up in seconds and is basically identical to production minus the k8s machinery.

I am still surprised I had to solve all these problems myself, but now that they're solved, I'm very happy.


I'd like to do this with Envoy. I found this blog post: https://blog.turbinelabs.io/local-development-with-lots-of-m... Did you do something similar? Thank you!


There are similarities and differences. The thing I wrote to run everything locally obviously doesn't call out to external services; it runs everything it needs locally. I also didn't use the xDS Envoy APIs, instead opting to statically generate a config file (though with the envoyproxy/go-control-plane library, because I do plan on implementing xDS at some point in the future).

What I have is as follows. Every app in our repository is in its own directory. Every app gets a config file that says how to run each binary that the app is composed of (we use grpc-web, so there's usually a webpack-dev-server frontend and a go backend). Each binary names what ports it wants, and what the Envoy route table would look like to get traffic from the main server to those ports. The directory config also declares dependencies on other directories.

We then find free ports for each port declared in a config file, allocating one for the service to listen on (only Envoy will talk to it on this port), and one for other services to use to talk to that service. The service listening addresses become environment variables named like $PORTNAME_PORT, only bound for that app. The Envoy listener becomes $APPNAME_PORTNAME_ADDRESS, for other services to use.

Once Envoy has started up, we then start up each app. The order they start in doesn't matter anymore, because any gRPC clients the apps create can just start talking to Envoy without caring whether or not the other apps are ready yet. And, because each app can contribute routes to a global route table, you can visit the whole thing in your browser and every request goes to the right backend.

I used Envoy instead of just pointing the apps at each other directly with FailFast turned off because I needed the ability to send / to a webpack frontend and /api/ through a grpc-web to grpc transcoder, and would have used Envoy for that anyway. This strategy makes it feel like you're just running a big monolith, while getting all the things that you'd expect with microservices; retries via Envoy, statistics for every edge on the service mesh, etc. And it's fast, unlike rebuilding all your containers and pushing to minikube.
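
The generated Envoy config boils down to a static route table plus one cluster per allocated port. An abbreviated sketch in 2019-era v2 syntax (port numbers and cluster names made up):

    cat > envoy.yaml <<'EOF'
    static_resources:
      listeners:
      - address: { socket_address: { address: 127.0.0.1, port_value: 10000 } }
        filter_chains:
        - filters:
          - name: envoy.http_connection_manager
            config:
              stat_prefix: local_mesh
              route_config:
                virtual_hosts:
                - name: all
                  domains: ["*"]
                  routes:
                  - match: { prefix: "/api/" }
                    route: { cluster: backend }   # grpc-web transcoding goes here
                  - match: { prefix: "/" }
                    route: { cluster: webpack }
              http_filters:
              - name: envoy.router
      clusters:
      - name: backend
        connect_timeout: 1s
        type: STATIC
        hosts: [{ socket_address: { address: 127.0.0.1, port_value: 8081 } }]
      - name: webpack
        connect_timeout: 1s
        type: STATIC
        hosts: [{ socket_address: { address: 127.0.0.1, port_value: 8080 } }]
    EOF
    envoy -c envoy.yaml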

It kind of solves the same problems as docker-compose, but without using Docker.


Thank you for taking the time to write this up. It is extremely helpful on getting me on my way.


microk8s worked out of the box for me. On Ubuntu and even CentOS.


Whenever I hear 'cluster' I think of scientific computing applications and wonder how these fit in there. The author addresses a nice use case in another post:

https://blog.alexellis.io/build-your-own-bare-metal-arm-clus...


I am about to start work on a data science platform with Argo and k8s for a client. It's a really strong use case for embarrassingly parallel applications.


Why Argo and not Kubeflow Pipelines?


But why? Is this solving any real problem people have with home networks or software development?


I have a Raspberry Pi that runs a great deal of my home automation. When it has a problem, lots of stuff stops working. It would be nice to have k3s with more than one Raspberry Pi as failover.


I built a k8s cluster on RPi 3s once and wrote a blog post about it. In Swedish, though. https://www.cygate.se/blogg/quake-hur-jag-byggde-ett-raspber...


If network speed and low latency are so important, then other boards should probably be considered, since even the fastest Raspberry Pi still implements its Gigabit Ethernet through USB and is limited to about 300 Mbps; its CPU performance also lags behind many newer and often cheaper boards.

This list might help. https://www.hackerboards.com/home.php


Does anyone know how you say "k3s"? I had a look at the docs, but didn't find it.


It seems like there’s no official pronunciation (yet). There are some options discussed in a GitHub issue that was opened to ask about this. [1]

[1]: https://github.com/rancher/k3s/issues/55


It's pretty strange to decide upon a name without having a way to pronounce it!?


if you cut the 8 in k8s in the middle, you're left with half of the features... that leaves you at k3s :)


Kubes.


I’ve heard it pronounced as “keys.”


First time I've heard of k3s. It's like Minikube, as far as I can tell? Is it a drop-in replacement?


k3s is basically a stripped-down version of Kubernetes, with legacy and alpha features removed and a few components replaced with lighter-weight alternatives (e.g. SQLite instead of etcd), whereas Minikube is a local deployment of full Kubernetes in a VM for development.


Is it safe to use for multiple nodes if etcd is removed? Won't the cluster become incoherent without it?


I believe that would be the case if the node that runs SQLite goes down. There's also an option to run k3s with etcd though.


The article mentions that running kubernetes over WiFi is not a good idea. What's a better alternative for IoT with RPi that can ease deployment in a similar way but over WiFi? Is docker swarm a better idea?


Take a look at balenaCloud if you want to deploy containers to IoT devices. (Disclaimer: happy customer). Their free tier is free forever, too.


A question I couldn't find an answer to (I haven't used balenaCloud yet): how are updates to the OS managed? Will, for example, Raspberry Pis running balenaOS update themselves and reboot nightly, or something like that?

I have a few Raspberry Pis floating around the house doing odds and ends, and balena seems like a nice option for reducing their management needs. But I really want them to be able to get the latest security/etc. updates without me having to manually update each of them from time to time.


balena founder here. balenaOS comes with all the infrastructure needed for robust host OS updates. We expose this functionality to our users via a button in the web dashboard. We don't yet have an automated, rolling upgrade style mechanism.

The main consideration for a feature like this is that containers sometimes have dependencies on interfaces exposed by the operating system which are not always stable. This is especially true for IoT use cases, because containers will typically interface with some device connected to the system.

Tangential to this, we're working on an extended-support release schedule (à la Firefox) for balenaOS. I could see us building an automated OS update mechanism on top of that. We'll definitely think about it; thanks a lot for your feedback :)


Can you use your API to perform the updates?

Btw balena is a great service, good job!


Can you host the control server yourself without public cloud?


Their server component is FOSS and you can self-host. You need to do a bit of extra config on the devices but it’s documented. Disclaimer: I’ve never tried it myself.


First time I've heard about k3s. Can it be configured to use LXC/LXD instead of Docker?

(Nothing wrong with Docker, it's just that some nodes are already running LXC.)


There are a few other container runtime options for Kubernetes (the kubelet doesn't support LXC directly). Not everything will run on a Raspberry Pi (because of ARM), but here's a list: https://kubedex.com/kubernetes-container-runtimes/
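
For k3s specifically, the runtime is chosen with a server flag. If I remember the flags right (check k3s --help):

    k3s server                # default: bundled containerd
    k3s server --docker       # use the host's Docker daemon instead
    # or point at any CRI socket:
    k3s server --container-runtime-endpoint unix:///run/containerd/containerd.sock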


Anyone looked at Rio from Rancher yet? It's like a lightweight overlay on k3s.


k3s is nice for local development.

It's faster and much more lightweight than minikube, since you can quite easily set it up on your machine and avoid a VM.


Are you building custom docker images as part of your local development? I'm curious if you've found a good way to push locally built images into the cluster since it doesn't provide a private registry server.
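
The closest I've found is piping docker save into the containerd that k3s bundles, but I don't know if that's the intended workflow:

    docker build -t myapp:dev .
    docker save myapp:dev | sudo k3s ctr images import -
    # then reference myapp:dev with imagePullPolicy: IfNotPresent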



If you want to avoid the VM, I find minikube with --vm-driver=none works quite well.


Do you know if persistent disk support has improved? I'd love to migrate over from minikube/microk8s, but that was the largest blocker for me last time I looked.


I'm mapping volumes to host folders that are shared via both NFS and SMB, which works well enough for testing/dev.
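
On the k8s side it's just a hostPath volume; a minimal sketch (the pod name, image, and /srv/shared path are made up):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata: { name: dev-pod }
    spec:
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - { name: data, mountPath: /data }
      volumes:
      - name: data
        hostPath:
          path: /srv/shared        # host folder, also exported over NFS/SMB
          type: DirectoryOrCreate
    EOF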


Why would minikube need a VM?


Minikube adopted VirtualBox as its default and recommended VM driver, and personally I never managed to get non-VirtualBox VM drivers to work with minikube, including Docker.


Maybe it's beating a dead horse and the article itself seems fine, but the submission title sounds a bit ridiculous.

"Install this software on your server to make it serverless!"


I'll never forget one time in a large meeting when the technical experts were asked by one of the C-suite what serverless meant. One brave soul rose to the challenge and, after some fumbling around in their answer, ended with "you basically just upload the code to the server and then the server runs the code for you".

Took a while for the cringing to go away on that one.


I don’t think it’s a hard thing to explain though:

Serverless is a marketing term that loosely describes a shared-tenancy environment where you don't manage the host.


This describes e.g. Heroku.


The opening bit, "marketing term that loosely...", clearly conveys that it's pretty arbitrary when a service is called "serverless" and that the decision is usually just a marketing one. Which is precisely true.

There's no formal definition of "serverless"; it literally is just a marketing term that loosely describes a shared-tenancy environment, and the only reason Heroku isn't "serverless" is that they don't market themselves that way.

So it's a pointless exercise nitpicking anyone's definition, since there isn't a formal one. E.g., AWS uses the term serverless to describe services other than Lambda.

The whole term is just made-up marketing bullshit for shared tenancy.

In the 70s we used to call it time sharing. But I doubt many people these days will remember that term.


A commonly made distinction between the two is that Heroku still exposes instances to you: You buy a number of "dynos". Whereas a "serverless" solution doesn't, e.g. lambda just spins up workers as needed and bills you for the CPU time used.


In that case it falls under the exception I made, where you do manage the host; albeit the management is just "buy an instance".


War is peace, freedom is slavery, Serverless is actually servers.

https://darkscience.net/quotes/#209


But an automobile is literally horseless. There's no rented horse hidden under layers of abstraction. I guess that "someoneelsesserverful" just doesn't have the same ring to it.



