Why use an SD card when USB boot is available for the Raspberry Pi 3B/3B+ and RPi 2B v1.2[1]? SD cards are the bane of single-board computers. I understand it's a choice made to keep costs down, and I'm grateful for that, but SD cards are not designed to run an OS.
One might argue that the RPi's USB 2.0 interface doesn't let a USB SSD add much value, but benchmarks show it still at least doubles the performance[2].
At first I thought the recommendation to use an SSD with a Raspberry Pi was crazy, but the benchmarks do indeed show an improvement. And, as that article points out, since the SSD's performance isn't the bottleneck (you hit the USB 2.0 limit anyway), you can grab the cheapest decent SSD, which ... comes in at just $20!
Though I still like SD cards for the form factor. There are newer A1-class SD cards with better minimum IOPS, which I'd be totally fine with. But reliability is still such a big issue with SD cards. I had one die a few days ago; always annoying.
Just to clarify: by USB SSD, I mean a 2.5" SSD with a USB adapter, not the expensive portable SSDs like the Samsung T5.
Regarding the cheap SSD: yes, the benchmarks also show the cheapest SSD (Crucial BX500) at the top. I'm not certain why that is. I think it's partly because people attach the cheapest SSDs given the RPi's limitations, but then again, the Samsung SSDs in those benchmarks score lower even though they score high in regular PC SSD benchmarks.
I think the conditions on the RPi don't bring out the best in Samsung's controller but do for Crucial/JMicron. SD cards are supposed to be much more reliable than SSDs; it's our improper use of them that causes failures, especially on an overclocked RPi.
I find it refreshingly straightforward for personal and testing setups (and more practical than microk8s for me right now), and I'm waiting for Rio to hit a couple of stable milestones.
(I try OpenFaaS now and then, but after contributing a deployment template early on, I lost my enthusiasm for it; it also ran the gateway and the admin UI in the same process, which I considered a design flaw.)
Rio is a wrapper around Knative and Istio, from what I can tell. The thing I don't see (and I haven't tried Rio, so maybe someone who is using it can say this better) is how it builds your apps. Because it wraps Knative, I assume it uses Knative's build.
I don't know if that means I'm responsible for writing Dockerfiles, or if I can swap in something like Buildpacks.io's v3 buildpacks. But I do think it means the system can scale to zero replicas when traffic dies off to the point where there haven't been any requests for something like 10 minutes and it's genuinely unclear whether or when another request will arrive.
I have been wondering about Rio but so far not enough to break down and try it.
It's been a while since I last used minikube, but it was a bit slow then.
There is a new alternative called kind: https://github.com/kubernetes-sigs/kind
I only tested it briefly (on Linux), but it seemed faster than minikube.
In contrast to minikube, kind does not use a VM but instead implements a cluster node as a single Docker container.
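For reference, the basic kind workflow is just a couple of commands (assuming Docker and the kind binary are already installed); since the "node" is just a Docker container, clusters are quick to create and throw away:

    kind create cluster --name dev           # start a single-node cluster inside a Docker container
    kubectl cluster-info --context kind-dev  # kind adds a kubectl context named kind-<cluster>
    kind delete cluster --name dev           # tear the container down when finished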
I've never seen microk8s work. It depends very heavily on iptables rules, and I suspect that if you have routes to anything on 172.16.0.0/12 it will work unpredictably. (I had a similar problem with a VPC that had subnets that conflicted with what Docker chose to use.) Obviously microk8s works for someone, but it's never worked for me. But I work at an ISP and our route table on the corp network is excessively large.
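If you suspect the same kind of clash, a quick (unofficial) sanity check is to look for existing routes in the 172.16.0.0/12 range that Docker or microk8s might also try to claim:

    # list routes that fall inside 172.16.0.0/12 (172.16.x.x through 172.31.x.x)
    ip route | grep -E '172\.(1[6-9]|2[0-9]|3[01])\.'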
One of my coworkers tried to use microk8s instead of minikube and we debugged it extensively for a couple of days, but ended up baffled. We had to set up some rules to forward localhost:5000 into the cluster for docker push; instead we got a random nginx instance that we couldn't figure out the origin of. Even after uninstalling microk8s, we still had a ton of random iptables rules and localhost:5000 was still nginx... It was weird.
Minikube works great, however. You will still need some infrastructure to push to its docker container registry in order to run locally-developed code. Out of the box, you can persuade your local machine to use minikube's docker for building, but it runs in a VM and unless you use non-default minikube provisioning settings, it doesn't have access to all the host machine's cores, which is kind of slow. I ended up making minikube's container registry a NodePort so that every node (all 1 of them) can get at localhost:5000 to pull things. I then added some iptables rules to make localhost:5000 port-forward to $MINIKUBE_IP:5000 so that "docker push localhost:5000/my-container" works. It's kind of a disaster.
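A rough sketch of that plumbing, assuming the default VirtualBox VM and a registry already reachable at $MINIKUBE_IP:5000 as described above; treat it as the idea rather than a recipe, since the exact rules may need tweaking per distro:

    MINIKUBE_IP=$(minikube ip)

    # Option 1: skip pushing entirely and build against minikube's Docker daemon.
    eval "$(minikube docker-env)"

    # Option 2: make "docker push localhost:5000/..." reach the in-cluster registry.
    # Locally generated packets to 127.0.0.1:5000 are DNATed to the minikube VM;
    # route_localnet and MASQUERADE are needed because they originate from loopback.
    sudo sysctl -w net.ipv4.conf.all.route_localnet=1
    sudo iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 5000 \
      -j DNAT --to-destination "$MINIKUBE_IP:5000"
    sudo iptables -t nat -A POSTROUTING -p tcp -d "$MINIKUBE_IP" --dport 5000 \
      -j MASQUERADE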
I also had to write an HTTP proxy that produces a proxy.pac that says "direct *.kube.local at $MINIKUBE_IP" so that you can visit stuff in your k8s cluster in a web browser and test your ingress controller's routing.
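The generated proxy.pac boils down to roughly this (a simplified sketch; the real file is produced by that little proxy, with the IP filled in from minikube ip):

    MINIKUBE_IP=$(minikube ip)
    cat > proxy.pac <<EOF
    function FindProxyForURL(url, host) {
      // send *.kube.local via the minikube IP so the ingress controller sees it
      if (dnsDomainIs(host, ".kube.local")) {
        return "PROXY ${MINIKUBE_IP}:80";
      }
      return "DIRECT";
    }
    EOF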
After those two things, I quite like it.
I still don't think minikube is a good platform for developing microservices, though. The build/deploy times are too long (and things like ksync don't work reliably, even if you generate a docker container that can hot-reload your app, which involves a lot of setup). I once again wrote something that takes a service description and a list of its dependent services, allocates internal and external ports, puts them in environment variables, starts Envoy for incoming and service-to-service traffic, and then runs the apps wired up to receive requests from Envoy and make requests to other services through Envoy. It took a while, but now that I have it, it's great. I can work on a copy of our entire stack locally, it starts up in seconds, and it's basically identical to production minus the k8s machinery.
I am still surprised I had to solve all these problems myself, but now that they're solved, I'm very happy.
There are similarities and differences. The thing I wrote to run everything locally obviously doesn't call out to external services; it runs everything it needs locally. I also didn't use the xDS Envoy APIs, instead opting to statically generate a config file (though with the envoyproxy/go-control-plane library, because I do plan on implementing xDS at some point in the future).
What I have is as follows. Every app in our repository is in its own directory. Every app gets a config file that says how to run each binary that the app is composed of (we use grpc-web, so there's usually a webpack-dev-server frontend and a go backend). Each binary names what ports it wants, and what the Envoy route table would look like to get traffic from the main server to those ports. The directory config also declares dependencies on other directories.
We then find free ports for each port declared in a config file, allocating one for the service to listen on (only Envoy will talk to it on this port), and one for other services to use to talk to that service. The service listening addresses become environment variables named like $PORTNAME_PORT, only bound for that app. The Envoy listener becomes $APPNAME_PORTNAME_ADDRESS, for other services to use.
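As a rough illustration of that naming scheme (made-up names and a toy port picker, not the real tool):

    APP=FRONTEND     # app/directory name, uppercased
    PORT_NAME=HTTP   # a port name declared in the app's config file

    # let the kernel pick a free port by binding to port 0
    free_port() {
      python3 -c 'import socket; s=socket.socket(); s.bind(("127.0.0.1",0)); print(s.getsockname()[1])'
    }

    # ${PORT_NAME}_PORT: where the app itself listens (only Envoy dials this)
    export "${PORT_NAME}_PORT=$(free_port)"
    # ${APP}_${PORT_NAME}_ADDRESS: where other apps reach it (Envoy listens here)
    export "${APP}_${PORT_NAME}_ADDRESS=127.0.0.1:$(free_port)"

    echo "app listens on :$HTTP_PORT, peers dial $FRONTEND_HTTP_ADDRESS"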
Once Envoy has started up, we then start up each app. The order they start in doesn't matter anymore, because any gRPC clients the apps create can just start talking to Envoy without caring whether or not the other apps are ready yet. And, because each app can contribute routes to a global route table, you can visit the whole thing in your browser and every request goes to the right backend.
I used Envoy instead of just pointing the apps at each other directly with FailFast turned off because I needed the ability to send / to a webpack frontend and /api/ through a grpc-web to grpc transcoder, and would have used Envoy for that anyway. This strategy makes it feel like you're just running a big monolith, while getting all the things that you'd expect with microservices; retries via Envoy, statistics for every edge on the service mesh, etc. And it's fast, unlike rebuilding all your containers and pushing to minikube.
It kind of solves the same problems as docker-compose, but without using Docker.
Whenever I hear 'cluster' I think of scientific computing applications and wonder how these fit in there. The author addresses a nice use case in another post:
I have a Raspberry Pi that runs a great deal of my home automation. When it has a problem, lots of stuff stops working. It would be nice to have k3s with more than one Raspberry Pi as a failover.
If network speed and low latency are so important, then other boards should probably be considered, since even the fastest Raspberry Pi still implements its Gigabit Ethernet over USB and is limited to about 300 Mbps; its CPU performance also lags behind many newer and often cheaper boards.
k3s is basically a stripped-down version of Kubernetes in which legacy and alpha features are removed and a few components are replaced with lighter-weight alternatives (e.g. SQLite instead of etcd), whereas Minikube is a local deployment of full Kubernetes in a VM for development.
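For comparison, the standard single-node k3s install is a one-liner (this is the documented quick-start; it runs as a service and uses SQLite as the datastore by default):

    curl -sfL https://get.k3s.io | sh -
    sudo k3s kubectl get nodes   # k3s bundles its own kubectl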
The article mentions that running Kubernetes over WiFi is not a good idea. What's a better alternative for IoT with RPi that can ease deployment in a similar way but over WiFi? Is Docker Swarm a better idea?
A question I couldn't find an answer to (and I haven't used balenaCloud yet): how are updates to the OS managed? Will, for example, Raspberry Pis running balenaOS update themselves and reboot nightly, or something like that?
I have a few Raspberry Pis floating around the house doing odds and ends, and balena seems like a nice option for reducing their management needs. But I really want them to be able to get the latest security (etc.) updates without me having to manually update each of them from time to time.
balena founder here. balenaOS comes with all the infrastructure needed for robust host OS updates. We expose this functionality to our users via a button in the web dashboard. We don't yet have an automated, rolling upgrade style mechanism.
The main consideration for a feature like this is that containers sometimes depend on interfaces exposed by the operating system that are not always stable. This is especially true for IoT use cases, because containers will typically interface with some device connected to the system.
Tangential to this, we're working on an extended support release schedule (à la Firefox) for balenaOS. I could see us building an automated OS update mechanism on top of that. We'll definitely think about it, thanks a lot for your feedback :)
Their server component is FOSS and you can self-host. You need to do a bit of extra config on the devices but it’s documented. Disclaimer: I’ve never tried it myself.
There are a few other options available for container runtimes in Kubernetes (the kubelet doesn't support LXC directly). Not everything will run on a Raspberry Pi (because of ARM), but here's a list: https://kubedex.com/kubernetes-container-runtimes/
Are you building custom docker images as part of your local development? I'm curious if you've found a good way to push locally built images into the cluster since it doesn't provide a private registry server.
Do you know if persistent disk support has improved? I'd love to migrate over from minikube / microk8s, but that was the largest blocker for me last time I looked.
Minikube adopted VirtualBox as its default and recommended VM driver, and personally I never managed to get non-VirtualBox VM drivers to work with minikube, including Docker.
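For what it's worth, the driver can be selected explicitly at start time; the flag is --vm-driver on older minikube releases and --driver on newer ones:

    minikube start --vm-driver=virtualbox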
I'll never forget one time in a large meeting when the technical experts were asked by one of the C-suite execs what serverless meant. One brave soul rose to the challenge and, after some fumbling around in their answer, ended with "you basically just upload the code to the server and then the server runs the code for you".
Took a while for the cringing to go away on that one.
The opening bit, “marketing term that loosely...”, clearly explains that it’s pretty arbitrary when a service is called “serverless” and that the decision is usually just a marketing one. Which is precisely true.
There’s no formal definition for “serverless”; it literally is just a marketing term that loosely describes a shared-tenancy environment. The only reason Heroku isn’t “serverless” is that they don’t market themselves that way.
So it’s a pointless exercise to nitpick anyone’s definition, since there isn’t an actual formal one. E.g. AWS uses the term serverless to describe services other than Lambda.
The whole term is just made-up marketing bullshit for shared tenancy.
In the 70s we used to call it time sharing. But I doubt many people these days will remember that term.
A commonly made distinction between the two is that Heroku still exposes instances to you: you buy a number of "dynos". Whereas a "serverless" solution doesn't, e.g. Lambda just spins up workers as needed and bills you for the CPU time used.
But an automobile is literally horseless. There's no rented horse hidden under layers of abstraction. I guess that "someoneelsesserverful" just doesn't have the same ring to it.
[1]:https://www.raspberrypi.org/documentation/hardware/raspberry...
[2]:https://jamesachambers.com/raspberry-pi-storage-benchmarks/