It's excellent to see more "tiny" distros for Kubernetes. The resource requirements of the control plane have plummeted lately, and it'll be exciting to see more IoT devices running the full k8s API.
At any rate, we have a couple customers using microshift at https://kubesail.com and it works like a charm for home-hosting! Might have to add docs for it soon!
It used to be microk8s by a long shot, but starting about 6 to 12 months ago k3s leaped ahead in terms of simplicity and memory/CPU usage. k3s runs great now on a system with as little as 1 or 2 GB of RAM.
> Functionally, MicroShift repackages OpenShift core components* into a single binary that weighs in at a relatively tiny 160MB executable (without any compression/optimization).
Sorry, 160meg isn't 'relatively tiny'.
I have many thousands of machines running in multiple datacenters, and even getting a ~4 MB binary distributed onto them without saturating the network (100 Mbit) and slowing everything else down is a bit of a challenge.
Edit: It was just over 4 MB using gzip; I recently changed to xz, which decreased the size by about 1 MB, and I was excited about that given the scale I operate at.
160 MB is indeed relatively tiny compared to a full k8s distribution. The "I have issues distributing a tiny 4 MB binary to my servers"-sized hole that you've dug yourself doesn't change that.
Relative to Kubernetes (well, specifically OpenShift, which is Kubernetes packaged with a lot of commonly-needed extra functionality / tools)
A "hello world" written in Go is 2mb to begin with so ~4mb is a bit unrealistic for any substantial piece of software written in Go. Although if that colors your opinion of Go itself, you're certainly allowed to have that opinion :)
I would argue that 15K lines of code is far from a complex app as is normally understood. It's a rather small app, especially in Go which isn't exactly the most expressive language out there.
I'm going to take a guess that even a "mini" distribution of Kubernetes is more than 15k lines of code though. k3s is quite a bit smaller than this but it's still only described as "<50 megabytes"
I recently wrote a single-process k8s distribution (for tests). Compiled into a linux/amd64 binary with no special settings it's 183M with go1.18beta1.
If you use the reflect package in Go, the linker's dead code elimination is effectively disabled when the program uses reflect.Value.MethodByName (or reflect.Value.Method) to look up a method, because reflection could reach any exported method and the linker has to preserve that possibility.
Unfortunately, almost every large client library has public methods AND pulls in something that calls the reflection method somewhere, so in many cases you get the full tree of dependencies.
It's unfortunate because any one dependency can bring in reflect and trigger the behavior. It hurts large libraries of generated methods the most, since you usually only use a small subset of them.
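A minimal sketch of how it bites (hypothetical names; the exact linker heuristics vary by Go version): once MethodByName is called with a non-constant name anywhere in the program, the linker has to keep every exported method.

    package main

    import (
        "fmt"
        "os"
        "reflect"
    )

    type API struct{}

    // Rare is never called directly, but the linker can't prove it
    // unreachable once a dynamic MethodByName lookup exists.
    func (API) Rare() { fmt.Println("kept alive for reflection") }

    func main() {
        name := "Rare"
        if len(os.Args) > 1 {
            name = os.Args[1] // non-constant name: exported methods can't be pruned
        }
        m := reflect.ValueOf(API{}).MethodByName(name)
        if m.IsValid() {
            m.Call(nil)
        }
    }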
That makes sense. Reflection is the enemy of anyone trying to produce a compact binary (or webpack bundle). A quick grep of the k8s codebase turns up plenty of call sites that kill dead code elimination:
staging/src/k8s.io/apimachinery/pkg/runtime/scheme.go: if m := reflect.ValueOf(obj).MethodByName("DeepCopyInto"); m.IsValid() && m.Type().NumIn() == 1 && m.Type().NumOut() == 0 && m.Type().In(0) == reflect.TypeOf(obj) {
I found that the Go compiler isn't very good at leaving dead code out of the binary.
I ran into it when switching from aws-sdk-go to aws-sdk-go-v2; the binary jumped from 27 MB to 66 MB [1]
Granted, a fully featured app will likely use all of the module's code, so in that case it's not a factor.
Yeah, I have a feeling that these giant autogenerated clients are tough to optimize. Too many layers of indirection; even the compiler is confused and just crosses its fingers and hopes it doesn't have to investigate too much.
I don't understand the point of your argument, because nobody (including the announcement above) is talking about phones.
From the post:
> Imagine one of these small devices located in a vehicle to run AI algorithms for autonomous driving, or being able to monitor oil and gas plants in remote locations to perform predictive maintenance, or running workloads in a satellite as you would do in your own laptop. Now contrast this to centralized, highly controlled data centers where power and network conditions are usually very stable thanks to high available infrastructure — this is one of the key differences that define edge environments.
> Field-deployed devices are often Single Board Computers (SBCs) chosen based on performance-to-energy/cost ratio, usually with lower-end memory and CPU options. These devices are centrally imaged by the manufacturer or the end user’s central IT before getting shipped to remote sites such as roof-top cabinets housing 5G antennas, manufacturing plants, etc.
> At the remote site, a technician will screw the device to the wall, plug the power and the network cables and the work is done. Provisioning of these devices is “plug & go” with no console, keyboard or qualified personnel. In addition, these systems lack out-of-band management controllers so the provisioning model totally differs from those that we use with regular full-size servers.
I don't read this and think "phones". This sounds like it's targeted at embedded industrial / telecom devices. (At least based on the examples they chose, I'm sure you could use it for other things).
The word "mobile" doesn't actually appear anywhere on the page so I'm not sure where you got "mobile devices" from.
They reported the 160 MB as the uncompressed (and probably unstripped?) size. If you compress that it will be around 50 MB, still more than your 3 MB xz example, but at least you're now comparing apples to apples.
> I have many thousands of machines running in multiple datacenters, and even getting a ~4 MB binary distributed onto them without saturating the network (100 Mbit) and slowing everything else down is a bit of a challenge.
May I suggest CAS (content-addressable storage) or something similar for distributing it instead? I've had good success using torrents to distribute large binaries to a large fleet of servers (that were also physically close to each other, in clusters with some more distance between them) relatively easily.
Thanks for the suggestion, but as weird as this sounds, we also don't have central servers to use at the data centers for this.
The machines don't all need to be running the same version of the binary at the same time, so I took a simpler approach: each machine checks for updates on a random schedule over a configurable amount of time. This distributes the load evenly and everything becomes eventually consistent. After about an hour, everything is updated without issues.
I use Cloudflare Workers to cache at the edge. On a push to master, the binary is built in GitHub CI and a release is made after all the tests pass. There is a simple JSON file where I can define release channels for a specific version for a specific CIDR (also on its own release CI/CD, so I can validate the JSON with tests). I can upgrade/downgrade and test on subsets of machines.
The machines, on their random schedule, hit the CF Worker, which checks the cached JSON file and either returns a 304 or the binary to install, depending on the parameters passed in on the query string (current version, IP address, etc.). The binary is downloaded and installed, then the process exits and systemd restarts the new version.
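For the curious, the client side is basically this shape (an illustrative sketch, not the real code; the endpoint, paths, and one-hour window here are made up):

    package main

    import (
        "io"
        "math/rand"
        "net/http"
        "os"
        "time"
    )

    // checkForUpdate is a hypothetical sketch of the per-machine check.
    func checkForUpdate(currentVersion string) error {
        // Random jitter spreads the fleet's requests over the window.
        time.Sleep(time.Duration(rand.Intn(3600)) * time.Second)

        resp, err := http.Get("https://updates.example.com/check?version=" + currentVersion)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        if resp.StatusCode == http.StatusNotModified {
            return nil // already on the release assigned to this machine
        }

        // Write next to the running binary so the rename stays on one filesystem.
        tmp, err := os.CreateTemp("/usr/local/bin", "agent-*")
        if err != nil {
            return err
        }
        if _, err := io.Copy(tmp, resp.Body); err != nil {
            return err
        }
        tmp.Chmod(0o755)
        tmp.Close()
        if err := os.Rename(tmp.Name(), "/usr/local/bin/agent"); err != nil {
            return err
        }
        os.Exit(0) // systemd restarts the service on the new binary
        return nil
    }

    func main() {
        _ = checkForUpdate("v41")
    }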
> Thanks for the suggestion, but as weird as this sounds, we also don't have central servers to use at the data centers for this.
Same here, hence the suggestion of P2P software (BitTorrent) to let clients fetch the data from each other (together with an initial entrypoint for the deployment, obviously). You'll avoid the congestion issue because clients fetch the data from whatever node is nearest, and configured properly they will only fetch data from outside the internal network once; after that it's all local transfer (within the same data center).
I'd seriously look into BitTorrent for this use case, as it sounds ideal. You can even configure your client to run a script after a torrent completes, so you could script the update migration with minimal code. You can also set up upload/download limits easily.
I think Resilio Sync might also be a good option; I think it may even use BitTorrent internally. (It was formerly known as BitTorrent Sync, not sure why they changed the name.)
Over-engineering. You're trying to solve a problem that I don't have. What I have now works great with a page of well-tested code and doesn't have the complexity of BT.
I'll repeat the comment above: the machines are all on separate VLANs and don't really talk to many other machines.
You know your stack best. I appreciate the insights you have into your process and workflow. Do you use anything special to bridge VLANs, like Tailscale or WireGuard?
Thanks, indeed. P2P file distribution is an interesting idea, but it doesn't really solve any major problems here: a ~3 MB file to a lot of machines in an hour, a few times a week. The simplest solution is for each machine to just download the file itself.
The data centers don't need to talk to each other at all. We use wg at each DC for VPN access. I do need to spend a bit more time with my own wg client setup so that I can switch between DCs more easily... right now it is kind of a manual process and that is definitely a pain. One of these days...
I've heard that there is third-party tooling that makes managing wg easier, but I haven't really looked into that space. I need to, though, as that's one of the major pain points of wg: essentially, each client needs to be whitelisted server-side.
Edit: Maybe something simple like rsync or rclone would be best for the update sync/distribution mechanism rather than BT.
A binary diff (or even plugin) mechanism could potentially shrink the download even further, so that only the parts of the app that change need to be downloaded.
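For example, with bsdiff/bspatch (sketch; the file names are hypothetical):

    # build machine: produce a delta between two releases
    bsdiff agent-v41 agent-v42 v41-to-v42.patch

    # on each host: reconstruct the new binary from the old one plus the patch
    bspatch agent-v41 agent-v42 v41-to-v42.patch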
That said, for now, downloading the binary really is a solved problem for me, with a relatively simple solution. I'd rather work on new features that drive revenue. =)
"relatively" is the operative word there. Compared to regular/full openshift, it is tiny. I would imagine they chose the word "relative" because in absolute terms nobody would call 160mb tiny.
> I have many thousands of machines running in multiple datacenters, and even getting a ~4 MB binary distributed onto them without saturating the network (100 Mbit) and slowing everything else down is a bit of a challenge.
You may find a murder[1] / Herd[2] / Horde[3]-type tool of some use.
If you're really that sensitive to size, you may want to try 7z. I can usually get archives a few percent smaller than xz, with faster decompression to boot. Of course, then you might need to install a 7z lib, which could be an issue.
Input binary size: 12996608
Folder contains a few more bytes for the service file and installer script.
4476005 (gz)
3421456 (xz)
3447940 (7Z)
tar c -C ./build $(BINARY) | gzip -9 - > $(PKG_NAME_GZ)
tar c -C ./build $(BINARY) | xz -z -9e - > $(PKG_NAME_XZ)
tar c -C ./build $(BINARY) | 7z a -si $(PKG_NAME_7Z)
I played around with the compression options on xz. If you have some suggestions on improving 7z, I'm all ears.
Decompression time isn't an issue here.
Installing 7z on the hosts isn't great, but could be done.
You could try the compression options for 7z; not sure if they would help in your case since I don't know the defaults offhand. Here's an example from the man page:
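From memory it's the "ultra settings" one, something like:

    7z a -t7z -m0=lzma -mx=9 -mfb=64 -md=32m -ms=on archive.7z dir1

where -md (dictionary size) and -mx (compression level) are probably the knobs most worth playing with.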
There was already a 'MiniShift' version of OpenShift, so the name makes some sense. Not that RedHat has been consistent with naming this product generally, what with RedShift, OKD, OpenShift, etc.
I wonder if maybe they could have gone with OpenMicro instead? I don't really dislike MicroShift that much, but my brain did immediately auto-insert the ending of "Micro"... with ..."soft". So, there's that prejudice right there. ;-)
I don't see any reason to bring the IoT buzzword into this - but I am not a marketing guy. Standard OpenShift (OK, I will try to refrain from profanity) is extremely resource-intensive even for most fat enterprise environments. They could have just one product based on what they are calling MicroShift - less resource usage, modular to the core so customers can add precisely what they need - some will use the IoT profile, some will use minimal-enterprise, and so on. Right now they just try to smother you with a lot of baggage and burden, and their solution to everything is to run a managed one in AWS - i.e. dictate the choices.
I just never liked the idea of taking something like open source k8s and creating a Red Hat-specific version that requires different treatment and a whole lot of other enterprise stuff, including RHEL. And it doesn't work all that much better than GKE or EKS or even building your own cluster (I have done all three).
They should have just created tooling around standard k8s and allowed customers to use the good parts - deployment workflows, s2i, etc. - basically plugging the gaps on top of standard k8s. I can totally see a lot of customers seeing value in that.
I'm running both OpenShift and k3s in production, and there isn't that much that requires different treatment between the two.
There are some specific OpenShift APIs (like Routes, which are terrible) and some quality-of-life improvements (the service-ca signer), but nothing drastic.
Huh, interesting. What do you not like about Routes? My team is providing an IaaS solution for internal developers in my company, and a lot of developers seem to have fewer problems with OpenShift's service exposure abstraction (Routes) than with pure Kubernetes.
A big inconvenience is that for HTTP2 Routes, or edge/re-encrypt Routes with a custom TLS certificate, the TLS certificate and key must be inlined in the Route resource instead of referencing a Secret the way Ingress resources do. I think this is a big oversight: Routes mix secrets and ingress configuration together.
It makes GitOps annoying, because I don't want to treat the whole Route resource as a secret that needs to be encrypted or stored in Vault.
Do I then also treat Route resources as sensitive and deny some users access, on the grounds that they could contain private keys?
I also have to worry about keeping the Route updated before certificates expire, instead of having cert-manager take care of it.
For context, the reason Routes were designed to inline cert info (about 6-12 months before Ingress) was that letting ingress controllers (which run on nodes) read all Secrets was too scary: it means the ingress controller has to be considered as powerful as any ServiceAccount on the cluster. The alternative chosen was to prevent ingress controllers from seeing any Secret except the ones the user decides to expose via the Route.
Later on, the pattern of referencing secrets in extensions became more common, and things like the NodeAuthorizer (which allows nodes to only read the secrets associated with the pods scheduled onto them) demonstrated a possible different pattern we could have chosen to implement (although nothing that can be efficiently implemented without changing kube itself today).
Agreed, Routes should have added a ref - that was feedback that informed Gateway API, and once that hits GA and provides the best of both Routes and Ingress, we would probably suggest using it instead. Routes are mostly frozen for all but critical new features now, though, so we can ensure Gateway has everything we need to replace them while still providing the necessary forward compatibility.
I would treat Routes as sensitive. Note that within a namespace there is minimal cross-user security (not part of the kube/OpenShift threat model), so either give namespace read access to a specific set of Routes and only infrastructure users access to all Routes, OR use a wildcard cert on the routers and keep all key material out of the user's space. On 4.x versions you can also create multiple ingress controllers and assign them to different namespaces, preventing leaks between them.
That I can relate to. For us, there's only one dev team that uses HTTP2 (financial industry, so HTTP2 is still seen as "new = beta"), and they ran into that problem. I have no idea how they solved it, though.
FWIW, although I've known for a while that OpenShift converts Ingress resources to Routes, I just found out that the Ingress Controller sets up a watch on the secret reference, which keeps the inlined TLS in the Route in sync. That could be enough for some people.
Routes suck because they're basically the same-ish API as Ingress, but now your developers have to maintain two kinds of manifests/Helm templates depending on whether they're targeting OpenShift or Kubernetes.
You can create Ingress resources on OpenShift and it will automatically create routes for you.
You can customise the generated Route by using annotations on the Ingress resource.
This has worked well for us because not all Helm charts are OpenShift-friendly, but they usually do allow customising the Ingress resource with annotations, or we patch it in via Kustomize.
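For example, setting the TLS termination on the generated Route (assuming an Ingress named "myapp"; if I remember the key right, it's route.openshift.io/termination):

    oc annotate ingress myapp route.openshift.io/termination=edge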
When Gateway API lands we definitely plan to streamline everyone moving to that (since it’s the superset of Routes and Ingress and supports the features we didn’t want to lose going back to ingress). That should help.
We have considered having a controller that mirrors both Routes and Ingress to a Gateway HTTPRoute (since Gateway is similar to Routes), but plans aren't finalized yet.
Have you tried running Istio, for example, in an enterprise env? They needed you to install an RH-specific older version, and IIRC that wasn't just for support. I could list some more things if I tried hard enough to recall.
> I don't see any reason to bring the IoT buzzword into this
IoT is certainly a buzzword, but it also does have real meaning, and this product is aimed squarely at the IoT edge devices themselves. Seems quite appropriate to use the term IoT to describe it.
It's getting harder and harder to find the Kubernetes part of OpenShift -- wtf is openshift-dns? I'm not sold that this is better than k3s, and I think Red Hat isn't capable of balancing the needs of a sprawling, Borg-like platform that assimilates all kube functions against those of a lean kube experience.
I think one thing I'm definitely missing from the OpenShift docs [1] is reasoning. What does it add? Why do I want to learn to use an operator instead? Otherwise, it's pretty clear that it's just an operator on top of CoreDNS.
I do think that the docs are utterly devoid of Kubernetes content. Historically, I think RH tried to differentiate themselves from k8s. Now, that can definitely hurt knowledge migration and transfer.
I can't speak for Red Hat but I have a lot of experience with OpenShift.
Basically all of these operators are there to allow customization. The philosophy behind OpenShift is to move everything possible into operators so it can be managed in a cloud-native way. This has a bunch of benefits, like being fully declarative and being able to keep your whole config in version control.
Red Hat is definitely not going for a "lean" kube experience. OpenShift is heavy AF (for good reasons, but still). They're going for "this includes everything you need for a full platform." For users that run OpenShift on the backend, it's pretty nice to have uniformity with the edge too. This will be "light enough" for most cases, although of course it can be further improved.
openshift-dns is just their custom operator, which deploys and configures CoreDNS. OpenShift is k8s with a lot of RH operators bundled in to configure stuff like monitoring, logging, networking, ingress, etc.
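On a 4.x cluster you can see it listed alongside the rest of the bundled operators with:

    oc get clusteroperators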
One demo using Microsoft Azure IoT Hub, Docker, digital twins (a glorified JSON, from memory), and a Raspberry Pi was fun because it took minutes to deploy something to make a light blink.
It's worth mentioning that MicroSHIFT is a large Taiwanese bicycle components manufacturer. It's unlikely there's a trademark infringement issue as they're in different markets, but this is kind of like a bike company naming its new product "Hewlett-Packard".
I could see a sock or underwear manufacturer called MicroSoft. Aren't trademarks domain-specific anyway? Unless Nadella suddenly decides to branch into fashion, of course.
Call me grumpy, but I spent 5 minutes on the site and still have no idea what "micro.dev" is other than "a platform for cloud-native development" (yeah, what isn't nowadays?)
I understand where the name is coming from, but MicroShift is treading dangerously close to a certain other company's name and trademarks. What are the chances Microsoft will see this as trademark infringement?