End-of-Life Announcement for CoreOS Container Linux (coreos.com)
180 points by thebeardisred on Feb 5, 2020 | 78 comments



Fedora CoreOS seems like it has some pretty significant restrictions

> It does not yet include native support for Azure, DigitalOcean, GCE, Vagrant, or the Container Linux community-supported platforms.

> The rkt container runtime is not included.

> Fedora CoreOS provides best-effort stability, and may occasionally include regressions or breaking changes for some use cases or workloads.


This is the Red Hat "stay in business" business model, not the CoreOS "make it up in volume" business model.


How is breaking compat good for business?


Or IBM


[I work at Red Hat, but not related to CoreOS.]

A small nit-pick: Not having Vagrant is not a "significant restriction".

FWIW, there is the robust and well-documented virt-builder[1] tool that plays well with KVM, Xen and other stacks; a quick example is sketched below the links. (Related documentation here[2] -- replace "f27" with the latest Fedora release.)

[1] http://libguestfs.org/virt-builder.1.html

[2] https://developer.fedoraproject.org/tools/virt-builder/about...
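
A minimal sketch of what that looks like in practice (assuming the libguestfs tools are installed; release name, size and password are just placeholders):

    # build a ready-to-boot Fedora image in one shot
    virt-builder fedora-31 --size 10G --format qcow2 \
        --root-password password:changeme -o fedora-31.qcow2
    # then import it into libvirt/KVM
    virt-install --import --name f31 --memory 2048 \
        --disk fedora-31.qcow2 --os-variant fedora31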


>> The rkt container runtime is not included.

rkt is dead


It is? The GitHub project page does not indicate that: https://github.com/rkt/rkt/


A former rkt dev here. rkt was archived by the CNCF with our blessings. Was just speaking to other rkt folks at FOSDEM about archiving the project on GH as well, which should happen shortly. We will also announce deprecation of rkt in Flatcar Container Linux very soon. rkt really changed the container runtime landscape for the better, and we're happy to see that other projects improved because of it and that the space was able to consolidate a bit.


It's really dead. rkt never got much adoption and now Red Hat is promoting Podman instead.


But Podman is not the same thing at all.

Oh well. I always liked rkt, mostly as a sane alternative to Docker’s client/server and security model. What’s the best alternative nowadays?


The most robust alternatives are containerd and lxd.

- containerd was spun out of the Docker engine to address community criticism. Pretty much every reason for creating rkt in the first place has been addressed by containerd.

- lxd is very similar to containerd, but evolved out of the lxc userland tool.
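
For a feel of day-to-day use, basic usage of each looks roughly like this (image and container names are just examples):

    # containerd, via its bundled ctr client
    ctr images pull docker.io/library/alpine:latest
    ctr run --rm -t docker.io/library/alpine:latest demo sh

    # lxd, via the lxc client
    lxc launch ubuntu:18.04 web
    lxc exec web -- bash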

There is also Podman and Cri-o, but I would not recommend those.

Unlike containerd and lxd, they were not created to solve an actual user problem, but to advance the interests of some vendors to the detriment of others.


> Unlike containerd and lxd, they were not created to solve an actual user problem, but to advance the interests of some vendors to the detriment of others.

Where can I read more about that?


'Between the lines'


systemd nspawn


Wow, I'd never heard of that. I've been using LXD for a while now and love it. From a quick glance at the docs, I'm not sure what benefits this has, apart from not requiring Snap. :D


If you're on a systemd distro, one advantage is you already have systemd-nspawn. Although, on debian boxes, it's split out into the systemd-container package.

Another advantage is it's somewhat integrated into the rest of systemd, having hooks into systemd-machined and the machinectl tooling, and an out-of-box instance unit file for systemd-nspawn@ where the instance name maps to the machine name. Meaning you can trivially start a container w/`systemctl start systemd-nspawn@that-contained-webservice` having nothing more than something useful in /var/lib/machines/that-contained-webservice/, or enable it to start at boot like any other systemd service i.e. `systemctl enable systemd-nspawn@that-contained-webservice`.
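
A minimal sketch of getting from nothing to a running machine that way, assuming a Debian-ish host with debootstrap available (the machine name is just the example from above):

    # populate a rootfs under /var/lib/machines
    debootstrap stable /var/lib/machines/that-contained-webservice
    # boot it as a unit, get a shell inside, enable it at boot
    systemctl start systemd-nspawn@that-contained-webservice
    machinectl shell that-contained-webservice
    systemctl enable systemd-nspawn@that-contained-webservice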

BTW, rkt was basically just a wrapper around systemd-nspawn, though the pluggable stages supported alternative containment mechanisms. The nspawn stage1 is what was originally shipped from the beginning.


Won’t be long until we do systemd ls...

I jest but systemd really is taking over a lot of functionality


It's less worrying if you view systemd as a mono-repo containing a collection of related projects maintained in one place, one of which is PID 1 - which really should have been renamed to systemd-initd.

The fact that Debian is able to isolate the nspawn-related bits into systemd-container without breaking everything speaks to the modular arrangement. Though the project may be a mono-repo under the systemd umbrella, it's not a monolithic beast antithetical to unix tradition as many like to claim.

It's odd how *BSD people don't get all up in arms about their core system pieces being in one repo, but the Linux world loses its mind when sprawling messes get a little more consolidated, even though it's for the better.


You say that the mono-repo contains a collection of "related" projects, but how many of those projects is it possible to install and use without installing and using at least one other project from the same repo?

It's possible to have a system with "ls" and without "grep", and vice versa, at least in principle. More importantly, it's possible to replace "ls" with a competing implementation, without having to change "grep". The systemd ecosystem is not structured in a way that lets alternatives be explored.


As cycloptic mentioned, journald is one of the few tightly-coupled components.

The difference is the utilities you mention interoperate at the level of bare UNIX pipes and execv/argv. Systemd components are largely integrated via dbus (UNIX domain sockets), as they're mostly services, daemons, which users don't generally interact with via execv/argv.

You're comparing apples and oranges here, and I'd like to note that GNU has already consolidated a lot of those CLI utilities into monorepos and I expect more consolidation like that to happen in the future as it's a natural evolution as the system stabilizes and active developers move on to other projects.

I think your argument would be more valid if systemd weren't establishing stable dbus interfaces, and instead were inventing all sorts of snowflake, constantly changing interfaces, but that simply isn't the case.

We already have examples of systemd components being forked and used independent of the project, to fulfill some of those interfaces without bringing in the entirety of systemd. [0] [1] I have mixed feelings about those efforts, but it at least demonstrates an ability to relatively trivially break off pieces you like and leave out the stuff you don't.

[0] https://wiki.gentoo.org/wiki/Eudev

[1] https://wiki.gentoo.org/wiki/Elogind


> I think your argument would be more valid if systemd weren't establishing stable dbus interfaces, and instead were inventing all sorts of snowflake, constantly changing interfaces, but that simply isn't the case.

This didn't _use_ to be the case, but then I just looked it up, and… Yay, it actually is now! † I just want to say thanks (to the whole systemd team) for that!

https://systemd.io/PORTABILITY_AND_STABILITY/ — the last time I checked was when the freedesktop.org page was still current, so it had been a while.
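
Purely as an illustration -- if memory serves, logind's dbus API is among the interfaces covered there, and you can poke at it from any systemd box:

    # list the methods and properties org.freedesktop.login1 exposes
    busctl introspect org.freedesktop.login1 /org/freedesktop/login1
    # call one of them
    busctl call org.freedesktop.login1 /org/freedesktop/login1 \
        org.freedesktop.login1.Manager ListSessions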


>but how many of those projects is it possible to install and use without installing and using at least one other project from the same repo?

Nearly all of them. Have you actually tried to do this? The only thing I can think of off the top of my head that won't really work separately is journald.


People actually doing this (i.e. trying to use systemd components on a non-systemd system) end up forking the critical bits and pieces, because trying to build and use them directly from upstream doesn't work well.

See: https://github.com/elogind/elogind https://github.com/gentoo/eudev

So yes, systemd is a mono-repo containing a bunch of loosely-coupled projects, but they are still coupled too tightly to sanely distribute and use separately without a fork.


> but they are still coupled too tightly to sanely distribute and use separately without a fork.

That's not true, forking becomes necessary when what you actually want is a different implementation fulfilling the same dbus interface.

If you just wanted intact systemd-logind and none of the rest, you could fairly trivially build systemd from source and package just logind and libsystemd and get on with your life. Maybe you'd have to carry some patches to inhibit some things like cgroups meddling in the systemd way, but that's no different than what say Debian does for practically every upstream tarball it packages.

Those projects have in a very real sense forked the components for the purposes of modifying their implementations in ways too substantial for some small packaging-time patches to cover.

I'd argue that it actually speaks to the modularity and organization of systemd's code that forking was a more attractive option for these folks than starting over with just the dbus interface in hand.


systemd itself should just be "serviced" and all these other projects should have their own names. That would make far more sense.


Systemd didn't take this over, nspawn is a pretty small wrapper around functionality that already existed. It turns out containers are not really that special compared to services, and most of the plumbing was already there in the service manager anyway.


> nspawn is a pretty small wrapper around functionality that already existed. It turns out containers are not really that special compared to services, and most of the plumbing was already there in the service manager anyway.

That's more than a little misleading. It's not like nspawn just calls into the service manager to get things done on its behalf via dbus or something like that.

If that were the case, rkt would only have worked on systemd hosts, since it used nspawn to setup its containers.

While it's true nspawn shares a bunch of code in common with the service manager, being in the same repository, it's a substantial program on its own and can function entirely independent of the service manager.

There was a time when nspawn actively required running on a systemd-booted host, but it was completely unnecessary and that check was removed while rkt was being developed. [0]

It's not some thin little ergonomic wrapper around existing service manager facilities.

[0] https://github.com/systemd/systemd/commit/4f923a1984476de344...


>It's not like nspawn just calls into the service manager to get things done on its behalf via dbus or something like that.

Yes, it literally does? https://github.com/systemd/systemd/blob/master/src/nspawn/ns...

Additionally there is a lot of shared functionality in libsystemd. Take a look at the rest of the code in nspawn and see how little it actually accomplishes.


> Yes, it literally does? https://github.com/systemd/systemd/blob/master/src/nspawn/ns...

No, it literally doesn't. That's just registration with the service manager, and it's optional. Basically it's to make the service manager aware of nspawn's actions, when it's on a systemd host.

I already pointed out they share a lot of code. The service manager process doesn't do squat on behalf of nspawn.


It doesn't matter that it's optional and you could use some other service manager. The minimal amount of stuff it does is not really that useful without registering with a service/container manager. Of course it doesn't have to be systemd and machinectl, but anything else would have to implement the same dbus interface if it wanted to work the intended way. My point is that nspawn would not have been written if it couldn't piggyback on this work that was already done. Otherwise all you have is a cgroup, some firewall rules and some mounts in a random folder, which, as demonstrated here recently, can just be done with a small bash script.
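
For reference, the core of that approach is little more than unshare plus a chroot (the rootfs path here is made up; this is not the script being referenced):

    # give the process its own mount/UTS/IPC/net/PID namespaces, then chroot
    unshare --mount --uts --ipc --net --pid --fork \
        chroot /srv/containers/demo-rootfs /bin/sh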


This 'joke' has been repeated so many times after every single release or even mention of systemd that it utterly baffles me how somebody could actually type it again.


systemd is a container runtime even without nspawn... you can control all of the namespaces and control groups via regular service units. Not sure if you can pivot_root too, but I would not be surprised.
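
A rough sketch of that with nothing but ordinary unit directives (paths and the unit name are made up):

    cat > /etc/systemd/system/jailed-demo.service <<'EOF'
    [Service]
    ExecStart=/usr/bin/sleep infinity
    # chroot-style root plus private namespaces and a cgroup memory cap
    RootDirectory=/var/lib/machines/demo-rootfs
    PrivateNetwork=yes
    PrivateTmp=yes
    ProtectSystem=strict
    MemoryMax=256M
    EOF
    systemctl daemon-reload
    systemctl start jailed-demo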


Void Linux, Gentoo, and Arch Linux package LXD without Snap. Perhaps other distros too.


Podman is just a "control panel" for CRI-O.


It was donated to the CNCF years ago. As of today, it's the only "Archived" CNCF project:

https://www.cncf.io/archived-projects/ (https://web.archive.org/web/20200205190817/https://www.cncf....)


It is, 100%. We're finishing our migration away from it right now. Never getting Kubernetes support killed it.


Last release was 2 years ago. It's either dead or feature complete and bug free.


Seems it’s at least trending towards dead. Hashicorp Nomad recently deprecated the rkt driver for lack of adoption.


As noted elsewhere in the thread: indeed rkt is dead.

https://github.com/pascomnet/nomad-driver-podman is a WIP Podman driver. We're discussing creating our own containerd-based driver, but there's no plans yet.


Yet another instance of RedHat buying something I used/liked and killing it off and offering a shitty alternative. And yet another reason I've tried to avoid giving RedHat money over the last 20+ years.


I'm especially sad since I finished porting some of our Docker infrastructure to CoreOS today. I guess I missed the proverbial writing on the wall.

Thank god Flatcar Linux looks to be a viable alternative and easy changeover.


How did you miss the announcement back in July? https://fedoramagazine.org/introducing-fedora-coreos/


You have the stable release of Fedora CoreOS now.


What product have RH killed after acquisition?


well, good thing there is https://www.flatcar-linux.org/


> In fact, if CoreOS Container Linux disappeared tomorrow, it would have very little impact on the Flatcar Container Linux project.

Well we will learn how true this is very shortly.


Chris from Kinvolk here. Our Flatcar builds are already completely independent of CoreOS builds. This is quite exciting for us as we can finally start updating the included software versions. We've been doing this a while for the edge channel (https://www.flatcar-linux.org/releases/) and have been waiting to do that with the other channels.


It would be cool if https://github.com/coreos/container-linux-update-operator were also part of Flatcar.


Interesting that they are yanking AMIs on 1st Sep.

> New CoreOS Container Linux machines will not be launchable in public clouds without prior preparation.

Are people here moving to FlatCar or Fedora CoreOS?


FlatCar. I’m using coreos-cloudinit for configuration, and that seems to be deprecated with no proper replacement.


Fedora CoreOS. Oh my. I didn't even know Fedora Core is not the full name of the ordinary Fedora OS any more...


12 years have passed. Glad that you woke up.


I still don't feel like Fedora CoreOS is a good name choice for a new OS distinct from what used to be named Fedora Core. Perhaps they could name it after another kind of hat or something.


What about RancherOS? Is that a viable replacement?


I'm surprised more people aren't mentioning RancherOS. It's a great "Docker Only" OS, like many of the other options mentioned. It doesn't have the market that CoreOS did, that's for sure, but still a great tool. It's tiny and being actively updated.

Plus it's got good RPi support for all those serious Big Software Company deployments /s


Can someone recommend an immutable Linux distro, which offers auto-updates out-of-the-box and supports arm64 on RPi?

Have been searching for something like this for a while for a personal project, but unfortunately neither Flatcar nor Fedora CoreOS seem to support arm64, and RancherOS does not seem to offer fully automatic updates.


I'm not sure exactly what you mean by "immutable" but you might find NixOS (https://nixos.org/) interesting. However, it might not be the best experience at the moment for the Raspberry Pi 4 (https://github.com/NixOS/nixpkgs/issues/63720).


As choward already mentioned, I think NixOS is a very good choice: aarch64 is decently well supported with a bunch of people running NixOS on their Raspbys, and cross compilation support is really good too if needed. See https://nixos.wiki/wiki/NixOS_on_ARM/Raspberry_Pi for more info, or hop into #nixos and/or #nixos-aarch64 on Freenode for help


The ARM version of Fedora CoreOS is just known as the Fedora IoT Edition: https://iot.fedoraproject.org/

It's slimmer and more tuned for AArch64 and SBC type systems.


I mean this makes sense, right? Fedora Silverblue exists. Or are they totally separate products?


Silverblue is an effort to bring this technology to the desktop AFAICT. Fedora Atomic Server uses some of the technology for server workloads, but maintains some traditional structure and architecture.

CoreOS is intended to be a “new” architecture for servers if I am not mistaken. I may be :-)


CoreOS was built from the Gentoo / ChromeOS / Chromium OS family. There is a minimal set of services to run containers. Rootfs is read-only. Updates are done atomically by updating the standby rootfs, which is then swapped in at boot.

There is no package manager. You bring everything else with images and containers. If tools are not found on the distro, you have to get them yourself by starting up an image that contains those tools.
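
In practice that looked something like this (the bind-mount path is from memory):

    # Container Linux shipped a `toolbox` helper that drops you into a stock
    # Fedora container for debugging the host
    toolbox
    # or, ad hoc, run any image that has the tools with the host bind-mounted
    docker run --rm -it -v /:/media/root alpine sh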

The build system that builds a new release is Gentoo. All the packages there get updated, though the final release does not contain emerge or anything that can actually compile or build anything.

After CoreOS got bought out by Red Hat, they started porting over those ideas using the Fedora build system (so Red Hat packages, yum, etc).

I think the CoreOS Container OS will live on inside GCP as the customized container OS used as the default distro on GKE (managed Kubernetes).

Edit: I see someone mention FlatCar. Neat. I guess that is more of a successor project than the one used in GKE.


>"I think the CoreOS Container OS will live on inside GCP as the customized container os as the default distro used on GKE (managed Kubernetes)."

I found this interesting. Does anyone have any insights into how or why Google ended up choosing CoreOS for their GKE offering?


(Former CoreOS/Red Hat)

I am 99% sure that this offering is only similar in nature, not a fork or using any bits from Container Linux


Historically CoreOS was the first OS you could easily PXE boot and have a working container host without performing installation. This was supported not as a secondary thought, but specifically designed to make scale out easier.

A lot of these concepts and benefits apply when you consider cloud images as opposed to PXE boot images. With CoreOS, there was little difference in configuration between these two patterns; if the infrastructure had the same topology you could essentially use the same config (VM or bare metal).

Google Ventures also invested money in CoreOS at one point.


Thanks for the insight. I was wondering what the two patterns are exactly that you referred to here:

>"With CoreOS, there was little difference in configuration with these two patters, if the infrastructure had the same topology you could essentially use the same config (VM or Baremetal)"

Also, are "cloud images" VMs then, similar to AMIs in AWS?

Cheers


Yeah. I didn't mean to imply Google chose CoreOS to work off of, just that the ideas are there.


They didn't. Google's Container-Optimized OS is based on Chromium OS. It's a similar concept, that's all.

https://cloud.google.com/container-optimized-os/docs/concept...


Didn’t know the GKE base OS was based on CoreOS. Is this documented somewhere?


Sorry, I misspoke. I did not mean to imply that the Google container OS itself is a hard fork, but rather that the ideas behind it live on. I think it was based on ChromiumOS, not CoreOS Container OS, at least from what I saw when I peeked into running GKE nodes.

As I mentioned in the edit, it looks like FlatCar is a true successor fork.


Fedora Silverblue is for desktops, Fedora CoreOS is for servers, and RHEL CoreOS is for supported enterprise servers.


What did CoreOS offer that say ClearLinux, or good ol' Debian do not?

It seems like the only difference is you've gotta `packagemanager_xyz install` a couple packages...?


There was no native userland. You picked a userland and it ran in a container along with /etc and other mounts. The idea being that the OS itself should be separate, developed as a whole and immutable, like an appliance OS such as FreeNAS, XenServer or VMware ESXi. For ops lifecycle supportability (installs, upgrades, downgrades, etc.), keeping the OS immutable has a lot of advantages in the "system entropy" department because users and admins can't corrupt the OS itself as easily, it can be replaced more easily, and it's far easier to run HIDS against. If I were developing an immutable OS, there would be a hidden volume containing signed squashfs volumes, and updates would be delta overlays that could be merged once the older base volume is no longer needed.
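
A toy sketch of that last idea, not something any shipping distro does exactly this way (all paths made up):

    # read-only (signature-verified) squashfs base...
    mount -t squashfs -o loop,ro /updates/base-1.0.squashfs /run/base
    # ...with a writable delta overlay merged on top
    mount -t overlay overlay \
        -o lowerdir=/run/base,upperdir=/var/delta/upper,workdir=/var/delta/work \
        /sysroot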

Another interesting Linux distro that uses virtualization rather than merely containers to isolate system components for security is Qubes OS.

Really, I think any new Linux distros should:

1. be developed as an immutable whole

2. run most system and user-added services in containers

3. have package collections like nix or habitat that don't have dependency hells

4. make service and configuration management easy

5. not reinvent the wheel without specific goals and niche/s to occupy that are better than what came before

6. do damn good jobs of smoothing package rough edges, upstreaming patches and rapidly releasing security fixes (including ksplice/kpatch)

7. least surprise everywhere

8. think carefully about whether to go systemd or s6/runit/daemontools

9. manage logs better, preferably without local text files, streamed across a distributed system that scales like flume or kafka


My team at a Very Large Software Company runs CoreOS for two main reasons:

1. Reduced attack surface due to a very slim OS with no package manager.

2. CoreOS Ignition, a killer app OS provisioning/customization tool. No weird barely-documented kickstart scripts, and no long build and upload times for OS images. We can make changes/customizations to our OS and have them deployed in a cloud within literally minutes. We consider this a competitive advantage.


I think kickstart is quite well documented, both upstream and in RHEL:

https://pykickstart.readthedocs.io/en/latest/
https://access....

Any concrete examples of stuff the existing docs do not cover?


I've maintained both kickstart and ignition professionally. The documentation and readability of Ignition is absolutely way better than kickstart.

A simple example would be downloading a binary file and checking its hash during boot, with retries if the connection is flaky. Three lines of declarative state in Ignition vs complex scripting in kickstart.
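
A rough sketch of what the Ignition side looks like, written from memory against the spec 2.x config format (path, URL and digest are made up); the boot-time download, hash verification and retries come from Ignition itself rather than hand-written scripting:

    cat > mytool.ign <<'EOF'
    {
      "ignition": { "version": "2.2.0" },
      "storage": {
        "files": [{
          "filesystem": "root",
          "path": "/opt/bin/mytool",
          "mode": 493,
          "contents": {
            "source": "https://example.com/mytool",
            "verification": { "hash": "sha512-<expected-digest>" }
          }
        }]
      }
    }
    EOF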


Ideally one uses mainly kickstart commands to configure the system during the installation - then the installer (Anaconda) takes care for everything for you. If you use custom scripts with kickstart, you are pretty much on your own, like with any other custom script.

BTW, maybe open an RFE to include a helper script in the installation environment for easy & robust file downloading?


Stuff that is happening with CoreOS is a complete NIH show. It's really awkward what they are doing.



