Fedora Atomic Desktops (fedoramagazine.org)
112 points by HieronymusBosch on Feb 9, 2024 | 105 comments


If you're not familiar, the Atomic project is really interesting. Its focus is stability and reproducibility, trying to solve the fragility that can happen when the default way to use software in Linux is `sudo apt-get install`.

There's a community offshoot called Universal Blue (after the original Atomic image, Silverblue). It uses the standards set for containerization to make userland configuration reproducible as well. There's a manifest (Containerfile) that enumerates all the modifications, which means an upgrade is just bumping the version of the base image and replaying all the modifications from the manifest. It's also meant to limit `sudo` usage, so you're not in the habit of giving root to random software you downloaded from the internet.
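
To make that concrete, a Containerfile in that style looks roughly like this (a minimal sketch, not Universal Blue's actual manifest; the image tag and package names are just illustrative):

    # Bumping this tag is the whole "OS upgrade" step
    FROM ghcr.io/ublue-os/silverblue-main:39

    # Bake modifications into the image at build time
    # instead of mutating each installed machine
    RUN rpm-ostree install distrobox tailscale && \
        ostree container commit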

Their most famous image is Bazzite, which will replicate the SteamOS experience on generic hardware. They also have Bluefin for software developers.

I haven't used it myself, but I find the concept fascinating. I expect Jorge and Kyle from that project will find their way to these comments.


> It uses the standards set for containerization to make userland configuration reproducible as well. There's a manifest (Containerfile) that enumerates all the modifications, which means an upgrade is just bumping the version of the base image and replaying all the modifications from the manifest.

Are the Containerfile syntax and reproducibility as good as configuration.nix / NixOS?

I love NixOS but it’s a very acquired taste, to the point where even I occasionally wish I was running something bog-standard. If this is similar to NixOS but closer to regular Linux, that’d be nice to recommend to friends.


It's not exactly reproducible because there's no version locking; however, after running the Containerfile you have a snapshot of the filesystem that is ready to use and that you can save. Universal Blue images use the GitHub Container Registry, which keeps 90 days of history, so at least 90 days of rollbacks are available.
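
For example (a hedged sketch; the exact image name and date tag are illustrative, but rpm-ostree really does rebase onto registry images and keep the previous deployment around):

    # Rebase the machine onto a specific older build from the registry
    rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/bazzite:39-20240201

    # Or just boot back into the previous local deployment
    rpm-ostree rollback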

I'm currently setting up a Bazzite machine using a GitHub Action to build an image every day from Bazzite's image, adding/removing packages and files on top. I have the DE, login manager and all their customizations in the image, and for CLI utilities and things like that I use Home Manager.

I like this setup because you just need to know Linux to customize the image; Containerfiles are just a series of commands or file copies from the repo. Compared to Nix, it's easier.
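
The shape of such a Containerfile is roughly this (a sketch with placeholder package names and paths):

    FROM ghcr.io/ublue-os/bazzite:stable

    # Copy customizations kept in the repo into the image
    # (Universal Blue convention is to ship system files under /usr)
    COPY system_files/usr /usr

    # Add and remove packages on top of the base
    RUN rpm-ostree install sway greetd && \
        rpm-ostree override remove firefox && \
        ostree container commit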


Thank you for the info!


I've only used NixOS, but the Containerfile looks more like a shell script than a Nix config:

https://github.com/ublue-os/bazzite/blob/main/Containerfile


    AS incomprehensible && \
       as nix can be && \
       i would take it && \
       any day over && \
       configuring a desktop OS && \
       with the horrible && \
       Dockerfile syntax


Containerfile is just what the IBM containers group calls a Dockerfile. They are 99% compatible.


That looks like something I wouldn't wanna use, all imperative


Same. I can't be the only one who feels that Nix is doing the right thing the wrong way. The right thing being reproducible, declarative, composable environments; the wrong thing being its language and tooling. Too often I feel like serious Nix users spend a distressing amount of time manually doing package manager tasks, so the way forward is to stop doing exactly that. Going back to imperative composition is a step backward that will never help people free up time away from package management.


FWIW I am starting to use home manager on my new macOS workstation, and I haven't had to dig too deep into Nix, nixpkgs, or NixOS.

I might hit limits soon as I rice my neovim install.


Make sure to have a look at nixvim: https://github.com/nix-community/nixvim


The language is just JSON with functions. It's actually so nice to write configurations in that I wish it were easier to use as a standalone thing.
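
A tiny example of what "JSON with functions" means in practice (illustrative only, nothing project-specific):

    # An attribute set (the JSON-ish part)...
    { pkgs, ... }: {
      # ...where values can be computed (the functions part)
      environment.systemPackages = map (p: pkgs.${p}) [ "git" "htop" "ripgrep" ];
      networking.hostName = "desktop-" + "01";
    }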


I wish guix was a little more mainstream.


I will say, I prefer Bazzite for dev too.

In general Project Bluefin is newer, but seems to be trying to get into gaming too.

Likewise, Bazzite is considering developer images. So I might switch to those.

But it's so easy to switch. I currently use Bazzite + Nix + Home Manager + Flatpaks and it has been fantastic. I only layer Tailscale and a few minor things that need to be system-level to operate right.


Bluefin is actually older; it's the base that kickstarted the whole development of Universal Blue. They're pretty much equally mature, too. Bazzite is more gaming-oriented, Bluefin more general-purpose. Bazzite is way more popular since gaming attracts people.


Can you explain what you mean by:

"the fragility that can happen when the default way to use software in Linux is `sudo apt-get install`"


Yeah, I am wondering the same. Is this referring to some kind of versioning conflict (like the old Windows DLL Hell)? Does that regularly happen in any Linux distribution repository? Or is this a matter of people going cowboy and mixing in other random repositories on top of the distro? I see the whole role of the distribution maintainers being to provide a self-consistent repository that doesn't have this kind of problem.

And as a long-time Fedora user, I don't think I've seen such conflicts with the morally equivalent yum/dnf commands. But, I am somewhat rigid about not adding third party repos or RPMs to my systems. The only two exceptions I've come to accept are repos from rpmfusion.org and postgresql.org.

While I have certainly seen some bugs in Fedora over the decades, I don't see how some "atomic" solution helps here unless it means reorganizing the community QA resources to test some "minor releases" which batch together a set of package updates, versus trying to support continuous integration where each package can update individually. That would actually worry me though, as my own career experiences cause me to prefer continuous-integration approaches like the traditional Fedora distribution.


> But, I am somewhat rigid about not adding third party repos or RPMs to my systems.

This is the reason why packages seem so stable, you're deliberately staying within a well-tested ecosystem.

Fedora release upgrades probably go well for you also!

Packages themselves are a perfectly fine distribution method [under the same guidance].

Once you start mixing packaging spec guidelines and packages of varying quality, you end up wanting compartmentalization like containers/bubblewrap.

Off the cuff example: Fedora makes heavy use of macros in their RPM specs. Most third party packages don't.


Every package can do whatever it wants to your system, upgrades fail when surprising config changes occur, and it all hinges on maintainers' knowledge of the state of the distro as well as the software they're packaging.


This issue is not specific to Linux, and anyone who's worked in development should be pretty familiar with it. You get onboarded to project A which requires Node; okay, you install Node. Some time later you get onboarded to project B which requires a different version of Node. Fine, get nvm and jump between different versions of Node. A bit later you're asked to help out on legacy project C, which only works with Python 2, while you also need Python 3 for newer stuff. After that it's only a matter of time until you need something which requires Homebrew, and then all bets are off. Etc.


> trying to solve the fragility that can happen when the default way to use software in Linux is `sudo apt-get install`

What fragility is that?

Is it something outlined in https://wiki.debian.org/DontBreakDebian ?


“What fragility is that? The one described in detail in this document?”

Yes, indeed it is


So, installing random software from random repositories equals fragility? That doesn't seem specific to apt at all. However, the article is Fedora-specific, so maybe people don't like to point out that dnf/yum is susceptible to the same problem. In fact, the article doesn't even try to call out apt, or fragility.

There is a use case for immutable distributions, just as there is one for those distributions which are not immutable.

It is dishonest to present fragility as a basic flaw in apt, when system fragility is a consequence of ignorance.


Yes, I do think that is fragility. Immutable distros, iOS etc have it right - installing software shouldn’t be able to fuck up the system.

People gotta install from "random repositories" because shit they need is not in the official repos, further showcasing the shortcomings of the entire setup and its reliance on maintainers. That derogatory statement works against your argument rather than supporting it.


Nobody's saying "apt is fragile." I used it as an example because it's the install command I'm familiar with, and the one I see most often in Linux install instructions. Ubuntu's popularity made it the default package manager when outsiders think "Linux."


Bazzite is pretty great (I have set up a Steam streaming VM using it). I am also using Silverblue as an ML sandbox, with good results.


Ugh, the Sway/Kinoite Atomic spin is so tempting for me as a regular Fedora user. Two weeks ago, a kernel update borked my laptop, and for some reason it deleted the older versions, so I was forced to live boot, chroot, and install a beta kernel. A few days ago, my desktop stopped booting - something to do with Nvidia drivers, and older kernels don't help. I don't have the time to try and fix it any time soon.

But there were definitely pains with this kind of desktop when I tried it last time. Regular old software I want to install via dnf is painful: you have to layer it on top of the base image, and that makes it "nonstandard" basically from then on. I know they push you toward Flatpaks, but the vast majority of apps I use don't have one (or I prefer not to use the Flatpak version).
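
(For context, layering itself is a one-liner; it stages a new deployment that takes effect on reboot. Package names here are just examples:)

    # Layer packages on top of the base image
    rpm-ostree install fish mosh

    # Show which packages are layered vs. from the base image
    rpm-ostree status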

Can anyone give a more recent perspective? It's been about two years - I probably used Fedora Atomic 35/36.


I use Bazzite. It includes nix and fleek, so you can easily install packages from the massive nixos ecosystem.

I then add Jetpack.io's devbox, which is another Nix porcelain, and use it with direnv to install custom packages for each of my projects.

And I have a few distroboxes (toolbox on generic Silverblue) to do things like build Debian packages.
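
The distrobox flow, for anyone unfamiliar (names here are arbitrary):

    # Create a Debian container that shares your $HOME with the host
    distrobox create --name deb12 --image debian:12

    # Get a shell inside it; apt works normally there
    distrobox enter deb12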


Bazzite's sibling Bluefin is specifically meant for developers (the bluefin-dx image) and as a standard daily driver. Bazzite is targeted more towards gaming and Steam handhelds, although you can use it as a desktop too.

https://projectbluefin.io/


I tried Bluefin and it's still early days and such. Also Bluefin has gaming images, so it seems it's weirdly trying to compete with Bazzite.

Also Bazzite has an issue open to make `-dx` images.

It seems both are trying to do it all with a different main focus. Bazzite is Gaming but can do other stuff just as well. Bluefin is Dev but can do others.

I personally find Bazzite more diverse and capable.


I've been using Silverblue 39 for about 2 months, coming from NixOS Unstable. It's working very well for me. I have some packages layered, like Nvidia and the fish shell, plus https://github.com/CheariX/silverblue-akmods-keys so that akmods modules work with Secure Boot. Things like neovim, pyright, helix, starship, LSPs and CLI applications I install with brew (brew.sh). For desktop things I use Flatpak.

I had a problem with some Flatpak applications (like Steam and Discord) and brew, because brew puts its folder in the $PATH before the default ones (/usr/bin ...) and those Flatpak applications tried to use SSL keys from brew instead of the system ones. I just changed the $PATH order to make the brew bin path come after the system ones.
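
Concretely, that means appending brew's directories yourself instead of letting `brew shellenv` prepend them (a sketch assuming the default Linuxbrew prefix):

    # Put brew's bin dirs after the system ones so /usr/bin wins on conflicts
    export PATH="$PATH:/home/linuxbrew/.linuxbrew/bin:/home/linuxbrew/.linuxbrew/sbin"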

For VSCode I'm not using the Flatpak; I use the tarball, which I extract into ~/applications, symlinking the code binary into ~/.local/bin. It's working well; I don't have problems with VSCode not executing LSPs and lint things. The only problem is that the tarball VSCode cannot update itself, so I need to download the newer version and extract it to ~/applications. There is this VSCode CLI version (https://code.visualstudio.com/docs/?dv=linux64cli) but I was not able to make it use the Wayland backend.
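
The manual update is just this (a sketch; the tarball file name changes with each release):

    # Extract the new release and re-link the binary onto $PATH
    tar -xzf code-stable-x64.tar.gz -C ~/applications
    ln -sf ~/applications/VSCode-linux-x64/bin/code ~/.local/bin/code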


Ah, glad to hear it's working well! I recall layering issues but I'm going to look into switching over. Thanks :)


> a kernel update borked my laptop, and for some reason it deleted the older versions

Deleted older versions? That sounds severe. I hope you filed a bug report if there wasn't one already.

Did you change your installonly_limit?


Hm, going back and checking the dnf log, I believe the old kernels disappeared because I chrooted in a few times in my efforts to force it to install the older kernel. Perhaps that cleaned the package cache, or otherwise uninstalled them / ran autoremove.


I really like Silverblue and run it on a couple of secondary machines (like in my workshop), but it’s still rough for anything off the beaten path.

The largest pain points for me:

- Any kernel modules. I know Ublue has images but I wish Red Hat would just have an official solution that doesn’t require hacky RPMs and such.

- Kernel cmdline args or any initramfs changes: they can't be packaged in the image and need to be applied manually (see the sketch after this list). Maybe it's possible to build a custom initramfs to distribute?

- Secure Boot and enrolling MOKs is very annoying. My current workstation just uses sbctl to sign a UKI against custom keys and everything "just works". This is part of why kernel modules are a pain in Silverblue too.

If you don’t care about kernel modules with secure boot it’s quite nice though. Practically zero maintenance.
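
For the cmdline case, at least the manual step is tracked across deployments by rpm-ostree (the argument here is just an example):

    # Append a kernel argument; it persists into future deployments
    rpm-ostree kargs --append=nvidia-drm.modeset=1

    # Inspect the current kernel arguments
    rpm-ostree kargs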


After working with these types of systems, I'm convinced we need a new type of package manager that works with overlays and merges package databases somehow. That way you can update the underlying image (at your own peril, maybe) and have the overlay package manager see the new versions. Constantly rebuilding everything when the underlying changes is a waste.


Nix?


AFAIK Nix wouldn't solve this as it has the same issue (/nix/var/nix/db). Here's a scenario to better illustrate:

I'm using systemd nspawn with my host root as a lowerdir overlay. In this container I install some packages not present on the host. The overlay upperdir now includes the new packages and the new package database. I upgrade my host, and now the nspawn package database is wildly out of date because overlay doesn't track line-level file changes.

OverlayFS is really handy but it causes a ton of churn from rebuilding everything.
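
For the curious, the setup described above is roughly this (a sketch; the paths are placeholders):

    # Host root as read-only lowerdir; all container writes land in upperdir
    sudo mount -t overlay overlay \
        -o lowerdir=/,upperdir=/var/lib/ct/upper,workdir=/var/lib/ct/work \
        /var/lib/ct/root

    # Boot the container from the merged tree
    sudo systemd-nspawn --directory=/var/lib/ct/root --boot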


The kernel modules problem really highlights why the push to do more in userspace in recent years increasingly makes sense. Hope to see more kernel changes to support this push.


I just got into Silverblue (also love Nix), and I really feel that it's the way Linux "should be". I say this as a Linux user since 1999. If you haven't checked it out, imagine that the base (read-only) operating system (drivers, etc) changes in a very controlled and atomic fashion while all your userland stuff is updated via Flatpak (or Distrobox, etc). The odds of breaking your system are virtually zero, and everything works out of the box. It's amazing.


This is similar to the experience (if not so similar on the implementation side) of running MacOS with Homebrew or Macports: the stuff you directly use stays up-to-date, but the base system is separate so it’s nigh-impossible to mess anything important up just by updating your user-facing packages. You also don’t have to choose between stable-but-old and unstable-and-new—the base OS (including gui!) is stable, the stuff you use can be new, because the two are barely coupled at all.

It’s a lot closer to the right way to manage this stuff, for desktop systems, than traditional Linux package management approaches. That approach is painful to go back to, after getting used to this. I’ll have to give this a try next time I poke my head into Linux-land.


Does Silverblue have the same plaintext system configuration capabilities as NixOS?


I know that Silverblue and Kinoite are more established, but I would have liked to see a consistent rebrand. "Fedora GNOME Atomic" is a better name for the same reasons that "Fedora Sway Atomic" is.


It would be called "Fedora Workstation Atomic" unless they wanted to rename the Workstation branding too.

Keeping Silverblue makes sense to me with that in mind. I feel like Fedora Kinoite should be renamed though. The one "halo" distro can use a distinct name but everything else should follow the new pattern IMO.


Yeah, I've been running Kinoite since it came out and I agree. The name is still very obscure, I'd be happier to just tell people "I run Fedora KDE Atomic".


Silverblue is itself a rebranding of "Fedora Atomic Workstation"


I knew what Silverblue was and was decently certain about Kinoite, but had heard about neither Onyx nor Sericea. I think the rebranding is a smart move here for both brand recognition and searchability. I might have gone a step further and renamed the Gnome and KDE versions as well.

Beyond the naming change, I'm really excited about these projects. I strongly believe that atomicity is the way to go and that eventually many distributions will evolve in that direction. Right now I think the tradeoffs are already worth it, but there may be a ways to go before I'd recommend it as the default for new users. (Even if they might particularly profit from easy error recovery.)

EDIT: I want to add that the easy error recoverability that atomicity provides isn't just important for upstream errors that break one of your upgrades; it also enables much more experimentation. I have learned a lot more about Linux systems because I was able to fearlessly tinker with many integral parts that I would never have touched in a traditional system for fear of having to reinstall. After all, if I broke it, all I had to do was reboot to unbreak it!


I never tried Fedora before, but a few weeks ago Bluefin got mentioned here and I went down a hole reading about Universal Blue and ended up making my own spin. Love it. Immutable is incredible


I use Fedora KDE as my laptop distro and have been interested in Silverblue/immutable versions. However I am not a developer so I’m not sure if it would offer any real benefit to my use case. Mostly use my laptop for web browsing and file transfers to my NAS.

I’ve seen people say that immutable is the future of Linux, can someone explain that if they can?

Does that mean one day all versions of Fedora will be immutable? Is it a security benefit?


In my limited experience, I'd say the immutable spins are even better for non-developers. Getting developer tooling going in Silverblue was enough of a hassle for me to disregard it, for the time being.


I have a SDK that I need to git clone from a repository, and then run a script file from the SDK to fetch the actual tools for my build environment.

Am I right in saying that kind of development environment would be hard to use and maintain in Silverblue?


Why not have a single generic "Atomic" version and provide the desktop environment as a selection during installation, like Debian does? Or is the DE so tightly coupled to the images that it cannot be replaced?


Fedora immutable distros are image based, so yes, the base system is a pre-made unit, over which you can layer minor packages but not something as extensive as an alternative DE. Hence so many spins, as well as separate versions for stuff like Nvidia cards, Asus machines and so on.


Good, they changed the name of Sericea just as I learned how to spell it. :D

I started using Silverblue in October 2022 and now I've been using Sericea for the past 2 months.

Long story short, immutable is the future of Linux.


> immutable is the future of Linux

Or indeed, the past. Happy nixos user since ~2016 here.


I liked Nix, because it worked very well to reproduce "the same system" every time. However, it became unmanageable and a huge time sink to constantly remake how I scripted together my nix flakes per the current/modern way of doing that. And then finding a repository of how someone else managed this on Github and rewriting my configuration again.

Nix ultimately made me feel like I was running Gentoo. Excellent build system, but I could not invest the time in learning ebuilds, and I could not invest the time in using Nix's language to manage its packages. It was just a huge time sink and baggage to maintain.

rpm-ostree greatly lowers the barrier to entry for producing layered [open container] images that we can rebase off of. That is the future among all this atomic stuff. It would be entirely possible to /someday/ build a system with the Nix toolset, then commit the image through rpm-ostree, and have others rebase from it. Best of both.

In the future, with immutable systems, it probably also means being able to have those very robust update systems in place like Chromebooks where you update and switch to partition B with the new image, while having the fallback on partition A. It would also probably make it easier to verify and sign these images for secure boot purposes so then enabling secure boot on Linux becomes easier/convenient, and the secure default.

I still think the Nix language is yet another language I don't see the value in. I'm not learning this, or becoming comfortable in this language, or convincing myself it's a comfortable language to use ...just to maintain my 2-3 laptops when I don't use Nix anywhere else.

Time sink. But a very solid system.

PS: I want them to layer everything on Fedora CoreOS. Make CoreOS the secure thin base, then create all these atomic "Spins" as a derivation of CoreOS.


I have also invested a hugely disproportionate amount of time in writing my Nix config. However, that is only because I found it fun to massively over-engineer everything and wanted to make my work easily usable for others.

That said, I could have gotten away with significantly less and I'm really not sure what you're referring to with having to remake your Flake. I'm not aware that the output schema changed over the last years. Could you provide an example of what caused you to rewrite your configuration?

I'm not doubting your experience, the documentation situation is not great and I could absolutely see someone getting stuck in a situation where they felt they needed to start over. I'd just like to know what the community could do to avoid these pitfalls.


I'm using NixOS and I never really needed to "learn the language". I used the installer, which gave me sensible defaults, and now I just dump more and more stuff into my configuration.nix as I need it.

Their website contains plenty of examples to copy/paste, and since the distro is already pretty old, ChatGPT can easily help too.

Ok so perhaps my config isn't the prettiest but who cares. It works and it's a record of what my system is.


I never got around to the flakes thing - if something isn't already in the repos nowadays, I lose interest quite quickly. It's really helped with just getting on with life instead of futzing around with settings. What does work, is rock solid.


I am also a nixos enjoyer but I have mixed feelings around the fact that I still don't really understand the configuration language.

I'm happy in the sense that it's simple enough that I can install programs and do all the other usual desktop stuff with minimal knowledge. But I'm unhappy in the sense that when I started with NixOS I wanted to replace docker and docker-compose, and I still haven't accomplished that.

But all in all I'm very happy with NixOS for desktop use. No crashes, no bugs. It's the reason I was able to permanently drop Windows.


I know they're kind of experimental still, but I have personally found Nix Flakes a lot more fun to work with, at least once I sat down and actually learned about them. I feel the interface is more consistent, and it gives me the "replacing docker" experience with the ability to make devshells and ephemeral builds.

Granted, I'm not running NixOS on my main laptop, because I have had a hell of a time getting my 2020 MacBook to run Linux stably, but I do use Nix as my sole package manager within macOS, and I do run NixOS on my server and use Flakes heavily there.


I went down the rabbit hole and now feel comfortable claiming I do understand both the programming language and the NixOS module system, which is effectively an embedded DSL, as well as a decent chunk of the nixpkgs conventions.

And it certainly is a great feeling, but I'm not yet sure it's really "worth it". Of course the effect is great, but it was also an enormous amount of work to get there.


Generally speaking, you should never learn a language. You should use a language and let learning happen.


And every time your libc needs to be updated, almost every other package you have installed must be updated at the same time.


And if there's the slightest issue with the update, you can just reboot into the previous generation and continue working until you have time to figure it out later, or just wait until it's fixed!


Fedora atomic desktops have that property, too, without the wasteful property I just described.


Yes, there is a garbage collector to get rid of the old stuff you don't need anymore!


The existence of the garbage collector does not negate the wastefulness of updating every package on the system just because the hash of the C library has changed: even if all the old versions of the packages get garbage collected soon enough, it still wastes network and storage bandwidth to download all the new packages into /nix/store/.

NixOS has some good ideas, but also some seemingly boneheaded or impractical ones.


This is what allows you to have multiple versions of something and all its dependencies installed and not conflict with each other!


No it is not. That is my whole point: NixOS could have one without the other.

Consider modifying NixOS to remove the requirement that the long hex id in the name of a package in /nix/store/ is a cryptographic hash of (among other things) the hashes of every package the package depends on, thereby eliminating the need to upgrade every package on the system every time the libc package is updated. I am pretty sure you can do that while retaining the property that it is easy to have multiple versions of something installed that don't conflict with each other.

I have not actually done that (I have not actually modified NixOS in that way, then tested the result, e.g., by making it my daily driver for a while), which is why I'm using qualifiers like "I am pretty sure", but I'm confident enough it can be done that I consider it worth bringing up in online conversations about NixOS.

The cryptographic hash gives you a guarantee that a binary package you got from the NixOS package servers has not been adulterated in some way on its way to you--a guarantee that you can check without having to go through the trouble of building the package yourself--but IIUC that guarantee is not actually used anywhere to make the supply chain any more secure.

In general NixOS seems to be bad at security or not to care about security: for example https://news.ycombinator.com/item?id=36268776

Again there are good ideas in NixOS (including ideas that seem like they could be used to meaningfully increase security) and I hope anyone creating a new distro studies NixOS, but as a distro to be actually used in anger in the present day I am not impressed.


These look like great ideas, I look forward to trying out your branch! It's pretty easy with Nix, I'd just have to NIX_PATH=<path>/to/nixpkgs-hollerith/ nix-shell -p <some package> to try it out. Good luck!


You probably don't even need to reboot. Only once have I broken a generation badly enough I actually needed to reboot. And that was entirely my fault: I mis-configured PAM and couldn't sudo anymore to switch back.


I don't think it does hot patching the kernel?


> Long story short, immutable is the future of Linux.

If I didn't want my computer to change I would simply not turn it on!


So can anyone help me understand why RedHat/Fedora has a containerized solution for desktop apps (flatpak), but nothing like that for CLI apps? It seems... odd.


I believe toolbx is supposed to fill that gap, if not normal containers


Toolbx, Distrobox, and Nix are supposed to fill that use case.


Because it was already considered a solved problem. You can use standard containers for that sort of thing, so why reinvent that wheel.


If anyone isn't aware, Fedora is currently the GOAT OS. Don't knock it until you try it. Everything just works, and it works like the Platonic form of a desktop should.

A Fedora Cinnamon Atomic would be a wet dream for me; I'm surprised that wasn't prioritized. Budgie Desktop looks interesting.


The spins are created by Special Interest Groups from the community. Spins exist if someone is interested in creating and maintaining them, they are not prioritized by the Fedora Project itself.


I tried switching to Fedora, but the package manager was slower even compared to deb, hardware video acceleration in browsers required me to juggle drivers from RPM Fusion due to licensing problems, and I use AMD, which I would expect not to have issues.


The package manager is a little slower, but `dnf history` is a life-changer. It's so easy to list the history of all transactions and undo a specific one. There are huge performance improvements coming in DNF5, scheduled for Fedora 41 (this fall).
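
For anyone who hasn't used it (the transaction id is whatever `list` shows):

    # List transactions with their ids
    dnf history list

    # Inspect, then undo, a specific transaction
    dnf history info 42
    sudo dnf history undo 42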


Is there a downside to just installing the dnf5 package right now? I see it's available for 38+.


It's not considered polished enough to make the default. Some functionality isn't reimplemented yet. It's difficult to transition backwards once you've switched.


Where can I read about it?


> I tried switching to Fedora, but the package manager was slower even compared to deb

Try setting "max_parallel_downloads = 20" in "/etc/dnf/dnf.conf" the next time you try (the default is 3 which often doesn't saturate the network and is just slowing things down).


Try using the flatpak version of browsers. Acceleration should work ootb


Questions for Silverblue / Atomic Users:

I tried Silverblue a couple years ago and found myself rpm-ostree layering some basic tools like the Fish shell and Mosh. Is layering still the preferred method for installing these types of tools, or do you have a "generic" container (made with toolbox/distrobox) that you jump into for generic shell work, like ssh'ing around and file management?

Also, how do you handle things like custom services? For example, the Nebula overlay network doesn't really have an installer; it's just a single binary. I manually put that in /usr/local/bin, put the configs in /etc/nebula, chmod those configs to lock them down, update SELinux, and create a service file for it. How would I do that on an immutable system?


`/usr/local` is writable, so no change there; that should work fine. I keep a container as my "day to day" Linux and just have my terminal autolaunch into it. You can use any distro's container for this, so it's personal preference.
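
So the Nebula example translates almost one-to-one, since /etc and /usr/local are both writable on ostree systems (a sketch; the unit file is illustrative, not Nebula's official one):

    # Binary and config go in the same writable places as on a mutable distro
    sudo install -m 0755 nebula /usr/local/bin/nebula
    sudo install -d /etc/nebula
    sudo install -m 0600 config.yml /etc/nebula/config.yml

    # /etc/systemd/system is writable too, so a hand-written unit works
    sudo tee /etc/systemd/system/nebula.service >/dev/null <<'EOF'
    [Unit]
    Description=Nebula overlay network
    After=network-online.target

    [Service]
    ExecStart=/usr/local/bin/nebula -config /etc/nebula/config.yml
    Restart=always

    [Install]
    WantedBy=multi-user.target
    EOF

    sudo systemctl enable --now nebula.service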

I'm using Prompt, it's a new terminal designed to make the toolbox/distrobox flow much nicer: https://gitlab.gnome.org/chergert/prompt

It's still relatively new so it isn't on flathub yet but it makes everything mostly seamless.


Good to know. Thanks for the info! I'll be keeping an eye on Prompt.


How do you deal with the lack of systemd in the container? Where do you put software that ships systemd unit files?


I use containerized versions of things, ubuntu and chainguard images mostly.

You can always create containers with init if that's how you want to do that though. Some distros publish images that come that way: https://github.com/89luca89/distrobox/blob/main/docs/useful_...
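
From those docs, the init-enabled flavor looks something like this (the image tag is just an example):

    # An init container runs its own systemd, so shipped unit files work inside
    distrobox create --name ubuntu-systemd --init \
        --additional-packages "systemd" --image ubuntu:22.04
    distrobox enter ubuntu-systemd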


Thanks, that link is chock-full of useful tips!


Don't use immutable distros on machines you might need for bootable disc or bootable thumb-drive use in the future. Found that out the hard way.


> need for bootable disc or bootable thumb-drive use

What does this mean?


What's the catch?


I've been using Silverblue for 2 years now. It is very nice for consuming things, but it gets tedious when you want to do some development, so I really just use it to play Steam games (Flatpak), watch videos and browse the internet.

I've never had any issues so far for those use cases.

It's definitely something I'd install for my parents.


I'm having a blast doing dev on Silverblue! I've got Nix and Devbox installed for reproducible per-project shell environments with just the right packages, which feels really reliable. I also have a distrobox with some general-purpose dev-related packages in case I just need to quickly compile something. Overall a better experience than installing all devtools systemwide, IMO.
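
The per-project flow with devbox is roughly this (package names are just examples):

    # Record project deps in devbox.json, pulled from nixpkgs
    devbox init
    devbox add python@3.11 nodejs

    # Drop into a shell with exactly those packages on PATH
    devbox shell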


yes, devbox is definitely something I will try!


All my dev stuff, editors included, is in a Debian distrobox - since that's usually well supported by tooling, but you could also go with Ubuntu for even better support and/or Nix for more obscure packages.

GUI applications work fine from a distrobox and it makes the integration very painless (eg. VSCodium extensions). You can export them with a built-in command and it will even create a regular desktop entry for them.


I like the idea of atomic base systems; it's very BSD-like. I may have to give this a try.


Is BSD this way? I never used it, but I haven’t read about it either.


Yes, the base system is a cohesive whole, not a set of packages.

You don't update the kernel separately from, say, the core user land, the way you'd update linux-kernel and binutils as separate packages on a linux distribution.

BSD doesn't use the term "atomic" but as near as I can tell it's the same idea.


WTF is a "spin"


A Fedora distro with a different desktop (or sometimes a specialized purpose):

https://fedoraproject.org/spins/



