Fermilab/CERN recommendation for Linux distribution (fnal.gov)
225 points by jaboutboul on Dec 8, 2022 | 139 comments



All this time I had thought that Rocky Linux was winning the fork war over Alma (in the fight to be the successor to CentOS), but this post might change that with a good chunk of the science community throwing their weight behind Alma.

Does anyone else in the audience have insight into whether Alma is more prevalent than Rocky, or is it the other way around?

I know I can run Rocky Linux from DigitalOcean which I appreciate.


Rocky is "winning" if by "winning" you mean more widely used. Here are some graphs charting usage through EPEL statistics: https://rocky-stats.tiuxo.com.

The source used to generate the graphs is available at https://github.com/brianclemens/rocky-stats; however, please be kind to the Fedora servers and don't download the stats database too often if you use it.
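If you just want to poke at the raw numbers without hammering anything, here's a minimal sketch. I'm assuming Fedora still publishes the aggregated countme totals as CSV under data-analysis.fedoraproject.org; the exact path and column layout are from memory, so check the header first:

  # Download the aggregated totals once and work locally
  curl -sL -o totals.csv \
    https://data-analysis.fedoraproject.org/csv-reports/countme/totals.csv
  head -1 totals.csv               # inspect the column layout first
  grep -ci 'rocky' totals.csv      # rough per-distro row counts
  grep -ci 'almalinux' totals.csv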

Also Rocky Linux is far larger in terms of community size, etc.


> Here are some graphs charting usage through EPEL statistics: https://rocky-stats.tiuxo.com.

As someone who has had to use Oracle Linux for quite a few projects due to requirements, those graphs are actually a sobering look at things.

Of course, all of those distros are reasonably similar at the end of the day, but it's pretty clear that Rocky Linux, Alma Linux, and even CentOS Stream are all formidably popular.


Wow: those Alma Linux curves look nice. The Rocky Linux trends are definitely more aggressive, but they look a little bit tulip-fever-ish.

Also noteworthy is the linear nature of the RH progress.

I welcome any further interpretation in a similar (or contradictory) vein to my thoughts.


Also note that genuine RHEL users generally don't want to use EPEL, since it's outside of vendor support.

> The EPEL repository is used by a significant portion of Enterprise Linux users. The data they share are unbiased, and it is reasonable to assume approximately equal proportions of users of each Enterprise Linux distribution use the EPEL repository.

That's from the article, but I disagree.


While it's true that a good number of RHEL users don't use EPEL due to the lack of vendor escalation, there are many that do use it. In fact, they tell Red Hat that until specific EPEL packages are available in the next version of EPEL, they are blocked from migrating to the next major version of RHEL. That feedback finally got loud enough that Red Hat decided to provide headcount to the Community Platform Engineering (CPE) group to create an EPEL team focused on improving EPEL (this is my team at work).

https://communityblog.fedoraproject.org/cpe-to-staff-epel-wo...

That said, the "vendor escalate-able only" group is large enough that RHEL is significantly underrepresented in EPEL stats. My guess is that somewhere around 40-60% of RHEL customers use EPEL. For RHEL rebuilds, I would guess that it's probably in the high 90% range, so their numbers in EPEL countme stats are probably fairly accurate. CentOS Stream on the other hand is also underrepresented, as we have countme data for both EPEL 9 and CentOS Stream 9, which shows there are over twice as many instances without EPEL as instances with EPEL.

Keep in mind that the countme data only includes systems connecting directly to Fedora's MirrorManager. Sites that run their own local mirrors are not included. We will never have a complete picture of popularity between these distros. For example, Facebook runs "millions" (a direct quote from their engineers' conference talks, they don't publish exact numbers) of CentOS Stream instances, which is more than everything else in the EPEL countme metrics combined.
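For those wondering how countme works mechanically: dnf tags its periodic metalink requests with a counting flag, controlled per repo. A rough way to check whether a given system participates (the config-manager line assumes the dnf-plugins-core plugin is installed):

  # Which configured repos opt in to counting?
  grep -r 'countme' /etc/yum.repos.d/ /etc/dnf/dnf.conf 2>/dev/null
  # Opting a single repo out, if you prefer:
  sudo dnf config-manager --save --setopt='epel.countme=0'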


Thank you for the numbers on how much RHEL+EPEL is used. I work for an old company (with a local RHEL mirror), so I'm not sure about the real overall usage.


> Do we have anyone else in the audience that has any insights over whether Alma is more prevalent over Rocky or is it the other way around?

This is the wrong way to reason about it. Distributions live on the shoulders of maintainers/developers. While popularity may be a factor in attracting maintainers/developers, it is not the deciding factor. If the less popular distribution attracts more maintenance/development effort (again, not necessarily head count), it will win back market share in the medium term, and the once-popular distribution will stagnate and fade.


It depends. As a counterexample, think of Scientific Linux [0], which was also an RH clone supported by Fermilab and CERN. It eventually got replaced by this "obscure" distro called CentOS.

[0] https://en.m.wikipedia.org/wiki/Scientific_Linux


Wouldn't it make sense for Alma & Rocky to now combine their efforts?


We picked Rocky Linux because it was the first to get full support from Mellanox/Nvidia.


> Alma

Having just played through the F.E.A.R. games that name gives me weird feelings.


Having been assigned the name in a creepy religious temple ceremony, we are in the same boat


Means soul in Spanish/Portuguese


I love that game, but the first thing that came to mind was the satellite dish array somewhere in the desert.


Having just watched through the series Undone, that name also gives me weird feelings.


I haven't had many projects recently that focused on EL distros, but last year I wanted to publish a public AMI built on top of Rocky. However, someone at the time had decided to configure the published Rocky Linux AMIs to disallow republishing, so that's the only time I actually picked Alma over Rocky Linux.

After the dust settled behind the CentOS model changes, I think a reasonable number of users also shifted to CentOS Stream and Fedora Linux (Amazon Linux 2022 is also Fedora based instead of Red Hat based). For the scientific community, which wants something that changes far less often, it's obviously a different story.


Alma is not exactly prevalent. It does, however, release faster than Rocky. Alma's mailing list is also more informative: its Announce list posts security advisories frequently, while Rocky's Announce list is quiet in comparison.


I have insights.

Alma is done by professionals for professionals (the hosting industry). Those guys were behind CentOS, and are running 70% of the internet.

Rocky is an unpaid community effort.


No.

Alma Linux is done by CloudLinux, and still uses their infrastructure, secure boot certificate, etc (according to their page at https://web.archive.org/web/20221208102246/https://wiki.alma...). They have never had anything to do with CentOS.

Rocky Linux _is_ community oriented, and _is not_ beholden to a specific company, but it is not necessarily unpaid. The majority of the most active contributors to the project are being paid by their respective organizations (CIQ, OpenLogic / Perforce, etc) to do so.


It should be noted that AlmaLinux is a registered non-profit, whereas Rocky Linux has not done such a thing and has bylaws that leave open the possibility of being bought out by a for-profit.

From RockyLinux:

> The Foundation is here for the benefit of the public community. We are a self imposed not-for-profit organization[^1] and thus we will never be driven, motivated, or manipulated by profit or monetary gain.

> [1]: This means the Foundation is a Delaware Public Benefit Corporation, with the objectives set forth in this Charter and the Foundation Bylaws. We do not have an objective to make money for shareholders. As of the time of this writing, the Foundation is NOT a 501(c)* US tax-exempt organization.

Also, AlmaLinux has its own infrastructure. It's not using CloudLinux infrastructure.


Most services are NOT on CL infra. There are a few minor things that need to be moved, but all the core stuff is hosted on Alma's own infra, including the build system/servers.

Source: I'm the infra lead at Alma.


AlmaLinux uses their own secure boot certificate, as noted in their last shim-review: https://github.com/rhboot/shim-review/issues/250

EDIT: actually link to the right shim-review ^_^;


Cloudlinux was the cPanel partner who provided their CentOS updates and support. They (Igor) were the ones who kept CentOS alive previously.

As nice as RockyLinux is, they can never compete with the dedicated support of the commercial partners from the hosting industry. See what they did: https://almalinux.org/blog/looking-back-leaping-forward-a-lo...


No.

Rocky is done by the guy that started CentOS.


Strictly speaking, Greg Kurtzer didn't start CentOS. He started cAos, and people within cAos started CentOS to bootstrap cAos. When the CentOS and cAos people couldn't get along, CentOS left cAos to be its own thing. Lance Davis probably has more claim to being the founder than Greg Kurtzer, as he was the one who actually started the Red Hat Linux rebuild work that led to CentOS.


Rocky McGaugh started building cAos-EL under the cAos Foundation which I created and led.

Due to my role with the cAos Foundation, I was part of the planning, inception, architecture, setup, leadership, and management, and I led the project itself, but Rocky did 99% of the engineering work for cAos-EL-2 (later renamed CentOS-3). After Rocky was working on that, John Newbigin started on what would become CentOS-2.

Lance was there since the early days of the cAos Foundation, and he suggested the name "CentOS" (he also squatted on and held the domain from the Foundation, which was how he took over the project), and he got involved with engineering/development of CentOS-3 after Rocky passed away. While I would agree he is a co-founder, he proved to be opportunistic and acted very unethically, as was further demonstrated by the open letter sent to him by Russ Herrold (another co-founder) and the rest of the CentOS contributors for going AWOL, still holding the domain, and taking the project donations for years.

CentOS split off from the cAos Foundation (a 501(c)(3)) due to Lance's stranglehold on the domain. To be clear, the separation was not mutual, but it was cordial and sugar-coated for the good of the project. I always enjoyed being part of CentOS and working on Linux distributions, so not being part of CentOS was hard on me. This is one of the reasons why I announced a new distribution (Rocky) within 2 hours of CentOS being killed off: I was excited to do a distribution again! You can also imagine how and why I set up the Rocky Enterprise Software Foundation differently from the cAos Foundation, to better protect the project(s).

One last point: just because I wasn't working on core engineering and development of CentOS doesn't mean I wasn't deeply involved for the first years of CentOS, and it's not a reason to discredit my role as project leader and co-founder.


Correct. Although I don't think Rocky has nearly as much money, staff, or industry support as Alma does.

The other issue is RH gets Cent 8/9 Stream CVE fixes out faster than either Alma or Rocky, at least so far.


> The problem is that Cent 8/9 Stream has quicker critical CVE patches because it's essentially the source and is closer to mirroring RHEL.

This isn't entirely true, at least for embargoed CVEs (so the most critical vulnerabilities). RHEL will always get embargoed patches first. Once RHEL releases the patched packages, Alma/Rocky can rebuild them too. The patches may not be available in Stream yet at this point. This has happened a few times in the past.

Also, there are no advisories for Stream, so it's not always easy to tell if a Stream package includes a patch for a particular CVE. Sometimes you have to go hunting in the changelog to figure it out, as the version numbers don't always match with RHEL's.
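If you do end up changelog-hunting, something like this is usually the quickest check (the package and CVE id here are just placeholders):

  # Does the installed build mention a fix for a given CVE?
  rpm -q --changelog openssl | grep -i 'CVE-2022-3602'
  # For a package still in the repos, dnf repoquery --changelogs
  # can do the same, if memory serves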


Hello fellow redhatter in the kernel engineering group ;)


Actually, I'm closer to the OP than to Red Hat, but I've heard this point explained often enough :P


you -nailed- it.


He (gmk) didn't start it, but he certainly provided support/funding to it.


Is he paid to develop it? Serious question, my CentOS boxes have been running for years and I'm not looking to upgrade so I'm not up to date.


Rocky Linux is developed by a large team, some of whom are paid and some of whom are strictly volunteers, but yes, Gregory Kurtzer's own company (https://ciq.co) has invested quite a bit into Rocky Linux. (Many developers, FIPS validation alone is upwards of a million USD, etc.)


He does not develop it whatsoever. Really all he does is talk, so basically zero development value. He certainly likes to take credit though for all the work all of rocky's volunteers do.


Yes, Gregory Kurtzer personally helped in development (primarily the packaging / tools during 8.3 and then 8.4). I know, I was there lol. For example, see the commit log to the early set of Rocky Linux devtools:

https://github.com/rocky-linux/devtools/commits/main?after=f...

"All he does is talk" is unfairly dismissive. We all have our roles, and Greg's is not release engineering.

The "taking credit" bit is an unfortunate misconception; the media likes to attribute the entire project to gmk since he's a notable personality, but he himself does not.


>For example, see the commit log to the early set of Rocky Linux devtools

And that's literally it. After that... Nothing. Not surprising.

>"All he does is talk" is unfairly dismissive.

Funny, because he's the only one ever mentioned or talked to in any article. It'd be nice to hear from the actual developers and not a figurehead. Notice how it's only him? His role is to talk. His other company is also there to take credit, thanks to him and the media. We can blame the media all we want, but the reality is he basks in the spotlight. If that's his actual role, he's doing it well.

I certainly hope your upcoming board didn't drink his koolaid and keeps him out.


I do try to ensure that others are always getting credit; I'm not even mentioned in the release notes crediting who did the work.

And the board doesn't keep people out. If you read the bylaws and charter, you would know that it is all contributing members of the projects that will vote for the board members, and the board will elect the officers of the organization.

You are obviously not an RESF Member, and I trust the Members of the projects to make the best decision for the RESF and the projects. If I am among them, cool, I will always do my best job there. If not, I will support the decision and enjoy knowing that the structure I helped to create is working and will keep the project open, free, and in the community for decades to come.

Last point, it isn't cool to discredit non-technical contributions, every role in an open source project is important.


>And the board doesn't keep people out. If you read the bylaws and charter, you would know that it is all contributing members of the projects that will vote for the board members

Good to know, I forgot how voting works. I hope they do right by the community and not vote you anywhere near the officers of the org. In your words, that means the structure will be working. But that requires faith that there are members who didn't drink your koolaid.

>Last point, it isn't cool to discredit non-technical contributions, every role in an open source project is important.

You're right, advertising your company CIQ using Rocky Linux is definitely up there in important open source contributions. I forgot about that small point.


I’m out of the loop on RHEL forks. Can someone explain why it is important to be binary compatible with RHEL? Is there a lot of software whose binaries will only run on RHEL? Or is this more a case of wanting something free that can still make use of RHEL-specific skills and knowledge?


When I was working for the US government, we had to obey rules against shipping "freeware," which RHEL wasn't, because we paid for it. But it was cheaper and easier to do most of our development on CentOS systems, and because the software was the same we weren't worried about compatibility issues.


> we had to obey rules against shipping "freeware"

I am in the USA, but this is new to me. Can you say more or, better, share links? Regarding science applications, specifically...


I’ve worked at orgs where the problem with freeware wasn’t the lack of cost, but the lack of support/indemnity/assurance and the lack of a contract to establish a legal framework for the dependency.


ok, so if I understand those terms, roughly(?):

support --> contact system for humans to reach other humans with bugs or questions

indemnity --> a legal statement by the vendor that the client is not responsible for long term liability for patents or legal challenges

assurance --> a legal statement that the vendor will take responsibility if the product is not what it appears to be, or is missing parts that are required to fit a purpose

establish a legal framework --> define the copyrights and governing law for rights, responsibilities and disputes between the vendor, their suppliers, the customer and their agents


Spot on. The kind of features that accountable execs care a lot about. The engineer’s conception of companies like IBM and SAP and Oracle is that they sell terrible clunky software, but a substantial part of the value comes in the CYA that the above brings. The users are rarely the real customer.


For a concrete example, have a look at SQLite. SQLite is Public Domain, but it's used virtually everywhere, from phones to airplanes (https://www.sqlite.org/famous.html).

Not all companies are going to trust that something is actually public domain. So, you can get a "Warranty of Title" for SQLite. See https://sqlite.org/copyright.html

And those companies might not be satisfied with "community support". So, you can get paid support for SQLite. See https://sqlite.org/prosupport.html


This was when I was at an FFRDC doing work for the DoD. The rules might be different elsewhere, but I wouldn't be surprised if they were the same.


As soon as you use something like SAP or Oracle, the number of officially _supported_ distributions quickly decreases. For example, SAP HANA supports RHEL and SLES; Oracle Database supports RHEL, OEL (based on RHEL, AFAIK), and again SLES.

So it is not only a question of whether it would theoretically/technically fail on another platform.


I don’t imagine SAP certification would extend to a community-supported fork of RHEL even if it was a 1:1 clone.


Many businesses will use the Forks for their dev environment.


Which is silly because Red Hat will literally give you free RHEL for non-prod if you pay for RHEL in prod.

https://developers.redhat.com/articles/2022/05/10/access-rhe...
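For the curious, hooking a box up to one of those entitlements is the standard subscription-manager flow (the username here is a placeholder):

  # Register the machine against your Red Hat account
  sudo subscription-manager register --username you@example.com
  # Attach whatever entitlement the account carries, then sanity-check
  sudo subscription-manager attach --auto
  sudo dnf repolist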


I think the replies to you missed the main point: standing on the shoulders of giants. RH has a LOT of resources and being a fork of it means you get all that person power for free.


I would prefer to use Red Hat based distros, but their lack of official non-free repos annoys me. I don't want to add a community repo; I want packages maintained by trusted core engineers. Sure, the community repos probably have that, but last I checked I couldn't find any assurance about the security of the community repos. SBOMs don't mean shit if you're installing stuff from random no-assurance locations.


Then you want Debian. We've had FAR fewer problems with Debian than with anything from Red Hat stable.


I've used a bunch of Linux distros over the years and always find myself installing Debian when I just want a desktop.


I'm currently on Ubuntu and have considered Debian. I like the concept of containerisation (snap, Flatpak, etc), but the overly pushy snap integration has broken my workflows in multiple situations without much recourse, so maybe I will give it a try.


If you want "just desktop" like grandparent, you can try https://linuxmint.com/, it's downstream from Ubuntu LTS, but with bad stuff (snap) removed: https://linuxmint-user-guide.readthedocs.io/en/latest/snap.h...


Yeah, I'm all for those distros, but as I work in the security industry, I kind of feel like a 'nobody ever got fired for buying IBM' approach is good.

If I'm hacked, it's not a good look at all; if I'm hacked while using an esoteric distro like Gentoo, it would certainly look much, much worse. My key pain point is trust within repos: Ubuntu audits their repos as best they can (SAST/sanity checks), so at least there's some security there.

I'm otherwise very supportive of mint/arch/gentoo and similar systems.


Perhaps if you care about not being hacked, you could try Qubes OS, which is security-oriented.


I won't yell at people who prefer something else after giving it a try. But I do like it a lot.

On occasions, I had trouble with very new hardware, but all in all, it works very well for me. Does the job, and without drama.


Aren't those assurances what you get by paying Red Hat?


No, paying Red Hat gets you support, not an official proprietary-blob repo


Obviously stability is the most important factor for the people running nuclear experiments, so they choose RHEL. Nevertheless, if I were them, I would consider trying Nix or Guix to potentially turn the configs/builds into something comparably strict and mathematical, making them reproducible.


Config can also be made reproducible with VM images, containers, home-built packages, configuration-management tools (ansible, puppet, …) or a mix of all or some of them.

Nix/Guix is no silver bullet.


Personally, I think the chase for silver bullets does more harm than good. I'd rather work with multiple configuration management tools that do one thing well than deal with new contenders that seem to want to do it all and spend little effort on being interoperable with other needs outside their use case.

As a single example: Guix does not support setting capabilities on binaries in the store; if you want to set CAP_NET_ADMIN on ping, or you want some service to run with CAP_NET_BIND_SERVICE, you're stuck. There is no way to make it happen inside Guix, so you're left with very ugly manual hacks (mount --bind /gnu/store /mnt ; setcap...; umount /mnt). Similarly, neither Nix nor Guix can be used as a deployment tool only, since they do not preserve post-deployment configuration changes (to the point that GuixSD even deletes user accounts if they're not in the system configuration).
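Spelled out, that hack looks roughly like this (the store path is hypothetical; the point is that it all happens behind Guix's back and gets lost on the next rebuild):

  # Bind-mount to get a writable view of the normally read-only store
  mount --bind /gnu/store /mnt
  # <hash>-iputils is a stand-in for the real hash-prefixed store path
  setcap cap_net_raw+ep /mnt/<hash>-iputils/bin/ping
  umount /mnt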


How do you grow your system, or do you suggest that you build your system as a bunch of microservices running inside containers?


What do you mean by growing your system? "How do you run more services?" Or "how do you handle more traffic?" Or "how do you handle more complexity in the application?"

All of these can be handled with any or many of VM images, containers, home-built packages and configuration-management tools.

If you're following the latest trend, you just run VMs based on an image with Kubernetes pre-installed.

Want to run more services? Just deploy more services on your k8s cluster. Want to scale horizontally to handle more traffic? Just boot up more VMs with the same image and increase the size of your k8s pods. Want to handle more complexity? Split your job into microservices and give responsibility for different namespaces to different teams.
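For example, the horizontal-scaling part really is a one-liner once the cluster exists (the deployment name is made up):

  # Handle more traffic by running more replicas of the same pod spec
  kubectl scale deployment/myapp --replicas=10
  # Or let the cluster decide, within bounds
  kubectl autoscale deployment/myapp --min=2 --max=10 --cpu-percent=80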

If you're using a late-2000s, early-2010s model, you would just use VMs and boot them when necessary. You have a VM running HAProxy with dynamic backends.

Want to run more services? Just add more to the base image and route between them with an nginx local to your base VM image. Want to handle more traffic? Just spin up more VMs with the base image. Want to handle more complexity? Use multiple types of VMs with different base images, depending on the service, and give each team the responsibility for configuring its base image.

I can go on and on for packages and/or configuration-management tools. But I think you get the idea.


> Want to run more services? Just deploy more services on your k8s cluster. Want to scale horizontally to handle more traffic? Just boot up more VMs with the same image and increase the size of your k8s pods. Want to handle more complexity? Split your job into microservices and give responsibility for different namespaces to different teams.

As someone who strives to keep things simple (as in running the least possible code to achieve something), this part gave me nightmares.

Nothing technically wrong with it, but the "let's just pile up layers" approach works well for maybe 10% of workloads, where attack surface and resource efficiency can be traded for elasticity.

But then come the tradeoffs: complexity is multiplicative (this won't suffice, prepare to also manage an ingress, then a service mesh, a secret store...), which becomes a security nightmare, leads to technical debt, requires constant updates and manpower, is almost impossible to properly test and document, has high development costs and resource usage.

To make things worse, this behaves more unpredictably than a traditional 3-tier approach with Compose and Terraform, or nix-deploy.

Adding more complexity to solve complexity turns into a self replicating problem.


Complexity is relative. I'm an ansible + packages kinda guy, so of course k8s feels bloated. But Nix, with its "compile everything and store the results in deep directories with sha256 hash names, and if two programs depend on two different patch versions of sqlite, just compile and keep both in parallel like npm" approach, feels really bloated to me.


> if two programs depend on two different patch versions of sqlite, just compile and keep both in parallel

This can make sense in a scientific environment where reproducibility as precise as possible (incl. precise speed and memory usage of every specific algorithm) is very desirable. But IRL, this pattern (different apps ignoring the proper "Unix way" of sharing libraries and updating them independently of the apps, depending on specific versions instead) is always very annoying. Every time I have to run an old app that requires old libs non-existent in my distro version, I just symlink the names to the new ones and everything works great. Every time I download a Python project with requirements set to specific versions, I replace the requirements with "this or newer", update, and everything works great. Theoretically I can imagine a situation where an SQLite version upgrade could break something, but practically I can't; I bet this will never happen to me, and if it does I'll just pay up by fixing the problem manually.
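The requirements change I'm describing is mechanical enough to script (in-place edit with a backup, since some pins do exist for a reason):

  # Relax exact pins ("this version only") to minimums ("this or newer")
  sed -i.bak 's/==/>=/g' requirements.txt
  pip install -U -r requirements.txt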


How do you share shared-object files between executables running in different containers? (Both on disk and in memory)


What I haven't liked about AlmaLinux so far is that it takes more effort to google solutions to problems than it does for Ubuntu.

That may sound stupid or lazy, but there it is.


Ubuntu was/is by far the more popular distro to use outside of the corporate world. There are a lot more blogs, container base images, and tutorials out there because it was popular with tinkerers. This is also the reason it has now become popular inside the corporate world.

You may find it harder to find resources for AlmaLinux/CentOS/etc. if you are using the same methods for searching for resources as you would for a Ubuntu (or even Debian) based OS.

However, there is much better and more comprehensive training material out there for 'Enterprise Linux' distributions (RHEL and CentOS, which also means AlmaLinux and Rocky) because of the widely industry-accepted RH certifications.

So, to make wildly broad statements: Ubuntu based distros are good for finding resources like blogs and tutorials aimed at people who are tinkering and just starting out, maybe from a dev background. EL distros are good for comprehensive training for people who have probably had "sysadmin" as a job title at some point.


> You may find it harder to find resources for AlmaLinux/CentOS/etc. if you are using the same methods for searching for resources as you would for a Ubuntu (or even Debian) based OS.

Not only that, but as soon as you venture outside of the "happy path", you'll run into difficulties if others haven't also solved those problems ahead of time, which can be the case with many DEB distros but isn't always so with RPM ones.

For example, in many RPM distros you'd typically reach for using Podman for running container workloads, maybe even use something like OpenShift (their MicroShift project still being in the works).

In contrast, on DEB you'd at least sometimes go with Docker, because it's widely known, widely used, and still has lots of market share - offering lightweight orchestrators like Nomad or even Docker Swarm out of the box, which is still an okay choice for getting a dev environment running on a few boxes with 8 GB of RAM when you don't have a separate node for managing the cluster.

Now suppose that you decide to go with Docker on RPM. It's definitely feasible and has gotten better, but I recall just a few years back Docker having issues with SELinux, and the container networking utterly breaking because of firewall configuration. You had to use the masquerade option, if I recall correctly, which definitely fixed the issue but also meant that the software package shipped broken by default, in a sense.
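For reference, the workaround was along these lines - from memory, so treat it as a sketch rather than gospel:

  # Let container traffic be NATed out through the host
  sudo firewall-cmd --permanent --zone=public --add-masquerade
  sudo firewall-cmd --reload
  sudo systemctl restart docker   # recreate Docker's iptables rules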

On an unrelated note, I ran into an issue with one of the RPM distros a week back, where if the node has swap enabled and runs low on swap memory, the kswapd process starts eating a lot of CPU resources and basically brings everything else on the node to a standstill.

Not to say that RPM is worse - the LTS EOL cycle alone is amazing - so I wish Rocky and Alma (as well as other distros) the best of luck! But I can also definitely see why many would stick with Debian or Ubuntu (or other DEB distros).


It seems like you might be conflating RPM distros with RHEL and its clones, which seems to me a somewhat common thing to do. SUSE distros are independent of Red Hat and have a different set of pros and cons, but I don't recall seeing or experiencing the Docker or kswapd issues you mention.


I've yet to see anyone using SUSE or openSUSE (might be a regional thing), though I've heard good things about them! Thanks for pointing out that detail!


AlmaLinux is highly compatible with Fedora and CentOS. You should be able to use the same solutions.


I am. The impression remains.


Half of my searches are thoroughly answered on access.redhat.com by very competent RH support people. Ubuntu answers are usually just bad.


It’s totally impossible to google the solutions for Ubuntu because they’ve completely changed the way everything works, multiple times, so all the answers are wrong.


And after all of that, dnf auto-update still doesn't support automatic restarts for kernel updates. Amazing.

Also, still no in-place upgrades, as has been the case forever.
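You can duct-tape the restart part together yourself, though it says something that you have to. A sketch, assuming dnf-automatic and the needs-restarting plugin (dnf-plugins-core) are available:

  # Unattended updates (set apply_updates = yes in /etc/dnf/automatic.conf)
  sudo dnf install -y dnf-automatic
  sudo systemctl enable --now dnf-automatic.timer
  # From cron or a timer: needs-restarting -r exits non-zero when a
  # reboot is required, so reboot exactly then
  sudo dnf needs-restarting -r || sudo systemctl reboot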


I'm afraid to even update ordinary packages on a running system. People who want to update the kernel on the fly are just reckless!


Since everything is already running as a container, it really doesn't matter


Containers are ordinary processes, just with some namespacing for separation.

And not everything is running in a container.


What happened to Scientific Linux? It's a bit odd that it seems to have been discontinued rather than rebased on Alma. Do they no longer believe Scientific Linux adds value?


It was discontinued a while ago in favour of CentOS (there was never a Scientific Linux 8; CentOS 8 was adopted instead).


I went through the Rocks Cluster cycles in 3, 4, and 5 with SLC and Scientific until those fell by the wayside.

Rocky is the underdog you want to win.

Alma is the leader except in terms of security update latency.

The problem is that Cent 8/9 Stream has quicker critical CVE patches because it's essentially the source and is closer to mirroring RHEL.

It's hard to convince corporate folks to use Alma when Cent is still the "safer" choice technologically and provides continuity, even if its governance and lifecycle may be worse.


Apparently Stream is the slowest: https://news.ycombinator.com/item?id=33905616


Not exactly. For most CVEs, CentOS Stream gets them months before RHEL and RHEL clones. But CVEs rated important/critical (or otherwise embargoed) are required to go out to RHEL customers first. Once the fix is live for RHEL customers, two things happen more or less simultaneously:

- The RHEL package source with the fix is published, allowing clone distros to start their rebuild work.

- The RHEL maintainer starts working on the fix for CentOS Stream in public. This may or may not be the same patch as what was released in RHEL, depending on whatever other changes have already happened in CentOS Stream.

These tasks are not the same and take different amounts of time. On top of that, the release pipelines to get the fix out to users are different between the various distros. Sometimes the fix is live for CentOS Stream before the rebuilds, sometimes after. There were some notably slower exceptions with CentOS Stream 8 in the early days, because CentOS Stream 8 is built "inside out".

https://twitter.com/carlwgeorge/status/1439724296742576130

From CentOS Stream 9 onward, RHEL maintainers own their CentOS builds, and things are working much better. There is work in progress to migrate the 8 workflows to match 9.


> reflecting recent experience

What happened? Is this about CentOS's demise?


Yes. It's too bad because they used to support Scientific Linux, but stopped because CentOS was so popular...

Anyway, I'm glad I went with Alma over Rocky, albeit for the wrong reasons (my daughter's name is Alma...)


Sounds like a wholesome reason to me.


You probably know this, but Alma means “soul” in Portuguese.

Is this why you picked the name? I like it. I went with Ada myself (after Lovelace; my wife only found out why years later).


FWIW, it also means "apple" in several Turkic languages (Kazakh, Kyrgyz, Turkmen, Azerbaijani, and maybe others).

Uzbek and Turkish are also pretty close.


I was aware of the Spanish meaning which is the same, but the reason we picked it was more mundane. It was one of the few names my wife and I both liked a lot.

ALMA is also a radio telescope in Chile (the Atacama Large Millimeter Array), which is somewhat closer to what I do.


Ha, I also suggested Ada for the same reason. And also didn’t tell my wife why.

But we ended up not using it.


Not really interesting: there's no technical reasoning beyond "we want Red Hat with extra perks".


It's interesting because there's a question in the air right now over which fork of CentOS will be its spiritual successor since it got bought out. Will it be Alma or Rocky? This post weighs in favor of Alma.


As much as it pains me to say it, FOSS devs are a finite resource, and it seems that being divided is as good as having been conquered. Neither Rocky nor Alma is progressing at the rate a CentOS fork with a hostile RH upstream would progress if they were working together and not duplicating efforts.

What is the philosophical schism between the two? Why can't they split the workload on a single distro and reunite the communities?


It's just like with any distribution: each has its own philosophy and its own way of tackling things. Oracle Linux is still around, for example, but we don't talk about them too often.

Rocky seems to have mostly volunteers. Alma has people who are likely paid, because they come from, or likely still work for, CloudLinux. Either way, you're getting a RHEL clone if you go with either. And keep in mind that devs are a finite resource and always will be, regardless of how you look at it.

The thing to keep in mind is that more choices are better than just having one. Think about it: let's say Scientific Linux had actually made an 8. CentOS users would likely have gone over to them at the EOL date. Since SL didn't keep going, there was only CentOS (and Oracle Linux, but again no one really wants to talk about that - and there are folks who will avoid Oracle like the plague; I don't blame them). You take away the one distribution a large number of folks used, and where are they going to go? Stream? Fermilab/CERN were going to stream and... not anymore. Perhaps the bugs and instability were a bit too much.

Long story short: you don't want another situation like CentOS. The more options, the better. All EL derivatives/clones should operate and work the same.


> but again no one really wants to talk about that

That is the best way to treat Oracle, just pretend it doesn't exist.


  > The thing to keep in mind is that more choices are better than just having one.
I disagree that this is universally true. If (as it seems) Rocky and Alma have similar philosophies then significant dev resources are being duplicated, at the expense of other functionality.


> Fermilab/CERN were going to stream and... not anymore. Perhaps the bugs and instability were a bit too much.

Not exactly. CentOS Stream is still a standard offering for them, and they've told me they are pretty happy with the stability and getting faster bug fixes. The problem is that too many third party vendors are refusing to keep their software compatible with CentOS Stream, so the environments that depend on those vendor's software must stay pinned to RHEL or a RHEL clone. It's unfortunate because if those vendors also targeted CentOS Stream they would be compatible with new RHEL minor releases on day 1, rather than forcing their users to pin to older minor versions while they play catch up.


Probably "we picked the Red Hat ecosystem and are now stuck in it, with significant cost to change", especially if they build their own packages.


How are these long-term support Linux distros able to provide LTS for the thousands of packages they include?

If a security bug is discovered in an old version of RabbitMQ, do the distro maintainers learn Erlang/OTP to patch the bug?


Alma/Rocky are just rebuilds of RHEL. And AFAIK, Red Hat does employ people to do that.


And, the package selection in RHEL is much smaller than in, say, Debian or Ubuntu [1]. There are the popular EPEL repositories with lots of extra packages, but these are community-maintained on a best-effort basis.

[1] And I think the Ubuntu 'guarantees' only apply to the 'main' repository, not the MUCH larger 'universe'.


Alma produces a rather snazzy script to migrate from CentOS, which works really well.

I'm told it also works with Rocky, but I haven't tried it.


Rocky Linux has migration scripts available at https://github.com/rocky-linux/rocky-tools/tree/main/migrate...

An easy shortcut though, in case you have to hand type it out somewhere, is https://rockylinux.org/migrate2rocky.sh
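Usage is the usual curl-and-run affair; if I remember the script right, -r is the flag that performs the actual conversion:

  curl -LO https://rockylinux.org/migrate2rocky.sh
  chmod +x migrate2rocky.sh
  sudo ./migrate2rocky.sh -r   # convert this install to Rocky Linux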


Pretty sure he was talking about ELevate to go from EL7 to Alma 8.


Not a Linux expert.

Skimming Wikipedia, it seems the main selling point of both Rocky and Alma is that they're binary-compatible with Red Hat, right? Could some experts clarify: if you want to be binary-compatible with Red Hat, why not just use Red Hat?


RHEL costs money.


Yikes. Why do people use it exactly? (I use debian, that's all I know)


RHEL is used by companies that need the support Red Hat provides. RHEL and its derivatives have fewer packages in the repos because Red Hat is willing to support every package they ship.


Absolutely love the integrated signing of containers (Sigstore technology).

If more distros supported this, our lives would be far less stressful.

https://www.sigstore.dev/
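For anyone unfamiliar, the basic cosign workflow is pleasantly small (the image name and key paths are placeholders; cosign also supports keyless signing via OIDC):

  # Generate a keypair, sign a pushed image, then verify it
  cosign generate-key-pair
  cosign sign --key cosign.key registry.example.com/myapp:1.0
  cosign verify --key cosign.pub registry.example.com/myapp:1.0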


I bet these guys have a lot of unusual legacy requirements, like wanting Motif and remote X11/Xdm to keep working, which takes Ubuntu out of the competition.


Is AlmaLinux systemd-free?


You're asking if Red Hat, the creator and main proponent of systemd, uses it in its distribution.

Use FreeBSD, Devuan or Alpine if you want production-grade Unixes without systemd.


> You're asking if Red Hat, the creator and main proponent of systemd, uses it in its distribution.

I thought the main promoter of systemd these days is Microsoft?


It's a RHEL clone, so no. However, you can install runit or OpenRC if you want.


That doesn't solve the problem, because systemd developers have insinuated dependencies on it into multiple unrelated system components.


Linux developers have created dependencies on Linux. Portability is a lot of work and perfect portability does not exist.

If you think you don't want systemd, why don't you use BSD?


> Linux developers have created dependencies on Linux.

Sorry, ignoring the rest of the thread: this statement isn't true even marginally. There are Linux distributions that can operate on kernels other than Linux. Debian was my favourite choice for this, with their HURD (https://www.debian.org/ports/hurd/) and kFreeBSD (https://www.debian.org/ports/kfreebsd-gnu/) implementations.

The GNU userland itself does not depend on Linux either.

You can run GNU utilities on MacOS, FreeBSD, OpenBSD.


> There are linux distributions that can operate on other kernels than Linux.

Then by definition they're not specifically a Linux distribution?


Richard Stallman agrees with you.

I’m happy to take a different name, but Wikipedia at least refers to it this way.

Debian themselves say that they’re an operating system.

Regardless, many (if not most, if not close to all) utilities developed on top of Linux are portable across not only distributions, but other kernels or even operating systems.

It's not fair to say that "those people who develop on systems with Linux kernels only support Linux".


There are cases where you might want to use Linux (for the hardware support, or for the userland) but without systemd: e.g. if you want to use musl libc. Lennart has said clearly he won't fix the incompatibility (https://www.mail-archive.com/systemd-devel@lists.freedesktop...): "we will rely on good APIs exposed in the generally accepted Linux API which is the one glibc exposes" (i.e. he won't integrate any patch required to make systemd work without glibc's nonstandard behaviors, and musl doesn't aim for bug-for-bug compatibility with glibc).


Lennart now works for Microsoft anyway. It clearly suits him better.


Because systemd is not the reason why people choose to use Linux over a BSD.


That is interesting; can you point to specifics? I am generally a proponent of orthogonality in my system components, although I am not opposed to the core structure of systemd.


What is the problem with systemd?


Do your homework. This has caused enough wars, hatred, threats, splits and forks already over the years.


I mean, if you look at it from that perspective, the systemd problem seems to be "incompetent people don't like it".

We've got a group of clowns repeating stuff like "but sysV is simple, init scripts are simple to write", and we have (...had, after migrating off it) thousands of lines of init-script fixes to prove that they are not that simple to get right - all replaced by a few lines of systemd.

Then there is another group of clowns repeating "but it's too complex, we don't need all of the unit features", who then proceed to put those features in a separate app that works badly (monit), or a separate app that is just a different init system in disguise (supervise, which is an entirely fine piece of software that does its thing well), or just don't do anything serious with their systems in the first place.

I have plenty of complaints about many things in systemd, but I won't pretend it didn't save us thousands of lines of code thanks to some of those features, and it made running more complex stuff trivial.

For example, one of ours is "wait for network -> download key from key server -> decrypt partition -> mount encrypted partition -> start the services". The init.d version of that was gnarly at best, and pretty fragile; it was trivial under systemd, with nearly no actual scripting needed.
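That chain boils down to a few declarative lines in a unit file, roughly like this (all names and paths here are invented for illustration):

  # /etc/systemd/system/unlock-data.service
  [Unit]
  Description=Fetch key, unlock and mount the encrypted data partition
  Wants=network-online.target
  After=network-online.target

  [Service]
  Type=oneshot
  RemainAfterExit=yes
  ExecStart=/usr/local/sbin/fetch-key-and-unlock.sh

  [Install]
  WantedBy=multi-user.target

The services that need the partition then just declare Requires=unlock-data.service and After=unlock-data.service, and systemd handles the ordering and failure propagation.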


Few people would defend sysvinit, and few would argue against systemd as an init and rc system.

People complain against systemd as a tightly coupled collection of subsystems replacing previously very loosely coupled subsystems. Those subsystems have turned straight from bazaar to cathedral, so of course this will raise concerns.

The debate isn't systemd vs sysvinit, but s6 vs shepherd vs runit vs systemd, bazaar vs cathedral, and choice vs absence of choice.


It is misleading to point to sysvinit failures while arguing in favour of systemd, because the latter is not the only alternative to the former.

As for calling all opponents of systemd "incompetent", that is just plain bad faith.


> Do your homework.

You assume I don't know what systemd is or why people have complained? That is frankly a rude comment to make, and it shows you in a very poor light.

I thought it was pretty common knowledge in 2022 that systemd is fine. It's the default in the vast majority of Linux distros. The vast majority of people have gotten over the drama of change. I was asking for a specific reason why someone was complaining about it now; I expected the reason to be new. Complaining alone doesn't add much to the discussion, and your dismissive comment doesn't add anything either.


> I thought it was pretty common knowledge in 2022 that systemd is fine.

You thought wrongly.

> The vast majority of people have gotten over the drama of change.

Implying the only reason to object to systemd is "the drama of change".


> You thought wrongly.

In 2022 the majority of Linux distros are using systemd: Arch, Debian, RHEL, OpenSUSE... including derivatives like CentOS/AlmaLinux/Ubuntu/Manjaro. Sounds to me like the common knowledge is that systemd is fine.

> Implying the only reason to object to systemd is "the drama of change".

There's nothing in your reply that says otherwise.


> Sounds to me like the common knowledge is that systemd is fine.

By the same logic, "billions served" proves that going to McDonald's is fine dining according to "common knowledge".

> There's nothing in your reply that says otherwise.

I was not aiming to educate you, but to point out your error.





