Very annoying - the apparent author of the backdoor was in communication with me over several weeks, trying to get xz 5.6.x added to Fedora 40 & 41 because of its "great new features". We even worked with him to fix the valgrind issue (which, it turns out now, was caused by the backdoor he had added). We had to race last night to fix the problem after an inadvertent break of the embargo.
He has been part of the xz project for 2 years, adding all sorts of binary test files, and to be honest, given this level of sophistication, I would be suspicious of even older versions of xz until proven otherwise.
IIUC (I work for Red Hat but not on how sources are distributed), there is absolutely no change in practice.
CentOS Stream RPM sources are stored in GitLab, so the whole history is available, including past minor releases of RHEL. The only change is that the repositories will no longer be mirrored to git.centos.org.
3 medium-sized dell frontend servers running jboss4 and one beefy backend mysql server. we ran our own bare metal because this was before 'cloud' services would allow porn sites to operate.
the ejb3 caching was so efficient that we really only needed one server; we just kept the other two as backups so that we could do CI-driven rolling deploys integrated with the load balancer. we used jgroups mcast to expire entities in the cache. the jvms were carefully monitored and all the settings were heavily tuned for our environment.
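for the curious, the mcast expiry pattern looks roughly like this. a minimal sketch in typescript using node's dgram rather than the jgroups/java stack we actually ran, purely to illustrate the idea; the group address, port, and cache shape are all hypothetical:

    import { createSocket } from "node:dgram";

    const GROUP = "239.1.2.3";                 // hypothetical multicast group
    const PORT = 45566;                        // hypothetical port
    const cache = new Map<string, unknown>();  // local entity cache

    const sock = createSocket({ type: "udp4", reuseAddr: true });
    sock.bind(PORT, () => sock.addMembership(GROUP));

    // every node listens for invalidation messages and evicts locally
    sock.on("message", (msg) => cache.delete(msg.toString()));

    // after a write, broadcast the entity key so peers expire it too
    function invalidate(entityKey: string) {
      cache.delete(entityKey);
      sock.send(entityKey, PORT, GROUP);
    }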
the hardest thing about all of it was simply packaging everything up into a war file correctly. it's all knowledge i've long since forgotten.
One of Google's stated goals for MV3[1] was to avoid extensions with broad permissions:
> our new declarativeNetRequest API is designed to be a privacy-preserving method for extensions to block network requests without needing access to sensitive data
This MV3-based AdGuard extension still requires a broad permission to "read or modify host data" on all sites[2]:
"host_permissions": [
"<all_urls>"
],
So what you have now is the same required permission to "read or modify host data" as with MV2, but with the network filtering engine's capabilities gated by Google (an advertising company).
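To illustrate what this gating means in practice, here is a minimal sketch of MV3 blocking (the rule values and the ad host are hypothetical, and it assumes the "declarativeNetRequest" permission is declared in the manifest): the extension hands the browser a declarative rule up front, and Chrome's built-in engine does the matching.

    // The extension never observes the requests themselves; it can only
    // express what the declarativeNetRequest rule schema allows.
    chrome.declarativeNetRequest.updateDynamicRules({
      removeRuleIds: [1],  // drop any previous version of this rule
      addRules: [{
        id: 1,
        priority: 1,
        action: { type: "block" },
        condition: {
          urlFilter: "||ads.example.com^",   // hypothetical ad host
          resourceTypes: ["script", "image"]
        }
      }]
    });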
We can no longer innovate on the filtering capabilities of our content blocker engines, as we have been constantly doing over the years.
For a recent example, there have been discussions lately with filter list maintainers about whether uBO should support AdGuard's proposed capability of pattern-matching for the `domain=` filtering option[3] (uBO supports AdGuard lists).
That sort of proposal is not possible to entertain with MV3, since only Google gets to decide how the filtering engine will evolve, if at all. All content blocking issues will have to be resolved with the Google-controlled filtering engine, and left unaddressed if the solution can't be shoehorned into the declarativeNetRequest API.
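To make the `domain=` example concrete, here is a hedged sketch of the closest declarativeNetRequest equivalent (hostnames hypothetical). The initiator-domain condition is a closed list of literal strings, so a pattern-valued `domain=` has nothing to compile down to:

    // DNR's counterpart to "domain=" is a literal list of initiator domains.
    // Patterns or regexes are not accepted here, so a filter option like
    // "domain=shop.*" has no possible translation, regardless of what an
    // extension's own engine could have done.
    chrome.declarativeNetRequest.updateDynamicRules({
      removeRuleIds: [2],
      addRules: [{
        id: 2,
        priority: 1,
        action: { type: "block" },
        condition: {
          urlFilter: "||tracker.example^",
          initiatorDomains: ["shop-a.example", "shop-b.example"]  // exact strings only
        }
      }]
    });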
I suspect the push for podman was more about how Docker ignored cgroups v2 for so long that Fedora eventually enabled it anyway, which broke Docker, and then told users to switch to podman.
As a former member of the Red Hat desktop team who helped kickstart several of these initiatives (I did a huge chunk of Wayland before I passed the reins off to Jonas Adahl), I believe in the vision. But like many other things in the Linux space, it's fighting a very uphill battle: a vision I personally believe is good, but one I can't ever imagine coming true, and if it ever does, one that will arrive far too late.
Immutability is good, and splitting the OS from the applications is good, but what that should imply is a commitment to not break anything, and that's not something you can really wrangle from open-source contributors, who are more interested in writing v7, v8, v9, etc. and deprecating everything before them (though I note this is not strictly a Linux problem; we've seen it in npm/pypi/rubygems, and we're now seeing it even from big vendors like Microsoft, Apple, and Google, but it tends to be more associated with FOSS/Linux communities). Flatpak is an attempt at a technical band-aid for a social and cultural problem, but that culture only makes the problem worse, and the solution ineffective.
We see this now manifest in 100 different application distribution formats, all of which have giant tables of "pros" vs. "cons" on their homepages, all of which are fighting for an increasingly minuscule userbase, heightening a war which never should have existed in the first place. Much like the sound server debates of the 2000s, applications can't simply choose one format without getting into a large political turf war, and so they have to distribute in every format known to man to quell a userbase each interested in their own ideas of technical superiority. LibreOffice lists Flatpak, Snap, and AppImage, alongside all of the individual distribution packages, right on its home page.
This is all for a userbase that's more often interested in tinkering than stability. Linux communities tend to be ones that have self-selected for wanting their computers to be toys rather than tools to just get their jobs done. Or they believe in some form of Linux elitism: that Linux is somehow technically superior to other operating systems, and that adapting good ideas from those other systems means losing some form of symbolic war, one which they'll fight hard against. Moving the needle from a fun system that's endlessly tinkerable to a boring system that runs the apps you need and is stable means moving the culture in that direction, and honestly a lot of the desktop Linux community is just uninterested in that vision.
Also, while I was at Red Hat, Colin Walters was probably one of the smartest and most influential people I met, and they're the real powerhouse behind a lot of these ideas (I remember when ostree was hacktree, made somewhat out of frustration so they didn't have to break their laptop while testing new OS versions). Their writing and the conversations I had with them were among the big things that got me out of the "Linux elitism" spell. I highly recommend their writing on these topics: https://people.gnome.org/~walters/docs/packages.txt and https://people.gnome.org/~walters/docs/
You've discovered what many other people have: The cloud is the new time-share mainframe.
Programming in the 1960s to 80s was like this too. You'd develop some program in isolation, unable to properly run it. You'd "submit" it to the system, and it would be scheduled to run alongside other workloads. You'd get a printout of the results back hours later, or even the next day. Rinse and repeat.
This work loop is incredibly inefficient, and was replaced by development that happened entirely locally on a workstation. This dramatically tightened the edit-compile-debug loop, down to seconds or at most minutes. Productivity skyrocketed, and most enterprises shifted the majority of their workload away from mainframes.
Now, in the 2020s, mainframes are back! They're just called "the cloud" now, but not much of their essential nature has changed other than the vendor name.
The cloud, just like mainframes:
- Does not provide all-local workstations. The only full-fidelity platform is the shared server.
- Is closed source. Only Amazon provides AWS. Only Microsoft provides Azure. Only Google provides GCP. You can't peer into their source code; it is all proprietary and even secret.
- Has a poor debugging experience. Shared platforms generally can't allow "invasive" debugging for security reasons, and their sheer size and complexity mean that your visibility will always be limited. You'll never be able to get a stack trace that crosses into the internal calls of platform services like S3 or Lambda. Contrast this with typical local debugging, where you can even trace into the OS kernel if you so choose.
- Is generally based on the "print the logs out" feedback mechanism, with all the usual mainframe issues such as hours-long delays.