Why have any glibc? GCC et al. work fine compiled against musl (as proven by, e.g., Alpine shipping only musl). Or is it for running on GNU/Linux systems (can't you statically link the build chain?)?
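(For what it's worth, the basic static-linking part is easy to sketch, assuming the musl-gcc wrapper is installed; package names and paths vary by distro:)

```
# build a fully static binary against musl; the result has no runtime
# dependency on the host's glibc
musl-gcc -static -o hello hello.c
ldd ./hello    # should report "not a dynamic executable"
```

The harder part is presumably statically linking GCC itself, hence the question.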
> I am the youngster volunteer at a local demonstration garden. The elder volunteers in their 70s are all very sharp. They usually don't stop until arthritis or back problems force them to stop. But their minds are agile, and they are all very social, cooperative, and upbeat.
Okay, I'll ask the obvious question. Isn't that a perfect candidate for causality to go the other way? Anybody with serious mobility or health problems is less likely to go out gardening, so of course you only see healthy people out gardening. (To be fair, I would expect that it is good for you and helps people stay healthy, but I'd expect selection effects to be the stronger explanation for what you're observing.)
Yes, I think I see what you are saying. Actually, since the pandemic lockdown, from what I gather locally, volunteer activity has dropped off by up to 75% in this garden. My best guess is that screen addictions (like Netflix and other streaming) are now widespread, and were exacerbated by the lockdown.
Locally, outside activity in yards and gardens is down drastically, and it shows. Weeds and neglected yards tell that story.
But sure, we have a 79-year-old who has been at the garden for 20 years and, despite aches and pains, is the most active! He is also my mentor and I appreciate his devotion. Would he be as physically active if not for the volunteering? Doubtful. So I believe I concur with you on this.
I don't use it much, but I've glued together sway+wayvnc+novnc in a container and it worked fine (exposing both raw VNC and the webified novnc interface).
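The glue was roughly the following (a sketch from memory; the exact environment variables, ports, display names, and the noVNC install path will vary by distro and base image):

```
# headless sway session inside the container (wlroots headless backend)
export WLR_BACKENDS=headless WLR_LIBINPUT_NO_DEVICES=1
export XDG_RUNTIME_DIR=/tmp/runtime
mkdir -p "$XDG_RUNTIME_DIR" && chmod 700 "$XDG_RUNTIME_DIR"
sway &

# wayvnc exports the sway output as raw VNC on :5900
# (it needs to see the WAYLAND_DISPLAY that sway created, e.g. wayland-1)
wayvnc 0.0.0.0 5900 &

# websockify plus the noVNC web assets expose the same session over HTTP on :6080
websockify --web /usr/share/novnc 6080 localhost:5900
```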
I'm confused; xsel, as you might imagine from the name, is very specifically a program for manipulating the X11 selection and clipboard. So it does work on Xorg, but I'm very confused that it would work in any meaningful capacity on Wayland. Are you somehow using Xwayland?
The “system” should provide the barest minimum of libraries. Programs should ship as many of their dependencies as is technically feasible.
Oh what’s that? Are you crying about security updates? Yeah well unfortunately you shipped everything in a Docker container so you need to rebuild and redeploy all of your hierarchical images anyways.
I don't mind stable base systems, I don't mind slow and well tested updates, I actively like holding stable ABIs, but if you haven't updated anything in 4 years, then you are missing bug and security fixes. Not everything needs to be Arch, but this opposite extreme is also bad.
> The “system” should provide the barest minimum of libraries. Programs should ship as many of their dependencies as is technically feasible.
And then application developers fail to update their vendored dependencies, and thereby leave their users exposed to vulnerabilities. (This isn't hypothetical, it's a thing that has happened.) No, thank you.
>Oh what’s that? Are you crying about security updates? Yeah well unfortunately you shipped everything in a Docker container so you need to rebuild and redeploy all of your hierarchical images anyways.
So... are you arguing that we do need to ship everything vendored in so that it can't be updated, or that we need to actually break out packages to be managed independently (like every major Linux distribution does)? Because you appear to have advocated for vendoring everything, and then immediately turned around to criticize the situation where things get vendored in.
> I don't mind stable base systems, I don't mind slow and well tested updates, I actively like holding stable ABIs, but if you haven't updated anything in 4 years, then you are missing bug and security fixes.
I'm not sure GP's claim here about the runtime not changing in 4 years is actually true. There hasn't been a version number bump, but files in the runtime have certainly changed since its initial release in 2021, right? See: https://steamdb.info/app/1628350/patchnotes/
It looks to me like it gets updated all the time, but they just don't change the version number because the updates don't affect compatibility. It's kinda opaque though, so I'm not totally sure.
> So... are you arguing that we do need to ship everything vendored in so that it can't be updated,
I’m arguing that the prevalence of Docker is strong evidence that the “Linux model” has fundamentally failed.
Many people disagree with that claim and think that TheLinuxModel is good, actually. However, I'd point out that these people almost definitely make extensive use of Docker. And that Docker (or something similar) is actually necessary to reliably run programs on Linux because TheLinuxModel is so bad and has failed so badly.
If you believe in TheLinuxModel and also do not use Docker to deploy your software then you are, in the year 2025, a very rare outlier.
Personally, I am very pro ShipYourFuckingDependencies. But I also don't think that deploying a program should be much more complicated than sharing an uncompressed zip file. Docker adds a lot of cruft. Packaging images/zips/deployments should be near instantaneous.
> I’m arguing that the prevalence of Docker is strong evidence that the “Linux model” has fundamentally failed.
That is a very silly argument considering that Docker is built on primitives that Linux exposes. All Docker does is make them accessible via a friendly UI, and adds some nice abstractions on top such as images.
It's also silly because there is no single "Linux model". There are many different ways of running applications on Linux, depending on the environment, security requirements, user preference, and so on. The user is free to simply compile software on their own if they wish. This versatility is a strength, not a weakness.
Your argument seems to be against package managers as a whole, so I'm not sure why you're attacking Linux. There are many ecosystems where dependencies are not vendored and a package manager is useful, many where the reverse holds, and some with both.
There are very few objectively bad design decisions in computing. They're mostly tradeoffs. Choosing a package manager vs vendoring is one such scenario. So we can argue endlessly about it, or we can save ourselves some time and agree that both approaches have their merits and detriments.
> That is a very silly argument considering that Docker is built on primitives that Linux exposes
No.
I am specifically talking about the Linuxism where systems have a global pool of shared libraries in one of several common locations (that ever so slightly differs across distros because fuck you).
Windows and macOS don’t do this. I don’t pollute system32 with a kajillion random ass DLLs. A Windows PATH is relatively clean from random shit. (Less so when Linux-first software is involved). Stuffing a million libraries into /usr/lib or other PATH locations is a Linuxism. I think this Linuxism is bad. And that it’s so bad everyone now has to use Docker just to reliably run a computer program.
Package managers for software libraries used to compile programs are a different scenario I've not talked about in this thread. Although, since you've got me ranting: the Linuxisms that GCC and Clang follow are also fucking terrible. Linking against whatever random ass version of glibc is on the system is fucking awful software engineering. This is why people also make Docker images of their build environment! Womp womp sad trombone everyone is fired.
I don’t blame Linux for making bad decisions. It was the 80s and no one knew better. But it is indeed an extremely bad set of design decisions. We all live with historical artifacts and cruft. Not everything is a trade off.
> I am specifically talking about the Linuxism where systems have a global pool of shared libraries in one of several common locations (that ever so slightly differs across distros because fuck you).
> Windows and macOS don’t do this.
macOS does in fact have a /usr/lib. It's treated as not to be touched by third parties, but there's always a /usr/local/lib and similar for distributing software that's not bundled with macOS just like on any other Unix operating system. The problem you're naming is just as relevant to FreeBSD Ports as it is to Debian.
And regardless, it's not a commitment Nix shares, and its problems are not problems Nix suffers from. It's not at all inherent to package management, including on Linux. See Nix, Guix, and Spack, for significant, general-purpose, working examples that don't fundamentally rely on abstractions like containers for deployment.
I totally agree with this, though, and so does everyone who's into Nix:
> Stuffing a million libraries into /usr/lib [...] is bad.
> I don’t blame Linux for making bad decisions. It was the 80s and no one knew better. But it is indeed an extremely bad set of design decisions. We all live with historical artifacts and cruft. Not everything is a trade off.
> Windows and macOS don’t do this. I don’t pollute system32 with a kajillion random ass DLLs.
You can't be serious. Are you not familiar with the phrase "DLL hell"? Windows applications do indeed put and depend on random ass DLLs in system32 to this day. Install any game, and it will dump random DLLs all over the system. Want to run an app built with Visual C++, or which depends on C++ libraries? Good luck tracking down whatever version of the MSVC runtime you need to install...
Microsoft and the community realized this is a problem, which is why most Windows apps are now deployed via Chocolatey, Scoop, WinGet, or the MS Store.
So, again, your argument is nonsensical when focused on Linux. If anything, Linux does this better than other operating systems since it gives the user the choice of how they want to manage applications. You're not obligated to use any specific package manager.
> which is why most Windows apps are now deployed via Chocolatey, Scoop, WinGet, or the MS Store
rofl. <insert meme of Inglourious Basterds three fingers>
> Good luck tracking down whatever version of the MSVC runtime you need to install...
Perhaps back in 2004 this was an issue. That was a long time ago.
You use a lot of relevant buzz words. But it’s kinda obvious you don’t know what you’re talking about. Sorry.
> Linux does this better than other operating systems since it gives the user the choice of how they want to manage applications
I would like all Linux programs to reliably run when I try to run them. I do not ever want to track down or manually install any dependency ever. I would like installing new programs to never under any circumstance break any previously installed program.
I would also like a program compiled for Linux to just work on all POSIX compliant distros. Recompiling for different distros is dumb and unnecessary.
I’d also like to be able to trivially cross-compile for any Linux target from any machine (Linux, Mac, or windows). glibc devs should be ashamed of what they’ve done.
> Perhaps back in 2004 this was an issue. That was a long time ago.
Not true. I experience this today whenever I want to use an app without a package manager, or one that doesn't bundle the VC runtime it needs in its installer, or one that doesn't have an installer.
> You use a lot of relevant buzz words. But it’s kinda obvious you don’t know what you’re talking about. Sorry.
That's rich. Resort to ad hominem when your arguments don't hold any water. (:
> I would also like a program compiled for Linux to just work on all POSIX compliant distros.
So use AppImage, αcτµαlly pδrταblε εxεcµταblε, a statically compiled binary, or any other cross-distro packaging format. Nobody is forcing you to use something you don't want. The idea that Linux is a flawed system because of one packaging format is delusional.
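For the statically compiled route, a hedged sketch (assuming a Rust project; `myapp` is a placeholder, and the same idea works with musl-gcc for C):

```
# add the musl target and build a fully static binary; for pure-Rust
# dependencies the result runs on any x86_64 Linux distro regardless of its libc
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl
ldd target/x86_64-unknown-linux-musl/release/myapp   # "not a dynamic executable"
```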
You're clearly not arguing in good faith, and for that reason, I'm out.
> Many people disagree with that claim and think that TheLinuxModel is good, actually. However, I'd point out that these people almost definitely make extensive use of Docker
You've got the wrong audience here. Nix people are neither big fans of "the Linux model" (because Nix is founded in part on a critique of the FHS, a core part and source of major problems with "the Linux model") nor rely heavily on Docker to ship dependencies. But if by "the Linux model" you just mean not promising a stable kernel ABI, pulling an OS together from disparate open-source projects, and key libraries not promising eternal API stability, it might have some relevance to Nixers...
> I also don't think that deploying a program should be much more complicated than sharing an uncompressed zip file. Docker adds a lot of cruft. Packaging images/zips/deployments should be near instantaneous.
Your sense of "packaging" conflates two different things. One aspect of packaging is specifying dependencies and how software gets built in the first place in a very general way. This is the hard part of packaging for cohesive software distributions such as have package managers. (This is generally not really done on platforms like Windows, at least not in a unified or easily interrogable format.) This is what an RPM spec does, what the definition of a Nix package does, etc.
The other part is getting built artifacts, in whatever format you have them, into a deployable format. I would call this something like "packing" (like packing an archive) rather than "packaging" (which involves writing some kind of code specifying dependencies and build steps).
If you've done the first step well— by, for instance, writing and building a Nix package— the second step is indeed trivial and "damn near instantaneous". This is true whether you're deploying with `nix-copy-closure`/`nix copy`, which literally just copy files[1][2], or creating a Docker image, where you can just stream the same files to an archive in seconds[3].
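Concretely, a sketch (the host name and the `dockerImage` flake output are placeholders; the latter is assumed to be built with nixpkgs' `dockerTools.streamLayeredImage`):

```
# copy a built closure (the package plus everything it depends on) to another machine
nix copy --to ssh://user@example-host ./result

# or stream the same closure into Docker as a layered image; streamLayeredImage
# builds a script that writes the image tar to stdout, so no intermediate archive
nix build .#dockerImage && ./result | docker load
```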
And the same packaging model which enables hermetic deployments, like Docker but without requiring the use of containers at all, does still allow keeping only a single copy of common dependencies and patching them in place.[4]
> Programs should ship as many of their dependencies as is technically feasible.
Shipping in a container just is "ship[ping] as many [...] dependencies as is technically feasible". It's "all of them except the kernel". The "barest minimum of libraries" is none.
Someone who's using Docker is already doing what you're describing anyway. So why are you scolding them as if they aren't?
> Supervisor, runit, systemd, even a tmux session are all popular options for how to run a bunch of stuff in a monolithic "app" container.
Did docker+systemd get fixed at some point? I would be surprised to hear that it was popular given the hoops you had to jump through last time I looked at it.
It's only really fixed in podman, with the special `--systemd=always` flag. Docker afaik still requires manually disabling certain services that will conflict with the host and then running the whole thing as privileged— basically, a mess.
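For reference, the podman invocation looks roughly like this (the image name is a placeholder; `--systemd=always` is the relevant flag):

```
# podman sets up /run, /sys/fs/cgroup, etc. so systemd can run as PID 1,
# even when the image's entrypoint isn't literally an init binary
podman run -d --name sysd-demo --systemd=always registry.example.com/my-systemd-image
```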
> You should tell the full story: Someone compromised the supply-chain and snuck a miner into the anticheat binary. It was discovered immediately, and the fact that the miner was in the anticheat and not, say, a game loader, did nothing to hide it.
Software with that level of access having a supply chain compromise is not an argument in its defense.
See that's the thing, I'm not making an "argument in its defense", I'm just telling the truth (the whole truth). It might not be an important distinction to you, but it might be an important distinction to the next person, and glossing over points like this does everyone a disservice.
> I think one of the major problems with open source development is its hard to ever remove anything because the vocal minority who likes it will hound you. But removing things is just as, if not more important to good software as adding features.
As opposed to non-OSS, where removing features that paying customers care about is of course trivial?
I don't mind the comic and its original context, but it's gotten extremely old seeing it wheeled out to justify completely discarding user input on any change. Sometimes an update does break legitimate workflows, and that is bad.
The difference is for proprietary features, you can just charge that subset of users that care for its maintenance, using that money to hire additional developers, etc. For OSS you instead have a relatively fixed budget of time & resources and have to balance competing interests in a zero-sum manner. On the flip side, there's nothing preventing the vocal minority from forking if the feature is important enough to them!
> if they have to type `sudo apt search my_package` and then `sudo apt install my_package` all the time.
As opposed to the much easier `flatpak install com.fqdn.app.name`? Don't confuse underlying package format with CLI/GUI; Synaptic, GNOME Software, Plasma Discover, etc. are fine ways to install normal packages.