I get so annoyed at the number of Linux tools that suggest installing via this method rather than working toward package-manager inclusion. Even more so when they try to request sudo access.
I'm with you! If you're serious about your software (read: you want users)... make a Flatpak manifest, a package spec, or something equivalent.
Point is, somebody made something better than this little install shell script. I'll accept pip, I'm not picky.
There is almost surely no reason for $thing to be writing to my system directories. Nobody should be encouraging this; it can go wrong in every way.
Binaries and libraries can come from my home directory. You won't find /home mounted with noexec outside of strict compliance environments.
Build services like OBS and COPR make the hardware investment of building packages basically non-existent. Hosted repositories for any distribution you could want.
That leaves the one-time cost of writing out the specs... and, I suppose, keeping up with dependency creep.
That maintenance isn't that expensive, because you'd be maintaining the same dependencies in the install script anyway. You're just expressing them in a common domain language.
Cover these and you've handled 95% of the packaging DSLs out there (a minimal sketch follows the list):
- PKGBUILD (basically the script, AUR means you'll get maintainers)
- RPM
- DEB
If a fourth significant option comes around, I'll guarantee you the concepts are similar enough.
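For the curious, here's roughly what that looks like: a minimal PKGBUILD sketch for a hypothetical "sometool" (name, URL, and tarball layout are all made up). It's basically the install script you were going to write anyway:

```sh
# Minimal PKGBUILD sketch -- hypothetical project, illustrative values only
pkgname=sometool
pkgver=1.2.3
pkgrel=1
pkgdesc='Example tool packaged straight from its release tarball'
arch=('x86_64')
url='https://example.org/sometool'
license=('MIT')
source=("$url/releases/sometool-$pkgver.tar.gz")
sha256sums=('SKIP')   # put the real digest here; SKIP only while drafting

build() {
  cd "sometool-$pkgver"
  make
}

package() {
  cd "sometool-$pkgver"
  make DESTDIR="$pkgdir" PREFIX=/usr install
}
```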
Macro creep is real, but this is the cost of maintenance work. Give us something to maintain.
Signed,
- A person maintaining several packages for projects with no upstream involvement
Pip/npm/gems are just as bad, as are debs/rpms from an untrusted source. Piping curl to sh is no worse than any other way of installing unvetted software from an untrusted source, and the only better alternatives are verifying the installation instructions yourself or relying on a trusted third party to do it for you (e.g. the Debian maintainers).
Those services I mentioned -- COPR/OBS -- can be the trusted source for your users.
They're signed the same way as your first party packages.
If you trust the developer/maintainer, you can trust these services. It's literally the same infrastructure as OpenSUSE/Fedora/Red Hat.
As a developer you don't have to provide the infrastructure or hardware, just the thing to be built.
I'm not suggesting people distribute their software as bare RPM or DEB files. The repositories do the important part of actually distributing the software.
If you're on either OBS or COPR, you're on the fast path to having the OS maintainers do the work for you.
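To make that concrete, consuming a COPR repo is about two commands (the user/project names here are made up):

```sh
# Hypothetical COPR project -- substitute a real user/project
sudo dnf copr enable someuser/sometool   # drops in the .repo file for that project
sudo dnf install sometool                # packages are GPG-signed with the repo's key
```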
> Those services I mentioned -- COPR/OBS -- can be the trusted source for your users.
But that's the point: there's nothing inherently trusted about them. You can ship a deb or rpm with all sorts of scripts that run at installation; that's no safer than curl | sh.
If anything it's worse: when you curl | sh and it requests sudo, you can go "mmm, no, we're not doing this -- I'll happily risk compromising all my user data, but it stops at my system integrity".
With an rpm or deb you're already running it as root.
> If you trust the developer/maintainer, you can trust these services.
And you can also trust their site which you're curl-ing from.
This was the main thing I’m reacting to: installing from something like pip is usually running a ton of unvetted code downloaded from the internet. If you trust such package managers, you might as well curl to a shell.
Apologies, that's fair -- there are levels to my rant. That's a very important part.
I'll take package managers over shell scripts in concept for one main reason: they reduce reinvention.
The supply chain is always of concern, of course.
A shell script benefits from coreutils -- cp, mv, things like that. For everything else you're on your own, and I don't trust that.
The scripts themselves are untrusted code -- for all we know there's no VCS behind them at all. A package manager at least offers some guardrails!
With packages on OBS/COPR, you can at least verify that what you're installing was built in a clean (offline) environment from the upstream sources. It's a small, but notable, improvement.
Also consider the case where you need to rebuild your system and you'd like it to look the same afterwards.
Will you find it easier to run 'pip freeze' / 'dnf history userinstalled'... or crawl your shell history for curl | bash and resolve any drift?
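That comparison in command form (both of these exist today; the output file names are just suggestions):

```sh
# What's installed, answered by the tooling instead of by archaeology
dnf history userinstalled > packages.txt   # Fedora-family: explicitly installed packages
pip freeze > requirements.txt              # Python environment state
# The curl | bash equivalent is grepping ~/.bash_history and hoping nothing drifted.
```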
Package management isn't realistic unless you only pick a few distros to support -- and then you get criticized for not supporting every distro, some of the distros end up way behind on the version, some of them modify your code to make it fit better in their system...
Lately I've been coming around to the idea that AppImage is the best middle ground we have so far. Basically, it lets you provide a static binary for people to download and use as-is if their repos don't have the software available, but it also bundles in special flags starting with `--appimage-` for things like extracting the bundled libraries/assets into a directory structure that's the same for every AppImage. It seems like a step towards being able to automate making packages for various distros. It would be a tough design problem, but it would be really amazing if the format were expanded to include source files, so that each package manager could write its own plugin for converting the extracted AppImage into its own package format (appimage2deb, appimage2rpm, etc.).

Maybe AppImage isn't the right basis for this sort of thing, but instead of trying to drive adoption of distro-agnostic package managers, which will face resistance from distro maintainers, I feel like the right solution would be something that provides for distros what LSP provided for editors and what LLVM provided for compiler authors. We're not lacking in package managers as a community, but we really could use a standard protocol or intermediate layer that _doesn't_ try to replace existing package managers but instead works with and relies upon them in a way that benefits everyone.
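For anyone who hasn't used one, the `--appimage-` flags mentioned above look like this in practice ("SomeTool-x86_64.AppImage" is a made-up file name; the flags are part of the standard type-2 runtime):

```sh
chmod +x SomeTool-x86_64.AppImage
./SomeTool-x86_64.AppImage                      # run it in place (needs FUSE)
./SomeTool-x86_64.AppImage --appimage-extract   # unpack into ./squashfs-root/, same layout for every AppImage
ls squashfs-root/                               # the AppDir: AppRun, a .desktop file, usr/...
```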
It's worth noting that some big companies have something like what you describe. I was building packages for one, and it's basically reverse engineering however something is packed (e.g. an RPM) and making it build/bundle the way the big corp operates.
It's a lot of busywork, but we guaranteed that we had all dependencies in house and that the same package had the same layout of files across every OS.
I guess my hope is that rather than requiring a bunch of hardcoded boilerplate for each package management solution, we could come up with a reasonable set of primitives they'd all need and then find a way to express them agnostically. The power of LLVM and LSP is that it's not just one entity producing all of the implementations: each side of the communication contributes the stuff that's specific to its use case, so that no one else needs to worry about those internal details. If I write a new programming language, all I need to do is implement a language server, and then everyone can get plugins for pretty much any editor they want without a huge amount of work. It's possible something like this already exists for package management, but my current impression is that the products which try to provide it aren't easily extensible to new distro package formats that might be invented; they hardcode a few of the most common types (debian, rpm, etc.). The key part that seems to be missing is a standard for what the bits in the middle should look like; what's the package-level equivalent of LLVM IR or the LSP?
Author should not be packager. Distro users/maintainers should be packagers.
Look at FreeBSD ports: there are more than 25K packages, most of them at the latest upstream versions. How many of these packages are prepared by the authors? Maybe 1 in 1000 (I'm an optimist, I know). NetBSD's pkgsrc is the same. It works.
Authors are better off sticking to a traditional UNIX-style build system, which can pick up dependencies from the local system without problems; all the distro-specific things will be done by the distro people.
I've ported 10+ software packages to FreeBSD ports in the last 20 years, on the principle of "I need this, it is not in ports yet, so make a new port for it". Typically it takes 2x-3x the time of a plain source build, i.e. one day tops, if it is not something super-complex like KDE and if it is possible at all (i.e. not so Linux-specific that FreeBSD lacks the required APIs).
Modern build systems like npm and maven, which want to download and build all dependencies by themselves, are a problem for this, I admit.
The alternative is something like AppImage. The benefit of curl | bash is that it's a command that can be run on any distro and just work. There is no need to worry about how the Linux ecosystem still hasn't agreed upon a single package format.
I've worked with people creating curl scripts to install software, and most of the time it should scare people. At least with package management there is some community validation and a version history of the changes made, as well as checksums. I'm not saying you should discount all installs using curl, but you should at least check the script first and see if you're comfortable proceeding.
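That minimum due-diligence version looks something like this (the URL is illustrative):

```sh
curl -fsSL https://example.com/install.sh -o install.sh   # fetch, don't pipe
less install.sh          # does it sudo? write outside $HOME? fetch further scripts?
sha256sum install.sh     # note what you actually reviewed
sh ./install.sh          # only then run it
```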
One of the things I love about Gentoo & Arch is that packaging for them is incredibly easy. At least, as long as upstream isn't doing anything insane. But this means that I as a user can wrap an upstream package without too much of an issue. (E.g., if you have a binary tarball that can be unpacked to, e.g., /opt. Even if it has to be built, so long as your dependencies are clear, that's usually not too hard. Some languages fight package management a bit more than others, though.)
I mean, you're right -- but security and maintainability aside, doesn't it feel odd to advocate against the use of a universal method and FOR the use of one of the many hundreds of package managers for *nix operating systems that claims to have gotten it right?
Adding maintenance overhead to a FOSS project to support a package manager is one thing; adding support for every Flavor Of The Week package manager after that initial time investment is tougher, especially when the first one is no longer in vogue.
tl;dr: the thousands of ways to package software for *nix creates a situation that hurts maintainability unless the package maintainer lucks into picking the one that their crowd wants for any length of time. Piping data from curl works just about anywhere, even if it's a huge security faux pas waiting to happen.
Semi-unrelated aside: it strikes me as humorous that people on that side of the OS aisle have cared so much about pipes being a security issue for years and years, whereas on the MS side of things people still distribute (sometimes unsigned) binaries all over the place, from all over the place, by any random mary/joe. (Not to say that that's not the case on the *nix side, but it feels more commonplace in MS land, that's for sure.)
> If I give you an rpm package, that isn't really any better. It can run any script during installation and you'll already be running it as root.
But it's not about you giving me an rpm package at will. It's about the distro including packages in its official distribution and many people installing the exact same package, instead of people randomly pulling install scripts from the Web which can, for all we know, serve a different script on every curl.
In addition to that, Debian has an ever-growing number of packages which are fully reproducible, bit for bit. All these executables, reproducible bit for bit.
When I install a package from Debian I know many people have already both scrutinized it and installed it. It's not a guarantee that nothing shady is going on, but what's safer:
- installing a Debian package (moreover reproducible bit for bit) which many people already installed
- curl bash'ing some URL at random
Anyone who says the two offer the same guarantees is smoking something heavy.
It's all too easy to sneak in a backdoor for, say, one download in every 100, especially when you can detect, as in TFA, that curl bash'ing is ongoing. And it's hard to catch. And it's near impossible to reproduce.
When you install from a package that's fully reproducible, there's nowhere to run, nowhere to hide, for the backdoor once noticed. It shall eventually be caught.
Here's why it matters (FWIW there are tens of thousands of Debian packages which are fully reproducible).
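A rough sketch of what that buys you in practice -- "hello" is just an example package, you need deb-src entries in your sources, and a faithful bit-for-bit rebuild also has to match the build environment recorded in the package's .buildinfo file:

```sh
apt-get source hello                          # the source Debian actually builds from
sudo apt-get build-dep hello                  # its build dependencies
( cd hello-*/ && dpkg-buildpackage -us -uc )  # rebuild locally, unsigned
sha256sum hello_*_amd64.deb                   # digest of your rebuild
apt-cache show hello | grep ^SHA256           # digest the archive advertises
```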
Consider the broader class of attack this article is demonstrating: stealthily delivering different payloads to different requests. I don't know about rpm specifically, but most new-ish package managers actually guard against this more strongly than any curl-based approach: a hash of the third-party content is provided through some more secure chain (e.g. directly in your OS's or language's package database, or signed by some key associated with one of those which you've already trusted).
Yeah, if the package is delivered through the same channel as the bash script, and not anchored by anything external, you lose those benefits. But even hosting the package contents through pip or cargo or the AUR, or just unaffiliated and manually synced mirrors, is a (relatively easy) way to decrease that attack surface.
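For example, pip can pin the exact artifact digest in the requirements file, so a swapped payload simply fails to install (the package name and digest below are placeholders):

```sh
cat > requirements.txt <<'EOF'
# placeholder digest -- record the real sha256 of the release artifact here
sometool==1.2.3 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
EOF
pip install --require-hashes -r requirements.txt
```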
It's sometimes preferred because package repositories don't always carry the updated version of the program, and it saves the tedious work of uploading to every package repository, like Arch Linux's or Debian's.
For self-contained stuff, there is AppImage, which is just a filesystem subtree [1], or a Docker container. The latter doesn't need Docker, or even Podman; you likely already have systemd, and it knows how to run a container.
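A sketch of that last point, assuming you have a flattened root filesystem tarball (the file name and binary path are made up):

```sh
mkdir rootfs && tar -C rootfs -xf sometool-rootfs.tar.gz   # unpack the image
sudo systemd-nspawn -D ./rootfs /usr/bin/sometool          # run it contained, no Docker/Podman needed
```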
If you want to depend on system-wide stuff, or, worse yet, provide shared system-wide stuff, it's time to create a proper OS package (a bunch of them: Debian, Ubuntu, Fedora, Nix, etc).
The curl | bash approach has only one upside: you can store the script, inspect it, then run. I did that a couple times. Because otherwise it's a pretty scary operation, a very literal RCE.
> The curl | bash approach has only one upside: you can store the script, inspect it, then run.
Not having to waste countless hours on whatever distro's package format of choice is a pretty big upside.
And those are also RCEs if you're not upstreaming the package (which you likely are not, because that increases the amount of time needed by an order of magnitude), as most package managers have multiple scripting points which run arbitrary code, and usually run as root already.
… you don't have to be part of the distro's official package set.
Ubuntu, Debian, Gentoo, Arch, etc., support third-party repositories and packages with bad licenses: the user adds the repo / installs the deb / etc. Pacman, in particular, even has direct support for such, and calls them out (to help ensure the user knows, and perhaps reads, the license).
Then I know I can gracefully uninstall the package by just asking the package manager to do that.
(You don't have to unvendor libs, either: if you install into something like /opt/$package_name, you can keep vendored libs in there. You should unvendor them, though.
Yeah, downloading stuff from the Internet in the middle of a package install is definitely harder with some package managers, but IMO that's a poor practice.)
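For what it's worth, the third-party-repo route really is just a few lines of config plus a normal uninstall path. A sketch for pacman (repo name, URL, and package are all made up; you'd also import and locally sign the repo's key with pacman-key first):

```sh
cat <<'EOF' | sudo tee -a /etc/pacman.conf
[sometool-repo]
SigLevel = Required
Server = https://repo.example.org/archlinux/$arch
EOF
sudo pacman -Sy sometool    # install from the third-party repo
sudo pacman -Rns sometool   # later: clean removal, including orphaned deps
```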
I agree with your sentiment, but do any of those package managers prevent some random repository from adding Firefox version 190 which will install on next apt upgrade? Not that they need to - presumably any package I’ve installed already basically has root access.
Yes, with pinning. Apt has preferences and pinning: you can give a third-party repository a lower priority than the official ones, so its packages are never pulled in over official ones on upgrade. You do have to set that up yourself, but then you'll know when it happens (because you had to do it yourself).
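A minimal sketch of that pinning, with a made-up repo hostname ("firefox" below is just an example package to query):

```sh
# Pin everything from the third-party host below the default priority of 500
cat <<'EOF' | sudo tee /etc/apt/preferences.d/99-example-repo
Package: *
Pin: origin "repo.example.com"
Pin-Priority: 100
EOF
apt-cache policy firefox   # shows which candidate version wins, and why
```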