I'm using it for current Firefox, Zotero, Joplin and two or three more programs, none of which are packaged in Debian (except Firefox, but only the LTS version that doesn't work with all my extensions).
Unless you can offer something better, I'll keep using it.
Or just use Arch and have almost everything in the official repos (and the rest in the AUR). I have had fewer issues with Arch than with Debian, because Arch is so simple. If you want to install something, you install it, and then it's installed. One command. Always. I found that Debian would break more easily because I had to mess around with unofficial repos and things like Flatpak just to get basic programs. More complexity, more that can break, more reliance on third parties. Arch has been rock solid.
I've been using Arch for over a decade and never liked using the AUR. Too much vetting and building. So I use flatpak for the non-DE graphical applications now.
You should be able to just use Mozilla's official build, which comes with an auto-updater (and it implements the sandbox itself, so there's no need for another one on top).
> Zotero
> Joplin
both electron shells. also come with their sandbox already. most rolling release distributions would just package these with a system-provided electron build.
> both electron shells. also come with their sandbox already.
Not sure about the 2 specific apps posted, but web applications packaged as electron apps often do so in order to easily escape the normal browser sandbox without having to prompt for permissions? Or even call into native code which would be impossible from a web app.
I would not think that because an app is electron based, it is sandboxed from your system.
Ideally if you can run the same app under your normal web browser, you'd be fine. I see many people install the Slack app for example, but the web version works just as well within the full browser sandbox.
Zotero is a XUL application, not Electron. The soon to be released version 7 is a major rewrite and is based on Firefox.
Zotero is one of those crucial applications where Flatpak is nice. I want it to be self-contained. I don't want anything messing with it that could lose me weeks of research.
I always assume Electron apps are going to be more vulnerable than your average app. They tend to have the same vulnerabilities as web browsers (which are a big target for exploits, given the reach) but have 2 additional layers of "bureaucracy" (the app's own update schedule and Electron's) before the underlying vulnerable engine is patched.
I always get a little annoyed at "X is not the future" posts, because we don't live in the future, we live now.
Much as we like to personalize them, computers are tools that we use to get things done, and Flatpak is among the better things we have right now for dealing with the awfulness of Linux packages. If a better thing comes in the future I'll use that.
But this is Hacker News where people who build things hang out. If you are fine with the state of the world and are not involved in advancing it, good for you, you can close this discussion. But many people around here are building the next things. And in that context it makes sense to think about what's the future and what's not.
You sound like every other person responsible for the rampant NIHism in Linux and the reason why the "year of the Linux desktop" is in the year 6002 at this rate.
Nix, NixOS. It has firefox LTS and nightly, and the other two packaged as well. You can in fact freely go back and install any combination of versions/configurations for each of these, even multiple times.
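For example, on a flakes-enabled Nix, pulling current applications onto an otherwise stable system looks roughly like this (a sketch; the attribute names such as `joplin-desktop` are assumptions and may differ between nixpkgs revisions):

```shell
# Install current builds from nixpkgs into the per-user profile.
# (Attribute names are assumptions; check nixpkgs for the real ones.)
nix profile install nixpkgs#firefox
nix profile install nixpkgs#joplin-desktop

# If an upgrade breaks something, roll the whole profile back.
nix profile rollback
```

The rollback is the part traditional package managers can't offer: every profile generation is kept, so any combination of versions can be restored.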
Flatpak mixes up packaging and sandboxing; IMO these two should not be so tightly coupled. Especially since it doesn't even solve the former properly (not sure about the latter).
Isn't that just because you aren't using a decent distro? I mean, I am surprised that Firefox or LibreWolf or whatever fork is not packaged by Debian in a current version.
That's kinda Debian's shtick. Its ethos is to be rock-solid stable no matter what. No changes but bug fixes, and it gets those in a very timely manner. It's an amazing distro for what it is, but on a desktop or workstation sometimes you just need up-to-date software, and for that, Flatpak makes a whole lotta sense. Stable OS core with the latest applications shipped on top. It may not fit your own use case, but it's one of the leading distros in existence; it far exceeds just "decent."
Neither are snaps. The future, for regular daily use, is AppImages. Much like macOS dmgs, these are a "single" file (from an end user's perspective) that you download, double-click, and run. That's it.

Ideally we'd see more work in this area. I am slowly trying to figure out how to automate builds for GUI apps, and I am considering somehow setting up an inexpensive server to pull, package, and submit AppImages to AppImageHub. A crowdsourced effort would be cool. Imagine a distro where, by default, if you place apps in ~/Applications you can just execute them. It comes with risks, but having them "signed" by AppImage would mitigate a lot of them.

Linux, while excellent as a desktop (and I am referring to the many awesome distros available), is really user friendly. Let's make it even more user friendly.
While I love bleeding edge packages, normal users don't want to have to deal with compilation workflows and they expect software to work a year after it has been installed.
To name an example: Steam is a nightmare, even though they try their best to host and ship as many libraries from Ubuntu as possible. The experience with botched Steam installations shifted my perspective a bit, and I think for these desktop use cases, especially proprietary software, AppImages are the way to go.
It's the easiest way to guarantee behavior and stability of software, while not putting the virtualization burden upon the end users.
Snap and Flatpak run a daemon to integrate into your system. AppImage has an optional daemon that gives you the same integration: https://github.com/probonopd/go-appimage
It handles making the image executable, automatic upgrades, and adding the application to your app list, with no need to move it into your $PATH.
The only other thing you might want to do is symlink a friendly name for CLI use.
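Without any daemon, that manual flow is only a couple of commands; a sketch with made-up paths and file names (the `touch` stands in for an actual download):

```shell
# Illustrative paths; adjust to wherever you keep your AppImages.
app_dir="$HOME/Applications"
bin_dir="$HOME/.local/bin"
mkdir -p "$app_dir" "$bin_dir"

# Stand-in for a real downloaded image (normally it comes from the browser).
image="$app_dir/MuseScore4.AppImage"
touch "$image"
chmod +x "$image"

# Give it a friendly name on $PATH for command-line use.
ln -sf "$image" "$bin_dir/musescore"
```

After this, typing `musescore` in a terminal launches the image, assuming `~/.local/bin` is on your $PATH.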
AppImages waste even more disk space. At least with Flatpak I only need to install libraries once, with AppImages I'm getting a second copy of everything with every application I download.
If AppImages were actually directories, like they are in macOS, I could at least run duperemove to fix the disk space issue on BTRFS, but with full images that becomes a lot harder.
As far as I'm concerned this is a good thing. I'm completely and utterly tired of fixing the next 14916498th bug that's caused by package X and Y having shared dependency Z, then X updating Z to Z+1 and breaking Y. In the two decades I've used Linux I've hit this issue countless times, so as far as I'm concerned the nix-style or AppImage-style "waste space" solution is strictly better. I'm sick and tired of this "upgraded MuseScore 3 to MuseScore 4, so now Ardour is broken" insanity; this is not how software is supposed to work. I want each package to have its own dependencies, period.
> I'm completely and utterly tired of fixing the next 14916498th bug that's caused by package X and Y having shared dependency Z, then X updating Z to Z+1 and breaking Y
This is not "Linux", it's users who mess around with (and ultimately break) dependencies in order to get the latest version of programs at any cost.
If one uses, say, an Ubuntu version and 3rd party repositories for that version only, they're not going to get any dependency issue, since all the programs will share the same library versions (dependency bugs do happen once in a long while, but they're rare, and they're exceptions).
I used Ubuntu and even Debian for many years. I strongly disagree with you; I actually find Arch significantly more stable than Ubuntu. At least in Arch, when dependencies break, both X and Y are swiftly upgraded.
At this point, I'm also tired of convincing others that Linux package management is broken beyond repair. Until you experience my pain, you won't be convinced. I understand many people will never be able to sympathize with it, but all I can truthfully and honestly say is that I'm a very experienced GNU/Linux user; I've used all kinds of distros, from Ubuntu to Fedora to Arch to Gentoo, and no, it's all a complete and utter mess when it comes to dependency management. I end up spending hours every month fixing broken versions.

The sweet spot is using pacman for very basic things and AppImage for everything else. I don't care about memory efficiency; I want `MuseScore4.appimage` to contain everything about the app, and I want it to behave exactly the same every single time I click on it. No, I don't tolerate even the slightest behavior difference. I do not want glibc to upgrade from x.y.z.t to x.y.z.t+1, because it causes insanity when t+1 changes the behavior of some random software synthesizer I happen to use. I know this probably doesn't make sense to 99% of users, but maybe I have a special case.

Case in point: in 2023, while trying to ship a lot of work I'm producing, a single update broke tons of my workflows, and I finally decided that anything not frozen in an AppImage is cursed. If this doesn't bother you, I respect your patience and expertise; I just hope some people can understand the pain other users go through.
I have a degree in CS, I write code full time, I manage linux containers in my day job, and I still can't manage the mess I have at home in my local Ubuntu/Arch installs. I don't know how people who don't know how to code do, but all I know is that I'm done spending hours at a time on fixing glibc at this point. I just want to work on my hobbies, thank you very much.
EDIT: And before people come here, no, OSX and Windows are even worse. I won't consider using them either.
Zypper has excellent dependency resolution; it hasn't given me any trouble. DNF is just as good, from what I hear. Apt and yum are an earlier generation of package manager, and I've seen them get themselves into jams on more than one occasion, so I understand what you're saying and can sympathize with your perspective, but those shortcomings aren't inherent to the premise of traditional package management.
I use AppImage for the sort of software that won't get packaged (proprietary or obscure) and zypper for pretty much everything else (excepting a few programs I build myself due to reasons.)
This is very gross disinformation and/or incompetence (which explains why "others" won't listen).
The default Debian tools, apt and apt-get (and related tools like gdebi/aptitude), won't put the system in an inconsistent state unless the user forces them to. Dpkg will, and I suspect that's the tool you're using.
Even when one adds a repository with incompatible package versions (say, the system is Focal and the repository distribution is Mantic), the upgrade will stop before upgrading the packages.
What's happening is that you're forcing broken dependencies, likely to chase the "latest and greatest versions", then complaining that the package manager is broken.
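To make the distinction concrete (package name made up): apt resolves the full dependency set before touching the system, while bare dpkg does not, which is how systems end up half-configured:

```shell
# apt-get checks dependencies first and refuses to proceed
# if they can't be satisfied (works with local .deb files too).
sudo apt-get install ./some-app.deb

# dpkg -i installs the package regardless and can leave it
# half-configured if its dependencies are missing...
sudo dpkg -i some-app.deb

# ...after which apt can repair the state:
sudo apt-get -f install
```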
Uh... Such is linux life, when you're hit with issues and complain, it's "This is very gross disinformation and/or incompetence". No wonder some people won't bother dealing with it.
Again, I truthfully experienced all of this and my day job is to handle things like this, so I personally know that it's neither of those things.
What's happening is a variety of issues (not just one), anything from the package maintainer not knowing that a version is incompatible with their package, to a dependency being declared wrongly in the package. For example, if I write package X that depends on numpy but don't realize numpy 2.24 breaks it, Debian will happily accept the update, breaking my package for users. It's the maintainer's fault, but ultimately, if the OS didn't upgrade dependencies for every program at once, it would have kept working. That is why freezing dependencies is the way to go in terms of stability.
There are countless other scenarios, for example using a program compiled outside of Debian that dynamically links system libraries, then Debian upgrading a dependency without knowing about the third-party program. It's all a trap for broken software.
Linux has definitely come up with the worst solution to DLL hell, but Flatpak solves this problem neatly without dumping huge files all over my disk (and without making me manually create shortcuts for them). It also solves the update problem much better (update applications all at once rather than open them one by one and waiting for the "there's an update, click here to download" popup to show up).
I generally stick to the distro repos exactly because external software is a pain to run, with Flatpak as a fallback for applications not packaged in any repo. More and more third party applications are being compiled statically, though, which has solved a lot of issues for me when it comes to these externally managed programs (though it also causes annoying security issues).
Technically speaking, AppImages are compressed files, so maybe someone clever can come up with a way to dedupe them. With mass adoption, I think this issue may be solved by someone concerned about disk space. Personally, I (and probably many casual users) am not as worried about space as about convenience. But I think your point is valid, and it would make an interesting side project for someone out there. For me, I'd like to see more adoption; the nice-to-haves will follow.
Optimizing for disk space in 2023 seems misguided at best, all computers come with hundreds of GB at least, often one TB or more. Most people never fill these unless they download media, and people who download media have additional hard drives anyway.
Ease of use, RAM usage, startup time and security should all rank higher than disk space.
I'm not going through the effort to upgrade my 250GB system SSD because a random podcast app can't figure out how to distribute a binary without shipping half an OS. Most people aren't going to upgrade their storage at all.
We're talking about Linux users here, not your run-of-the-mill generic end user. If you download an AppImage, you're most likely an advanced computer user. Maybe you don't download media yourself, but you'll probably have huge node_modules/cargo/venv/maven directories slowly clogging up your drive.
AppImages aren't easy to use (you can't double-click them like on Windows, and they all have to provide mechanisms of their own to register a shortcut in the system menu), their startup time is affected by the compression tying all the files together, and the security is no better than any other application's (worse, in fact, because when I update my system's openssl client I'll still need to wait for every AppImage program to publish a security update with the patches included, which usually takes months or longer). I don't know about RAM usage, but the duplicate libraries being loaded by the executables will cause at least a few megabytes of unnecessary RAM usage.
AppImage is mostly there to help the developer spread the software. The benefit for the user is "at least there's something I can run, I guess".
> AppImages aren't easy to use (you can't double click them like on Windows
Uh, but that's exactly how they work. What file manager are you using that doesn't recognize them? Also, you can just chmod +x them and execute them.
As for their size, my XPS13 has a 250GB SSD too, not easily upgradable, but AppImages just aren't a significant burden to me. Once a month or so I move downloaded media to my NAS, but all of my AppImage applications together add up to maybe 15GB or so, the redundancy in them is really the least of my concerns.
When I double-click an AppImage, I get an error ('Could Not Display "Something.AppImage"\n There is no app installed for "AppImage application bundle" files. Do you want to search for an app to open this file?'). That's in Nautilus. If I click on the file from the Firefox download thingy, it asks me how I want to open "file links", with no suggestions for what to do next. Thunar asks me to pick a program to open "AppImage application bundle" files with, but has no recommended application.
Dolphin does seem to support AppImages natively and will execute the file (after clicking through two security prompts). I guess KDE users are AppImage's target audience, then?
Until a few years ago Apple was shipping new Macs with 120 GB. Probably the same was (is?) happening with Windows vendors. TB-level disk space isn't ubiquitous yet.
The difference between Apple products and non-Apple products is that you can easily upgrade the latter. My Core i9 laptop (a CPU that easily competes with anything Apple can offer, so it should be fine for heavy workloads, albeit at a battery cost) has 5600 MHz RAM and 7 GB/s dual-NVMe support, both upgradeable. Most folks I know with non-Mac laptops can at least upgrade their hard drives and often the RAM. Disk space and RAM are not an issue for non-Apple users. TB-level disk space is absolutely feasible outside that platform; you can easily buy a 2TB NVMe capable of 5 GB/s for less than $100.
Okay, but how many dedupable dependencies does the average AppImage user have across their programs? A few hundred megabytes? A few gigabytes at most? Even 120 GB is big compared to this problem.
I'm not familiar with details of any of these packaging formats. Are they all single-file? Is there any technical reason they couldn't be unpacked to allow the fs to deal with the deduping?
AppImage is basically a runnable .zip file. It contains a loader and an application with all of its dependencies, packed into a single file. If the image was uncompressed, basic extent-based deduplication could save space, but as far as I know the compressed AppImages just don't match up the files like that.
I don't believe Snap natively offers any deduplication. Snap also uses compression (squashfs as far as I can tell) so deduplicating filesystems are equally powerless.
Flatpak has its own deduplication system (that will confuse df/du if you try to run it and often leads to confusion about download time). Through tools like https://gitlab.com/TheEvilSkeleton/flatpak-dedup-checker/ you can check how effective the dedup process on Flatpak is. On my machine, Flatpak is using 10.5GB of disk space, but reports 13.75GB of files. Running duperemove on /var/lib/flatpak found two identical extents (belonging to cached files) that didn't get deduplicated already.
As for why they couldn't be unpacked: I don't know. They'd take up more disk space, I suppose. Many standard Linux file systems don't have any form of deduplication mechanism built in, so I'm not sure what the balance would be. I would appreciate the ability to decompress AppImages/snaps without writing custom wrappers for them, but if you're on ext4 like many (most?) Linux users, you'll only see the downsides.
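For what it's worth, the type-2 AppImage runtime can unpack an image by hand, after which a deduplicating filesystem does see individual files; a rough sketch (the directory layout is made up):

```shell
# Unpack the image; this creates a ./squashfs-root directory
# containing the application and its bundled libraries.
./Some.AppImage --appimage-extract

# On BTRFS, deduplicate identical extents across unpacked apps.
duperemove -dr ~/Applications/unpacked/
```

Of course this trades away the single-file convenience, which is the whole point of the format.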
AppImages sounds good on paper, but when you download one, get library errors and there's nothing you can do about it, you start thinking that Flatpak with its runtimes was not so bad idea after all.
The question is which method is more prone to issues. Perhaps a QA process of sorts could be built into the "app store", to at least check that the app boots without errors? Also, I am not saying Flatpaks or apt should go away, just that it would be nice to have a reliable way to download a package and run it, no hassle.
Do AppImages have a proper, well-maintained store now, with a good CLI for maintaining them? When I moved to portable apps last year, I checked out Flatpak, AppImages and Snap, and moved to Flatpak plus some AppImages. But while Flatpaks are easy to maintain, the AppImages are basically dead weight. Some can be updated, some can't. There were a number of different tools for maintaining them, no one-for-all solution. Similarly, there were multiple stores/sites for finding new AppImages, none of which seemed particularly trustworthy. It was not very impressive at the time.
Currently, and that's an issue I've also encountered, there aren't many apps on appimagehub.com, which is why I am hoping more effort will be put into populating it. But I don't think it should have a CLI package manager; I think it should be GUI based and focused on convenience. I don't think it should replace apt, yum or pacman, just be a new way of managing packages that is focused on convenience. And perhaps a convention that, in a distro, if you download apps to ~/Applications then these will be auto-updated, provided they are sourced from an "app store". Trust can be fixed with a set of built-in security certificates, as apt does, but in a basic, low-maintenance, good-enough fashion. When I switch to casual user mode, or even for daily work, good enough is OK as long as it's super convenient.
This works for macOS because there are system frameworks you can rely on being installed (like AppKit) that actually have backwards compatibility (at least for a while). Linux distros don't have this (not even for glibc), so you'd have to put literally everything in your app image folder. Every GUI app bundling Qt or another GUI framework would be a ridiculous waste of space.
I feel AppImage is missing metadata. It'd be great if, when the user downloads an AppImage, it were automatically linked to the MIME types it can handle, showed up in relevant menus, and could properly register itself with any services it needs (e.g. cron).
i completely agree. for me, the only issues i've had with appimages are "housekeeping" UX:
* i know they can technically be updated but i really have no idea how. would be cool to see a de-facto solution similar to sparkle[0] on mac
* i use appimagelauncher[1] to integrate with my menus etc but sometimes it works and sometimes it doesn't? i haven't really figured out the rhyme or reason
Just built a new Gaming Linux PC which is intended to replace my aging Dell XPS 13 as my daily driver. Decided to go all-in on flatpaks, as I've been trying to stay away from rpm-fusion.
The Steam Flatpak has been an adventure, to say the least. I added a second SSD just for games that gets automatically mounted on boot, and I gather that having the games installed somewhere outside of steam's /home/ directory was not jiving with flatpak's security model. It took some non-trivial editing (thanks, flatseal) to finally let the Steam flatpak be able to write outside of its own directory and install the games.
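For reference, what Flatseal toggles can also be done from the command line with flatpak override; a sketch (the mount point below is an assumption, substitute your own):

```shell
# Grant the Steam flatpak read/write access to the games SSD.
flatpak override --user --filesystem=/mnt/games com.valvesoftware.Steam

# Inspect what the app is currently allowed to touch.
flatpak info --show-permissions com.valvesoftware.Steam
```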
I still get occasional weirdness, especially on older games. I wasn't hearing any sound effects on Team Fortress 2, which I eventually discovered was tied to an selinux alert. At last check in, I still can't launch CS:Go, because of some backend problem while trying to play the opening movie...
Fine-grained permission systems like SELinux end up introducing bizarre error conditions that are outside upstream's test suite. Those error conditions are often exploitable.
They also assume that having distributions and end users produce a multi-MB security policy written in an arcane, poorly-documented policy language will somehow lead to a correctly configured sandbox.
I greatly prefer the OpenBSD approach, where the upstream application developer builds calls to things like pledge(2) into their program, and then tests that it behaves correctly before releasing it.
>I gather that having the games installed somewhere outside of steam's /home/ directory was not jiving with flatpak's security model. It took some non-trivial editing (thanks, flatseal) to finally let the Steam flatpak be able to write outside of its own directory and install the games.
I think flatpak could use a built-in notification method of some sort for adding exceptions to the paths it allows access to, though I imagine it would still require effort on the application's part, which maybe would never happen (especially with Steam using its own custom file browser).
>At last check in, I still can't launch CS:Go, because of some backend problem while trying to play the opening movie
I've had no such issue (and it's worth noting the -novid launch option), but regardless valve still treats Linux as a second class platform despite the steam deck which is fairly disappointing.
The issue I do have is that the new overlay will crash cs:go in openGL mode and Vulkan mode has massive stutters.
I'm optimistic CS2 will be better, but to be determined.
I would just bind-mount the appropriate part of the second disk as /home/steam, or whatever. I don't see why one should persuade a program to follow a complicated setup when the setup can just be made straightforward with OS tools.
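Concretely, something like this keeps the games inside a path the sandbox already allows (the paths are illustrative, not the actual Steam library location on every setup):

```shell
# One-off bind mount of the second SSD into the library path Steam expects.
sudo mount --bind /mnt/ssd2/steam-games /home/user/.local/share/Steam/steamapps

# Or make it permanent with an /etc/fstab entry:
# /mnt/ssd2/steam-games  /home/user/.local/share/Steam/steamapps  none  bind  0 0
```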
Problems like this seem like a natural consequence of trying to mash together distribution and sandboxing, rather than leaving the sandboxing to the OS. It’s so much easier to build something that “just works” if you’re concerned only with distribution (really, it’s as simple as macOS style app bundles. Yes it takes a bit more space but the UX improvement is worth it).
Following this, it seems like there should be an XDG standard for sandboxing which distros are free to implement whichever way they feel is best. With that, Linux app packaging solutions need only worry about playing nice with that spec.
I had similar problems with Lutris, but never got around to fully solving them. Now I'm using the native Debian package, and have never had any problem again. Flatpaks overall are working fine; I use them for portable apps on my mobile SSD. But the security can be a hassle on a hacking-friendly system which does a lot outside the expected. So one should be aware that Flatpaks might be a tad different from native packages.
Same for me. I had really weird stuff to debug. I could not save from Firefox to /tmp, for instance (and I really like using /tmp for downloads I need immediately and that can then be removed).
I found KeePass as a snap/flatpak once: so many extra layers of complexity for a password manager? No way that's good for security.
If there's a security issue, I want to be able to update libraries and such independently of applications rather than waiting for the application devs to do it.
I don't think it's caused a problem for me in the last 10 years or more. It may have and just forgotten about it, though. If it happens, it's not a huge problem to fix it.
I just want to point out that Windows actually has similar packaging problems. For some reason, Windows didn’t ship with C/C++/.NET runtimes. So practically every app ships with either a copy of the runtime DLLs or an installer to install those runtimes globally. Every Windows installation inevitably gets a million msvcrt dlls across random places, never getting any security updates. I believe this situation is a lot better on recent versions of Windows 10/11.
That lets Microsoft break ABI on every release if they see fit. I thought the model was to install the runtimes globally, so you only have one copy of each version, but you install whatever version a given program wants. So you end up with lots, all somewhat different to each other.
Flatpak seems to follow a similar path as many other Linux technologies like Wayland or Systemd, in the sense that they seem to arouse the anger of a small but very vocal crowd who really can't stand any challenge to the status quo. So this is the template of the story:
There is a new tool or workflow trying to replace or complement an old one. This new tool tries to solve many complex problems that the old tool, usually designed decades ago, doesn't solve well in the current world. To do so, obviously, some sort of compromise is required: the new tool won't do certain things that the old tool used to do, but in exchange it will do a lot of new things that many users really want. However, this small crowd is really pissed off by this, without understanding that there is no holy grail solution and some sort of compromise is always required. Additionally, as it's natural with any new technology, the very first incarnations of the new technology are not very mature and there is a lot to polish, a lot of tooling missing, and a chicken-and-egg problem of not enough users to drive its take-off. And the small crowd will use all this as much as they can to try to prevent the world from moving on.
However, as time passes, the new tool starts becoming more mature, the obvious shortcomings get fixed, and all the new possibilities that the tool enables start to really shine. And while the initial compromise will always remain, the majority of users realize that the tradeoff was worth it.
This has happened with Systemd, it's starting to happen with Wayland, and I believe it will happen with Flatpak as well. We'll see.
> without understanding that there is no holy grail solution and some sort of compromise is always required.
Why do you assume that such people don't understand this? Sometimes those compromises mean that the tool can no longer accomplish something important to some users.
I don't think that being upset that software has become less useful is terribly unreasonable or hard to understand.
> Additionally, as it's natural with any new technology, the very first incarnations of the new technology are not very mature and there is a lot to polish, a lot of tooling missing
And while the software is in that state, it shouldn't be forced on anyone. It's not unreasonable for people to want to use software that actually works well in the present.
> arouse the anger of a small but very vocal crowd who really can't stand any challenge to the status quo.
This sort of narrative is common cope from developers who make substandard software. Ego blinds them to their own limitations so they blame the users. Just look at the difference between Pulseaudio and Pipewire. Pulseaudio is widely hated and the developers said it's because people just hate new things. But Pipewire is newer and people love it, the supposed reflexive hate for new things doesn't manifest for Pipewire. So what's the difference? Pipewire does what it's meant to and doesn't cause problems for people. Pulseaudio caused endless grief, that's why people hated it. SystemD earned ire by causing people problems; had it not done that most users never would have realized they were using it in the first place.
While Pipewire is certainly better than Pulseaudio, let’s not forget that the latter actually surfaced the millions of bugs in sound drivers by simply using them in a more advanced way than just putting out audio, so while it was buggy initially, most of those stem from a layer below.
Pulseaudio is buggy to this day, and switching to Pipewire is most often the easiest way to fix problems with audio, particularly bluetooth audio. They're both meant to do the same thing, but using the same exact drivers Pipewire just works better. This is why all the major distros are switching to Pipewire; a transition which hasn't earned the ire of the users who are supposedly mad at anything new.
Systemd and Wayland actually solve a real problem, and they do it well. (And I agree that there is terribly loud and more often than not technically unfounded criticism going on in each such thread.)
In the case of Flatpak, we have a so much better solution in the Linux space (Nix) that I feel Flatpak is blindly going in the wrong direction. Package management is hard, but it finally has a solution. One might disagree with the implementation of Nix, but the idea is sound, and this is the first thing ever that doesn't just push the problem a layer down but actually solves it. The Linux world should definitely ride this momentum, similarly to git's success back then.
We're currently using AppImage because that was the first thing that worked for us, but most of the reasons are the same either way: We want to spend time developing the software, and that means it's hard to justify packaging every release for a dozen distributions. And I'd say nobody particularly wants to do it.
We also expect our users to keep reasonably up to date, not whenever it's convenient to the distribution. Code changes can change the networking protocol, and some of those can require everyone to upgrade.
So at least to me it makes perfect sense to package some kinds of applications this way. Maybe not KDE's calculator, but definitely things like games and tools with specialized markets, where it may be difficult to find people wanting to do the work of packaging them for a distribution.
I install only debs because I don't want too much redundant code around and having to update all of it, if it ever updates. I prefer Debian to care about updating shared libraries.
I make few exceptions, none for snaps and flatpaks so far.
I installed Firefox from the tar.bz2 on their site, as I did with Windows before my switch to Linux in 2009. It auto updates and so far it's OK. I'm on Debian 11, I'll upgrade to 12 to stay more current.
Other exceptions: docker containers for redis and the PostgreSQL versions I have to run for compatibility with the production servers of my customers. I use asdf for that sometimes and also for languages, of course. We can't rely on the versions coming with distros.
If I really had to use Overte, I would make an exception for that too.
While the problems you cited certainly need to be solved by the Linux ecosystem, I don't see why that solution should involve the heavy-handed sandboxing with a thousand overlayfs, containers and whatnot.
I wish there was a more straightforward solution that didn't have so many complicated moving parts, more like the static binaries that I get from Go or Rust.
One reason is that big applications can have many dependencies, and once in a while I find that something dlopens a library from the host filesystem, finds something incompatible, and crashes. So I really want my stuff to run in a sandbox where I know exactly what it's loading and there are no surprises.
The other is that we've got a complex system under development and there may well be security exploits. I like the idea that if somebody breaks our code, it's still going to take some work to get to the user.
One major headache with trying to run precompiled binaries on Linux is that if they were compiled against a newer version of glibc than the target machine has, they won't run. Back while working on Factorio, I was trying to get around this problem with endless Docker containers, but my coworker Wheybags came up with a much simpler solution: at compile time, link against the oldest compatible version of each glibc symbol by including a header (glibc versions its symbols, so you can pin references to the older versions): https://github.com/wheybags/glibc_version_header
It's too bad this hasn't been standard practice for the past 30 years!
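For the curious, this works because glibc ships multiple versions of many symbols, and the dynamic linker records which version a binary was linked against; a header full of `.symver` directives pins each symbol to an old version. Below is a rough, hypothetical sketch of the selection logic such a generator might use (illustrative only, not the actual code from that repo):

```python
# Hypothetical sketch of a glibc_version_header-style generator: for each
# symbol, pin the newest glibc version that is still <= the oldest glibc
# release you want the binary to run on.

def parse_version(v):
    # "2.2.5" -> (2, 2, 5), so versions compare numerically, not as strings
    return tuple(int(x) for x in v.split("."))

def pick_symver(available, target):
    # available: glibc versions that provide this symbol, e.g. ["2.2.5", "2.14"]
    # target: oldest glibc you want to support, e.g. "2.12"
    ok = [v for v in available if parse_version(v) <= parse_version(target)]
    return max(ok, key=parse_version) if ok else None

def symver_directives(symbols, target):
    # symbols: {symbol_name: [glibc versions that export it]}
    lines = []
    for name, versions in sorted(symbols.items()):
        v = pick_symver(versions, target)
        if v is not None:
            lines.append(f'__asm__(".symver {name}, {name}@GLIBC_{v}");')
    return lines

# Example: memcpy got a new, incompatible version in glibc 2.14; targeting
# glibc 2.12 means pinning the older 2.2.5 symbol.
print(symver_directives({"memcpy": ["2.2.5", "2.14"]}, "2.12"))
```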
Article:
> How many people on Earth will truly understand how this all works?
I feel this. And I think the software industry in general seems to have decided the answer is “No one does or will, and meh, we don’t care.”
This is why every mysterious issue with every OS seems unsolvable. Every app is spewing random exceptions to the logs 100 times a second because in fact it does not just work together, most things barely work and nobody cares. We don’t get to have good software that works reliably anymore.
Like my phone that turns its ring volume to 100% every few days. It’s not a rogue Shortcut, cuz Shortcuts doesn’t even have that feature (only Media volume). No one will ever know why that occurs, because no one understands the whole stack from top to bottom. And in a way that’s the best case scenario with such a walled garden controlled by one “benevolent” dictator.
Using all these packaging frameworks and libraries at once, no one will ever be able to make sense of some kinds of problems, because everyone is cargo-culting some large section of this stack of complexity, because a single person couldn’t hope to make sense of everything.
This debate will never die, but while people have been complaining about it, Flatpak has quietly become a better way to package software for end users. My criterion is that I'm a user, I don't care about what's elegant to developers -- and I have fewer problems with Flatpak than I have with non-Flatpak software. The vast majority of Flatpak problems I do have as a user come down to sandboxing permissions that I actually quite appreciate. The (very) few architectural problems are problems I would have had with other bundling systems too.
"Developers are lazy" -> No, no user ever wants to debug dependency issues, and developers can't get rid of dependency issues. This feels like a repeat of the Rust debates where C developers kept complaining that good developers just don't have memory errors. Okay whatever you're very talented, congratulations; but most software isn't written by people who can reliably support multiple distros and lowering the skill requirements to maintain software is good actually because I use hobby projects all the time. Even outside of hobby communities, GoG's Linux installer is so borked that half of the time it's easier to install the Windows versions of the games and run them through Wine (because then you can use Bottles which provides dependency isolation). And I am completely convinced that dependency management is the problem -- Flatpak apps don't have these issues, at least not nearly as many.
I'm not saying everything should be a Flatpak, but certainly at the very least most Linux games should be, anything that's graphical that isn't being distributed through an official package manager is a good candidate to at least consider Flatpak. I'm always grateful when I can install a graphical app through Flatpak instead of AUR.
Is it the future? Flatpak critics spend a lot of time bashing Flatpak and very little time proposing equivalent fixes or acknowledging why Flatpak exists in the first place. If those issues were solved and the solutions popularized on mainline distros, maybe Flatpak wouldn't be the future. But I'm not holding my breath. This article proposes GoG's system as an alternative and says the existing problems are minor and easy to solve. 2 years later, I have literally never gotten a GoG native Linux installer to run without problems on the Steam deck.
I'm not even saying it has to be Flatpak, but whatever system you want to propose (Snap, AppImage, whatever), dependency isolation is very clearly better for end users and results in fewer bugs. "It takes up too much space" just isn't a real critique when the alternative being proposed almost universally fails to run on my hardware.
I don't know, the only Flatpak-packaged application I tried to use today is crashing at startup, and it just makes it more annoying to debug and file an actual bug report.
These docs are a bit sparse and more for debugging some very specific things. If all you want is a stack trace:
- You can use `flatpak install --include-sdk --include-debug THE-APP` to install the SDK and debug info for an app
- Then `flatpak-coredumpctl -m MATCH THE-APP` will use coredumpctl to open the matching coredump inside the corresponding SDK's gdb
I guess mileage may vary; the only thing I can say is that there are multiple instances (particularly GoG, which the article seems to praise) that literally never work on some of my systems.
The few times I've run into Flatpak crashes, they're architecture problems that would have been present in any version of the app, so I'd be doing that work regardless. They're harder to debug in Flatpak, but it's also hell to debug crashes every time I try to install a piece of software. I'll happily take the added complexity of needing to boot a shell into the sandbox if it means I get to debug 50% fewer problems (and in practice Flatpak tends to reduce my number of issues by way more than 50%).
The average user is never going to open a debugger; minimizing the number of crashes is more important for that user than making the crashes easier to debug.
But on the other hand, app crashes aren't that common on Linux, at least on my distro. Whenever I have an app available as an RPM, I choose the RPM over the Flatpak version, mostly to save space.
None of my apps crash except one. And that one is a piece of shit of a proprietary app that is not packaged by the distro maintainers: Microsoft Edge.
> But on the other hand, app crashes aren't that common on Linux, at least on my distro
Strong disagree, my experience is that app crashes are extremely common on Linux if you step outside of official repositories; I say this as someone who literally only runs Linux and nothing else. I'm not necessarily saying Windows is better but... it's not like nothing ever breaks. It's impressive how well developers are able to hold it all together, but my experience is that Linux systems are fragile the moment anyone stops actively managing the dependencies and putting in the work to compile everything to match.
> that is not packaged by the distro maintainers
It is not feasible or scalable for every single app on Linux (even every open-source app) to be distributed and managed by the distro maintainers. And this is what I'm getting at with dependency isolation -- the vast majority of crashes and bugs I see on Linux (and I mean by a massive margin) are due to dependency mismatches and shared dependencies. A lot of Linux software is generally stable if the system looks like what it's expecting. But if you're not going through an official repository where a bunch of volunteers are putting in the work to keep things consistent, then it very often doesn't look like what developers expect.
This is why people run games through Wine instead of using the Linux versions -- it's not because it's impossible to build good native versions, it's because if they don't use the Linux version they can use Bottles. That's the biggest reason; it's about the dependency isolation.
The major distros and the BSDs have shown that, yes, it scales fairly well, given the number of apps provided through distros' official repos.
Most apps that I have been getting from third-party repos were not included because most major distros are US projects that can't ship patent-encumbered libs/apps, not because the distro maintainers couldn't package them.
> This is why people run games through Wine instead of using the Linux versions
No, the main reason is that the Linux versions mostly don't exist, because devs don't want to bother supporting a non-uniform software platform that represents a tiny fraction of their market.
> The major distros and the BSDs have shown that, yes, it scales fairly well, given the number of apps provided through distros' official repos.
I mean... citation needed :) I run Arch and I am not dismissing at all the frankly incredible work that the Arch maintainers do bundling software and making it available. It is a miracle that it works as well as it does, to the point where my Arch systems are often more stable than non-rolling-release distros I occasionally run. Fantastic work by the maintainers.
But it's not a solved problem and I can only imagine how much effort and work is getting burned to keep it running as smoothly as it does. Step outside of the official Arch repos into AUR or (heaven forbid) into completely separate ecosystems and all of those problems come back. And I don't want to ignore the software outside of the repos, I didn't start using Linux so that I would be beholden to some kind of "official" distro app store.
There are tons of open-source applications with no legal barriers in place that are not getting packaged in official repositories for no other reason than that they're niche, and there is a lot of software to package and not enough people to package it all.
And of course any non-open-source games are also going to run into these problems. That's a problem distro maintainers can't solve; it doesn't matter how much work they put in, they can't repackage source-available or closed-source software. "People shouldn't ship that" -> but they do :) So ideally we'd be able to handle that without descending into dependency hell.
> No, the main reason is that the Linux versions mostly don't exist
I'm not talking about games where the Linux versions don't exist, I'm explaining why I'm currently running the Windows version of Inscryption on my Deck even though it has a native Linux port. Do I want to be doing that? No, of course not. But the Linux version doesn't boot, most likely because there's some dependency chain missing or an environment variable is wrong, or... I don't know, I don't want to crawl through forums and debug that myself, I want to play the game.
And I'm not alone in that, it is common advice on Linux to use Proton instead of Linux native versions. And that stinks, it's bad for the ecosystem and it's bad for users and it's bad for games. But the Linux versions have so many more problems because they make assumptions about the underlying system that turn out not to be true. Of course Windows builds also have those problems, but the difference is that they run in a containerized environment that gives them the system they expect.
I would love to as well, if the Linux versions would boot up and run on Steam Deck. Even when I was gaming on my desktop, which is pure Arch, I remember regularly needing to edit Linux games or recompile dependencies to get them to work.
Flibitijibibo has some good commentary on when to dynamically/statically link libraries, leaning towards statically linking dependencies when possible to avoid relying on the OS too much. Coincidentally, Flibitijibibo's Linux ports are some of the few where I can just download them and be confident that they're likely going to work out of the box with zero troubleshooting.
It is a general comment about Linux. Arch Linux is a major distro, if people using Arch can't run Linux software using existing packaging systems, the packaging systems are broken.
If the natural result of existing packaging systems is that software only works on distros like Ubuntu and Fedora, then that is very much a general Linux problem.
I suspect GoG tests mostly on Debian-based systems, probably Ubuntu and its variants. On Arch, things get weird (at least in my experience; maybe other people have had better luck).
It's frustrating because Arch is generally more stable for me than Debian, but you can kind of see the niche status of the OS play out whenever you're working with a package that wasn't packaged by the Arch team. When developers have to maintain packages for multiple distros, my experience has been that usually the popular ones get serviced first and the niche ones get serviced last.
I wonder if the author of the article feels differently about Flatpak today (a great deal has changed in the past 2 years, and Flatpak seems to have a vibrant future today).
Like the false sense of security people get from Flatpak because of the advertised sandboxing, or the size of the software being downloaded, or the slower startup of applications.
It is strange to watch everyone fight about snaps, flatpaks, silverblue ad nauseam when Nix (or its full-OS version, NixOS, or the GNU alternative, Guix) has already definitively solved this problem but is still considered too arcane for most people to use.
It only uses the disk space it must, AND every app only accesses the dependencies it needs. It's the best of all worlds (except for the learning curve, which is of course why it's considered "arcane").
As a use-case where its capabilities came in handy just today, I had some old files I encrypted with gpg1 which didn't cleanly decrypt with gpg2. Accessing the old version of gpg, just for this one console and this one time for this one task, was a one-liner: "nix-shell -p gnupg1orig". With that one command it installed everything that version required, and put its executables at the front of my PATH in my shell, so I was able to do the decryption. When I exit, that stuff will get collected on the next garbage collection.
I really like Nix, but I think "is still considered too arcane for most people to use" is a contradiction of "has already definitively solved this problem". Being usable for most people is an important part of solving this problem, which I unfortunately don't think Nix has accomplished. Maybe there is some way to improve its UX while keeping the fundamental model intact, in order to solve this problem.
The core idea of Nix that has basically "solved this problem" is simply trying to control for all possible inputs to a build-time and run-time environment, and lock them all down with hashes, which is in essence basically treating a build like a pure function. (In theory, this should result in deterministic builds and deterministic runs. And in practice, nearly 100% of the time, it does.) The point of a "derivation" is to provide that for code that does not. Nix, and derivations in general, are thus sort of a "scaffolding" over all the existing build tool and dependency managers out there that bring them "over the line" regarding this. If these build tools and dependency managers all embraced the Nix way of specifying dependencies, and they all agreed to store it in a Nix store (i.e. a Merkle tree), then hypothetically, Nix would not even be necessary (except perhaps as a group of small tools to manage the store).
1) This is probably too much to ask of people. 2) This still leaves behind decades of software that would still need to be built in the future and would thus still need something like the "scaffolding".
But again, the fundamental idea is really just this: Treat builds and runs as pure functions. Every other advantage derives directly from that principle. If someone else can figure out how to do this as simple as possible, I'm all for it! In the meantime, I think every developer should read Eelco's thesis paper on this idea: https://edolstra.github.io/pubs/nspfssd-lisa2004-final.pdf
Of course, the OTHER way to solve this problem is just to statically-compile everything, leading to an explosion of disk-space usage. And even then, you wouldn't get guaranteed rebuilds.
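To make the "builds as pure functions" idea concrete, here's a toy sketch (purely illustrative; this is not how Nix actually computes derivation hashes): hash all declared inputs and let the hash determine the output path, so identical inputs always yield the same store path, and differing builds can coexist side by side.

```python
import hashlib

def store_path(name, inputs):
    """Toy content-addressing: the output path is a pure function of the
    declared inputs (source hashes, dependency paths, build flags)."""
    h = hashlib.sha256()
    for key in sorted(inputs):  # sorted, so the hash is order-independent
        h.update(f"{key}={inputs[key]}\n".encode())
    return f"/nix/store/{h.hexdigest()[:32]}-{name}"

# Same inputs -> same path; change any input (here, the compiler) and you
# get a different path, so both builds coexist without conflict.
a = store_path("hello-1.0", {"src": "sha256:abc", "cc": "gcc-12"})
b = store_path("hello-1.0", {"src": "sha256:abc", "cc": "gcc-13"})
```

This is why two versions of a library never clash in the store: they simply live at different hashes, and each app refers to the exact one it was built against.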
I understand why it's a great theoretical solution, but my point is that if hardly anybody is using it, then it hasn't solved the problem, because most people are still experiencing the problem. It doesn't really matter why it isn't being widely used, just that it isn't. Like, if it were the opposite, a very poor theoretical solution to the problem that is very easy to use, but nobody used it because it just didn't solve the core problem well enough, that would also clearly not be a solution. A solution requires both things, it must be both fit for purpose and usable.
That’s just it. Nix has been growing significantly: a third of its users joined in 2022, the year after I FINALLY joined the club (after scoffing at it for... at least a decade? I was one of you, basically...).
Have you worked at a startup? Do you know what a “hockey stick” growth curve looks like? Because Nix may be on the cusp of one.
Its package repo (check out search.nixos.org) has more packages that are ready to download and run than any other Linux distro, while having fewer maintainers than most distros. If nothing else, this alone speaks to the power of deterministic builds and runs. When something doesn’t randomly break, turns out that it needs much lower maintenance and thus fewer people…
The funny thing about NixOS (and I heard about this before I experienced it, which I found intriguing at the time and which I can now say is very real) is that the second you "grok" it... you will want it on ALL of your machines.
So when I dove in, I said "I'm going to figure out how this works and then simplify it." Unfortunately, the closest I've come to that so far (and this is partially due to... having a 2 year old and working at a startup) is this commandline wrapper that makes most of what I need Nix to do, easy: https://github.com/pmarreck/ixnay
I still don't understand the entirety of the Nix language, but this is an excellent, excellent interactive tour: https://nixcloud.io/tour/
> but is still considered too arcane for most people to use.
Devs can keep arguing about what the "best" solution is, but Flatpak already won this debate by its ease of use.
If Nix had the same workflow as Flatpak, it would be a clear contender.
I see this argument all the time with Linux, what is the best vs what is usable. With gatekeeping nerds clinging to whatever is the hardest to use, so they can feel special, vs everyone else using what is easy.
It depends on whether you want the known but finite difficulty up front, or the unknown and possibly much larger difficulty at some point in the future, when you're trying to troubleshoot a hard problem that couldn't happen with a better system, but you're also on a deadline which is now going to slip.
> It's the best of all worlds (except for the learning curve, which is of course why it's considered "arcane")
Which is the most important one. "It's the best at everything except the thing that it needs to be good at, where it's one of the worst" is not a good marketing pitch.
Having surmounted (and benefited from) the learning curve of functional programming, I find your argument of "everything must be easy or it is by definition a failure" to be suspect, AND a cop-out.
Matrix algebra isn't easy either. Guess ML will never be a success! It's literally failing at the thing it needs to be good at!
Not dissenting generally, just want to point out that the author is wrong about the file permissions dialog thing:
> This is the most complicated and brittle way to implement this. It’s also not at all how other sandboxed platforms work. If I want file access permissions on Android, I don’t just try to open a file with the Java File API and expect it to magically prompt the user. I have to call Android-specific APIs to request permissions first. iOS is the same. So why shouldn’t I be able to just call flatpak_request_permission(PERMISSION) and get a callback when the user approves or declines?
On macOS you try to open a file and it’s handled transparently. “iOS is the same” also could use a citation (I don't recall off hand if it is, and kinda doubt it based on the macOS behavior, so I feel a citation is appropriate). I’m slightly confused why the author is comparing Linux desktop with mobile rather than existing desktop implementations of sandboxing… feels a tad disingenuous.
Fully agree. When the user selects a file via the file selection dialog, that automatically implies s/he has given permission to read that file. So the Flatpak libportal approach has really good UX. Having a second popup to grant access to the file would be horrible UX. Which is why apps ask for coarse-grained permissions like "access to all files" in order to bother the user as little as possible with multiple permission dialogs. Which then kinda defeats the point of sandboxing. I'm reminded of how Android apps need to know your "location" in order to scan for wifi networks.
In general I think all permission dialogs should be reframed as selection or confirmation dialogs.
• Open file dialog -> grants permission to read that file.
• Open file for edit dialog -> grants permission to read/write that file.
• Save as -> grants permission to read/write that file.
• Select which wifi network to connect to -> grants permission to use internet
• Do you want to display events in your neighborhood? -> grants permission to location data
• Select which camera & mic to use for this call -> grants permission to record video & audio
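What all of these have in common is sometimes called the "powerbox" pattern: the trusted system runs the chooser, and the object handed back to the app is itself the permission. A toy sketch of the shape of such an API (all names here are hypothetical; this is not the actual portal interface):

```python
class FileGrant:
    """A capability: access to exactly one file, in one mode, nothing else."""
    def __init__(self, path, mode):
        self._path, self._mode = path, mode

    def open(self):
        return open(self._path, self._mode)

def powerbox_open_file(chosen_path, mode="r"):
    # In a real implementation the file dialog runs *outside* the app's
    # sandbox; the user's selection IS the permission grant, so no second
    # "allow access?" prompt is ever needed.
    return FileGrant(chosen_path, mode)
```

The app never browses the filesystem itself; it only ever holds grants for files the user explicitly picked.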
--
I have to say though, apart from that permissions thing, the author makes a lot of good points I hadn't realized before.
> Personally, I’m much more interested in how to get Excel and Photoshop on Linux rather than untrustworthy drive-by apps and games, so I don’t really care about sandboxing, permissions, portals, app stores, alternate runtimes or really any of the stuff Flatpak does.
Guess what Excel and Photoshop will want if they were to ever port their software to Linux.
A related part of the problem is apps that have a large number of dependencies. We should all be careful about which dependencies we use, keep that to a minimum, and try to use things with a stable API.
The other part is library developers need to aim for backward compatibility so apps don't need to care so much about what specific version they're using.
Is there anything that offers mobile-style sandboxing & permissions API like described?
I'd love that, but I'm not even sure how it would work, I don't want it via walled-garden app store where you basically have to target it as an extra platform, because of dealing with those APIs... It would need to somehow just sort of slot into Unix, and if you didn't have it 'enabled' on the system it would just plough on uncontrolled as it does today.
What's the story or usual recommended practice on NixOS? Seems like the overlap with security-minded types would be quite high, and if you did use something like Flatpak, wouldn't that interfere with Nix's own management? (Or at least not use it.) (I didn't learn much from the Flatpak NixOS wiki page.)
Just goes to show the amount of third-party fragmentation that goes on in the Linux desktop ecosystem, with tons of competing alternative system components and now installers, all in conflict with and fighting against the user's system.
Snaps are only in the default install on Ubuntu, and Flatpak is not in the default install of other Ubuntu-based distros [0]. Even other distros have neither in their default installs.
It is not the same as macOS, with its first-party installation methods provided by Apple that just work.
On my old NVIDIA (never again) laptop, Flatpak left around 30GB of NVIDIA runtimes lying around. Each version approaches 1GB in size, and Flatpak will install updates for all versions of the driver you have ever installed. Downloading a few GBs just to update Firefox is just bloody annoying, even with cheap disk space and fast internet. Fixing it was easy enough, but it took figuring out what's going on, and requires regular maintenance (two things the target user for this project probably wouldn't know how to do).
The drivers also never work properly. I've attempted to use several official game flatpaks, and have run into various forms of crashes (including the entire system stalling).
I only use Linux for daily use, but deploying software on it is the worst of any OS that I know of, with Windows being the best. Every update can potentially break things, because who knows what changed in that library you depend on, and it's not like you can avoid that by shipping static libraries, since for whatever reason everyone has conspired against static linking; I'm guessing because the binaries take up more space?
Instead of having some minimal set of conventions for where things are supposed to be stored, instead of allowing static libraries to actually work, it seems the solution is now to include the whole system instead. After all, storage is cheap, right?
Flatpak is good overall, but it has one thing I dislike:
- It is not oriented toward CLI apps.
Apart from that, I like it for its convenience and I think it will be the future (or a similar approach will be). The reason I think that is that, putting myself in the shoes of a distro maintainer, I can see the appeal and benefit of isolating the system packages and libraries from the packages installed by the user. I find it similar to the great relief Docker brought back in the day, when it reduced system administration overhead by reducing the number of moving parts involved in running an app.
• Both use reverse DNS to globally identify themselves, neither actually verifies DNS ownership.
• Almost everything is a bundle, except for CLI apps. FlatPaks on the other hand are being auto-converted from previous packaging systems.
• Bundles don't have dependencies. In theory they can, but in practice they never do. You depend on macOS/iOS as a unitary platform, and bundles declare the minimum OS version they require.
• There is no update mechanism except the app store. If you want that you need to use something like Sparkle. A tool like Hydraulic Conveyor [1] can create a bundle with integrated Sparkle for your cross platform application without needing a Mac. Likewise no attempt to deduplicate redundant files. Interestingly, if you use the latest MSIX tech on Windows then the OS will deduplicate shared files (including libs) that are bundled with apps, in a transparent manner.
• Sandboxing is optional on macOS. If you don't opt in, you are put inside a relatively weak sandbox that just blocks access to a few folders in your home directory and stops you tampering with other apps/the OS. If you do opt in, you get a PowerBox design that's like what FlatPak is trying with portals. There's no way in the UI to see if an app is sandboxed because it's intended as a vulnerability mitigation and not a way to run untrusted code.
• Both bundles and the binaries within those bundles advertise which version of macOS/iOS they're expecting, and the OS frameworks can change behaviors depending on that for backwards compatibility reasons. It's a bit easier to maintain compatibility with Objective-C APIs than with C++, but still, Apple does it for all their APIs including the C++ ones.
.app packages are merely the binary with assets and a bunch of metadata for code signing, entitlements, icons and locales. If you "Show Package Contents" there's not much to them.
What is not a myth, though, is that if you think you have patched some bug or hole in a system library, think again, because Flatpaks are a distro inside your distro (the old "yo dawg" meme applies), sometimes even a few of them. And their update cadence is not the same.
Yes, it means that sometimes they will be ahead on patches, why not.
So at the very least, you have two systems on your system. Given that there are hacks to share some of your fd.o stuff, let's say one and a half.
The fundamental problem to me has always been that people cannot agree on standards, so this a relatively good solution to that problem. I can't think of a better one that doesn't involve people needing to come together to agree on a standard.
Personally I prefer an environment where people are free to experiment and not be held back by backwards compatibility and opinions. Any utility that can automate backwards compatibility is good imo.
I was kinda forced into flatpaks with Fedora silverblue and 10 months later I don't really mind them. They are good enough for a laptop with no special requirements. I have Dino, Steam, Gnome Tweaks, Fractal, Signal, Gimp (crashes a lot), Firefox (because it comes with codecs, unlike the Fedora flatpak), Transmission, Chromium (mostly to run Teams and Outlook for work).
What is the Linux app endgame in an ideal world? IMHO, it's true Linux on mobile, with painless, secure, efficient downloads from app stores. (This way the FOSS impact will skyrocket and the promise will be fulfilled.)
Work backward from that requirement and those constraints. Solutions and designs that barely work for the Linux desktop will never bring a revolution...
Flatpaks break a number of apps that I use. So no, it's not the future. Come up with something that works before making it the solution to distribution packaging.
I have one AppImage installed on one system (of four, at the moment). Zero flatpaks and snaps. That one AppImage appears to be available only as a Windows install or an AppImage. I dread any future where everything is an AppImage (or whatever).
Why not just ship everything with a complete OS? Hell, let's go the whole hog; hardware is also a dependency, it's not just libraries. So let's ship every app as a hardware appliance.
Doesn't matter; Flatpak won out over AppImages and Snaps in adoption numbers.
And the example given here of GIMP having read/write permission to your home directory doesn't hold water. The distro-packaged app probably has the same permissions. At least with Flatpak, denying it that permission is a simple toggle in Flatseal.
The point is that proponents raise the security flag like it is a huge advantage and you could trust anything coming from Flathub, while it is mostly pixie dust.
Untrustworthy apps aren't more trustworthy because they are delivered as Flatpaks.
> The point is that proponents raise the security flag like it is a huge advantage and you could trust anything coming from Flathub, while it is mostly pixie dust.
Okay then, since you're criticizing Flatpaks, give us your alternative for trusted applications.
> Untrustworthy apps aren't more trustworthy because they are delivered as Flatpaks.
> Okay then, since you're criticizing Flatpaks, give us your alternative for trusted applications.
I am not criticizing, I am saying it is mostly a moot point. The sandboxing allows a bit of isolation, but it ranks quite poorly in terms of actual security benefits for typical end users' use cases.
> Nobody made this claim.
Well, not the authors of Flatpak, but yes, some did, in media that many people watch, such as YouTube videos.
> The sandboxing allows a bit of isolation, but it ranks quite poorly in terms of actual security benefits for typical end users' use cases.
Ranked poorly on which checklist?
> Well, not the authors of Flatpak, but yes, some did, in media that many people watch, such as YouTube videos.
Let's try to stay on topic. The point I made was that the author's example about Flatpak GIMP doing something unauthorized on your system applies to any package format. The differentiating factor here is that Flatpak/Flatseal lets you sandbox the application easily, and quite effectively if I may add.
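For what it's worth, Flatseal isn't even required for that toggle; the same permission change can be made with the plain flatpak CLI. A rough sketch, assuming the Flathub GIMP app ID `org.gimp.GIMP` (check yours with `flatpak list`):

```shell
# Revoke the home-directory access the Flathub GIMP ships with
flatpak override --user --nofilesystem=home org.gimp.GIMP

# Inspect the effective permissions (manifest defaults plus overrides)
flatpak info --show-permissions org.gimp.GIMP

# If GIMP can no longer open your files, drop the override again
flatpak override --user --reset org.gimp.GIMP
```

With the override in place, file access goes through the XDG desktop portal's file chooser instead of raw home access, which is the per-app sandboxing the parent comment is describing.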
Yes, and the point I made is that when you use most applications that aren't fetching content from the internet, it is usually to work on your data, so you have to give those applications access to your data; if the app is malicious, it can do things to that data. Worse, if your application needs both local files and internet access, it can exfiltrate your data and receive payloads, and the fact that it is sandboxed to a subset of your data doesn't change much compared to a non-sandboxed app if that data is something you cannot allow to be stolen/modified/ransomwared.
Sandboxing can limit the attack surface and some scenarios a bit, but that's it.
Ubuntu seems to dwarf other Linux distributions in terms of numbers of users. Are you saying more users have Flatpak installed than have Snap installed?
The number of app installs for popular software available from flathub.org. But sure, you can use the number of supported distros as an easy baseline, including Ubuntu with a few workarounds.
> Ubuntu seems to dwarf other Linux distributions in terms of numbers of users.
I'd ask you the same thing. Based on what figures?
> Are you saying more users have Flatpak installed than have Snap installed?
Yes, until Ubuntu's Snapcraft store provides download numbers. I could almost swear they used to provide this some time back, but I can't see anything like that now.
The VS Code Flatpak just didn't work for me. I found a workaround for the terminal not working, but the Copilot extension refused to authenticate. No issues with the non-Flatpak version :(.
Random tangent. When I see articles linked from this time frame, my brain automatically thinks
"Ah, but that was during Covid when a disproportionate amount of people were working from home and a lot of the normal feedback loops weren't running normally. Those were lonely/different times. I'll consider this article accordingly."
It's not really rational/logical/founded, but that's where my brain initially goes. Do others experience this?
The #1 story on Hacker News at 2023-08-21T15:41Z is a 2021 discussion of Linux desktop packaging tools.
Hypothesis: HN story up-voters are heavily drawn from Free / Open Source Software folks interested in issues that were broadly discussed in "tech" two decades ago (Linux for the desktop!) and are much less broadly discussed today.
I'm using it for current Firefox, Zotero, Joplin and two or three more programs, none of which are packaged in Debian (except Firefox, but only the LTS version that doesn't work with all my extensions).
Unless you can offer something better, I'll keep using it.