Hacker News
Ubuntu 20.04 LTS’ snap obsession has snapped me off of it (jatan.space)
604 points by uncertainquark on Sept 5, 2020 | 538 comments



Snap was neat when I first discovered it. I could get some bleeding-edge dependencies automatically delivered to my machine, without building them from source or updating them manually. Wonderful! But the problem is: it's the slowest thing in the world. I've complained about this on HN before, but it can take hundreds of milliseconds from when you type a command to when the actual application starts running -- the intermediate time is consumed by Snap doing god knows what. The slowness is a dealbreaker for me. My disk can read 500,000 4k blocks per second. My RAM can do 3400M transactions per second. I have 32 CPU cores. How can loading some bytes into RAM and telling the CPU to start executing instructions take more than a few microseconds!? Easy: run that app with Snap!

I blew up my Ubuntu install and switched back to Debian. I haven't missed Ubuntu at all. I am resigned to the fact that if I care about a particular piece of software because it's the reason I use a computer (go, Emacs, Node, etc.) then I just have to maintain it myself. There simply isn't a good way right now. And you know what? It's fine. Everything is configured exactly the way I like, and it will never change unless I change it.
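For what it's worth, that launch overhead is easy to measure yourself. A rough, snap-agnostic timing sketch (the command used here is just an example; point it at whatever binary you care about):

```python
import subprocess
import time

def launch_latency(cmd, runs=20):
    """Median wall-clock time to spawn a command and wait for it to exit."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        samples.append(time.perf_counter() - t0)
    return sorted(samples)[runs // 2]

# /bin/true does nothing, so its latency approximates pure fork/exec overhead.
# Compare a snapped binary against its deb twin to see the wrapper cost.
print(f"{launch_latency(['/bin/true']) * 1000:.2f} ms")
```

On a conventionally installed binary this typically reports low single-digit milliseconds; the complaint above is that a snapped app adds hundreds more before the process even starts doing its own work.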


I think the last time the startup issue came up on HN, we concluded it was because they mount a squashfs filesystem image created with the compression turned up to 11, using an algorithm that is slow on decompression as well as compression. That, and the general terribleness of squashfs.

Now keep in mind we can't blame squashfs here. It was developed for use on <16 MiB NOR flash chips for embedded devices, likely connected over SPI - the underlying flash and interface are so unbelievably slow that no amount of compression or terrible kernel code would ever show up in any kind of benchmark. Using it on super fast desktop machines, with storage that rivals RAM bandwidth and latency, is just the opposite of what it was designed for.
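The compression trade-off is easy to demonstrate. The sketch below uses Python's stdlib lzma and zlib purely as stand-ins (real squashfs images use codecs like xz, gzip, lzo, lz4, or zstd): an xz-style codec buys a better ratio at the price of CPU work on every decompression, which an app pays at every cold start.

```python
import lzma
import zlib

# Stand-in for program data with typical redundancy.
data = b"snapd mounts a compressed squashfs image at application start. " * 2048

# xz-style compression: excellent ratio, but decompression also does real
# work -- the "turned up to 11" case described above.
xz_blob = lzma.compress(data, preset=9)

# Low-effort zlib stands in for a fast codec (a speed-tuned image would use
# something like lz4 or zstd instead).
fast_blob = zlib.compress(data, level=1)

print(f"original: {len(data)} B, xz: {len(xz_blob)} B, fast: {len(fast_blob)} B")

# Both round-trip losslessly; the user-visible difference is the CPU time
# spent decompressing every cold page.
assert lzma.decompress(xz_blob) == data
assert zlib.decompress(fast_blob) == data
```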


The thing that irks me about this is that we have carefully optimized file systems, virtual memory, and dynamic linkers over the years so that starting a program scales only with the number of touched pages. Somehow it was considered sane to destroy all of that and store the program compressed on disk, when we know from experience with every other package manager we've been using that the software wasn't what was eating all of our disk space :/. Most of the large assets that ship with software don't even compress well: they are either difficult to extract entropy from (machine code; we do compress it, but usually with specialized algorithms) or already compressed (such as images)!

In contrast, iOS extracts packages to disk when you install them. Android has developers use zip files configured to "store" (not compress) anything they might want to memory-map at runtime, and then run the file through zipalign, which pads the archive so that every stored file begins at a memory-mappable page offset. That lets Android skip extraction (keeping the package as a single unit, as signed by the developer)--though it does still extract the code to prelink and now even precompile it!--without compromising startup performance.
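The "store, don't compress" trick described above can be shown with a plain zip: a stored entry's bytes sit verbatim at a fixed offset inside the archive, which is exactly what makes memory-mapping straight out of the package possible (zipalign then additionally pads that offset to a page boundary). A stdlib sketch, with the file name and payload made up for illustration:

```python
import io
import struct
import zipfile

payload = b"png-like asset bytes " * 100  # stands in for an already-compressed asset

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    # ZIP_STORED = no compression, like Android's "store" rule for mmap-able files.
    zf.writestr(zipfile.ZipInfo("asset.bin"), payload,
                compress_type=zipfile.ZIP_STORED)

raw = buf.getvalue()
with zipfile.ZipFile(io.BytesIO(raw)) as zf:
    hdr_off = zf.getinfo("asset.bin").header_offset

# The local file header is 30 bytes; name and extra lengths sit at offsets 26/28.
name_len, extra_len = struct.unpack_from("<HH", raw, hdr_off + 26)
data_off = hdr_off + 30 + name_len + extra_len

# Because the entry is stored, the asset appears verbatim in the archive:
assert raw[data_off:data_off + len(payload)] == payload
```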


In my experience squashfs can give quite okay performance if you use the right params and compression. I'd expect one of the largest Linux distros to tune those params correctly.


We are living in a world where Electron is acceptable. I expect nothing.


The Linux distros are usually not the ones pushing Electron, and this is one step lower than where an Electron app would run.


Is “quite okay perf” really the goalpost though? Using applications is what makes a computer useful. It’s bad enough that applications these days are terribly slow and inefficient to begin with; I don’t want even more unnecessary slowdowns on top of that.


I'm not advocating this use case for squashfs, just saying that the perf I'm seeing in snaps is not necessarily what a properly configured squashfs would give.


Is there any replacement for squashfs? I don't know of any other format on Linux that captures the state of the file system as fully as squashfs.



Squashfs is not a problem. Squashfs can be faster than ext4, sometimes many times faster (lots of small files).

Selected squashfs parameters are the speed issue.


> But the problem is: it's the slowest thing in the world.

Exactly. I recently had to replace Chromium with Google Chrome just because the Ubuntu maintainers thought it'd be a good idea to replace the Chromium apt package with an installer that pulls in the snap…

Good thing Firefox is my main browser and I only use Chromium/Chrome for testing (and whenever websites forget that there's more than just one browser to test for), otherwise I would have long ditched Ubuntu, too.


I've been getting bug reports for a browser extension because of Snap. The file system isolation of Snap and Flatpak breaks Native Messaging, so extensions that need to communicate with a local app, such as password managers, are now broken.

https://bugs.launchpad.net/ubuntu/+source/chromium-browser/+...
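For context: Native Messaging is just a length-prefixed JSON protocol over stdin/stdout between the browser and a helper binary it spawns, and the snap sandbox breaks it because the confined browser can no longer see or execute that helper. A minimal sketch of the host side of the framing (function names are mine, not from any extension API):

```python
import io
import json
import struct
import sys

def read_message(stream=sys.stdin.buffer):
    """Read one frame: a 4-byte native-order length, then UTF-8 JSON."""
    raw_len = stream.read(4)
    if len(raw_len) < 4:
        return None  # browser closed the pipe
    (length,) = struct.unpack("=I", raw_len)
    return json.loads(stream.read(length).decode("utf-8"))

def write_message(msg, stream=sys.stdout.buffer):
    """Serialize msg as JSON and write it with the length prefix."""
    data = json.dumps(msg).encode("utf-8")
    stream.write(struct.pack("=I", len(data)) + data)
    stream.flush()

# Round-trip demo over an in-memory pipe:
pipe = io.BytesIO()
write_message({"command": "unlock-vault"}, pipe)
pipe.seek(0)
assert read_message(pipe) == {"command": "unlock-vault"}
```

The framing itself is trivial; the breakage is purely that the sandboxed browser cannot spawn the host process listed in the extension's manifest.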


I had totally forgotten about this! For me it wasn't just the extensions. The font in the "Save Download As…" dialog in the Chromium snap, for instance, has been completely broken for me – all characters get displayed as the infamous glyph-not-found block. To make things worse, the dialog by default suggests some random directory deep inside the snap tree as download destination. Good luck navigating to a folder where you'll find your download again, given that you can't read any of the file or folder names…


I've been using Linux exclusively for over 14 years. Built several Linux hosting companies. I know Linux rather well. I've built a simple font manager for GNOME in the past. I know the weirdness of GNOME font thingies.

I've had this glyph issue for over a year. In chromium, Signal, some ebook app, and several other snaps.

I've tried many things, but gave up. Snap is not 'one layer of indirection' too many. It's hundreds of them. There's chroot, some VM, a virtual GNOME, containerisation, weird users and permissions. And so on.

This complexity made me conclude that snap is a bad solution (to a real problem). Not the glyph issue itself, but the fact that I cannot fix it, is, for me, the reason to conclude it has failed, or will fail.


Wait, since when is Signal a Snap app?


Snap is more like the virtualization tools you see in VDI and Citrix environments.

It sets out to solve a couple of problems in an app-focused way. As with any packager that bundles dependencies, it introduces a few dozen more in the process.


Snap is a distribution format, akin to .msi or .pkg.


No, incorrect. Snap is a distribution format akin to VMDK or anything else which is not intended to be "installed" to a system, but rather run in a virtual machine or sandboxed.


If we are getting into details, I think calling it a "virtual machine" is not correct either. Also, since snaps have some integration with the host DE through launchers and so on, they are not like VMDKs.


To be even more pedantic, there's nothing stopping you from using VMDK (or other disk images like VDI, VHD, raw/dd, etc) files as a container (like tar or zip), other than some extra overhead. They can easily integrate into your desktop environment, possibly even easier since there's probably more support for mounting disk images than there is for mounting archives...


Well, a squashfs file is already a disk image, so that's pretty much what they are doing.


Snap is already a standard in the IoT world. It's the desktop space in which it struggles.


Right! I ran into this not five minutes into a new 20.04 install and could not for the life of me figure out where the hell my download was, or why I couldn't access /home/julian/Downloads

Why this isn't more of a deal breaker I have no freaking idea.


This could be related to why Chrome uses a different font size than the rest of my system. Chrome is the only snap program that I have. The rest are plain Debian packages or Flatpaks.


I'm not a fan of Snap, but I'm surprised something like this font issue passed QA.


I think you overestimate the amount of QA resources available to the Ubuntu project. Looks like this was an issue that only happened after repeated use and seems somewhat related to individual systems' font configurations?

There's more info here: https://bugs.launchpad.net/ubuntu/+source/chromium-browser/+...


I hope they conduct automated testing with screengrabs for high-profile apps such as Chromium (which have something to gain from the snap isolation), and that this case gets added to the test suite.


> I think you overestimate the amount of QA resources available to the Ubuntu project.

  s/available to/allocated by/
I don't know that Canonical cares too much about doing QA themselves. It seems like they've decided to delegate that to the userbase.


Snap programs won't let me access /tmp/ which is really annoying ;(


Yeah, and I can find no way to either modify the fs or at least mount something in from outside. Opera, I think it is, requires codecs to be installed in a specific directory. So the snap is effectively useless.


oh my, THAT is why it takes over 5 seconds to start Chromium on my Xeon...


That is really disappointing to read. We put a lot of effort into making Chrome start quickly. Back when I was responsible for Linux Chrome, I gave a lightning talk at the Ubuntu Developer Summit about it, and demoed Chrome starting faster than Gnome Calculator.


Not being snarky towards you — Chrome starts really quickly — but was that the snapped calculator from Ubuntu 18.04?


Hats off, Chrome starts instantly on my Debian machines. Even on a 10-year-old, half-dead ThinkPad it's just a couple of seconds. This must be an Ubuntu/Snap-specific issue.


Today, starting faster than Ubuntu's calculator is not that hard. I bet this also became a snap app.


Firefox from the Ubuntu repository is over 5 times faster to start. On any Windows machine, it's almost the opposite. Sorry to break the news. For what it's worth, I still use it every day... I usually start my browser once a day, so it's not that big of a difference.


Both Chrome and Firefox on Ubuntu are basically instant for me. Chrome is installed without the snap rubbish tho.


I have been using the snap versions of Chromium, Discord, CLion and Spotify for a while now and I haven't noticed any slowness whatsoever. Sure, the applications need some seconds (at most 5) to start up on a freshly booted machine, but other than that it's pretty snappy (heh). The only issue I have encountered so far is that some applications don't respect the environment's cursor theme (looking at you, Spotify and Postman), but that's easily fixable by the package maintainers. Other than that I really don't understand the hate-train for snap. Coming from Arch Linux, PPAs seem like a PITA to me, and an elegant solution such as the AUR doesn't exist in the Ubuntu ecosystem, so snap / flatpak are the next closest thing to it.


Most applications on Macs start up instantaneously these days. 5 seconds is 50 times the limit for "instantaneous" feel. [1]

All of the raging discussions in this thread would be totally absent if Canonical had taken the time to make apps installed with snaps fast. Unfortunately, these days, the "make it work, make it right, make it fast" mantra seems to stop at the "make it work". At least the 20.04 release seems to be at that stage.

Congratulations, you just got a bunch of users who are going to avoid updates even more because you are going to make everything slower with your shiny new release.

-------------------

[1] https://www.nngroup.com/articles/response-times-3-important-...


No, most apps are slow the first time they are run on a Mac (though usually not as bad as the first run of a snap). After that, pretty fast. App Store apps seem to always be fast.


The 5 second load time is exactly the slowness people are complaining about. It's a ~100x slowdown.


> Coming from Arch Linux

Genuinely curious, why would you not just keep using Arch?


In my case, too much gardening.

I got sick of it after six years or so and moved to Linux Mint. (This was before Manjaro was widely visible.) Been on Mint ever since: it's a better Ubuntu than Ubuntu, and a better Windows than Windows (for ordinary uses).

Note that Mint 20, although based on Ubuntu 20.04, has removed snap from the base install. `apt install chromium-browser' takes you to a web page explaining why.


I'm curious what kind of gardening you're referring to that applies to Arch but not Mint?

I will say, Linux Mint is my recommended Debian based distro for desktop use, it really is a better Ubuntu than Ubuntu.


Debian Stable is the last great Linux distro. I keep trying to go back to Fedora or Ubuntu, but they are so damn buggy.

And as someone living with OCD (the real thing) the whole Snap thing just makes me so anxious. And like you experienced, it is soooo slow. I cannot even go back to windows because for some reason the wifi on my laptop does not work well with it.

I do not care about bleeding edge anymore, but I am a casual user mostly doing genetic research, so Debian Buster with Cinnamon is, to me, the best distro on Linux today.


Debian Sid is also a great option for people who used to be into the bleeding edge Linux distro scene but can settle for Debian's more conservative version of it.


I've been loving Pop!_OS lately. It feels like a "what if" version of Ubuntu, had they not spent years pushing Unity, Snap, and their other side projects and just focused on the distro. Super responsive and easy to use, regardless of whether you're using free or proprietary software.


In my dotage, I’ve come to appreciate the Debian Stable cadence: I can just set it and forget it for a year at a time. The mostly-annual upgrades have saved me so much time; everything just works in concert for long stretches of stability.


> I care about a particular piece of software because it's the reason I use a computer (go, Emacs, Node, etc.) then I just have to maintain it myself. There simply isn't a good way right now.

Don’t do this to yourself! Debian is not going to give you bleeding edge, but there are plenty of distros that can. Despite being a meme, Arch Linux is one of the best distros available, and has been for years. Node and golang are usually updated within hours of upstream, while the core system remains stable. If you’re looking for something more modern, Solus has been gaining popularity and also has relatively up-to-date packages. Debian is great for servers.


Arch has two major differences from Ubuntu and other non-rolling distributions.

1. You're using the latest upstream, after it has passed through the archlinux-testing repository. This means you won't have to install software from somewhere else just because the repos are outdated. (Nightly builds of software are a different matter.)

2. Sometimes you have to do a full system upgrade to install or fix something, and you don't have much say in when that happens. Typically you need to do this preemptively about once a week so you don't get interrupted by required manual intervention at a bad time. Yes, you will need to do manual intervention, but you won't spend much time on it. It's far less work than getting up-to-date software running on your Debian stable.


My favorite part of Arch is using the AUR and customizing PKGBUILDs to suit my needs. I was recently messing around trying to get the Emacs native-comp development branch built on my machine. I was running into some issues, then tried in an Ubuntu container. Eventually I just fired up my Arch VM, found an AUR package for it, tweaked the PKGBUILD options and dependencies for my needs, and then had it compile on the first try. The build worked great, and once I'd native-compiled everything in my Emacs configs, it was super fast to start up even in the VM.

Arch will give you small issues every now and then, but it gives you the tools to fix them and makes it easy to do things that are much more difficult on other distros.


I can't really agree. To be fair, it's been about 5 years since I stopped using Arch, so things may very well have improved. But my experience using it as my daily driver on my work ThinkPad was that I kept running into quite a lot of problems every other week from upgrades breaking things. It was of course always fixable, but I probably spent a few hours every month just on maintenance. That's fine for me on a hobby machine at home, but not really something I want in a work environment. I'm not particularly fond of Ubuntu either; it was fine in the 00s, but then they lost it. Nowadays I primarily use 3 operating systems: FreeBSD, Debian, and macOS. They are all somewhat "boring" but also tend to mostly keep working without bothering me too much, and most things are easy to look up and well documented.


Debian absolutely can give you bleeding edge if you just change your repos from their stable release to testing or sid/unstable. Same thing applies to Fedora and pretty much every major Linux distribution.


Sure, it technically can. But it’s not how the distro is designed. The testing repos are meant for just that: testing. I’ve run Sid and always had issues with major version upgrades. Rolling release is just a better model, IMO.


The point of using Debian or Ubuntu LTS is the reverse of getting bleeding-edge software. Sure, you want your browser and a couple of other programs to be up to date, but the rest should stay the same and only get critical/minor updates.


If you need more control over versions, NixOS is by far the best choice. You can mix and match versions of software from stable & unstable, depending on your needs.


Manjaro is a really great option too, if you want something Arch flavored that’s a bit more user friendly.


Definitely this. Manjaro is out-of-the-box ready, but based on Arch. Manjaro stable is also two more testing steps away from Arch, so the packages are less likely to cause issues. And there are four sources for software: Manjaro repos, the AUR, Flatpaks, and snaps. I have never been in a situation where I could not install the latest software via one of those routes. I ran Debian for years and often ran into situations where new software was just not installable due to library incompatibility, etc. Manjaro feels like a Swiss Army knife in comparison.


And with all the options you'll almost never have to rely on snaps.


I couldn’t get Manjaro to install successfully. First operating system I’ve tried to use that just didn’t work. Their easy install thing is a bit too easy, in that it can’t even tell you what went wrong. Also docs were very weak.

I like what they are trying to do but it just did not work for me.

Arch is hard to figure out too with the wiki based docs but at least that’s advertised as unfriendly.


Yeah, I've used it for a while with no issues on my "living on the edge" machine. I still stick with Ubuntu for my daily drivers. I don't understand the rant though. I understand why they moved Chromium to snap.


Fedora is also a solid OS that allows you to easily stay near the bleeding edge.


Honestly, try out Linux Mint. I'm on 19.3 and it's the best OS I've ever used. Convenience of the Ubuntu repos and ease of use, no Snap, some fantastic QoL widgets and apps developed for it, a great DE (well, you can choose from multiple DEs, but I love Cinnamon). If you want a Debian package base you can even get LMDE (Linux Mint Debian Edition), which has all the perks of LM but is based on Debian instead.


I want to like Linux Mint but their Cinnamon DE still breaks stuff occasionally that expects either pure GNOME or Unity. Probably the devs' faults for hardcoding it to those two DEs, but still.


At one point Ubuntu had lots of their built-in apps on snap. I distinctly remember firefox starting seconds faster than the simple calculator or system monitor apps.


Indeed. That the calculator takes longer to start than Firefox should have caused the project to be abandoned.

I see the value of file system and dependency isolation, but it shouldn't result in such significantly degraded performance.


Or at least a hard stop on migrations until the performance characteristics were hashed out.

Sadly I noticed a while ago that Windows began to outrun Ubuntu on my older machines; it doesn’t seem like Canonical really cares about performance anymore.


I'm surprised Mark Shuttleworth allowed that. He has a very low tolerance for mistakes.


To me, Ubuntu's glory days were with 10.04. It seemed much more polished than the alternatives at that time.

IIRC Shuttleworth took a more governance and less hands-on role after that, and Ubuntu started to focus on server and enterprise support.


I saw him tearing people down in 2015-ish. And for much less than this.


I've started to use nix (both nixos and nix installed on ubuntu) to manage those critical packages where I want both control and the option for major updates outside the OS's release cadence.


I tried nix on Ubuntu recently so that I could get an up-to-date, non-snap emacs (if you already think emacs is slow to load, you should see it as a snap), and the resulting build was giving me all kinds of issues, even with just the vanilla config. I tried one other package and had issues there as well. I ended up just compiling both from source and they worked perfectly.

Is there anything special about getting nix working well with other package managers? I'll be honest, the main thing keeping me from digging into it further are the docs and the syntax. I can never tell if I'm reading about nix-the-package-manager, nix-the-language, or nix-the-operating-system; and looking at the syntax makes me understand what most programmers feel when they look at lisp.


Yeah, the syntax is intimidating. It's definitely one of those things where it's easier to start off by copying examples and then slowly learn all the details. After using it for a while it mostly starts to feel like a slightly different JSON syntax (there are a few exceptions to this, but you won't need them for most of what you do).

Sorry to hear you had issues on Ubuntu. It's hard to say how you might improve your experience without knowing more details. The Nix forum is probably a good place to get support for that sort of thing.


I'm sure I could track down the issue, I just didn't have the time. I was just hoping to use it to install a more up-to-date package here and there just to dip my toes in the water and go from there. I really like the idea behind it, and hope it or something similar catches on. I may just have to throw myself into the deep end and try NixOS out in a vm.


Yeah, you kinda have to drink the whole koolaid before it clicks. It’s great when it does though.

It only clicked for me when I started using NixOS (on my primary laptop in my case) rather than just Nix.


How does it compare to using Ubuntu with Ansible? Do you ever miss being on mainline stable Ubuntu/Debian when using NixOS?


In having your system managed from configuration, it's sort of similar to Ansible. One major difference is that with NixOS you can easily roll back to previous states of the system. That means both rolling back config changes and also rolling back package versions. That's something Ansible doesn't really give you. NixOS also forces you to configure things the "right way" (e.g. you can't hand-edit files in /etc). That is very good for reproducibility, but sometimes it's frustrating when you just want to make things work quickly.

I think the biggest challenge using NixOS vs Ubuntu is if you've got some weird obscure piece of software you need to get working there's a better chance that someone has already figured that out on Ubuntu and you might have to do the work to get it running on NixOS.

On the other hand I've found contributing to Nix easier and less intimidating than contributing to Ubuntu. To add a package to Nix you just open a PR in the nixpkgs repo on github. I've found the community to be friendly and helpful.

I use a lot of LXD containers for when I just want to play around with something in a non-Nix environment.

Oh and I love being able to run `nix-shell -p <package>` to use a package without "installing" it.


NixOS is leagues above Ansible and similar. They are barely even playing the same game.

The TL;DR is that Ansible is given a description of some part of the system, then squints at that part and tries to make it match the description. This means that it doesn't touch anything you haven't described, and if you stop describing something it doesn't go away (unless you explicitly tell Ansible to remove it). So your Ansible configs end up unintentionally depending on the state of the system, and the state of your system depends on the Ansible configs you have applied in the past.

NixOS is logically much more like building a fresh VM image every time you apply the configuration. Anything not in the configuration is effectively gone (it is still on the filesystem but the name is a cryptographic hash so no one can use it by accident). This makes the configs way more reproducible. It also means that I can apply a config to any system and end up with a functional replica that has no traces of the previous system. (other than mutable state which Nix doesn't really manage.)

Nix has other advantages such as easy rollbacks (which is just a bit more convenient than checking out an old config and applying it manually) and the ability to have many versions of a library/config/package without conflicts or any special work required if you need that.

I wrote a blog post a while ago that goes into a bit more detail on what I just described: https://kevincox.ca/2015/12/13/nixos-managed-system/


It seems to me that if Nix were a little more beginner-friendly, it could fill a lot of the space Docker occupies.


I was setting up WSL2 the other day and needed Chrome so I could run it headless for some E2E tests I had going. It took a lot of messing about to get a copy of it that didn't use Snap. Partly because apt has always worked great for me and I didn't want to switch to something else, but also because the snap required systemd to run it as a daemon.

That they clobbered the apt install just to push Snap forward left a bad taste in my mouth.


It's worth noting that you can use Arch and various other distros on wsl[2], not just Ubuntu.


It is not official, though; as with the AUR, check the sources.

http://archlinux.2023198.n4.nabble.com/Windows-Subsystem-Lin...


You can officially use any official image on WSL2 https://docs.microsoft.com/en-us/windows/wsl/build-custom-di...


I don’t understand why it’s so hard. You literally visit the Chrome website, download the deb file, and install it...


About the slowness - is it just on launch, or does the app stay slow while it keeps running? A few hundred ms is indeed a deal-breaker for a standard Unix-y CLI app that might be chained with 5 other simple apps. It doesn't seem like a big deal, though, if you're launching something like Chromium that you might normally leave running for weeks.


The problem is that if you use Chromium in any automated testing suite, it can get restarted after each test, making the test suite take much longer to run.


For me it's just on launch. More precisely, the period before the application actually starts to launch. I guess it has something to do with mounting the various file systems and setting up the connections or whatever it is that it does.

I've had to install Chrome to try a thing or two (so not as a daily driver) and I haven't noticed anything weird during use. I've used JetBrains' PyCharm daily for a few months and no problem there either (although that is a "classic" snap; not sure if it matters).


It's still pretty bad even for GUI apps. Do you really want to add 0.3s to the startup time of Chrome? I would definitely notice that.


Chrome uses a deb. Also, I don't remember the last time I closed Chrome. I can remember my whole system breaking because of PPA hell.


Have you tried Chromium on Ubuntu? It takes several seconds, maybe 10, on my machine, whereas Firefox loads in under a second.


Ah I'm glad to finally understand why Emacs 27 starts so slowly: I installed it as a snap (while my Emacs 26 was from a PPA), and I spent a great deal of time trying to find what was the culprit in my init.el config, to no avail.


I switched from Debian to Manjaro this summer, after seventeen years. The Arch-isms are odd at first, but it is an enormous pleasure to have a system that is up to date, stable, and works with me instead of against me. That, and brew for Linux (which is very rarely needed on Manjaro), has vastly improved my user experience. Doesn't address all of what you said, but hope this is useful.


I also abandoned Ubuntu, and after various adventures in distro land I now use Debian Testing with Xfce.


I am currently using Ubuntu 20.04 with a Sabrent Rocket M.2 SSD and an AMD Ryzen 3900X, and several times my OS has locked up some programs for a short while (Chrome, Firefox, etc.) or become really slow. This computer I built is relatively new and used mostly for work, so I am not sure if it's Ubuntu or a stray snap package I downloaded, but man, I have noticed snap packages can be ... slow.

I used Debian, and it seems to be gaining detractors from Ubuntu; my only question is... what made you switch and stick with Debian?


I suppose I'm used to apt-based systems, although I don't know if it really makes any difference. So many other distros are based on Debian, but often it seems like a pretty thin layer, perhaps poorly maintained, over Debian itself. Debian seems to have so many more developers, so why not just use it directly.

The rough edges I've found: no automatic updater or security updates in testing, so I just run apt update/upgrade once a day. An initial problem with the video driver not working because it requires non-free firmware; the solution is just to add the Debian non-free repo (a warning during installation would have been helpful). Firefox/Thunderbird are still on an old extended-support release: I've been installing tarballs manually for now.


> I blew up my Ubuntu install and switched back to Debian. I haven't missed Ubuntu at all.

What are the biggest drawbacks? I know you said you don't miss Ubuntu at all, but is there anything which is causing you pain because it works differently or is just missing?


Not sure about Emacs, but Node and Go can be maintained with `asdf`.


I know Snap isolates dependencies... but does it share them at all when they are common? I think that would hit the sweet spot...


Snap sucks. The way open source is straying further and further from its principles is highly annoying. The whole idea that you'd need a container like environment to install an application on a Unix system is very far from where we should be. What's the point of dynamic linking if we then end up shipping half an OS with an application just to get around dependency hell? Might as well ship a pre-linked binary that just does system calls.

And it's yet another way to do an end run around repositories, instead you will sooner or later get an app-store like environment that can be controlled by some entity. These large companies should stop fucking around with Linux, it was fine the way it was. Just fix the bugs and leave the rest to the application developers.


> The whole idea that you'd need a container like environment to install an application on a Unix system is very far from where we should be. What's the point of dynamic linking if we then end up shipping half an OS with an application just to get around dependency hell? Might as well ship a pre-linked binary that just does system calls.

There are two models.

The first model is the traditional distribution model. The distribution curates and integrates software, picking (generally) one version of everything and releasing it all together. Users do not get featureful software updates except when they upgrade to a new distribution release - all at once.

The downside of this model is that developers who want to ship their new software or feature updates without waiting for a new distribution release get stuck into dependency hell and have to operate outside the normal packaging system. Same for users who want to consume this. Third party apt repositories and similar efforts are all fundamentally hacks and generally all end up breaking users' systems sooner or later. Often this is discovered only on a subsequent distribution upgrade and users are unable to attribute distribution upgrade failures to the hacked third party software installation they did months or years ago.

The second model is the bundled model. Ship all the dependencies bundled in the software itself. That's what Snaps (and Flatpaks, and AppImage) do - same as iOS and Android. This allows one build to work on all distributions and distribution releases. They can be installed and removed without leaving cruft or a broken system behind. They allow security sandboxing.

All your objections seem to be criticizing the bundled model itself, rather than anything about Snaps themselves except that they use the bundled model.

If you don't like the bundled model, then you can carry on using Ubuntu 20.04 without Snaps just fine.


I agree with everything you write, except:

> If you don't like the bundled model, then you can carry on using Ubuntu 20.04 without Snaps just fine.

No, you can't. Ubuntu has started replacing apt packages with snaps. So if you want to install those packages (such as Chromium) you now have to use the snap.


You can install Chromium from a third party source just like you could before.

New distribution releases always add and remove packages. Chromium has been removed (as a deb). It is no longer packaged as a deb because it's a rolling release upstream, and it was too much work to backport featureful rolling releases to debs.


The transition in Chromium packaging is understandable from a maintenance point of view. However, the general argument that you can just keep using the system without snaps "just fine" doesn't really hold if snaps begin to replace, rather than supplement, software that used to be packaged without them.

The same is also somewhat true if first-class tools such as Software (as in the "store" GUI) begin to push snap packages as the first thing, because then you will, by default, need to go through additional steps to find the packages that you want, and which used to be available as the first choice.

You can still do things, of course, but it might be that you need to start getting around the snap-centric design choices more often, and at that point it doesn't come at no usability cost to you anymore.

I don't actually use Ubuntu at the moment, so I don't know how much that is the case now, but if it is (or if it seems like it's going that way), I understand the frustration.


> However, the general argument that you can just keep using the system without using snaps "just fine" doesn't really hold if the snaps begin to replace rather than supplement software that used to be packaged without it.

Well, if you don't like it, you can always fork it. That's the beauty of open source. /sarcasm

Seriously though, all out replacing stuff with Snaps doesn't seem like the right move.


> it was too much work to backport featureful rolling release to debs.

Considering that (as other posters have mentioned) deb packages are released by the vendor in this case, it feels like a flimsy excuse.


>No, you can't. Ubuntu has started replacing apt packages with snaps. So if you want to install those packages (such as Chromium) you now have to use the snap.

You can almost always install via deb if you want instead.


For _now_, you can.

Slowly boiling the frog is really effective.


You believe that Canonical will remove the ability to install from deb?


At some point, they likely will.

I'd wager they will require deb packages to be signed with a certain Canonical key that they will use strictly for basic system packages. Everything else will be a sandboxed Snap, left to third-parties to maintain, distributed through a Canonical Appstore that enables payments.

Maybe they will give you an option to "root" the system, and if you use it you'll lose any right to support or updates from Canonical.

Snap is fundamentally a commercial play to reduce support costs and enable an appstore.


IF they do that I'm obviously leaving their ecosystem, though I'm just a consumer so no clue how much effect I would have.

I really don't see what benefit that would provide them over other distros, though, and I'm not sure why they would choose to close down their system.

Fundamentally Linux works so well because of the free software movement and I don't see any app maintainers choosing to charge a fee for their software if they aren't already.


Arguably the Linux desktop ecosystem does not “work so well”. Adoption rates are still minuscule when compared to Apple or Microsoft numbers, and there is precious little support from commercial developers. The “year of Linux on the desktop” never happened, even after Canonical made it really simple to run Linux, so commercial support is still lacking - which in turn keeps users away. The Snap play is their latest attempt to increase monetization rates on the platform.


And it will help kill what little adoption there is. This really should stop.


Ubuntu concentrates on servers, where security patches are imported from Debian. Desktop and Snap play a very small part.

If anything can go away, it's Ubuntu desktop.

Snap may stay, since it is the de facto standard in the IoT world.


The third model: rolling release. No need to test against many releases, no outdated dependencies, and why do so many people want to run the latest browser on an outdated system anyway?

The fourth model: multiple versions of dependencies living on the same system, with an environment constructed for each application. Deduplication works.

The bundled model has its value (just like a VM), but it really only shines when package management is inadequate. If you don't like the bundled model, switch to Arch Linux or NixOS.


I can't believe that in 2020 we're still discussing what is the best way to package and distribute software. Can anyone explain why it is taking the profession so long to get this fixed?


Because different solutions have different tradeoffs. It doesn't appear that there is a single solution which is best for all use cases.

So different systems switch between different methods as their perceived value of the different use cases changes, which upsets people who weigh those tradeoffs differently.

If you have the perfect solution I'm all ears.


One issue on Linux in particular is the [relatively] tight bond between your kernel version and libc, which makes using software compiled against a different version of libc problematic some of the time (a particular libc version still supports multiple kernels, though).

I experienced this lately when my Rust-compiled binary used too modern a libc version for the aging Docker container environment we used for deployment, which forced me to use another Docker container for local development -- which obviously isn't ideal and removes the 100% reproducibility promise.
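When debugging this kind of "binary built against too-new glibc" failure, a quick first step is checking which libc the current environment actually reports. A small sketch (nothing Rust-specific, just Python's stdlib probe):

```python
import platform

# platform.libc_ver() inspects the running interpreter binary to guess
# which C library it is linked against. On glibc systems it returns
# something like ("glibc", "2.31"); on musl or macOS it may return ("", "").
libname, version = platform.libc_ver()
print(libname, version)
```

Running this inside the deployment container versus on the build host makes a version mismatch visible immediately.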


Because we love re-inventing the wheel instead of repairing the wheel.


It's a fundamental tension between "change everything once in a big step" and "many small changes".

The first is pre-internet: updates cost money, and pre-internet, security issues weren't common.

The second is now: change one thing every day, always run the latest code, with automated testing to make the latest code always work. It also means you don't need separate security and feature branches.

People hate change, and Linux people have the ability to say no and do their own thing.


Because it is a hard problem?

Python has easy_install, pip, anaconda and wheel, plus virtualenv to isolate packages. Node has npm and yarn (is that resolved yet?). Ruby's gem defines versions in code; bundler uses Gemfile and Gemfile.lock, with vendor/, rvm gemsets, and BUNDLE_PATH to isolate packages. Even developers can't agree on the right answer.
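For the Python case, the isolation half of the problem is at least mechanically simple nowadays; a minimal sketch with the stdlib `venv` module (directory name is arbitrary):

```shell
# Create an isolated environment; --without-pip keeps it minimal and offline.
python3 -m venv --without-pip demo-env

# The environment carries its own interpreter and site-packages directory,
# so nothing installed into it touches the system Python.
demo-env/bin/python -c 'import sys; print(sys.prefix)'
```

Which tool to use to *populate* such environments is exactly the part the ecosystem still argues about.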

Because it matters?

Package management is the main differentiator, I love pacman, I love PKGBUILD and makepkg.


I don't like the bundled model. I'm running Arch on my main machine currently. The AUR makes getting third-party packages easy. Unfortunately, I need VirtualBox for a class, and Arch's VirtualBox package won't run on the current Linux 5.8 kernel, and downgrading Linux on Arch is not easy. I switched to Windows for VirtualBox, but in retrospect I could've installed linux-lts.

Ubuntu is more stable in theory, but I've encountered broken packages (like a hex editor with an incorrect hard-coded temp path, making it unable to overwrite files).


It is not super simple, but I lived with an outdated kernel and Xorg for a couple of years (because of Intel Poulsbo [1]). The process is described in the wiki [2], using virtualbox-host-modules as an example.

    $ pacman -Qi glibc
    Depends On      : linux-api-headers>=4.10  tzdata  filesystem
1. This version is no longer present in the Arch Linux Archive [3]: search for the package page [4], click View Changes, find the corresponding PKGBUILD revision [5], and run makepkg. It is much easier if the package is not that old and can be found in /var/cache/pacman/pkg/ or the archive.

2. Install with `pacman -U linux-4.15.8-1-x86_64.pkg.tar.xz`.

3. Keep the package from being upgraded via IgnorePkg in /etc/pacman.conf [6].
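For reference, the IgnorePkg step is a one-line change in pacman's configuration (the package name here just follows the kernel example above):

```
# /etc/pacman.conf -- keep the downgraded kernel from being upgraded
[options]
IgnorePkg = linux
```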

I had problems with both Ubuntu and Arch Linux updates. At least I can fix Arch Linux issues; Ubuntu just felt broken.

[1] http://sergeykish.com/linux-poulsbo-emgd

[2] https://wiki.archlinux.org/index.php/Downgrading_packages

[3] https://archive.archlinux.org/packages/l/linux/

[4] https://www.archlinux.org/packages/core/x86_64/linux/

[5] https://github.com/archlinux/svntogit-packages/commits/packa...

[6] https://wiki.archlinux.org/index.php/Pacman#Skip_package_fro...


But then you can't run leading-edge hardware... My motherboard's network interface, wireless interface, and GPU all require a kernel newer than Ubuntu 19.10 shipped (and even 20.04, for stability), let alone the last LTS release.

Ukuu worked well for me to get newer kernels without issue, but kvm/virtualbox were totally borked just as my hardware support became stable.


Sorry to hear that. KVM does not work? I'd expect problems with VirtualBox, but KVM? Can you share the story, please?


The latest Manjaro has the latest VirtualBox in the package repository. It is easier, IMHO, to switch from Arch to Manjaro than to go to Win10.


I agree with your post except for this:

> If you don't like the bundled model, then you can carry on using Ubuntu 20.04 without Snaps just fine.

While it's true you technically can, at some point you have to ask yourself why you're going against the grain instead of picking a different distro (or not upgrading to 20.04, at least). Ubuntu wants you to use snaps. You can sidestep this, but you should ask yourself if you shouldn't just switch distros.

I just recently upgraded from Ubuntu 16.04 to 18.04. I don't see the reason to upgrade to 20.04 as long as the software I need works and I keep getting security fixes. Once this LTS gets EOL'd, I'll see what the current deal is with Ubuntu and seriously consider switching to another distro.


Manjaro has 99% of what Ubuntu has, runs a much more modern kernel, and has packages for literally anything you can think of in the extended software repository (namely Arch Linux's AUR).

I regret not taking the plunge and installing Manjaro against the will of our IT during onboarding (they don't forbid it, they simply can't promise good support if you don't install Ubuntu). I am sure I could have found a way to install the 2-3 corporate VPN / spy / monitoring agents my employer requires on the machines they issue for employees.

But anything I've needed of Manjaro, I always got. Granted, that's anecdotal evidence, obviously -- I haven't tried running games, for example.


> If you don't like the bundled model, then you can carry on using Ubuntu 20.04 without Snaps just fine.

Not really. The platform, Ubuntu in this case, has clearly signaled that they want to move from debs to snaps. And apart from the technical benefits of snaps (sandboxing) you also get the drawbacks (they are slow, updates are forced, and there is only one source for them). What you propose is fighting the platform you are staying on, and that's not a great place to be. Far better to move to platform more aligned with your choices as a user.

P.S.: If anyone is looking for quick and dirty suggestions, there's pop!_OS which is quite close to Ubuntu so there won't be many changes to the user experience. In my case it was Fedora. The linux ecosystem is diverse enough to offer many choices.


ROFL. When did they signal it?

Servers are all deb, which is the main source of their income. If anything goes away, it will be Ubuntu Desktop.

Snap may stay, since it is the standard in the IoT world.


> The downside of this model is that developers who want to ship their new software or feature updates without waiting for a new distribution release get stuck into dependency hell and have to operate outside the normal packaging system.

When I was writing apps for distros, I had the opposite problem. Every single release of GNOME and Ubuntu would break PyGTK in some way, so that my software, which used zero new features of the OS, would require at least some modifications and force me to maintain many versions. Finally, Gtk changed something that broke things in a fundamental way (something in GtkTreeView, I forget what exactly) and I simply gave up maintaining the software rather than suffer through figuring out how to rewrite everything again.


Wouldn't some kind of static linking of the right GTK libraries inside your binary help? Completely newbie question, I am not claiming anything.


I agree with most of your post. However the last line is off.

> If you don't like the bundled model, then you can carry on using Ubuntu 20.04 without Snaps just fine.

This blog post and many of the comments here claim that you can't just use Ubuntu without snaps anymore, as they are being forced upon users.


Those of us that prefer bundles have been "forced" to use centralized repositories our whole lives.


Isn't the sandboxing a good idea, though? It feels like Linux is stuck in the past and is actually one of the least secure OSes out there; what keeps it safe is just its small desktop market share.

Disclaimer: I never used snap and don't use Ubuntu.


Flatpak gives you a sandbox, and has basically skirted every shortcoming of Snap (faster, open, extensible, adopted by every other distro, doesn't pollute /proc/mounts).


Flatpak refuses to respect the host's values for XDG_CONFIG_HOME and HOME, making it impossible for some applications to share your configuration.


Flatpak's sandboxing is an absolute joke and entirely voluntary.


At worst, the "joke" sandbox value of Flatpaks is the same as deb/rpm packages and snap "classic" confinement. So as a user, you don't lose anything security-wise by moving to Flatpaks.

Flatpak, as well as snap through classic confinement, allows developers to "escape" the sandbox, because they know they don't have all the permissions required to provide feature parity with deb/rpm packages. Another reason this is needed is that application developers are not writing their applications with Flatpak compatibility in mind. However, Flatpak is going in the right direction.

Mobile operating systems have proven the value of sandboxing apps.


> is an absolute joke

Please elaborate.

> entirely voluntary

Fedora Silverblue would like to disagree. And in any case all parties know it's still in flux, but the stable parts are stable in my experience. I would not want something like Snap or otherwise immature forced down my throat.


Entirely voluntary in the sense that flatpak enforces the sandbox the application tells it to enforce. If the app asks for full system access, flatpak doesn't deny that. That rather defeats the purpose of a system built to run third-party code. If I download a malicious app from flathub or some other repo that asks for full system access, flatpak does nothing in the name of security.

Note: I'm not making a value judgement about flatpak's sandboxing, merely describing it to the best of my knowledge.


Enforcing permissions and auditing third party code are two different things. The point of app permissions is that when you know an app should never do something, you constrain it from ever doing that. Then if it has a security vulnerability, it's limited in the damage from the resulting compromise.

The people configuring the sandbox should be packagers that you trust. The upstream developers might provide some recommendations to the packagers, but if it's obvious that an application shouldn't need a permission, it shouldn't have it.

But permissions can't save you from an actually malicious app. Constrain it from accessing the camera and it will still be using your device to host pornography. Constrain it from accessing the filesystem and it will still run up your electric bill mining bitcoin. You either need to trust the developer or you need to get the app through someone you trust to have audited it for you.
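The "packagers should question obviously unneeded permissions" idea can even be mechanized. A toy sketch (the permission strings loosely mimic flatpak's finish-args syntax, but the policy list is invented for illustration, not flatpak's actual rules):

```python
# Invented example policy: permissions a reviewer should always question.
SUSPICIOUS = {
    "--filesystem=host",     # full filesystem access
    "--socket=session-bus",  # unfiltered D-Bus, a well-known escape hatch
    "--device=all",
}

def review(requested):
    """Return the subset of requested permissions needing justification."""
    return sorted(set(requested) & SUSPICIOUS)

flags = review(["--share=network", "--filesystem=host", "--socket=wayland"])
print(flags)  # ['--filesystem=host']
```

A real reviewer still has to judge whether the flagged permission is genuinely needed; the tooling only narrows the question.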


I don’t want the sandboxing to be mandatory! Some software inherently doesn’t work in a sandbox, or works less well. A good distribution format should still cover those use-cases, the user just needs to be informed about the capabilities of what they install. Especially on Linux, which is supposed to be about user-empowerment.


None of the flatpak apps I've used on Pop!_OS work correctly / as stably as installing a deb directly.

I’ve not been impressed at all.


I guess just to provide a counterpoint, I can't remember having had any issues with flatpak on Pop!, including both open-source apps and proprietary ones like Spotify.


How about Zoom? It was the worst: completely unusable for anything more than viewing a meeting.


No issues for me--I've run Zoom pretty regularly, including participating in some fairly large meetings, plus screensharing, etc.

Occasionally I need to replug my headset in at the beginning of the meeting to get my voice audio working, but I'm not sure if that's actually an OS issue or not. Either way, it's never taken more than a couple seconds.


YMMV. Firefox is the gold standard, at least on Arch Linux. Never have I noticed it was running in a sandbox. I run Discord, Spotify, and Thunderbird from Flatpak without any issue either.

Visual Studio Code is a regular package, just because its use case is not really suited to a sandbox (access to system binaries, libraries, etc.), but I honestly haven't even tested its Flatpak version.


The VSCodium flatpak works fine for me. The caveat is that I use the operating system's terminal instead of the one from VSCodium (for things like make test, python environments, etc.). VSCodium's terminal operates in a sandboxed environment, and I haven't bothered to figure out how it relates to the OS environment I get from the terminal app.


You set your shell to some wrapper which allows a sandbox escape. It works for the shell itself, but then you need to apply it to all the external utilities that extensions use as well, and it gets annoying.


Interesting, are there any examples on the internet for that process?


The problem with flatpak is that many apps don't really run in a sandbox. Or at least not in something with the isolation features you might expect from a sandbox.


To be honest, to me the selling point is not the sandbox, it's the reproducible environment that's common to all distributions, which is a first in the Linux world; the sandbox is just the icing on the cake.

The other day I pushed an update to some flatpackaged app, and guess what, it's available to all Linux users. Packaging has become incredibly easier for third parties with these kind of technologies.


The problem is the combination of how updates are (often not) done (in time) + no sandbox.

I also strongly believe that the future for sane desktop PCs is for every program (except the most fundamental core services) of a desktop OS to be sandboxed by default, with basically no permission to access any local files or communicate with any other program/service.

It would need some MAJOR changes, slowly, step by step. I had hoped that with snap & flatpak we were slowly transitioning there. But it doesn't really look that way anymore, to be honest.

(PS: And it can be done with a reasonable UX, without requiring the user to configure magic access rules or anything. But it's tricky to get right, and it won't be fully backward compatible. Often the changes just need to go into the GUI toolkit (Qt, GTK), so it should be possible.)


Access control is a good idea; sandboxing on top of it is saying it's too nuanced a problem, so here's a kitchen sink of overhead.

Linux has had some of the best software security through MAC (SELinux, AppArmor) and cgroups. The problem is that there is no culture in free software of actually writing security specifications at the point of development; it generally falls on distros/maintainers to try to sort out MAC profiles or cgroups restrictions on a per-package basis.

That is why the packagers largely went the Snap/Flatpak route of saying "screw doing the grunt work, here's a total sandbox with all the libraries built in."

It would be great if we could convince the whole ecosystem to start provisioning access specifications for libraries and binaries so upstream could start building apparmor / selinux profiles from provided files rather than having to do learning mode auditing that drove distro maintainers to not even bother.


It's not.

For instance: the Pinebook Pro just got GLES3 support via upstream Mesa, but the flatpaks that bundle Mesa haven't updated, or won't, with any alacrity.

Users are left having to abandon sandboxing in order to get necessary updates.


The Pinebook Pro uses an arm64 CPU instead of x86. That creates a big list of problems you don't have on x86, because x86 is what Linux desktop developers have (mostly) been targeting.

As one example, I was surprised to find that the Tor Browser doesn't have an arm64 build :o


I've installed a few dozen AUR packages that only needed aarch64 added to the arch list. I think only one, syncterm, required any other modifications to build and even that was just setting a preprocessor define (__arm__).
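For anyone curious what that change looks like, it is usually a single line in the PKGBUILD; a sketch (the define for syncterm is paraphrased from memory, so treat it as approximate):

```
# PKGBUILD excerpt -- add the target architecture to the arch array
arch=('x86_64' 'aarch64')

# syncterm additionally needed a preprocessor define, roughly:
#   export CFLAGS="$CFLAGS -D__arm__"
```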

Thus far, it's my favourite laptop purchase ever. Beats the feeling I got with the IBM X series or the Ti Book. It's light, zippy enough, great Linux hardware support, and totally silent.


Even if you did swap a symlink to upgrade to GLES3, all the apps will still call the GLES2 functions. No matter what, the devs would have to go back and rewrite their app to actually use the new features.


Many apps detect the GL level and adapt their renderers accordingly.

Still, it's not just graphics drivers: there are the oft-cited security patches, but also features for user-facing interfaces; e.g., an update to a URL-parsing library might add additional codepage support transparently to an app that uses it.


The freedesktop runtime updates Mesa faster than most distributions. The current version is 20.1.5.


And Arch let me build and install git HEAD, so I'm using 20.3.


Forcing the Arch model on everyone is a perfect way for Linux to forever stay a minority platform. There's a reason most people do not run rolling distros.

It might be nice for hardcore fans, but good luck supporting anything there.


There is a very real shift happening from Ubuntu to Manjaro.

If the sandboxed app package model is what you desire then there's already a great and popular Linux distro for you: Android.


I've observed a lot of recent Manjaro adoption lately (I don't know what the real numbers are). Anecdotal, but in my social circles I'm seeing people moving from Ubuntu to either Manjaro or Arch.


Don't be surprised when all the mainstream distros switch to the sandboxed app model as the preferred one. Distro-based packaging of the entire world is hitting the not-enough-manpower problem today, and will eventually be relegated to super-fan distros only, with a status similar to what Amiga has today.


What is Manjaro?


Best way to describe it is: a user-friendly version of Arch that is based on the AUR (Arch User Repository).


As an Arch Linux user I am completely confused. It looks like there is no official comparison page like [1]. The first web search entries [2], [3] give another picture -- more stable, graphical installer, graphical package manager, hardware detection, prepackaged desktop environments -- that I can relate to.

Nothing about "based on the AUR", because that is a liability. It is insecure; you have to check the PKGBUILD on each install and update.

[1] https://wiki.archlinux.org/index.php/Arch_compared_to_other_...

[2] https://linuxconfig.org/manjaro-linux-vs-arch-linux

[3] https://itsfoss.com/manjaro-vs-arch-linux/


I should have been more clear. In terms of package management Manjaro has access to the AUR, but it's unsupported. Manjaro does not have access to official Arch repositories. Sorry, I should have said "based on Arch" not "based on the AUR." https://wiki.manjaro.org/index.php/Arch_User_Repository


OK, I would not recommend Arch as a first Linux distro, so being Arch-based and easy to start with is a plus. That said, I've seen a troubled Manjaro user on the xmonad IRC; we tracked the problem down to an AUR package, and it worked fine on Arch.


Yeah, Arch Linux is definitely not for beginners. The install process alone is hands-on and requires users to be experienced with command line configuration. Arch is for power users who want a ports-based Linux with great package management (pacman).



Why would you expect to have a git snapshot shipped as the official freedesktop flatpak runtime? Could you not build your own freedesktop flatpak runtime using mesa git head?


> Could you not build your own freedesktop flatpak runtime using mesa git head?

It was easier to build it and not use flatpak.


Almost no apps are actually delivered via the Apple or Microsoft app stores. Apps from outside the app store aren't obliged to use the sandbox, although Mac apps can, and macOS does have a line of defense against unsigned applications. It's harder, but not impossible, to end up pwned by signed apps on a Mac.

The number one and two attack vectors have always been tricking users into installing malware and attacking old insecure software.

Distributing virtually all software via app stores substantially solves acquiring safe software and ensuring it receives updates.

Defense in depth is virtuous, but Linux is already more secure than Windows in the ways that actually count, and unlike MS it is actually positioned to sandbox software in the future, because it's all already mostly coming from app-store-like repositories.


You don't need snap-style distribution for sandboxing.


Containers, chroot jails, and whathaveyous existed long before snap came along. Like others have said, flatpak is a saner alternative that doesn't impose idiotic requirements like systemd or an X server.


> doesn’t impose idiotic requirements like ... x server

How are you going to sandbox graphical apps without knowing about (and having capabilities around) the system by which a containerized app would communicate with your OS’s graphics subsystem?

I mean, if you’re not going to run any X11-client graphical apps, it should probably be optional to have an X11 server installed; but either way, you’ll need the X11 wire-protocol libraries (“xorg-common” in most package repos) for the sandbox to link in.


> How are you going to sandbox graphical apps without knowing about (and having capabilities around) the system by which a containerized app would communicate with your OS’s graphics subsystem?

Wayland is the way forward.

> you’ll need the X11 wire-protocol libraries (“xorg-common” in most package repos) for the sandbox to link in.

Today, runtimes do contain client X11 libs. However, nothing in flatpak requires them, and it is possible to phase them out in future releases of the runtimes.


> Wayland is the way forward.

Wayland has been "The Way Forward(tm)" for 10 years now.

That may be. But nobody at Red Hat/Canonical/etc. believes it enough to put sufficient manpower behind it to make it true.


> But nobody in RedHat/Canonical/etc. believes that enough to put sufficient manpower on it to make it true.

It is the default display server in RHEL 8. If that is not believing in it enough, I don't even want to know what would count as sufficient proof otherwise.


Right, sandboxing should be done at the OS level, not in user space.


> is actually one of the least secure OSes out there

Linux is one of the most secure platforms to run web applications on, however, because more man hours than I can comprehend were spent hardening that use case.

All of those hardening measures can transfer over to the Linux desktop use case.

For example, seccomp, cgroups and MAC can all be used to harden a Linux server, and they can also be used to harden the Linux desktop. It's just that no one has thrown the same billions of dollars at desktop Linux that were thrown at solving web application security.

If you really wanted to, you could run a lot of your software in unprivileged containers secured with seccomp.
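As a concrete taste of that machinery, here is a minimal Linux-only sketch using seccomp's original strict mode (modern hardening would use seccomp-bpf filters via libseccomp instead; the constants below come from the kernel headers):

```python
import ctypes
import os
import signal

# Constants from <linux/prctl.h> and <linux/seccomp.h> (Linux-only).
PR_SET_SECCOMP = 22
SECCOMP_MODE_STRICT = 1  # afterwards only read/write/_exit/sigreturn are allowed

libc = ctypes.CDLL(None, use_errno=True)

pid = os.fork()
if pid == 0:
    # Child: enter strict seccomp, then attempt a forbidden syscall.
    libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0)
    os.open("/etc/hostname", os.O_RDONLY)  # open(2) is disallowed -> SIGKILL
    os._exit(0)  # never reached if seccomp engaged
else:
    # Parent: confirm the kernel killed the child for the violation.
    _, status = os.waitpid(pid, 0)
    killed = os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGKILL
    print("child killed by seccomp:", killed)
```

Strict mode is far too blunt for real desktop apps (even memory allocation trips it), which is why practical sandboxes build syscall allowlists with seccomp-bpf instead.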


>If you really wanted to, you could run a lot of your software in unprivileged containers secured with seccomp.

We've come full circle, because Snap does run software in unprivileged containers.


They are not the same thing, however, and the complaints people have about Snap don't stem from its use of unprivileged containers.


Not if it slows things down as much as it does, especially launching. What are you doing on your machine that AppArmor and SELinux (for app isolation) aren't enough? Just use a VM. Not everything needs to be sandboxed if it ruins the zen of interacting with your machine.


Can’t you actually sandbox any app you want with chroot and cgroups yourself?


chroot is not the same as LXC (Linux containers).

You can do something very similar, but if it's a GUI app, or it has specific system dependencies, there will be issues to work around.


Sandboxing is great in theory, but last I checked the snap daemon opened Internet connections as root. I'll take my chances with traditional Unix permissions over that.


> Might as well ship a pre-linked binary that just does system calls.

I actually think that’s exactly what we should do. Containers are an over-engineered solution for a problem that never needed to exist.

Dynamic linking doesn’t work unless you can live inside a distro maintainer’s special bubble for all your software. If you can exist in that bubble, great—I really like Debian for certain use-cases—but if you can't, the benefits of dynamic linking everything are clearly outweighed by the drawbacks.


> Dynamic linking doesn’t work unless you can live inside a distro maintainer’s special bubble for all your software. If you can exist in that bubble, great—I really like Debian for certain use-cases—but if you can't, the benefits of dynamic linking everything are clearly outweighed by the drawbacks.

Good luck patching that security vulnerability in all those static binaries without proper dependency tracking ;). Not that I am on a particular side of the fence, both have their downsides.

To me, the problem is package managers from the '90s that use a single global namespace, only allow UID 0 to install packages, and do not really provide reusable components.

Modern packaging systems like Nix and Guix allow users to install packages. Packages are non-conflicting, since they do not use a global namespace (so, you can have multiple versions or different build options). They provide a language and library that allows third-parties to define their own packages.

Not to say that they are the final say in packaging, but there is clearly a lot of room for innovation.

Snap and Flatpak are copying the packaging model of macOS, iOS, and Android. This is a perfectly legitimate approach (and IMO the execution of Flatpak is far better). But it is not for everyone -- e.g. if you prefer a more traditional Unix environment.


Rolling out shared library updates to resolve security vulnerabilities is not without its own issues.

The big one, which surprisingly many places still manage to fumble due to poor process controls or simple mistakes, is that you have to restart all running processes that use the library after you update it.

I actually prefer to deploy static builds of critical services for this reason, because you already have to know that you're running version 1 build 5 everywhere -- and if everything is build 5, then they all have the fix. You don't also have to check if the process was started after May 5th.


> without proper dependency tracking

Well, there is nothing that says you can't have proper dependency tracking just because something is statically linked. The infrastructure isn't currently there [x], but it definitely is something that languages and language package managers could coordinate with each other to provide.

[x] But it can be built, now that more and more languages have access to language package managers with proper dependency tracking. One way would be to create a standard for how to query a binary for what it depends on. Then a computer could have a central database of the dependencies of the static binaries that are installed.
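
To make that concrete: a hedged sketch of what such a central database might look like (everything here is hypothetical, not an existing tool). It records which library versions were statically linked into each installed binary, so a CVE announcement can be turned into a rebuild list:

```python
# Hypothetical registry of statically linked dependencies.
# Names and data are illustrative, not a real package-manager API.
from dataclasses import dataclass

@dataclass
class BinaryRecord:
    name: str
    deps: dict  # library name -> embedded version, e.g. {"zlib": (1, 2, 11)}

class StaticDepDB:
    def __init__(self):
        self._records = []

    def register(self, record):
        self._records.append(record)

    def affected_by(self, lib, fixed_in):
        """Binaries embedding `lib` at a version older than `fixed_in`."""
        return [r.name for r in self._records
                if lib in r.deps and r.deps[lib] < fixed_in]

db = StaticDepDB()
db.register(BinaryRecord("webserver", {"zlib": (1, 2, 11), "openssl": (3, 0, 1)}))
db.register(BinaryRecord("backup-tool", {"zlib": (1, 2, 13)}))

# "CVE fixed in zlib 1.2.12 -- which binaries need a rebuild?"
print(db.affected_by("zlib", (1, 2, 12)))  # ['webserver']
```

The hard part, as sibling comments note, is populating such a database reliably, which is what embedding dependency metadata in the binaries themselves would enable.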


> Well, there is nothing that says you can't have proper dependency tracking just because something is statically linked.

Didn't say so. It is just easier with dynamic linking, because you can see what libraries (and versions) a binary is linked against.

> But can be built, now that more and more languages have access to language package managers with proper dependency tracking.

Actually, approaches such as Nix's buildRustCrate (where every transitive crate dependency is represented as a Nix derivation) + declarative package management offer this today.

But with curl | bash or traditional package managers, which are most widely used today, this kind of dependency tracking is hard/ad-hoc.

> But can be built, now that more and more languages have access to language package managers with proper dependency tracking.

But then a static C library is used and nobody knows where it came from. Even if you look at the Rust ecosystem, which generally does things well when it comes to dependency handling, crates are all over the place when it comes to native libraries. I have seen everything from crates that use a system library (or something discoverable via pkg-config), via crates that have the library sources as a git submodule and build them as part of the build script, to crates that download a precompiled library from some shared Dropbox link.

Another fun example from another language ecosystem: numpy uses OpenBLAS. They compile their binary wheels on CI. However, OpenBLAS itself is retrieved as a precompiled binary from another project [1]. And the rabbit hole goes deeper: when OpenBLAS is built for macOS, a precompiled disk image is retrieved from yet another repository [2]. This disk image is added to that repository, but comes from yet another place [3].

This is all sort of the opposite of the lessons to take from Reflections on Trusting Trust and the bootstrapping that the Guix folks try to do.

Anyway, with the mindset that most developers have, we will never have proper dependency tracking.

[1] https://github.com/MacPython/openblas-libs

[2] https://github.com/MacPython/gfortran-install/tree/d430fe6e3...

[3] http://coudert.name/software/gfortran-4.9.0-Mavericks.dmg


I agree. Are there any major disadvantages to that, aside from size and having to wait for each application to update its own dependencies?


The 'horror story' people always mention, which is a special case of "having to wait for each application to update its own dependencies", is that if there's a security vulnerability in a much-used library, you have to wait for each application maintainer to update their application, rather than simply having the distribution maintainer update the shared library. I'm not sure I agree that this would be worse than the current situation...


No plugins or switching implementations. This mainly affects stuff like OpenCL, OpenGL, Vulkan, Qt, Apache, PHP & PAM.

A few of these could be solved (esp. PAM and the GPU ones) by making the whole thing work over IPC. Already OpenGL is a pain to get working in generic containers.


Are there any major disadvantages to a nuclear winter, besides that everyone would die and the environment would be destroyed?


I just honestly don't understand this. On Windows and Mac (the OSes that 99% of the planet uses on the desktop), this is exactly how things work. The OS provides a set of APIs. If an application author needs a library that isn't in the OS, they have to ship it themselves. If a vulnerability in that library comes to light, they have to fix it and ship an updated version of their application. If it's a commonly used library, a lot of application authors are going to have to ship updates.

Why couldn't this work for a Linux-based OS? Honest question.


On Windows and Mac, there is Microsoft / Apple who decides what the set of OS APIs is, and everything outside it is an external library.

In Linux, there is no such authority and therefore no sharp line separating 'core OS' from 'external library'. It is just a conglomerate of the Linux kernel and independently developed tools and libraries (where each of them is more-or-less optional).


That way leads to high numbers of boxes with vulnerabilities, which may be "fine" for non-technical folks. That's not the audience Linux serves, however.


I don't think it's at all clear that it would lead to high numbers of boxes with vulnerabilities. Is it clear that a Mac is more vulnerable than a desktop Linux box, if you control for the technical sophistication of the person maintaining it? I don't think that's at all clear.


While it's not guaranteed they'll be installed, the vast majority of linux desktops get security updates, including for all normally installed applications. That's a pretty big advantage over a manual update strategy.


It does seem like there could be a, well, canonical set of common libraries of specified version-ranges for a particular version of a particular distribution which are dynamically linked and with updates pushed by the distro maintainer, and if the application developer needed something else it would be statically linked.


I think the point is it "could" work on Linux, but a significant portion of devs consider that to be a bad method of solving the problem.


If you have a BSD app that uses an LGPL library, congratulations, your app is now (L)GPL... let alone a differently licensed other library, and now your app can no longer be distributed/licensed.


> Just fix the bugs and leave the rest to the application developers.

Some devs flat out[0] refuse (and I am not debating whether it's right or wrong) to package their app for every distro (even major ones: Ubuntu, Debian, Arch, RHEL/Fedora), so it's up to distro maintainers to package them, and users are always at the end of a line of other people packaging the apps (either through distro packages or sandboxed one-click installers).

[0] Words are too strong and user CJefferson https://news.ycombinator.com/item?id=24384206 is right to call me out on that. I agree, sorry about that. Poor choice of words. I had a very specific example in mind but there's obviously a whole gamut of reasons for not packaging. My position is that we can't, nor should we, expect devs to package their apps for the distro we use. Also it's not like app devs and distro/OS devs/maintainers are living in hermetic boxes with their code/apps never interacting or evolving. Not editing out, for context.


I don't like "flat-out refuse". I don't "refuse" to package my apps for every distro any more than I "refuse" to do my neighbour's gardening.

I tried packaging for Debian once and after 2 days I gave up -- I have neither the time nor the patience to do work for free for distros I don't use.


Yeah, there's more than a whiff of entitlement to that phrasing.

If you haven't seen it, I highly recommend looking at fpm for packaging. Unless you're doing something weird or need an obscure format, it is the tool you want.

https://github.com/jordansissel/fpm


> Yeah, there's more than a whiff of entitlement to that phrasing.

I have been thinking about that. I disagree. It's always nitpicking o'clock on HN and I specifically wrote "Some" and specifically wrote in brackets that I wasn't debating whether it's right or wrong. I agree that it should have been worded differently but the facts remain, and it doesn't follow that there's “more than a whiff of entitlement“.


I'm not clear what you mean then, to be honest. Anyone who isn't packaging their software is, effectively, flat-out refusing to package it. I don't really see how I, or anyone else, could "more" flat-out refuse.


There's a spectrum. Some don't care, some refuse and some flat-out refuse. I dislike being accused of entitlement because I point that out.


[dead]


Read it again. I'm agreeing with the parent.


Ugh, apologies.


> What's the point of dynamic linking if we then end up shipping half an OS with an application just to get around dependency hell?

Not sure if you were commenting on the whole approach or just snap, but FTR, flatpak uses shared base layers which can be updated individually, so there's still an upside to dynamic linking.


Dynamic linking and containers aren’t necessarily incompatible, though nobody has combined them well yet.

Of course, half the point of containers is to “vendor” your dependencies — a container-image is the output of a release-management process. So the symbolic reference part of dynamic linking is an undesired goal here: the container is supposed to reference a specific version of its libraries.

But that reference can be just a reference. There’s nothing stopping container-images from being just the app, plus a deterministic formula for hard-linking together the rest of the environment that the app is going to run in, from a content-addressable store/cache that lives on the host.

With a design like this, you’d still only have one libimagemagick.so.6.0.1 on your system (or whatever), just hard-linked under a bunch of different chroots; and so all the containers that wanted to load that library at runtime, would be sharing their mmap(2) regions from the single on-disk copy of the file.
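
A minimal sketch of that hard-linking scheme, assuming a host-wide content-addressable store (paths and names are made up; no real container runtime works exactly like this):

```python
import hashlib
import os
import tempfile

# Host-wide content-addressable store (hypothetical layout).
store = tempfile.mkdtemp(prefix="cas-store-")

def add_to_store(data: bytes) -> str:
    """Put a blob in the store under its content hash; return its path."""
    path = os.path.join(store, hashlib.sha256(data).hexdigest())
    if not os.path.exists(path):
        with open(path, "wb") as f:
            f.write(data)
    return path

def link_into_root(store_path: str, root: str, name: str) -> str:
    """Materialize the blob inside a container root as a hard link (no copy)."""
    os.makedirs(root, exist_ok=True)
    dest = os.path.join(root, name)
    os.link(store_path, dest)  # same inode: shared disk blocks and page cache
    return dest

lib = add_to_store(b"\x7fELF...pretend this is libimagemagick.so.6.0.1...")
a = link_into_root(lib, tempfile.mkdtemp(prefix="ctr-a-"), "libimagemagick.so.6.0.1")
b = link_into_root(lib, tempfile.mkdtemp(prefix="ctr-b-"), "libimagemagick.so.6.0.1")

# Both "containers" see the file, but it exists exactly once on disk,
# so an mmap(2) of it at runtime would be shared between them.
print(os.stat(a).st_ino == os.stat(b).st_ino)  # True
```

(Hard links only work within one filesystem, which is one of the real-world complications of this design.)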


Hey, you've invented WinSxS.

The primary issue with this approach is that if every program only sees its own version of the library anyway, there's no incentive to coordinate around library versions - you end up with tons of versions of everything anyway, maybe not one for every application, but close to it.


> Hey, you’ve invented WinSxS.

Oh, I know :)

> there’s no incentive to coordinate

True, but it potentially works out anyway, for several reasons that end up covering most libraries:

• libraries that just don’t change very often are going to be “coordinated on” by default.

• people building these container-images are the same people who actually tend to be running them in production, so they (unlike distro authors) actually feel the constraint of memory pressure; so they, at development time, have an incentive to push back on library authors to factor their libraries into fast-changing business layers wrapping slower-changing cores, where the business-layer library in turn dynamically links the core library. This is how huge libraries like browser runtimes tend to work: one glue layer that gets updates all the time, that dynamically links slower-moving targets like the media codec libraries, JavaScript runtime, etc. Those slower-moving libs can end up shared at runtime, even if the top-level library isn’t.

• on large container hosts, the most common libs are not app-layer libs, but rather base-layer libs, e.g. libc, libm, libresolv, ncurses, libpam, etc. These are going to be common to anything that uses the same base image (e.g. Ubuntu 20.04). Although these do receive bug-fix updates, those updates will end up as updates to the base-layer image, which will in turn cause the downstream container-images to be rebuilt under many container hosts.

• Homogeneous workloads! Right now, due to software-design choices, many container orchestrators won’t ensure library-sharing even between multiple running instances of the same container-image. We could fix this issue without fixing the rest of this, but designing a container-orchestrator architecture around DLL-sharing generally would also coincidentally solve this specific instance of it.


Unless you coordinate the different apps to be compiled with a specific version of a library, effectively creating a distribution.


Apple does something similar with their dylib cache but they sadly don't use content-addressable storage.


> The whole idea that you'd need a container like environment to install an application

Simple example. App A wants tensorflow 1.10, CUDA 8, and python 3.7. App B wants tensorflow 2.2, CUDA 10, and python 3.8. You want App A and B installed at the same time but the two versions of tensorflow are neither forward nor backward compatible. The two pythons will fight with each other for who gets to be "python3". How do you deal with this without containerization?

I don't think it violates the principles of open source at all, it's just making sure each application gets the exact versions of libraries it wants without messing up the rest of your system.
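
For the Python side of that conflict specifically (venvs can't help with CUDA or other system libraries), per-app virtual environments already give each app its own `python` and its own site-packages without a full container. A stdlib-only sketch:

```python
import os
import subprocess
import tempfile
import venv

# One environment per app; each could pin its own dependencies
# (with_pip=True would let each env pip-install its own tensorflow version).
app_a = os.path.join(tempfile.mkdtemp(prefix="app-a-"), "venv")
app_b = os.path.join(tempfile.mkdtemp(prefix="app-b-"), "venv")
for env in (app_a, app_b):
    venv.create(env, with_pip=False)

# App A's interpreter resolves imports from App A's env only:
py_a = os.path.join(app_a, "bin", "python")
prefix = subprocess.run([py_a, "-c", "import sys; print(sys.prefix)"],
                        capture_output=True, text=True).stdout.strip()
print(os.path.samefile(prefix, app_a))  # True: this python "is" App A's env
```

So neither app ever has to fight over who gets to be "python3" -- each is launched via its own env's interpreter.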


> You want App A and B installed at the same time but the two versions of tensorflow are neither forward nor backward compatible.

And that's the exact problem. Instead of solving it in the proper way you end up with kludge-on-kludge to paper this over.

Backwards compatibility is a great good; you let it go only if you absolutely have to, not just because upgrading stuff is so easy.

> The two pythons will fight with each other for who gets to be "python3"

An even clearer case, obviously python 3.8 should be backwards compatible with 3.7.


> obviously python 3.8 should be backwards compatible with 3.7

I think so too, but HN downvoted me to oblivion the last time I advocated for that. That's part of the problem, I guess, is that the dev community doesn't actually agree with 3.8 being backwards compatible with 3.7.


Would you argue the same if they were called Python v37.0 and v38.0? Let's imagine that they are and move on. The problem is to use the alias "python3" as if it were the executable name.


At the risk of getting kicked off HN for all these downvotes for trying to have a discussion ... (thanks free speech haters, enjoy your echo chamber after I'm kicked off)

My understanding of semantic versioning is that:

- (x+1).0 and (x).0 don't necessarily need to be able to run code written for the other

- 3.(x-1) doesn't need to be able to run code written for 3.(x)

- 3.(x+1) should always run code written for 3.(x)

Hence, you should be able to point "python3" at the latest subversion of 3 that is available, continually upgrade from 3.6 to 3.7 to 3.8, and as long as you have a higher sub version of 3, you shouldn't break any code that is also written for an earlier subversion of 3. That's why it is supposed to be okay to have them all symlinked to "python3". If a package install candidate thinks the currently running "python3" isn't recent enough for the feature set it needs, it can request the dependency manager upgrade "python3" to the latest 3.(x+n) with the understanding of not breaking any other code on the machine.

Unfortunately that isn't true between 3.7 and 3.8. There are lots of cases where upgrading to 3.8 will break packages and that violates semantic versioning.
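
The rules above reduce to a one-line check; here is a sketch of the semver expectation being described (hypothetical helper, and note the replies' point that Python itself does not promise to follow it):

```python
def satisfies(runtime, needed):
    """Semver expectation: same major version, equal-or-newer minor."""
    (r_major, r_minor), (n_major, n_minor) = runtime, needed
    return r_major == n_major and r_minor >= n_minor

print(satisfies((3, 8), (3, 7)))  # True:  3.(x+1) should run 3.(x) code
print(satisfies((3, 6), (3, 7)))  # False: 3.(x-1) need not run 3.(x) code
print(satisfies((4, 0), (3, 7)))  # False: a new major may break anything
```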


> My understanding of semantic versioning is...

...irrelevant, I'm afraid.

Python doesn't use semantic versioning, so you can't really expect them to follow it. As GP insinuated, if you just pretend that 3.7 is 37, and 3.8 is 38, you'll pretty much be able to apply semver thinking, though.


> Python doesn't use semantic versioning

Right, so because Python doesn't cooperate, we end up needing containerization, which is what I was trying to explain in GGGGP. Because apt will upgrade 3.7 to 3.8 and unfortunately break anything that was written for 3.7 (and vice versa).

An app needs to be able to say "I'm ok with python3>=3.7" and be fine if it gets 3.8, 3.9, or 3.20, if we want to be able to run it without a container. (And likewise for all its other dependencies besides python)


If appA needs python3.6, then call it with `python3.6`, not `python3`. It can exist in your /usr/bin in parallel with 3.7. The standard python used by your distribution is python3. I think that's currently the way it's done.


One problem is that those open source principles have not been codified anywhere. With free software we have its licenses guarding the principles, like copyleft and the requirement to share modifications if distributing software, etc. With many open source licenses we have almost nothing in the way of protecting the principles, because licenses like MIT basically say "I do not care." Open source will be exploited until people learn that they have to protect it.


> With many open source licenses we have almost nothing in the way of protecting the principles, because licenses like MIT basically say "I do not care.".

What's wrong with that? It's perfectly okay to say "I don't care".

I do release all my code under MIT because I care about attribution. I don't mind if people want to use it in commercial or closed source applications, nor if they want to modify it somehow.

I distribute code because that makes _me_ happy, not because I want to share an ideological statement about how others should distribute their code (or not).


It is not unethical to do what you do. I would say that perhaps it is only a little short-sighted. I don't mean this in a negative, insulting way, and I will explain why I think so.

The problem is rather what it means in the long run. The point is that this kind of caring only about, for example, attribution makes the ecosystem exploitable. It does not uphold ideals or enforce principles. Without upholding ideals and enforcing principles, how do we expect our principles to be followed in the future? If there is no legal obligation to do anything, which capitalistic (We need to maximize our profit! Ethical principles? Nah, come on ...) big company is willing to go the extra mile to respect the principles of some open source community, perhaps even at the expense of making more profit from a closed source solution? And I mean going the extra mile without seeing the very action as an opportunity to promote oneself. Simply going the extra mile, because it is the fair thing to do.

As I see it, as long as there is a chance to deviate from following the principles (no copyleft), someone somewhere will do so. Heck, even with such an obligation to adhere to principles, some people will deviate from the path. The tendency is always in the wrong direction if we do not enforce our principles of openness and such. It is an uphill battle. The whole ecosystem drifts in a less open direction through these initially small "missteps".

Especially when a big company with a lot of developers takes stuff and makes something proprietary out of it as its product, which usually initially has more functionality than its open source counterpart, users will quickly switch to the non-open proprietary version. They do so because they want that new shiny functionality immediately. The slightest inconvenience is sufficient to drive many users towards proprietary software. They do not know, nor often do they initially care, that they are using a non-open, non-free thing. Until there are enough users to create a bubble in which the open source ideas are no longer present. By then, however, the network effects are already strong. "But all my friends use X. No one uses Y. I cannot convince them all to switch from X to Y!"

Example: There are loaaads of at least open source (and some also free software) messengers out there. All people need to do is to use them. But the network effect and features like integration with (a)social media are so convenient for them, that they give up on their freedoms and use things like Whatsapp or Facebook.


I care, and I want the world to be able to do whatever they want with code I gift to it.

MIT is beautiful in its concision, and reasonably reflects the "use however you want but don't blame me" legalese I used to custom-craft before I found it.


> They do not know nor often do they care initially about using a non-open, non-free thing.

Man, I get this now, especially with AWS services and everyone recommending how "easy" it is to do x with y service, why we should use it too, and how it will magically solve all the problems… and I'm like: "no, I'm going to use this open source software that we'll have the code for, be able to tweak to do whatever we want, and see how it all works (oh, and it's free to use) -- unless you're willing to hack around all the edge cases of y service yourself, without me getting involved at all", and that usually works. Though I suspect once I leave, the costs of running infrastructure are going to go through the roof and no one else will have any clue as to why (but more likely, they'll think it's impossible to have it any other way than paying for y)…

Moments like these are great opportunities for folks who just don't accept "the non-open proprietary" by default, but it's only an opportunity because most choose to accept "the non-open proprietary" by default… we all have to pay for the choices we make… some just want to pay more to not have to think about things… tradeoffs.


To answer on your AWS example.

Honestly, it really depends on which stage of life your company is at, and the resources you can allocate to infrastructure work.

At the very beginning of my company, we did exactly what you mentioned here.

- Pay for a managed NAT gateway? No thanks I can do the NAT myself with iptables on a cheaper EC2 instance.

- Pay for a managed NFS? No thanks I can do it myself

- Pay for managed VPN? No thanks I can setup IPsec myself

- etc.

With time though, as the company starts to gain money, and the number of users increase, we switched back to more managed services. The key here is that you want to refocus your infrastructures efforts on more business centric issues.

Also, most of the time, the effort to maintain a service grows exponentially as it scales. NFS is a good example here. Setting up a number of NFS shares for 5/10 users is fine. Once you get 20+ NFS users, you'd better focus on your real company product rather than spend months and money on maintaining NFS yourself.


For a small company in an EM country, I don't think infra spend anywhere close to what US companies pay will ever fit within the budget without significantly affecting margins… and considering that a lot of companies in the US are funded with either massive amounts of debt or on E rounds of funding… I don't think most can afford it either…

Though, saying this, even at a small corp level there are still affordable proprietary solutions out there (not necessarily in AWS), but most people trend towards what's trendy…


Who/What has been exploited here? As others mentioned, you could install debs just fine.


The work of the open source community has been exploited to create closed source walled gardens with superficially more convenience, to attract users and make a profit off them.

This pulls users from the open source projects, and since contributions do not flow back to the open source project, it can quickly become obsolete in the eyes of most users. The principles of open source will live on in that open source project, which in the end (exaggerating) "no one uses", and will become pointless. Most users are no longer protected by these great ideas or principles, because they will be sucked into the closed source swamp, because all their friends are there already.


"What's the point of dynamic linking if we then end up shipping half an OS with an application just to get around dependency hell? Might as well ship a pre-linked binary that just does system calls."

I'm curious - are there any Linux distributions that contain nothing but fully static, self-contained binary executables ? As in, ships with no libraries ?


It's about sandboxing your apps... not every app needs to be sandboxed, but plenty can and should be. I happen to like flatpak a bit more -- it just feels like a more open community -- and appimage is okay as well.

It's about getting dependencies with the app. I can't tell you the number of times I borked my OS install because I wanted a version of a single application with a feature newer than the year-or-two-old version supported in my distro's repository.

It helps to have both as options.


We're already there. The snap store is not open source and can only be run by canonical.


This is the biggest downside for me. I understand why they want to use snaps of huge software applications especially for rolling releases. I just don't like one company being in charge of the method of distribution.


Yes indeed, I don't mind snaps for the really big third party stuff. But they were using it for literally everything. Even htop was snapped at one point. Seriously... :/

Clearly they are looking for a way to put some kind of proprietary dependency in Linux by propagating snap to other distros so they can then milk it for cash (e.g. charging big publishers like Microsoft/Google a fee for access to the snap store). I don't think they realise that mainstream Linux users will hate it for exactly that reason.


There are many reasons to hate snap for. I have a comment that explains it in more detail, but seriously, this is the most anti-FOSS, anti-Linux crap I have ever encountered. Shame on Canonical.


Original Unix didn't support dynamic linking (despite having already been invented), and for some purists it was a mistake to add support for it. Plan 9 refused to add support for it. Dynamic linking is not some kind of holy principle.


Perhaps, but dynamic linking solves a very important set of problems (Speed, storage, memory, bandwidth, ...etc).


Security update problem.


> These large companies should stop fucking around with Linux

Canonical has 443 employees according to the wikipedia page. Is that large in this context? I don't really think so. Redhat (13k employees) is large. Canonical isn't large.


Large in their mind share, not necessarily by employees.


I disagree on the containerization part. It can absolutely improve security a lot by facilitating isolation of namespaces and filesystems.

But perhaps base container images on scratch, not on ubuntu:latest ;)


> What's the point of dynamic linking

There's a certain subset of developers who are against dynamic linking at all, and they do have some convincing arguments that are worth reading.

I don't necessarily agree with them, but their arguments are worth acknowledging.


Thankfully both Arch Linux and Gentoo have native Chromium and don't require Snap. I can recommend them to everyone, though Arch is more practical for everyday use, as compilations take time.


Dynamic linking certainly doesn't work for Steam et al.


Canonical has been doing this sort of thing for a very long time; the way Ubuntu Touch was designed should have been enough to make most people abandon them.


Statically linked binaries with an apparmor profile is not such a bad idea.


> The way open source is straying further and further from its principles is highly annoying.

I do think that's why the distinction between "free software" (copyleft) and merely "open-source" matters.

If you look historically, "open-source" became a thing as a reaction to free software; it preserves the most visible benefits (source code in the open, modifiable by others) but treats these as purely a convenient workflow when working on code, whereas free software is more of a philosophy and so less likely to erode on principles.


I agree that something based strongly on principle rather than convenience is less likely to drift from them, for better and for worse.

Free software doesn't need to be copyleft, though. The MIT license, for example, is a free software license, even though it's not copyleft. Projects such as the various flavours of BSD can have pretty strong principles regarding their software distributions remaining free even though they don't prefer copyleft.


> flavours of BSD can have pretty strong principles regarding their software distributions remaining free even though they don't prefer copyleft

That is true, however speaking to many members of the FreeBSD community in particular, there seems to be a strong sentiment of this simply being a practical model of development, rather than a strong ideological stance. In fact a large portion seems to be rocking Macs, "cause it's BSD anyway", which to me does not seem particularly principled.

In fact they seem to take pride in completely closed systems being based on FreeBSD, like the PS4, Nintendo Switch etc.


Open source never had any principles. Open source is just a libertarian hijack of free software to ensure that it is accessible to companies without any legal issues.


The principles are just that, principles.

In practice though, they don’t matter for a vast majority of users, and package managers are far less hassle. Maybe it’s time for the principles to change?


Could you clarify "package managers are far less hassle"? Did that statement have something to do with snap? Less hassle than what?


I switched to Pop_OS precisely because it doesn't support snaps. Snap is arguably half-free: the server isn't free at all, and no working free implementation exists. Therefore I don't want any snaps. Flatpak is OK.

This is all part of a sneaky attempt to bring free software under control: Microsoft grabbing more and more power in the Linux Foundation, RMS ousted from the FSF, GitHub becoming more and more central as the default go-to hosting (beware of GitHub!)...

All of these are part of a dangerous trend of appropriation and control by well-known monopolists. Don't fall for it. The GAFAM aren't nice guys.


>beware of Github!

I've been slowly moving off GitHub to SourceHut, and it's been a breath of fresh air knowing that not only is all the software free (so you can self host) but the maintainer (Drew DeVault) is also quite committed to keeping it that way. I feel like I'm in safe hands.

Thankfully it's not yet another GitHub clone, but built around git+email -- so it's far more decentralized by nature.

https://sourcehut.org/


Mint steers away from snap by default. I like the approach, and Mint 20 Xfce has been an absolute DREAM to install from scratch. Sane basic install, auto-upgrades that let you choose, the possibility to restore the OS to previous snapshots, and good polish overall. An intuitive, lean interface that lets you mute all notifications.

Currently playing Elder Scrolls Online with Lutris (sorry for the internet downtime, it was my first character creation ;). Also playing with minikube and docker to sharpen up some knowledge. All the modern toys easily within reach, making this another year of desktop Linux use, only happier with a nice clean install of a modern solution that mostly works very well. One search resolved the audio issue for good; everything else was provided by the OS.

Something like snap breaks too many of these community efforts; I can only reject it vehemently.


> Sane basic install

I think Mint's installation is too dumbed down tbqh. But, that said, I use Mint on all of the computers that I don't want to spend time to personalize.


I use Linux Mint on all of my home computers. I personalize all of them to my needs and I do not find it hard or difficult or time-consuming. I find the installation process straight forward.

Perhaps however you have more intense personal customization needs than I.


Just a hunch, but he probably means disk partitioning, allocating a partition for swap and other such stuff you can customize most easily during installation. It's nice to have excellent options for that, but I seriously don't have that as a deep need right now personally.


Well, I'll be that guy: I don't get all the complaining about snaps.

First, I use the command line for package installation and couldn't care less if the store experience is suboptimal.

Second, I use Firefox and anything that discourages people from feeding the Chrome monopoly, frankly, that sounds like a good thing to me.

Third, for some desktop apps I want auto updates. Having an option for some software to be on the latest is pretty slick and previously could only be solved with PPAs, which had their own problems (maintainer headaches, dependency issues, etc).

Would I want them on a server? No.

As a desktop (well, laptop) user, do I want all my software deployed with snaps and auto updating willy-nilly? Also no.

But as a desktop user, I appreciate that I have the choice.

And by lowering the barrier for maintainers who no longer have to worry about multiple distro versions, dependencies, etc, it means I get more software options. Sounds good to me!


You have a good point about making the desktop experience more painless and idiot-proof.

The real problem for me though is that snaps are slow as hell. I mean like taking 4-5+ seconds to open on a box with an SSD, i7, and 64GB of RAM. That's unacceptable.

The icing on the cake for me is that even through the command line as you mention apt now seems to be giving me snaps instead of debs for a great deal of programs, which affects much more than the store experience. And, also, regarding said store experience: if stuff like Spotify takes 5+ seconds to open I doubt a user coming from Windows giving Linux a try is going to want to stick around long...it would be great if there was just a better solution.


I second this; I couldn't care less about snaps, flatpaks, or debs.

But snaps are - for me - A LOT SLOWER than everything else out there.

*.deb packages and plain binaries start in less than a second.

Flatpaks and AppImages I have running in a second or two. A snap of the same app takes 3-5 seconds; sometimes (I wouldn't know why) it even takes as much as 8-10 seconds.

NOBODY gets a pass on making apps artificially slower in 2020.


> making the desktop experience more idiot-proof.

"If you make it idiot proof only idiots will want to use it". - this holds true. Canonical made the conscious and deliberate decision to treat users like morons by not even giving us the ability to decide how and when to install updates.

I gave up on Windows because of their blindly hostile approach to users, I won't be installing the latest Ubuntu - opting for Mint instead.


You're right; Apple is heading in the same direction by locking down macOS for power users :(


> The real problem for me though is that snaps are slow as hell. I mean like taking 4-5+ seconds to open on a box with an SSD, i7, and 64GB of RAM. That's unacceptable.

Spotify is specifically one of the snaps I use and frankly, I noticed it seemed to start a little slow but just assumed that was because of Electron or something. I literally don't care and never thought anything of it. I run it, it starts, and then I don't close it.

Besides, if Spotify users reject it, they can always switch to PPA or something else. It's their choice.

> apt now seems to be giving me snaps instead of debs for a great deal of programs,

"a great deal"? I've seen two mentioned, chromium and lxd. Where else have you encountered this where Debian has a package available from a maintainer but a snap shim is used instead?

Apt will also tell you a snap is available if there's no deb but that's just useful information.


> if Spotify users reject it, they can always switch to PPA or something else

Some apps, like Chromium, have no alternative PPAs available.

I installed KDE Neon 20.04, and when I discovered that Chromium was being switched to snap, I searched for any current *.deb out there. No proper PPAs; I just found some outdated Chromium builds, 1-3 versions behind current.

If it weren't for the KDE from Neon, I would have switched distros within the hour. I switched to Chrome instead.

Got some old compiled Chromium just to have the thing available (I can run it when I need it; it takes maybe 1/4 of a second to start).

Just hope Canonical doesn't try its snap thing on more critical packages or, (FAR) worse, on the LTS server versions.

I would be getting popcorn to watch the show when half the Internet starts to ditch the LTS overnight over some half-proprietary, half-baked software being put in charge of their otherwise perfectly GPLed infrastructure.


Right, this is why Canonical moved Chromium to snaps - it's a ton of effort building Chromium for 20.04, 18.04, and all the intermediate releases every few weeks for a package that's in universe.

It's cheaper/easier for them to publish one version across all of Ubuntu.


> Besides, if Spotify users reject it, they can always switch to PPA or something else. It's their choice.

This is not what happens. The vast majority of users don't know or care why something is slow. They'll just say "Ugh, Linux is slow, I'm going back to Windows."


It's definitely not launching slowly due to Electron, because it's not Electron :)

It's C++ with CEF


You're essentially saying "I don't care because nobody tried to trick me specifically into installing a snap if I wanted a debian package" - I hope you know that on the way forward, Firefox will be snap-shimmed too.

The Enterprise Linux ecosystem is a solid alternative to Ubuntu. Fedora for desktop and either CentOS or Fedora for servers, depending on how stable they need to be and how much maintenance you're willing to do.


And SUSE/openSUSE for an alternative in the same segment.

Very similar roots, also RPM-based, but with a different take on things, such as using Btrfs and KDE by default, which I prefer.


SUSE (the enterprise edition) uses GNOME as default. I love openSUSE Tumbleweed, but I think GNOME (with the SLES look) is much more stable than KDE. Anyway, I use i3 almost exclusively.

BTW: KDE is not the default, it's just the first in the list :)


btrfs is coming to Fedora 33 as default


> I hope you know that on the way forward, Firefox will be snap-shimmed too.

I see, so we're angry about things that haven't happened yet. TIL the OSS community has precogs and this is Minority Report!

Look, I get that two whole packages out of literally tens of thousands did this (I've seen lxd and chromium mentioned). But why don't we convict Canonical after they commit their crimes, eh?


Surely if it's on the roadmap then better to address it now and potentially avoid it rather than dig in deeper until sunk-cost fallacy suggests there's no way back?


Sunk costs?

How are there greater sunk costs related to switching distros later instead of now if Ubuntu starts actually misbehaving in a significant way?

I grant you, if you haven't picked a distro yet and this whole thing bothers you, by all means, pick something else! That's the beauty of the Linux world!

But if you're already in the ecosystem (and I have to assume most people complaining are... Otherwise why waste energy complaining about the actions of a company that doesn't affect you) I don't see how the sunk costs fallacy applies here.


It's sadly not as easy as you make it sound. I was recommended to use Ubuntu for my work laptop because the IT team couldn't guarantee support if you used another distro. I know my way around and particularly love Manjaro (and worked with Debian and SUSE and Mint) but the company uses proprietary VPN and a spy/monitoring agent and I wasn't sure I'll be able to make those work on a non-Ubuntu machine.

So yes, there could be huge sunk costs. After fighting with this work laptop for a full work day to get every single step of the onboarding completed, I am not looking forward to doing it again -- even less so if the proprietary and mandatory programs that I have to install can't be guaranteed to work on anything outside Ubuntu.


I've had a number of problems with snaps.

For one, multiple problems with Linux containers due to lxd now being a snap package, which auto-updates, restarts, and sometimes doesn't like switching networks.

Apps that needed mounts that weren't there.

I mainly use nix packages now for ease of installation, removal, updates and for nix shell.

Need to run a shell with ffmpeg? `nix-shell --run bash -p ffmpeg` And ffmpeg is gone after the shell terminates.


> Second, I use Firefox and anything that discourages people from feeding the Chrome monopoly, frankly, that sounds like a good thing to me.

I'm sorry but can you explain? How exactly is Snap helping Firefox? (I'm a Firefox user and I have never used Snap)


It's an admittedly weaker point: If folks want to avoid snaps (as this blogger professes) but choose Ubuntu then their next best option is (the default browser on Ubuntu): Firefox.

I guess a more succinct way to put it is: I actively oppose chrome and really don't care if the snap haters are screwed by Canonical's approach to packaging it.


I agree with you about chrome, but your point only works if nothing other than chrome is dependent on snaps, which seems clearly not what Canonical is aiming for.


Especially since Firefox has an officially maintained Flatpak package, which works beautifully, at least on my machine.

Is the snap version of FF maintained by Mozilla, by any chance?


I second that question. Currently Ubuntu only forces Chromium to be installed as snap, not Google Chrome.


Many of us have no problem with the idea of snaps, simply the implementation and forced preferences. If I wanted corporate policy shoved down my throat I'd simply use Windows. :D

https://news.ycombinator.com/item?id=23772524


> Third, for some desktop apps I want auto updates

> But as a desktop user, I appreciate that I have the choice.

Just so we're clear, it's not that the apps auto-update, it's that there's no way to stop them auto-updating.


So then don't install snaps and use PPAs or flatpak or compile yourself.

Unlike Windows 10 you do have a choice here.


I use Windows 10 and have a choice. I can use a package manager of my choosing. I can install an application that self-updates. I can pause OS updates in a control panel.

One of the biggest problems with Windows is that every app sets up its own update policy and comes from its own app store (Steam, Battle.net, etc.). But that's Microsoft's fault for doing too little, not for doing too much. And that's actually kind of nice.


> I use Windows 10 and have a choice.

So do I.

Their botched rollout of Edge, including installing it without my permission and placing it on my taskbar, shows that no, you don't have a choice.

> I can pause OS updates in a control panel.

This is misleading at best. You can pause updates for a time period but they're forced at that point.


Canonical's actions with snaps are very clearly steering us away from choices that were once easy to make. They come by default, you can't easily use a non-Canonical store (not even really possible at the moment), they force auto-updates on us and they trick us into installing them when we think we're going to install a traditional .deb.

I don't know why you're comparing it to Windows 10.


> Just so we're clear, it's not that the apps auto-update, it's that there's no way to stop them auto-updating.

Actually you can.

https://snapcraft.io/docs/keeping-snaps-up-to-date#heading--... for instructions on how to defer updates.

If you want to install a snap such that it never updates, see https://forum.snapcraft.io/t/disabling-automatic-refresh-for...
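For what it's worth, the first link boils down to configuring snapd's system-wide refresh timer; a sketch using the documented `refresh.timer` setting (the window shown is just an example, not a recommendation):

```shell
# Defer automatic snap refreshes to a weekly window (example window;
# see the snapcraft docs linked above for the full timer syntax)
sudo snap set system refresh.timer=fri,23:00-01:00

# Show the current timer and when the next automatic refresh is scheduled
snap refresh --time
```

Note this only reschedules updates; as the second link explains, actually pinning a snap forever requires a more convoluted workaround.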


Deferring isn't a solution, it's a band-aid. Just as you can only defer Windows updates but not completely opt out, you can only defer snap updates; there is no global opt-out. Having to do a convoluted procedure for every app as outlined in your second link shouldn't be necessary and definitely shouldn't be hidden behind an obscure source that only snap gurus even know about.

This issue, combined with the snap server being closed source and controlled by a for-profit company that seems to care less about FOSS with every Ubuntu release, makes snaps a very hard sell to anyone with common sense.

With the direction Linux is heading between snaps and systemd, and old-school holdouts like Void and Slackware suffering from a lack of developers and long-delayed releases respectively, I'm leaning more and more towards adopting OpenBSD for daily driver use. Performance has improved, quality of life has immensely improved, and the team behind it actually cares about putting out correct, working, secure software.

Sorry, rant over and I really went out on a tangent there, but my original point is that snaps are definitely no good for the future of Linux and I believe will be a detriment to the platform going forward.


Good, good.

I'm still of the opinion that this should be a front-and-centre feature: install from the snap store, disable updates for that app specifically, and still be tied to its store instance for manual future updates.

Is this a practical issue for me? Actually, no. However, the fact that they didn't just include it from the beginning is a clear indication that Canonical and I are starting to differ in our thinking when it comes to the autonomy of the user. As is the fact that they don't allow you to disable updating, but merely to postpone it for a maximum of two months.


The ask is not really to have a snap never update. The ask is that a snap updates only after the user allows it. This has the side effect that if the user never says yes, the app never updates, but the true goal is that you are in control of when your apps update. This has many benefits; I'll only list two. If something breaks, you can immediately deduce that it was caused by that specific update, instead of having to investigate your whole system. And you can always say no to updates on the day you have a presentation, so there are no surprises.


The author of the article didn't mind the principle behind snaps. If you read his diatribe, it was more about the performance, and about feeling snookered that Ubuntu was pushing snap so hard, even tricking people into using a deb that runs a script that then installs the snap version. It shouldn't take Chromium 5-10 seconds to start up while Firefox launches in under a second.


Same here. I have the choice, and so far snap has worked well for me. I would rather install some random application as a snap than as a deb whose installer I need to run as root.

When I need to install some random app from the internet (the last one was an advanced PDF viewer/editor), I manually unpack the deb, find the binaries, and run them as a regular user.


I don’t get it either. Just fresh installed 20.04 last week on a new drive and have had no trouble completely avoiding snap or even really understanding what it is. I’ve used Ubuntu for years and may have used the App Store one time.


It's still there in the background: running a root daemon, spamming mounts and the filesystem, whether you notice or not. Unsafe and obnoxious is not a good combination.

It's the same poor behavior that led to the marginalization of Docker.


I came here to make basically this comment.

I install everything via CLI and pretty explicitly always avoid the GUI they have for installing software. It's never been good, ever.

I'm on 20.04 and I think it's a fantastic release, head and shoulders better than 18.04.

The only thing in the author's list that I really take issue with is installing a snap when a user attempted to install from a .deb file. That sort of shit is antithetical to linux, and if it continues to get worse would actually be a reason for me to leave Ubuntu.

But everything else is just a non-issue to me.


Just remove snap [1] and move on with your life. That's the very first thing I do in every single Ubuntu installation for years (including 20.04), and never had any problem.

At least in my case I don't need / don't want any kind of app store in my system (but I understand this is not for everyone; less technical users may not be inclined to installing .deb packages using command line).

And I still follow this thread [2], hoping one day they will clean up the $HOME/snap directory and fix the performance so gnome-calculator doesn't take several seconds to load, but I'm keeping my hopes low.

[1] https://www.kevin-custer.com/blog/disabling-snaps-in-ubuntu-...

[2] https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1575053
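For context, the removal described in [1] essentially amounts to purging snapd and its GNOME Software plugin; a rough sketch (package names as of Ubuntu 20.04 — check the linked post and verify what's installed before running anything):

```shell
# List installed snaps first; snapd purges more cleanly once they're removed
snap list

# Purge the daemon and the GNOME Software snap plugin
sudo apt autoremove --purge snapd gnome-software-plugin-snap

# Optionally clean up the leftover per-user directory
rm -rf ~/snap
```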


So how do you install Chromium without their snap? Is installing non-snap Chromium as easy as installing non-snap packages in their repo? If not, I'd rather use Linux Mint.


The bad decision here is not "Ubuntu's snap obsession" as OP claims, but the decision by the Chromium team to piggyback on a half-baked packaging mechanism, instead of simply distributing .deb packages (like the official Chrome).

Thankfully there are plenty of fixes available. Here's one of them [1]:

  sudo add-apt-repository ppa:saiarcot895/chromium-beta
  sudo apt update
  sudo apt install chromium-browser

[1] https://launchpad.net/~saiarcot895/+archive/ubuntu/chromium-...


> Thankfully there are plenty of fixes available. Here's one of them

Sounds wonderful, along with the accrued online karma points. As great as one person is, they will never be the upstream source.

Here's my ppa also, I have a proven track record of updating browsers with security hotfixes hours before anyone else, but on the downside I really don't secure my system at all and leave my laptop unattended in Starbucks, please subscribe:

    sudo add-apt-repository ppa:TOTALLYNOTMALWARE/chromium-beta
    sudo apt update
    sudo apt install chromium-browser


Of course you should pick the poison that matches your risk level. I agree that installing random ppas brings all sorts of risks, just like any other 3P code you install - pip/gem/npm-installed libraries, chrome extensions, etc. All these are real attack vectors.

My original point was that replacing your OS because you can't install Chromium seems ludicrous to me, when you can easily find alternatives.

Here's a few better options:

1) Use official packages from debian: https://askubuntu.com/a/1206153/161744

2) Use Pop!_OS repositories (assuming you trust the System76 folks):

  sudo add-apt-repository ppa:system76/pop

3) Compile Chromium from source: https://www.chromium.org/developers/how-tos/get-the-code


A PPA is not a replacement, especially for something like a browser that almost everyone needs to have installed.


Since LM is based on Ubuntu, if you can install Chromium there using a deb, the same package/repo should work on Ubuntu.


I'm not saying that you should do this, necessarily, but I just installed regular Chrome from the Google .deb. That also fixed an issue with snap-Chromium not handling web app shortcuts properly.


The only thing I want (not need) that is unavoidably distributed via snap is Canonical's Livepatch. I don't want it enough to keep snapd around but it irritates me that I can't have a useful security feature.


If I may dare stereotype, I think that maybe OP is approaching this from the perspective that operating systems such as macOS or Windows have instilled in us.

I use Ubuntu 20.04 just as I have used 18.04 and 16.04, and the presence or absence of snap doesn't alter the experience for me. GNU/Linux based operating systems will always have a sort of idiosyncrasy inherent to them that stems from the fact that they really aren't built on fads, and fads are always what would detract from them. Hence, Snap should be used by those who like it, and history will tell us whether it was a fad or not.

I think Ubuntu in general tries to appeal to more of the masses and would try to use things to draw people to it. Nothing wrong with that; you could always use Arch.

But please don't regard a feature that really isn't forced on you by GNU/Linux in general as a reason to, eh, be snapped off it. I use Xubuntu 20.04 and have used Snap here and there and for the most part I couldn't be bothered by whether it works well or not. I do think the topic of whether Snap makes sense, whether it's too slow, etc., is a valid topic to discuss, but I don't see how it should be an argument for or against Ubuntu.

In any case, being for or against something, when really it's only your own business whether you use it or not, does always have a sort of political tinge reminiscent of the aforementioned proprietary operating systems.


I don't know why, but Ubuntu (and Canonical) have always been the black sheep of the FOSS "community"...

I 100% agree with you.

Also, people complain on other OSes that they can't remove or replace existing software... Here, they can. But they prefer to move to another distro just because of this. I don't understand this world anymore... :(


The problem is their engineering is not very good. Not horrible, but just not great either. And they almost never respond to criticism, so canonical projects are unable to improve significantly, post debut.

This is ultimately why their projects are doomed to a 95% failure rate.


I wonder if snaps will actually survive. The history of Ubuntu-specific innovations seems to be that they invariably fail. But it's never for lack of trying, so it only makes sense that Ubuntu is pushing this on users hard.

I do run 20.04, and one of the first things I did was disable the snap store and enable flatpaks.


Do you have a link to a tutorial on doing that? If not, no worries, I'm sure I can find one. I was going to install Ubuntu for my Dad as he's tired of Windows, and I seriously don't want to have to do tech support every other week because a third party decided it was upgrade time and he doesn't know why something changed. That's the sort of WTF he's sick of Windows for :( Ubuntu seems to be getting more and more user-hostile as it goes.



Thanks, that's helpful.


Be nice to your Dad and buy him a macbook or Mac Mini


Peak Ubuntu was 18.04 LTS :(

But, maybe it's not a bad thing - consider that this maps closely to the app store experiences on both Windows and OS X today. Perhaps it's a great decision for the future users of Ubuntu but not for the current users of Ubuntu. It could turn out beneficial for both Ubuntu and Debian, as I imagine many will switch over. And as much as I'm married to my current X11, it's pretty cool to see them push Wayland through.

Ubuntu is not for me anymore, but I still appreciate what they have done and are still doing for Linux, regardless of what I think of their product and market decisions.

Looks like Mint/MATE is the new Ubuntu for desktop and Debian has always been for server?


MX Linux is good


Yeah, but given the difference in ideology and stance on most notably systemd, I don't think it can be called "a new Ubuntu" :)


Snaps aren't supported inside Linux containers. Chromium headless is now snap-only, therefore Ubuntu can't be used to run our frontend tests. Switched to Fedora 32 almost immediately. Didn't even bother to search for a workaround.


I LOATHE this direction Canonical is taking; snap is such a steaming pile of shit, and it's being shoved down everyone's throats. It's going to be the thing that makes me abandon Ubuntu. It was a neat idea in concept; it's been horribly executed. If I wanted what you're trying to push, Canonical, I'd just run Qubes OS. Please give snap the quick death it needs and deserves!


Same thing they did with Unity they keep shoving at everyone. I still hate that stuff. I don't need my Linux system to be Mac-like. I'd be using a Mac if I liked Macs.


Unity was actually rather good though. Snap... has a ton of problems that makes it very much not so.


There were several misdesigns in it that they would not allow the option to change. That and other missteps always doom their projects.


The one ray of hope there is that they eventually gave up on Unity. Hopefully they will give up on snaps also...


And upstart. And Mir.


I run ubuntu 20.04 and don't have one single snap installed.

If you can switch distro, surely you have the skill to just ignore snaps and use ubuntu like a regular debian system. It required zero effort on my part at least.


This comment seems very condescending and doesn't bring anything to the conversation. There's many people in this thread stating that:

1. Ubuntu is replacing "standard" apt packages with apt packages that just install the snap version.

2. Many snap packages have bugs that are caused by the snap setup itself.

Thus, it is impossible for these users to "just ignore snaps and use ubuntu like a regular debian system".


Just use ubuntu like you did before snaps. No change to make. It works as before. All the debs are still here. The ppa. The apt.

Let others figure out the snap debacle.


The chromium apt package isn't there, which is one often pointed to example.


Linux Mint 20, which is based on Ubuntu 20.04, ships without any snaps or snapd, and has some kind of mechanism to prevent snapd from being installed by apt. Might be a good option for folks that like Ubuntu but want to avoid the snap fiasco.
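As far as I know, the mechanism Mint uses is a standard apt pin: a preferences file giving snapd a negative priority so apt will never install it. A sketch of what such a file looks like (the file name is what Mint 20 ships; the pin syntax is standard apt_preferences):

```
# /etc/apt/preferences.d/nosnap.pref
Package: snapd
Pin: release a=*
Pin-Priority: -10
```

Deleting the file re-enables snapd installation, so it's a soft block rather than a hard one.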


Pop!_OS also avoids snap, opting for flatpak support instead.


I’m still waiting to experience the issues snaps are even trying to solve. It’s like having a worse experience for reasons I myself don’t even see.

My issues with snaps are part the ugly and clearly “non-native” startup time, and part that many lack in system integration. For example sometimes their font rendering is uglier or the open file dialogs look bad or present me with the confusing “snap worldview”. Bah.

Auto updating? Who cares. Package managers make it super easy to keep systems fresh. I hear many distros even do it themselves, or come with one-click interfaces for it...


The only reason I have been using Ubuntu is because it was the distro with the most installable software available.

I'm OK with default GNOME, I'm past caring about a cool desktop, I just want something that works. On the other hand I need to be able to install a large selection of software easily, and recent-ish versions of it (hence why not Debian). I also can't have things that upgrade themselves on their own schedule (kids in school with capped bandwidth).

Having done it myself in the past, I understand how taxing creating official .deb and .rpm is for the developers, and how difficult dependencies issues are. Given how Ubuntu is digging itself into the snap hole, what's the best solution here?

Why pick Flatpak vs AppImage? Reading about them, they both seem to have pros and cons.

I see a lot of Pop!_OS proponents here, but found that they don't produce that many .deb packages, and the choice will dwindle as Ubuntu moves more and more stuff to snap. It also was a lot of work to un-customize (going back to vanilla GNOME).

I'd honestly be happy with even Fedora, if I could easily install most OSS.


Out of general interest, which kinds of OSS have you come across that aren't packaged for Fedora?

I'm sure there are obviously some; anything proprietary or with patent encumbrance requires using rpmfusion or something, and that may be a step further than enabling multiverse. Some stuff just might not be packaged. Ubuntu and Debian repositories are also exceptionally broad.

But I can't, right off the top of my head, think of that much of OSS that I couldn't install either from the official repos or rpmfusion.

Maybe I've just forgotten about any hoops I had to jump through, but I've been using Fedora for about six years now (after moving from Ubuntu), and I've found that the repos are much richer than I thought they were. But maybe I just haven't run into the holes that you have found.

There are some things I preferred about Ubuntu, but generally things have been working surprisingly well for me.


Last time I checked, the issues were with software using non-free codecs and with smaller, more niche software, but checking the RPM repositories, there isn't any software I'm missing yet.

I'm definitely going to have another look at Fedora and OpenSuse for that matter (specifically their rolling version for small single service servers).

Thanks for your comment.


When Ubuntu killed Mir and Upstart and adopted Wayland/Gnome and systemd, they really invested into a whole competing desktop ecosystem (freedesktop) and it was shortsighted of them to not see that they would not be able to just take bits and pieces. They were never going to compete with flatpak (aka xdg-portals, from freedesktop).

My point is that this has been in the making for years now. They didn't fail to deliver a good product, snaps just should have died when the rest of the Ubuntu differentiation on the desktop did.

It's frustrating because for most people, trying linux on the desktop means trying Ubuntu. Freedesktop has succeeded in making a cohesive desktop environment atop linux that is as polished as commercial offerings, but Ubuntu doesn't make that accessible.

My advice to anyone who hasn't tried linux on the desktop lately: try Fedora or Debian on a live CD. I think you'll be impressed, especially if you're coming from Ubuntu. Unadulterated gnome3 is a breath of fresh air.


Remember that snaps solve the problem of third parties wanting to ship software directly to users.

If you don't want to consume such software, then you don't need to use snaps, and don't need to care that Ubuntu 20.04 supports snaps. The system works perfectly well without them. Snaps aren't being "forced". If you insist on using curl piped to sh to install third party software, you can still do that, or use any other third party mechanism in between.

Too often snap critics conflate the third-party software installation use case with the distribution itself. Ubuntu 20.04 itself is based on debs, not snaps, and the distribution continues to work using apt as always. Claiming that you need to install another distribution to "solve" this nonexistent problem is disingenuous.


Except they’re pushing first-party, sometimes core, software to snaps. TFA lists Ubuntu Software Center as a victim first and foremost, which makes me wonder (guideline-breakingly) if you read TFA.

Now, I don’t use Ubuntu Desktop, but even in server space, lxd (again, first party) has been pushed to snap, with the deb package being a mere shim.


The Snap Store is a Snap, yes. That's surely entirely unsurprising?


It is if you have zero interest in snaps and just want to install good old debs with it.


Why would you use the snap gui to install debs? Use the normal software center thing.


Don't use it then? The apt CLI, GNOME Software and Synaptic are all available as debs.


What do you do if you've been using Chromium as your default browser, though? (…and are concerned about privacy, so you don't want to install Chrome.) Yes, as you say elsewhere, Chromium is not the default browser on Ubuntu. But what do defaults mean, anyway? Ubuntu switches the default music player every other year, too, and at some point people will just stick to the app they like better.

I had always been under the impression that distribution defaults are suggestions for novice users. We've never had a situation before where a distribution like Ubuntu didn't properly support common alternatives to the default app.


That is incorrect. In 20.04, the Chromium package, among others, is a blank .deb file that installs a snap package instead.

You can't choose to install the .deb version instead, unless you add an unofficial repo.

That is absolutely forcing snap on people, since Chromium was a standard .deb package in the previous releases. And it's not the only package where the transition to snap is forced.


Chromium is not packaged using debs in Ubuntu. Firefox is the default supported browser, as it has been for many years, and that works perfectly fine as a deb.

The reason the Chromium deb package pulls in a snap is so that users upgrading from 18.04 or 19.10 continue to get a working Chromium. If you have disabled snaps, then apt will not pull in the Chromium snap.

> That is absolutely forcing snap on people, since Chromium was a standard .deb package in the previous release.

It really isn't. The default web browser works fine and isn't a snap. Chromium has never been installed by default on Ubuntu.


A lot of people prefer using Chromium. In previous versions, it was a standard .deb package.

The subterfuge pulled by Canonical is that it now looks like a standard .deb package, but all it does is pull in the snap package, with all the attendant mounts, unmovable snap folder in $HOME, and other snap-related issues that standard .deb packages don't have.

If you disable snaps, you cannot install Chromium on 20.04, unless you use an outdated third-party repo.


> Ubuntu 20.04 itself is based on debs, not snaps

This just isn't true. It's clear that Ubuntu is moving away from debs and to snaps. LXD is software primarily developed by Canonical and is only distributed as a Snap. Look at Ubuntu Core, the IoT distro by Canonical that doesn't use Debian packages at all; it just uses snaps.


> This just isn't true. It's clear that Ubuntu is moving away from debs and to snaps.

It's clear that Ubuntu is promoting snaps where they make sense. That means for third party software distribution, software that is "rolling release" upstream only, and software whose new versions have to be made available to all supported previous releases at once.

There is no evidence that snaps are being promoted outside these specific use cases, none of which fitted proper deb-based packaging anyway.

> LXD is software primarily developed by Canonical and is only distributed as a Snap.

LXD is the type of thing where users expect the latest version on the oldest still supported LTS release. It's not practical to backport as a deb. That's why it's a snap.

> Look at Ubuntu Core, the IoT distro by Canonical that doesn't use debian packages at all, it just uses snaps.

Well, yes. That's an entirely different platform, for when the system is updated as a read-only image, which makes sense for appliances and which apt cannot support. It's got nothing to do with Ubuntu the general-purpose OS except that snaps are supported on both platforms.


Your argument doesn't hold much water as Debian is able to package the same software in .debs.

If you just look at the list of Canonical-owned snaps, it becomes clear that every release they move more packages from being debs to being snaps. If the snap store were just for third parties, no one would care.


> Too often snap critics conflate the installation of the third party software use case with the distribution itself.

I don't think it's accurate to say that it's the desire to install rolling-released third party software in itself that's the problem. It's the mismatch between the distribution's and the third party software's release cycles that make this a problem, and using a distribution that more closely resembles the software's release cycle does solve this problem without needing Snap.


Good to see people are downvoting instead of replying and explaining the disagreement.

I happen to agree with you completely, and I say that as a 25 year veteran in the Linux world.

The open source world goes through things like this periodically. But then the change passes and we all get used to it and the next controversy comes along.



That's why I'll stick with Fedora.

It's "pure", modern, and I really like the KDE spin.

The non-pure stuff - when needed - is available through rpmfusion (the non free repo).

Sure, I'm kind of a "beta tester" for RedHat Linux, but on the flip side there's nothing commercial in it.


I've been running Ubuntu 20.04 on WSL2 and haven't had any issues with snap being pushed on me—mainly because I don't use the Ubuntu store and have everything installed through apt on the command-line.


If you’ve installed Chromium, lxd, (or in prior releases the gnome calculator), even with apt, you’ve installed snaps. Even without using the snap store. That’s the sneaky part that bothers a lot of people and makes it seem like snaps are forced on users.


In previous releases I wondered why such a small basic program like the Gnome Calculator would take several seconds to start from the time I clicked the icon in the "Activities" screen. Then when the whole 20.04 and forced Snap thing came to my attention I simply uninstalled the snap version and installed the proper version through apt. Now the program pops up instantly when I start it.

This single interaction basically turned me off snaps from then on.
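For anyone wanting to do the same swap, it boils down to two commands (a sketch; on 20.04 itself the deb build of the calculator may no longer be in the default archive, so this applies most cleanly to earlier releases):

```shell
# Remove the snap build of the calculator, then install the deb build via apt.
sudo snap remove gnome-calculator
sudo apt install gnome-calculator
```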


I'm not a fan of Snap, in fact I've complained about it on HN before, but it really isn't that big of a deal. It's a minor annoyance in the grand scheme of things.

At least for me, it would be a spiteful decision to go to macOS of all operating systems, where system package managers don't exist, and the only viable option is a 3rd party Homebrew or Macports repository.


It's true that a built-in package manager doesn't exist for macOS, but Homebrew has managed to become one of the best package managers on any OS.


I have to disagree. Homebrew doesn't even come close to what's been available in Linux for decades. Whenever I have to use macOS I'm perpetually frustrated by the state of package management on the OS.


Homebrew maintainer here. Care to elaborate? We have lots of happy users but it’s equally important for me to learn how Homebrew hurts people.


Don't know why Ubuntu keeps pushing it; especially on the server side this is very undesirable. Also, the extra options of Flatpak and AppImage on the desktop are more than adequate.


They keep pushing it because it makes it much easier for Canonical employees to package stuff like Chromium.


So users are less important than their employees? Doesn't sound like a sound business plan.


Admittedly my source is the Ubuntu podcast, and it would probably take me a while to find the exact quote, but according to the employees, they previously had one or two employees whose full-time job was simply maintaining the Chromium deb package. Switching to snaps made it much easier and freed them up to do other things.

I think we should consider snaps like any other framework, be it deb, flatpak, or even something like Electron: some of them have serious downsides, but people choose them because it allows them to more easily push a single binary out to multiple platforms rather than being bogged down in maintaining a distinct build procedure for Windows/Mac/Deb-based/RPM-based/Arch/Solus/every other percent-of-a-percent Linux distro.

I don't like snaps either--I much prefer flatpaks. But I don't think it's constructive to insult Canonical employees for wanting to make their own jobs easier while they work to provide you a free product.


Users are less important than their employees and paying customers. I've been told that rather clearly by a canonical employee in Ubuntu's issue tracker.


It works well if the developers support it as a first-class delivery mechanism (the Nextcloud snap, for instance, is superb), but there definitely needs to be more fine-grained control, if not over whether to update, then over when to update.


What's the best Ubuntu alternative out there?

I dislike Snap, but I like the stable / out-of-the-box nature of Ubuntu.

I don't like distros where it takes me a week to set things up the way I like.


For work and "power usage", Fedora Workstation has the best "Ubuntu" experience outside of Ubuntu I'd say. You can also go for Fedora Silverblue to get some NixOS-like powers with your Fedora (I expect that to be folded into Workstation eventually).

Notable differences that might influence your decision are:

- RPMs instead of DEBs;

- Flatpaks instead of Snaps;

- Podman and Buildah instead of Docker (although you can get Docker if you really need to use that);

- SELinux enabled by default (some people don't like this for non-server usage, but I dig it);

- firewalld comes enabled by default, which may be annoying and unexpected if you're trying to get some iptables rule to work (I personally always remove firewalld and install ufw for the things I need);

- Fedora 33 (due for release next month) will be switching to btrfs as the default filesystem, whose features are definitely welcome for home usage (but I'll wait a bit before upgrading and see if people run into any issues);

- I'd also highly recommend installing Pop!_OS's Pop Shell [0] to add great tiling support for GNOME, but that goes for anyone using GNOME really

[0] https://github.com/pop-os/shell

As for gaming, I never tried setting up Steam or a ProtonDB game on Fedora, so I can't report on that (I think it would be complicated enough to make me wanna switch distros). But if you'll be doing this a lot, Pop!_OS (Ubuntu-based, snaps disabled) has a great out-of-box experience with Steam, as does Manjaro (Arch-based), which has an excellent hardware detection and driver auto-installer tool called mhwd and makes setting up NVIDIA cards and other finicky hardware a breeze.


> - Flatpaks instead of Snaps;

snapd is packaged in Fedora, it's just not installed by default, nor does Fedora have any shim things to force installation of flatpaks or snaps.

> - Podman and Buildah instead of Docker (although you can get Docker if you really need to use that);

By request from upstream Docker, Inc, Fedora renamed the docker package to "moby-engine". It _does_ get installed if you do "dnf install docker" and provides the docker CLI command and docker daemon service.

> - Fedora 33 (due for release next month) will be switching to btrfs as the default filesystem, whose features are definitely welcome for home usage (but I'll wait a bit before upgrading and see if people run into any issues);

This won't impact upgrades. Fresh installs will get this change, systems upgrading will not (unless you want to reinstall to change to Btrfs).

> - I'd also highly recommend installing Pop!_OS's Pop Shell [0] to add great tiling support for GNOME, but that goes for anyone using GNOME really

pop-shell is available as a COPR for Fedora: https://copr.fedorainfracloud.org/coprs/carlwgeorge/gnome-sh...

> If you like gaming though, I never tried setting up Steam or a ProtonDB game on Fedora to be able to report on that (I think it would be complicated enough to make me wanna switch distros), but if you'll be doing this a lot, Pop!_OS (Ubuntu based, snaps disabled) has a great out-of-box experience with Steam, as does Manjaro (Arch based) which has an excellent hardware detection and driver auto-installer tool called mhwd, and makes setting up NVIDIA cards and other finicky hardware a breeze.

GNOME Software will let you easily install Steam and the NVIDIA driver with a few clicks in Fedora Workstation. It generally works pretty well.


Debian. It’s Ubuntu without the commercial parts and without snap.

Or jump ship for a BSD if your hardware is well supported or this is for servers.


Mint and Pop come to mind if you want to stay in the Ubuntu ecosystem without snaps.

Debian is even more stable and retains apt as a package manager, but you will have older packages for stability and have to customize it a little.

If you can venture out of the Ubuntu space, I have found Manjaro to be an excellent experience in the few months I used it earlier this year. It has very fresh packages since it is Arch-based, but also uses an LTS kernel and has some measures in place on its own repositories for stability's sake. I can't attest to the effectiveness of the latter since I've never had stability issues on Arch, but Manjaro is certainly wonderful out of the box and a very pleasant experience in my opinion.


The author of this post recommends Pop!_OS, but I personally like openSUSE. It's easily configurable both using the GUI and from the CLI.


I really like Fedora honestly - I switched off of Ubuntu a few years ago after the Unity shakedown and have not regretted it one bit. Works perfectly and has very up to date pkgs available in core repos (unlike Ubuntu).


fedora+rpmfusion+flathub

silverblue effectively gives you the ability to have the benefits of a rolling distro and the benefits of a distro that does releases (stability focused).


Pop!_OS and Linux Mint would probably both suit your needs.

They both opt for flatpak instead of snap. Pop prefers apt over flatpak in the graphical app store, I don't use Mint so I don't know how it's done there.


> What's the best Ubuntu alternative out there?

Debian.


I tried.

For some reason I could not run Synaptic.

Removed Debian that very second.


So I asked that question a couple months ago and the response that sounded most like a drop in replacement (and better in a lot of ways) was Pop!_OS.

I haven't had the time to try it out yet ... maybe a good weekend project. I've been using Ubuntu for over a decade now so it might take some time to switch over.


I've been slowly switching over all of my desktops and servers from Ubuntu to Debian.

The only annoying situation I've encountered is having to manually install a non-free network driver, but once that's done I haven't found a single thing I miss from Ubuntu.


I recently moved from Ubuntu 18.04 to Pop!_OS 20.04. So far everything works smoothly, and no snap.


I use macOS pretty much full time now, but my one linux box is still KDE Neon. It’s LTS Ubuntu-based but as far as i know doesn’t push snaps, and just maintains a clean KDE/Qt based environment (if that’s your fancy)


I've enjoyed Pop!_OS


If you dislike Snap but like Ubuntu, you can continue using Ubuntu. Just don't install any snaps! The system is perfectly usable without them - right out of the box. No customization required.


I am Firefox user so I don't miss Chromium. However, occasionally you hit sites that don't work with Firefox. It's rare, but if it is an airline check-in (less relevant these days) or the e-learning platform of my daughter's school (very relevant these days) you don't have any choice. With the market share of Firefox I fear this will become more common :(


And don't install Chromium, since it looks like it's installing the traditional way but, instead, will install the snap.

Is Chromium the only app that works this way or are there a set of applications where the .deb file results in a snap being installed? Right now, Chromium is the only one I've noticed.


I imagine they'll do that for more apps down the line...


The specific reason it's done for Chromium is that Chromium upstream is a rolling release. If you want security updates, you must also accept new features. Sometimes those new features need new versions of build dependencies. This does not fit the traditional distribution (eg. deb) model, since those build dependencies can't be bumped without impacting every other dependent package too.

What has ended up happening so far is that distributions bundle all these things into the deb and ship it and hope for the best. It's very painful from a packaging perspective, and effectively turns the deb into nothing better than a bundled app (like a Snap or Flatpak or AppImage) wrapped in a .deb anyway.

I can see this happening to Firefox in the end. For example, Firefox upstream added a whole Rust toolchain dependency that wasn't packaged in supported distribution releases. I don't see it happening to anything else, since the rest of the distribution upstreams don't do the thing that causes so much deb packaging pain for the browsers.


If snaps are yet another Canonical technology that they want to succeed despite their users' wishes, then forcing Snap versions of most common apps will come sooner or later, regardless of an app's original release or build model.


Kinda hijacking the conversation, but I discovered a great FUSE filesystem to distribute software over the network.

It allows you to have a HUGE amount of software available, with the trade-off that when the files you are asking for are not in cache, they are slow to retrieve (each one needs to be fetched over HTTP).

Would people be interested in such filesystem distributed in the wild?

The /bin directory would be put at the end of your $PATH as a fallback: if you have the local version, great, use the fast one from your SSD; if you don't have that particular software, wait a tiny bit and get it over the network (unless you stored it in cache before).

I would find it useful for development tools like compilers or interpreters, which are always quite a mess to install locally.
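The PATH-fallback part of the idea is easy to sketch; here /mnt/netfs is a hypothetical mount point for such a filesystem:

```shell
# Append the network filesystem's bin directory to the END of $PATH, so any
# locally installed binary always shadows the slow network copy.
export PATH="$PATH:/mnt/netfs/bin"   # /mnt/netfs is an assumed mount point

# Lookups still resolve local tools first; only misses fall through to the
# network directory (and trigger a fetch over HTTP on first use).
command -v sh
```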


Better to make your question an Ask HN, or a blog post describing whatever it is you're talking about and post that on here.


I have a similar outlook. I kept complaining and basically the answer comes down to "if you don't like it, leave" - so I did. I understand it's not appropriate for all use cases but Manjaro has been an absolute dream for me so far.


All these containerization "solutions" are just the fever symptoms of the future shock from the extremely rapid rate of features and improvements in the underlying libraries (glibc, C++, etc.) used by programmers, and programmers' tendency to use those fancy new features ASAP. It makes compiling, or even running, something written today on the dev environment of a 5-year-old Linux distro pretty darn difficult, and it gets worse with time.


Users and groups are not enough to secure Linux, especially on a desktop environment.

Snaps have a permissions system backed by AppArmor and Seccomp that confines the snap to a sandbox with limited privileges based on a security profile.

You can read about it here:

- https://core.docs.ubuntu.com/en/guides/intro/security#headin...

- https://snapcraft.io/docs/interface-management

Flatpak does have a sandbox but in practice, many flatpaks do not use it securely. You can read about it here: https://flatkill.org/

AppImage does not seem to have security as one of its goals.

So, for the time being I'll keep using snaps. They're a great idea :)

So, tl;dr: Snaps are not only about packaging. They confine software to a sandbox with limited privileges.


I rarely find myself fixing things in Ubuntu, though there is a level of acceptance that certain things won't always pan out the way I'd want. I've gotten very good at reading reviews on peripherals I buy to make sure they are compatible. There is the occasional UI glitch as well. I will probably use a different OS once I move laptops in three years (I buy a new laptop once every 5 years).

Regarding macOS: I am using this for the first time at work, and I don't know how you could go from a rich APT ecosystem to a so-so brew ecosystem. And really, fuck the Command key and all the other weird Mac-isms. I don't see the value in that overpriced hipster operating system, but if that's where you're at home, then enjoy your home. Oh, and I'm still trying to figure out if Docker is a comedy or a tragedy on Mac.


I will say that half of these complaints are not valid. For example: Snap store navigation sucks? apt has no navigation at all. Chromium is a snap app? Compile from scratch or add an alternative apt repo.

The only real complaint I see here is that automatic updates don’t allow you to control them. That is a real shame but also isn’t inherent in the design of snaps so could be fixed.

Part of the problem is that making .deb packages is an arcane art. I wish it wasn’t. And dependency hell of “well this version of the distro comes with libxxx 1.1 and the other one has 1.0.7 and now I need to build multiple versions of the package just to make it work for 20.04 and 18.04” does suck.

I do love Debian as a platform and I don’t use snap, but I also use macOS as my desktop OS and guess how Chrome is packaged and installed there?


What I hate about snap is that it hijacks apt installs on Ubuntu. On WSL snap is not supported, and when I tried to apt install Chrome, the install failed because it attempted to install it with snap.

If I had a visual way of installing snaps with the Ubuntu app store, I would use it no problem, but adding 3rd-party repos isn't a viable option for me when I could make do with the default vetted repos. I had to move the CI/CD config where I ran accessibility tests with Chrome's WebDriver to another distro because Ubuntu broke a previously working setup. Building from source would be possible, but it's slow and costly to run for every commit.


Snaps are SLOW. Chromium and several other apps I tried start sloooowly.

It is clearly a NO GO, Canonical. You can't ship apps that take 3-5 seconds to start when, outside of snap (or on Windows), they start in less than a second.

Snapped Chromium takes 3-5 seconds on its first start on a 16 GB RAM, Core i5, SSD-based machine. WHAT?

On the same hardware the .deb Chromium takes maybe half a second to load and be fully responsive.

This occurs with LOTS of software, and yeah, the start time of an app IS a thing.

If something takes more than half a second to start, and you know it can be A LOT faster, you end up pissed off.

What have they done with MY hardware, and why?

If Canonical starts to annoy me with many more packages forced to snap like Chromium, I'll be jumping to whatever distro lets me start MY apps in less than a second, as is usual in 2020.

And if this unfinished, unpolished crap starts to show up in Ubuntu Server in its current sorry state, I cannot express how fast I will be ditching LTS for Debian or anything "not snapped".

You NEED to make this thing start A LOT FASTER,

and stop making "end user assumptions" about what you could mess up behind the scenes in the system (yeah, you could end up stomping on actually useful things, like sleep on notebooks).
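If you want to put numbers on this yourself, a rough way to compare cold and warm starts (assuming the snap exposes a `chromium` command, as the 20.04 snap does):

```shell
# First launch after boot pays the squashfs mount/decompression cost;
# the second run is largely served from the page cache.
time chromium --version
time chromium --version
```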


This seems like a legit problem. Hopefully one that can be easily solved, but thank you for pointing it out.


> apt has no navigation at all

There are multiple GUI frontends for apt. Are we disingenuously pretending this isn't so?

> Compile from scratch

On a lot of laptops this would be 24 hours of straight building, which would have to be redone whenever underlying libraries changed. This would furthermore be non-obvious until, after an update, your browser didn't open. The setup for building it would also be apt (pun intended) to be at least slightly complicated, and errors caused by the lack of, say, the needed dev packages would be non-obvious and confusing to regular users. Pretending this is a reasonable alternative is of dubious value.

The browser is the single most important application on most users' computers. Syncing it, along with the user's profile, settings, bookmarks, and passwords, restores much of what the user expects from their computer. Providing no way to install it except by learning about PPAs and wading through tons of out-of-date PPAs to figure out the correct one that works with your system is non-obvious, non-discoverable, and frustrating. Virtually all new users will end up with the snap version and will think Linux is just slow.

>guess how Chrome is packaged and installed there

Methodologies for packaging have different attributes and downsides. Snap != macOS.


There are multiple third-party GUIs for apt. None of them provide the App Store-type experience, and the ones that try aren't exactly great or easy to use.

I never said to recompile everything, just specific things. You don’t need to rebuild your kernel, libc, compiler, and so on to compile Chromium.

Snap is not the same as macOS apps. But macOS apps are self-contained statically linked apps most of which don’t hook into centralized updating services. Yet macOS is more usable than Ubuntu (I say this as someone who desperately wishes the reverse was true). macOS apps have more in common with snap apps than with dynamically linked .deb packages.


I'm not even sure what you mean by "App Store experience", honestly. The Google Play Store is such a worthless piece of garbage with a garbage search experience that I have to use Google's web search, which actually works, to find software to install via Google Play.

Conversely I have used a variety of app store interfaces on various linux distros that were quite usable and pleasant with search functions that actually worked.

Here for example is a relatively recent video showing installing software via the Linux Mint Software GUI

https://youtu.be/cNt_D2GyApk?t=218


Compiling Chromium from scratch is clearly not a reasonable thing to suggest that most people do.


Right. Most people don’t care how it’s installed or know what apt or snap is. Those that do can find easy work-arounds like compiling from scratch.


The point of distros like Ubuntu is to avoid having to compile major programs from scratch. If I wanted to compile chromium to get bleeding edge support, why even use Ubuntu?


Exactly. If you don’t care about stuff like that, why do you care about .deb vs snap?


IMHO this was the wrong problem for Canonical to focus on (from a user perspective). The latest version is a mutant UI; you never know where the "Confirm" button in any window will be: sometimes it's on the top right, sometimes on the bottom right, and sometimes somewhere random. And it's like that for all of the OS's UI buttons. Even the toolbar apps I was using broke.

The last great (LTS) UI experience, and where I think Ubuntu peaked, was 16.04, but of course it's not practical to keep running that forever. I moved to a Mac last year and it's been okay. I'd love to go back to Ubuntu, but not to the thing it has become. Maybe I'll try another Linux distro, but I also want something stable.


Funny how arbitrary peaks are, for me it was 6.06 — LiveCD, Gnome 2, awesome orange-brown theme and no PulseAudio.


Ubuntu has gone way too far with this. Reminds me of Microsoft in the 90s. It's really, really sad and totally unbelievable. Why on earth would they make such terrible choices? I just don't get it. Linux Mint is an awesome alternative, recommended!


I went from Ubuntu to Fedora because of the recent decisions made by the devs and the hard push of snaps. I use Chromium every once in a while and am amazed at how long it takes to start up; it starts almost instantly on every other distro.


Unrelated to the main thread, but

> "I have a soft spot for it, especially the amazing Unity days."

I found that funny since Unity was why I left Ubuntu (sort of, for Linux Mint) until I discovered i3wm and came back without Unity's stupidity.


The handling of the 20.04 release was enough for me to switch my work machine to Fedora 32. I had been running it on my personal laptop previously, so I already knew what using it was like. Haven't looked back since.


Snap can be disabled in Ubuntu 20.04 LTS. See the following blog post: https://www.kevin-custer.com/blog/disabling-snaps-in-ubuntu-... (discovered this before seeing relevant comment by @guiambros on the 2nd page :-). Relevant HN discussion at the time can be found here: https://news.ycombinator.com/item?id=22972661.
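For reference, the approach in that post amounts to removing the installed snaps, purging snapd, and then pinning snapd with a negative priority so apt can't quietly pull it back in (e.g. via the transitional chromium-browser deb). The pin is a small apt preferences file along these lines (filename is just a convention):

```
# /etc/apt/preferences.d/nosnap.pref
Package: snapd
Pin: release a=*
Pin-Priority: -10
```

After `sudo apt purge snapd`, a pin like that keeps it from coming back during upgrades.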


> Ubuntu 20.04 LTS’ snap obsession has snapped me off of it. I have switched to macOS as my daily driver, for many other reasons. I don’t have the time to keep fixing things on Linux and be constantly finding ways to work around standard OS features that should otherwise just be available and work.

Funny, that's exactly what I thought when I tried a MacBook for the first time. Everything just seemed off about it and I spent 11 months trying to tweak it until it was just right, but never got to that point.

30 minutes on Pop!_OS and I'm good to go.


I have Ubuntu 20.04 LTS installed on WSL2 (I've added and am using a desktop/GUI layer as well), but the first thing I did after installing the original image from MS was to wipe out any trace of Snap after disabling it. All nice and rosy, but since I'm not sure what other shenanigans Ubuntu will come up with in the future, I'll be switching to some other distro. I do not trust Canonical after such a f..up unless they openly revert the decision and get rid of that dreaded Snap themselves.


It seems that the current LTS release needs more bug and glitch fixing; quality is worse than before :-( It's not just snaps: the software center also has its problems (crashing, glitches/lags in listing and installing apps, driver detection problems). I've used Ubuntu for many years, and it's normal that some releases are more polished than others. Overall a great OS, and it's "free". Let's hope they will fix it. I used the error reporting functionality and also donated to Ubuntu.


I thought 20.04 would be the version where I went back to Ubuntu from Mint. Then along came the Snap stuff and, looks like I'll be keeping Mint for a while longer.


They haven't been on my radar since the Amazon Lens integration, but in light of snap, what does Canonical's near-future goal for Ubuntu Desktop look like?

Also, why snaps?


What I hate most about snaps is that they show up when I use "df -hl" ... but I don't use Ubuntu anymore so I don't have that problem.
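For anyone still on Ubuntu and annoyed by the same thing: GNU df can exclude filesystem types, which hides the snap loop mounts from the listing:

```shell
# -x / --exclude-type filters out the read-only squashfs mounts snaps create
df -hl -x squashfs
```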


I agree. This incessant pushing of snap and the closed nature of the snap store are also things I hate. And that practice with Chromium is akin to Microsoft resetting Edge as the default browser every update. And it is indeed wasteful.

I'm sure they'll eventually abandon snap just like they did with all their other attempts at putting their own stamp on Linux (like Upstart and Mir). Just hope it'll be soon.



TFA:

The article has been updated to accommodate changes since the original four months ago.


I honestly don't see the issue here. Snaps may not be perfect, but they are _optional_.

I use them to install Blender and Godot on my Elementary laptop because it's simpler than the other ways, and (even if the isolation is slightly broken in Blender's case, which requires a legacy flag) it is _super convenient_.

But, again, why complain about an optional thing that benefits many people?


It's not optional for all software. Eg: Authy only publishes their electron based app through snap


Quite a lot of software does that. I just avoid it at all costs, and boycott it.

People who keep saying snaps are optional, and so forth are ignoring the bigger picture here. You know, I am just going to use a Linux distribution with saner defaults, where I do not have to disable Snap to begin with.


Has Canonical still said nothing about this?

The conclusion is inescapable: they do not care. And that is why we should all find alternatives.


Sadly some vendors of computer hardware have based their offerings on Ubuntu.

For example, NVidia's Jetson platform is based entirely on Ubuntu 18. You can't run it with other Linux distributions (or you end up with broken stuff left and right).

Therefore, vendors, please (!) build your products around kernel drivers, not around Linux distributions.


Once again, I live mostly outside of the Ubuntu desktop world except for a few servers I have running. Dumb question here, but why 'should' a package management system (whose job, I imagine, is predominantly the installation/update/removal of software) impact the load time of a package once installed?


Snaps are shipped as compressed squashfs images that are dynamically read at runtime rather than being extracted to your filesystem at install time. The trade-off is that you win on hard disk usage and installation / revert time, but the payment is increased startup time.
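You can see this directly on a machine with snaps installed; the images live under /var/lib/snapd/snaps, and each is loop-mounted read-only (output obviously depends on what's installed):

```shell
# The downloaded squashfs images themselves:
ls /var/lib/snapd/snaps/
# Each one loop-mounted read-only under /snap/<name>/<revision>:
mount -t squashfs
```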


My guess would be checking the internet to see if there is an update available before launch.


Forced automatic updates are a dealbreaker for me, as I need an OS to run on a slow satellite connection on a ship with 35 people. I need total control over updates. I guess Ubuntu is designed for an idealised sort of user who does not need total control, and that's OK; no Ubuntu for me.


Can someone offer a good alternative to Ubuntu for a home server?

I tried Debian but was disappointed with the “latest” versions of software I received on stable.

I do like the apt system and systemd and other features of the latest Ubuntu, so something that doesn’t fall too far from that tree would be ideal.

Any suggestions for alternatives?


If you don't want a stable server, use testing.


How stable is testing? I want a stable build with relatively recent packages. I don’t think they’re exclusive.


You can remove existing snaps and disable snaps, because Linux. It's fairly quick and easy:

https://www.kevin-custer.com/blog/disabling-snaps-in-ubuntu-...
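For reference, the procedure generally boils down to something like this (a sketch, not the linked post verbatim; remove any installed snaps first, and note that some packages may try to pull snapd back in):

```shell
snap list                      # see which snaps are installed
sudo snap remove <each-snap>   # remove them one by one
sudo apt purge snapd           # remove the daemon and its data
sudo apt-mark hold snapd       # stop apt from reinstalling it
rm -rf ~/snap                  # clean up the leftover home directory
```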


Snap is just goddamn horrible! It goes against the FOSS philosophy in every way imaginable, seriously.


I tried Ubuntu, again, very recently and had to give up. This was like only 2-3 days of usage. And it is only the stuff I can remember now

I have two monitors, but Ubuntu thought their positions were swapped. I swapped them back in settings and it looked good, so I hit enter to accept the "is it good?" pop-up. Then I noticed my mouse was on the left screen while it registered the clicks on the right. Took me quite a while to revert it back.

Firefox was really unresponsive. Was hanging at times and generally felt slower

This is subjective, but its UX really sucks. I tried to install some apps; there are like 3 different application managers? None of them was pinned.

Can't even install 7z via application managers. There is one application but it didn't work for me. Had to use console.

Can't install unrar by default unless you add some repositories. I guess since it is not free. It is even harder for codecs since they are just binaries, I think? And they come in a package that contains a lot of other stuff, including chromium? Wth

Tried to figure out what my local IP is; couldn't find it on any user interface. Opened a console and typed "ifconfig", and it is not there now. I had to google which command it was

It decided to do some random updates in the background and got stuck, I think. It held the apt-get lock, which was blocking me from installing applications manually

--

In my opinion Ubuntu Desktop is not improving, at least for average Joe (and I am a bit above that). The opposite really.


If you're a tiling wm user, you might want to try out Regolith Linux. It's an i3wm desktop variant of Ubuntu (latest release based on 20.04 LTS), like Pop!_os, and I just checked: I don't have 'snap' installed.


>Users wanting to install Flatpak apps need to revert to using the .deb version. It’s not an ideal solution when previous Ubuntu Software releases could handle all three formats. In all, the latest Ubuntu Software is a step back.

Supporting fewer competing formats (and eventually just one) sounds like progress to me...

>And that’s another place where snaps don’t shine. They are slow. I hate that Chromium’s snap takes more than 10 seconds to load on cold boot on a freaking SSD, whereas .deb and Flatpak apps load in 1-2 seconds. Snaps are simply not fast enough to be default anything yet.

That said, this sucks. How can this be? I thought snap was just some packaging wrapper format (so one wouldn't expect any difference in loading time) -- is it more like ELF instead? Or is it some extra overhead like non-shared libs?


I'm sure Ubuntu considers it progress when they can push more people to use their pet project and its app store nobody can self-host, and give less room to its competitor. But is it progress for the users, for whom access to software not available as a snap and not in the repos became harder? Or for the developers who hoped supporting one of the competing formats would be enough to reach most users, and picked Flatpak for any number of reasons?


>I'm sure Ubuntu considers it progress when they can push more people to use their pet project and its app store nobody can self-host and give less room to its competitor. But is it progress for the users, for whom access to software not available as Snap and not in repos became harder?

The progress is having a single standard -- then all the software will be available for it.

Mixing and matching 2-3 different package/distribution formats to get all of your software, because some is available in one and not the other, is not ideal and doesn't lead to a consistent system.


If there were anything approaching consensus about the "single standard" then maybe, but to me it seems like a good chunk of the ecosystem right now wants Canonical's idea of that to fail, at least in its current form. Instead of attempting to fix the problems and make the user experience more consistent across the different options, they try to force it.


what is the main problem this snap thing is looking to address ?


> what is the main problem this snap thing is looking to address ?

Third party software developers wanting to push their latest software releases directly to users, and for users wanting to install such software.

For users who are happy with the traditional distribution-curated model, getting all "feature" software updates only when doing a distribution release upgrade, snaps are not necessary and you can use Ubuntu 20.04 perfectly fine, as normal, without using snaps.


Thanks, that explains it. Is it okay if such users get rid of all the snap stuff from their Ubuntu 20.04 installation? Would Ubuntu continue to work fine? Browser, terminal, vi, Java, Python, etc., mainly.


Yes - that'll work fine.

As others have pointed out, Chromium is no longer packaged as a deb. You can use Firefox, or get Chromium from another source. An AppImage is available for example. I haven't tried it, but it should be functionally equivalent to using a distribution deb.

Or use Google's Chrome deb, for example.


Upstream moving faster than downstream Linux distributions can possibly keep up with.


The proper solution is for upstream to behave.


If your solution to the issue is that upstream may only depend on 10-year-old APIs/ABIs, then I'll gladly misbehave. Why should I hold back my software for old, but still supported, distributions like RHEL 6?


There's a lot between "depending on 10 year old APIs" and the "we release every 7 days, you can only assume last week's version is supported" that is all the rage these days.


Once people pay me enough to maintain old versions, I'll gladly spend some time on it. Until then I'd rather look forward and use my limited time on things I care about and if a new API allows me to do that quicker, easier or more reliably, then I'm all in.


Yep, all debian and cent now, no more snap craziness for me


I'm not a snap fan either, for the reasons listed in this post-- slow starts and forced autoupdates.

I do use it for LXD, where it's the only way to stay up to date.


So, people are complaining that distributions they can modify include something they can remove and replace with something else, like Flatpak...

Interesting...


In the face of advances in security and usability brought about under iOS/Android/macOS and, yes, Windows S, these complaints are frankly ridiculous. The days of running applications without confinement, with the full permissions of the user, are gone.

Ubuntu might have been able to make this a smoother process, but they lack the literal BILLIONS of dollars that has been spent achieving that on other operating systems.

If you want to be an old greybeard, use apt or build it from source.


Most snaps and flatpaks run with full home access. Snaps only provide sandboxing when run with the Ubuntu AppArmor policy; they offer zero security without special configuration on other distros.

Sandboxing desktop applications on Linux won't happen until distros start shipping strict SELinux policies that properly confine programs the way Android does, along with flatpak/systemd taking allowlisting seriously, combined with stricter seccomp filters.

Please, if you maintain any packages with systemd units, go read this right now and harden them; it should only take a few minutes: https://www.freedesktop.org/software/systemd/man/systemd.exe... Verify them using 'systemd-analyze security $unit'.
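As a concrete starting point, here is one possible hardening drop-in for a hypothetical service (the directives are real systemd options documented on that page, but the unit name and the specific values are illustrative; tune them per service):

```ini
# /etc/systemd/system/myapp.service.d/harden.conf  (hypothetical unit)
[Service]
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
SystemCallFilter=@system-service
```

After a `systemctl daemon-reload` and restart, `systemd-analyze security myapp.service` should report a noticeably lower exposure score.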


> but they lack the literal BILLIONS of dollars that has been spent achieving that on other operating systems.

They could have just stepped aside and used one of the alternatives—priceless.

Not to mention it doesn't cost billions to develop a read-only chroot archive; the building blocks are available to combine. You just have to have the humility to choose the most elegant design, which already exists.


Just use the Red Hat tool chain for everything? How does that end up? Subject to the whims of a few guys. Red Hat itself is better for having to compete.

The use of 'just a chroot' thinking brings all the problems and none of the gains.


I'd been happy with KDE Neon for a while, but since the upgrade to the Ubuntu 20.04 base, snap became so annoying that I switched all of my machines to openSUSE.

Snap is an endless source of frustration. Unless you jump through a bunch of hoops, apps can't access network mounts, you have the obligatory ~/snap folder, startup times are atrocious, and I'm seeing a number of weird glitches especially in Chromium.

Flatpak is marginally better, but has a lot of similar issues.


I haven't dealt with snaps yet. Is there really no way to disable snaps? Not even an obscure one?



You can remove snapd on servers, but on the desktop Gnome itself and a lot of standard desktop apps are snaps, so it becomes much harder to avoid.


I actually like snap.

At the end of the day users like me matter.

I cannot find one decent pomodoro app for Linux. I now have two of those from the snap store.

I could not get that Mycroft AI app to run for some reason. The one in snap actually works.

Yeah, the startup time is noticeable, but they work.

For intermediate users like me, and for noobs, it's awesome.

Edit: also, Purple Task is a good to-do list maker. So at the end of the day, whatever attracts developers to make cool little apps is a win for me.


It's nice to have as an option, but I think the issue is that software that becomes a snap doesn't stay available as a normal package anymore.


Keep the Ubuntu apt package telemetry activated and uninstall Snap. Vote with the uninstall.


Or vote by leaving for Linux Mint or Debian.


They won't be able to measure this.


Switched to Debian because of all this useless Ubuntu crap. Never looked back.


Yes. I'm staying with Ubuntu 18.04 LTS for now.

Maybe Mint next.


Canonical does not work for free. Open source has turned into a marketing term. Canonical made Amazon a default install on Ubuntu and now they are trying to turn the Snap store into something like the Apple or Android store.


That's a no no for Ubuntu, going back to Debian


I just apt remove --purge snapd and life is good


My personal solution: use Kubuntu, ignore the snap stuff, and use Firefox instead of Chrome.


Which distro do you recommend?


- Snap cannot be avoided in Ubuntu. Key parts of the UI are or will be delivered as snaps.

- Snap uses SquashFS, which is optimized for small size, not fast extraction speed.

- Snaps share some functionality via core snaps, but overall, files can be repeated in each snap and include files to run on all platforms, resulting in more space needed on the device.

- The location where snaps live and key folder locations cannot be changed by the user.

- Snap keeps a unique machine id and unique per-snap cookies that are used to collect statistics and identify the same instance, even after removal and reinstall of the snap. The user has no control over that.

- The Snap store collects geo-location data based on IP, plus install and usage information tied to the machine id and cookie ids.

- Snap uses loop devices, a lot of them (all revisions are mounted). UI tools are being patched to hide them, but GNU tools such as df look messy.

- The snap daemon and server are closed source.

- Currently, old revisions are not removed.

- Manual removal does not remove snapshots and config unless --purge is used.

- Cache and config folders as seen by a snap application cannot be changed or pointed to other locations.

- Users can connect or disconnect plugs that are already defined in a snap, but cannot add new ones.

- Snap updates cannot be avoided.

- The system misleads users by making it confusing to know what is a deb and what is a snap.

- Many snaps do not really work. They expect something that is different on my machine. I tried the rclone snap and ended up installing it manually.

- Many snaps have old versions of software; people made them once to have them appear in the store and do not maintain them anymore. Users will be forced to leave behind the security of deb packages in the official repos and install things manually on their own.

- Some snaps, due to their limitations, lose functionality. For example, the GIMP snap cannot start xsane anymore.

- I have no doubt that snaps are checked by the Ubuntu team for security, but as a user I have no way to verify that, and there are many snaps that have both :home and :network interfaces enabled although :network is not needed. Especially snaps from "Snapcrafters" are somehow liberal in the plug combinations they allow.

- As others have noted, snapcraft.io is hard to use to find things.

- The only winner I can think of from the Chromium browser being a snap is Google. We are forced to install Google Chrome directly, as that is the easiest alternative that works. Ubuntu would have done better to not provide it at all than to make it a snap.

- I tried the Chromium snap and my GNOME mouse cursor does not work there. I read in some Ubuntu snap forum entry that fixing that (using user themes) is against the idea of snaps running the same on all systems. I do not care about that; I only care that they respect my UI theme.

- The snap sandbox is not a complete sandbox. Snaps still see X Window data, and other variables and memory. The main security behind snaps does not come from the sandbox, but from review by the Ubuntu security team -- a non-transparent process done by a few people.

- All that said, if I want to run or test some app quickly and not mess up the system, snap is a choice. But forcing it on users without a choice increases the risks users have to take to figure out alternative installs for all the cases where snaps do not work for their needs.


You need to add two newlines to each bullet point or they combine to a single paragraph.


> RMS ousted from the FSF

Wtf. Looks like I must have been living under a rock! What happened?


He was politically outspoken, like he has always been. But being outspoken is not acceptable in modern times.

He said it was likely that some women of Epstein's presented as entirely willing to Marvin Minsky. This was twisted by the press into him saying they were entirely willing.

For this he was ousted.


also for the bit on the CSAIL email discussion where he objected to the use of the term "sexual assault" when referring to statutory rape, the other bit on the CSAIL email discussion where he was questioning whether statutory rape actually counts as rape, and mostly for the new publicity that everything else objectionable he's been doing and saying for the last several decades received (e.g. his stated views that laws against child pornography are censorship, that pedophilia, necrophilia, incest are actually fine (IIRC he revised his view on pedophilia a few days prior to resigning), his MIT office sign reading "knight for justice and hot ladies").


> he objected to the use of the term "sexual assault" when referring to statutory rape

As someone who has ever been a teenager at any point in my life, I second this objection. There's plenty of overlap, but those aren't the same thing.


Statutory rape is not the same thing as rape, that's the whole point of having a separate term for it.

If you look outside the Anglo bubble, you'll discover that sexual relations between teenagers of various ages are not uncommon at all.


right. It blows my mind that anyone who considers themselves intelligent would argue that statutory rape is rape.

Statutory rape LITERALLY means "consensual sex with someone below the age of consent". Because if the sex was non-consensual, it's called rape regardless of age.

RMS's biggest problem is that he was technically correct, but surrounded by a society of idiots.


Your idea of statutory rape not being rape fails at the definition you specified for the former term. Someone below the age of consent has been deemed by the law to be unable to make an informed decision about their body - which is true in all cases; in cases where that may not necessarily be true, the child would be smart enough to realize the act they're engaging in is unhealthy if not for their development then for their partner. Sexual relations with someone who cannot consent is rape. This doesn't include the fact that a large age discrepancy between two parties engaging in sexual relations will generally lead to an inequality in power, and sexual relations between two unequal parties in that way may not be rape but certainly isn't "good sex".

Many states have "Romeo and Juliet" laws, which decriminalize sexual relations between people under the age of consent as long as they're similarly aged, solving the biggest issue most people have with the idea of statutory rape ("what if children rape each other?").

I use the term "sexual relations" because the act of sex necessitates consent. And although sexual relations with children may be uncommon outside of the "Anglo bubble", they still harm the child whether via physical or mental trauma.


Let's be clear here. When you say "child", you're referring to teenagers, not prepubescent children.

The question is whether or not a 15 or 16 year old can consent to non-harmful sex, and the answer is obviously yes. At that point they are sexual creatures with their own urges. Two 16-year-olds having sex is not harmful in any meaningful way; many, many people start having sex at 16 (or younger) and go on to be just fine.

At this point, as far as I'm concerned, it's been clearly established that young people under the age of consent _CAN_ actually consent to sex.

Statutory Rape is not about consent, it's about manipulation. Due to the differences in life experience between a 16 year old and a 20 year old, the 20 year old can manipulate the 16 year old to give that consent. This does not imply that the sex between them is implicitly harmful to the 16 year old, just that it's immoral for a 20 year old to do this sort of manipulation.

It's also clear that a 20 year old can rape a 16 year old. Actually rape. And they'll be charged with rape, regardless of the age of consent. This is because, by definition, with statutory rape the 16 year old DID consent.

And one last piece of evidence to show clearly that you are wrong here.

It's possible for two 25-year-olds to have sex and statutory rape charges to be brought. How? Because one of them is mentally handicapped.

Because Statutory Rape is not specifically about the age of consent, or giving consent. It's about the coercion of someone who is not considered mentally capable of protecting themselves from said coercion. It's about the morality, not about any sort of inherent harm of the sex itself.

And to head off one argument that I KNOW is coming. The age of consent in Japan is 13, pointing out that the age of consent is 16 in many places in western civilization is not meaningful or useful here. It doesn't change the ideas that I've presented in this post.


Statutory rape is only semantically different from rape; legally, they are the same thing and they should be because having sex with children is gross and weird and I need a shower after seeing this thread.


Isn't it insane to be ousted for stating a probably true idea?

Or am I just missing something?


Yeah I think it wasn't just that. He's said a lot of crazy things over the years so nobody could really defend him by saying "he didn't exactly say that" because the response was always "ok fine but what about all these other creepy things he's definitely said?". Basically the final straw.


These were all things he SAID and they weren't hateful or prejudiced?

They were just weird?

He never actually did anything and he never actually said anything hateful?

I don't know it seems like a case of talent getting beat out by politics.

I could be wrong I don't know much about it but that's what it seems like on the surface.


[flagged]


[flagged]


I didn't emit any remark about the obvious rampant misogyny in tech. You're jumping to completely unwarranted conclusions.


I think starting off your comment the way you did made the conclusion fairly natural.


I think that "being offended" isn't a serious matter, or an actual political stance. That's the crux of my distaste of SJW, intersectionality, and victimhood culture. One of my "offensive opinions", if you wish.

That doesn't mean that I think that sexism, racism, homophobia, etc aren't serious matters. On the contrary.


Basically, he made some poor decisions on various things. Not only sexual harassment, but also defending others who are accused of some not very nice things and has a history of misogynistic comments and behaviors. Here's an article giving more detail on a few of the issues surrounding the resignation so you can draw your own conclusions:

https://www.zdnet.com/article/richard-m-stallman-resigns-fro...


Once someone has been condemned by the crowd (in this case, Minsky), you must cry with the pack (even in private or semi-public conversations) or share their fate. You have absolutely no right to ponder, ask for more evidence or whatever. Obey the Party Line or be sentenced.

Then they basically admit in the article that truth or justice carry no weight whatsoever, but that "good PR" and social media trends are what count.

Revolting. It's disgusting beyond any measure.


RMS has been doing things that people in the open source community disagreed with for a long time. His latest offence also had the effect of bringing all/many of those previous slights to the attention of the "crowd" (because social media), and that amplified the message of criticism against him.


I think that his "latest offences" were of a particularly laden political tone (or could be painted as such), while eating stuff from his toenails publicly and other eccentricities passed as innocuous, though actually annoying the hell out of corporate types and other people interested in power, money, shiny PR and other matters that RMS always more or less considered with relative contempt.

It's significant that you talk of "the open source community". I'm not of "the open source community" (less than ever, in fact), I'm firmly in the "Free software community" and an FSF member for 20 years. I don't remember ever having a disagreement with RMS on any important matter.


I'm afraid your opinion of RMS is not universally shared, as evidenced by FSF's latest membership drive.

Despite his numerous and off-putting shortcomings, many people (incl. myself) still believe RMS to be the best person for leading the free software community. His unyielding stance on how software should be used and shared is exactly what the FSF needs in its leadership. All the issues you and others list to justify RMS' dismissal are orthogonal to this task. Yes, this also includes the toe thing.

I have personally stopped supporting the FSF because of this incident after about a decade of support, and I no longer include the "or-later" clause in my GPL software.


> All the issues you and others list to justify RMS' dismissal are orthogonal to this task.

Where did you get the impression that wazoox considered RMS's dismissal in any way justified or justifiable? The comment you're responding to literally says:

> > I don't remember ever having a disagreement with RMS on any important matter.

Also:

> and I no longer include the "or-later" clause in my GPL software.

I'm embarrassed to admit that this problem didn't occur to me until you pointed it out; thank you; I need to go fix that for my own software.


Doing things? Do you have any examples? My impression at the time that he resigned was that it was because the crowd didn't like the things he was saying, not that he'd actually done anything himself.


I intentionally left examples out because writing something that would be fair towards RMS is hard (and I don't want to spend the time to do it). It's also been some time since I read about this and I haven't saved the location of those sources.


He was forced to resign for expressing his opinions and not for anything he did himself.


That's not true, it was more of a "straw breaking the camel's back" situation, and he has in the past personally acted in inappropriate ways for many many years.

https://medium.com/@selamjie/remove-richard-stallman-appendi...


All these are just allegations. If his alleged misbehavior is as frequent as it is being framed, then it should be relatively easy to provide some evidence. The man was filmed eating stuff off his toes ffs!


I saw some tweets from some women AI researchers at MIT shortly after RMS was ousted. They all had vivid tellings of instances where RMS would make lewd comments about their bodies or look at them weird, and how a lot of the women researchers would actively avoid him.

Tbh I think some of the memes about him and his sexuality might be valid. If that's the case, then frankly he should have been cut loose a long time ago.


I'd say it was all stand-in charges under the pressurized internet of today.


[flagged]


The problem is that judgment of the crowd will throw anyone under the train at any time and for any reason. We know of many a proud SJW that made a misstep at some point and was hunted down all the same by their previous friends, because you can be perfect only to a point; first you'd laugh and feel some schadenfreude, but...

There are actual reasons why we instituted things such as a judicial system, principles like "you'll be deemed innocent until proven guilty", etc.


That article you linked is garbage. The article title says "RMS resigns... after defending Jeffrey Epstein behavior", which is an outright lie since RMS never -- not even once -- defended Epstein. You're free to dislike the man, but please stop posting slanderous nonsense. If you want a facts-only take of things, see this: https://itsfoss.com/richard-stallman-controversy/



This whole thing smells like a power game to me. I don't know exactly who or what the real power behind this stuff is, but I sure get the feeling that if you present a threat to their worldview, they'll destroy you by trying to associate you with any bad stuff and painting anything you ever said in the worst possible light. Toe the line, and your every misdeed will be ignored.

I'm not entirely a fan of how far RMS goes on some tech issues, but this all just feels dirty and wrong.


He has been accused of sexual harassment by someone. In the usual "cancel culture" fashion, you're deemed guilty until irrevocably proved innocent, therefore RMS has been ousted from the FSF and the MIT.

A proof if you needed one that the "SJW" or "baizuo" culture isn't at all progressive, but a dangerous intolerant bunch more akin to the Khmer Rouge than anything else.


Sorry, but there's a lot more to it than a simple one-off sexual harassment accusation.


Can you elaborate? At least the ZDNet article you posted doesn't exactly corroborate your statement. (Not taking a stance for/against RMS here – I still don't really understand what happened.)


There were some Tweets by some women AI researchers at MIT about RMS's lewd behavior towards them. The kind of behavior that would have gotten a lesser man burned at the stake (metaphorically, of course).


See the daring fireball link up thread.


There is more to it but it is also a "cancel culture" moment if there ever was one. I think history will look back on his ousting as part of a mass hysteria, but I don't agree with OP about likening it to the Khmer Rouge (messed up, that)


[flagged]


I support you speaking your mind, but don't agree with it. Carry on.


AFAIK the subject was discussing Marvin Minsky's acts related to the Epstein affair, on some MIT mailing list (semi-public). RMS was having a discussion about someone he personally knows. If I personally know someone accused of something, and I think it doesn't fit well with the person as I know them, I may doubt, wonder, ponder, etc. In fact, it seems an obvious, natural thing to do in such a case.

But for some reason expressing doubts about Minsky's culpability was this particular day akin to apology of rape or child exploitation.

OTOH Epstein has been obviously assassinated in weird circumstances that scream "many very important people wanted this guy dead" but apparently this is no big deal. Go figure.

Oh, I didn't mention the generally inappropriate behaviour of RMS. He's dirty and so on. Then there's his behaviour with women, so let's see what Gruber said: he basically went up to women and said silly shit such as "go out with me or I'll kill myself". That's ridiculous, that's annoying, and it makes the person on the receiving end uneasy, ashamed and offended. But that doesn't qualify as "aggression" or "harassment"; it's stupid and lame, awkward and tasteless. However, I maintain that hurting someone's feelings shouldn't be prosecuted. Ever. As long as no insults were proffered, that's generally what the law states in civilized countries, too.

Asking someone out in an awkward way once doesn't qualify as harassment. Staring intently at someone doesn't, either. Making someone uneasy still isn't harassment. And this seems to be all there is against RMS, which is, admit it, really not much.


[flagged]


I'm espousing all sorts of offensive views. I'm for the end of capitalism, for instance. As for self-reflection, I've been accused of everything, for instance of being an antisemite because I espouse the offensive opinion that Israel is an apartheid state, and that BDS is a fine idea I support. I concluded with George Orwell and Noam Chomsky that "Goebbels was in favor of free speech for views he liked. So was Stalin. If you're really in favor of free speech, then you're in favor of freedom of speech for precisely the views you despise. Otherwise, you're not in favor of free speech."

Therefore I conclude that it's of the utmost importance to be able to express despicable, obnoxious, offensive views. That doesn't mean that I condone all of these views. OTOH, all really important views are necessarily offensive to someone, else they're probably benign and of little significance, if any.

I understand that RMS is probably an obnoxious jerk. However, it's in a large part because he is an obnoxious, insensitive jerk that he's been so adamant on his principles, and I deem this is a key reason why he had a significant influence in technology and on the world at large.


The groupthink in current mainstream media wouldn't allow it, so they launched a smear campaign to get him kicked out, and the board that controls the FSF probably convinced him to move on. It's not okay to speak against the radical left these days; if you do, you're a nazi gun-toting alt-right person. Whereas I'm actually none of the above, and a moderate Democrat.


Character assassination happened: https://sterling-archermedes.github.io/



John Gruber just rubs me so completely the wrong way now. I'm not really sure why but he seems incredibly petty and nit-picky in the past several years and I actively avoid reading things he's writing. It's a stark change from someone whom I used to read religiously and strongly considered his every word, especially on Apple-related topics.

In part of this[1] he's nit-picking RMS for choosing to spend time laying out his personal preferences in a rider for speeches, and none of them are really outlandish beyond what you'd expect from a privacy-focused, free-speech, free-software person. I don't choose to live my life as RMS does, but I won't begrudge him his choices.

The worst thing he could pick on is "I don't want breakfast, please don't ask."

He just comes across to me as petty and sanctimonious, in this, and in general recently. I'm not sure which one of us has changed.

Also some of his choices are quite humble and heartening and make me like RMS quite a bit more. "I don't like hotels, I would prefer to stay in someone's home, even if I'm sleeping on a couch, so I can socialize with them." He seems like a nice enough guy.

[1] https://daringfireball.net/linked/2011/10/26/rms


We detached this subthread from https://news.ycombinator.com/item?id=24383640.


Linux is only useful if your time has no value. I use it on my server, but that's about it. I couldn't use it on the desktop. It's just as bad now as it was back in 2007 when I first tried it. In fact, I think it's actually gotten worse in some aspects; it's definitely slower and more bloated now for sure.


It's a "I can remove it but instead I'll just switch distros" episode.


This reminds me of the whole systemd debacle - hundreds of Linux curmudgeons getting unreasonably angry about an improvement to their distribution of choice, just because it's different to what they're used to.

The great thing about free and open source software is, if a distribution or package maintainer does something you don't like, you don't have to use it. Simply modify to suit your desires, and enjoy.

But I guess it's easier to just complain loudly and with an inflated sense of entitlement, despite not having put in any work whatsoever.


I disagree.

Systemd did things differently, sometimes annoyingly, but the intention was still to allow you to control your own system. It also ate other projects (e.g. udev) and sprouted features (resolved, timedated, etc.) making it difficult to untangle, but it was/is still possible to do so.

The biggest complaints I see about snap as currently implemented are related to making arbitrary outbound network connections, automatic updates that you can't disable, and inability to mirror or vendor snaps or an entire archive / repo.

These don't seem similar to me at all. Systemd is opinionated, perhaps in a non-unix-philosophy way, but still preserves users' freedom. Snapd does not.

If you have a system with snapd, your system is doing things that you can't disable without replacing the system wholesale. The Microsoft-levels of telemetry and lack of control seem to me clearly worse in every way than anything that was ever wrong with systemd.

I don't actually believe that snapd is irredeemable, and think that these things will eventually be fixed, either as they progress on their roadmap, or as a response to user outrage. But I don't really understand how it got shipped in such a state. In particular, the non-disableable silent updates seem to me like a complete non-starter for a server operating system. How are you supposed to schedule maintenance? What were they thinking?
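For what it's worth, snapd does expose a couple of knobs for scheduling refreshes, though not for disabling them outright. A sketch, assuming a snapd version that honors the `refresh.timer` and `refresh.hold` system options (the exact hold limits are an assumption about snapd's behavior; it has historically capped holds at roughly 60 days):

```shell
# Constrain automatic refreshes to a maintenance window
# (here: the last Friday of the month, 23:00-01:00):
sudo snap set system refresh.timer=fri5,23:00-01:00

# Postpone refreshes by setting a hold timestamp ~30 days out
# (snapd has historically ignored holds much further than ~60 days):
sudo snap set system refresh.hold="$(date --date='+30 days' +%Y-%m-%dT%H:%M:%S%:z)"

# Inspect when the next refresh is scheduled:
snap refresh --time
```

Note that both options defer updates rather than disable them, which is exactly the complaint above: there is no supported "never" setting.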


If your software auto-updates, then you no longer own your device. Anti-features, spying can be pushed onto it from above and you have no choice but to accept it.

I like auto-updates. I almost always turn them on. But being able to turn them off is an important bargaining chip, to pressure devs to behave. I'm not excited about giving that up.


I make sure to turn OFF auto-updates on Android... but I use automatic updates on Linux.


Stop bitching and moaning and compile it yourself. ./configure && make is pretty easy, and it's pretty much the way things were done up until the mid-2000s.
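For anyone who hasn't done it in a while, the classic from-source flow still works. A sketch using a hypothetical foo-1.0 tarball (the package name and version are placeholders), installing into $HOME so no root is needed:

```shell
# Unpack the source tarball (already downloaded; name is a placeholder):
tar xf foo-1.0.tar.gz
cd foo-1.0

# Configure for a per-user prefix so 'make install' needs no sudo:
./configure --prefix="$HOME/.local"

# Build in parallel, then install under the chosen prefix:
make -j"$(nproc)"
make install

# Make sure the per-user bin directory is on PATH:
export PATH="$HOME/.local/bin:$PATH"
```

The upside is exactly what the grandparent comment describes: the binary is yours, configured your way, and it only changes when you change it.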


A counterpoint to the prevailing sentiment here.

I love Ubuntu and haven't had any problems with Snaps via the software center in Ubuntu.

I use apt at the command line so I may not be getting the brunt of snaps problems.

But using Sublime, Chromium, Spotify, Intellij, has been pretty flawless for me so far.

Maybe I'm not a snap power user.

Ubuntu desktop is wonderful. Not as good as Mac imo, but infinitely better than Windows.

Just wanted to stick up for Ubuntu a little.

I think it's great there's finally an open source Desktop OS that is as nice to use as the big two and I hope they continue their great work.



