Improving Firefox stability on Linux (hacks.mozilla.org)
631 points by TangerineDream on May 19, 2021 | 229 comments



I noticed that the two bugs they linked [1] [2] are both due to the distros introducing bugs by applying patches -- one for Hurd support (!) -- that were not shipped by the upstream projects that the distros were repackaging. As we have discussed earlier [3], I think the way distros inject themselves into this software development process produces these bad outcomes by getting the incentives wrong.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1679430

[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1633467

[3] https://news.ycombinator.com/item?id=26216917


Arch Linux does not do this, instead preferring to upstream patches [0]:

> Arch Linux defines simplicity as without unnecessary additions or modifications. It ships software as released by the original developers (upstream) with minimal distribution-specific (downstream) changes: patches not accepted by upstream are avoided, and Arch's downstream patches consist almost entirely of backported bug fixes that are obsoleted by the project's next release.

It's one of the core things that has kept me on Arch as a daily driver, even long after I've lost the urge to endlessly tweak my system configuration. I can trust that the software I use is simply vanilla upstream software with little or no modifications, and that's a great advantage when it comes to filing upstream bug reports and working on patches. In addition, it means the Arch Wiki is fairly general in its applicability, and effort spent documenting software for Arch can apply equally well to, for example, Void Linux (which also has this "vanilla software" philosophy).

[0]: https://wiki.archlinux.org/title/Arch_Linux


As a former Debian enthusiast (I even helped staff a Debian booth at a conference once!) who also tends to be conservative with new technology and who is consequently also skeptical of all these random Linux distros, I eventually tried out Arch and found it ... super awesome, I really love it!

I highly recommend it to anyone else like me, who is generally cranky about new things. They did a really great job with Arch. This policy you mention is a great example of what I like about it.


I've been using Arch for a few years as a daily driver, and I mostly like it, but I would be lying if I said it doesn't break at random intervals during system updates. Nothing unrecoverable, but it's definitely a thing that comes with frequent upgrades.


I use Debian testing as my daily driver at home and at work, and have found it to be plenty stable over the decade or so that I've been using it.

I've also been using Arch on a "for messing around; I don't care if it breaks" laptop for about two years, and haven't had any major breakages.

For me, the most noticeable difference has been that Debian will install new kernels as new packages, will suggest removing old kernel packages, but won't suggest removing the currently booted kernel. I like this behavior. By default, Arch will swap your currently booted kernel and modules out from under you. You don't necessarily need to reboot right away, but if you don't, you can run into issues loading modules or recovering from hibernation.

You can work around this once you realize what's happening, but it seems like a needlessly dangerous default.
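One common mitigation, roughly (this assumes GRUB; adjust for your bootloader), is to keep the LTS kernel installed as a fallback, so a mid-session upgrade of `linux` still leaves a bootable, module-complete kernel around:

    sudo pacman -S linux-lts linux-lts-headers
    sudo grub-mkconfig -o /boot/grub/grub.cfg   # regenerate boot entries so the LTS kernel shows up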


I'm using Sid as a daily driver on all my dev machines[1]. I run apt dist-upgrade twice a week too, and I back up to an external drive once a week. I've never once been in a situation (since ca. mid 2000) where I needed to restore from backup or boot into single-user mode.

My favorite documentation is the Arch wiki, and there has never been an issue due to it not being compatible with the "Debian way".

When upstream devs package for a distribution they usually put some effort into it. Just because a package gets published to Sid/unstable doesn't mean the package is unstable. There are some devs, mostly working on higher-layer GUI and user-land applications, who are perhaps inexperienced or reckless and do sometimes introduce breaking changes. It's rare though, since most people understand that packaging for a distro means a potentially huge number of issues if they're not careful.

My 70+ year old aunt has also been running sid for a couple of years now with unattended upgrades, since I installed it for her (and she has KDE). She has never had issues with stability or things not working, and she uses her computer daily for writing (LibreOffice), printing and research (probably reddit idk, I didn't ask :)).

[1] My install is fairly minimal: not much GUI, and server systems etc. are started with Docker, so hardly anything that could cause issues during an upgrade (e.g. due to config file changes) and jeopardize the things I work on actually runs on bare metal. And I don't have a huge DE like GNOME/KDE. My sway/wlroots and i3, and tooling like x-terminal/wofi/Waybar/etc., are always compiled from git/source. All my dev tooling is the latest, and I can still install older versions of clang/gcc or other environments with my package manager if I have to.


See https://bugs.archlinux.org/task/16702 for a discussion about versioned kernel installs in Arch.


What part of it breaks for you? It's not that I don't believe you, and I've seen this sentiment before, but I've used Arch as my daily driver for about five years, and can't recall a single time where it broke from a system update.


I use arch on all my machines and there are really a lot of ways it has broken.

- Arch will release a new version of Xorg before the graphics card vendors release new drivers that are compatible with it. Same for kernel versions too.

- I've had gedit start crashing in new minor versions of gnome due to a setting being incompatible, needing to track that down and unset

- If you have python virtualenvs for development and system python is upgraded to a new major version, all your virtualenvs break

- If arch upgrades the major version of glibc during some random package install (rather than a system upgrade), and you don't upgrade the whole system at once, every app that didn't get upgraded will fail to start due to soname mismatches... and that can mean that pacman, sudo, etc are all busted (this has actually happened to me)

- If you do a full system upgrade and have AUR stuff installed, you need to be sure to upgrade the AUR stuff otherwise it could break due to being incompatible with any library that was upgraded.

- In general (not specific to arch linux), new versions of software break stuff all the time. You tend to hit way more of this on arch if you keep your system up to date.


I'm sure you know all this, but just to give other people some context on how Arch expects one as a user to handle this:

> - Arch will release a new version of Xorg before the graphics card vendors release new drivers that are compatible with it. Same for kernel versions too.

You accidentally put a plural in "vendors", but in practice this is just Nvidia.

If you are forced by circumstance to deal with an Nvidia card, use the LTS kernel and most of your problems go away, and you can also pin your X.org version and manually update it. The real solution is to use a GPU vendor that has mainline drivers, the only valid reasons nowadays not to do that being CUDA or being unable to acquire hardware with mainline support.

> - I've had gedit start crashing in new minor versions of gnome due to a setting being incompatible, needing to track that down and unset

That's a gedit bug. Complain to gedit devs or stop using it.

> - If you have python virtualenvs for development and system python is upgraded to a new major version, all your virtualenvs break

You should not be using system python for non-system tasks. Use asdf, Nix, Docker, pyenv or a similar tool for projects requiring their own non-system environment.
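For example, with pyenv (one of the tools mentioned above; the version number is just illustrative), a project gets its own interpreter that a system python upgrade can't pull out from under it:

    pyenv install 3.9.5       # build a private interpreter
    pyenv local 3.9.5         # writes .python-version in the project dir
    python -m venv .venv      # the venv is now tied to the pinned interpreter, not /usr/bin/python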

> - If arch upgrades the major version of[...]

Partial upgrades are explicitly unsupported by the distro. Pacman allows you to do this, but you're on your own.
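Concretely, the difference looks roughly like this ("some-package" is just a placeholder):

    sudo pacman -Syu                # supported: refresh the package databases and upgrade everything together
    sudo pacman -Sy some-package    # unsupported partial upgrade: new databases, but only one package moves,
                                    # so it can land on top of older libraries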

> - If you do a full system upgrade and have AUR stuff installed

AUR is not Arch, it's up to each AUR maintainer to keep their scripts up to date and you as a user to keep up with those external dependencies. A common approach is to use an AUR helper to handle both system and AUR upgrades.
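For example, with paru (yay behaves much the same), one command handles both:

    paru -Syu    # upgrades official packages and rebuilds out-of-date AUR packages in one go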

> - In general (not specific to arch linux), new versions of software break stuff all the time. You tend to hit way more of this on arch if you keep your system up to date.

And this is why as an Arch user you will be nudged by experience to use software that gives a crap about quality.


Yes, all of this is definitely true and I'm not actually complaining about it, I have used arch on all my machines for 7 or 8 years and don't plan to change that anytime soon, but these problems are actually unique to using arch and other rolling distros. Ubuntu, Fedora, etc have coordinated releases with some level of testing that things are compatible with each other. On arch, these are problems one must deal with though. I don't mind dealing with it, I live for Linux stuff, but the point is that it does happen to real users and it's not for everyone.

> Partial upgrades are explicitly unsupported by the distro. Pacman allows you to do this, but you're on your own.

Definitely correct, the issue is when you're trying to install a package to get something done, and suddenly you're faced with the proposition of either upgrading your whole system while trying to get work done or attempting to upgrade just the required libraries. The latter often carries less risk, but occasionally is very problematic.

> And this is why as an Arch user you will be nudged by experience to use software that gives a crap about quality.

The issue is that this is true of so many very major pieces of linux software. Upstream vendors don't necessarily test their stuff in a ton of contexts, plenty of bugs occur in the kernel/xorg/wayland/gnome/glibc/python/etc. All of these are very mainstream projects that are difficult to avoid.


The problem with rolling release distros like Arch is that I don't actually care about running the latest version of all my software. For most of the software on my system, I only care that it works. The few things I do want on the bleeding edge, I'll just build myself.


Arch isn't bleeding edge, it's latest stable. If the software you get isn't stable, that's on the software authors you've chosen to rely on, not Arch, not rolling releases.

I found more problems using e.g. Debian stable. Upgrades were a total black box and anything could break spectacularly. This was most commonly caused by mixing up-to-date software you build yourself with the stable repos. On Arch, on the other hand, updates happen daily, and for me, unattended. If I ever do find issues (rare to never), backups fix it.


> Arch isn't bleeding edge, it's latest stable.

For any definition of stable.


That said, the ABS tree and the AUR in Arch, combined with the general avoidance of patches, do make Arch one of the easiest places to play with bleeding edge software. With the toolchain in place, so many packages are just a one-line version number change away from getting any particular version building. Whereas building packages from source on Ubuntu/Fedora/etc. can be an uphill battle depending on how entrenched the distro's patches and inter-version dependencies are. (Though most of the time building things via make and sticking them in some directory is acceptable, Arch really does make building packages easier than other distros.)


>Definitely correct, the issue is when you're trying to install a package to get something done, and suddenly you're faced with the proposition of either upgrading your whole system while trying to get work done or attempting to upgrade just the required libraries. The latter often carries less risk, but occasionally is very problematic.

I may be wrong, as my default is to -Syu any package I install, but doesn't -S install the version of the package compatible with the last time you updated your repos? It's -Sy that can cause problems.


Package versions don't remain in the repo for long so if you don't update the package index you'll find that many packages are not installable. There isn't really a snapshot of compatible versions at a given point in time as far as I know, though this might work for a couple of weeks.


> of your problems go away, and you can also pin your X.org version and manually update

> Partial upgrades are explicitly unsupported by the distro.

The problem is obvious. Sometimes you might have to pin a package version because the newer version does not work with your hardware driver, as you said, but then again Arch calls this unsupported.

I had the same problem in the past, not with Nvidia but with DisplayLink. For that reason I always leave a USB stick with a bootable Arch ISO in my Thinkpad, so that I can arch-chroot and repair the system. It happens rarely, but it happens to me and others as well.
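The repair itself goes roughly like this (device names are just placeholders for your own layout):

    mount /dev/sdXn /mnt          # root filesystem
    mount /dev/sdX1 /mnt/boot     # boot/EFI partition, if separate
    arch-chroot /mnt
    pacman -Syu                   # finish or roll forward the broken upgrade, then exit and reboot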


Thanks.

> - Arch will release a new version of Xorg before the graphics card vendors release new drivers that are compatible with it. Same for kernel versions too.

I imagine this is mainly a problem if you have an Nvidia card (which would explain why I haven't had the problem).

> - In general (not specific to arch linux), new versions of software break stuff all the time. You tend to hit way more of this on arch if you keep your system up to date.

More often, sure. But for software delivery from the production side, the common viewpoint seems to be that deploying more often leads to less pain in total, because you're making smaller increments so the individual failures are smaller as well.

As such, aside from integration issues like Xorg and the drivers, I would expect Arch to have fewer major breakages than (e.g.) using LTS releases of some distro and upgrading every second year.


> More often, sure. But for software delivery from the production side, the common viewpoint seems to be that deploying more often leads to less pain in total, because you're making smaller increments so the individual failures are smaller as well.

> As such, aside from integration issues like Xorg and the drivers, I would expect Arch to have fewer major breakages than (e.g.) using LTS releases of some distro and upgrading every second year.

This is mostly true for production infra (at least for your own software in production infra) but I'm not sure this logic flies for personal machines and software you don't directly interact with. I have used arch for at least 7 years and I have had upgrades render my system unbootable (or nearly unbootable, like Xorg/GDM/Gnome failing to start). These issues were mostly not due to my configuration needing to change, they were due to breakages between different packages that would end up being fixed in some later minor version.

As an end-user upgrading distros every 2 years, I really doubt I'd hit many major breakages each time as all of those pieces of software will have been in the wild for a bit and major bugs will have likely been fixed. I think the system-level issues that are resolved by frequent deploys are stuff like "systemd deprecated setting X in service unit files, so I need to update my config" or "library xyz changed some API so I need to update my app", etc. With linux distros that have coordinated releases like fedora/ubuntu/debian/etc my software's interaction with the distro may break, but for the most part the major inter-package relationships within the distro get some amount of coordinated testing which doesn't happen with rolling release distros.

Put another way: deploying more often is great for quickly discovering problems introduced in software that you write. Deploying every single dependency in a linux distro more often is going to cause you to hit every bug destined to be fixed in some minor version of the software in pieces of code that you are very far away from and lack the context to quickly debug. So you will hit a much higher sum total of bugs that would have otherwise ended up getting fixed whether you personally hit them or not. However, if you are developing a linux distribution itself, then yes your CI infra should be constantly upgrading dependencies.

I'm not convinced that an arch-paced rolling release at a distro level will ever reach a point where inter-package dependencies do not cause totally unexpected bugs given just how many inter-library dependencies there are. The entire OSS ecosystem would need to write a ton more tests for this to be a reality, and distros would need to run that full suite of tests any time any dependency is upgraded. And even then, there's still so much possibility for breakages given that shared libraries don't do a fantastic job of versioning and the upstream vendors can't possibly ensure their software works against versions of a library they never ran it against. There's a Linus Torvalds post about this: https://lwn.net/ml/linux-kernel/CAHk-=whs8QZf3YnifdLv57+FhBi...

Despite all of this, Arch is actually amazingly stable, and the entire community is probably better off due to the existence of arch and arch users. We might be the best integration test there is for OSS software.


> - If arch upgrades the major version of glibc during some random package install (rather than a system upgrade), and you don't upgrade the whole system at once, every app that didn't get upgraded will fail to start due to soname mismatches... and that can mean that pacman, sudo, etc are all busted (this has actually happened to me)

Okay I just wanna point something out here, partial upgrades in Arch are broken by design because they simply can't work. Don't do it.

That being said...

> - Arch will release a new version of Xorg before the graphics card vendors release new drivers that are compatible with it. Same for kernel versions too.

... kernel ABI is stable and even ancillary interfaces are usually stable, so usually it's quite A-OK to not immediately upgrade the kernel.


Can you elaborate on why Arch has chosen not to support partial upgrades by design? They do work in Debian, and it's saved my day a number of times. They let me roll back a broken upgrade without holding back upgrades of the rest of the system, and my computer then continues to be usable until the package in question is fixed.


They choose not to officially support it precisely because of the types of issues that upgrading e.g. glibc or openssl can cause.

That said, the vast majority of the time only upgrading a particular application/package (and its dependencies) will work just fine. It's just that there are no (official) guarantees.


The types of issues upgrading e.g. glibc or openssl can cause are largely predictable, aren't they? That's why there's libtool and sonames and that kind of stuff. The only missing piece is support for this in the package manager…


Pacman can roll back packages, and does let you stop certain packages from being upgraded. As you say, you could probably get away with doing it, if the package in question has "boring" (i.e. ABI stable) dependencies.
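Roughly, and with the caveat that both are officially discouraged (package name and version below are illustrative):

    # hold a package back during -Syu by adding it to /etc/pacman.conf:
    #   IgnorePkg = xorg-server
    # roll back to a version still sitting in the local package cache:
    sudo pacman -U /var/cache/pacman/pkg/xorg-server-1.20.11-1-x86_64.pkg.tar.zst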


Because Debian sort of maintains their own "pinned" versions between upstream and you, you basically get Debian's version possibly with their own patches and fixes; whereas Arch just directly gives you upstream, compiled for your platform.

Debian's way of working is labour-intensive, requiring packagers to fork, follow and maintain fixes and security issues in the version of the software they're packaging. They generally do a great job, but this is not a sustainable approach for smaller distributions. Arch Linux on the other hand follows upstream in real time, so you get the latest fixes directly from upstream, but there is no real Arch Linux "version buffer" that allows you to freeze the versions of (parts of) your system. You move with the stream; that's sort of the philosophy of rolling distributions like Arch Linux.


I don't think that's it, really.

The real trick is that Debian library packages carry the soname in their package names, so ABI compatibility is encoded in package dependencies. So when I try to downgrade a library (to a known older version from snapshot.debian.org), it knows precisely which packages depend on the new version and forces them to be downgraded as well.

This is entirely orthogonal to following upstream vs. backporting patches. Sure, Debian (stable) does backport patches, which makes it more likely that single package downgrades don't downgrade half of the system, but it really is a different thing. Debian testing/unstable follow upstream to a larger extent than Debian stable, and upstream fixes are usually preferred to patch backports. Still, partial upgrades and downgrades almost always work without trouble.
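Roughly like this (the snapshot date and the package/version are made up for illustration):

    # point apt at snapshot.debian.org for a known-good date...
    echo 'deb [check-valid-until=no] https://snapshot.debian.org/archive/debian/20210401T000000Z/ unstable main' \
        | sudo tee /etc/apt/sources.list.d/snapshot.list
    sudo apt update
    # ...then ask for the older version; versioned dependencies decide what else moves with it
    sudo apt install libfoo1=2.13.1-4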


Yeah, definitely usually fine to avoid kernel upgrades, so that is a saving grace. The issue is just that pacman -Syu is going to be full of surprises


> - If arch upgrades the major version of glibc during some random package install (rather than a system upgrade), and you don't upgrade the whole system at once

this was what convinced me to switch to Debian back in the '90s. It was the only distribution that could do this without throwing a tantrum. I was coming from RedHat and SuSE, and both had huge problems with this. Debian presented a higher learning curve (at least that was what people said). Debian really only lagged behind back then with its graphical installer and the overall lack of an integrated DE (compared to others).

I really like the Arch wiki as a Debian user but never used Arch itself. Not going to change because old dog, new tricks ...


I was an XMonad user during the time that Arch switched from statically linked Haskell packages to dynamically linked and oh my god was that a nightmare. Other than that I’ve had far, far fewer issues with Arch than any other distro I’ve used.


not OP, and not Arch, but Manjaro (so you can take it with a grain of salt), reporting from memory so details might be incorrect.

* after "BootHole" GRUB vulnerability, I've read that upgrade requires re-installation of the bootloader. And just in case, did run `sudo grub-install …`. After reboot, system didn't boot. Had to use installation USB to restore.

* More recently, during a routine upgrade, the pamac (Manjaro's pacman alternative) GUI showed me some "transaction can't be completed" error message. I shrugged it off - a few days later pamac wasn't starting at all. Starting it from the terminal showed an error message about a missing *.so file. Googling showed that this is a required file for pacman (the Arch package manager) to function. Typing `pacman` in the terminal showed a "command not found" error message. I restored the missing *.so file from a snapper snapshot (thanks btrfs!), and after that pamac started fine and happily upgraded my system.

I'm not sure what happened in the second case and why pamac left the system in a broken state (it looks like it wanted to upgrade pacman by first removing old files and then putting new files in place, but aborted in the middle), but the first one might be quite distro-independent.

Also, reading through recent Arch news, I believe this could bite someone:

https://archlinux.org/news/sshd-needs-restarting-after-upgra...

> After upgrading to openssh-8.2p1, the existing SSH daemon will be unable to accept new connections. When upgrading remote hosts, please make sure to restart the SSH daemon right after running pacman -Syu. If you are upgrading to openssh-8.2p1-3 or higher, this restart will happen automatically.


> I'm not sure what happened in second case and why pamac left system in broken state

FWIW, as a vanilla Arch user, I have not encountered this issue. I remember a time when pacman updates were kind of iffy (and in fact, pacman itself asked you to update it first before proceeding with the rest of the updates), but since 5.0, all subsequent updates have been completely unremarkable in the best way.


I used Arch many years ago as my main OS. I eventually had to give up on it because (i) several packages I needed were badly out of date, combined with (ii) a rule against AUR having packages that were only updates of something in the repos. I was left handling the compilation process all by myself.

When several of us raised the issue in the forum...I'll just say we didn't get a warm reception. That was more than a decade ago, so things may be completely different now, but it left a bad taste in my mouth.


Could you post more details about what broke, why and how you fixed it? I'm curious.

I've been using Arch for years and I never experienced any breakage. I update it every month or so and it's still stable even after these huge updates. There's nothing for me to do other than merge .pacnew files.
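(For the .pacnew step, pacdiff from the pacman-contrib package makes it quick:)

    sudo pacdiff    # finds leftover .pacnew/.pacsave files and offers to view/merge/remove each one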


You can try one of the Arch derivatives like Manjaro. That might help with this issue a bit.


Seconded. Manjaro's stable channel is the best of both worlds -- a meticulously manicured package ecosystem with just enough time lag behind the bleeding edge that it's very rare for a bad patch to sneak in. I think it's happened to me once in just over four years now?


I used Arch a lot back in the days when there was no dkms (got a new kernel? Recompile your GPU module, otherwise no desktop on the next reboot, especially with Nvidia), and Arch is a very good place to learn Linux, but I eventually went to Debian because everything just works.

On the topic, I think it's good to have the DFSG and to patch software with the goal of providing better integration with the system and protecting users' freedom.


Arch Linux is far from being up to date though, e.g. openjdk is stuck on last year's release (15 instead of 16). It's quite a miserable experience in 2021 to not have a single distro that is universally up to date with software development. Windows has had this since day one if we're talking about auto-updating software, and Windows Store apps, being owned by the app makers instead of by a poor middleman (the distro village), are by design up to date.


Yeah, sorry, the openjdk situation sucks. It's an "extra" package, which means the maintainer is a community member rather than being part of the Arch core group.

The nice thing about Arch packages, though, is that they're basically just bash scripts. And if all you need is a simple version bump, it's usually quite easy: download the package sources from [0], change [1] to the version you want, then do a `makepkg -s` in the directory of the package to build it (the devtools wrappers can do the same in a clean, reproducible chroot). If it builds without errors, you'll end up with a tarball that you can install with `pacman -U ${pkg}.tar.zst`.
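Roughly (package and version below are just illustrative):

    # in the PKGBUILD: bump pkgver, reset pkgrel=1
    updpkgsums                      # refresh checksums (ships with pacman-contrib)
    makepkg -s                      # build, pulling in missing build dependencies
    sudo pacman -U jdk-openjdk-16.0.1-1-x86_64.pkg.tar.zst   # install the result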

If you need help, makepkg documentation on the wiki[2] is pretty great. And don't forget to send a patch to arch-dev-public[3] and CC the maintainer. At the very least, it'll kick off a discussion that will get the package updated.

Rolling your own packages is easy in contrast to, say, Red Hat - where compiling an RPM is easy if you've done it a bunch, but really difficult to get bootstrapped on.

[0]: https://archlinux.org/packages/extra/x86_64/jdk-openjdk/

[1]: https://github.com/archlinux/svntogit-packages/blob/packages...

[2]: https://wiki.archlinux.org/title/Makepkg

[3]: https://lists.archlinux.org/listinfo/arch-dev-public



> Yeah, sorry, the openjdk situation sucks. It's an "extra" package, which means the maintainer is a community member rather than being part of the Arch core group.

This is incorrect. Packages in both core and extra are maintained by the core Arch developers. You're probably thinking of the community repository, which is maintained by a group called Trusted Users. These are still heavily vetted, it's not just anyone from the community. Or maybe you're thinking of the AUR, which is a completely untrusted repository of build scripts, which anyone can submit to.

In this case, the issue is that anthraxx, one of the Arch developers [1], has not updated many of their packages in some time. I don't know if a reason is known, but you might find something in the mailing list links someone else has posted.

[1] https://archlinux.org/people/developers/


With packaging technologies such as Flatpak and Snap, app makers now have the option to distribute a bundled-everything version of their software to Linux users.

This has drawbacks too, though, in that it's now up to app makers to keep supporting libraries up to date and secure. That's not necessarily their top priority, which is a risk for the end user. Also, unnecessary duplication of libraries increases memory use, when the distro is able to share most of them. Plus, the poor middleman has an incentive to set user-friendly, privacy-preserving defaults that I wouldn't trust as much when the package is provided by a commercial company with different incentives.

It's great to have the option, but overall I would still prefer to use distro packages except in the rare special case.


openSUSE Tumbleweed / GeckoLinux Rolling have openjdk 16. They have an extensive battery of automatic tests to prevent breakage and btrfs to roll back in case something should happen.


So pay for open source. Or volunteer.

As engineers we have no one but ourselves to blame for this one


> Arch as a daily driver, even long after I've lost the urge to endlessly tweak my system configuration.

My desktop has looked and behaved the same for almost a decade.

I migrate to a new machine by dd'ing from backups and haven't installed fresh in years.

These things are what keep me around.


That explains why FF on Manjaro has /never/ crashed on me, but on any Ubuntu flavor it crashes on a daily basis.


I'm not sure I agree. To quote one of the cases:

>For example, at some point, Debian updated the fontconfig package by backporting an upstream fix for a memory leak. Unfortunately, the fix contained a bug that would crash Firefox and possibly other software too. We spotted the new crash only six days after the change landed in Debian sources and only a couple of weeks afterwards the issue had been fixed both upstream and in Debian. We sent reports and fixes to other projects too including Mesa, GTK, glib, PCSC, SQLite and more.

That sounds a lot like Linus' "many eyes make all bugs shallow" idea working as intended.


Even more to the point, these bugs were found in Debian's unstable/testing distribution. That reiterates that the release process is happening as it should, and the bugs are being found by people who volunteered to help test the software.


Yes, that's one of the benefits. Firefox has a very large user-base compared to other FOSS projects so we will often spot bugs that others haven't noticed simply thanks to the sheer volume of crash reports we get.


> I noticed that the two bugs they linked [1] [2]

This is misleading and in fact not even strictly true. They link three bugs, not two: https://bugzilla.mozilla.org/show_bug.cgi?id=1633459

This third bug was not due to Debian patches. And in fact the two Debian patch cases in the article were included to make the point that they can now catch issues caused by distribution patches, not to claim that most of the issues they find are caused this way. Assuming, on the basis of an article written to make a wholly different point, that because 2/3 were the result of distribution patches this must be a huge problem in general for Linux software development just shows your bias on this issue.

I like and use Arch Linux on my desktops. But I'd defend the choice by Debian to patch. Most of Debian's changes are intended to improve support for certain setups or introduce bug or security fixes that upstream hasn't backported. These are both good things. Furthermore, problems that are created tend to be caught while they're still in unstable or sid: if I'm not mistaken that's exactly what happened in these cases, which is the process working as intended. (Mozilla is certainly free to disregard bug reports from Debian users if they feel that Debian's patching process is simply causing too many problems.)


You are right, sorry for missing the third bug.


I think it's a good thing that there's enough information to easily correlate things like crash reports from the latest bleeding-edge packages with uploads of dependency packages:

> the libfontconfig1 package was updated on Debian on the 21st of April and the first crash we have on record was sent on the 22nd. Here's the changelog:

I think that's a good argument for this kind of crash-telemetry, working in conjunction with Linux distributions. And it helped, in this case, that the dependencies could be updated independently from Firefox.

On the other hand, the libdrm hurd patch breaking things on Linux seems like an excellent example of how portability to obscure platforms has a non-zero cost, and it isn't as simple as "just accept portability patches".


This is exactly why historically Mozilla has required distributors to distribute an unmodified version of Mozilla software to use Mozilla trademarks such as the name Firefox, which (unlike the code) are not freely available.

Unfortunately the specific bugs here are in library packages that Firefox uses and those have no such restrictions.


Yeah, I'm pretty thankful for distributions patching out or otherwise disabling the nonconsensual telemetry in browsers, Firefox included.


If I were Mozilla I would strongly consider bundling more dependencies.


They do have a bunch if you get it directly from them:

    /opt/firefox-nightly % ls *.so
    libfreeblpriv3.so   liblgpllibs.so    libmozavutil.so  libmozsandbox.so  libmozwayland.so  libnss3.so     libnssutil3.so  libplc4.so   libsmime3.so    libssl3.so
    libgraphitewasm.so  libmozavcodec.so  libmozgtk.so     libmozsqlite3.so  libnspr4.so       libnssckbi.so  liboggwasm.so   libplds4.so  libsoftokn3.so  libxul.so
It did give me some grief a few weeks ago when their libmozwayland broke and I had to LD_PRELOAD libwayland to get it working.
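The workaround was along these lines (the library path is distro-dependent; this one assumes a typical 64-bit Arch layout):

    LD_PRELOAD=/usr/lib/libwayland-client.so.0 /opt/firefox-nightly/firefox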


Ironically, that was one of the reasons why distributions wanted their own builds.


The blog post could be titled "Why the Linux distribution model is totally broken for end user software".

As a linux user I appreciate the work, but it's tough to read this blog post and not think about the wasted effort that could be used to fix the bugs in the upstream software itself.


Really? Because I just think about how impossible this would be on Windows, where shared or external libs are inscrutable, as are OS build chains, version differences, etc etc.


We do gather symbol information on Windows too, but it's a lot less detailed than what we get from Linux. It also requires jumping through a lot of hoops. For example we get minimal information from Windows graphics drivers and we have to semi-manually scrape them ourselves.


To get symbols for basically any MS library I just add the official MS symbol server to my symbol search list. Chrome has one too, there's no real reason other vendors couldn't do it. You can also just ship a .pdb file next to your binaries, and I think the only good reasons for not doing that are keeping download sizes small (use a symbol server...) or paranoid secrecy.

For example, NVIDIA's windows drivers are like 600mb at this point yet they don't even include function name symbols. It's inexcusable.


Has Windows ever broken Firefox? Do you think it's more or less likely to happen in Windows (or Mac) or Linux?


Switching to static binaries would solve some of these issues.

Earlier discussion: https://news.ycombinator.com/item?id=27009044


Are you saying switching to static binaries would help because... they are harder to patch? If I'm not mistaken, Debian and most Linux distros build Firefox from source. In this instance, if Firefox was only available as a statically-compiled binary, it would still be vulnerable to the issue Debian was patching!


> the way distros inject themselves into this software development process produces these bad outcomes

As a former maintainer of packages in an enterprise Linux ecosystem, I agree. But...the patches are often meant to solve a dependency mismatch and the maintainers can't easily pull related dependencies forward without breaking other applications. It's great that other distros like Arch just simply upstream the patches - it's the right thing to do after all - but that takes time that the enterprise distributions don't always have.


Agree wholeheartedly. As far as I am concerned the whole repo/package manager model is the root cause of most of the reasons the Linux Desktop experience is as awful as it is.


It's actually pretty great. Linux distributions have actual human maintainers that everybody trusts and they generally do the right thing. The fact random developers don't have direct access to people's installs is a major feature. If you want to release a Linux package, you should have to work with distribution maintainers in order to actually integrate your package with their ecosystem.

It's not like PyPI/rubygems/npm/whatever where any random person can make an account and start pushing packages. The maintainers have to actually trust you.


> It's actually pretty great.

Subjective, but needless to say I disagree.

I disagree with all of your reasoning too. Inserting a middleman into the software distribution process is needless friction and adds another point of failure. Even Linus has spoken on how much of a pain in the ass it has made it to distribute software: https://www.youtube.com/watch?v=5PmHRSeA2c8&t=340s

But hey, you guys keep doing what you're doing and ignoring what potential users want. I'm sure the Year of The Linux Desktop is just around the corner.

P.S. Yes, I know, it's been your Year of the Linux Desktop since 2001 or whatever. We both know that's moving the goalposts. So is saying "but Android" since it replaced everything above the kernel with a completely new userspace stack and, oh yeah, it doesn't really run on the desktop anyway.


Why such a snarky reply? Some users do find value in having different flavors of distributions and in them exercising some control over the source; you obviously do not. Yes, that means spending resources on this, and that will sometimes cause issues and bugs, same as with any other extra code.

>But hey, you guys keep doing what you're doing and ignoring what potential users want

Both the parent poster and I are not "potential" users but actual users. Why shouldn't we count, but some "potential" users should? We already have plenty of distributions which don't do patches.

Parent poster didn't mention anything about "Year of the Linux Desktop" so your diatribe about it is completely uncalled for too. Plenty of people are absolutely fine with current trajectory of user base and don't subscribe to prioritizing user growth above serving existing users.

It's a bit lame when a positive well-written article gets the usual negative response about how everyone is doing everything wrong and "Linux desktop is completely broken".


> Why such a snarky reply?

Because at this point I guess I just assume I'm talking to an evangelist about 90% of the time. That seems to be the case with people who insist that the Linux Desktop Experience is "actually pretty great".

> Some users do find value in having different flavor distributions and them exercising some control on the source, you obviously do not.

The fact that the Linux desktop needs hundreds of bespoke distributions to offer those choices is precisely the problem, in my opinion. A well-put-together system shouldn't require everything to be compiled just-so and all together by an army of volunteers to work properly. You should just be able to use the software you want, as up-to-date or out-of-date as you want; why is that so hard in Linux?


While you see it as a problem, lots of people see it as a feature; just different priorities, that's all. I do prefer distros which curate the packages they maintain to provide some logic and predictability to the way packages are meant to be used, where and how they store config files, remove various nonsense, and so forth. And all the issues you've listed just aren't a problem for me. "Well put together" isn't an objective measurement, and opinions vary.

And anyway, your app-based view is slowly becoming more prominent, with so much development dedicated to various app-based solutions (AppImage, Flatpak, etc.). You don't see me complaining about an army of volunteers wasting their time. People work on stuff they think is useful to them; nothing to see here or be negative about.


> Even Linus has spoken on how much of a pain in the ass it has made it to distribute software

I've watched that Q&A before. Linus is of course right but not because of package managers and not because of distribution maintainers. He eventually starts talking about binary interfaces which are the root cause of this issue.

The fact is most open source projects out there don't have stable binary interfaces. So we have important libraries in the system breaking working binaries because they do things like change structure layouts and function signatures. This is a problem of the wider free and open source software ecosystem in general. In that same talk he contrasts this with the kernel which does have a stable binary interface and can therefore effortlessly run binaries compiled in the 90s with no changes at all.

If all software had stable ABIs, you wouldn't even need packages at all. You'd be able to just compile stuff and distribute that binary directly.

> But hey, you guys keep doing what you're doing and ignoring what potential users want.

Sure thing. I don't really care at all about what "potential" users want. I'm a programmer and I want a system suitable for programmers. "Normal" people are more than welcome to keep using whatever it is they're still using. If they don't see the value I'm not gonna convince them.

With Linux I get to trust that if something's made it into the package repositories then it's trusted by the community. That also means maintainers fix idiotic "features" like old and insecure bundled libraries in the case of dynamically linked distributions, telemetry in the case of a privacy-focused distribution or non-free code in the case of Debian.


> I've watched that Q&A before. Linus is of course right but not because of package managers and not because of distribution maintainers. He eventually starts talking about binary interfaces which are the root cause of this issue.

An issue exacerbated by the package managers and repo model. Developers think "why should I bother with binary compatibility when package maintainers are just going to recompile everything anyway?".

> If all software had stable ABIs, you wouldn't even need packages at all. You'd be able to just compile stuff and distribute that binary directly.

Gee, it's almost like other Desktop OSs did that.

> Sure thing. I don't really care at all about what "potential" users want. I'm a programmer and I want a system suitable for programmers. "Normal" people are more than welcome to keep using whatever it is they're still using. If they don't see the value I'm not gonna convince them.

That's fair, I'm just tired of dealing with evangelists. There are reasons people use other OSs and it isn't because they're stupid or uninformed or whatever nonsense people use to feel superior. In my case, it's because I don't like the way Linux does things.

> With Linux I get to trust that if something's made it into the package repositories then it's trusted by the community. That also means maintainers fix idiotic "features" like old and insecure bundled libraries in the case of dynamically linked distributions, telemetry in the case of a privacy-focused distribution or non-free code in the case of Debian.

They also fix idiot features like actually-working cryptography (Debian), or a "you're using a horrifically out of date version" notification from the developer (also Debian). Oh, and of course the whole model really can't handle simple tasks like:

- Installing software to different disks

- Installing more than one version of the software (unless explicitly packaged separately)

- Choosing to have a mix of old and stable versions of some software and the latest hotness of other software

These are all things that are trivial in other operating systems, but I need to use something like AppImage to get them in Linux software. Sadly not a lot of software is distributed that way, because the culture of Linux is that everything must flow through a walled-garden repo for your own good, and if you don't like it, then compile from source.


> Developers think "why should I bother with binary compatibility when package maintainers are just going to recompile everything anyway?".

Because it's good engineering.

> Gee, it's almost like other Desktop OSs did that.

They don't. Every now and then Apple breaks mac applications like they're nothing. Microsoft used to care about this to an almost heroic extent but that's history. I literally can't play some old games on Windows 10 without troubleshooting stupid DirectX DLL errors. Ironically that game just worked on Linux.

> That's fair, I'm just tired of dealing with evangelists.

Wait, so anyone who's happy with Linux is an "evangelist" now? Why even start this conversation if you don't actually want anyone to reply?


Windows compatibility isn't perfect, but it is a hell of a lot better than Linux's. I can't even take a binary from a 2 year old version of Ubuntu and run it on the latest Ubuntu in most cases without a lot of dicking around. By contrast Windows API is so stable that Linux has better compatibility with it than it does with old versions of its own.

No, not everyone who likes the Linux Desktop is an evangelist, but when I come out and say the experience sucks and the very next reply is "actually it is great" it is hard not to read that as evangelism.

Maybe I'm reading too much into it, but what I said was "I find this experience terrible" and what they said was "you are wrong". If they had left out the "actually" it would have read differently, but as it was it seemed like just another evangelist trying to tell me what I do and don't want in an OS.

Note that "Linux" in this case refers specifically to typical userland, the kernel ABI, by contrast, is quite stable.


> when I come out and say the experience sucks and the very next reply is "actually it is great" it is hard not to read that as evangelism.

Let's review. You said that the Linux desktop experience is "awful" and provided no evidence whatsoever for your claim. The reply to you, "It's actually pretty great" at least provided a couple of points in favor of believing that the situation on the Linux desktop is good, at least for some people.

From where I'm standing you're the evangelist, insisting that their own picture of how a Linux desktop should be architected is the only good way with no evidence. Claiming that the way things are currently done is garbage, with no real evidence. I think it's totally unfair for you to claim that you were just giving your opinion and then had other people walk all over you and "tell you what you do and don't want in an OS".

Rather, what people in this thread have been trying to get across (I think) is that we like Linux the way it is. We think it's "pretty great". We don't want to see it change because a bunch of tech "evangelists" (if you'll pardon the expression) think static linking and AppImages are a much better way to do things. Sure at the end of the day that's "subjective" (meaning it's the result of us finding certain things valuable that not everyone else finds valuable), but, well, so are any alternative conceptions of the Linux desktop.


If you want evidence then all you need to look at is the extremely low adoption rate. Even developers often prefer macOS. My statement about Linux's awfulness included an opinion-qualifying phrase; theirs did not. Why do I need evidence for a subjective opinion? Besides, I know from experience that if I air my grievances with the Linux Desktop, an army will come out of the woodwork to tell me how wrong I am, attacking my intellect, my use cases, the very facts of my experiences, etc. If you need evidence I suggest you do even the tiniest amount of looking at how negative opinions of the Linux Desktop are treated in forums. I, for one, am not even remotely surprised to be downvoted steadily every time I express them.

> Rather, what people in this thread have been trying to get across (I think) is that we like Linux the way it is.

And that is fine, I don't want to harass people who use and like Linux any more than I want to be harassed for not using. What I want is to not have to keep having the same arguments over and over about why I feel the way I do about it, because the community has made it clear they don't care.

> We think it's "pretty great".

If the parent had said "I disagree, I think it is pretty great" we wouldn't be having this conversation.


> My statement about Linux's awfulness included an opinion qualifying phrase, theirs did not. Why do I need evidence for a subjective opinion?

I disagree. I do not read this statement as containing an opinion-qualifying phrase about whether the Linux Desktop experience is awful; the opinion being qualified here is that the repo/package manager model is the root cause of it.

> As far as I am concerned the whole repo/package manager model is the root cause of most of the reasons the Linux Desktop experience is as awful as it is.

You appear, in this statement at least, to just take "the Linux Desktop experience is ... awful" as an underlying objective fact that everyone should just accept, and we're all debating the reason for it. I don't accept it.


> If you need evidence I suggest you do even the tiniest amount of looking at how negative opinions of Linux Desktop are treated in forums.

Your opinion about this mythical Linux Desktop was not the reason why I replied to your post.

I replied to your opinion on the packaging model. It is great and in my original post I explained why I think so. It's a highly successful model. Just look at the huge number of platforms with app stores these days. It's fundamentally the same thing. Apple is notorious for enforcing quality control on apps and forcing them to integrate with their wider ecosystem. Linux invented it with software repositories and automatic package managers and it's even more open compared to modern solutions.


No, the reason is the software development process - developers who introduce big changes between library versions (why do I need fontconfig 4.3.2_pl6, for example). When a library is a moving target, any package which depends on it, and the security of the system along with it, becomes a moving target. Unfortunately Firefox does the same (they discontinued ESR because fixing bugs is boring).


> they discontinued ESR

Did they? Looks like it's still there:

https://www.mozilla.org/en-US/firefox/all/#product-desktop-e...


Firefox is great on Linux. It kinda whoops Chrome's ass for what it provides (though Chrome on Linux is a nice experience), and Mozilla has consistently been one of the most charitable organizations towards the Linux foundation. Here's to another decade of Firefox and an open web!


> Firefox is great on Linux.

If you exclude most hardware acceleration on the majority of Linux PCs that currently run FF, and if you exclude following the standards that good, normal software follows [1].

Both issues have existed for nearly two decades now. "Great"? Absolutely not.

[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=259356


Chrome's sandbox is far superior to Firefox's from a security standpoint. I think the best bet is to run Ungoogled Chromium on Linux; it's got even less telemetry than Firefox and is more secure.


You're focusing on technicalities, but the real issue is political/social in my opinion. If Firefox did not exist, what other powers exist to prevent Google from shaping the Web for its own benefit more than it already has?

I personally don't even care whether or how much feature parity Firefox has or whether its GPU rendering is 20% slower. Not supporting Firefox comes at our own loss.


Firefox does nothing to prevent Google from shaping the web; the users of Firefox are largely irrelevant to the web.


Mozilla is a member of the W3C, and as long as it has a substantial share (and good leadership), it can influence things. Our job is to be that share.

Also, there is no such thing as an "ungoogled chromium". Google controls chrome/ium, and as long as people use it, those people remain subject to Google's control.


From what I've understood, it's not recommended to run Ungoogled Chromium: https://qua3k.github.io/ungoogled-chromium


An interesting article. Personally I use Ungoogled Chromium on my Mac occasionally, for things Firefox doesn't do well, but mostly I use Firefox and Safari.

In that article there's something I disagree with having tried:

> Most of the functionality of the patches are either in the best case minimally beneficial or can be reproduced with either a setting, a flag, or a switch,

Some years ago, on Linux, I tried finding all the command line flags necessary to use Chrome without having it talk back to Google. Unfortunately despite hunting for every option I could find (and there are a surprisingly large number of undocumented options), including with "strings", nothing I tried completely suppressed traffic to Google while using Chrome on sites not connected with Google.

My motivation at the time was to run my own local applications using Chrome as the UI, the way Electron is used now. It didn't work out because I failed to find a way to confidently suppress traffic to Google.

That experience is why I run Ungoogled Chromium now when I need Chrome functionality.


"third parties contribute prebuilt binaries, so there is no central party to trust."

For me, Google _is_ a third party. And taking into account its behavior, it receives 0 (null) trust from me. What I really don't understand is this blind trust in everything Google does.


For the Chromium-relevant configuration changes applied automatically by Ungoogled Chromium: I would never have discovered them all by myself, and, now that I know about this article, I'm not convinced that I would be able to stay on top of any future switches and configuration changes in future versions of Chromium.

That said, the "binaries built by anyone" thing is pretty suspect.


Even as someone who doesn't use Ungoogled Chromium, those are pretty dumb reasons not to use the browser. The biggest draw (to me) is having the telemetry digitally stripped out before being compiled, so that Google can't remotely activate any tracking, fingerprinting or identification.


What do you think makes Chrome's sandbox "far superior" on Linux? AFAIK they're very similar.


This is typically what I cite when people ask me about FF vs Chrome security: https://madaidans-insecurities.github.io/firefox-chromium.ht...

And here's Theo de Raadt's opinion of Firefox from back in 2018: https://marc.info/?l=openbsd-misc&m=152872551609819&w=2


Your first link is a good resource. Thanks! I'm not a security researcher, but perhaps I can add a few comments on their discussion for other users who might not be familiar with the specifics:

> Firefox's sandboxing lacks any site isolation. Site isolation runs every website inside its own sandbox so that an exploit in one website cannot access the data from another.

It does seem that Firefox's site isolation is becoming more ready. From a Mozilla blog post two days ago, with instructions how to manually enable it on Firefox stable, beta, or nightly: https://blog.mozilla.org/security/2021/05/18/introducing-sit...

Also, from the same link, they mention X11:

> One example of such sandbox escape flaws is X11 — X11 doesn't implement any GUI isolation which makes it very easy to escape sandboxes with it.

Definitely true that X11 sucks (sorry NVIDIA users). So we have Wayland becoming more mainstream now. I've been using it for several years already, on GNOME and Sway. Working great... even Electron is native now (Signal, VS Code, etc).

And lastly, they rightly mention Pulseaudio:

> PulseAudio is a common sound server on Linux however, it was not written with isolation in mind, making it possible to escape sandboxes with it.

In the last few months PipeWire became a mature drop-in substitute for Pulseaudio in my experience. It was designed with isolation in mind, and a whole bunch of other things.

Hoping Firefox can bridge the gap in security with Chrome so we can wholeheartedly recommend it to people without caveats! We deserve a fast, full-featured, secure, and open source alternative to proprietary web browsers.


Your first link is a good resource, thanks; it points out many valid issues where Firefox needs to catch up. It is definitely slanted though; for example Firefox's Rust usage is compared to Chromium talking about maybe using Rust someday and the conclusion is "so that's a wash". Also, "the parts that are memory safe do not include important attack surfaces" isn't correct; all C++ code that manages dynamic memory is attack surface.

A slightly tangential issue is that the mitigations section is not super compelling to me because I think many mitigations are low-value. Evaluation of mitigations typically does not ask the right questions: How much work is it for an attacker to bypass the mitigation, assuming they're aware of it? Can that work be packaged and reused in multiple exploits? And how many bugs become completely non-exploitable due to the mitigation?


The rate at which escapes are found (and publicly disclosed) suggests to me that the Chrome sandbox is substantially more advanced than any other browser's.

Perhaps there are just as many escapes being found each quarter in Chrome as the other browsers and they are just being hoarded privately but I don't find that super plausible.


Where are you getting that data? I'm genuinely interested.

FWIW I think it is probable that the IPC APIs into and out of the sandbox have been more thoroughly fuzzed and otherwise tested in Chrome than in Firefox. I don't know how that translates into actual security though.


It's been a long time since I have seen a firefox crash on Linux.

What's killing me is the memory leak that just renders the whole computer unusable and almost frozen. Almost because if I can grab a terminal and killall firefox then I get the machine back.

https://bugzilla.redhat.com/show_bug.cgi?id=1597028 something like that; not sure about the root cause. But I don't leave an imgur tab with a running video open for too long now.


I've been encountering a similar memory leak on Windows, so I don't know if it's really related to Linux.

What I do know is that the way Linux GUI distros deal with low-memory situations is absolutely garbage. The new systemd OOM daemon that's shipping with systemd 248 will hopefully improve this situation, but for now I'm left running nohang[1] on my dev machine, where I'm consistently running out of RAM (IntelliJ + tons of Java frameworks + a huge React code base + a web browser all take up way too much space!). Enabling zRAM also seems to work, but I haven't measured its effectiveness yet. On my home PC with double the RAM (32 GiB) everything runs smoothly, but the way Linux on the desktop deals with OOM situations is still atrocious. Besides, none of these tools should be consuming such ungodly amounts of RAM anyway.

For server situations, Linux handles OOMs better than Windows, IMO. On desktop, the same strategy just doesn't work and only makes working on it more painful.
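For anyone curious about the zRAM route I mentioned, here's a rough sketch of a manual setup; the size and compression algorithm are just examples, and distro tooling (e.g. zram-generator) can do the same thing declaratively:

  # create an 8 GiB compressed swap device backed by RAM (run as root)
  modprobe zram
  zramctl /dev/zram0 --algorithm zstd --size 8G
  mkswap /dev/zram0
  swapon --priority 100 /dev/zram0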

[1]: https://github.com/hakavlad/nohang


As of now, I find `earlyoom` a better choice than `nohang` for three reasons:

1. It comes preconfigured: https://build.opensuse.org/package/view_file/openSUSE:Factor...

2. Packaging adoption: https://repology.org/project/earlyoom https://repology.org/project/nohang

3. Written in C and unburdened by tangential features like desktop notifications, the daemon's process itself takes very little memory.


Yes, it isn't because of Linux. This is just Mozilla "open sores"; they will always add a memory vulnerability.


About a year ago I started running Firefox in a memory-limited cgroup (capped at 4 GB), so when it eats all the memory it just locks itself up (by thrashing pages at full SSD speed) instead of taking down the whole machine. It is done by:

In /etc/rc.local:

  cgcreate -t USERNAME -a USERNAME -g memory:firefox
  echo 4294967296 > /sys/fs/cgroup/memory/firefox/memory.limit_in_bytes
  echo 4294967296 > /sys/fs/cgroup/memory/firefox/memory.soft_limit_in_bytes
And then starting it from a wrapper script with:

  cgexec -g memory:firefox /usr/bin/firefox
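For what it's worth, on a cgroup v2 system with systemd the same idea can be expressed without touching rc.local; a minimal sketch with the same 4 GB cap:

  # run Firefox in a transient scope capped at 4 GiB (cgroup v2 + systemd)
  systemd-run --user --scope -p MemoryMax=4G /usr/bin/firefox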


Same here... I very rarely see Firefox or a tab crashing on Linux. Once in a rare while I see a tab crash, but the whole browser never crashes.

Memory leaks can be bad indeed, and here are two tricks that work fine for me. The first: run Firefox inside a Docker container on which I put CPU and RAM quotas; it's IMO way easier than trying to put resource quotas directly on Firefox. The other: Firefox has zero issues reopening all my tabs, so I simply kill it from the terminal when I see that it's going wild on memory. I've got 16 GB and Firefox rarely eats it all, and it's usually not sudden: it's a slow bleed over the course of a few days. So when a single Firefox instance (I typically run several, from different user accounts) gets to something like 10 GB of memory, I just kill it and restart it, choosing to reopen all the tabs (usually tens of tabs).

I'm sure there are other methods too but this works fine in my case so I haven't looked much into it.
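For reference, a rough sketch of the Docker approach I described, assuming X11 on the host and a hypothetical image called firefox-image with Firefox installed (X authorization, audio and Wayland would need extra plumbing):

  # cap the container at 4 GiB of RAM and 2 CPUs; the image name is made up
  docker run --rm \
    --memory=4g --cpus=2 \
    -e DISPLAY="$DISPLAY" \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    firefox-image firefox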


I also notice the mem leaks, but I can usually leave my Firefox process running until there is the next update to install.

What works for me is having the `about:memory` tab open and clicking on "Minimize memory usage" at the end of the day when I suspend the machine. And I have a lot of tabs and windows open, strewn over various virtual desktops. Most tabs are not loaded, so they act more like expensive bookmarks, though I have no add-on which actively unloads tabs.


I've found that earlyoom [1] can at least save me from having a complete freeze. There are packages for most distros.

[1] https://github.com/rfjakob/earlyoom
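On distros that package it, getting it running is usually just a matter of enabling the bundled service (the unit name below is an assumption; check your package):

  sudo systemctl enable --now earlyoom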


When things get bad, check about:memory and check out which process is using all the memory, and for what. It may just be some Web application you use that has a leak of its own.


You can also trigger a manual GC from about:memory.


To be fair to Mozilla, this Firefox bug rendering the "whole computer unusable and almost frozen" is 100% a Linux bug/feature. Programs have bugs and memory leaks; the OS has to handle the cases where they do, and this is what the OS does when it happens here.


Have you tried the "Auto Tab Discard" addon?


Oh cool! Now that I think of it I haven't seen the crash window in quite some time. Well done!

Now if I could only disable Ctrl+Q and prevent it from asking for a restart after update...


> prevent it from asking for a restart after update

This isn't just nagging. The API between the main browser context and content processes isn't stable between builds. This means that Firefox can't spin up a new content process if the update has replaced the executable with a different version. Therefore Firefox has the tough choice of trying anyways and risking misbehaviour (including possibly security bugs) or forcing the user to restart the browser process so that it can spawn new content processes.

It also depends on how your distribution updates Firefox. For most distributions they have this problem, however NixOS doesn't because the new version is installed in a new directory.

IIUC Firefox's self-updater also doesn't have this issue, and I think it supports Linux.


> This means that Firefox can't spin up a new content process if the update has replaced the executable with a different version.

Couldn't it spin up another one from the old version by doing execve("/proc/self/exe", ...)? Or by keeping a template process around with all the libraries loaded from which a child process can then be forked.


Completely possible, but that's not how it works ATM.

EDIT: And at the risk of stating the obvious, the fact that it doesn't work this way should be a hint that there are complexities involved that make it more complicated than "just" doing it.


I don't agree. The fact that it doesn't work is not necessarily a hint that there are complexities involved. There are a myriad of other possible reasons, such as no one really cares that Firefox forces you to restart the app whenever it updates. People just get used to it.


I've been privy to the discussions at Mozilla about this topic. I can assure you, things are not the way they are out of either laziness or ignorance.


Or, you know, most apps dislike having the executable file itself replaced under them and getting into an unknowably bad state, for basically no benefit. Distros should handle multiple versions of the same program, à la Nix, and programs should not optimize for random things like that.


That's just a defeatist attitude. Why does the executable need to be replaced? Why can't the new executable be called firefoxNext, and when firefox starts next time, it checks for this file, and if it exists, it quickly starts that instead, which renames itself to firefox, overwriting the old version? What would be wrong with that?

EDIT: Oh, I see. This happens when the distro package manager updates firefox. I don't have a solution, since there isn't an easy one that will satisfy all people. So nevermind. shrug

Maybe Firefox could create a temporary executable copy of itself and run that, so that when the original is replaced it doesn't affect the running process. But that probably comes with a host of complications I'm unaware of.


> it quickly starts that instead, which renames itself to firefox, overwriting the old version? What would be wrong with that?

You'd have to enable all users to replace files owned by root somehow - which is a bad idea. Alternatively you'd have to somehow extend the packaging to support owning either file and still being able to verify the installed files that way - which would be a massive change.


> Couldn't it spin up another one from the old version by doing execve("/proc/self/exe", ...)?

99% of the Firefox code is in libxul.so, so you'd need to get the file handles from the original process or something similar.


They could have one pristine unused content process and then just fork new ones from that.


I don't know about the `/proc/self/exe` approach, but I think it would work if they use the same binary (I think they do). I don't know how many other config files or libraries may also be read, but maybe those have better compatibility.

As for the template process, one major issue is that you would only get ASLR once at browser start, instead of for every content process, which somewhat reduces the benefits.


On my system, /proc/self/exe is just a symlink to wherever the program is on disk, not a copy or a reference to the image in memory.


If I remember rightly, some of the things in /proc that look like symlinks don't actually behave like them - opening them opens the actual file that the process has open, even if it's been deleted and a new file created at the same location.
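A tiny standalone sketch illustrating that behaviour for /proc/self/exe (nothing to do with Firefox's actual updater): exec'ing it reaches the inode the process was originally started from, even if the on-disk file has since been replaced via rename/unlink.

  /* Re-exec the currently running binary through the /proc "magic" link. */
  #include <stdio.h>
  #include <unistd.h>

  int main(int argc, char *argv[]) {
      if (argc > 1) {                      /* second invocation: stop recursing */
          printf("re-executed as pid %d\n", (int)getpid());
          return 0;
      }
      char *child_argv[] = { argv[0], "again", NULL };
      execv("/proc/self/exe", child_argv); /* replaces this process image */
      perror("execv");                     /* only reached if the exec failed */
      return 1;
  }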


Why can't they keep the old version around and continue using it? It just sounds like excuses.


The problem isn't Firefox updating itself; it occurs when a distro update, or some other external process, replaces a running Firefox.

A workaround would be to use Firefox directly from upstream instead of the distro packaged version (I understand some people are not happy with this tradeoff).


You mean the distros, right? It’s not firefox’s job.


>Now if I could only disable Ctrl+Q

Good news! =)

Set browser.quitShortcut.disabled in about:config
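If you'd rather keep it in your profile's user.js than flip it by hand, the same pref as a one-liner:

  user_pref("browser.quitShortcut.disabled", true);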


I would absolutely love a shortcut manager for Firefox. For example I use this shortcut enough that something like Ctrl+Shift+Q would be useful. (And the dozens of other things that I would like to rebind, including custom shortcuts).


I second that. I'd love to be able to configure a shortcut to move tabs to new windows.


Since when was that added? Anyway, my days of closing the whole damn browser instead of a single tab are finally over!


Yeah this must be new-ish. I remember searching for this and only finding some plugins that didn't work on Linux.



Oh god, thanks for this!! How many times I mistyped a q for a w!


> Set browser.quitShortcut.disabled in about:config

That's awesome, just did this. But apparently you need to restart Firefox in order to apply the change. (Learned it the hard way)


Well I guess that sorts itself then. :)


It does not really ask; it straight up bricks the whole browser session until you do.

But otherwise, FF has been really stable for years now.


I haven't been able to disable it, but the option "Warn you when quitting the browser" at the top of preferences has already saved me numerous times. Not as perfect as disabling, but far better than nothing.


There is a new "browser.quitShortcut.disabled" setting in about:config to disable Ctrl+Q.


I've never really had an issue with restarting Firefox; it remembers all my tabs when I restart it, either after an update or just a close/open. Perhaps it's a setting I toggled years ago and have since forgotten, but I'm not sure what the complaint is about restarting after an update.


At least for me, after FF updates it won't let you open a new tab or anything until it restarts. According to this Reddit post this is due to it being updated by the package manager, so maybe it's a Linux-only issue?: https://old.reddit.com/r/firefox/comments/jwx54y/firefox_for...


It's due to the "unattended upgrades" feature. You can blacklist FF from getting those upgrades with a change to /etc/apt/apt.conf.d/50unattended-upgrades. Add "firefox" to the "Unattended-Upgrade::Package-Blacklist" section. I find manually upgrading much less annoying than having my browser randomly lock up on me.
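For reference, the stanza in /etc/apt/apt.conf.d/50unattended-upgrades ends up looking roughly like this (other entries omitted):

  // keep unattended-upgrades away from the browser
  Unattended-Upgrade::Package-Blacklist {
      "firefox";
  };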


Thank you!


It remembers the tabs, but it loses my scroll position within them. When reading articles or a thread on some forum, that's quite annoying. Even Pocket doesn't save my progress on the desktop but does on my phone.

It's nothing that would make me stop using it, just an annoyance that I wish I didn't have to deal with on a semi-regular basis.


Unless there's a private window, which gets lost during these forced restart after updates.


Per a sibling comment, browser.quitShortcut.disabled in about:config works. For me, I just never disabled the option to ask before quitting when more than one tab is open, so that dialog always catches me before I quit by accident.


Sadly it's not enough. If I have one tab open, I still don't want ctrl-q to kill Firefox; and it doesn't count pinned tabs as open tabs, so it will still kill even if you have several pinned tabs open. It's a really, really dumb shortcut to have on by default and I'm thrilled it's finally dead.


Coincidentally, I have experienced 4 crashes already since 88.0.1 was released.

But I agree with you, I can't remember the last time I had a crash before these!


>"When it comes to Linux things work differently than on other platforms: most of our users do not install our builds, they install the Firefox version that comes packaged for their favourite distribution.

This posed a significant problem when dealing with stability issues on Linux: for the majority of our crash reports, we couldn’t produce high-quality stack traces because we didn’t have the required symbol information. The Firefox builds that submitted the reports weren’t done by us. To make matters worse, Firefox depends on a number of third-party packages (such as GTK, Mesa, FFmpeg, SQLite, etc.). We wouldn’t get good stack traces if a crash occurred in one of these packages instead of Firefox itself because we didn’t have symbols for them either.

To address this issue, we started scraping debug information for Firefox builds and their dependencies from the package repositories of multiple distributions: Arch, Debian, Fedora, OpenSUSE and Ubuntu. Since every distribution does things a little bit differently, we had to write distro-specific scripts that would go through the list of packages in their repositories and find the associated debug information"

This is why I'm glad that a lot of Linux software is moving towards distro-agnostic containerized applications maintained by developers. Linux already does not have a large market share, and if you have to deal with a dozen distro-specific quirks it seems hardly economical.


> This is why I'm glad that a lot of Linux software is moving towards distro agnostic containerized applications

To me, it entirely depends on the specific packaging job, not even who is doing it. Entirely too often, they ignore distro convention, and can introduce bugs and other security issues through ignorance, laziness or mistake.

Long term, Flatpak and its ilk will devalue distro differences and put pressure towards uniformity. Whether or not you consider this a good thing may depend on your opinion of/relationship to Red Hat/IBM.

For me, if it isn't in the Debian repos, I'm probably going to ignore it.


> Entirely too often, they ignore distro convention

Can you clarify what you mean by this?

I have used both the flatpak and rpm (on Fedora) version of Firefox and don't really notice a difference. Perhaps Firefox is just one of the packages that does distro convention properly?


Wasn't specifically referring to Firefox, didn't intend to imply that - I have not even used the flatpak version.

It was a general comment that is not specific to Flatpak - it will be a problem for any cross-distro packaging. (Heck, plenty of developers mess up native packaging for distros they're not familiar with.)


I'm not holding my breath. The Linux Standard Base is 20 years old now. That was supposed to get us there. If Canonical and Red Hat cared, they'd probably go through the Linux Foundation instead of creating competitors to AppImage. Speaking of, AppImage is from 2004. So... see you guys in 20 more years.


This will stop being a problem when/if all distros start using symbol servers


Snap, Flatpak and AppImage all suck! Fix this first.


I don't know what you're talking about, AppImage is great!


AppImage is very good, I agree, but it has its failings too. It also has external runtime dependencies that make it a pain to package and use on NixOS.


It's awful. It's a pain to keep up-to-date, pain to keep track of the software's data, pain to force to adhere to conventions, pain to theme, it's absolute dogshite.


Unfortunately they are solutions designed for a platform and ecosystem with a lot of suck built-in. For my money, AppImage has the most sane approach, but without support from the wider community they are still a bit of a pain.


Strange. I’ve never had an issue with Firefox crashing on Linux.

Firefox on Android is a different matter and is unusable on some devices.


One of the issues of running an open-source C++ project is the difficulty in obtaining stack traces from users - hoping to re-use the work that Mozilla did here for Firefox, so thanks team for paving the path there for others.


You're very welcome! If you're interested in the topic we've got a working group [1] and a very active channel on chat.mozilla.org [2]. Besides the actual stability work, we've also been busy rewriting some of this tooling in Rust (see [3] and [4]) as well as contributing to Sentry's excellent symbolic crate, which we leverage during symbol extraction.

[1] https://wiki.mozilla.org/Data/WorkingGroups/CrashReporting [2] https://chat.mozilla.org/#/room/#crashreporting:mozilla.org [3] https://github.com/mozilla/dump_syms/ [4] https://github.com/luser/rust-minidump


How do you extract symbols from official binary builds of Arch? According to official documentation (https://wiki.archlinux.org/title/Debug_-_Getting_Traces and bug report threads I've seen), Arch lacks both debug packages and symbols bundled with binary packages. As a result, if you want to debug an app crash on Arch, you need to quit the program, recompile the app from source with your makepkg.conf set to !strip, then try to reproduce the bug in your new build.

It seems Firefox has a script at https://github.com/gabrielesvelto/symbol-scrapers/blob/maste..., but I haven't investigated what it does. In any case, this information should be integrated into the Arch Wiki.
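For reference, the makepkg.conf change mentioned above is roughly the following (the surrounding defaults vary between pacman releases, so treat it as a sketch and keep your release's other entries):

  # /etc/makepkg.conf: keep debug info and don't strip locally built packages
  OPTIONS=(!strip docs !libtool !staticlibs emptydirs zipman purge debug)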


Only public symbols and unwinding information are extracted from Arch packages because they don't have debuginfo packages like other distros do. It's better than nothing.


Well, that's unfortunate to hear :( Hoping Arch starts releasing debug symbols for its official binary builds... but I don't expect it to happen soon.

Also how does unwinding info help during debugging? Does gdb use it? Do I need special reverse-engineering tools to extract useful information?


Arch already uploads symbols for their Firefox builds so that's OK, it's the dependencies that are missing. Unwinding information is used to produce accurate stacks. If it's lacking our unwinder has to rely on heuristics to discover the frames on the stack and those might end up producing suboptimal traces.


Great work Mozilla! I can't remember the last time Firefox crashed for me (Linux). I am using the upstream version though, as I prefer security fixes in my browser to be applied as soon as possible.


I use Firefox but haven't had sound since they dropped ALSA support. Occasionally I'll run Pale Moon to get sound, but while I like the browser, it's not totally stable.

Probably need to try chromium.


I've had the same issue and finally bit the bullet and installed pulseaudio. I was semi-pleasantly surprised: it actually works most of the time on my computer these days.

I still have to mess with a billion things when I want to do anything more complicated than basic music playback, but for that it sorta works which is already pretty great by Linux audio standards.


I mean, the days when PulseAudio was buggy as hell are long gone. It has been quite stable for years now, so much so that there is now another unifying audio stack: PipeWire. Most software will still require PulseAudio (PipeWire provides a stub for it), and it is a very early release, but I've been using it with only occasional bugs.


I've had pretty terrible experiences with pulseaudio only a few years ago. As often on Linux I was probably unlucky with hardware support.

More generally, I've always been frustrated with Linux audio because, in my experience, for basic audio playback OSS mostly Just Worked, and every single API that came after it was buggier and harder to configure for the simple cases, while still usually not being good enough for professional use.

One exception was Jack which I like quite a lot, unfortunately it's also not natively supported by a lot of software, and that adds a lot of complications in my experience.

But again, I've been running pulse for about six months now and I haven't had any major issues, so credit where it's due.


The great thing about pipewire is that it provides a jack stub as well — you can use both pulseaudio and jack software side by side, and even pipe them to each other. I truly believe that once it stabilizes, linux will potentially be the best platform for audio production.


I'm developing a DAW, https://ossia.io, and was recently very impressed by PipeWire: it's able to sustain 32 samples of latency on my hardware (Multiface 2) while still working with Firefox, etc.


I must admit that I'm skeptical of yet another Linux sound API but I really hope that you're right, I'd really like to be able to use Linux for serious sound stuff and finally ditch Windows for that too.

I would even be willing to finally learn how to use Ardour correctly!


I'm really into music as a hobby and I don't want to mess up one of my few working computers. I do admit to using a youtube downloader some when I want to view a video.


Yeah I definitely see where you're coming from, unfortunately I just cowardly reboot on my Windows partition when I want to do "serious" audio work because Linux is just an exercise in frustration in my experience.


I run a minimal Gentoo system and it includes pulseaudio. It just works. No problems with it whatsoever. But on Ubuntu work machine it can be a pain. I don't know why this is.


Or you could install pipewire for a pulseaudio free experience, if that's what you're after (that's what I did).
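A rough sketch of what that looks like on a systemd-based distro (package names here are Arch's; adjust for your distro):

  # install PipeWire plus its PulseAudio replacement, then switch the user services
  sudo pacman -S pipewire pipewire-pulse
  systemctl --user enable --now pipewire pipewire-pulse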


Interesting, I might give this a shot, thanks!


does it work on ubuntu?


It should, though I'm not sure about the installation documentation. As always, the Gentoo and Arch wikis should get you there, even on Ubuntu.



This is what I use.

I do have an issue where Firefox hangs (requiring a kill -9) after the end of any WebRTC call, but I don't think that's related.


While not officially supported, Firefox can still be built with ALSA support and no PulseAudio requirement at build time or runtime. I suppose this might stop working at any point, but it still works for me (currently on Firefox 88.0.1).
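If anyone wants to try it, here's a sketch of the relevant mozconfig lines for a local build; since this is unsupported, the option names may change, so verify them against your source tree:

  # .mozconfig: build with ALSA and without PulseAudio (sketch only)
  ac_add_options --enable-alsa
  ac_add_options --disable-pulseaudio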


lol awesome, not only do I have the misfortune of no sound, people here want to rub it in with downvotes for mentioning it. thanks guys


We're just jealous you don't have to listen to ads :)


It would be nice if Firefox played nicely with cgroup memory limits. Right now, Firefox will freeze, crash or freeze while swapping heavily when it starts allocating near or at the limit. Same thing with Firejail.


That's a really tough problem to solve on Linux. We're actively working on it in order to make Firefox better behaved when memory gets tight but it's not easy. You can follow the work in these two bugs:

https://bugzilla.mozilla.org/show_bug.cgi?id=1532955

https://bugzilla.mozilla.org/show_bug.cgi?id=1587762


Thanks for the links


I've also seen Firefox freezing and swapping heavily when real physical RAM is depleted.

I was able to reduce RAM usage by reducing the number of content processes, so that might be helpful to you also.


> I was able to reduce RAM usage by reducing the number of content processes, so that might be helpful to you also.

Yeah, I dropped dom.ipc.processCount by half a while ago, but I think Fission[1] overrides that setting.

[1] https://wiki.mozilla.org/Project_Fission
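For anyone wanting to try the same tweak, a user.js sketch (the value here is just an example, and as noted above Fission may override it):

  user_pref("dom.ipc.processCount", 4);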


This is great. I've done my fair share of griping about Firefox's many UI and functional regressions, but these types of stability improvements are very welcome and much appreciated.


Tangential, but for me Firefox on Linux is rock solid. A good while ago, as a half-serious test, I stopped closing tabs to see how FF would perform. Up to around 5500 tabs now, daily driver, amassing my own personal internet so still using old tabs, and still not a single crash!

Although new tabs start to feel like they'd open quicker with 5490 fewer tabs around. Who knew.

I guess my biggest finding is that tab and history search suck. Half of what I end up searching for is one of my tabs. I wonder why..


I've never had an issue with it crashing on Linux, but I've had this bug for months that causes the scrollbar to jump around. Usually it jumps up 10-20px every couple of seconds. I've tried removing all plugins, refreshing, changing mice, etc. until I had to switch to Brave.

Anyone else come across this issue? I'm using Solus.


I haven't encountered that issue on Solus Budgie. My first thought was an issue with your mouse, but since you've ruled that out, and it apparently works for you in Brave, I'm stumped.


Since I upgraded to Fedora 34 (I was on Wayland before too, though) Firefox disconnects from Wayland and crashes whenever there's a bit of IO going on.

Haven't had a tab crash in quite some time though.


I think I started scraping symbols from Fedora 34 builds on Monday, we probably haven't looked at those crashes yet.


The only issue I have with Firefox (and a regular one) is that its UI seemingly cannot handle tiling window managers. Unless I manually resize each Firefox window after startup, the UI won't properly adapt to the actual window size.

But that is really a minor nitpick compared to the tons of issues I have with other browsers; Firefox runs insanely stable, and easily handles a large number of tabs.


I think that’s specific to your setup. I run Debian SID with i3 as tiling window manager and FF is the primary browser I use. As a web dev, I have quite a few instances of FF open all the time.

I can’t recall an issue with FF behaving badly when tiled. So, it’s unlikely that FF generally doesn’t work with a tiling WM.


That might be a bug in your window manager. Window resizes, whether automatic or manual, should send the same notifications to the software about the need for a redraw.


I doubt it. It happens in every tiling window manager I've used, and only ever with Firefox. Also I've seen the same issue mentioned once by a user of a normal desktop environment, who had to toggle fullscreen in Firefox to get it to update the UI after startup.

But it's just another visual bug users of tiling WMs have to deal with. Many programs handle this much worse, either crashing instantly because the WM doesn't let them have their favorite hardcoded window size, or becoming unusable from epilepsy-inducing glitches. Before I went down that path I'd never have guessed UIs (both on desktop and on websites) are so damn flimsy.


FWIW I've never seen this issue on Sway (can't say for others as I haven't used them).


Never had this issue with bspwm or sway.


This is great! Looks a lot like Fedora's crash statistics:

https://retrace.fedoraproject.org/faf/summary/


Congrats on shipping Gabriele and all LLT team and colleagues :)


If anyone from Mozilla is reading this: the thumbnails are unreadable but can't be clicked to open the full-size image in a lightbox or new tab.


I've been using Firefox Nightly on Arch Linux for 5 years and never had an issue (except some changes that broke my customization).


Does Firefox provide an official AppImage?



Wonder if they sent those Breakpad improvements back to Google.


Yeah, see https://github.com/google/breakpad/graphs/contributors; luser was contributing as Mozilla staff, and Gabriele is too.

And a bunch of other folks (I counted 3 more).


How about fixing GPU acceleration? This is THE main reason I don't use Firefox.


Only in the last year did I get FF to stop crashing frequently when there is no GPU. More dependence on the GPU would not be welcome. (There is no GPU access from within a VM, on Qubes.)

I can't see what benefit you expect to get from the GPU. It is plenty fast with all-CPU rendering.

The Qubes X model is interesting: each VM runs its own X, and renders windows into memory shared with a central VM that copies the pixels to real video memory. Input events get forwarded to a VM's X according to which is the active window frame. The camera can be attached temporarily to a VM, and mike input similarly. So, no VM has physical access to hardware anytime other than when you want it to.

I am hoping a future Qubes will have proxying Vulkan stubs or something, for controlled access to a GPU. But it isn't missed much since FF stopped crashing.


It's in fact not plenty fast. My PC heats up considerably watching 4K YouTube videos, and it stutters with 4K 60fps. How is that acceptable? How is it acceptable that I cannot use one of the most popular websites optimally? And that's not even talking about all the wasted energy.


Where do you find 4k youtube videos? The best I ever encounter are 1080p. Often enough the best I can find are 720p or even worse. Not that it usually matters.

I use mpv when I am serious, moving FF entirely out of the picture.


Here are a few channels that I watch often that always upload in 4K. This is not a niche thing for serious YouTube channels:

https://www.youtube.com/channel/UCsn6cjffsvyOZCZxvGoJxGg

https://www.youtube.com/channel/UCHnyfMqiRRG1u-2MsSQLbXA

https://www.youtube.com/channel/UCSpFnDQr88xCZ80N-X7t0nQ

https://www.youtube.com/channel/UCXuqSBlHAE6Xw-yeJA0Tunw

Besides, even at 1080p the difference in how much battery it saves is astounding. With acceleration your CPU is just idling, while without it, it has to work quite hard.


If they have a setup that can display 4K and content that they want to watch in 4K without their laptop burning a mark on their lap, then that's a valid use case. A use case that doesn't work with Firefox at the moment because of Firefox. "You're holding it wrong" is an incredibly inane attitude.


What's wrong with it?


It only works under GNOME officially and that only since January. And I have in fact never gotten it to work, and I'm not alone.


As of bug 1702301 [1], and Firefox 89 currently in beta, we are shipping hardware accelerated WebRender by default on Intel (Mesa 17+), AMD (Mesa 17+) and NVIDIA (Mesa 18.2+, Binary driver 460.32.03+) GPUs, as well as lifting the desktop restriction. KDE and XFCE are allowed by default in 88. Wayland anything for a while. There are some particular GPUs with issues which we block, and there are a few more coming due to reported issues (mostly older/weaker GPUs).

There are open bugs I know. Some of which that are getting attention, others we haven't gotten a chance to prioritize. Is there a bug filed about your particular problem?

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1702301


> Wayland anything for a while

According to this : https://wiki.mozilla.org/Platform/GFX/WebRender_Where that is not true. Or is it out of date?


That is out of date, yes. It was turned on in bug 1701977.

https://bugzilla.mozilla.org/show_bug.cgi?id=1701977

Edit: Actually this is a little incomplete. It is still turned off with XWayland in late beta and release. So unless the MOZ_ENABLE_WAYLAND=1 environment variable is set, you won't get WebRender by default. I will need to review why we didn't ship with XWayland yet -- there were a couple of bugs (e.g. bug 1635186) but I don't know the status of them offhand. Expect to see movement here within the next few releases I would say.


Thanks for answering my question even though I posted it a day after this thread started.

I'm glad to hear that. One of my computers is actually using Wayland with MOZ_ENABLE_WAYLAND=1, but I haven't actually checked if it worked yet.
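A quick way to check, in case it helps anyone else: launch with the variable set and look at the Graphics section of about:support (field names from memory, so double-check):

  # "Window Protocol" should read wayland (not xwayland/x11) and
  # "Compositing" should read WebRender (not Basic or WebRender (Software))
  MOZ_ENABLE_WAYLAND=1 firefox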


Will VAAPI get enabled (on X11) by default too?


VAAPI requires DMABuf, which requires EGL. We want to ship hardware acceleration using EGL instead of GLX, but CI issues have held it up (we are otherwise pretty much there for testing on nightly).


Great work!


Works for me on sway (Wayland).


I use it under KDE.

Or are you on Wayland?


Now it would be cool if they didn't break the number inputs with Bootstrap's form-control class.

There's something wrong with the padding and the up and down arrows simply don't work.

I think it's already fixed on beta/nightly, but this shouldn't happen. We can't wait months to have our inputs working again.

Also, there's another bug introduced a couple of versions ago. If you select some text with your mouse, including the next line, then when you open a new tab with the middle button it'll paste the whole selection into the address bar.

I usually select text or code when I read it, and every time I open a new tab with my mouse, boom. I have to delete everything, or select a single word somewhere and close that tab to reopen a new one. And if I'm not looking and start typing, I end up submitting a long text to DuckDuckGo.



