> The most important change is that content processes — which render Web pages and execute JavaScript — are no longer allowed to directly connect to the Internet, or connect to most local services accessed with Unix-domain sockets (for example, PulseAudio).
Finally, some process isolation! That's something to celebrate!
Being able to wrap Firefox in an unprivileged chroot makes for a really nice addition to its security.
If I am not mistaken, Tor Browser has already been using this technique for some time to prevent malicious connections from leaving the browser directly. All traffic is passed to the tor client via a Unix socket.
>Unprivileged user namespaces had some security bugs in their initial Linux implementation, and some Linux distribution maintainers are still wary of keeping them enabled.
Unsurprising. These are all a pile of hacks that try, and fail, to make up for the fact that Linux doesn't support capabilities (not to be confused with POSIX capabilities).
Fortunately, projects such as seL4 and Genode do exist, so it might be possible to, at some point, run a browser on an actually sane environment.
I have hope for Zircon/Fuchsia, because I'm sure the goal, at some point, is "everything supported by AOSP and all Chromebook hardware" which I hope would be enough to bootstrap the positive feedback loop of people contributing drivers…
The BSDs are nowhere near Linux when it comes to driver support. Particularly on mobile devices like the ones Fuchsia is targeting, I don't even know if you can run a BSD with what exists currently.
FreeBSD is probably the closest. At least for the stuff that matters to me :) Specifically, I have a Radeon RX 480 graphics card and a Mellanox ConnectX-2 network card and they work great.
Fun fact: both of these drivers are pretty much copy-pasted from Linux with few changes -- thanks to LinuxKPI, a layer that reimplements some Linux kernel interfaces on top of FreeBSD's.
FreeBSD was close to, or even ahead of, Linux maybe 15 years ago; now, however, Linux is too far ahead. Heck, Linux gets driver support for hardware before the hardware even exists.
Two groups in the L4 family used driver VMs that had just enough of Linux to use the drivers. Depending on how you weigh compatibility vs. isolation, you could put your driver-dependent code in that VM, use it via a virtual driver from another partition, or use a native driver on the microkernel. I believe OK Labs used that mix for their OKL4 platform for mobile virtualization. It got deployed in what they said was a billion phones, mainly for baseband isolation.
So, it's doable. There was also a project a long time ago that combined virtualization with Windows to use its drivers. Stuff like that seems like the best solution for now, where we can incrementally build native drivers over time. At least until cross-platform synthesis is built and takes off. ;)
I use Windows, because as much as I would love to be using an open source desktop OS, the way Linux systems are structured is a really terrible fit for how I use a desktop.
For brevity, let's stick with just application management as an example: You can't install two versions of the same application in the vast majority of cases, you can't move applications, good f'ing luck if you want to install something not in the repository, a version newer than what's in the repository, etc.
There's a reason there's so little non-OSS software support for Linux.
You're going to need to be more specific. Linux does indeed have a feature called "capabilities", so it's not clear exactly which OS feature you are talking about.
I'm curious why chroot is used instead of mount namespace and pivot_root(2). This would let them get away without CAP_SYS_CHROOT, while also providing stronger filesystem isolation.
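For what it's worth, the pivot_root(2) dance might look roughly like the minimal sketch below (assuming unprivileged user namespaces are enabled; /tmp/sandbox-root is a made-up path and most error handling is omitted):

```c
/* Minimal sketch: mount namespace + pivot_root(2) instead of chroot.
 * Run unprivileged; the new user namespace grants CAP_SYS_ADMIN over
 * the new mount namespace, so no CAP_SYS_CHROOT is needed. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/mount.h>
#include <sys/syscall.h>
#include <unistd.h>

#define NEW_ROOT "/tmp/sandbox-root"   /* hypothetical pre-populated root */

int main(void)
{
    if (unshare(CLONE_NEWUSER | CLONE_NEWNS) != 0) { perror("unshare"); return 1; }

    /* Keep mount events from propagating back to the host namespace. */
    mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL);

    /* pivot_root requires the new root to be a mount point. */
    mount(NEW_ROOT, NEW_ROOT, NULL, MS_BIND | MS_REC, NULL);

    /* Stack the old root on top of the new one, then detach it. */
    chdir(NEW_ROOT);
    if (syscall(SYS_pivot_root, ".", ".") != 0) { perror("pivot_root"); return 1; }
    umount2(".", MNT_DETACH);
    chdir("/");

    execl("/bin/sh", "sh", (char *)NULL);   /* now confined to NEW_ROOT */
    return 1;
}
```

Once the old root is detached there is simply no mount left to escape back to, which is the stronger isolation compared to a plain chroot.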
We're all running some bespoke setup, and probably each planning something more-advanced and more-bespoke, just to overcome bugs in browser implementations. What we all actually need is a browser developed with the goal of being secure by default, rather than shrugging off eg the leaking of network config / window size / rendering engine nondeterminism / etc.
I used that one to route all of my GMail stuff over my server. This is done transparently to Thunderbird, by launching it in its own network namespace. That way, I don't get these annoying "Someone has your password" messages when travelling.
That's close to what I'm using (I have only one route-script for OpenVPN which is pretty much the same, but handles both up and down commands from VPN client). To make DNS work properly, there should be a resolv.conf file in /etc/netns/$netns_name/ (directory has to be created manually).
I should generalize my stuff and push it to Github...
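In case it helps anyone, the per-namespace resolv.conf trick is essentially what `ip netns exec` does under the hood: join the namespace, then bind-mount the file from /etc/netns/<name>/ over /etc/resolv.conf inside a private mount namespace. A rough sketch follows; the "mailvpn" namespace name and the mail client are just placeholders, and like `ip netns exec` itself this needs root:

```c
/* Rough sketch of `ip netns exec mailvpn thunderbird` for DNS purposes:
 * enter a previously created network namespace, then privately bind-mount
 * /etc/netns/mailvpn/resolv.conf over /etc/resolv.conf before exec'ing. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <sys/mount.h>
#include <unistd.h>

int main(void)
{
    /* Namespace created earlier with `ip netns add mailvpn`. */
    int fd = open("/var/run/netns/mailvpn", O_RDONLY);
    if (fd < 0 || setns(fd, CLONE_NEWNET) != 0) { perror("setns"); return 1; }

    /* Private mount namespace so the resolv.conf override stays local. */
    if (unshare(CLONE_NEWNS) != 0) { perror("unshare"); return 1; }
    mount(NULL, "/", NULL, MS_REC | MS_SLAVE, NULL);
    mount("/etc/netns/mailvpn/resolv.conf", "/etc/resolv.conf",
          NULL, MS_BIND, NULL);

    /* (In practice you'd drop back to your normal uid/gid here.) */
    execlp("thunderbird", "thunderbird", (char *)NULL);
    perror("execlp");
    return 1;
}
```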
Last year, after a firejail local root exploit got released [0], I completely quit following their project.
I don't want to discourage the developers, and I think it's stunning what they are creating...
But under the aspect that they are working on a security product, I'm concerned by their overall code quality and testing strategy.
They might want to consider taking a step back and reevaluating how they direct their development in terms of secure (C) coding practices.
*Disclaimer: not a developer, just a sysadmin, but reviewing some of their code/profiles/CI jobs in their git repo [1] leaves me with a bad feeling.
I didn't like the code either. I remember seeing that they were changing euid between root and something else all over the place, for seemingly little benefit, because exploit code could simply change it back to root too. It seemed a bit confused.
Though there's nsjail if you want something better written/cleaner.
Well, it depends: does the Linux account you run Firefox under have the ability to gain root (sudo, su)?
If yes, I don't know. Maybe a 'strong' AppArmor/SELinux policy might catch some of the exploits firejail tries to mitigate?
Otherwise yes, clearly: a Firefox exploit would usually not result in root access (unless it's combined with other Linux exploits) - in the case of firejail, it would have resulted in a root exploit.
I'm not saying "avoid firejail at all costs." But I am saying you shouldn't have false confidence in your security just because you are using firejail, because their current practices don't seem ideal for a security product. At the moment firejail advocates make it sound like firejail is 'a proper security solution for the Linux desktop', but given the circumstances, it's not.
It might be worth checking out the tor-browser(-bundle?) AppArmor profile(s).
I don't know much about this topic, and appreciate sane levels of protection.
But when I use the menu in Snap Firefox (59) to open a file in another partition so I can read it ... and I can't do that because of 'permissions' ... I get irritated.
While I'm on this general topic ... I also don't like the fact that javascript won't let me write to a file. That would be VERY USEFUL.
In short, the long-term accumulation of these 'hardwired' restrictions is severely cramping my use of (and enjoyment of using) computers. If I want it, I have time to change a preference, but not to earn a PhD.
JavaScript can write to a file... see jsshell, Node, and a number of other runtimes. However, you do not want browsers to have unfettered filesystem access. The language is perfectly able. You could possibly write an extension to do what you want.
I'm not talking about 'unfettered'. It would be great for me to -natively- let users write to/modify text-only files in the same folder as the parent file. I fail to understand why this has to be hard or dangerous (it'd certainly be useful ... to them AND to me). Real languages have a WRITE command.
FileReader was a big step forward that took forever to arrive. Where's FileWriter?
Thanks for that, but I'm talking about user-initiated writing to separate, editable, plaintext, named files in the same local folder as the parent HTML/JS source. NO web interaction needed or desired, no other knowledge of or access to filesystem desired. For example, browser as IDE. Or database editor/searcher. Powered by JS.
The only way you'll get that is to use something like electron that gives you access to node resources. The browser is sandboxed for a reason, and what you're talking about would be a violation of that. The good thing is Electron apps are incredibly portable.
Interesting. Sandboxing in Chrome is something I turn off when I write my headless bots as it requires extra privileges. Would be nice to see a comparison between it and FF.
> In this context, we'd like to remark that an application like Firefox only needs CAP_SYS_ADMIN, CAP_SYS_CHROOT, CAP_SET(UG)ID to achieve most effect
I hope they also drop those capabilities once they have achieved what they need. I also wonder which other namespaces they use; the article only mentions user namespaces.
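For reference, dropping those setup capabilities once the sandbox is in place might look roughly like this (just a sketch using libcap, not Firefox's actual code; link with -lcap):

```c
/* Sketch: drop the capabilities used for sandbox setup
 * (CAP_SYS_ADMIN, CAP_SYS_CHROOT, CAP_SETUID, CAP_SETGID). */
#include <stdio.h>
#include <sys/capability.h>
#include <sys/prctl.h>

static int drop_setup_caps(void)
{
    cap_value_t caps[] = { CAP_SYS_ADMIN, CAP_SYS_CHROOT,
                           CAP_SETUID, CAP_SETGID };
    int ncaps = sizeof(caps) / sizeof(caps[0]);

    /* Remove them from the bounding set so execve can't hand them back. */
    for (int i = 0; i < ncaps; i++)
        prctl(PR_CAPBSET_DROP, caps[i], 0, 0, 0);

    /* Clear them from the effective/permitted/inheritable sets as well. */
    cap_t state = cap_get_proc();
    if (state == NULL)
        return -1;
    cap_set_flag(state, CAP_EFFECTIVE,   ncaps, caps, CAP_CLEAR);
    cap_set_flag(state, CAP_PERMITTED,   ncaps, caps, CAP_CLEAR);
    cap_set_flag(state, CAP_INHERITABLE, ncaps, caps, CAP_CLEAR);
    if (cap_set_proc(state) != 0) { cap_free(state); return -1; }
    cap_free(state);
    return 0;
}

int main(void)
{
    /* ... namespace / chroot setup would happen here ... */
    if (drop_setup_caps() != 0) { perror("drop_setup_caps"); return 1; }
    /* Continue as a less-privileged content process. */
    return 0;
}
```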
CAP_SYS_ADMIN is a catch-all where they dump all the things root has that didn't fall into another bucket. So if a program needs CAP_SYS_ADMIN then basically you haven't really got any meaningful sandboxing so far as I understand it.
Outside a namespace, CAP_SYS_ADMIN is enough to grant yourself any other capability, but inside a namespace the main danger is that it exposes a bit more kernel attack surface than a normal user would have.
The important question is whether the APIs that CAP_SYS_ADMIN and other capabilities grant you access to are potential security issues in a user namespace.
Dropping things like CAP_NET_ADMIN in a namespace is useful, because it helps reduce the attack surface further.
Right, CAP_SYS_ADMIN has been described as the new root. It gives the sandbox more rights than what a normal user process would have. Well, capability checks are often carried out against a resource, and their namespace probably does not own many resources.
So I would like to see a detailed analysis of what doors CAP_SYS_ADMIN really opens. But I am very skeptical that this is the right thing to do.
I wonder why they need CAP_SYS_ADMIN. I have used unprivileged containers before. We needed to make some ugly compromises to have them do useful work, but CAP_SYS_ADMIN has never been required.
> The one exception to the network policy, for now, is the X11 protocol which is used to display graphics and receive keyboard/mouse input
Well, this makes everything else moot, since you can just inject keystrokes and thus trivially take over the system... (in addition to probably many other vulnerabilities in the X11 server)
They should try to do a proper isolation job (meaning no I/O other than shared memory and pipes to another Firefox process), not this apparently useless effort.
If you have 10 problems to fix, the effort spent fixing the first 9 of them isn't "useless" as you say, just because you haven't fixed the 10th one yet. And they couldn't fix all of the issues by blindly proxying all IO through the main Firefox process, either, because that wouldn't fix the problem. It has to be able to _selectively_ proxy IO through the main process, and that takes time and effort to implement.
The whole point of this is to stop attackers who've managed to break out of the JavaScript sandbox. It's an extra layer of protection. If you're going to assume that an attacker is restricted to what JavaScript is meant to be able to do then the whole exercise is pretty much pointless; this is also a bad assumption.
You have a good point, but the way you are writing is unnecessarily aggressive. Emphasis on the unnecessary, it doesn't add anything so you should leave it out of your writing.