
TBH it's entirely in character with "make X Y again" to give some recommendations that are super straightforward and sound appealing, don't make a case that X is in fact no longer Y, completely ignore why those recommendations haven't been implemented thus far, and don't spend any time thinking about whether there's even the slightest downside to those recommendations (which are in fact super dangerous/harmful).



The most essential person in kernel dev linked this just the other day, presumably seriously, in reply to an AMA question:

    75 points · 2 days ago
    
    If my environment doesn't need to worry about
    executing malicious code and I want syscalls to
    happen as fast as possible, is there a single/simple
    option to disable all the performance killing
    hardware mitigations?

    gregkh Verified 183 points · 2 days ago
    
    https://make-linux-fast-again.com/
https://www.reddit.com/r/linux/comments/fx5e4v/im_greg_kroah...


Sure. That's a very big "if," and the most essential person in kernel dev has the ability to make that the default if he wants. There's a reason he hasn't.

(If your environment actually doesn't need to worry about executing malicious code and you want to make syscalls as fast as possible, try a unikernel or implementing your code in a kernel module. Or, depending on what you're doing, try kernel bypass to get to the devices you care about and use something like https://lwn.net/SubscriberLink/816298/4aed890ee2dbffff/ to pin your user code to certain cores and get the kernel completely out of the way. Having to transition out of user mode to access hardware instead of making plain function calls, having to change page tables between processes, etc. are all performance-killing mitigations of their own. http://www.csl.cornell.edu/~delimitrou/papers/2019.asplos.xc... found a 27x performance improvement by getting rid of the privilege boundary between userspace and kernelspace - if you really care about performance and really don't care about malicious code, why would you leave a 27x speedup on the table and worry about a small percentage improvement from these flags??)
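Concretely, the core-pinning setup can be as small as a few boot parameters. A sketch (these are real kernel parameters, but the core numbers are illustrative — pick cores that match your topology):

```shell
# /etc/default/grub fragment: carve cores 2-7 out of the scheduler,
# the timer tick, and RCU callbacks, so a pinned user process runs
# with the kernel almost entirely out of the way.
GRUB_CMDLINE_LINUX_DEFAULT="quiet isolcpus=2-7 nohz_full=2-7 rcu_nocbs=2-7"

# After regenerating the GRUB config and rebooting, pin the
# latency-critical process onto one of the isolated cores:
#   taskset -c 2 ./your-hot-loop
```

The LWN article linked above describes the fuller "task isolation" work on top of this.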


I don't agree.

Telling people to write in kernel mode if they care about performance isn't realistic. For most people that would mean completely rewriting their code from scratch, foregoing high-level software stacks and languages, giving up on most databases, giving up on all manner of tools and techniques for high-velocity software development, giving up fault tolerance, dealing directly with fiddly hardware issues (when do I need a TLB shootdown?), etc.

Whereas disabling spectre mitigations is a one-line config change.

For use cases where local system security really doesn't matter (of which there are a lot, let's be honest), a one-line config change for a 25% (or whatever it is now) performance boost is a pretty damned good deal.
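For reference, on kernels new enough to accept the umbrella switch (roughly 5.2+; older kernels need the individual flags the site lists), that one line is, assuming a GRUB-based distro:

```shell
# /etc/default/grub - disable all optional CPU vulnerability
# mitigations at boot. Only sensible on a machine that truly
# never runs untrusted code.
GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"

# then regenerate the config and reboot:
#   sudo update-grub && sudo reboot    (Debian/Ubuntu-style)
```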


I'm not sure I agree that cases where local system security really doesn't matter and performance matters are that plentiful, but I am happy to be convinced otherwise. In particular, just about any personal computing context doesn't count - you'd have to not run mutually-untrusted third-party code. That rules out web browsers with JavaScript, that rules out Android/iOS-style independent apps, etc. Sure, if you use the web without dynamic content and you use local office suites you're fine, but on the other hand, you don't really care about performance - a 486 will deliver enough performance to read textual content and run a word processor and spreadsheet.

Gaming is a context where you care about performance and you aren't using multiple apps at once, but (and I admit this is a bit of a naive guess) I'd be surprised if it's syscall-bound. It seems like performance is likely to be I/O-bound (getting assets from disk into memory), CPU-bound, and GPU-bound, but are you really making large numbers of syscalls? (Maybe this matters in online gaming?)

So that leaves basically some specific server workloads, and at that point I think some of these techniques start to be realistic. Pinning your work onto a core and using kernel-bypass networking is a pretty straightforward technique these days. It's not quite as easy as using the kernel interfaces, but it's pretty close, and it's definitely worth investing some engineering effort into if you care about performance - you can get much more than 25% speedups.

I agree that writing in kernel mode is generally unrealistic (although if you're writing a kernel module for Linux, you still don't need to care about fiddly hardware issues - you've got the rest of Linux still running). Mostly I'd like to see more work like the paper I linked - there should be a standard build of Linux which has hardware privilege separation turned off for use in the cases where you actually can avoid hardware privilege separation (single-user VMs on cloud hosts, single-user data crunching machines, dedicated single-tenant database servers, game consoles without web browsers, ebook readers without web browsers, etc.), or at least a flag to spawn a process and leave it in ring 0. If the use cases are plentiful, this seems like it would be valuable for lots of people - and it'd also make it clear that this generally isn't an option you want on personal computers. (But I think the reason this hasn't been done in the last several decades is that there aren't actually that many use cases that are both genuinely single-user and syscall-bound.)


If you think a 486 is sufficient for reading textual content and running a word processor and spreadsheet, you haven't been paying attention to software bloat. A 486 would have a hard time just booting a modern OS, never mind the application software.


Dig up a copy of WordPerfect 6 from somewhere, no need to boot a modern OS.

(More to the point - the solution to software bloat isn't to get rid of security protections so that the bloat has room to grow.)


Could use Geoworks also. Pointy clicky on even XTs, it would absolutely fly on any 486.


> So that leaves basically some specific server workloads,

The vast majority of servers don't run any untrusted code. Servers tend to do lots of syscalls for network I/O.

> Gaming is a context where you care about performance and you aren't using multiple apps at once, but (and I admit this is a bit of a naive guess) I'd be surprised if it's syscall-bound.

I would expect that interfacing with the GPU involves a fair number of syscalls -- but admittedly I'm also guessing.


> single-user VMs on cloud hosts, single-user data crunching machines, dedicated single-tenant database servers, game consoles without web browsers, ebook readers without web browsers

This is a lot of cases. I'd love to get 25% perf back on postgres, or 25% back on my air-gapped DAW, etc. etc.


Benchmark it - your air-gapped DAW is almost certainly spending very little of its time making system calls, and depending on workload, your Postgres probably isn't either. You'll get 25% back on syscall-heavy workloads but your workloads probably aren't syscall-heavy.
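A crude way to see how much syscall overhead can matter: move the same amount of data through the kernel with wildly different syscall counts. The gap in wall time is almost pure syscall overhead — the part the mitigations inflate.

```shell
# ~100,000 one-byte transfers: one read(2)/write(2) pair per byte.
time dd if=/dev/zero of=/dev/null bs=1 count=100000

# The same bytes in a single transfer: a handful of syscalls total.
time dd if=/dev/zero of=/dev/null bs=100000 count=1
```

For a real application, `strace -c -p <pid>` prints a per-syscall time summary, which tells you directly how much of the process's time goes to syscalls at all.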


I would rather just enable this stuff, honestly. Which I think is the point.


Yes, that's precisely my point - the webpage appeals to people who want easy answers whether or not the problem has easy answers.


> a 486 will deliver enough performance to read textual content and run a word processor and spreadsheet.

Are you still running Visicalc?

I have seen, used, and maintained spreadsheet applications that are irritatingly slow on an eight-core i7 with 8 GB of RAM and an SSD.

Granted this was on Windows but nonetheless the idea that end users only need a 486 is ridiculous at best and probably insulting.


While we're being honest, how many programs that get written are so desperate for performance that the only thing left to do is turn off security? And are the people who are able to even make this determination the kind of people for whom making a kernel module is unrealistic?


Your comment would make sense if GP had written "but writing kernel mode code is very hard and only really smart people can do it".

But they wrote something completely different. It seems to me like you just totally ignored their comment. What's the point of that?


I didn't ignore it. My point is, the kind of people who truly need this kind of performance are already translating hotspots to assembler and so on, a kernel module is plenty practical for them. For nearly all computer users there is no excuse for turning off these mitigations.


I don't know. If I'm running an application server with only my own code on a dedicated server, and I can flip some switches to make it go faster, then that's pretty nice, no? Might save me from upgrading to a bigger (pricier) server. What am I missing?

I mean, sure, that site is nuts, it sorely needs documentation. But not every scenario needs Spectre protection.


The problem with the scenario you describe is: how will you ensure that no one ever forgets that this server is vulnerable and can never be used for certain things? And everyone on here advocating turning off the mitigations is assuming the only exploits are the ones we know about. But when has that ever been the case? If more people turn off the mitigations, black hats will be invested in finding exploits we haven't realised yet.


Two comments down, he says to only use that if you know exactly what you're doing, as that set is not secure for most people.


Correct, but 'geofft's criticism of it isn't great, in my opinion. Reading it makes it obvious what it does, and only a fool would disable security mitigations in a situation where it mattered. The link has significant value in being a quick and easy way to direct people to information, and it seems nonsensical to criticize it for things which don't particularly apply.


> only a fool would disable security mitigations in a situation where it mattered.

Are we reading the same HN threads on this page? I'm seeing people who I don't consider to be stupid, who obviously have some Linux knowledge, still advocating that user accounts are a waste of time for most desktops.

If anyone is smart enough and knows enough about Spectre/Meltdown to understand the risks they're taking, they are also smart enough to search online how to disable the kernel protections. The commands aren't hard to find.

If anyone is not smart enough to find that information online for themselves, they also don't have enough knowledge to make an informed decision about whether or not it's safe for them to run.

In both cases, there is value in forcing users to display a modicum of knowledge about even just the fact that Spectre/Meltdown exist before we give them a command to run that turns off an important security setting. Anyone who knows anything about Spectre/Meltdown already knows that the mitigations affect performance. They should already know what to search for online without the aid of no-context commands being pasted at the top of HN.


> Are we reading the same HN threads on this page? I'm seeing people who I don't consider to be stupid, who obviously have some Linux knowledge, still advocating that user accounts are a waste of time for most desktops.

A fool isn't necessarily stupid: plenty of people have knowledge yet terrible judgement.

> If anyone is smart enough and knows enough about Spectre/Meltdown to understand the risks they're taking, they are also smart enough to search online how to disable the kernel protections. The commands aren't hard to find.

> If anyone is not smart enough to find that information online for themselves, they also don't have enough knowledge to make an informed decision about whether or not it's safe for them to run.

> In both cases, there is value in forcing users to display a modicum of knowledge about even just the fact that Spectre/Meltdown exist before we give them a command to run that turns off an important security setting. Anyone who knows anything about Spectre/Meltdown already knows that the mitigations affect performance. They should already know what to search for online without the aid of no-context commands being pasted at the top of HN.

You're considering it as advice, when really it should be considered more like a DOI.


> A fool isn't necessarily stupid: plenty of people have knowledge yet terrible judgement.

Then I'm not sure why you disagree with my criticism - I'm claiming that this page appeals to people who have knowledge in this matter but have not done the deep thinking to have wisdom in this matter. There are plenty of smart people who find "Make X Y again" for other values of X and Y appealing.


I'm claiming it functions more like a DOI or other identifier than a sales pitch. I don't think there's a ton of deep thinking involved: there will always be someone who's willing to run a web browser as root on their main system; you can't stop people who are set on something foolish from doing it, but if you can make it more convenient for people who have valid reasons, why not?

This all might be a bit too much serious thought for what was intended as a joke initially (the site, not my comments), though.


> still advocating that user accounts are a waste of time for most desktops.

I mean, phones and tablets do fine without any user-like abstraction. You can sandbox apps without having a concept of multitenancy.


Every Android app runs as a different UID.

I'm not sure what you mean by "concept of multitenancy," but if you want to "sandbox apps," you cannot let side-channel attacks break that sandbox.


iPads do have a concept of multiple users, though only one can use it at once.


Android differentiates between user processes and root processes. I'm pretty sure iOS does as well, although maybe they've coded it as something weird.

I'm not seeing people here arguing that Linux could get by with only supporting 1 user account. I'm seeing people argue that the biggest reason they avoid running as root is just because userland applications complain about it. It's very difficult to do sandboxing if there isn't some kind of differentiation between a privileged and unprivileged process.

Regardless, Linux also doesn't really have good sandboxing by default, so I'm not completely sure what you're getting at. It's still a bad idea for people to run a Linux system as root.


iOS has a root user, that your code does not run as.


> Reading it makes it obvious what it does, and only a fool would disable security mitigations in a situation where it mattered.

I disagree. Most users do not understand how speculative side channels work or how they might be affected; many people's experience with Meltdown and Spectre is "my games got slower because of some magical speculative stuff that I don't really understand". Making an informed decision on this is hard.


I may be overestimating them, but I imagine this applies drastically less to your average Linux user who knows how to actually apply this.


The average Linux user will not understand the implications. Heck most don't even know the security trade-offs of X11 (which are relatively simple to discover and understand).

Leave the HN bubble and go to an average Linux forum. You would be surprised what kind of advice people give or what people are willing to copy and paste into their terminal ;).

This is not meant as a criticism. Most people, including many Linux users, just use their computer as a tool and will just do anything to get it running as they want.


I think you're overestimating the average Linux user.


I could see it.


Do you have an example of a situation where it does not in fact matter, and do you have a benchmark of the speedup?


Most offline systems, and many systems used only for running trusted code (which obviously bars anything with a web browser, and most internet-connected personal computing devices, among other things). Applications that equate roughly to "scientific number-crunching", for example, generally run on systems where security doesn't matter much.

There's a case to be made for gaming, and the mitigations if I remember correctly definitely ding performance on quite a few games, but because most computers used for games have significant amounts of sensitive information on them, among other things, I don't think it really holds well.

Benchmarks (I really dislike linking to this site, but given it did a bunch of benchmarks, it's not the worst thing, I guess):

'Up to 50% performance loss!' is obviously clickbait, and I disagree with a lot of the wording & conclusions & so on, but there's definitely some workloads where the trade-off makes sense.

https://www.phoronix.com/scan.php?page=article&item=linux-42...

https://www.phoronix.com/scan.php?page=article&item=linux-42...

Warning about those benchmarks, though: they're of a rather old kernel version; I imagine the mitigations tank performance less now. However, given that 5.x has, to my knowledge, been performing way worse than 4.x, 4.x is probably what most people would be using for these sorts of things.

EDIT, added section below:

Ah, here we go; how it's affecting modern stuff:

https://www.phoronix.com/scan.php?page=article&item=spectre-...

The mitigations ding it a reasonable amount for a substantial amount of workloads, though it's not as intense as it used to be.


Are systems used for "scientific number-crunching" really applicable in this case? While some departments may have those on a separate network, not connected to the internet at large, I have never heard of a system employed for research tasks being completely air gapped. Otherwise, accessing and working on data would be prohibitively harder. Would such a trade-off be worth the potential gains by deactivating mitigations?

Also, may I ask why you dislike Phoronix? I personally enjoy their articles, and the benchmark suite they have developed seems very well-rounded and transparent. I wouldn't count the statement concerning the ~50% increase in time it takes to complete a certain task on 4.20 as clickbait, considering it was never used in the linked article's title to hook readers and gain clicks.

Honestly, I have yet to see a large and popular enough use case that both allows for a completely air-gapped system and benefits so heavily from disabling mitigations that an admin couldn't just look up the flags required. If I, as an admin, made the conscious choice of going so far as to disable these patches, I would also want to at least re-read whether this is truly significantly advantageous, rather than copying a line from a website with no context or further information on the current state and impact on performance.


It'd be nice to be able to easily boot into, or toggle into, a performance optimized, disabled mitigations environment to do something while offline.. many computer uses don't require being connected to other computers. I've gotten into the habit of hotplugging my Ethernet connection, personally.


You can actually do that fairly easily, just add the parameters linked to a second boot entry in GRUB.
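A sketch of such a second entry (the kernel version and root device here are illustrative — copy them from your existing entry; on Debian-style systems this goes in /etc/grub.d/40_custom, followed by update-grub):

```shell
# Hypothetical extra GRUB menu entry booting the same kernel with
# mitigations disabled - pick it only for offline sessions.
menuentry "Linux (mitigations off, offline use)" {
    linux  /boot/vmlinuz-5.6.0 root=/dev/sda1 ro quiet mitigations=off
    initrd /boot/initrd.img-5.6.0
}
```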

However, I would very much not advise doing so, as I am still unaware of any task that can both be done without a network connection and is significantly slowed down by the mitigations after recent improvements to the kernel and software. Basically, the potential benefit is very low for a lot of tasks, whilst requiring additional security measures (ideally fully air-gapped) and a reboot of the system every time you'd do such a task.

Also note that, in theory, just being temporarily offline may not shield from being exploited fully.


As an example (the only case that I've identified personally), if you're curious: I have a (Windows; Intel Q6600) box that I use for gaming occasionally. A single-player game I like, Total War: Shogun 2, ran at about 55 fps (benchmark) pre-Meltdown/Spectre/etc. Now it gets ~22 fps. I can use https://www.grc.com/inspectre.htm to toggle some mitigations to get it playable again.


...doesn't that exactly answer the question that was asked??


Very well said.

I would add that it is also needlessly antagonistic in addition to being in poor taste.


To be clear: I mean that using "Make X Y Again" is antagonistic and in poor taste. GP of this comment is well stated and accurate.


[flagged]


The issue isn't bad taste, it's starting flamewars. Please don't.

https://news.ycombinator.com/newsguidelines.html


I'm not aware of any examples where "Make X Y Again" was a positive thing.


We're all smart enough to read between the lines here: although you've barely avoided saying 'Trump', this is very clearly a political message.

One of the reasons I like HN is it generally avoids such shallow political talk.


Is there a possibility this whole project is in jest? It's called "Make Linux Fast Again", and it deliberately disables reasonable security measures to achieve its goal. Could be a political statement in the negative!


I don't know, the website went away really fast. :-)

I could see legitimate use for un-networked computers.


The crisis has changed people's online behavior, and the shallow culture of other sites is becoming normalized on HN. Please flag where appropriate.


[flagged]


Please take the time to read this over:

https://news.ycombinator.com/newsguidelines.html


There isn't a high bar of 'deepness' required here, but thinly veiled political insults with zero substance don't meet it.

I'll pull something out of the guidelines jshrevek posted for those who don't want to read through the whole thing:

> Please don't use Hacker News for political or ideological battle. That destroys the curiosity this site exists for.

I'll add that using the term 'gross idiocy' also violates these guidelines.


Thank you for responding politely and meaningfully to the snarky, partisan flamebait.


The quality of comments on hacker news is due to the high bar for thoughtfulness and respect. Not from a restriction blocking any "political" discussion.

The linked content (disable security features to improve performance) is very clearly a relevant topic for hacker news and the apparent association between the linked domain and Trump's infamous catch-phrase seems like an intentional choice that is worth discussing.


>linked content (disable security features to improve performance) is very clearly a relevant topic for hacker

True, but not relevant to issues of partisanship and ideological battle.

>domain and Trump's infamous catch-phrase seems like an intentional choice that is worth discussing.

Theoretically, maybe. In practice, well, take a look at the actual quality of the comments being made.


There is a restriction actually. This is pulled from the guidelines-

> Please don't use Hacker News for political or ideological battle. That destroys the curiosity this site exists for.


It is possible to discuss a political topic without the discussion becoming a "political or ideological battle".

It's definitely challenging, but it is possible.


I think it is a real problem that someone should try to solve. I visit a good number of expert communities and they all tend to be highly on topic. This is a good thing without question but I've often noticed the community is completely oblivious to what happens outside their bubble. I suspect most of the accidental battles are from being years behind on the discussion with the people they've talked with every day for the last decade or so.

I don't claim to have a solution. Programming philosophy or Programming politics do not seem very attractive but I do suspect there to be a big adventure behind those curtains. It's not like code is not political, lacks a philosophy or ideology. But the best we could do was "stop spying on me!"? hah


It is certainly very possible to do that, and it almost always requires sharing new information that relates to the context of the discussion.



