
Legitimate question: on any non-shared, non-virtualized system, is there any reason to enable these workarounds besides running sandboxed applications such as JavaScript in a web browser (or Flash/Java applets/ActiveX, but those are not really super popular nowadays)?

For any other non-sandboxed application you pretty much have to trust the code anyway. Privilege escalation is always a bad thing of course, but on a single-user desktop machine, an attacker who gets user shell access can do pretty much anything they want already.

As far as I can see the only attack surface on my current machine is a website running untrusted JS. For all the other applications running on my machine, if one of them is actually hostile then I'm already screwed.

Frankly I'm more annoyed at the ridiculous over-engineering of the Web than at CPU vendors, because in 2017 you need to enable a Turing-complete language interpreter in your browser just to display text and pictures on many (most?) websites.

Gopher should've won.



This unfortunately also affects almost all mobile apps and modern Windows installations, as they all run Javascript-enabled ads. Maybe this might cause Microsoft to reconsider what it allows to run on Windows but I don't see mobile ads going away any time soon.


>javascript-enabled ads

Good opportunity to get rid of them.


Or, as a compromise, no third-party javascript. Google can easily code up the 100 most common javascript ad formats and let advertisers pick from a menu.


Why would you need a Turing-complete language to describe advertisements? Why can't they be static images? Or static HTMLs?


From the advertiser's perspective, it wouldn't be a Turing-complete language, since they would only have access to standardized templates. Such a system would probably have to be implemented in Javascript at the browser level though, unless you could do it all with CSS animations.


I'd rather just be rid of them altogether. No compromise.


Do they really need 100 formats?

1. Video ad that autoplays.

2. Punch the monkey.

3. ???


> on any non-shared non-virtualized system is there any reason to enable these workarounds

Does the non-shared non-virtualized system have any encryption keys in memory that you want to protect?

Do you use full-disk encryption or ssh to other machines or use a cryptocurrency wallet?


If one hostile application running on my machine isn't sandboxed then SSH local keys are pwned anyway. Might as well install a keylogger or just hijack ssh-agent directly. Full disk encryption keys might not be but the app will have access to any mounted and unlocked safe. Ditto for cryptocurrency wallets without a hardware token (and even with a hardware token if the app can get it to sign a bogus transaction).

I don't think this particular vulnerability significantly increases the attack surface for any non-sandboxed application running on my computer. There are much easier and more straightforward ways for an attacker with shell access to get at anything they want that don't involve dumping kernel memory. So in my situation the only attack vector I'm worried about is JS running in the browser, since I gave up on JavaScript whitelisting long ago when I realized that most of the web is unusable unless you allow heaps of untrusted scripts to run all over the place. I don't have time to audit the source code of every random website I visit.


These questions are only relevant if you don't control and trust all the code running on that system. For a consumer system this is true if (and basically only if) you're running a web browser on that system.

If you're confident in the software you're running on non-shared hardware, both Meltdown and Spectre are non-issues requiring no mitigation. That's a narrow class of systems, but it exists.


> For a consumer system this is true if (and basically only if) you're running a web browser on that system.

… which is pretty close to universally true, especially when you consider how many people use apps which are based on something like Electron. If those apps load code or, especially, display ads there's JavaScript running in places people aren't used to thinking about.


> If you're confident in the software you're running on non-shared hardware...

and that you won't be hit by a remote execution vulnerability.


That's what being confident in the software means.


As the owner of gopher://jdebp.info/1/ and the author of gopher://jdebp.info/h/Softwares/djbwares/guide/gopherd.html , I disagree. GOPHER needs a lot of improvement merely to learn the lessons that people learned with FidoNet in the 1980s.


Followup legitimate question: the only way to read data is to control the results of a speculative execution or fetch right?

For JavaScript, wouldn't it be sufficient to check all the calls out of it so that they can't pass data that controls an exploitable speculative execution, and to generate JIT code such that the JS itself can't create exploitable instructions? The API would have to be heavily scrutinized and the JS would run somewhat slower.

If the rest of the browser code is vulnerable, but the JS code can't control the speculative execution then it should be safe to run any JS.


Javascript is theoretically fixable -- what's needed is coarser timing capabilities. I think it would be very hard to completely eliminate the possibility of controlling speculative execution as long as you can predictably mistrain the CPU branch predictor, which can be done even at a high level with if statements. Unless you get rid of the notion of contiguous arrays (making predictable word-aligned cache manipulation nearly impossible) and potentially remove the JIT completely in favor of an interpreter, it's hard to be completely rid of this class of attack when executing any sort of code on the processor. Those steps are probably possible, but the ensuing JS performance hit would be the death knell of any browser.

That said, without some way of measuring time at the granularity of tens of instructions, this attack is moot. So that's likely going to be the mitigation. Unfortunately, the web frames used in some apps are infrequently if ever updated, so JS engine updates there are gonna be hard.
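The timer-coarsening idea above can be sketched in a few lines of C. This is a hypothetical illustration, not any browser's actual code; real browsers clamp APIs like performance.now() inside the JS engine, but the arithmetic is the same:

```c
#include <stdint.h>

/* Hypothetical sketch of the mitigation described above: rounding a
 * nanosecond timestamp down to a 100-microsecond grain hides the
 * ~100 ns difference between a cache hit and a cache miss, which is
 * exactly what these attacks need to measure. */
#define GRAIN_NS 100000ULL /* 100 microseconds */

uint64_t coarsen(uint64_t t_ns) {
    /* Round down to the nearest multiple of GRAIN_NS. */
    return t_ns - (t_ns % GRAIN_NS);
}
```

With this grain, a hit at 1250 ns and a miss at 1310 ns both report as 0, so the attacker can no longer tell them apart.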


Why does it matter if they can cause the CPU's predictors to guess incorrectly if they can't control the target address of the branch or memory access?

Example: "if (a < length) return data[a]". If "a" comes directly from JavaScript, then they can trick the CPU into fetching data[a] even though the speculation is invalid and gets thrown out. But if there's a barrier between the check and the load, as in "if (a < length) { prevent_speculative_execution; return data[a]; }", then they cannot learn anything.
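That barrier pattern can be sketched in C. This is an illustration, not production hardening code; the x86 `lfence` used here is the serializing instruction Intel recommended for this pattern, and the non-x86 fallback is only a compiler barrier, shown as a placeholder:

```c
#include <stddef.h>
#include <stdint.h>

uint8_t data[16];
size_t length = 16;

/* The pattern from the comment: without a barrier, the CPU may
 * speculatively execute the load with an out-of-bounds `a` before the
 * comparison resolves, pulling attacker-chosen memory into the cache. */
uint8_t read_unsafe(size_t a) {
    if (a < length)
        return data[a];
    return 0;
}

/* With a speculation barrier between the check and the load, the fetch
 * cannot start until the bounds check has actually resolved. */
#if defined(__x86_64__) || defined(__i386__)
#  define PREVENT_SPECULATIVE_EXECUTION() __asm__ volatile("lfence" ::: "memory")
#else
#  define PREVENT_SPECULATIVE_EXECUTION() __asm__ volatile("" ::: "memory") /* placeholder only */
#endif

uint8_t read_with_barrier(size_t a) {
    if (a < length) {
        PREVENT_SPECULATIVE_EXECUTION();
        return data[a];
    }
    return 0;
}
```

Both functions behave identically architecturally; the difference is only in what the CPU is allowed to do transiently.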

I concede that safely checking all data coming from JS code to the browser would be a huge task, but pretty sure it would work to fix the problem for JavaScript although not in general, between processes with shared IO pages and such.


Honestly, you probably don't even need the barrier in your example. Getting data[a] into the cache is no information leak if the attacker already knows a. That's why the example in the Spectre paper uses an additional level of indirection.
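That extra level of indirection looks roughly like this in C. A sketch only: the array names follow the Spectre paper, and the 512 stride is one choice of cache-line multiple:

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];        /* attacker supplies an index past the end */
size_t  array1_size = 16;
uint8_t array2[256 * 512]; /* one cache line per possible byte value */
uint8_t temp = 0xFF;

/* The indirection that makes the leak work: during misspeculation the
 * secret byte array1[x] selects WHICH line of array2 gets pulled into
 * the cache. The attacker later times reads of array2 to find the hot
 * line and recover the byte. Caching data[a] alone, for an `a` the
 * attacker already chose, reveals nothing new -- hence this second,
 * secret-dependent access. (512 is a multiple of the cache-line size,
 * spreading the 256 possible values across distinct lines.) */
void victim(size_t x) {
    if (x < array1_size)
        temp &= array2[array1[x] * 512];
}
```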


Yes, thanks to HN being so quick to freeze comments, I was unable to fix the example.

Point is, a JavaScript program in isolation cannot read anything, it has to interact with the other target code somehow. If that interaction (the data passed over the API call) can't fail after a certain point and can't be used to read data before that point, then the JS can't read anything.



