
Page was down when I tried to read it, but it's archived here: http://archive.is/s831k.

It's hard to get your head around how big a deal this is. This vulnerability is so bad they killed x86 indirect jump instructions. It's so bad compilers --- all of them --- have to know about this bug, and use an incantation that hacks ret like an exploit developer would. It's so bad that to restore the original performance of a predictable indirect jump you might have to change the way you write high-level language code.

It's glorious.



>It's hard to get your head around how big a deal this is.

It truly is difficult to predict all the ripple effects from this. I can't think of a single computer bug in the last 30 years that's similar in reach to this Intel Meltdown.

[EDITED following text to replace "Intel bug" with "Spectre bug" based on ars and jcranmer clarification. The Intel Meltdown can be fixed with operating system update patches for kpti instead of a complete recompile.]

Journalists like to overuse the bombastic metaphor "shaken the very foundations" but this Spectre bug actually seems very fitting of it. Off the top of my head:

- browsers like Chrome & Firefox have to be compiled with new defensive compilation flags because they run untrusted JavaScript

- cloud providers have to recompile and patch their code to protect themselves from hostile customer vms

- operating systems like Linux/Windows/MacOS have to recompile and patch code to protect users from malware

Imagine the economics of all these mitigations. Also imagine that each of the cloud vendors AWS/Google/Azure/Rackspace had very detailed Excel spreadsheets extrapolating cpu usage for the next few years to plan for millions of $$$ of capital expenditures. Because of the severe performance implications of the bugfix (5% to 50% slowdown?), the cpu utilization assumptions in those spreadsheets are now wrong. They will have to spend more than they planned to meet their workload-throughput goals.

There are dozens of other scenarios that we can't immediately think of.


> to this Intel Meltdown.

Wrong bug. Meltdown is bad, but not anywhere near as bad as Spectre, which affects everything! No AMD immunity here.


Meltdown is far worse in practice than Spectre.

Spectre needs a more perfect storm of factors to lead to exploitation. No hardware is immune to it, but not all software is vulnerable, either. You need code execution, you need a vulnerable target, you need to somehow trigger the vulnerable target's path, and that vulnerable target needs data you want.

Meltdown just needs code execution and you have full read access to all memory.


> Meltdown is far worse in practice than Spectre.

Far worse for an unpatched system, yes.

But in terms of fixing the problem, Spectre is much worse, with a larger impact.

It's so bad that I suspect some people will deliberately run without Spectre protection.


Other than javascript in a browser there does not appear to be much in the way of actual spectre attack vectors, though. So once browsers are patched spectre appears to be basically "fixed" for most practical purposes.

Worth noting that many of the claims around Spectre are wholly un-demonstrated. The PoC only involves reading memory from within the same process (aka, the PoC read memory through a side channel that it had full ability to read anyway). Trying to exploit this in a different process is entirely undemonstrated, and there's not even any real discussion in the paper of how it would work. In theory it's doable, but the issues around how you do this once process switching and IPC enter the picture seem substantial, yet the paper does not make any attempt to tackle any of that.


>Worth noting that many of the claims around Spectre are wholly un-demonstrated.

This is untrue.

https://googleprojectzero.blogspot.com/2018/01/reading-privi...

Variant 2 is Spectre.

"This section describes the theory behind our PoC for variant 2 that, when running with root privileges inside a KVM guest created using virt-manager on the Intel Haswell Xeon CPU, with a specific version of Debian's distro kernel running on the host, can read host kernel memory at a rate of around 1500 bytes/second."


>>Worth noting that many of the claims around Spectre are wholly un-demonstrated.

>This is untrue.

Anything that is not demonstrated in a reproducible way (that is, some downloadable PoC code) is wholly un-demonstrated. To date, afaik, that goes for Spectre in whole.


Note that variant 2 of spectre has not been demonstrated on AMD and AMD claims to be unaffected ( http://www.amd.com/en/corporate/speculative-execution )

However, the description of spectre from spectreattack.com is this:

"Spectre breaks the isolation between different applications. It allows an attacker to trick error-free programs, which follow best practices, into leaking their secrets."

That, however, is not demonstrated by any of these PoC, not that I can find.


> Intel meltdown is bad, but not anywhere near as bad as Spectre

I mentioned Meltdown because multiple entities (gcc, llvm, Google Cloud, Azure, Linux, Windows, etc) have already converged on concrete solutions such as new compiler flags and patches which gives us a glimpse into the costs and severity. The Spectre bug may be "bigger" but it doesn't have complete consensus mitigation yet and in the meantime, we really can't tell people to "just keep your laptop unplugged from internet and don't run any apps to avoid the Spectre bug." The Spectre hole seems like it will be an open problem for many years and the new gcc/llvm is an incomplete fix.


Meltdown is fixed by page-table isolation (unmap the kernel's pagetables after syscalls are done). All the other bugs are Spectre, and these are the ones that require fixing every compiler.


This document has performance impact estimates from Red Hat Performance Engineering: https://access.redhat.com/node/3307751


As far as I can tell this only measures the impact of the Meltdown bugfix/kernel patch for Intel CPUs. Would be interesting to see the accumulated impact of the mitigations for Meltdown (KPTI) and Spectre (using retpolines).


"The Red Hat Performance Engineering team characterized application workloads to help guide partners and customers on the potential impact of the fixes supplied to correct CVE-2017-5754, CVE-2017-5753 and CVE-2017-5715, including OEM microcode, Red Hat Enterprise Linux kernel, and virtualization patches."

The numbers seem to be for patches on several Intel CPUs for all three of the disclosed vulnerabilities. [reworded]


they'll be releasing a fix for RHEL 5. hats off to these gentlemen, as the patches probably aren't anywhere close to cleanly applicable.


> - browsers like Chrome & Firefox have to compile with new defensive compilation flags because it runs untrusted Javascript

Not meaning to be rude, but this itself summarises (and the issue will perhaps shed more light on) how stupid an idea it is to let everybody run untrusted code from other people, let alone third-party stuff like "privacy-intrusion-as-a-service" startups and the like.


That won’t really be a problem for the cloud providers. They'll simply charge more because the customers will use more compute.


But it will cost them on I/O, which cannot be passed on to consumers as the price is contractual. Either the cloud providers are on the hook, or they have to pass it on to Intel somehow.


Which in some cases might make it cheaper for customers to use their own hardware, resulting in cloud providers losing business.


Maybe. But their own hardware will also be slower, no?


Maybe. When it is your own hardware you can do a different risk analysis. If you control all the code that runs on the system you don't need any mitigation. Most servers don't run arbitrary code from the internet - at least not intentionally. (A security hole that allows remote code execution is a real issue, but that risk can be managed.)

My company doesn't need to mitigate the risk of me using these tricks. I'm not going to, but even if I was I have many other ways to get at sensitive data if I tried. If I get caught I'm fired and put in prison which is enough mitigation.


Is this a 5% to 50% performance hit on all workloads or specific workloads?


The 50% was for a microbenchmark of C++ code making heavy use of virtual functions. V-tables and jump tables get much more expensive. Any execution path that is known at compile time is not affected.


I wonder where that leaves Java, C#, Node.js etc. Do the VMs generate code that suffers from this?


Hmm, interesting question. On the one hand, JITs do devirtualization, so on many code paths the indirect calls are replaced with direct ones, which should mitigate the performance issues. On the other hand, I think the JITs also need to insert the logic and branches for re-jitting and speculative execution, and these might be additional indirect calls.

Would be interesting how it balances out in total.


yes


And I fear there's little reason to think that the "three variants" from project zero's announcement are the full scope of the problem. They were just the variants that the few people in on this found time to develop exploits for. There can now be security bugs in things your program doesn't do; it seems like there is room for nearly unlimited creativity in finding them.

From the spectre paper:

"A minor variant of this could be to instead use an out-of-bounds read to a function pointer to gain control of execution in the mis-speculated path. We did not investigate this variant further."


I think the main takeaway should be "speculative execution creates exploitable side-channels, and you should assume your hardware is exploitable until proven otherwise." AMD and ARM are probably still exploitable with unknown exploits, possibly even at Meltdown-levels of exploitability, but people haven't taken the time to reverse-engineer the microarchitecture enough to find the exploits.

If I were developing processors, I'd be having emergency meetings on trying to craft exploits to figure out where our processors' weaknesses are. While being happy that Intel is getting all the bad PR for this and I'm not.


AIUI the fundamental difference between Meltdown and Spectre is Meltdown involves speculating execution that loads memory across privilege domains and Spectre doesn't. If both AMD and ARM won't speculate memory loads across privilege domains, then it sounds like they're strictly immune to Meltdown.


> main takeaway should be "speculative execution creates exploitable side-channels, and you should assume your hardware is exploitable until proven otherwise."

Speculative execution does not create side-channels in and of itself; the side effects of speculative execution do. In this case, the side effect is cache state. Just don't change the cache during speculative execution and there's no problem.


Why can't the processor isolate those cache lines during speculative execution?


And roll them back? It can, but it doesn't for performance reasons. What the performance impact would be is unknown but this requires a silicon change so unless you work at Intel you'll probably never know.


This needs to find its way into the hands of every manager of companies that make processors.


> And I fear there's little reason to think that the "three variants" from project zero's announcement are the full scope of the problem.

Agreed. This is an entirely new class of vulnerabilities, and we're just at the beginning.


As evidenced by the Mozilla announcement.


ARM’s white paper details a variant 3a that affects some of their cores that are unaffected by var3 (and vice versa)


Is glorious the right word for it? We’re going back to the stone ages where processors couldn’t predict the targets of indirect jumps. More generally, this seems to me like an attempt to patch out of what is really a class of attacks leveraging fundamental assumptions about high-performance CPU design. Before, OOO just had to preserve correctness and (some of) the order of exceptions and memory operations. Now, it has to preserve (some of) the timing of in-order execution too? Where does this path end?


Legitimate question: on any non-shared non-virtualized system is there any reason to enable these workarounds besides running sandboxed applications such as javascript in a web browser (or flash/java applets/Active X, but those are not really super popular nowadays)?

For any other non-sandboxed application you pretty much have to trust the code anyway. Privilege escalation is always a bad thing of course, but for single-user desktop machines, getting user shell access as an attacker means that you can do pretty much anything you want.

As far as I can see the only attack surface for my current machine would be a website running untrusted JS. For all other applications running on my machine, if one of them is actually hostile then I'm already screwed.

Frankly I'm more annoyed at the ridiculous over-engineering of the Web than at CPU vendors. Because in 2017 you need to enable a turing complete language interpreter in your browser in order to display text and pictures on many (most?) websites.

Gopher should've won.


This unfortunately also affects almost all mobile apps and modern Windows installations, as they all run Javascript-enabled ads. Maybe this might cause Microsoft to reconsider what it allows to run on Windows but I don't see mobile ads going away any time soon.


>javascript-enabled ads

Good opportunity to get rid of them.


Or, as a compromise, no third-party javascript. Google can easily code up the 100 most common javascript ad formats and let advertisers pick from a menu.


Why would you need a Turing-complete language to describe advertisements? Why can't they be static images? Or static HTMLs?


From the advertiser's perspective, it wouldn't be a Turing-complete language, since they would only have access to standardized templates. Such a system would probably have to be implemented in Javascript at the browser level though, unless you could do it all with CSS animations.


I'd rather just be rid of them altogether. No compromise.


Do they really need 100 formats?

1. Video ad that autoplays.

2. Punch the monkey.

3. ???


> on any non-shared non-virtualized system is there any reason to enable these workarounds

Does the non-shared non-virtualized system have any encryption keys in memory that you want to protect?

Do you use full-disk encryption or ssh to other machines or use a cryptocurrency wallet?


If one hostile application running on my machine isn't sandboxed then SSH local keys are pwned anyway. Might as well install a keylogger or just hijack ssh-agent directly. Full disk encryption keys might not be but the app will have access to any mounted and unlocked safe. Ditto for cryptocurrency wallets without a hardware token (and even with a hardware token if the app can get it to sign a bogus transaction).

I don't think this particular vulnerability significantly increases the surface of attack for any non-sandboxed application running on my computer. There are much easier and straightforward ways to get access to anything an attacker with shell access may want that don't involve dumping the kernel VM. So in my situation the only vector of attack I'm worried about is JS running in the browser since I gave up on javascript whitelisting long ago when I realized that most of the web is unusable when you don't allow heaps of untrusted scripts to run all over the place. I don't have time to audit the source code of every random website I visit.


These questions are only relevant if you're not controlling and trusting all the code you're running on that system. For a consumer system this is true if (and basically only if) you're running a web browser on that system.

If you're confident in the software you're running on a non-shared hardware, both Meltdown and Spectre are non-issues requiring no mitigation. This is a narrow class of systems, but it exists.


> For a consumer system this is true if (and basically only if) you're running a web browser on that system.

… which is pretty close to universally true, especially when you consider how many people use apps which are based on something like Electron. If those apps load code or, especially, display ads there's JavaScript running in places people aren't used to thinking about.


> If you're confident in the software you're running on a non-shared hardware...

and that you won't be hit by a remote execution vulnerability.


That's what being confident in the software means.


As the owner of gopher://jdebp.info/1/ and the author of gopher://jdebp.info/h/Softwares/djbwares/guide/gopherd.html , I disagree. GOPHER needs a lot of improvement merely to learn the lessons that people learned with FidoNet in the 1980s.


Followup legitimate question: the only way to read data is to control the results of a speculative execution or fetch, right?

For JavaScript, won't it be sufficient to check all the calls out of it so that they can't pass data that controls an exploitable speculative execution, and also generate JIT code so the JS itself can't create exploitable instructions? The API will have to be heavily scrutinized and the JS will run somewhat slower.

If the rest of the browser code is vulnerable, but the JS code can't control the speculative execution then it should be safe to run any JS.


Javascript is theoretically fixable -- what's needed is less fine-grained timing capabilities. I think it would be very hard to completely eliminate the possibility of controlling speculative execution as long as you can predictably invalidate the CPU branch predictor, which can be done even at a high level with if statements. Unless you get rid of the notion of contiguous arrays (making predictable word-aligned cache manipulation nearly impossible) and potentially remove the JIT completely in favor of an interpreter, it's hard to completely be rid of this class of an attack when executing any sort of code on the processor. Those steps are probably possible, but the JS performance hit that ensues would be the death knell of any browser.

That said, without some way of extracting timing at the granularity of 10s of instructions, this attack is moot. So that's likely going to be the mitigation. Unfortunately, the web frames used in some apps are infrequently if ever updated, so JS engine updates there are gonna be hard.


Why does it matter if they can cause the CPU's predictors to guess incorrectly if they can't control the target address of the branch or memory access?

Example: "if (a < length) return data[a]". If "a" comes directly from JavaScript then they trick the CPU into fetching data[a] even if it's invalid speculation and thrown out. But if there's a safe barrier between "if (a < length) { prevent_speculative_execution; return data[a]}" then they cannot learn anything.

I concede that safely checking all data coming from JS code to the browser would be a huge task, but pretty sure it would work to fix the problem for JavaScript although not in general, between processes with shared IO pages and such.


Honestly, you probably don't even need the barrier in your example. Getting data[a] into the cache is no information leak if the attacker already knows a. That's why the example in the Spectre paper uses an additional level of indirection.


Yes thanks to HN being so quick to freeze comments I was unable to fix the example.

Point is, a JavaScript program in isolation cannot read anything, it has to interact with the other target code somehow. If that interaction (the data passed over the API call) can't fail after a certain point and can't be used to read data before that point, then the JS can't read anything.


> Where does this path end?

It ends with the performance advantages of OOO execution being effectively negated by the workarounds to address the security issues it causes.

The following parable is edifying: https://www.cs.utexas.edu/users/EWD/transcriptions/EWD05xx/E...


Seems like the ultimate end-game here is to have mini-vms for every process using CPU-level ring protection. If you can't speculate across privilege levels, only inside them, it isn't a security problem anymore.


Or time to have the kernel live in dedicated cache never accessed/shared with anything else. Let the CPU speculate all it wants, just not in the kernel's cache. It may even be time for dedicated kernel CPUs/cores.


Reading Kernel memory (Meltdown attack) is extra bad but regular user processes being able to read each other's memory (Spectre attack) is also very bad and not solvable by isolating the kernel.


I'm less worried about my Steam client reading my chat cache than something inside my web browser reading the keys that encrypt my home directory. Short of abandoning all sharing, the least we can do is isolate kernel cache.


That depends on what you are chatting about. My chat log would be very interesting to our competitors. The key that encrypts my home directory isn't useful because the firewall blocks your access to my home directory (that's a different layer of security).


Which is why one solution is to put secrets in the kernel and use the Meltdown mitigation to protect them.


> It may even be time for dedicated kernel cpus/cores.

Oh yes, I agree! One needs to be able to physically (un)lock the "kernel fpga" like a door, without remote capabilities, except for server CPUs. Or whatever chip designers believe is a good "physical kernel embodiment" other than an FPGA.

EDIT: I know it's not really clever, but I would really enjoy hearing any solutions that doesn't try to fix it at the hardware level.


Qubes OS [1] does something like that

[1]: https://www.qubes-os.org/


Yet Meltdown nuked exactly that.


> mini VMs for every process using CPU ring protection

Yes. We should really start to learn from history: the MULTICS operating system already had 16-ring CPU support back in the early 1970s. MULTICS is the mother of UNIX, its smaller child. MULTICS had so many advanced features that barely got implemented (often reinvented) in newer OSes. It's time to read the old docs and ask the old devs who are still alive. (Another such often-overlooked gem is Plan 9, but it's better known thanks to the Go language devs.)

Older Intel CPUs supported only 2 rings. Modern Intel CPUs support only 4. Windows and Linux use ring 0 for kernel mode and ring 3 for user mode. And Intel introduced a ring -1 for VT.

  "To assist virtualization, VT and Pacifica insert a new 
  privilege level beneath Ring 0. Both add nine new machine 
  code instructions that only work at 'Ring -1,' intended to 
  be used by the hypervisor."
It's time for modern operating systems to use more rings, and modern CPUs to correctly protect between different rings.

https://en.wikipedia.org/wiki/Multics

https://en.wikipedia.org/wiki/Protection_ring


We have different utility functions, you and I.


tptacek exploits computers for a living, so it's glorious for him :)


It's not that it makes the practice of breaking into computers that much more interesting so much as it makes the underlying field much more interesting to work in. The engineering problems just got a lot more complex. We're all taking an attack vector seriously --- microarchitectural side channels --- that we weren't taking as seriously before, except as an abstract threat to crypto and a way of defeating a mitigation --- KASLR --- that nobody believed in anyways.

What's glorious is that serious software security people now have to start being literate about what it means to reverse engineer and dump the branch history buffers on different CPUs. Getting dragged through this kind of minutiae is the reason I'm still in this field after 22 years.

And I'm just a bystander here. Imagine what it must have been like for Jann Horn over the last several months!

This subsection describes how we reverse-engineered the internals of the Haswell branch predictor. Some of this is written down from memory, since we didn't keep a detailed record of what we were doing.

... because shit was so crazy while they were working this out that they didn't have the cycles to write everything down!


I'd be surprised if other Googlers like Christian Ludloff (also of sandpile.org fame) and Dean G were not involved. They know x86 better than many engineers at Intel/AMD, having been in charge of the most performance/cost critical code (e.g. tuning search serving down to the last cycle), as well as qualifying new platforms and identifying a steady stream of CPU bugs.


As someone specializing on a different field, this comment reads to me as if you were an astronomer being excited about discovering a gigantic meteor heading towards the Earth :)


Sure, if your subfield of astronomy was all about practical ways to adapt to living on meteor fragments!


Some of us would be terrified, but also excited about the prospect of studying a supermassive black hole from the inside. You know what I mean? Some things are just so singular and incredible, and so to the heart of a given field of interest, that “glorious” is a perfect word to describe it. An alien invasion of Earth would be a nightmare, but for some it would be the moment they could finally move past the realm of the hypothetical.

Passion is passion, even when it’s terminal.


> they didn't have the cycles to write everything down!

hahaha, that was a good pun! Do you have a link to Jann Horn's personal blog or GitHub? I've never heard of him before.



Isn't that a bit like a firefighter saying your house burning down is "glorious"?


How many firefighters do you know? I guarantee you every one of them gets excited at the prospect of a "good" structure fire.


Maybe more like a meteorologist excited by a really bad storm.


The result might not be glorious but the fire is amazing to watch.


To the extent that he exploits computers for a living (e.g. pentesting) rather than stopping exploits for a living, it seems more like a (presumably law-abiding) arsonist calling houses burning down glorious.


Arsonists set fires. Vendors create vulnerabilities, not pentesters.


Ooooh, I dunno, I might argue that vendors have their role in creating pentesters. ;-)


I like to think of a lot of vulnerability discovery and research as solving a puzzle. In the sense that this puzzle has so many far reaching implications makes it totally compelling to me. tqbf says "glorious", and I couldn't disagree.

[Edit] Or, how far down does the rabbit hole go?

Additionally, it is quite fascinating to me to compare the complexity of modern CPUs with, say, a compiler.


> leveraging fundamental assumptions about high-performance CPU design.

I believe the generalized fix is to restore the entire CPU state after a mispredict. You’d either need to add an extra copy of the entire processor state (tens of megabits) for every simultaneous predict you support ($$$) or keep track of how to revert all changes and revert them one at a time ($, slow).


This is harder than it seems, because once cache is deleted you can't just un-delete it, you'd have to go back to memory and pull it again.

Only the "extra copy of processor state" thing is really viable. You have to have a speculative cache and buffer in reads that only get flushed to the main cache once they're confirmed to be valid, which is enormously complicated. This facility already exists for writes, but now it needs to exist for reads too.

GP is absolutely correct that this is a fundamental assault on processor design as we know it, the speculative execution concept is going back to the drawing board for a major re-think.


I can’t help wondering what igodard’s day is like so far...


"I'm walking on sunshine..."


Wasn't Intel's transactional CPU memory (TSX) the solution, but it also failed due to bugs?

Sorry for quoting wikipedia, but I'm not at school, hah! [1]

"TSX provides two software interfaces for designating code regions for transactional execution. Hardware Lock Elision (HLE) is an instruction prefix-based interface designed to be backward compatible with processors without TSX support. Restricted Transactional Memory (RTM) is a new instruction set interface that provides greater flexibility for programmers.[13]

TSX enables optimistic execution of transactional code regions. The hardware monitors multiple threads for conflicting memory accesses, while aborting and rolling back transactions that cannot be successfully completed. Mechanisms are provided for software to detect and handle failed transactions.[13]

In other words, lock elision through transactional execution uses memory transactions as a fast path where possible, while the slow (fallback) path is still a normal lock."

[1] https://en.wikipedia.org/wiki/Transactional_Synchronization_...



CPUs have been vulnerable to this attack since 1995. How did it collectively take us 22 years to figure this out? I know it's a highly esoteric complex attack, but there's no shortage of clever hackers in the world.


- we didn't have browsers compiling JavaScript into machine code

- we didn't have hyperconverged cloud infrastructures running arbitrary entities' code next to each other


Sounds like it's time for me to give up on the so-called "modern web" and install noscript.


You can have both "modern web" and block most JavaScript. You just need to keep adding scripts to the whitelist until sites you trust work again. It's a bit arduous at first, but possible to get used to once everything you visit daily has been added.


Unless, of course, the site you trust is hosted in a shared hosting VM which is also vulnerable to spectre or meltdown. In which case, you can’t trust the scripts.


spectre can read, not write.


If I can read arbitrary data, what’s stopping me from reading the credentials I need to write data?


What if I read the site's TLS/SSL keys? I could MITM the connection and inject JS to do more malicious things.

Or even easier, get the SSH key for the VM. Then do whatever I want.


If it can read the right data (private keys, etc.), then it can write whatever it wants.


Great answer.

The web issue is easier to mitigate if not fix completely since there is already a massive infrastructure for widespread, rapid browser updates, and crippling Javascript to eliminate attack vectors such as high-resolution timers is completely acceptable.

The cloud/vm infrastructure is a massive problem though. It is 100% required that VMs be fully isolated. The entire infrastructure breaks down if they aren't.


It's sort of been well-known that speculative execution opens up the possibility of side-channel attacks for quite some time. Hell, it's long-known that SMT (e.g., HyperThreading) can leak keys in a not-really-fixable way.

What's new and surprising is the power of these side-channel attacks--you can use these, reliably, to exfiltrate arbitrary memory, including across privilege modes in some cases (apparently, some ARM cores are affected by the latter vulnerability, in addition to Intel).


Honestly, we knew about this in the 70s. Mainframe/time-sharing systems had lots of protections against attacks like this. The problem is that mainstream computing went cheap/single-user and then attempted to build a multi-user/untrusted-code execution environment on top of it. Now it's come back to bite us in the ass.


>there's no shortage of clever hackers in the world.

Are you sure?


Well, these are workarounds because fixing the problem at the source is hard.

The right fix is to prevent speculatively executed code from leaking information.

Here that perhaps means associating cache lines with a speculative branch somehow so that they aren't accessible until/unless the speculative branch becomes the real branch. (I have no idea exactly how that would be done or what the performance cost might be... I'd really need to know the details of how speculative execution is implemented in a particular CPU to even be able to guess.)


Agreed. I haven't had this much fun thinking through the implications of a new exploit technique in a long time. It is truly beautiful.


Prediction: This will be just like any vulnerability disclosure. The infosec people and media will scream hysterically about how game changingly bad it is. The OS vendors will patch, and business will go on as usual.


i know this came out as a leak, but makes one wonder how "responsible" even a Jan 9 official announcement would have been. the scope is absolutely terrifying. this bug will be exploitable for a very long time.


They had like 6 months or so... how is more time going to make things less painful?


Jan 9, 2019? 2050? How much longer is long _enough_?


i guess at minimum it's worth asking how many major hosting providers have been fully patched at the time of disclosure. in addition to browsers and OSes.


You don't "think infosec". If I'm an attacker and I notice both Amazon and Azure rebooting all their systems, I know something is up. When I see that both Microsoft and Red Hat employees are working overtime, it gives away more information. All I have to do is crack one of their patched systems and I can bin-diff it and figure out what is up.

Then I sell it off to blackhats before the rest of the world is aware.



