IIRC, there was a proposal some time ago to use performance counters to detect a rowhammer attempt (high number of cache misses) and stop it (by pausing the offending process until the DRAM refresh can catch up). Did anything come out of it?
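(For anyone who hasn't seen the idea, the control loop is simple enough to sketch in userspace Python. Everything here is illustrative: the threshold, window, and pause length are invented, a real implementation would sit in the kernel's PMU interrupt path, and raw cache-miss counts are a much cruder signal than what the actual proposal used.)

    import os, signal, subprocess, sys, time

    PID = int(sys.argv[1])
    WINDOW = 0.1                 # sampling window in seconds (invented)
    MISS_THRESHOLD = 5_000_000   # misses per window we treat as hammering (invented)
    PAUSE = 0.064                # roughly one DDR4 refresh window (tREFW = 64 ms)

    def cache_misses(pid, window):
        """Count cache misses for pid over the window using perf(1)."""
        out = subprocess.run(
            ["perf", "stat", "-e", "cache-misses", "-p", str(pid),
             "sleep", str(window)],
            capture_output=True, text=True).stderr
        for line in out.splitlines():
            if "cache-misses" in line:
                n = line.split()[0].replace(",", "")
                return int(n) if n.isdigit() else 0
        return 0

    while True:
        if cache_misses(PID, WINDOW) > MISS_THRESHOLD:
            os.kill(PID, signal.SIGSTOP)  # pause the suspected hammerer
            time.sleep(PAUSE)             # let DRAM refresh catch up
            os.kill(PID, signal.SIGCONT)

The hard part, presumably, is distinguishing hammering from ordinary cache-unfriendly workloads, which may be where proposals like this get stuck.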
I wish we could get rid of this dependency on the natural world that computers seem to have. Sadly I can't think of a way to implement computers outside of reality and still have some safe interface to them.
I always see computers as instruments for creating an approximation of a Platonist mathematical space from a physical space. Theoretically, digital computing can be done to any precision we want, until the non-ideal physical properties and constraints kick in: time, space, processing power, physics. In other words, a leaky abstraction...
The dependency comes from pushing the physical limits until the abstraction leaks. Older memory technologies were not vulnerable to rowhammer. But they were also a lot slower.
This is why I'm pleased to announce my new Petahertz Rock Computer line. Not only are they the fastest computers in the world, but they are also the most secure. No data will leak from a system implemented on a rock computer, ever.
Buy one now and it will ship with a complimentary write-only SSD for all your secure storage needs!
Theoretically, if we accepted lower performance, could we design our hardware and software to actually be secure? The number of exploits over the last two years is making my head spin.
For this case we could probably fix it without any real performance impact if we actually prioritized it.
Put 14 more bits for a counter in each row, and adjust the DDR interface semantics to guarantee time for the extra refreshes, and I think you avoid all the problems this attack depends on.
Total cost <0.1%
Don't cheap out with a tiny hash table stuck to the side or whatnot.
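For concreteness, here's a toy software model of that per-row counter scheme (the threshold and geometry are invented, and in reality this logic would live in the DRAM die or memory controller, not in software):

    COUNTER_BITS = 14
    THRESHOLD = (1 << COUNTER_BITS) - 1  # activations a row may see before
                                         # its neighbors must be refreshed
    NUM_ROWS = 1 << 16

    act_count = [0] * NUM_ROWS

    def targeted_refresh(row):
        """The extra refresh the adjusted DDR semantics must leave time for."""
        if 0 <= row < NUM_ROWS:
            pass  # in hardware: restore the row's charge, undoing disturbance

    def activate(row):
        """Called on every ACT command to a row."""
        act_count[row] += 1
        if act_count[row] >= THRESHOLD:
            targeted_refresh(row - 1)  # the physically adjacent victim rows
            targeted_refresh(row + 1)
            act_count[row] = 0         # safe to start counting again

    def auto_refresh_all():
        """The normal periodic refresh (every 64 ms) also clears all counters."""
        for r in range(NUM_ROWS):
            act_count[r] = 0

The point of guaranteeing time in the DDR semantics is that the targeted refresh can always be issued before the threshold is crossed, so the worst case is bounded no matter what access pattern the attacker uses.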
This brings up something I've often wondered about: we can basically guarantee that a very short snippet of code (say, a few lines) is secure, right? However, there is no real way to guarantee that full programs are secure. Can we not truly guarantee that some portion of code is secure, or is it simply the amount of code that makes it impractical (and financially infeasible)? If the latter, what is the practical boundary?
> we can basically guarantee that a very short snippet of code (say, a few lines) is secure, right?
That's the problem, we can't do that. Between hardware flaws like Rowhammer, Meltdown, Spectre, etc., it's not possible to say that a piece of code isn't going to trigger something, intentionally or not.
You can say that something very very likely doesn't trigger any known vulnerability, but only while also specifying the exact hardware to run it on and probably other external variables.
> Between hardware flaws like Rowhammer, Meltdown, Spectre, etc., it's not possible to say that a piece of code isn't going to trigger something, intentionally or not.
I guess I was thinking software vulnerabilities specifically (although in the context of this article I should have thought about the hardware side more). Or to be even more specific, flaws that are from that program in particular.
You can formally prove that a program is a refinement of a specification, or that a subroutine always obeys certain invariants (when executed honestly). See Dafny and SPARK (Ada) for examples.
It reduces to theorem proving with an SMT solver like Z3, if your language is Turing-complete and your invariants are arbitrarily general. SMT solvers are fast and powerful these days, but the usual issues are:
1. It's too cumbersome to define invariants or write the type of specs that can check program correctness.
2. Things get exponential as state space increases.
Modern checked languages are finally addressing #1, though they will sometimes have blind spots (e.g. Dafny can't reason about concurrency; TLA+, which can, is very abstract and high-level).
Writing programs as bundles of isolated, formally verified modules, then proving the correctness of the program with a high-level spec that assumes module correctness, lets you scale to real, large programs. This is how Dafny works, for example -- functions declare pre- and post-conditions, so Dafny can throw away its reasoning about a callee's internals when checking a caller.
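A runtime caricature of that in Python (Dafny discharges contracts like these statically with Z3, so in the verified build nothing actually executes; plain asserts just make the shape visible):

    def int_sqrt(n: int) -> int:
        assert n >= 0                          # precondition (Dafny: requires)
        r = 0
        while (r + 1) * (r + 1) <= n:
            r += 1
        assert r * r <= n < (r + 1) * (r + 1)  # postcondition (Dafny: ensures)
        return r

    def caller(x: int) -> int:
        # Verifying this function needs only int_sqrt's contract, not its body:
        # the precondition holds (x*x >= 0), so the postcondition can be assumed.
        return int_sqrt(x * x)

That's the modularity: once int_sqrt is proven against its contract, the verifier never has to look inside it again.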
This strategy really is overlooked! It's powerful enough that you can eliminate memory/argument checks at compile time, ditch all your unit tests and KNOW your program implements a spec. It should be mandatory for all safety-critical systems that can't rely on hardware fail-safes (e.g. airplanes.)
Software side, there's a lot that can be proven and said about small sections of programs. I think a sibling commenter talked about an ISO standard related to that (and I think it covers some hardware bits). My layman's understanding is that it's a way to specify as many assumptions as possible about how the code will be run, where, and what side effects it will have and no others. That makes for a really nice set of assertions, but the end effect is that you can't say anything about certain kinds of programs; i.e. this program is safe, that program can't be safe, and these programs over here are unknowable. Gödel should never have been allowed to publish anything.
> we can basically guarantee that a very short snippet of code (say, a few lines) is secure, right?
I’m not sure that’s true, but even if it is, you have to ask: Which few lines?
Let’s say we have the technology that can analyse (say) five lines of code and prove them completely correct and secure. That doesn’t mean you can prove a 500-line program to be secure by running the analysis on each of the 100 sets of five consecutive lines. Code in one place can affect the logic elsewhere. So now to prove a 500-line program correct, we’d need to analyse every possible combination of 5 lines chosen from the 500 => 255244687600 different proofs to check!
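(That count is just 500 choose 5, easy to check:)

    import math
    print(math.comb(500, 5))  # 255244687600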
It's not just that. Before you can prove anything "correct" you need to define what "correct" means. Now you have the meta problem of proving that your definition of "correct" is correct.
In the context of Rowhammer, we do already, right? Rowhammer susceptibility is considered a defect in memory modules, it's a deviation from the functional specification of the component.
See also this quote in the article. "We followed a multi-party coordinated vulnerability disclosure involving the main memory vendors"
Or, if you can spare the cost, switch to SRAM instead. Not sure if the density would be comparable enough to be reasonable even with the 100x jump in price (source: pulled out of hat).
Not everybody's working on the web or something running untrusted code. It's not fair to force people to pay a financial or performance hit for something they don't need; the world loses if people's scientific computations take 20% longer to run because of some ugly patch to an obscure vulnerability in speculative execution, or because the university can afford fewer computing resources when the only option is expensive RAM.
If ECC was made for the same markets as normal memory, it would cost <12% more and make entire computers cost <2% more, with no meaningful performance hit. It's reasonable and should be mainstream.
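For anyone who hasn't seen how the check bits work, here's a miniature Hamming(7,4) single-error corrector in Python; ECC DIMMs do the same thing at 64 data + 8 check bits per word, which is presumably where the ~12% figure comes from:

    def encode(d):  # d: four data bits
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p3 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # bit positions 1..7

    def decode(c):  # c: seven stored bits, at most one flipped
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the bad bit, 0 = clean
        if syndrome:
            c[syndrome - 1] ^= 1         # correct the single-bit flip
        return [c[2], c[4], c[5], c[6]]

    word = [1, 0, 1, 1]
    stored = encode(word)
    stored[4] ^= 1                   # a rowhammer-style single-bit flip
    assert decode(stored) == word    # corrected transparently

Real DIMMs use SECDED (correct one flip, detect two) per 64-bit word; rowhammer can occasionally flip multiple bits in one word, so ECC raises the bar substantially rather than closing the hole completely.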
SRAM isn't competitive for bulk storage once access latencies of 10~50 ns are acceptable. You'd just use DRAM cells and tune the parameters until they deliver your desired performance. The energy efficiency of normal server DDR4 (non-overclocked, high-density) is orders of magnitude better than state-of-the-art SRAM, due to leakage. You _only_ have to add more ECC bits and more aggressive feedback in favor of refresh rate vs. response time.
It's not particularly hard to detect patterns that try to provoke rowhammer and respond with even more aggressive countermeasures. DoS vectors on that front are already to be expected, so turning Rowhammer attempts into something akin to a no-worse-than-2x slowdown seems an easy ask.
Ludicrously expensive. The largest SRAMs that are commercially available are around 288 Mbit (32 MB with parity), and cost hundreds of dollars per chip.
I'm not sure you could even physically fit 16 GiB of SRAM onto a CPU with current technology. SRAM cells are much larger than DRAM.
That already exists with AMD SME and AMD SEV (https://en.wikipedia.org/wiki/Zen_(microarchitecture)#Enhanc...). But to protect against bit-flipping attacks like rowhammer you need authentication, not encryption, and I don't think these features authenticate the encrypted memory (since they would need extra memory to hold the authentication tag).
By authentication you mean some kind of HMAC? I guess that's fair. Still, I figured that corrupting 1 bit in the encrypted output would corrupt the entire block & thus be very difficult to exploit.
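That intuition is right for block modes like the tweaked AES that SME/SEV reportedly use: one flipped ciphertext bit garbles the whole 16-byte block on decryption. But the decryption still succeeds silently, so the corruption goes undetected; authentication is what turns the flip into a visible fault. A minimal Python sketch of the tag-check idea (key handling here is obviously a toy):

    import hashlib, hmac, os

    mac_key = os.urandom(32)
    ciphertext = bytearray(os.urandom(64))  # stand-in for an encrypted memory line
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()

    ciphertext[17] ^= 0x04                  # rowhammer flips one stored bit

    ok = hmac.compare_digest(
        tag, hmac.new(mac_key, ciphertext, hashlib.sha256).digest())
    print(ok)  # False: the flip is caught instead of decrypting to silent garbage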