
> adding random noise just makes attacks take longer

No. It's true that, with a low-res clock source, in some cases an attacker can still get the precise measurement they're after by repeating an operation 1000x or more. In other cases, though, that is not possible, because at that timescale the signal they're after is degraded by noise that they cannot predict or control: operating system interrupts, memory traffic from other threads, etc.

Anyway, even if a lower-res clock source helps only 10% of the time, on defense you should always prefer to make an attack complicated, slow, and partially reliable rather than trivial, fast, and highly reliable.

Developers who need a high-res clock source for profiling, etc., should of course still be able to enable one selectively.

> You need to address the specific vulnerabilities, not shoot the clock_gettime messenger.

You can and should do both, to gain some defense-in-depth against future attacks.




Random noise isn't necessarily a hindrance -- it can help.

Contrary to intuition, the presence of random noise can actually make a signal detectable that would otherwise fall below the minimum detection threshold. See, e.g., stochastic resonance. (Essentially, the noise occasionally interferes constructively to 'bump' the signal up past the threshold, making it detectable.) If you are able to introduce and control your own noise, you may also be able to take advantage of coherence resonance.
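To make that concrete, here's a minimal Python sketch (all thresholds and amplitudes invented for illustration): a weak periodic pulse never crosses the detector threshold on its own, but with a moderate amount of added Gaussian noise the detector's firings become correlated with the pulse; too much noise drowns it again.

    import numpy as np

    rng = np.random.default_rng(1)

    # A weak periodic pulse train that never crosses the detector threshold alone.
    n = 200_000
    t = np.arange(n)
    signal = 0.5 * (np.sin(2 * np.pi * t / 1000) > 0)  # amplitude 0.5
    THRESHOLD = 1.0                                     # detector fires at >= 1.0

    def on_off_rates(noise_sigma):
        """Fraction of threshold crossings while the pulse is on vs. off."""
        noisy = signal + rng.normal(0, noise_sigma, n)
        fired = noisy >= THRESHOLD
        return fired[signal > 0].mean(), fired[signal == 0].mean()

    for sigma in [0.0, 0.2, 0.7, 2.0, 10.0]:
        on, off = on_off_rates(sigma)
        print(f"sigma={sigma:5.1f}  on={on:.3f}  off={off:.3f}  gap={on - off:+.3f}")

The on/off gap -- the detectability of the pulse -- is zero with no noise, peaks at an intermediate noise level, and fades again as the noise dominates. That non-monotonic curve is the signature of stochastic resonance.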

Randomness itself can be a very useful tool in many signal detection and processing systems. For example, sparse random sampling in compressive sensing can reconstruct some kinds of signals at frequencies beyond the Shannon-Nyquist limit of much higher-density fixed-rate sampling -- something thought impossible until relatively recently.
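A toy sketch of that idea, not tied to any particular system (the recovery method here is orthogonal matching pursuit, one of several standard choices): a 5-sparse, length-256 vector is recovered from only 64 random linear measurements.

    import numpy as np

    rng = np.random.default_rng(0)

    n, k, m = 256, 5, 64      # ambient dimension, sparsity, number of measurements
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

    A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sampling matrix
    y = A @ x                                   # m << n compressed measurements

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily recover a k-sparse vector."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x_hat = np.zeros(A.shape[1])
        x_hat[support] = coef
        return x_hat

    print("max reconstruction error:", np.abs(omp(A, y, k) - x).max())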

I would not be at all confident that such 'system' noise could not be filtered out statistically; it might even be used to an attacker's advantage.


> on defense you should always prefer to make an attack complicated, slow, partially-reliable rather than trivial, fast, highly-reliable.

Always? I would think it depends on the trade-offs: what costs you pay for doing that (in inconvenience, damage, or expense to non-attacking users and use cases; in opportunity cost to other things you could have been focusing on, etc.) compared to how much you lessen the threat. Security is always a cost-benefit evaluation.

Timing attacks are the worst, though. I think this may be only the beginning of serious damage to the security of our infrastructure via difficult-to-ameliorate timing attacks.


> In some cases though that is not possible, because at that timescale the signal they're after is degraded by noise that they cannot predict or control: operating system interrupts, memory traffic from other threads, etc.

Wrong; that noise will average out over a long enough data-collection period, leaving the signal behind.


I'm not sure that's true in this case. If your timer doesn't have the resolution to ever detect a cache hit, how can you measure its impact?


It's one of those things that feel utterly counter-intuitive.

But if you have a signal overlaid with random noise [1], and you know what your signal looks like and when it occurs, you can correlate against it. For example, a small delay that does or does not occur at certain known points will introduce a bias into a timer measuring it, no matter how noisy or coarsely quantized that timer is.
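Here's a small simulation of that bias, with every constant invented for illustration: the timer only ever reports multiples of 1000 ns, the secret-dependent delay is just 50 ns, and the system jitter dwarfs both -- yet the delay emerges in the difference of means once enough samples are averaged.

    import numpy as np

    rng = np.random.default_rng(2)

    RESOLUTION = 1000   # coarse timer: reports only multiples of 1000 ns
    DELAY = 50          # hypothetical secret-dependent delay, far below resolution
    BASE = 10_000       # baseline operation time in ns

    def measure(extra_ns, trials):
        # True duration = base + jitter (+ secret-dependent delay), then
        # quantized by the coarse timer; a random phase models the unknown
        # alignment between the operation and the timer ticks.
        t = BASE + extra_ns + rng.exponential(2000, trials)
        phase = rng.uniform(0, RESOLUTION, trials)
        return ((t + phase) // RESOLUTION) * RESOLUTION

    for trials in [100, 10_000, 1_000_000]:
        diff = measure(DELAY, trials).mean() - measure(0, trials).mean()
        print(f"{trials:>9} trials: measured difference = {diff:7.1f} ns "
              f"(true difference = {DELAY} ns)")

The random phase between the operation and the timer ticks is exactly what makes the quantized mean unbiased, so averaging shrinks the error roughly as 1/sqrt(N) and the 50 ns difference eventually stands out.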

Similar techniques have been used in other fields for decades to pull useful signals from far below the noise floor. A lock-in amplifier, for example, can go dozens of dB below the noise floor because it essentially correlates on frequency and phase, eliminating all but a tiny sliver of the noise. GPS signals, likewise, are typically received around 20 dB below the noise floor.
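For a sense of how that works, a minimal lock-in-style sketch in Python (all rates and amplitudes invented): a tone roughly 43 dB below the per-sample noise power is recovered by correlating against in-phase and quadrature references and averaging.

    import numpy as np

    rng = np.random.default_rng(3)

    fs, f0, n = 100_000, 1_000, 1_000_000   # sample rate, tone freq, samples
    t = np.arange(n) / fs
    tone = 0.01 * np.sin(2 * np.pi * f0 * t)   # tone power ~43 dB below the noise
    x = tone + rng.normal(0, 1.0, n)           # buried in unit-variance noise

    # Lock-in detection: multiply by in-phase and quadrature references at f0,
    # then average (a crude low-pass), keeping only energy at that frequency.
    i = 2 * np.mean(x * np.sin(2 * np.pi * f0 * t))
    q = 2 * np.mean(x * np.cos(2 * np.pi * f0 * t))
    print(f"recovered amplitude: {np.hypot(i, q):.2e} (true: 1.00e-02)")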

[1] It doesn't have to be random.

——

So these mitigations just make the attacks harder -- hopefully hard enough that they become infeasible to exploit widely.


Then why can't we take sharp, high-resolution photos of distant planets? Shouldn't we be able to average out all the noise for every pixel if we just collect light for long enough?


It's not the noise that prevents us from seeing distant planets, but the diffraction limit.

https://en.wikipedia.org/wiki/Diffraction-limited_system


No, because there's correlated noise: all the stuff in between us and the planet.


What type of stuff is between us and the planet and stays on the same pixel the whole time? I would assume everything in the universe is constantly moving: we move, the other planet moves. How can something block the same pixel of our view of the planet the entire time?


I don't think other planets usually move enough to cross one pixel. They've mostly been detected by changes in brightness of their host star. https://en.wikipedia.org/wiki/Methods_of_detecting_exoplanet...



> We move. The other planet moves.

Exactly. The same pixel isn't imaging the same location on the distant object. If it were, then what you say might be possible.



