
Many languages just don't have the primitives needed to write time-invariant code.


Are there any languages that properly support this, i.e. where you can express at the language level that the compiler should refrain from being too clever? If not at the language level, are there compilers which you can ask to keep an eye on the timing?


There are a few research languages where handling secrets and constant-time operations correctly is a first-class feature. See for example:

https://github.com/google/rune


Assembly.

Jasmin.


Even assembly doesn't work anymore, as CPU manufacturers no longer guarantee constant-time execution unless the software opts in to constant-time mode: https://lore.kernel.org/lkml/YwgCrqutxmX0W72r@gmail.com/


Still the best we can do.


That's true in the sense of "time invariance is not guaranteed as a first class feature". But as a practical matter this is as simple as choosing an invariant algorithm and disabling optimization for the module in question.
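A minimal sketch of that approach in C, assuming GCC (the pragma is GCC-specific) and using a branchless select as a stand-in for "an invariant algorithm":

    /* Sketch: data-invariant algorithm plus optimizer disabled for this
       translation unit, per the comment above. GCC-specific; illustrative only. */
    #pragma GCC optimize ("O0")

    #include <stdint.h>

    /* Returns a if choice == 1, b if choice == 0, without branching on choice. */
    uint32_t ct_select(uint32_t choice, uint32_t a, uint32_t b)
    {
        uint32_t mask = (uint32_t)0 - choice;   /* all-ones or all-zeros */
        return (a & mask) | (b & ~mask);
    }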


And even if they do, you're one optimizer or processor issue away from your precious time-invariant code becoming less so, to the point that you leak bits.

This is a very hard problem to get just right.


Is time invariance necessary to mitigate the attack, or would inserting pseudorandom delay from a source the attacker can't observe be sufficient?


Adding noise may make it a bit harder but does not fundamentally change anything. The measurements, especially over a network, are noisy to begin with and adding a delay will only add another noise source.

Aside from that, time invariance is the wrong term; the usual term is constant time. Time-invariant means it does not change over time, whereas what you want here is data-invariant timing, i.e. the runtime not depending on the data being processed.
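A small C illustration of the distinction (hypothetical names; whether a compiler preserves the second form is a separate problem):

    #include <stddef.h>
    #include <stdint.h>

    /* Data-DEPENDENT timing: returns as soon as a difference is found,
       so the runtime leaks how long the matching prefix is. */
    int leaky_equal(const uint8_t *a, const uint8_t *b, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            if (a[i] != b[i])
                return 0;
        return 1;
    }

    /* Data-invariant timing: touches every byte regardless of the inputs. */
    int ct_equal(const uint8_t *a, const uint8_t *b, size_t len)
    {
        uint8_t diff = 0;
        for (size_t i = 0; i < len; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }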


Noise can be removed by averaging results. It generally makes such attacks more expensive but still possible.


The delays should not be noise-like.

Determine, say, N = 10 points in the sensitive part of the algorithm. On startup / library initialization, assign N small random delays to these points. The delays stay the same in every invocation, creating a stable and unique timing profile. This profile is hard or impossible to reproduce on a different machine for analysis and finding correlations. To make things funnier, the random delay parameters should change every few weeks on long-running installations. This would invalidate previous timing statistics, making known-plaintext attacks more difficult.
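A sketch of that scheme in C, with hypothetical names and delay magnitudes (the reply below argues it does not actually help):

    #include <stdlib.h>
    #include <time.h>

    #define N_DELAY_POINTS 10

    static struct timespec delays[N_DELAY_POINTS];

    /* Called once at library initialization: pick N small random delays
       (here up to ~1 microsecond) that then stay fixed for every invocation. */
    void delays_init(void)
    {
        srand((unsigned)time(NULL));        /* not a CSPRNG; illustration only */
        for (int i = 0; i < N_DELAY_POINTS; i++) {
            delays[i].tv_sec  = 0;
            delays[i].tv_nsec = rand() % 1024;
        }
    }

    /* Inserted at the N chosen points in the sensitive code path. */
    void delay_point(int i)
    {
        nanosleep(&delays[i], NULL);
    }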


Realistic attacks tend to involve doing many different runs of the algorithm with different values of whatever input is timing-dependent. That includes this attack, where you’re doing many different TLS connections which result in different messages being decrypted, and the runtime depends on the message.

If you couldn’t vary the timing-dependent input, then you could effectively only measure a single duration value per target; doing multiple runs would just reduce noise. It’s very unlikely that a single duration would have enough information to recover a full key. So this is already a hopeless scenario for the attacker.

But if you can vary the timing-dependent input, then you can compare timing between different runs on the actual target machine; there’s no need to reproduce the exact timing profile on your own machine (which is very difficult anyway outside of small embedded CPUs). So a random delay per machine does not help.

Adding delays to sensitive parts of the algorithm would either be equivalent to adding a single overall delay (if the number of delay invocations is constant) or make the attack much easier (if not).


Unobservable pseudorandom delay has to be harder to get right (and verify) than time-invariant code, don't you think?


Not on modern architectures.

To get time invariance, I have to have guarantees from everything between the compiler and the hardware implementation. If I exclusively, explicitly, and completely control all of those layers, good to go... But that means I'm at least not publishing a library for public consumption.


> To get time invariance I have to

To get time invariance all I need is a clock. Responses are normalized to some value greater than the slowest computation. Thus, I don't need exclusive, explicit, and complete control of all layers, nor do I need to track whatever inevitable evolution those layers undergo without my knowledge.
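A minimal sketch of that idea in C, assuming a hypothetical do_rsa_operation() and a fixed budget chosen to exceed the slowest observed run (the replies below discuss why choosing that budget is the hard part):

    #include <stdint.h>
    #include <time.h>

    #define PAD_NS 5000000LL              /* 5 ms: must exceed the slowest run */

    extern void do_rsa_operation(void);   /* hypothetical sensitive operation */

    static int64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    void padded_operation(void)
    {
        int64_t start = now_ns();
        do_rsa_operation();

        int64_t remaining = PAD_NS - (now_ns() - start);
        if (remaining > 0) {
            struct timespec pad = { remaining / 1000000000LL,
                                    remaining % 1000000000LL };
            nanosleep(&pad, NULL);        /* respond only after the full budget */
        }
    }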

Barring any further threat model inflation involving ammeters, spectrum analyzers or what have you, how is this not a sufficient solution?


What is a 'computation' exactly, and how do you know the duration of the slowest one? I imagine the complexity spirals as you try to answer those two questions...


> What is a 'computation' exactly

Execution of whatever one proposes to code in a time invariant manner. In this case: "RSA decryption and signing operations."

> I imagine the complexity spirals

Measuring things and normalizing operation time doesn't appear terribly complex to me. How can endlessly reworking subtle algorithms for new compilers and hardware seem less complex than throttling well understood implementations with a clock?

Also, I asked a question. My question was sincere: how would using a clock to normalize operation time not be sufficient?


Measuring and normalizing the operations isn't necessarily straightforward. Suppose we're in a streaming context (common when encrypting things for the internet), in a real-world environment where computation time might depend on what other applications are running or on the temperature of the CPU; the maximum operation time will vary over time and with possibly unobservable changes in circumstances.

It's easy to spitball a seemingly 'good-enough' solution to this, but crypto doesn't seem to be a place where 'good-enough' is actually good enough.

https://superuser.com/questions/432579/how-is-cpu-temperatur...



