
These timing measurements are by nature going to have a small range of possible values; the size of the range is likely going to be system specific, and for some systems there may be only a few possible outcomes.



And that's enough. One bit is enough; even less than one bit of randomness per measurement is enough if you can just repeat the process enough times.
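To put rough numbers on that (my own illustrative arithmetic, not from the article): min-entropy from independent samples adds up, so even a small fraction of a bit per sample gets there with enough repetitions:

    0.25 bits/sample x 1024 samples = 256 bits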


How are we to know how much entropy the process provides on our system? The patch in the article counts one bit per timer interrupt, but there may be much less than that. If the TSC and the timer clock are linked, and the system is totally blocked waiting on entropy, it seems plausible that the TSC reads will be predictable/repeatable beyond maybe the first value.


If you've ever tried to measure cycle-accurate timings for algorithms, you might've noticed that even relatively simple and short instruction streams have variance.
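If you want to see this for yourself, a minimal user-space sketch along these lines (mine, not from the article; assumes x86 and the __rdtsc() intrinsic from x86intrin.h) usually shows run-to-run variance even for a trivial loop:

    /* Sketch: observe cycle-count variance of a short, fixed instruction stream. */
    #include <stdio.h>
    #include <x86intrin.h>

    int main(void)
    {
        for (int run = 0; run < 10; run++) {
            unsigned long long start = __rdtsc();
            volatile unsigned long sink = 0;
            for (int i = 0; i < 1000; i++)
                sink += i;                 /* simple, branchy work */
            unsigned long long end = __rdtsc();
            printf("run %d: %llu cycles\n", run, end - start);
        }
        return 0;
    }

The deltas typically differ from run to run because of interrupts, cache and branch-predictor state, frequency scaling, and so on.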

Whether the TSC clock and timer clock are linked or not is completely irrelevant, since the timer clock is only used to 1) credit entropy and 2) possibly introduce additional jitter. The value of the TSC when the timer interrupt runs is not used as entropy at all, so even if that were 100% predictable, it doesn't matter.

It only matters that you do have jitter (between calls to schedule() and random_get_entropy()), and I have a hard time imagining a modern system with a high-res TSC where calling schedule() with active timers in a loop (with branches) would not have any jitter at all. You probably have way more entropy than one bit per jiffy; I think the counter is very conservative.

Notice that while entropy is credited only once per tick, actual TSC values are mixed into the entropy pool on every iteration of the loop that calls schedule().
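For reference, the loop being described looks roughly like this (paraphrased from memory of the patch, so treat it as a sketch rather than verbatim kernel code; random_get_entropy(), schedule(), mod_timer(), mix_pool_bytes() and credit_entropy_bits() are existing kernel helpers):

    /* Timer callback: fires once per tick and only credits one bit. */
    static void entropy_timer(struct timer_list *t)
    {
        credit_entropy_bits(&input_pool, 1);
    }

    static void try_to_generate_entropy(void)
    {
        struct {
            unsigned long now;
            struct timer_list timer;
        } stack;

        stack.now = random_get_entropy();

        /* If the counter doesn't move between two reads, give up. */
        if (stack.now == random_get_entropy())
            return;

        timer_setup_on_stack(&stack.timer, entropy_timer, 0);
        while (!crng_ready()) {
            if (!timer_pending(&stack.timer))
                mod_timer(&stack.timer, jiffies + 1);
            /* Mix the latest TSC sample on every pass... */
            mix_pool_bytes(&input_pool, &stack.now, sizeof(stack.now));
            schedule();
            /* ...and take a fresh sample after being scheduled again. */
            stack.now = random_get_entropy();
        }
        del_timer_sync(&stack.timer);
        destroy_timer_on_stack(&stack.timer);
        mix_pool_bytes(&input_pool, &stack.now, sizeof(stack.now));
    }

So the credited amount (one bit per tick) is decoupled from the raw material (one TSC sample per schedule() round trip).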



