
Your claim makes me a bit skeptical, unless I'm misunderstanding something here... I'd assume that a data source of pure natural radiation would be genuinely random even if its distribution isn't uniform, and that using anything in your computer to compensate for it would actually do the opposite: reduce randomness by introducing bias.

It reminds me of a story from Cryptonomicon in which a character mentions a secretary grabbing randomly spun number balls from a tumbling device while blindfolded (if I remember the objects right). She doesn't like the results when she has to write them down because they don't look random enough to her, so she starts peeking and slightly "correcting" them, and thus ruins a number of one-time pads.



The signal coming from a typical radiation detector is an analog pulse train, which you have to somehow convert into useful binary data. Simply sampling this analog value at some regular interval will create samples that are hugely biased toward a small number of values (and the distribution between these values has more to do with measurement uncertainty than with the supposedly random phenomenon you are trying to measure), so you need some kind of processing step to get useful random numbers. A typical radiation-based TRNG works by comparing the analog value from the sensor against some kind of threshold (usually in analog hardware) and producing a stream of digital samples, which is then either passed directly through a von Neumann whitening algorithm or converted into a stream of pulse times from which only a few low-order bits are taken and whitened (the end result is mostly the same as whitening the bit stream directly, but timing the pulse lengths is slightly more efficient).

One can argue that a "quantum TRNG" based on a semi-transparent mirror produces unbiased output, but that is not true in practice because of manufacturing constraints (the mirror will not be perfectly semi-transparent) and implementation constraints (producing and measuring single photons is hard).
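A minimal Python sketch of the pulse-timing variant, assuming you already have pulse timestamps from the detector: keep only the low-order bits of each inter-pulse interval. The timestamps, the counter resolution and the number of bits kept are made-up parameters for illustration, not a reference design.

    # Hypothetical: pulse_times_ns is a list of detector pulse timestamps in
    # nanoseconds; keep only the noisiest low-order bits of each interval.
    def intervals_to_raw_bits(pulse_times_ns, keep_bits=2):
        bits = []
        mask = (1 << keep_bits) - 1
        for earlier, later in zip(pulse_times_ns, pulse_times_ns[1:]):
            interval = later - earlier      # inter-arrival time of decay events
            low = interval & mask           # discard the higher, more biased bits
            bits.extend((low >> i) & 1 for i in range(keep_bits))
        return bits

    # Example with made-up timestamps:
    print(intervals_to_raw_bits([1200, 3457, 3990, 7122, 9005]))

The raw bits produced this way are still biased and still need whitening, which is what the rest of this comment is about.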

The point is that you are going to do some kind of whitening anyway and you then have essentially three choices:

1) Design something which requires only von Neumann-style whitening, where there are still arbitrary parameter choices hidden in the hardware (a quick sketch of the debiasing step follows this list).

2) Design some non-trivial but still simple entropy-extraction/whitening algorithm (e.g. take a 16-bit sample and discard the top 10 bits and the bottom bit).

3) Just take the measurement results and pass them through some kind of CSPRNG or sponge function.
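To make option 1 concrete, here is a small sketch of von Neumann-style debiasing (the parameter-free part; the arbitrary choices mentioned above sit in how the hardware produces the input bits). It assumes the input bits are independent with a constant, unknown bias.

    # Von Neumann debiasing: look at non-overlapping bit pairs, emit the first
    # bit of each (0,1) or (1,0) pair, and drop (0,0) and (1,1) pairs.
    def von_neumann(bits):
        out = []
        for a, b in zip(bits[0::2], bits[1::2]):
            if a != b:
                out.append(a)   # (1,0) and (0,1) are equally likely
        return out

    print(von_neumann([1, 0, 1, 1, 0, 1, 0, 0, 1, 0]))  # -> [1, 0, 1]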

The third variant is what makes the most sense for most applications, because usually you either don't care about the randomness that much or you want to use it for cryptographic purposes. And if you want to do cryptography, then philosophical arguments about cryptography-based whitening not being "truly random" do not make sense, because your application itself rests on the belief that the crypto primitives used are "random enough".
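A sketch of the third variant, using SHAKE-256 from Python's hashlib purely as a readily available sponge/extendable-output function; the sample values and output length are invented for illustration.

    import hashlib
    import struct

    # Absorb raw detector samples, squeeze out whitened bytes.
    def condition(raw_sample_bytes, out_len=32):
        sponge = hashlib.shake_256()
        sponge.update(raw_sample_bytes)
        return sponge.digest(out_len)

    # Example with made-up 16-bit ADC samples packed little-endian:
    samples = struct.pack("<8H", 513, 498, 530, 501, 517, 495, 522, 508)
    print(condition(samples).hex())

Whether or not you call this "truly random", the application only ever sees the output of a crypto primitive, which is exactly the point of the paragraph above.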


Facts I looked up to understand this comment:

* Bias in a bit-stream means any deviation from IID Bernoulli trials with p=0.5

* Von Neumann whitening addresses the IID Bernoulli case for p!=0.5 by looking at bit pairs. It takes the first bit when they differ, and no bits when they match. This works because (1,0) and (0,1) both occur with equal probability p(1-p).

* NIST wrote a remarkably accessible document [1]

[1] https://csrc.nist.gov/csrc/media/publications/sp/800-90b/dra...



