>Surely you can check the output and see if it's random?

No, you can't. That's an inherent property of randomness. If you're lucky, you can win the lottery ten times in a row. The only thing you can verify is that the random number generator is not obviously broken (see AMD's RDRAND bug), but you can't verify that it's truly random.

> isn't output the only way to tell??

Looking at the implementation is the only way to tell.

Let's say I generate a list of one million random numbers through a trustworthy source. Now I build a random number generator that does nothing but return numbers from that list, in order.
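Its output would pass any statistical test, because the numbers really did come from a good source, yet anyone with a copy of the list can predict every single value. A rough sketch of the idea in Python (the list here is just a stand-in for the trusted source):

    # An "RNG" that only replays a fixed, pre-generated list.
    # The output looks perfect; the generator is completely predictable.
    import secrets

    PREGENERATED = [secrets.randbits(64) for _ in range(1_000_000)]  # stand-in for the trusted list

    class ReplayRNG:
        def __init__(self, numbers):
            self._numbers = numbers
            self._index = 0

        def next64(self):
            value = self._numbers[self._index % len(self._numbers)]
            self._index += 1
            return value

    rng = ReplayRNG(PREGENERATED)
    print(rng.next64())  # looks random; is fully predetermined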

There is a way of verifying a random number generator from its output, but it is so impractical that it's only useful in theory. Remember how you can flip a coin 1000 times and get roughly 1/2 probability for each side? The number of samples you need grows with the number of possible outcomes. If your RNG returns a 64-bit number, you would have to take 2^64 * x samples, where x is a very large number (the larger, the better). Even x = 1 is already impractical (roughly 195 years at 3 billion RDRAND calls per second), but to be really sure you would probably need x > 10000. Nobody on Earth has that much time, least of all CPU manufacturers that release new chips every year.
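Back-of-the-envelope check on that figure, assuming the 3 billion calls per second quoted above:

    # Rough arithmetic for sampling every 64-bit output once (x = 1).
    samples = 2 ** 64                  # number of distinct 64-bit outputs
    rate = 3_000_000_000               # assumed RDRAND throughput, calls per second
    seconds = samples / rate
    years = seconds / (365.25 * 24 * 3600)
    print(f"~{years:.0f} years")       # ~195 years; multiply by x for x passes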




> but you can't verify that it's truly random.

Even if you cannot be 100% sure that something is truly random, there are plenty of statistical measurements that can be used to assess the quality of the output (in terms of entropy).
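For instance, a crude single-number check (nothing like a full test battery such as dieharder or NIST SP 800-22; os.urandom here is just a stand-in for whatever generator you are testing):

    # Estimate the Shannon entropy of the byte distribution of an RNG's output.
    # Good output should come out very close to 8 bits per byte.
    # Passing this only means "not obviously broken", not "truly random".
    import math
    import os
    from collections import Counter

    data = os.urandom(1_000_000)           # sample from the generator under test
    counts = Counter(data)
    total = len(data)
    entropy = -sum((c / total) * math.log2(c / total)
                   for c in counts.values())
    print(f"{entropy:.4f} bits per byte")  # ~7.9998 for healthy output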

> Let's say I generate a list of one million random numbers through a trustworthy source. Now I build a random number generator that does nothing but return numbers from that list, in order.

In less than a second your generator would run out of numbers, which is a pretty obvious flaw to spot.

The real issue is cryptographic generators, because they are designed to simulate randomness, and they improve adversarially as statistical methods of analysis become more powerful. At any point in time, state-of-the-art cryptography can produce fake randomness that the current state of the art in cryptanalysis cannot prove to be non-random.


> At any point in time, state-of-the-art cryptography can produce fake randomness that the current state of the art in cryptanalysis cannot prove to be non-random.

That may not be true. (As in, I'm not sure it is.)

For many useful crypto algorithms with a nominal security level (given in bits), there is a known theoretical attack that shaves off several bits of that level but is still infeasible.

For example, we might say a block cipher takes a 128-bit key and needs on the order of 2^128 attempts to brute-force the key.

The cipher remains useful even when academic papers show how to crack it in about 2^123 attempts.

The difference between 2^128 and 2^123 (a factor of 32) is irrelevant in practice as long as we can't get anywhere near that many calculations. But it does represent a significant bias away from "looks random if you don't know the key".
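Rough numbers to put that in perspective; the 10^12 guesses per second is an arbitrary assumption:

    # 2^123 is only 32 times cheaper than 2^128, and both are far out of reach.
    rate = 10 ** 12                              # assumed brute-force guesses per second
    year = 365.25 * 24 * 3600                    # seconds per year
    for bits in (128, 123):
        years = 2 ** bits / rate / year
        print(f"2^{bits}: ~{years:.1e} years")   # ~1.1e+19 and ~3.4e+17 years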

It seems plausible to me that a bias like that could show up in statistical analysis: state-of-the-art cryptanalysis might be able to prove the output non-random by practical means while still being unable to crack it (as in recover the key or inputs) by practical means.


That makes sense. I stand corrected.



