As a result of system configuration or specific requirements placed upon an organization, contractual or otherwise, only /dev/random is guaranteed suitable for high-value keying material.
So from a programmatic standpoint, developers should use /dev/random on Solaris if they believe the material being generated is "high value" --- and it is up to the developer to make that determination.
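For what it's worth, the programmatic difference is just which device node you open. A minimal sketch in C (plain POSIX I/O with short-read handling; nothing here is Solaris-specific, and the helper name is my own):

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Fill buf with len bytes from the named device, retrying on
       short reads and EINTR. Returns 0 on success, -1 on failure. */
    static int read_device(const char *dev, unsigned char *buf, size_t len)
    {
        int fd = open(dev, O_RDONLY);
        if (fd < 0)
            return -1;
        size_t off = 0;
        while (off < len) {
            ssize_t n = read(fd, buf + off, len - off);
            if (n < 0 && errno == EINTR)
                continue;
            if (n <= 0) {            /* real error, or unexpected EOF */
                close(fd);
                return -1;
            }
            off += (size_t)n;
        }
        close(fd);
        return 0;
    }

With that, the "high value" decision reduces to read_device("/dev/random", key, sizeof key) versus the same call with "/dev/urandom" --- keeping in mind that /dev/random reads may block while /dev/urandom reads will not.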
Yes, that's what I understood you to be saying before. What I'm saying is that nobody has come up with an argument for why that would be. In fact, every argument, even the ones that get down to the level of kmem-style magazines, ultimately ends up isomorphic to the argument we just had about the LRNG.
No case has actually been made for why the argument is different on Solaris than on Linux. I'm increasingly convinced that's because there's no difference, regarding this issue, between Linux and Solaris. The generators are different, but equally safe once seeded.
See, the problem with this is that you haven't put it plainly. I understand how CSPRNGs work. I understand a lot about how Linux's works, and a little bit about how Solaris's works. My sense is that if there's an argument about how Solaris urandom is inferior, I should be able to understand what it is. What is it?
I'm becoming increasingly convinced that there is no difference between the urandom story on Linux and the urandom story on Solaris. Not that the generators are the same, but that the differences simply do not matter.
If you don't know the specific answer, could you get one of your friends on the team to chime in? I'm reaching a threshold at which I'm going to start noisily telling people that urandom on Solaris is fine --- incidentally, a lot of very well-regarded software already agrees with me, so I feel reasonably safe joining the chorus.
I have personally verified that the documentation and guidance in Solaris is up to date and correct per the authors.
As I said before, and as I will say again, the differences do matter for some administrators with specific contractual and/or other obligations, and when generating "high value" keying material.
If you choose to advise others contrary to the documented guidance that Solaris provides, that is your choice.
I've just read through the Illumos code, and for Illumos at least, urandom actually seems less scary: there's a direct code path in /dev/random that reads raw entropy bytes (like thread timing) and returns them to callers, while the urandom path always goes through fips_random_inner() or equivalent.
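To make the shape of that concrete, here's a structural sketch of the two paths as I read them. Every identifier here is a placeholder I made up, not an actual illumos symbol; fips_random_inner does exist in illumos, but with a different signature, so I've named my stand-in accordingly:

    #include <stddef.h>

    #define SEED_LEN 20   /* placeholder size, not the real value */

    /* Placeholder declarations, not illumos functions. */
    extern int  extract_raw_entropy(unsigned char *buf, size_t len);
    extern void fips_random_inner_simplified(const unsigned char *seed,
                                             unsigned char *out, size_t len);

    /* /dev/random path: provider/pool bytes can flow out fairly directly. */
    int random_read_sketch(unsigned char *buf, size_t len)
    {
        return extract_raw_entropy(buf, len);
    }

    /* /dev/urandom path: callers only ever see generator output,
       never raw pool bytes. */
    int urandom_read_sketch(unsigned char *buf, size_t len)
    {
        unsigned char seed[SEED_LEN];
        if (extract_raw_entropy(seed, sizeof seed) != 0)
            return -1;
        fips_random_inner_simplified(seed, buf, len);
        return 0;
    }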
Always use urandom. If your contracts require you not to, revise your contracts, not your code.
The reason tptacek looked at the illumos source code is that I mentioned to them that a blog post you've previously linked to about the Solaris random devices [1] appeared [2] to suggest that the entropy provided by KCF randomness providers is given out fairly directly by /dev/random (with each byte of entropy XORed with the byte returned 1024 bytes earlier). Do you know if that specific thing has been changed to stop being true since illumos diverged? Do you know if urandom has been changed to no longer always run things through fips_random_inner (as illumos does and Darren Moffat's blog post says Solaris does)?
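For clarity, the construction the post appeared to describe is just this (my illustration of the described mixing step, not code from Solaris or illumos):

    #include <stddef.h>

    /* Each output byte is a fresh entropy byte XORed with the byte
       the device returned 1024 bytes earlier. */
    #define LAG 1024

    static unsigned char past[LAG];   /* ring buffer of prior output */
    static size_t pos;

    static unsigned char mix_byte(unsigned char fresh)
    {
        unsigned char out = fresh ^ past[pos];
        past[pos] = out;              /* will be "1024 bytes earlier" */
        pos = (pos + 1) % LAG;
        return out;
    }

Note that (if this reading is right) the first LAG bytes pass through unchanged, and in general the XOR is only light whitening --- which is what "given out fairly directly" means here.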
[Edited to add: it looks as though the tweeting questions-at Darren Moffat protocol has been initiated: https://twitter.com/tqbf/status/817496091759362048 . It's also worth noting (which I failed to do previously) that two of the three random providers in the blog post are described as doing their own hashing/similar processing of the entropy bytes they provide to the random pool, and the other one appears to do so from the illumos source.]
> Use of urandom contrary to official guidance is not recommended.
This is, I think, a case where using the passive voice is suboptimal. The official guidance obviously does not recommend using urandom contrary to official guidance, nor do you. Some others do recommend it, as this whole argument shows.
More substantively, this is the part of your stance that would be, as tptacek previously suggested, equally applicable to Linux urandom prior to Linux fixing their man pages. At that time, the official guidance on urandom was that it was inferior to random in general instead of solely in the one specific case of requests before the kernel CSPRNG has been seeded. If the official guidance is incorrect about when and if urandom is inferior to random, then use of urandom contrary to official guidance should be recommended.
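Incidentally, that one specific case is exactly what getrandom(2) closes on Linux: called with flags == 0 it reads from the urandom source but blocks until the kernel CSPRNG has been seeded, so the early-boot caveat disappears. A minimal example:

    #include <sys/random.h>   /* Linux 3.17+; glibc wrapper since 2.25 */

    int make_key(unsigned char key[32])
    {
        /* flags == 0: urandom source, but blocks until the kernel
           CSPRNG has been initialized */
        if (getrandom(key, 32, 0) != 32)
            return -1;        /* e.g. ENOSYS on pre-3.17 kernels */
        return 0;
    }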
[2]: The blog post only briefly mentions the rndc_addbytes and rndc_getbytes functions where the entropy provision and randomness-extraction bottom out, so it is possible that it just omits the details of any additional processing performed at that level. But it at least does not mention any further processing performed on the bytes from KCF providers except in FIPS mode.
Here's what I can say: I asked the crypto authors about the guidance in the past. They assured me the text had been updated to reflect current guidance.
When I asked Darren about this in the past (paraphrasing from memory), the response was that constraints on the implementation and/or applied by system configuration ensure that bytes from /dev/random provide the highest quality random numbers produced by the generator, and so are the most suitable for high-value keying material.
So as I understand it, yes, it's more than just applicable to the case of requests before the kernel CSPRNG has been seeded.
Also, keep in mind that on Solaris, live migration may mean that your process (well, the zone hosting it, anyway) is moved to an entirely different system, and so may be hosted by a different kernel, without your process ever being aware of it. So relying on assumptions about the state of the kernel is inadvisable.
If I receive any additional information I can share, I will do so.
Once again, cryptographically speaking, there's no practical sense in which a random number is "high quality" or "low quality". There are cryptographically unpredictable numbers, and there are insecure numbers. As you can see from the Illumos code, unless Solaris deliberately broke their urandom (hint: they did not), urandom on Solaris produces (so long as it's seeded) cryptographically unpredictable random numbers.
That's the second randomness canard introduced on this subthread (the first being that there is a kind of cryptographic random number that is suitable for IVs and nonces but not for "long-term" cryptographic secrets). The two canards are related, but not identical.
I doubt the Solaris KCF team is thrilled to be virtually interposed into this argument; it is unlikely that they disagree with what I'm saying, since I'm making a pretty banal observation about FIPS cryptographic DRBGs and about the plain meaning of the KCF random code.
The Solaris urandom story is, in practical (end-user) terms, the same as urandom's story on Linux. There's some confirmation of this on Twitter, if you care to look.
> Once again, cryptographically speaking, there's no practical sense in which a random number is "high quality" or "low quality". There are cryptographically unpredictable numbers, and there are insecure numbers.
Once again, all information available to me contradicts your assertions:
"Bytes retrieved from /dev/random provide the highest quality random numbers produced by the generator, and can be used to generate long term keys and other high value keying material."
"While bytes produced by the /dev/urandom interface are of lower quality than bytes produced by /dev/random, they are nonetheless suitable for less demanding and shorter term cryptographic uses such as short term session keys, paddings, and challenge strings."
> I doubt the Solaris KCF team is thrilled to be virtually interposed into this argument; it is unlikely that they disagree with what I'm saying, since I'm making a pretty banal observation about FIPS cryptographic DRBGs and about the plain meaning of the KCF random code.
Everything I've said has been taken from either the current documentation or from conversations I've had with the crypto team.
Since they confirmed the documentation is up to date and correct, I don't see how your assertion can possibly be correct.
> The Solaris urandom story is, in practical (end-user) terms, the same as urandom's story on Linux. There's some confirmation of this on Twitter, if you care to look.
I see no confirmation on Twitter from anyone currently working on Solaris -- only from a former team member who left the organization some time ago.
Until I have independent confirmation from the team involved, I'll have to agree to disagree.
So from a programmatic standpoint, developers should use /dev/random on Solaris if they believe the material being generated is "high value" --- and it is up to the developer to make that determination.