Myths about /dev/urandom (2014) (2uo.de)
201 points by Ivoah on Jan 6, 2017 | 104 comments


I've given up on the hope that these myths will ever die. Every time it's relevant, someone pipes up complaining that urandom isn't safe to use, or that it will run out of entropy. Have a look at the bugs for Node, Ruby and Wordpress.

And inevitably, the appeal to authority ends up referring to these projects.

Serious question, if I submitted a patch for the man page detailing the content of this myths page, is there a chance it might go somewhere and put somewhat of an end to some of this?



Finally!

It would be even better if there were some special file with the same semantics as getrandom, so the advice could simply be "use this file, or the getrandom syscall, whichever is easier". The way it is now, no special file is foolproof.


Wow, those are big changes! I could see them having a big effect because the man page is usually the ultimate source of people's intuition that /dev/random is much safer/more appropriate for key generation.

If you can just say "actually the man page no longer says that", that's pretty significant.


As long as people post articles about "myths", the myths won't die, because people will remember the myth, and forget the debunk!

PLEASE: If you want people to remember your takeaways, write TRUE statements, not false statements with debunkings!


That's what the Debunking Handbook advises, too.

https://www.skepticalscience.com/Debunking-Handbook-now-free...


Especially if the false statement is written in bold and the true statement is not. That could lead to remembering the myth more than the true statement, even though you know it's false.


> Serious question, if I submitted a patch for the man page detailing the content of this myths page, is there a chance it might go somewhere and put somewhat of an end to some of this?

I would say go for it. It would probably make some difference (assuming the patch were accepted). People do frequently quote the man page when arguing about this issue.


Careful, the rules for random vs. urandom on Linux do not apply to all other UNIX or UNIX-like operating systems. As an example, Solaris makes them different intentionally and provides guidance on their appropriate use.

I'll just point to my comments from last time:

https://news.ycombinator.com/item?id=7363188

https://news.ycombinator.com/item?id=7364121


People say this every time this issue comes up, and point to the same very long Solaris RNG blog post. But that blog post doesn't support the claim: it says that urandom and random are different on Solaris, but that urandom is a FIPS-derived DRBG running from a kernel random pool --- ie, a kernel CSPRNG, like on Linux.

Can you be specific about why you believe Solaris urandom would be unsuitable for any specific cryptographic task?

The fact that the "Solaris cryptographic framework team" believes something to be true is inadequate evidence for me.


> The fact that the "Solaris cryptographic framework team" believes something to be true is inadequate evidence for me.

I don't follow - why would you believe that something is secure despite the developers of it saying it is not? If you trust that they are competent, wouldn't you trust a competent cryptographer who says that their code is insecure? And if you don't trust that they are competent enough to make that evaluation, why would the assumption be that they are still somehow able to write secure code, even if they can't correctly identify it as such?


In security, competence in securing things and level of paranoia about possible threats are pretty orthogonal. Someone can be very good on a ground level when it comes to following best practices to get crypto done, while also imagining all sorts of implausible threat scenarios without thinking them fully through that make them say that what they've done is "not enough."

You can find such people outside of cryptography as well: for example, the parent (or bodyguard) who won't let their child (client) leave the house because of all the deadly things that happen every day to people who leave their houses.

It's what happens when you combine a profession that relies on a certain amount of healthy anxiety, with an anxiety disorder.


It's kind of like asking a lawyer, "If I do X, will that prevent me from being sued?"

The answer is always No, but they think about it for a few hours before they reply.


The cryptographic framework teams of operating system projects have not generally been great sources of authority on cryptographic engineering, which is a much narrower speciality than a lot of people think it is.

That doesn't make them incompetent! The lawyer comparison is a telling one. I have a lawyer I work with on contract review that I think is amazing. But that doesn't mean he's my best source of wisdom about litigation, because litigation is a very specific speciality of law practice, and most lawyers don't do it. Just like the OS crypto developers, he has to know a lot of stuff about litigation to do his job, and I respect that. But that doesn't make him a litigator.

The LRNG developers thought they were accomplishing something quite important with the /dev/random reseeding/blocking system. But as you've seen from the man page update, the consensus is that the thing they were trying to accomplish was in fact counterproductive.


> Can you be specific about why you believe Solaris urandom would be unsuitable for any specific cryptographic task?

The short version is that, on Solaris, /dev/random has certain guarantees that /dev/urandom does not and so if you are generating long-term keys or high-value keying material, you should use /dev/random.

While Solaris has (over time) tried to make the differences between the two as small as possible, for a variety of reasons, they are not identical.

As just one example, one difference between the two is that, for organizations or individuals with specific security requirements, /dev/random can be configured to use only hardware-based sources registered with the kernel-level cryptographic framework by disabling the software-based provider using cryptoadm.

> The fact that the "Solaris cryptographic framework team" believes something to be true is inadequate evidence for me.

They are the domain experts, authors of said material, and my friends. I'm sorry that you don't believe them, but I've known some of them almost a decade or more and I have no reason to believe they have anything other than the best interests of others in mind when they provide this guidance.

In the end, you'll have to choose what to believe on your own, all I can tell you is that the Solaris crypto team provides the guidance that "high-value" keying material should be generated using /dev/random and that I have every reason to believe that advice is sound and competent.


If you're going to argue that Solaris random is more secure than Solaris urandom, it's problematic that you're claiming that urandom is OK for "short term" secrets, because that's not how cryptographic attacks on randomness work. This is the same argument Ted Ts'o made on HN a few years ago, and it was pretty easy to point out that attacks on things like nonces and IVs were just as devastating as attacks on things like keys.

Further: no matter what authority you're going to appeal to, I'm still going to look at what the systems engineering details are. According to the document you sent, Solaris urandom uses a cryptographic DRBG seeded from a kernel random pool. That's what the LRNG does, too.

In fact, if I was going to take your appeal to the Solaris cryptographic framework team seriously, I would also have to concede that Linux urandom was insecure --- because the LRNG team has maintained for years that it is inferior, and only this year is finally conceding otherwise.

What am I missing? Can you be specific?


As a result of system configuration or specific requirements placed upon an organization, contractual or otherwise, only /dev/random is guaranteed suitable for high-value keying material.

So from a programmatic standpoint, developers should use /dev/random on Solaris if they believe the material being generated is "high value". It is up to the developer to determine whether the material being generated is "high value".


Yes, that's what I understood you to be saying before. What I'm saying is that nobody has come up with an argument for why that would be. In fact: every argument, even the ones that get down to the level of kmem-style magazines, ultimately ends up in an argument isomorphic to the argument we just had about the LRNG.

No case has actually been made for why the argument is different on Solaris than it is on Linux. I'm increasingly convinced that's because there's no difference, regarding this issue, between Linux and Solaris. The generators are different, but equally safe once the generators are seeded.


Let me put it plainly -- system configuration can affect /dev/random in ways that do not affect /dev/urandom.

As a result, organizational or contractual requirements that a system administrator may have are only guaranteed to be met when using /dev/random.

This system configuration is specific to Solaris, which is why Solaris is different than Linux.


See, the problem with this is that you haven't put it plainly. I understand how CSPRNGs work. I understand a lot about how Linux's works, and a little bit about how Solaris's works. My sense is that if there's an argument about how Solaris urandom is inferior, I should be able to understand what it is. What is it?

I'm becoming increasingly convinced that there is no difference between the urandom story on Linux and the urandom story on Solaris. Not that the generators are the same, but that the differences simply do not matter.

If you don't know the specific answer, could you get one of your friends on the team to chime in? I'm reaching a threshold at which I'm going to start noisily telling people that urandom on Solaris is fine --- incidentally, a lot of very well-regarded software already agrees with me, so I feel reasonably safe joining the chorus.


I have personally verified that the documentation and guidance in Solaris is up to date and correct per the authors.

As I said before, and as I will say again, the differences do matter for some administrators with specific contractual and/or other obligations and when generating "high value" keying material.

If you choose to advise others contrary to the documented guidance that Solaris provides, that is your choice.


I've just read through the Illumos code, and for Illumos at least, urandom actually seems less scary: there's a direct code path in /dev/random that reads raw entropy bytes (like thread timing) and returns them to callers, but the urandom path always goes through fips_random_inner() or equivalent.

Always use urandom. If your contracts require you not to, revise your contracts, not your code.


The illumos code is more than five years diverged from Solaris. It is no longer a point of valid comparison for some subsystems such as crypto.

Use of urandom contrary to official guidance is not recommended.


The reason tptacek looked at the illumos source code is because I mentioned to them that a blog post you've previously linked to about the Solaris random devices [1] appeared [2] to suggest that the entropy provided by KCF randomness providers is given out fairly directly by /dev/random (with each byte of entropy XORed with the byte returned 1024 bytes earlier). Do you know if that specific thing has been changed to stop being true since illumos diverged? Do you know if urandom has been changed to no longer always run things through fips_random_inner (as illumos does and Darren Moffat's blog post says Solaris does)?

[Edited to add: it looks as though the tweeting questions-at Darren Moffat protocol has been initiated: https://twitter.com/tqbf/status/817496091759362048 . It's also worth noting (which I failed to do previously) that two of the three random providers in the blog post are described as doing their own hashing/similar processing of the entropy bytes they provide to the random pool, and the other one appears to do so from the illumos source.]

> Use of urandom contrary to official guidance is not recommended.

This is, I think, a case where using the passive voice is suboptimal. The official guidance obviously does not recommend using urandom contrary to official guidance, nor do you. Some others do recommend it, as this whole argument shows.

More substantively, this is the part of your stance that would be, as tptacek previously suggested, equally applicable to Linux urandom prior to Linux fixing their man pages. At that time, the official guidance on urandom was that it was inferior to random in general instead of solely in the one specific case of requests before the kernel CSPRNG has been seeded. If the official guidance is incorrect about when and if urandom is inferior to random, then use of urandom contrary to official guidance should be recommended.

[1]: https://blogs.oracle.com/darren/entry/solaris_random_number_...

[2]: The blog post only briefly mentions the rndc_addbytes and rndc_getbytes functions where the entropy provision and randomness-extraction bottom out, so it is possible that it just omits the details of any additional processing performed at that level. But it at least does not mention any further processing performed on the bytes from KCF providers except in FIPS mode.


Here's what I can say: I asked the crypto authors about the guidance in the past. They assured me the text had been updated to reflect current guidance.

When I asked Darren about this in the past (paraphrasing from memory), the response was that constraints on the implementation and/or applied by system configuration ensure that bytes from /dev/random provide the highest quality random numbers produced by the generator, and so are the most suitable for high-value keying material.

So as I understand it, yes, it's more than just applicable to the case of requests before the kernel CSPRNG has been seeded.

Also, keep in mind that on Solaris live migration may mean that your process (well, the zone hosting it anyway) is live migrated to an entirely different system and so may be hosted by a different kernel without your process ever being aware of it. So relying on assumptions about the state of the kernel is inadvisable.

If I receive any additional information I can share, I will do so.


Once again, cryptographically speaking, there's no practical sense in which a random number is "high quality" or "low quality". There are cryptographically unpredictable numbers, and there are insecure numbers. As you can see from the Illumos code, unless Solaris deliberately broke their urandom (hint: they did not), urandom on Solaris produces (so long as it's seeded) cryptographically unpredictable random numbers.

That's the second randomness canard introduced on this subthread (the first being that there is a kind of cryptographic random number that is suitable for IVs and nonces but not for "long-term" cryptographic secrets). The two canards are related, but not identical.

I doubt the Solaris KCF team is thrilled to be virtually interposed into this argument; it is unlikely that they disagree with what I'm saying, since I'm making a pretty banal observation about FIPS cryptographic DRBGs and about the plain meaning of the KCF random code.

The Solaris urandom story is, in practical (end-user) terms, the same as urandom's story on Linux. There's some confirmation of this on Twitter, if you care to look.


> Once again, cryptographically speaking, there's no practical sense in which a random number is "high quality" or "low quality". There are cryptographically unpredictable numbers, and there are insecure numbers.

Once again, all information available to me contradicts your assertions:

"Bytes retrieved from /dev/random provide the highest quality random numbers produced by the generator, and can be used to generate long term keys and other high value keying material."

"While bytes produced by the /dev/urandom interface are of lower quality than bytes produced by /dev/random, they are nonetheless suitable for less demanding and shorter term cryptographic uses such as short term session keys, paddings, and challenge strings."

https://docs.oracle.com/cd/E53394_01/html/E54777/urandom-7d....

> I doubt the Solaris KCF team is thrilled to be virtually interposed into this argument; it is unlikely that they disagree with what I'm saying, since I'm making a pretty banal observation about FIPS cryptographic DRBGs and about the plain meaning of the KCF random code.

Everything I've said has been taken from either the current documentation or from conversations I've had with the crypto team.

Since they confirmed the documentation is up to date and correct, then I don't see how your assertion can possibly be correct.

> The Solaris urandom story is, in practical (end-user) terms, the same as urandom's story on Linux. There's some confirmation of this on Twitter, if you care to look.

I see no confirmation on Twitter from anyone that is currently working on Solaris -- only a past member that left the organization some time ago.

Until I have independent confirmation from the team involved, I'll have to agree to disagree.


So: the man page says so.


Unless Solaris regressed its urandom, the difference is immaterial.


The man pages are being worked on. At least I was CC:ed on some relevant mails in December (I think), but couldn't really follow it since I've just moved to a new apartment.


> Have a look at the bugs for Node, Ruby and Wordpress.

Worth noting: WordPress has been using the OS's CSPRNG since 4.4.

https://paragonie.com/blog/2015/10/coming-wordpress-4-4-cspr...


That's great progress! I clearly remember a lot of misinformed tweets doing the rounds before you got there.


There's even more work to be done on the WordPress front (and PHP in general for that matter): making automatic updates secure even if the server gets hacked.

https://core.trac.wordpress.org/ticket/39309

https://github.com/paragonie/sodium_compat

I'm currently writing a pure-PHP libsodium polyfill that, once audited, will be proposed to shore up the security of PHP projects the whole world over. Among other things, this means that everyone will be able to adopt Argon2i for password hashing even if they support PHP 5.2.4 like WordPress does.

Ideally, everyone would get in the habit of updating to the latest versions more rapidly. But when 27% of the Internet runs software that refuses to modernize, that becomes a huge risk to the rest of the Internet.


Probably not. That would assume people actually read man pages.


For end users I would agree. However:

https://bugs.ruby-lang.org/issues/9569

In many technical discussions, the man page becomes the compelling argument.


That's a different argument. Their usage is correct and you should use a CSPRNG to generate your random data, like arc4random or the ones provided by OpenSSL.


No, you should not use a userspace CSPRNG. That's the whole point.


A useful article, originally from 2014 but updated recently.

Previous HN discussions:

https://news.ycombinator.com/item?id=7359992

https://news.ycombinator.com/item?id=10149019


I remember seeing it before. It's a good post.

What I think is really interesting is the question: how do you prove something is random? If you take a chunk of data that uses strong encryption, without headers or other identifiers, it should look like it has a random distribution.

That's one of the reasons for the great concern around the Snowden leaks, which implied some manufacturers had compromised their hardware AES chips:

https://theintercept.com/2016/01/04/a-redaction-re-visited-n...


While it may be generally better to use /dev/urandom, what about the Mining Your Ps and Qs paper? [1] It finds that:

> Every software package we examined relies on /dev/urandom to generate cryptographic keys; however, we find that Linux’s random number generator (RNG) can exhibit a boot-time entropy hole that causes urandom to produce deterministic output under conditions likely to occur in headless and embedded devices. In experiments with OpenSSL and Dropbear SSH, we show how repeated output from the system RNG can lead not only to repeated long-term keys but also to factorable RSA keys and repeated DSA ephemeral keys due to the behavior of application-specific entropy pools.

This is mentioned a little at the end of the article. Would it be a breaking change for Linux to block urandom at startup?

[1] https://factorable.net/weakkeys12.conference.pdf


Read 32 bytes (i.e., 256 bits) from /dev/random, write them to /dev/urandom, and then use /dev/urandom for everything.
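A minimal sketch of that workaround in C (the helper name and error handling are illustrative only; note that on Linux, bytes written to /dev/urandom are mixed into the pool but do not credit the kernel's entropy estimate):

  /* Seed the pool once from /dev/random, then use /dev/urandom everywhere. */
  #include <fcntl.h>
  #include <unistd.h>

  int seed_urandom_once(void) {
      unsigned char seed[32];
      int rfd = open("/dev/random", O_RDONLY);    /* may block until seeded */
      int ufd = open("/dev/urandom", O_WRONLY);
      if (rfd < 0 || ufd < 0) return -1;
      ssize_t n = read(rfd, seed, sizeof seed);   /* should loop on short reads */
      if (n > 0) write(ufd, seed, (size_t)n);     /* mixed in; entropy count unchanged */
      close(rfd);
      close(ufd);
      return n == (ssize_t)sizeof seed ? 0 : -1;
  }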

And every few months, send an email to Ted Ts'o with a patch fixing this behaviour. Maybe if every single Linux user bugs him he'll finally capitulate.


1. That was exactly what I was thinking would work best as a workaround if your program might run at boot-time.

2. Perhaps somebody could create one of those preloaded email forms like those political organizations do where you fill in your name and email and it does the rest up to and including sending the email?


It's unfortunate, but this issue lends a kernel of truth to all of the hysteria. /dev/urandom IS flawed. The Linux kernel has two different kernel RNG devices, and they both differ from proper security practice.


The way it is now, the only recommendation you can give is to use the getrandom syscall. The special files are not foolproof.


The way to work around this is to either use a method which blocks until seeded (getrandom) or to simply seed it yourself before using it.
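A hedged sketch of the first option in C, using the raw getrandom syscall (available since Linux 3.17; newer glibc also exposes a getrandom() wrapper). The helper name is just for illustration:

  /* Fill buf with random bytes; flags = 0 reads from the urandom
     source but blocks until the kernel CSPRNG has been seeded. */
  #include <errno.h>
  #include <stddef.h>
  #include <unistd.h>
  #include <sys/syscall.h>

  int fill_random(void *buf, size_t len) {
      unsigned char *p = buf;
      while (len > 0) {
          long n = syscall(SYS_getrandom, p, len, 0);
          if (n < 0) {
              if (errno == EINTR) continue;   /* interrupted by a signal, retry */
              return -1;
          }
          p += n;
          len -= (size_t)n;
      }
      return 0;
  }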


Wouldn't using a method that blocks mean using something like /dev/random, which is exactly what this article is against?


/dev/urandom does have a problem with not enough entropy sometimes. Yes, it never "runs out" of entropy once it has had enough. But when a computer first boots up, sometimes it doesn't have enough entropy yet, and gives bad output.

This causes problems in practice, allowing people to crack RSA private keys. https://factorable.net/weakkeys12.extended.pdf


Yes in that case you want to use getrandom/getentropy.

Also note (as TFA indicates) that urandom not blocking on an uninitialised entropy pool is mostly a Linux thing; e.g. BSD urandom will block until the system is correctly initialised.


"that case" is every case unless you know and control exactly which OS, on which (potentially virtual) hardware and under which circumstances your code is going to run, now and in the future.

That excludes anything that is meant to be used by other people. Fragile security is no security, and relying on undocumented assumptions like "is not running on Linux", "starts late in the boot process" and "is not running on a VM" is incredibly fragile.

Unfortunately people who really should know better keep writing articles like this one, dispelling the "myth" that urandom works exactly as documented.

Which leads us to situations like this:

  $ dmesg|grep random
  [    0.469142] random: systemd-tmpfile: uninitialized urandom read (16 bytes read, 3 bits of entropy available)
  [    0.470297] random: systemd-udevd: uninitialized urandom read (16 bytes read, 3 bits of entropy available)
  [    0.470325] random: systemd-udevd: uninitialized urandom read (16 bytes read, 3 bits of entropy available)
  [    0.470877] random: udevadm: uninitialized urandom read (16 bytes read, 3 bits of entropy available)
  [    0.470890] random: udevadm: uninitialized urandom read (16 bytes read, 3 bits of entropy available)
  [    0.471936] random: systemd-udevd: uninitialized urandom read (16 bytes read, 3 bits of entropy available)
  [    0.471950] random: systemd-udevd: uninitialized urandom read (16 bytes read, 3 bits of entropy available)
  [    0.471969] random: systemd-udevd: uninitialized urandom read (16 bytes read, 3 bits of entropy available)
  [    0.472132] random: systemd-udevd: uninitialized urandom read (16 bytes read, 3 bits of entropy available)
  [    0.472142] random: systemd-udevd: uninitialized urandom read (16 bytes read, 3 bits of entropy available)
  [    1.909082] random: nonblocking pool is initialized
Oops, hope none of that was for anything important, say generating a long-term cryptographic key! But of course if we're to believe TFA... "Fact: /dev/urandom is the preferred source of cryptographic randomness on UNIX-like systems."

Just use getrandom; if you can, use only getrandom. But if your code must work on systems without getrandom or something similar, then before following the advice of this article, ask yourself which is worse: blocking, or reading out a grand total of 3 bits of entropy. You'd probably prefer the former.


"I don't think systemd is doing anything wrong here, and we really shouldn't change anything." -- Lennart Poettering, 2016-09-18, https://github.com/systemd/systemd/issues/4167#issuecomment-...

The bug report also explains what systemd is doing here.

"systemd starts allocating those hashtables very very early on, before any process is forked off... It's par tof the initialization scheme of systemd really. Hence there's basically nothing else going on in the system, execept what is done by the kernel itself", -- Lennart Poettering, 2016-05-05, https://lists.freedesktop.org/archives/systemd-devel/2014-Ma...

"moving that blocking behavior to /dev/urandom simply does not work. The system does not boot. The reason to this issue is actually quite simple. The init process of systemd reads /dev/urandom for whatever purpose. Now, when /dev/urandom blocks during boot, systemd will be blocked too." -- Stephan Mueller, 2016-10-22, https://lkml.org/lkml/2016/10/21/982


tldr; just use /dev/urandom --- this should be at the top of the page.

Maybe prefix the bold sections with Myth: so it is easier to read. I was really confused as to which side of the argument the author was on at first...


I, too, found it INCREDIBLY difficult to read. Perhaps because I don't have much understanding of random-number generators, but I had to re-read the first handful of entries about 3 times before I could catch on. I think a big part of it was my tendency to read the BOLD statements as the facts. Then when it said FACT beneath it, I had to start all over again.


Or better use getentropy(2)/getrandom(2) when it is possible.


arc4random and arc4random_buf are probably better still.
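For what it's worth, a minimal usage sketch (the helper name is just for illustration; arc4random_buf is declared in stdlib.h on the BSDs and macOS, never blocks, and has no error to check, while on Linux it needs libbsd or similar):

  #include <stdint.h>
  #include <stdlib.h>

  /* Fill a 32-byte key directly from the system's arc4random generator. */
  void make_key(uint8_t key[32]) {
      arc4random_buf(key, 32);
  }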


Please don't use arc4random* for cryptographic purposes. RC4 is broken -- there are practical attacks against it. It's fine for non-secure purposes though (e.g. Monte Carlo simulation).


The 'arc4' no longer refers to RC4 on macOS/iOS and OpenBSD. On those systems 'arc4random' is playfully a bacronym for "A Replacement Call for Random". The new arc4random* implementation will now be replaced as cryptographic techniques advance.

It appears the Apple and OpenBSD implementations use the getentropy syscall and then add additional entropy mixing.


Yup, OpenBSD has even changed other entropy-source calls to return strong randomness from arc4random by default.

http://marc.info/?l=openbsd-cvs&m=141807513728073&w=2



arc4random uses ChaCha20 on OpenBSD now, and has for several years; OS X and FreeBSD have yet to update...


The macOS Sierra man page says it uses the NIST-approved AES cipher and will be replaced as the techniques advance.


True, but never pass GRND_RANDOM or GRND_NONBLOCK.


NONBLOCK just returns an error code instead of blocking. No harm in that, AFAIK.


Just wait until people start using the EAGAIN value in keys.


Why do these flags exist if they should never be used?


There are probably good uses for them.

But unless you are doing something very tightly integrated with Linux, you should just ignore them.


Also, Filippo Valsorda has a nice talk about this:

https://www.youtube.com/watch?v=Q8JAlZ-HJQI


Please don't link to talks from CCCen. They contain ads and it is not the official channel from the C3VOC. Link to the mediacccde channel (https://www.youtube.com/user/mediacccde) or, better, https://media.ccc.de.

https://media.ccc.de/v/32c3-7441-the_plain_simple_reality_of...


Huh, they do a good job of making it look official. (Definitely more so than mediacccde.)

I'll have to link from media.ccc.de from now on.


So what are the cases when you should actually use /dev/random and not /dev/urandom? The article fails to address this clearly.


You shouldn't use /dev/random. The article makes that very clear.


/dev/random still exists for a reason, and I believe the reason isn't backward compatibility only.


I do believe the reasons are exactly backward compatibility and politics¹.

1 - As in "oh just keep it there, it's easier than convincing everybody that it's useless".


"The /dev/random interface is considered a legacy interface, and /dev/urandom is preferred and sufficient in all use cases, with the exception of applications which require randomness during early boot time; ..."

From http://man7.org/linux/man-pages/man4/urandom.4.html


OK, "applications which require randomness during early boot time" is one such case. Thank you for pointing out.


No, it isn't. During early boot time, you should use the system call interface, not the special files.


I am so confused about this all the time. My solution is to go see what Go is using: https://golang.org/src/crypto/rand/rand.go Trusting them to do the Right Thing.

Is there a good reason why /dev/random AND /dev/urandom exist?


What do you want the behaviour to be when there isn't enough entropy to provide high-quality random numbers? For cases where you need the random number right now, you probably just want the best-quality number available (urandom). For cases where it's an important random number you'll be using for a long time, like SSH/SSL key generation, you probably want to block until more entropy is available (random).


There's no such thing as a "low quality random number" from a CSPRNG. The outputs of a CSPRNG are either insecure, because the CSPRNG hasn't been seeded properly, or they're secure --- for all intents and purposes, forever. That's the problem with the old version of the Linux man page.

The new version of the man page resolves this problem, and says outright that urandom is the preferred interface, and that /dev/random is obsolete; applications that run during early boot time should instead use the system call interface.


> The outputs of a CSPRNG are either insecure, because the CSPRNG hasn't been seeded properly

In what sense is this not a "low quality random number"? Every CSPRNG I've seen will output numbers that pass many statistical tests for randomness even if seeded with e.g. zero - is that not a "low quality random number" in the usual sense of those words?


What they are getting at is that thinking of this in terms of the "quality of the randomness" is thinking about it in quite the wrong way that leads one right up the garden path; so stop even thinking about it like that. Discard that mental model.

The randomness has the same quality. It's the same pseudo-random number generation algorithm. Only in one case, the world knows your seed value, and can predict anything that you do that derives from pseudo-randomness; whereas in the other case, the world does not know your seed value, should your seed value be discovered somehow you are regularly re-seeding anyway, and the world cannot predict your actions.


I bought a TrueRNG v3 [0] a month or so ago and have been using it with rngd / rng-tools.

I wish rngd was easier to use with multiple entropy sources, however. Even w/ RDRAND (times two -- dual CPUs), a TPM, and the TrueRNG, it's difficult to (easily) tell which of these are being used and/or if more than one is being used.

Ideally, I'd like to be able to tell rngd to use/mix the TrueRNG, RDRAND in both CPUs, the TPM, and any other entropy sources I may come up with, such as RTL-SDRs doing funky things [1]. I suppose I could just run multiple instances of rngd, though.

Anyway, I've switched to just using /dev/random for pretty much everything (where it can be configured) since, with the TrueRNG, it never blocks on me now.

[0]: http://ubld.it/truerng_v3

[1]: https://www.google.com/search?q=rtl-str+entropy


What I did instead was switch to /dev/urandom, which never blocks either.


Why would you do this? What problem is a USB hardware RNG solving for you?


In this particular case, I'm evaluating it on my workstation. The intended use, however, is for an offline, headless machine (that is normally powered off) that is used to generate cryptographic keys.

Do I absolutely require it? Probably not. For $50, though, I thought "might as well try it".


I would recommend you not do this.


These 'running out of entropy' graphs are not exactly reassurance that these people are interested in your security as much as in selling you something.


What I've got from this thread is that you never touch security-related PRNGs unless you're an expert in security, cryptography and operating systems, all at the same time, and even then you take a lot of care. Thanks for the deeper debunking.


The advice on this page is accurate and in line with what cryptography experts will recommend.

That said, if you're looking for more of the how and less of the why: https://paragonie.com/blog/2016/05/how-generate-secure-rando...

Contains snippets and recommendations for C/C++, Erlang, Go, Java, (Browser) JavaScript, .NET, Node.js, PHP, Python, Ruby, and Rust.


Though you're answering a different (more relevant, but different) "how". That's "how do I find a library for random numbers". The article is focused on getting random numbers directly from the Linux kernel, and the "how" there is invoking getrandom(buffer, len, 0).


This is really no problem now that Intel chips ship with the RDRAND instruction. Linux should follow BSD/Mac's footsteps and incorporate a good CSPRNG into its non-blocking devices (after seeding). I believe BSD is moving towards Fortuna, which is perfect for this use case.

With these instructions you'll always get a "good"[1] seed for your CSPRNG, and that includes virtual machines and clones.

[1] Of course that depends on how much you trust Intel. Don't ask the crypto mailing lists whether this is a good idea :-)
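For the curious, a sketch of reading the instruction through the compiler intrinsic (GCC/Clang, compile with -mrdrnd; the helper name is illustrative). Intel's guidance is to retry, since the instruction can transiently report failure, and raw RDRAND output would normally be mixed into a CSPRNG seed rather than used directly:

  #include <immintrin.h>

  /* Pull 64 bits from RDRAND, retrying a few times on transient failure.
     Returns 0 on success, -1 if the hardware never delivered a value. */
  int rdrand64(unsigned long long *out) {
      for (int i = 0; i < 10; i++) {
          if (_rdrand64_step(out))    /* intrinsic returns 1 on success */
              return 0;
      }
      return -1;
  }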


From Theodore Ts'o, 2012:

"Create a new function, get_random_bytes_arch() which will use the architecture-specific hardware random number generator if it is present. Change get_random_bytes() to not use the HW RNG, even if it is avaiable.

...

So it's much better to use the HW RNG to improve the existing random number generator, by mixing in any entropy returned by the HW RNG into /dev/random's entropy pool, but to always _use_ /dev/random's entropy pool."

http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.g...


I'm not sure I'd say it's better or worse. I think it's unnecessary. You only need like ~16 bytes of a random seed and your CSPRNG will run forever with an unlimited, unpredictable stream of bits.

Intel's instructions perform the same as a HWRNG, only it's built into every chipset >= Ivy Bridge. I actually think Linux's rng-tools mixes it in automatically.


djb has an interesting blog post about a theoretical scenario where mixing entropy can actually be dangerous: https://blog.cr.yp.to/20140205-entropy.html


As of Kernel 4.8, Ted Ts'o has already switched /dev/urandom over to ChaCha20 (https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....), so I would say a good CSPRNG's already done.


One to add - it's not fast at providing a value

I remember learning this when I found /dev/urandom multiple times slower than /dev/null while writing garbage data to disk.


Oh really :)


It's like AES 128-bit vs. AES 256-bit: both are secure, but when in doubt, me, you, and the NSA choose 256-bit.


[flagged]


please consider getting sterilized.

There's never a need for uncivil personal attacks like this, regardless of the topic or how ignorant or wrong-headed you think someone to be.


[flagged]


This account has been posting mostly uncivil or unsubstantive comments. Please stop and re-read the guidelines before commenting again.

https://news.ycombinator.com/newswelcome.html

https://news.ycombinator.com/newsguidelines.html


"And even if you need that peace of mind, let me tell you a secret: no practical attacks on AES, SHA-3 or other solid ciphers and hashes are known in the “unclassified” literature, either. Are you going to stop using those, as well? Of course not!"

http://www.pcworld.com/article/3117728/security/why-quantum-...

https://en.wikipedia.org/wiki/Grover's_algorithm


Grover's algorithm, even assuming that we have hardware that can efficiently implement it, reduces the runtime of a brute force attack on AES-256 from 10^49 * the age of the universe to 10^10 * the age of the universe. Hardly practical. All symmetric ciphers and all hashes are quantum-proof. Key exchange might become problematic, but that is a different problem.


"Practical attacks". "Quantum computers".

I fail to see how these might be related.


If you're working on storing cat videos on your personal website, I completely agree.


No, I'm storing urandom articles on my personal website.

Cat videos are much too popular and a lot bigger, and since I'm paying for traffic... [0] ;-)

[0] regular traffic: ~30 MB/day, today (projected): ~3.5 GB



