
It's very amusing that the various kernel developers are bashing on GnuPG, going as far as calling its behaviour a "misuse. Full stop."

PGP/GPG has certainly fallen out of favor.




I actually thought that was a bit unfair. Is it a misuse to use three times as much concrete as is strictly necessary? That would make the Empire State Building a "misuse" of concrete. Even if you aren't doing Empire State Building levels of overkill, having engineering margins is a well-accepted practice. Is extracting 4096 bits from /dev/random for a 4096-bit RSA key "misuse" when said key only has roughly 150 bits of cryptographic strength? Meh... I've got more important things to worry about; public key generation happens so rarely. And I do use a hardware random number generator[1] as a supplement when I generate new GPG keys.

[1] https://altusmetrum.org/ChaosKey/
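
If you want to do the mixing yourself rather than trust rngd to feed the kernel pool, hashing the two streams together is the usual trick. A rough sketch, assuming the ChaosKey is exposed through the kernel's hwrng framework as /dev/hwrng and using libsodium for SHA-256 (the output is as strong as the stronger of the two inputs):

    #include <fcntl.h>
    #include <unistd.h>
    #include <sodium.h>
    #include <sys/random.h>

    /* Mix kernel randomness with a hardware RNG by hashing the
       concatenation; call sodium_init() once before use. */
    int mixed_seed(unsigned char out[crypto_hash_sha256_BYTES]) {
        unsigned char buf[64];

        if (getrandom(buf, 32, 0) != 32)        /* kernel CSPRNG half */
            return -1;

        int fd = open("/dev/hwrng", O_RDONLY);  /* hardware RNG half */
        if (fd < 0)
            return -1;
        ssize_t n = read(fd, buf + 32, 32);
        close(fd);
        if (n != 32)
            return -1;

        return crypto_hash_sha256(out, buf, sizeof buf);
    }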


The problem is that extracting more than you need is what ruined /dev/random for everyone.


So is GnuPG bad because it reads directly from /dev/random instead of using an interface like getrandom()? I'm naive enough to not know reading directly from /dev/random is bad and would love to know more.


The getrandom() syscall is relatively new. Before it was available, you had two choices.

Use a non-Linux OS with a reasonable /dev/(u)random, or use Linux with its Sophie's choice:

/dev/random will give you something that's probably good, but will block for good and bad reasons.

/dev/urandom will never block, including when the random system is totally unseeded.

GnuPG could not use /dev/urandom, since it gave no indication of seeding, so it had to use /dev/random, which blocks until the system is seeded and also whenever the entropy estimate (a number of nebulous value) runs low. Most (all?) BSDs make /dev/urandom the same as /dev/random: it blocks until seeded and then never blocks again. This behavior is available in Linux with the getrandom() syscall, but perhaps GnuPG hasn't updated to use it? Also, there was some discussion in the last few months of changing the behavior of that syscall, which thankfully didn't happen; instead the kernel now generates some best-effort entropy on demand if a caller is blocked waiting for an unseeded pool.
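
For reference, this is roughly all there is to the modern interface (a minimal sketch; getrandom() has had a glibc wrapper since 2.25):

    #include <stdio.h>
    #include <sys/random.h>   /* getrandom() */

    int main(void) {
        unsigned char key[32];

        /* Blocks only if the kernel CSPRNG has never been seeded;
           once the pool is seeded at boot, this never blocks again. */
        if (getrandom(key, sizeof key, 0) != (ssize_t)sizeof key) {
            perror("getrandom");
            return 1;
        }

        /* GRND_NONBLOCK would fail with EAGAIN instead of waiting;
           GRND_RANDOM reproduces the old /dev/random entropy
           accounting, which is almost never what you want. */
        printf("got %zu random bytes\n", sizeof key);
        return 0;
    }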


> This behavior is available in Linux with the getrandom() syscall, but perhaps GnuPG hasn't updated to use it?

GnuPG has been using getrandom() where available for over a year[1]. Obviously some distros may not yet have updated to a recent enough version, but it (like OpenSSL) is no longer among the offenders that cause /dev/random blocking hangs.

[1] https://lists.gnupg.org/pipermail/gnupg-announce/2018q4/0004...


So the issue is the block? I make a blocking call and another app attempts to make a call during the block and will fail if it's not expecting to wait? Is that (one of) the problem(s)?

Thanks for breaking that down for me!


So, if the random system hasn't been properly seeded, you do need to block if you're using the randomness for security, especially long-term security, e.g. long-lived keys.

The problem is that, before this patch, Linux keeps track of an entropy estimate for /dev/random, and if the estimate gets too low, read requests block. Each read reduces the estimate significantly, so something that does a lot of reads makes it hard for other programs to get any reads through in a reasonable amount of time.

If you knew the system was seeded, you could use urandom instead, but there's not a great way to know. Perhaps you could read from random the first time and from urandom for future requests in the same process, but that only helps long-running processes; reading once from random and using it as the seed for an in-process secure random generator works almost as well. The getrandom() syscall is really the way forward, but you need to keep the old logic around conditionally or accept the loss of compatibility with older kernels.
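
The conditional logic is small but annoying to carry around. A sketch of what it looks like, calling syscall(2) directly so it also builds against a pre-getrandom glibc (assumes your kernel headers define SYS_getrandom):

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    static int get_seed(void *buf, size_t len) {
        /* Prefer getrandom(2): blocks until seeded, then never again. */
        long n = syscall(SYS_getrandom, buf, len, 0);
        if (n == (long)len)
            return 0;
        if (n < 0 && errno != ENOSYS)
            return -1;

        /* Pre-3.17 kernel: fall back to one read of /dev/random to
           seed an in-process generator, instead of draining it on
           every request. (A real implementation would loop on short
           reads.) */
        int fd = open("/dev/random", O_RDONLY);
        if (fd < 0)
            return -1;
        ssize_t r = read(fd, buf, len);
        close(fd);
        return r == (ssize_t)len ? 0 : -1;
    }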

In summary, it's not really fair to say GnuPG is doing it wrong, when they didn't have a way to do it right.


Thanks! That makes sense. I appreciate you taking the time to break all that down.


It should have read just 16 or 32 bytes from /dev/random in order to seed its own CSPRNG (at most once per process invocation, and only when first needed).
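
Something like this, say, with libsodium's ChaCha20 keystream standing in for the CSPRNG (a sketch only; see the fork-safety objections below):

    #include <stdint.h>
    #include <string.h>
    #include <sodium.h>
    #include <sys/random.h>

    static unsigned char prng_key[crypto_stream_chacha20_KEYBYTES];
    static uint64_t prng_nonce;   /* 8 bytes, same size as the nonce */
    static int prng_seeded;

    int prng_bytes(unsigned char *out, size_t len) {
        if (!prng_seeded) {       /* seed once, only when first needed */
            if (getrandom(prng_key, sizeof prng_key, 0)
                    != (ssize_t)sizeof prng_key)
                return -1;
            prng_seeded = 1;
        }
        unsigned char nonce[crypto_stream_chacha20_NONCEBYTES];
        memcpy(nonce, &prng_nonce, sizeof nonce);
        prng_nonce++;             /* never reuse a (key, nonce) pair */
        /* The keystream itself is the random output. */
        return crypto_stream_chacha20(out, len, nonce, prng_key);
    }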


No! Per-process CSPRNGs are a terrible idea. Fork-safety is hard. Swap-safety is hard.


I guess all programming is kinda hard; that's the nature of the expectations of modern computing.

Per-process CSPRNGs are pretty common. Most programs don't fork without exec, so there's no problem for them. Managing a per-process CSPRNG is only hard for libraries that might be used by programs that fork without exec and don't want to require the program to do anything special.

No! It's not hard, just don't screw it up. This is true of most things.
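
For example, the classic fork hazard has a well-known fix: register a pthread_atfork() handler that wipes the generator state in the child. Continuing the hypothetical sketch upthread (swap-safety would additionally want sodium_mlock() on the key):

    #include <pthread.h>
    #include <sodium.h>

    /* The child must not replay the parent's keystream, so wipe the
       state and force a reseed on next use. prng_key and prng_seeded
       are the hypothetical statics from the sketch upthread. */
    static void prng_on_fork_child(void) {
        sodium_memzero(prng_key, sizeof prng_key);
        prng_seeded = 0;
    }

    /* Call once at startup, e.g. from the library's init routine. */
    void prng_install_fork_guard(void) {
        pthread_atfork(NULL, NULL, prng_on_fork_child);
    }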


Why not just use getrandom() or CryptGenRandom() instead, and simplify everything by avoiding all those classes of bugs?

A user-space CSPRNG is just a foot-gun waiting to go off.
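
Agreed, the whole thing then collapses to one stateless wrapper. A sketch (CryptGenRandom is the legacy Win32 API; link against advapi32):

    #ifdef _WIN32
    #include <windows.h>
    #include <wincrypt.h>

    /* Ask the OS CSPRNG directly: nothing to seed, fork, or
       swap-protect in this process. */
    int os_random(void *buf, size_t len) {
        HCRYPTPROV h;
        if (!CryptAcquireContext(&h, NULL, NULL, PROV_RSA_FULL,
                                 CRYPT_VERIFYCONTEXT))
            return -1;
        BOOL ok = CryptGenRandom(h, (DWORD)len, (BYTE *)buf);
        CryptReleaseContext(h, 0);
        return ok ? 0 : -1;
    }
    #else
    #include <sys/random.h>

    int os_random(void *buf, size_t len) {
        return getrandom(buf, len, 0) == (ssize_t)len ? 0 : -1;
    }
    #endif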



Snowden kept his communications secure using GPG. The papers he leaked told us that the NSA was reading everyone's emails, and also that they weren't able to break GPG - which made sense: GPG was the respected gold standard. For a moment it looked like GPG might finally get its day in the sun.

And then suddenly, as if overnight, the "crypto community" was all about crapping on it. Open source and open standards were suddenly not so important, for reasons that were never really explained. Proprietary "secure" hardware was suddenly fine and not worth worrying about. Automated updates from a single vendor, yeah, why not. And a theoretical cryptographic property whose real-world impact was marginal-to-nonexistent (perfect forward secrecy) was suddenly the most important thing and a reason to write off any existing cryptosystem.

Call me a conspiracy theorist, but something stinks there.


GPG is fine if properly configured and very carefully used.

The current defaults GPG presents aren't that safe anymore, and everyone who wants to develop an integration with GPG suffers extreme pain because GPG's only interface is the CLI.

Modern E2EE-capable chat solutions are a good replacement: they are cryptographically stronger and don't have the same chance of blowing up that GPG does.

I don't think it's that much of a conspiracy; there is a fair bit of time between those events. It's simply that in recent years, people have been advocating for security tools that are resistant to misuse (GPG isn't) and safe by default (GPG isn't) over other tools.


> The current defaults GPG presents aren't that safe anymore, and everyone who wants to develop an integration with GPG suffers extreme pain because GPG's only interface is the CLI.

Entirely true.

> Modern E2EE-capable chat solutions are a good replacement: they are cryptographically stronger and don't have the same chance of blowing up that GPG does.

I'm not convinced. Most or all of these chat solutions seem to involve closed-source code, single-vendor implementations, closed networks, complicated protocols that lead to incomplete analysis, lack of pseudonymity, and an embrace of closed-source operating systems and hardware, and I think those things are still just as worrying as they were 10 years ago. I'm all for improving on the safety and usability of GPG, but I don't think the tradeoff in overall security that we're currently offered is a good one.


There is Signal (which has open forks), Matrix, XMPP, and several others that support E2EE. For email there isn't a good alternative.


It is worth considering how things appear from the perspectives of the application developers.

* https://dev.gnupg.org/T3894


Well, I got as far as "just have your distro dynamically edit the gcrypt config to use urandom only after startup" before I decided that the GPG devs are being weird about it. It still took them half a year to replace "read(/dev/random)" with "getrandom()".



