No, not really. The Efail attack is a pretty good example of how PGP's flawed design really just sets the system up for researchers to dunk on it; the GnuPG team's belief that they can't make breaking changes without synchronizing with the IETF OpenPGP working group ensures it'll remain like this for a long time.
The Efail attack was almost, if not entirely, a client issue: the affected clients were leaking information from HTML emails. There were no real weaknesses in the OpenPGP standard or the GnuPG implementation of that standard.
>... the GnuPG team's belief that they can't make breaking changes without synchronizing with the IETF OpenPGP working group ...
That does not actually sound like a bad thing to me.
The linked rant against OpenPGP/GnuPG takes the form of a semi-random list of minor issues and annoyances with the OpenPGP standard and the GnuPG implementation, mixed together in no particular order. It ends with the completely absurd solution of just abandoning email altogether. So you have to explain which parts of it support your contention.
The OpenPGP standard is in reality one of the better written and implemented standards in computing (which isn't saying much). There may in the future be something better but it is downright irresponsible to slag it without coming up with any sort of alternative. It is here and it works.
I think it's interesting that when a pattern of vulnerabilities is discovered that exfiltrates the plaintext of PGP-encrypted emails, a pattern that simply cannot occur with modern secure messaging constructions, the immediate impulse of the PGP community is to say "it's not our fault, it's not OpenPGP's fault, it's not GnuPG's fault". Like, it happened, and it happened to multiple implementations, including the most important implementations, but it's nobody's fault; it was just sort of an act of God. Like I said, interesting. Not reassuring, but interesting.
And when a practical application of SHA-1 collision generation to PGP is found, it won't be their fault either. After all, the OpenPGP standard says they have to support SHA-1! Blame the IETF!
Stuff like this happened to TLS for a decade, and then the TLS working group wised up and redesigned the whole protocol to foreclose on these kinds of attacks. That's never going to happen with PGP, despite it having a tiny fraction of the users TLS has.
Support is different from utilization. GPG no longer uses SHA1 for message digests and has not done so for quite some time now. This is what the preferences in a public key generated with gpg2 recently look like:
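(Representative of current GnuPG 2.x defaults, as shown by gpg --edit-key followed by the showpref command; the exact lists can vary by version and configuration:)

    Cipher: AES256, AES192, AES, 3DES
    Digest: SHA512, SHA384, SHA256, SHA224, SHA1
    Compression: ZLIB, BZIP2, ZIP, Uncompressed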
So SHA1 is the last choice. Note that 3DES is there at the end of the symmetric algorithm list. It ain't broken either, so they still include it for backward compatibility. This is a good thing: backward compatibility is essential in a privacy system for a store-and-forward communications medium.
What portion of the implementations have to fall victim to the exact same misbehavior before, in your opinion, it’s plausible to suggest that the issue is a foot-gun on the part of the overall standard/ecosystem?
Throwing out the discontinued things (Outlook 2007) and the webmail things that PGP can't be even sort of secure on (Roundcube, Horde), we end up with 7 bad clients out of 27 total. So 26%. To get that, they allegedly had to downgrade GPG.
Are you serious with this analysis? You counted the number of different client implementations? You don't think it matters that the "26%" includes GPGTools and Thunderbird?
This is like counting how many TLS implementations were vulnerable to Heartbleed. Was WolfSSL? GnuTLS? TLS.py? What, just OpenSSL? I guess things are looking pretty good!
Sorry, to clarify, I’m not asking what percentage of clients were vulnerable in this case. I’m asking what the threshold is, beyond which you would consider the possibility that the issue was with the broader spec/ecosystem rather than the individual tools.
Obviously something higher than 26%, or zero, depending on what you believe happened here...
... and just to be clear, we are only talking about Efail here... There is no pattern of client information leakage issues... So it is hard to generalize.
Regarding the threshold, no, I’m not talking about EFail, I’m speaking generally.
Much of this thread predicates on your claim that an issue which could have been caught in an end-use implementation is only a flaw in that implementation. By contrast, I’m claiming that it’s the responsibility of a secure specification and ecosystem to guard against classes of misuse, such that end-use implementations are not each individually required to mitigate that class of issue.
None of the above is specific to EFail, but the resulting threshold is noteworthy for EFail. Even if, by your numbers, we’re talking about 26% (I’m not sure why it would be “or zero depending on what you believe happened here”, since there’s not really any interpretation of what happened here where 0% of implementations were impacted), that’s a quarter of implementations that were impacted by this class of misuse (the ecosystem/specification not enforcing that a MAC check pass before returning decrypted plaintext).
As tptacek points out in a parallel comment, this is a pretty skewed measurement, because that 26% of implementations accounts for the vast majority of actual users (for example, “Thunderbird” and “GPGTools” account for the same weight in your percentage as “United Internet” and “Mailpile”). But even so, if a quarter of apples in my basket were bad, I’d potentially stop blaming individual apples and start to wonder about quality control at the grocery store.
As is exemplified by newer libraries like Nacl / libsodium, a primary goal of a strong cryptographic library is providing interfaces which make correct usage easy and avoid classes of misuse, so that authors of end-use tools are not each required to replicate the same correct sequence of sanity checking and safeguards, with any misstep being a security fault for their users.
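For instance, here is a minimal sketch using PyNaCl (the Python bindings to libsodium); the specific message and variable names are just illustration, but the point is that decryption either authenticates and returns plaintext or raises, with no code path that hands back unauthenticated plaintext alongside a warning:

    import nacl.secret
    import nacl.utils
    from nacl.exceptions import CryptoError

    # SecretBox bundles XSalsa20 encryption with a Poly1305 MAC.
    key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
    box = nacl.secret.SecretBox(key)
    ciphertext = box.encrypt(b"attack at dawn")

    # Flip one ciphertext byte to simulate Efail-style tampering.
    tampered = ciphertext[:-1] + bytes([ciphertext[-1] ^ 0x01])

    try:
        box.decrypt(tampered)
    except CryptoError:
        # The MAC check failed, so the caller never sees any plaintext,
        # as opposed to getting decrypted bytes plus an ignorable warning.
        print("rejected: ciphertext failed authentication")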
I’m still curious for your threshold. For example, by your measurement methodology, is the EFail attack on S/MIME clients purely a client error? In that case, 23 of 25 tested end-use implementations were vulnerable, or 92%. Is 92% enough widespread impact for ownership to bubble up to the overall S/MIME specification, to guard against this kind of misuse?
OK, but this is all in response to an attempt to have me come up with an acceptable level of client information leaks. That is an obvious trap and an attempt to turn this into a discussion about me. So I had fun with the idea instead.
None of this changes the fact that the GPG people claimed that the current implementation was not vulnerable with any of the clients and that the Efail people had to downgrade GPG so they could even mention GPG at all. If that is true (and there is evidence that it was) then the whole thing was just a hoax, at least as presented.
In other words, at the time of Efail, there was nothing further that the GPG people could do to work around the dodgy client implementations that were leaking data in general. They had already done it.
Even if the GPG implementation and/or OpenPGP standard had been entirely broken, this would still mostly be the email clients' fault. The information leak from URLs loaded in HTML emails was routinely exploited up to that point. Heck, it is still routinely exploited. Efail did not actually result in a fix for all or even most of the affected email clients.
I was curious for your thoughts on the threshold for how widespread an end-use implementation issue needs to be before you’d consider it to be exemplary of an issue with the spec/ecosystem, which is why I asked about that.
Given that you’d established that this issue, in your opinion, wasn’t with GPG but instead was with the end-use implementations, I thought that discussing that threshold would help clarify the point under discussion.
I didn’t ask about an “acceptable” level of anything, nor was this intended to make the discussion about you, except insofar as you are a party to the discussion, so I was attempting to get more details on your position.
The new position you’ve given, that EFail was a “hoax” and that GPG wasn’t vulnerable, is pretty readily false given the details already provided as part of the EFail disclosure. The claim from the GnuPG devs (between https://lists.gnupg.org/pipermail/gnupg-users/2018-May/06032... and https://lists.gnupg.org/pipermail/gnupg-users/2018-May/06033... ) is that GnuPG will print a warning if the MDC is stripped or fails to validate. This isn’t disputed by the EFail release, which notes that the issue occurs because decrypted plaintext is returned alongside the eventual warning, and that common clients will utilize the decrypted plaintext despite the warning.
This is the crux of the discussion as to whether this is an end-use implementation problem or a spec/ecosystem problem. My position is that as a security-focused tool, GnuPG should not present foot-gun opportunities like this which require all end-use implementations to handle their own validation for these kind of failure modes. A proper security-focused tool would refuse to provide decrypted plaintext in the case that the MAC check failed, because it would have required the MAC check to pass before ever starting to decrypt anything.
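To make that shape concrete, here is a toy encrypt-then-MAC sketch (illustrative only, not GnuPG's actual design; the SHA-256 keystream is just to keep the example dependency-free, not a real cipher). The decrypt path verifies the MAC before any plaintext exists, so a failed check can only raise, never warn-and-return:

    import hashlib
    import hmac
    import os

    def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
        # Toy keystream derived with SHA-256; do not use for real encryption.
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def encrypt(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
        nonce = os.urandom(16)
        body = bytes(p ^ k for p, k in
                     zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
        tag = hmac.new(mac_key, nonce + body, hashlib.sha256).digest()
        return nonce + body + tag

    def decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
        nonce, body, tag = blob[:16], blob[16:-32], blob[-32:]
        expected = hmac.new(mac_key, nonce + body, hashlib.sha256).digest()
        # Fail closed: authenticate before decrypting. There is no path that
        # returns plaintext alongside a warning the caller might ignore.
        if not hmac.compare_digest(tag, expected):
            raise ValueError("MAC check failed; refusing to decrypt")
        return bytes(c ^ k for c, k in
                     zip(body, _keystream(enc_key, nonce, len(body))))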
GPG just happened to have a check that could have prevented this particular attack if the client had done certain things in response to the failure of that check. S/MIME didn't have a check of that type and as a result was more affected by the attack. S/MIME had one less "footgun" than GPG did. That still didn't help anyone, and in this particular case made things worse in practice.
There is a tendency in these sorts of things to get so wrapped up in the details that the root issue gets forgotten. In this case the root issue is the leakage of information from HTML emails. After that, what really matters here? What point is there in considering each and every thing that could have been different that would have prevented the attack? Sure, if I hadn't left the house on Thursday I would not have been hit by the bus, but this particular insight is not valuable in any way.
The hoax here is the suggestion that PGP (and S/MIME for that matter) was broken in some way. The original paper was called "Efail: Breaking S/MIME and OpenPGP Email Encryption using Exfiltration Channels" which was not just misleading, it was straight up wrong.
S/MIME didn’t have one less footgun: it had roughly the same footgun. The fact that GPG prints a warning isn’t the footgun, the footgun is the fact that decrypted plaintext was returned even if the MDC check failed.
The point of considering each thing that could have prevented an attack is clear, and is a central part of threat modeling and defense in depth. Those concepts aren’t really controversial. Thinking critically about the parts of a system that can contribute to adverse results, and then applying mitigations and avoiding pitfalls, is a pretty core part of basically all engineering (software and otherwise).
The bus analogy (if I hadn’t left the house on the day I got hit by a bus, I’d not have gotten hit) would, in a threat modeling context, be accurately identified as both ‘definitely true’ and ‘low probability’. Yes, leaving your house is dangerous, for a variety of reasons. But the relative danger of leaving your house vs not leaving your house is more questionable (staying inside the house is likewise dangerous), and the probability of leaving-the-house causing hit-by-bus is low. But a security tool returning plaintext despite a MAC fail isn’t like leaving your house, it’s like looking both ways once you’re already crossing the street. GPG warns you there’s a car coming, but you’re already standing in front of the car. A dexterous human could potentially dive out of the way, as an end-use implementation could discard the decrypted plaintext when it sees the MDC warning, but a root-cause-analysis would still rightly suggest that you should be looking both ways before crossing, rather than during.
Taking this thread in aggregate, it’s interesting to me how the goalposts keep shifting. The original thrust was “There were no real weaknesses in the OpenPGP standard or the GnuPG implementation of that standard”. When pressed, your position shifted to include “Even if the GPG implementation and/or OpenPGP standard had been entirely broken, this would still mostly be the email clients' fault.” You’ve now further shifted to questioning why we should even worry about whether GPG could be better (“What point is there in considering each and every thing that could have been different that would have prevented the attack?”).
I don’t understand the rigidity with which you refuse to consider the possibility that GPG could have better handled this kind of threat.
> OK, but this is all in response to an attempt to have me come up with an acceptable level of client information leaks. Since that is an obvious trap I had fun with the idea instead.
Oh, you're commenting badly on purpose because you misinterpreted something as a 'trap'? Great.
It's not a trap question. If something gets misimplemented once, it's probably not an issue with the spec. If it happens over and over again, it's suspicious.
It has evolved, in the standard as well as in the implementations. It is a bit silly to claim that the current thing is bad because there was once a program with a similar name.