GPG and Me (2015) (moxie.org)
101 points by todsacerdoti on Nov 29, 2020 | 111 comments


> The journalists who depend on it struggle with it and often mess up (“I send you the private key to communicate privately, right?”), the activists who use it do so relatively sparingly (“wait, this thing wants my finger print?”), and no other sane person is willing to use it by default.

I used to think that the GPG web-of-trust verification in person was antiquated and no one would do it in practice. Which they don't.

With Signal, when someone changes their phone, or reinstalls Signal, or one of five other things, I am supposed to...verify their "safety number" in person.

I am not smart enough to make meaningful comments about cryptography, but the expectations from the user to verify things in real life seem the same in both cases.

Am I just wrong here, or understanding something wrong? I swear everyone I talk to on Signal has changed their safety numbers at least once in the last couple years. Should I not talk to them any more until I verify those in person?


> Should I not talk to them any more until I verify those in person?

It depends on your threat model.

If you're a journalist talking to sources, then you should probably put down the phone and reach out through some other equally secure channel to confirm they're the ones who have changed their safety number (and even then, at this kind of level I'd have an "under duress" safe word.)

If you're like me and you're browsing Hacker News in your underwear shouting at people's posts because they're wrong about Go, you're probably OK to assume your mate just changed their phone.


Most of my Signal contacts also belong to a group on Signal. When I see the message that a contact's verification number has changed, I ask the contact in the Signal group if they got a new phone.


What prevents an attacker (who obtained access to your contact's phone number or is acting as a MITM) from forging a message (pretending to be your contact) in that Signal group, saying that yes, "he" indeed got a new phone?

Or are you hoping that the attacker is only attacking your personal conversation with your contact, not your contact's other conversations?


I know it's not sufficiently secure. For me, it's about educating other Signal users about the significance of the message that pops up. Almost all of the people on my Signal contact list use Signal because I asked them to do so.


It's like with any of these things that leave the hard problem of identity entirely to the user. You don't have to verify the safety number, but if you don't, you implicitly trust Signal not to intercept your communications. A valid choice, but these sorts of issues should be made clearer to the user. Just tell the user that if you want effective end-to-end communications there are certain things you have to do. A truth-in-advertising thing.


Forgive me if I misunderstand, but I thought Signal's protocol prevented anyone, including Signal, from intercepting messages, as these are encrypted end-to-end.

The threat model is that someone else is impersonating your correspondent. This is not message interception but identity verification. Your communication with whoever is on the other end is secure within Signal, but your correspondent may do whatever they wish with the messages.


If you haven't verified the identity of the other end then Signal could be MITMing you the whole time.


> Am I just wrong here, or understanding something wrong? I swear everyone I talk to on Signal has changed their safety numbers at least once in the last couple years. Should I not talk to them any more until I verify those in person?

I think keybase.io are on to the correct idea here. There was a good blog post about it somewhere in their blog.

It wasn't this one - but this one does talk specifically to the problem you describe.

https://keybase.io/blog/chat-apps-softer-than-tofu


Keybase also has the system where you can approve a new device from another. Why can't we have the same thing w/ Signal? Maybe tack on the PIN to make it a bit more secure.


If you backup the Signal data on the old device and restore it on the new one, your safety number won't change. This is the proper way to securely migrate to a new device.


I think that's a serious backdoor for a coercive attacker. They only need access to the original device once and can then assume the identity of the contact on another device.


You might be thinking of this one? https://keybase.io/blog/keybase-new-key-model


Well, I guess it depends on your threat model. Do you casually use Signal because you are mass-surveillance aware, or is it really important for you to be sure who you're sending messages to? Sounds like the former, so just let it be. Bonus: you get to know when a friend has lost their phone or bought a new one (unless they are very careful and set up a PIN :)

I think Signal is a pretty good example of how to easily bring encrypted communication to the masses, and to some individuals who may need the level of protection it provides. Is it perfect? No. Does it cover all use cases? No. Is it easier to set up than PGP? Definitely.


The journalists should implement appropriate OpSec, including learning to encrypt confidential information properly; be it using PGP or a more involved system:

https://securedrop.org/

or otherwise seek professional help.

Sending a message via WhatsApp or Signal may hide it from the Chinese government, but it hands the message over to Apple and Google, who control the mobile platforms, and to the intelligence agencies with whom they collaborate.


This sounds great in theory, but the journalists who most often need this kind of help work in underfunded and hostile environments where technical people refuse to help them for their own safety. The few times I've given my analysis of some (trivially) technical material to a journalist, I was told that most people shy away from material they think is sensitive (it wasn't; technically it was public information). I saw this firsthand another time when I tried to introduce a journalist to an expert on a subject, but the expert just wouldn't get involved. And so they're left with terrible OpSec and technical people judging them from the sidelines.

This is, of course, all based on my own experience from a developing country.


> I used to think that the GPG web-of-trust verification in person was antiquated and no one would do it in practice. Which they don't.

Debian developers authenticate through a GPG web of trust. In fact, one requirement for becoming a DD is obtaining two links into the existing DD web of trust.


You probably don't need to verify just to tell someone "hello" or exchange other non-secret conversation. But it would be wise to verify before you plan a secret conspiracy with the person.


> I am not smart enough to make meaningful comments about cryptography, but the expectations from the user to verify things in real life seem the same in both cases.

It pretty much has to be, because it doesn't fall out of any particular crypto algorithm or implementation. When you're setting up an encrypted conversation with some other person, let's say "Bob", how do you ensure that you've actually set it up with Bob, vs. someone pretending to be Bob, or merely forwarding your messages to Bob so that both you & Bob think you're in contact, when you've really got an eavesdropper between you?

In essence, you have to make sure that whatever cryptographic material you've either created or exchanged in order to communicate — e.g., your public keys — is the right material. (That is, you have actual-Bob's key, not an attacker's, and vice versa.) So, you must verify it, in whatever manner will convince you that you've gotten the right key(s). Verifying in person is essentially impossible to fake, unless you believe in masks like those from Mission Impossible.

In this regard, the really important bit is authentication — making sure you're talking to the right person, not just making sure the conversation is private. It does no good to have a private conversation with the wrong person. One might refer to this as the "binding" between the cryptographic key(s) and the real-world meatspace entity (e.g., Bob) those keys represent. Is the binding we have the right binding? And that's the hard problem: it isn't "solvable" in the sense that we can make it invisible, but good UI can help immensely here.

That said, in-person verification isn't the only way you can do things. Another approach is "trust on first use" ("TOFU"), where you just blindly accept the key during the first conversation, but after that, you expect the same key. The idea is that it's a trade-off between convenience and safety — as long as that first exchange is fine, then afterwards, we'd notice anything fishy. And if you get a different key, there needs to be a reason, like Bob saying "oh, I have a new phone now" — but does Bob actually have a new phone (and for some reason couldn't reuse the key from the old phone), or is someone impersonating Bob, using that as an excuse to get you to accept the new key?
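The TOFU mechanism itself is tiny when written down. A rough Python sketch (the names and the in-memory storage are made up for illustration; this is not Signal's actual implementation):

    # Pin the first key seen for a contact; complain if a later key differs.
    import hashlib

    pinned = {}  # contact -> fingerprint recorded on first contact

    def fingerprint(public_key: bytes) -> str:
        return hashlib.sha256(public_key).hexdigest()

    def accept_key(contact: str, public_key: bytes) -> bool:
        fp = fingerprint(public_key)
        if contact not in pinned:
            pinned[contact] = fp        # trust on first use
            return True
        return pinned[contact] == fp    # any later key must match the pinned one

A False from that last comparison is the "safety number changed" moment: either Bob genuinely has a new key, or someone is in the middle, and the software alone cannot tell you which.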

Another example is the CA system used by certificates on the web. CAs help establish the binding between a website and its private key by signing certificates. They verify keys, e.g., through the ACME challenges. It's definitely not without its defects, and a fair amount of ink has been spilled over the problems with the CA system. But it again represents a system for figuring out those bindings.

Now, what you should do, e.g., whether TOFU suffices or not, should probably be dictated by some form of threat model. What risks are acceptable to you, what attacks do you expect to see, or that you must guard against (perhaps, e.g., due to legal requirement)? Those will tell you if, e.g., TOFU suffices, or if you need something stronger.

From personal experience, specifically with GPG, just getting someone to display, e.g., a GPG fingerprint, long enough to verify it, is a PITA. It takes all of 15 minutes, but the people I've generally had to do this with are too impatient, and expect it to be simpler and quicker than GPG makes it. Marlinspike's article touches on that sort of stuff in "technological dead end"; although he is mentioning it more from an API integration point of view (which is painful), just day-to-day use of the CLI is also painful, and painful to teach.


Moxie is just one of several security experts who are fed up with PGP. For a similar article see [What’s the matter with PGP?][1] or [The PGP Problem][2].

[1]: https://blog.cryptographyengineering.com/2014/08/13/whats-ma... [2]: https://latacora.micro.blog/2019/07/16/the-pgp-problem.html


The interesting question isn't "how many cryptography engineers repudiate PGP"; it's "how many cryptography engineers stick up for it". I'd be interested in some examples of that. Are there any?


Plenty of people stick up for individual aspects of the PGP architecture. Apple sticks up for non-ephemeral message keys by implementing that in iMessage. A lot of people stuck up for non-repudiation when Matthew Green asked Google to publish its DKIM keys.

Non-repudiation and key disclosure are two of the bigger problems with PGP, so in that sense people still stick up for PGP.


Can you name a well-known cryptography expert who has within the last 15 years written any kind of defense of PGP?


Yeah, me. I solved this thing called the cryptopals challenge. You've probably never even heard of it. Only elite hackers know about it.

Edit: Clearly I'm not funny.


I'm not being snarky. I'm genuinely asking. Is there a single one?


I was trying to be funny, not snarky.

My point was that some of the uglier parts of PGP are still actively promoted by people who build products that affect hundreds of millions of people. That's all.


My response to "The PGP Problem":

* https://articles.59.ca/doku.php?id=pgpfan:tpp


This is a good response for people who believe "[PGP has an extensible record format] because a long term standard must be extensible". I think you would have trouble finding cryptography engineers who think this is a strong statement; the last 20-odd years of practical cryptography research have instead made a very strong case that the most "extensibility" you want in a cryptosystem is a version number.

I think the rest follows from there. It's easy to mount a defense of PGP if you just ignore cryptography engineering and focus on how important it is for tools to follow the same proven Unix-flavored design as Awk. Which, of course, is what PGP is: the Awk of cryptography.


FWIW, that markup doesn’t work here.


It’s still more readable than just inlining the urls.


This post is not a complaint about GPG, it’s a complaint about the people who use GPG to contact Moxie.

> There just seems to be something particular about people who try GPG and conclude that it’s a realistic path to introducing private communication in their lives for casual correspondence with strangers.

Moxie’s objection is that people who initiate with GPG are more often a waste of his time than people who do not. I think Moxie is right, and remains right today, about this point.

I would consider it impolite/rude to use GPG to email someone with whom I have no existing relationship, unless explicitly directed to do so for professional use. When PGP was young I would attach a signature, but over time I found that this cheapened my conversations and set a tone I didn’t consider acceptable to use with others.

If someone blind emailed me with a GPG encrypted message, I would delete it without attempting to decrypt it and block the sender, because clearly they have no clue what they’re doing, and I don’t have time for that. So, then, I remain in agreement with the (2015) Moxie that posted this.


> This post is not a complaint about GPG,

Yes, it is. Read beyond the first section.

> it’s a complaint about the people who use GPG to contact Moxie.

Even that part is in fact a veiled complaint about GPG: it basically says that GPG's design and implementation are so crufty that it acts like some kind of negative personality filter: the only people who use it voluntarily are somehow unpleasant to correspond with (he never actually specifies in what way).


> because clearly they have no clue what they’re doing

I don't understand your perspective. The whole public-key/private-key approach exists so that it is possible to establish secure communications without first meeting offline to exchange a symmetric key.

You have released your public key. This signals to others that you want your communication to be end-to-end encrypted. You also published your e-mail address to allow other people to contact you.

The cost for encryption is low, once you know how to use the software. So why not take advantage of it?


Can you really tell it's rude without looking at the content of the message? It might be odd, and it's likely to be unnecessary and, yes, rude; but I can imagine situations where I might be contacted unexpectedly with a GPG-encrypted message for a good reason.


I can too, but that’s already factored into my choices.


It's worth noting that my use of GPG is completely disjoint from everything moxie discusses here.

I use GPG to sign things. Git commits, mostly, but I've used it to sign legal documents as well, and a small grab-bag of other cases.

Is it the best for this application? It is not. Many of the objectionable aspects of the PGP standard apply to signing as well.

But also, yes, it is: it's supported by GitHub and GitLab, and if any other way of signing commits is supported, I'm not aware of that.

I'd be happy to switch to a better, modern cryptosystem for signing things, if it were adequately supported.

What I never do, is use GPG for encrypted communication. Given moxie's particular claim to fame, I don't blame him for focusing on that application, and he is (of course) correct: if you're using GPG to communicate 'securely', stop.


"it's supported by GitHub and GitLab, and if any other way of signing commits is supported, I'm not aware of that"

X.509 is another option.

https://docs.github.com/en/free-pro-team@latest/github/authe...

https://docs.gitlab.com/ee/user/project/repository/x509_sign...

https://github.com/github/smimesign


For signatures, minisign is superior to PGP in every way I can think of, and it's a standard format (it's based on OpenBSD's package signing system, signify).
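To give a sense of what minisign/signify do at the core, here is a rough Ed25519 sign/verify sketch in Python, assuming the third-party "cryptography" package; this shows only the underlying primitive, not minisign's key or signature file format:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    payload = b"contents of release-1.2.3.tar.gz"  # the bytes being signed
    signature = private_key.sign(payload)          # 64-byte detached signature

    try:
        public_key.verify(signature, payload)      # raises if payload or signature was tampered with
        print("signature OK")
    except InvalidSignature:
        print("bad signature")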


Sounds great to me.

When I search for "minisign gitlab" I find implementations of minisign, when I search for "GPG gitlab" I find out how to use my GPG key to sign git commits.

When that changes, I'll switch.

There's no sarcasm intended here whatsoever, to be clear.


No sarcasm taken, that makes perfect sense. Keep minisign/signify in mind for future designs.


That seems a bit unfair, since minisign is developed on Gitlab and GPG is not?


> if you're using GPG to communicate 'securely', stop.

But then what do I use instead? I have run into this problem before and I can't seem to find a suitable replacement.


The popular answer is Age.


Looks much worse. No key management, no well-known ways of integrating with apps, nothing. It's just a tiny wrapper around a set of crypto primitives, not a communication cryptosystem in itself.


I think it depends on the type of communication being made, when, why, etc.

For example, if someone wants to send you an anonymous message, they could just reply right here to this thread with some blob of PGP-encrypted text, anonymously. Plausible deniability would exist, and metadata could easily be obfuscated. Here's an example, but I'm not going to make a throwaway.

-----BEGIN PGP MESSAGE-----
Comment: GPG

hQIMA9gXRUOEWVQSAQ/9FppY6pK4P6xErZw9/M9UXtrUmRhKIWzorfXctiji0e8z
f/Qgx6sxlIHT5HHKvdhagMVcPL7GqdYUSlfNmiyXBq3jPvkYPzzFfxGeIRSefBQP
pHqo0vsMz9lZ572rtx2iIfBkPd0WDbzKEABID2REetJSaFUARoAKsiOaNUhIfoqd
rOnytETv63WBYVBdTimNgtkhTWtT9LVobrKv9EfTLfbErVgONWmuo3mXXRbeOswG
50nqT7LQrV/nnzKj0Eq5nVjaIHiTo9EXwQgs3MGl8ZFAV45MWJ+7Sw9sbD/MpXhv
i/za17Z+MOsmZs0ja3iXq/8N+xKCqYJv2bMh9EzTZ6mOOwVDIUDi59+S7pA5QzJx
xGTwppJ9XDajdsvg91H+DCWgc8/Ln7/5FeC4F2QMhnjrF7KiR9qsux1mZfQbRFIs
T5krRzMOvbHgb3/br/wdxbWgIKYpVFJiUmIO502R0cWw3X6ni6zz9yybfHlzUCm3
FBDbi5NNY7tcs+gehtRcwhrjfDHoxRZ17oOnexmD9fYvQA5FWvt4UVG9TBwEawnh
BvBxhkuAgcMue2FgsiYlcWQ214FoXFuJSPlG8d01VcDmPLG5kXbNftwlS61dl6yW
dyk5yGZw9uR0BKpCuvDk7f8uURDj8p7t69+u2M08FPYyaNImtBtJV96ZWI1gjV3S
NwFPb/UKnqc/xQc6asORi5r7Mp/0PDFuasoInR8pp5rmIYHaJYzTQfdNxuL3aD5w
EAxDPMn+bCI=
=3DBv
-----END PGP MESSAGE-----


I was mildly disappointed that this doesn't appear to be encrypted to my key, since I have Keybase in my profile ;)

The general shape of the argument against this is that you can use age, and avoid the many well-documented problems with the OpenPGP standard.

More people have PGP keys, naturally, but that's a bit of a chicken-and-egg problem. By the time you're messing with cloak-and-dagger stuff like dead drops on public message boards, taking the extra effort to set up to use a better cryptosystem strikes me as relatively minor.


Ah! I did encrypt it for: rsa4096/D817454384595412 2019-12-24

The formatting was messed up but it should be fixed now!


the difficulties here are illustrating some point about the perils of GPG, let's put it that way!

rsa4096/0355E91DFA3EFAB3 2019-12-24, that's me

lord knows how you have the right creation date with the wrong fingerprint... perhaps we can agree that we shouldn't be trusting our lives and freedom to this piece of software.


The fingerprint is:

Primary key fingerprint: A92D 5A61 7818 99B0 401F D8AC 0355 E91D FA3E FAB3

Which matches yours...

Again, it really depends on the situation. I wouldn't trust sending an anonymous message through Signal, for example, as it is tied to a phone number. I would, however, happily post an anonymous message through a VPN somewhere on HN, where it would go unseen by anyone other than the recipient, who would be directed to it at a later date across a different medium.


clears throat


A promising contender to "replace" GPG in which you might be interested is age: https://github.com/FiloSottile/age

It's based on ChaCha20-Poly1305 AEAD and the author is well respected in the crypto community.


Trying to read more about it, I can't get to the website they have listed: https://age-encryption.org/ -> NET::ERR_CERT_COMMON_NAME_INVALID

"You cannot visit age-encryption.org right now because the website uses HSTS."


So it's OK to use Age for sending secure emails?

FYI, no, I'm not in the military or an intelligence agency. I'm just a software developer who likes to send, say, source code or documents securely.


The biggest problem for me (note: others may not care, or may consider it an advantage): it's implemented in Go.

I'm however interested in something as small as possible, with a minimal number of dependencies and implemented in C. Yes, I know it's an old language with its own issues. But C is the "highest level" I currently consider acceptable for something that is meant to compete with GPG.


To me, writing cryptographic software in a memory unsafe language these days seems a huge mistake.


Whatever your favorite language is, try researching the dependencies used in its cryptographic primitives, especially in the use cases where performance matters. More often than not, what at the end "does the job" is some library written in C (even with asm parts) or marked with "unsafe" tags. The practice is still different from what the idealists believe.


That's a potential concern, as C is still probably unrivaled for portability. However, portability is not the only concern.

Age also has a spec, so it can be reimplemented in other languages: https://age-encryption.org/v1

There's already a Rust implementation: https://github.com/str4d/rage


Am I missing something, or do the existing tests in both the age and rage repositories not cover the specification?

The other languages could be used for a proof-of-concept implementation, or for additional comparison tests, but having C as the most important version is unavoidable.


Interesting. Does it have momentum behind it?


It's the non-GPG general-purpose encryption tool with the most momentum, you can say that much.

Something that surprised me when I researched this (I wrote a similar tool internally at my last job, along with writing a blog post about the inadequacy of PGP) is that general-purpose encryption doesn't really have as many use cases as you'd assume. The reason for that is that modern cryptosystems are purpose-built for applications. Messaging cryptography (and email is a messaging system!) is different from backup cryptography, which is in turn different from disk cryptography.

Probably the single most common application of general-purpose cryptography is cryptographic tokens (most common subset: cookies). But even there, your cryptography tends to be specialized, in terms of what it doesn't have --- in particular, metadata, which tends to be an antifeature for tokens.

(I like Age a lot and you should use it if it makes sense for your application, though plain-ol'-secretbox is probably the right call for your encrypted cookie).
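For reference, "plain-ol'-secretbox" for something like an encrypted cookie is only a few lines with PyNaCl (the cookie contents below are made up for the example):

    import nacl.secret
    import nacl.utils

    key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)  # 32-byte server-side secret
    box = nacl.secret.SecretBox(key)

    cookie = box.encrypt(b'{"user_id": 42}')   # a random nonce is generated and prepended
    assert box.decrypt(cookie) == b'{"user_id": 42}'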


What's wrong with using gpg to communicate securely?


You can see it immediately above [0], where two people, both presumably very familiar with PGP, could not do so.

[0] https://news.ycombinator.com/item?id=25248465


I don't believe they were both very familiar with PGP. It's not hard to use if you read https://gnupg.org/gph/en/manual.html or https://emailselfdefense.fsf.org/en/. If I was in a position where I had to take privacy very seriously, i.e. as an activist or journalist, spending a couple of hours reading a manual doesn't seem much to ask.


It's weird that I got downvoted for asking a question. Going against the HN grain, I guess.


Original discussion (2015) on this had some pretty interesting opinions on the matter, https://news.ycombinator.com/item?id=16057579


If you need any more evidence of PGP being used poorly, just look at Darknet markets. It's effectively the standard for communication with vendors there. Here's a first Google link for the matter:

https://thebitcoinnews.com/how-to-encrypt-messages-with-pgp-...

We can quote: "Don’t let the name mislead you: Pretty Good Privacy (PGP) is better than good – it’s excellent." Definitely at odds with a lot of the community. But let's go on.

One of the next Google hits is this fantastic "tool" often recommended for users:

https://darknetmarkets.org/pgp/

Yes, it's a website that generates your keys for you. Added bonus, "e-mail address: Required".

Assuming people are smart enough to use their own local key generator, how many do you want to bet effectively dox themselves by signing with real email addresses?


The darknet howtos tend to emphasize that you should not use a real email address and should not use an online key generator. The online key generator you mentioned itself recommends not to use it and provides links to programs and howtos.

PGP is actually the best available tool here. Something like Signal would involve way too much meta information leakage.


One aspect that most GPG critics miss is the identity management capability of GPG. GPG is almost the only publicly available libre solution for creating an immutable online identity under which people can prove their statements. More on this can be read at: https://oyd.org.tr/en/articles/defense-of-gpg/


I think about this article a lot.

I think this perhaps discounts the usefulness of GPG in specific professional applications.

He's judging it on mass adoption; that's fine, but it's not the kind of tool that will ever be adopted by most people. It serves a specific purpose (and shoehorning it into email isn't it), and for that purpose I (and other competent professionals) still use and trust it.

I am also excited about new tools (like age, which will also never have hundreds of millions of users) along these lines.

I think some similar criticisms could be levied against, say, Adobe Premiere, even though it has an order of magnitude or two more users than PGP (although perhaps not when you consider every apt user in Debian and Debianlikes such as Ubuntu). Industrial software for trained professionals has a lot of sharp corners and footguns. Not all software is (or should be) Instagram.

He's 100% right that GPG email sucks, though. Don't use GPG for email except when attaching secrets.


> Don't use GPG for email except when attaching secrets.

Don’t even do that. Find a different way to send those secrets.


Email is the only widely used decentralized communication system that most people have access to that doesn't require showing ID.

Everyone this applies to already has GPG installed.

It's fine. If you have some specific argument against it, speak up.


You give up repudiability by using PGP/GPG, which is a pretty big deal when transmitting secrets, and in the default configuration that everyone has, GPG involves using long-lived secret keys on untrusted devices.

The first is a problem that is often not thought about in the moment. You know you need to transmit a secret, but it is rare that people think about under what future circumstances they will want to deny having sent it.

The second is a practical defeat of the assumed security model. The GPG secret is a high-value target that is relatively easy to snag from unencrypted memory on untrustworthy devices.


> You give up repudiability by using PGP/GPG

Signing is optional.


Never say something like that without offering a proper replacement that is provably more secure.


Someone did make an alternative to GnuPG called OpMSG, but I'm not certain that it is reliably more secure, other than not depending on an online Web of Trust / key server.

https://github.com/stealth/opmsg


The thing is, it is less secure in many ways, because you are giving up repudiability and using long-lived keys stored on untrusted devices.


Less secure than what? Unencrypted attachments?


The guy who wrote the OP is the guy behind Signal. That's the most obvious alternative.

But there's not a go-to answer because the lesson learned from PGP is that there can't be a single crypto Swiss army knife of a protocol. There are conflicting tradeoffs that necessitate the use of different protocols for different use cases. The best replacement for PGP is probably a basket of different protocols. See for example: https://blog.gtank.cc/modern-alternatives-to-pgp/


Perhaps the issue is with email itself. GPG is just a hack to make email secure (and it can’t even encrypt metadata). GPG has such a small user base that we could ditch it at any time, but email is so ubiquitous that we might be stuck with it for a long time yet.


Messaging apps such as WA or Signal have no means of verifying keys, require phone numbers tied to the users’ real identity, operate based on a client-server model in which the server could be compromised by well-resourced agencies and require black-box phones that are not secure against phone manufacturers and agencies with whom they collaborate.

Users should be warned of the perils of phones not quite under their control. Set up a proper FOSS machine that you understand, and use your own encryption (be it PGP or some other tool serving the same purpose).


Signal requires phone numbers in order to protect users' privacy: the choice to require a phone number comes from storing the contact list on the local device, which protects users' metadata. And PGP is never as secure as Signal or other encrypted messengers (e.g. WhatsApp). Many cryptographers have criticized PGP's weak security (https://latacora.micro.blog/2019/07/16/the-pgp-problem.html, https://latacora.micro.blog/2020/02/19/stop-using-encrypted...., https://blog.cryptographyengineering.com/2014/08/13/whats-ma..., https://blog.filippo.io/giving-up-on-long-term-pgp/) and even Edward Snowden doesn't use it now (https://twitter.com/Snowden/status/1175437588129308672).


Why wouldn't Signal be able to store local data without a phone number? Your argument seems like a red herring.


They could also just ask for an email, like wire does.

Edit: or generate a UUID type random string for each new device.


This is because, unlike Signal, Wire stores user contacts on the server. (https://www.vice.com/en/article/gvzw5x/secure-messaging-app-...)


>or Signal have no means of verifying keys

To be fair, you've always been able to verify safety numbers (i.e. fingerprints i.e. public key hashes).

https://signal.org/blog/safety-number-updates/


Verifying a public key over a secure channel works trivially for any public key cryptography system.

I was referring to ways to establish such secure secondary channels. Either verify a key yourself, e.g. in person, or use distributed trust to average out the noise.

For example, keybase has an approach: linking various identity information to keys.

Signal is secure under a strangely narrow interpretation of security. There are problems if you look more broadly.



I've recently spent a fair bit of time thinking about where encrypted messaging has been and where it ended up...

Yes, PGP sucks. Unfortunately everything else sucks as well.

Signal Messenger is as good an example as anything. To use it effectively you need to know these concepts:

* What safety numbers are. Why you need to use them to establish an identity for a contact. Why you need to do something when they unexpectedly change. What that something is that you need to do.

* Why you can't keep around old messages if you want forward secrecy. What forward secrecy is and why you might want it.

* What deniability is. What you have to do if you want to attempt to exercise it.

Note that Signal ended up making it significantly easier to ignore a change of safety numbers some years ago. Users were simply bothered by the warnings, with no idea why they were important.

Public key cryptography is complicated and involves multiple basic concepts that need to be learned if you are going to use it for messaging. Making things more complicated and adding features, as in the case of the Signal protocol, does not help if it adds to the list of concepts. We haven't yet come to terms with the basic stuff.

There seems to be a cycle here. Someone comes out with a cool new thing. Regular people fail to be able to use that thing. No one seems to know why. Repeat. Signal is a good current example of that.

The thing is, the root cause has been known for something like 20 years now. I will leave with a quote from the classic encrypted messaging study, Why Johnny Can't Encrypt:

>... it is clear that there is a need to communicate an accurate conceptual model of the security to the user as quickly as possible. The smaller and simpler that conceptual model is, the more plausible it will be that we can succeed in doing so.


It's interesting to see a critique of Signal and defense of PGP premised on how difficult it is to verify safety numbers, given that the dominant APIs for PGP release unverified plaintext to callers, with verification passed out of band.


PGP lacks a good library implementation. The remedy for that is not creating a new protocol with radically different properties, one that's difficult to analyse and gives up important properties. The remedy is writing a good library implementation.


That's not true. PGP's problems with message verification are endemic to the protocol; see, for instance, the MDC. This isn't just a known problem with PGP, it's probably the single most notorious design problem in the protocol, so it's surprising to see this response.


Feigned surprise is a cheap rhetorical trick; pretending that your faction has the expert consensus doesn't make it so. And pretending that you were talking about the protocol rather than the libraries a minute ago won't fool anyone when your post saying "APIs" is right there.

There is a specific imperfection in the protocol around verification; there is a mitigation in place, and a good library implementation (that didn't simply 1:1 replicate the protocol) would include that. The consensus is largely that the mitigation is adequate, but again, if you disagree with that then the remedy would be a protocol update that addresses that specific issue, not a radically different protocol with very different properties that's missing key protections.


My faction has the expert consensus.

Feel free to cite the countervailing expert opinion. I think at this point any reputable cryptography engineer would do!


I am not sure how fair this is.

You are complaining about "forward secrecy" and "deniability" being hard -- fair enough, but PGP does not provide either of those! So if you want to be "as good as PGP", there is no need to learn those concepts.

The "safety numbers" concept does need to be learned, but I think it is way easier to explain than PGP key management.

The goal is not to have the perfect system, it is to have something better than PGP. And this is a pretty low bar.


>So if you want to be "as good as PGP", there is no need to learn those concepts.

Sure, but we really have to aspire to be better than PGP. We are obviously missing the point if we are wasting time on new features when no one can use the old features.

>The "safety numbers" concept does need to be learned, but I think it is way easier to explain than PGP key management.

If the question is "why do I have to care when the safety number changes", then the answer still involves the basic concept of cryptographic identity in both cases.


This is from 2015.


Things got worse after 2015, not better, for PGP.


I get the problems with email encryption, but what do people think of email signing? It's a shame it's so hard to do, but I always thought it'd be a good protection against phishing.


>I get the problems with email encryption, but what do people think of email signing? It's a shame it's so hard to do, but I always thought it'd be a good protection against phishing.

I think email signing is wonderful. In fact, my email client (Thunderbird) is configured to sign all messages sent by my primary identity.

For many years, I used Enigmail, until Thunderbird integrated the functionality a few months ago.

However, the value of signing is diluted significantly by the narrow adoption and ignorance of the greater populace.

As an example, I was exchanging emails with my financial advisor and giving them specific instructions on the disposition of certain funds.

Having a GPG signature is great for this, since they can confirm that the email hasn't been modified in transit.

But rather than being glad that I cryptographically signed my instructions, I received a phone call saying something along the lines of "there's all this junk at the end of your email. We're concerned it's been tampered with."

When I explained what the "junk" was, rather than asking how to confirm the signature in the future, I was met with confusion and the distinct impression that I was acting in an inappropriate way.

Their solution: All instructions from me must be verbally confirmed via telephone.

Not that such verification is a bad thing, but the complete ignorance, then rejection of a perfectly valid confirmation mechanism (without any interest in learning how to secure their communications) by folks who should care about such things left me flabbergasted.

As long as the above is the norm, email signing (and I still do it where it matters) will be much less valuable than it could be.

More's the pity.


That's frustrating. It's also very variable how different email clients deal with signing. My work Outlook will flag it with a sort of rosette, whereas Gmail doesn't mark it in any obvious way.


> Instead of developing opinionated software with a simple interface, GPG was written to be as powerful and flexible as possible. It’s up to the user whether the underlying cipher is SERPENT or IDEA or TwoFish.

IMO GPG doesn't owe its lack of widespread adoption to its flexibility, learning curve, code cruft, forward secrecy, or anything of the sort. Its various usability issues do not preclude the creation of an opinionated and easy-to-use GUI front-end.

Corporations like Google and the government want to read your email. They could invest in private email communication but they choose not to. Arguably the network effects of widespread adoption of Gmail/Yahoo/etc even stifle folks pushing for GPG adoption. While Gmail claims they no longer "read" your email contents for ad purposes, your ISP most likely does[1], and no doubt Google still uses the data for something.

According to Gilens et al[4], large corporations greatly influence policy decisions, and now we have the entire Western world trying[2] (and sometimes succeeding[3]) to effectively outlaw private communications.

FastMail made a (thin) argument against using GPG[5], citing various problems (like losing key = losing email, and email search becomes hard). They also wax on some "transmit your key" nonsense. However, none of this seems to stop ProtonMail from making excellent use of GPG.

The OP envisions a future, better GPG where we "start fresh with a different design philosophy" that includes things like forward secrecy. IMO global adoption is equally as important as the technology under the hood. Even if we started with GPG and upgraded to a new protocol later, the tooling, integrations, and ecosystem would benefit.

1. https://www.consumerreports.org/consumerist/house-votes-to-a...

2. https://matrix.org/blog/2020/10/19/combating-abuse-in-matrix...

3. https://fee.org/articles/australia-s-unprecedented-encryptio...

4. https://scholar.princeton.edu/sites/default/files/mgilens/fi...

5. https://fastmail.blog/2016/12/10/why-we-dont-offer-pgp/

6. https://tools.ietf.org/html/draft-brown-pgp-pfs-03


I think GPG is an amazing, well-designed, resilient, widely-adopted, well-tooled, time-tested, mathematically-proven technology and standard.

I think the only thing missing is a tolerable user interface that allows one-click key generation, one-click signing, and zero-click validation and verification.

I think that the biggest oversight in using it today is our attachment to our "main" or "primary" key and trying to hold on to it, keep it safe, and if we use multiple keys, trying to keep it all in the same "tree" or keyring.

We could be using it on every website and platform, and instead of creating an identity on the server, storing our identity locally, either on the filesystem, in LocalStorage, or some other way.

In fact, that's exactly the model I'm using in my web-based forum system. "Registering" an account is just generating a key, and posting something "with my identity" is just signing it with PGP.

In fact, I don't have a single feature which uses encryption yet. Only signing. Encryption is a bit of a red herring with PGP; I think verifiable signing, plus tooling for almost every platform and language in existence, are its two real killer features.
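The sign-as-identity model is easy to sketch. Something like the following, here using the python-gnupg wrapper; it assumes a local gpg binary and an already-generated key, and the key ID and passphrase are placeholders:

    import gnupg

    gpg = gnupg.GPG()

    post = "Hello forum, this is my first post."
    signed = gpg.sign(post, keyid="YOURKEYID", passphrase="your-passphrase")

    # Anyone holding the poster's public key can check who wrote it.
    verified = gpg.verify(str(signed))
    print(verified.valid, verified.fingerprint)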

Without nation-state backing, I wouldn't trust it to keep something secure or private. On the other hand, its portability, adaptability, and compatibility are unmatched by any tool that I'm aware of. It allows you to have a portable identity that you can own and secure, and do it all in plaintext.

And the content, identities, and votes of my forum can be migrated, cloned, copied, and synchronized with any other forum like it, or used outside of it, with the most popular identity tooling in existence at your disposal. Just download a zip of text files and you're good to go!


PGP is secure and usable if you can manage keys.

The issue with adoption is not so much the protocol itself: the average user has no idea what (asymmetric) encryption is and will never manage keys; the process has to be automated and provided by default.

To use PGP, you need GnuPG and, say, Thunderbird+Enigmail. These are the barriers to adoption, not the PGP protocol (which could still be updated).

Automating key management and encryption is what ProtonMail and other apps are trying to do. Google could include PGP in Gmail and it would be widely adopted overnight.
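As a sketch of what "automated and provided by default" could look like, here is key generation plus encrypt/decrypt via the python-gnupg wrapper; it assumes a local gpg binary, and the keyring directory, address, and passphrase are made up:

    import os

    import gnupg

    home = "/tmp/demo-keyring"           # throwaway keyring for the example
    os.makedirs(home, exist_ok=True)
    gpg = gnupg.GPG(gnupghome=home)

    # "Registration": generate a key without asking the user anything.
    params = gpg.gen_key_input(name_email="alice@example.com", passphrase="s3cret")
    key = gpg.gen_key(params)

    # Encrypt to that key and decrypt again, all behind the scenes.
    encrypted = gpg.encrypt("hello, world", key.fingerprint)
    decrypted = gpg.decrypt(str(encrypted), passphrase="s3cret")
    assert str(decrypted) == "hello, world"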


GPG is a specific tool for specific messages. It represents freedom and privacy. I have bigger problems with email than I have with GPG.


What kinds of messages is it specifically for?


Messages you deem important to encrypt. Sometimes that is every message; it spans the entire spectrum.


I wish Moxie still blogged. I love every single one of his few posts.


Mostly agree with the content of the article, but he's not offering anything in the way of a better alternative.


> Instead of developing opinionated software with a simple interface, GPG was written to be as powerful and flexible as possible.

Moxie is the project lead for Signal, an encrypted messenger (https://signal.org/). That is what he ultimately offered.


Which is myopically focused on encryption, not identity, signing, or authentication. You send your messages in Signal to someone with no means to properly verify whom you are sending to (unless you perform a nonsensical secret dance, secret because Signal never tells you to do it: send a non-secret hello, meet in person, verify fingerprints (which people are supposedly unable to understand), then send the secret stuff) or who you are receiving from. Have fun sending all your messages c/o your friendly secret service manipulating the phone number exchange.

There are things signal does better than GPG, and there are things it doesn't do at all. Which nobody tells you about until you are bitten by the resulting problems.


Look, I don't want to criticize Signal, which I believe is a very nice tool, and kudos to Moxie for putting it together.

However nice it may be, Signal does not solve any of my problems, whereas GPG (which, I agree with him, is an absolute dumpster fire) does (modulo great exertion).



