> if you let someone else broker the key exchange, you trust them implicitly.
Sort of.
Yes, they could serve you a MITM key, but it would be easily discoverable when you compare security codes in the client. And since the client is widely distributed on major app stores, it would be very risky to ship a compromised client.
Ultimately key exchange is a hard problem to solve. Notice that Signal doesn't do anything that much different; Signal does the key exchange and unless you verify each user's key offline, you have to trust it. Both WhatsApp and Signal have an option to display a notice when keys change, but Signal's is on by default.
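For a rough idea of how that code comparison catches a MITM'd exchange, here's a minimal sketch in Python. The safety_code helper is hypothetical; the real safety-number algorithm also mixes in user identifiers and encodes to digit groups:

```python
# Minimal sketch of safety-number-style verification; not Signal's or
# WhatsApp's actual algorithm, just the idea behind comparing codes.
import hashlib

def safety_code(key_a: bytes, key_b: bytes) -> str:
    # Sort the two identity keys so both parties derive the same code,
    # no matter who computes it.
    material = b"".join(sorted([key_a, key_b]))
    digest = hashlib.sha256(material).hexdigest()
    # Truncate to a short, human-comparable code.
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

# Each side computes the code from its own identity key and the key the
# server handed it for the peer. If the server substituted a MITM key,
# the two codes differ, and comparing them out of band exposes the attack.
```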
Overall it's still pretty damn good. WhatsApp is perhaps the only major form of consumer communication where, by default and with no opt-out, every single chat really is fully encrypted using a widely respected protocol (libsignal). That's not nothing.
> Notice that Signal doesn't do anything that much different; Signal does the key exchange and unless you verify each user's key offline, you have to trust it.
Unfortunately, verification of reproducible builds is not baked into the OS (Android/iOS), so it's still possible to target someone with a malicious update. When the vast majority of people don't verify the build, it's possible that the attack would go unnoticed.
> so it's still possible to target someone with malicious update
I'm no Android or iOS dev, so I might be wrong, but to my knowledge there is no feature to push an app update specifically to a narrow set of devices?
So at the very least, third parties (Apple/Google) would have to be involved in such an attack. This removes some entities from the list that could create an attack.
Also, Apple/Google have a big reason not to play such games. Their app stores are popular partly because they, as companies, are trusted. Apple/Google would only do this if they were legally required to. If they were involved, even against their will, it would mean tremendous risk to the trust in these companies, meaning risk to the stock. And for a publicly traded company, there is no bigger motivator. Apple/Google would bring out all the lobbying power they have, trying to fight off whatever coercion tool the US government uses against them to make them comply.
Even if there were no opposition from Apple or Google, people outside would notice sooner or later that they've gotten malicious updates. Used once or twice, it might go undetected, but if governments or other entities start using this as a vector repeatedly, it will get to the public.
This doesn't mean that I think these issues aren't important. Reproducible builds, binary transparency, gossip protocols: all of these are very important areas to invest research in, but right now they aren't a vector that is being abused at any observable scale.
> I'm no Android or iOS dev, so I might be wrong, but to my knowledge there is no feature to push an app update specifically to a narrow set of devices?
Yes, it's possible to target a "narrow set of devices" by using the Device Catalog. An excerpt from the ToS:
> Google Play Console Device Catalog Terms of Service
> By using the device catalog and device exclusion tools in the Play Console (“Device Catalog”), You consent to be bound by these terms, in addition to the Google Play Developer Distribution Agreement (“DDA”). If there is a conflict between these terms and the DDA, these terms govern Your use of the Device Catalog. Capitalized terms used below, but not defined below, have the meaning ascribed to them under the DDA.
> 1. The Device Catalog allows You to review a catalog of the Devices supported by Your app and search the Devices by their hardware attributes. It also allows You to exclude specific Devices that are technically incompatible with Your app.
Yes, Signal is better than FB or SMS. But the phone number requirement puts a nail in it on my end.
So Signal can learn who talks with whom via requests going through their LDAP-like server. They can get an idea of how long calls are, and whether it was a video or audio call. They know the times of communication.
You know, they can see the metadata. When's the last time we had problems with metadata? The POTS network? Yep.
And you're indeed right the client has reproducible builds. But the server side certainly doesn't. And we have no way to ascertain that.
> You know, they can see the metadata. When's the last time we had problems with metadata? The POTS network? Yep.
Yes, metadata is a problem, particularly with calls. However, Signal recently added the sealed sender feature (https://signal.org/blog/sealed-sender/), which makes the server blind to who the sender of a message is.
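Conceptually, the sender's identity travels inside the encrypted envelope, so the server can only route by recipient. A toy sketch of the idea using PyNaCl; this is not Signal's actual envelope format, which uses sender certificates:

```python
# Toy sketch of the sealed-sender idea using PyNaCl (pip install pynacl).
from nacl.public import PrivateKey, SealedBox

recipient_key = PrivateKey.generate()

# The sender's identity is inside the ciphertext, not on the envelope.
envelope = SealedBox(recipient_key.public_key).encrypt(b"from: alice | hi!")

# The server routes by recipient address alone; only the recipient can
# open the envelope and learn who the message is from.
assert SealedBox(recipient_key).decrypt(envelope) == b"from: alice | hi!"
```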
> And you're indeed right the client has reproducible builds. But the server side certainly doesn't.
That's true, but the server side is much less important when it comes to cryptographic assurances.
Signal is definitely not a panacea, but by many counts it's better than anything else that currently exists and is remotely usable by a typical user.
For what it's worth, they don't retain any of that metadata. This has been tested in court:
> We’ve designed the Signal service to minimize the data we retain about Signal users, so the only information we can produce in response to a request like this is the date and time a user registered with Signal and the last date of a user’s connectivity to the Signal service.
Every time Signal is brought up, someone just has to chime in saying ‘we must abandon Signal at all costs because metadata’. The metadata limitation is well known; if metadata interception is a problem for your threat model, there are steps to obscure your identity, or you should use a different tool. For the 99% of other cases, where I just don’t want anyone snooping on my conversation with friends and family but don’t care that people know I’m obviously conversing with my friends and family, Signal is great. Let’s not throw Signal out just because the metadata is still there.
Briar is good if metadata is a prime concern, but even Matrix, XMPP and email have very similar metadata problems to Signal, plus contact discovery problems as you can't casually gather that your friend or relative is on the platform (phone numbers mostly solve this).
If metadata is good enough to drone strike weddings, it's probably good enough to throw you in a concentration camp too. And since data never dies, it might be enough to throw your grandkids in concentration camps.
Now, protecting everyone's metadata is hard (probably impossible), and I don't mean to be defeatist, but "it's just metadata" doesn't sit well in a post-Snowden world. We know all large intelligence agencies hoover up this stuff.
And we also know that agencies are made up of people, and some people abuse their access.
I certainly don’t mean to discount the importance of metadata. I specifically mentioned ensuring Signal fits your threat model.
To suggest that metadata of communication over Signal between my spouse and me will be used against my grandkids one day is a bit absurd, though. Of course there’s tons of metadata connecting my spouse and me. It would be more suspicious if there wasn’t.
Spouse, "family" and friends are different goalposts. Mapping friends and family is, AFAIK, a key part of who gets bombed by the CIA. Sure, if your spouse is found to be an "enemy of the state" under a new totalitarian government, your immediate family will have problems.
If a friend turns out to be a union organizer, you might be banned from jobs if the government decides to collude with employers (again).
> Yes, they could serve you a MITM key, but it would be easily discoverable
Like a lot of things, it boils down to your threat model. If the broker or a state is your adversary, it wouldn't need to be a general design feature to behave this way; it could instead target you at the time of key exchange. Not an implausible scenario for reporters and their sources, for example.
Those folks are especially vulnerable because they might be led to believe claims of "end-to-end encryption". Put that together with those default settings, and interception and impersonation can happen right under your nose.
I'm confused about what the client has to do with this. My understanding of these end-to-end encryption models is that they use public/private keys. You (Facebook, WhatsApp, or the user) generate a private key and a matching public key. The keypair goes to the user who'd like to communicate. The user should not share their private key, not even with Facebook or WhatsApp. The user publishes their public key so others can encrypt messages with it and send those messages to said user. The user then uses the private key to decrypt the encrypted message. If Facebook keeps a copy of the private key, then they could read the encrypted messages.
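In code, that model looks roughly like this (a sketch using PyNaCl; the names are illustrative, not WhatsApp's actual implementation):

```python
# Rough sketch of the public/private key model described above, using
# PyNaCl (pip install pynacl). Illustrative only.
from nacl.public import PrivateKey, SealedBox

# The keypair is generated for (ideally by) the user; the private key
# should never leave their device.
user_private = PrivateKey.generate()
user_public = user_private.public_key  # published so others can encrypt to the user

# A contact encrypts with the user's public key...
ciphertext = SealedBox(user_public).encrypt(b"meet at noon")

# ...and only the holder of the matching private key can decrypt. Anyone
# who keeps a copy of the private key (e.g. the operator) could read it too.
assert SealedBox(user_private).decrypt(ciphertext) == b"meet at noon"
```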
Maybe the client itself is generating the keypair. In this case, the only issue I can see is the following: when the user wants to communicate with a friend, how can they be sure that the profile they are sending messages to (as displayed by their user interface and communicated by Facebook or WhatsApp or the friend's server) actually belongs to their friend?
I'm confused about what you were talking about with the client build possibly being a trojan.
> Maybe the client itself is generating the keypair. In this case, the only issue I can see is the following: when the user wants to communicate with a friend, how can they be sure that the profile they are sending messages to (as displayed by their user interface and communicated by Facebook or WhatsApp or the friend's server) actually belongs to their friend?
That's exactly the point, though: how can they be sure in the event that their client (on the author's side) is a trojan? If the "author" client is deliberately compromised, there is no longer any reasonable means of ensuring that the public key the author uses to encrypt the messages is actually equal to the public key the recipient published.
Of course, this point is very much riddled with paranoia: it is exceedingly unlikely that the WhatsApp client deliberately contains such a trojan, especially since there are much easier ways of gaining access to a user's messages (such as compromising their firmware with some form of rootkit, possibly installed via the baseband, and then simply sending copies of the local message cache to the NSA).
If WhatsApp uses a version of libsignal whose copyright is solely in the hands of Open Whisper Systems, OWS can have a separate deal with WhatsApp which does not involve the GPL. AFAIK, this is already done to get Signal into the App Store (IANAL though).
Moxie was brought on as a contractor at WhatsApp, IIRC; the code wasn't just purchased. While WhatsApp uses the same cryptographic architecture, it's likely they didn't just drop in libsignal (as libsignal is set up to tie into Signal's servers, rather than just being an encryption library like OMEMO or olm).
If you're looking to build software that integrates with Signal, then libsignal is great (having built a few things with it).
Indeed! iMessage deserves an honorable mention. Having lived outside of the US for a few years, sometimes I forget it exists, because here even people who both have iPhones use WhatsApp. But there are some caveats, as I recall and quickly Googled:
There have been some concerns with its security.
Additionally, iMessage doesn't have any means of out-of-band key verification, so you actually have to trust Apple to faithfully exchange keys and there's no way to verify that it's done so.
iMessage also tells you after a message is sent (via the color of a bubble) whether the recipient received it using iMessage. That's not very good assurance if, say, you're messaging a journalist in an authoritarian country. Will it go out over SMS or iMessage? You can find out, but even a little bit of doubt about that can have significant consequences.
I'm glad iMessage does encryption the way it does, but it's no replacement for Signal or for WhatsApp, which uses libsignal for its encryption.
Yes, and no. If you send a message to someone you've most recently conversed with on iMessage, it will be blue. But if iMessage can't deliver the message, it will fall back to using text messages. I believe on the next attempt the bubble will be green, but I don't have a way to test that right now.
As recently as last weekend, I had it go through as green instead of blue without asking because the recipient was in a no-data area. Perhaps because I'd previously approved green messages for that person.
So put them in a cohort and treat them differently than the rest of the users? Personalised key exchange? The possibilities are endless.
If you don't trust the closed-source operator here, then that end-to-end encryption should mean nothing to you.