> Mr. Zuckerberg has also ordered all of the apps to incorporate end-to-end encryption, the people said, a significant step that protects messages from being viewed by anyone except the participants in the conversation.
I don't blame NYT for getting this wrong wrt WhatsApp but it bears repeating: if you let someone else broker the key exchange, you trust them implicitly. That is to say that IMO this is not truly trustworthy "end to end encryption". To add insult to injury, WhatsApp permits rekeying to take place without any indication to the conversation's participants [in the default settings].
> if you let someone else broker the key exchange, you trust them implicitly.
Sort of.
Yes, they could serve you a MITM key, but it would be easily discoverable when you compare security codes in the client. And since the client is widely distributed on major app stores, it would be very risky to ship a compromised client.
Ultimately key exchange is a hard problem to solve. Notice that Signal doesn't do anything that much different; Signal does the key exchange and unless you verify each user's key offline, you have to trust it. Both WhatsApp and Signal have an option to display a notice when keys change, but Signal's is on by default.
Overall it's still pretty damn good. WhatsApp is perhaps the only major form of consumer communication where, by default and with no opt-out, every single chat really is fully encrypted using a widely respected protocol (libsignal). That's not nothing.
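Here's a toy sketch of why a MITM key is discoverable. Both clients derive a short code from the two identity keys, and the humans compare the codes out of band. This is only the idea, not Signal's actual safety-number algorithm (which uses iterated hashing over identity keys and user IDs):

```python
import hashlib

def security_code(pub_a: bytes, pub_b: bytes) -> str:
    """Toy 'safety number': a short code both parties compare out of band.

    NOT Signal's real algorithm; this only illustrates the idea.
    """
    material = b"".join(sorted([pub_a, pub_b]))  # same code on both ends
    digest = hashlib.sha512(material).digest()
    digits = str(int.from_bytes(digest[:25], "big")).zfill(60)[:60]
    return " ".join(digits[i:i + 5] for i in range(0, 60, 5))

# If the server MITMs the exchange with key M, Alice computes
# security_code(A, M) while Bob computes security_code(M, B);
# the codes differ, so comparing them exposes the attack.
```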
> Notice that Signal doesn't do anything that much different; Signal does the key exchange and unless you verify each user's key offline, you have to trust it.
Unfortunately, verification of reproducible builds is not baked into the OS (Android/iOS), so it's still possible to target someone with a malicious update. When the vast majority of people don't verify the build, such an attack could go unnoticed.
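For what "verifying the build" means in practice, here's the naive version: hash the store-delivered APK and an APK you compiled yourself from source and compare. (Caveat: real comparisons must first strip the store's signing metadata, which is why Signal ships an apkdiff script for this; raw hashes alone will generally not match.)

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Stream the file so large APKs don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage: python check.py store.apk selfbuilt.apk
print("match" if sha256_of(sys.argv[1]) == sha256_of(sys.argv[2]) else "MISMATCH")
```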
> so it's still possible to target someone with a malicious update
I'm no Android or iOS dev, so I might be wrong, but to my knowledge there is no feature to push an app update specifically to a narrow set of devices?
So at the very least, third parties (Apple/Google) would have to be involved in such an attack. This removes some entities from the list that could create an attack.
Also, Apple/Google have a big reason not to play such games. Their app stores are popular partly because they, as companies, are trusted. Apple/Google would only do this if they were legally required to. IF they were involved, even against their will, it would mean tremendous risk to trust in these companies, meaning risk to the stock, and for a publicly traded company there is no bigger motivator. Apple/Google would bring out all the lobbying power they have, trying to fight off whatever coercion tool the US government used against them to make them comply.
Even if there were no opposition from Apple or Google, people outside would notice sooner or later that they'd received malicious updates. Used once or twice, it might go undetected, but if governments or other entities start using this vector repeatedly, it will reach the public.
This doesn't mean that I think that these issues aren't important. Reproducible builds, binary transparency, gossip protocols, all these things are very important areas to invest research in, but right now they aren't a vector that is being abused on observable scales.
> I'm no Android or iOS dev, so I might be wrong, but to my knowledge there is no feature to push an app update specifically to a narrow set of devices?
Yes, it's possible to target a "narrow set of devices" using the Device Catalog. An excerpt from the ToS:
> Google Play Console Device Catalog Terms of Service
> By using the device catalog and device exclusion tools in the Play Console (“Device Catalog”), You consent to be bound by these terms, in addition to the Google Play Developer Distribution Agreement (“DDA”). If there is a conflict between these terms and the DDA, these terms govern Your use of the Device Catalog. Capitalized terms used below, but not defined below, have the meaning ascribed to them under the DDA.
> 1. The Device Catalog allows You to review a catalog of the Devices supported by Your app and search the Devices by their hardware attributes. It also allows You to exclude specific Devices that are technically incompatible with Your app.
Yes, Signal is better than FB or SMS. But the phone number requirement puts a nail in it on my end.
So Signal can learn who talks with whom via requests going through their LDAP-like server. They can get an idea of how long calls are, and whether a call was video or audio. They know the times of communication.
You know, they can see the metadata. When's the last time we had problems with metadata? The POTS network? Yep.
And you're indeed right that the client has reproducible builds. But the server side certainly doesn't. And we have no way to ascertain that.
> You know, they can see the metadata. When's the last time we had problems with metadata? The POTS network? Yep.
Yes, metadata is a problem, particularly with calls. However, Signal recently added sealed sender (https://signal.org/blog/sealed-sender/), a feature that makes the server blind to who the sender of a message is.
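To give a flavor of how a server can deliver a message without learning the sender, here's a sketch using libsodium's SealedBox via PyNaCl. This is only an analogue, not Signal's protocol: Signal's actual sealed sender additionally tucks a sender certificate inside the encrypted envelope so the recipient can still authenticate who sent it.

```python
from nacl.public import PrivateKey, SealedBox

# Bob's long-term keypair; the server only ever handles his public key.
bob_sk = PrivateKey.generate()

# The envelope carries no sender identity; the relay sees only
# "deliver this ciphertext to Bob".
envelope = SealedBox(bob_sk.public_key).encrypt(b"hi Bob, guess who")

# Only Bob's private key opens it.
assert SealedBox(bob_sk).decrypt(envelope) == b"hi Bob, guess who"
```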
> And you're indeed right that the client has reproducible builds. But the server side certainly doesn't.
That's true, but the server side is much less important when it comes to cryptographic assurances.
Signal is definitely not a panacea, but by many counts it's better than anything else that currently exists and resembles something a typical user can actually use.
For what it's worth, they don't retain any of that metadata. This has been tested in court:
> We’ve designed the Signal service to minimize the data we retain about Signal users, so the only information we can produce in response to a request like this is the date and time a user registered with Signal and the last date of a user’s connectivity to the Signal service.
Every time Signal is brought up, someone just has to chime in saying 'we must abandon Signal at all costs because metadata'. The metadata limitation is well known, and if metadata interception is a problem for your threat model, there are steps to obscure your identity, or you should use a different tool. For the 99% of other cases, where I just don't want anyone snooping on my conversation with friends and family but don't care that people know I'm obviously conversing with my friends and family, Signal is great. Let's not throw Signal out just because the metadata is still there.
Briar is good if metadata is a prime concern, but even Matrix, XMPP, and email have very similar metadata problems to Signal, plus contact discovery problems, since you can't casually discover that a friend or relative is on the platform (phone numbers mostly solve this).
If metadata is good enough to drone-strike weddings, it's probably good enough to throw you in a concentration camp too. And since data never dies, it might be enough to throw your grandkids in concentration camps.
Now, protecting everyone's metadata is hard (probably impossible), and I don't mean to be defeatist, but "it's just metadata" doesn't sit well in a post-Snowden world. We know all large intelligence agencies hoover up this stuff.
And we also know that agencies are made up of people, and some people abuse their access.
I certainly don’t mean to discount the importance of metadata. I specifically mentioned ensuring Signal fits your threat model.
To suggest that metadata of communication over Signal between my spouse and me will be used against my grandkids one day is a bit absurd though. Of course there's tons of metadata connecting my spouse and me. It would be more suspicious if there wasn't.
Spouse, "family" and friends are different goalposts. Mapping friends and family is AFAIK a key part of who gets bombed by the cia. Sure, if your spouse is found to be an "enemy of the state" under a new totalitarian government - your immediate family will have problems.
If a friend turns out to be a union organizer, you might be banned from jobs, if the government decides to collude with employers (again).
> Yes, they could serve you a MITM key, but it would be easily discoverable
Like a lot of things, it boils down to your threat model. If the broker or a state is your adversary, the MITM wouldn't need to be a general design feature; it could instead target you specifically at the time of key exchange. Not an implausible scenario for reporters and their sources, e.g.
Those folks are especially vulnerable because they might be led to believe claims of "end to end encryption". Put that together with those default settings and interception and impersonation can happen right under your nose.
I'm confused about what the client has to do with this. My understanding of these end-to-end encryption models is that they use public/private keys. Someone (Facebook, WhatsApp, or the user) generates a private key and a matching public key, and the pair is given to the user who wants to communicate. The user should not share their private key, not even with Facebook or WhatsApp. The user publishes their public key so others can encrypt messages with it and send them to said user. The user then uses their private key to decrypt the encrypted messages. If Facebook keeps a copy of the private key, then they could read the encrypted messages.
Maybe the client itself is generating the keypair. In this case, the only issue I can see is the following: when the user wants to communicate with a friend, how can they be sure that the profile they are sending messages to (as displayed by their user interface and communicated by Facebook or WhatsApp or the friend's server) actually does belong to their friend?
I'm confused about what you were talking about with the client build possibly being a trojan.
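For concreteness, here's a minimal sketch of the model described above, using PyNaCl purely for illustration: each client generates its own keypair locally, only public keys are exchanged, and whoever relays the ciphertext can't read it without a private key.

```python
from nacl.public import PrivateKey, Box

# Each client generates its keypair locally; private keys never leave it.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts to Bob's public key; Facebook/WhatsApp would only relay
# this opaque blob.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"hello Bob")

# Bob decrypts with his private key (plus Alice's public key, since Box
# also authenticates the sender).
assert Box(bob_sk, alice_sk.public_key).decrypt(ciphertext) == b"hello Bob"

# The trojan scenario discussed below: if Alice's *client* lies about
# which public key belongs to Bob, none of this math helps her.
```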
>Maybe the client itself is generating the keypair. In this case, the only issue I can see is the following: when the user wants to communicate with a friend, how can they be sure that the profile they are sending messages to (as displayed by their user interface and communicated by Facebook or WhatsApp or the friend's server) actually does belong to their friend?
That's exactly the point though: how can they be sure in the event that their client (on the author's side) is a trojan? If the "author's" client is deliberately compromised, there is no longer any reasonable means of ensuring that the public key the author uses to encrypt messages is actually the public key the recipient published.
Of course, this point is very much riddled with paranoia: it is exceedingly unlikely that the WhatsApp client deliberately contains such a trojan, especially since there are much easier ways of gaining access to a user's messages (such as compromising their firmware with some form of rootkit, possibly installed via the baseband, and then simply sending copies of the local message cache to the NSA).
If WhatsApp uses a version of libsignal whose copyright is solely in the hands of Open Whisper Systems, OWS can have a separate deal with WhatsApp which does not involve the GPL. AFAIK, this is already done to get Signal into the App Store (IANAL though).
Moxie was brought on as a contractor at WhatsApp, IIRC; the code wasn't just purchased. While WhatsApp uses the same cryptographic architecture, it's likely they didn't just drop in libsignal (libsignal is set up to tie into Signal's servers, rather than being a pure encryption library like OMEMO or olm).
If you're looking to build software that integrates with Signal, then libsignal is great (I've built a few things with it).
Indeed! iMessage should get an honorable mention. Having lived outside of the US for a few years, sometimes I forget it exists, because here even people who both have iPhones use WhatsApp.

iMessage deserves an honorable mention, but with some caveats. As I recall (and quickly Googled), there have been some concerns with their security.
Additionally, iMessage doesn't have any means of out-of-band key verification, so you actually have to trust Apple to faithfully exchange keys and there's no way to verify that it's done so.
iMessage also tells you after a message is sent (via the color of a bubble) whether the recipient received it using iMessage. That's not very good assurance if, say, you're messaging a journalist in an authoritarian country. Will it go out over SMS or iMessage? You can find out, but even a little bit of doubt about that can have significant consequences.
I'm glad iMessage does encryption like it does, but it's no replacement for Signal, and WhatsApp uses libsignal for its encryption.
Yes and no. If you send a message to someone you've most recently conversed with on iMessage, it will be blue. But if iMessage can't deliver the message, it will fall back to using text messages. I believe on the next attempt the bubble will be green, but I don't have a way to test that right now.
As recently as last weekend, I had it go through as green instead of blue without asking because the recipient was in a no-data area. Perhaps because I'd previously approved green messages for that person.
So put them in a cohort and treat them differently than the rest of the users? Personalised key exchange? Possibilities are endless.
If you don't trust the closed source operator here, then that end to end encryption should mean nothing for you.
This is fundamental. If Facebook messages are still to work as before, with a web interface, an archive etc. then you need to supply Facebook servers with a decryption key. You no longer have end to end encryption.
And I don't really see how they could get rid of the web features; they represent a massive number of FB Messenger users. This ties in nicely with the older pressure from Zuckerberg to monetize WhatsApp and the founders' resistance for security reasons: monetization most likely means access to the conversation plaintext.
WhatsApp Web already provides this, by connecting to the phone and proxying messages through it. Of course, that also requires implicit trust in WhatsApp Web, but it is possible. And using WhatsApp requires implicit trust in Facebook anyway, so...
It still requires you to run the WhatsApp phone app, though. I doubt Facebook wants to require all of their users to have phones. You could reimplement WhatsApp's functionality in client-side JS, but then you get into deep technical problems rooted in the fact that JS is, in general, ephemeral rather than permanent like an iOS/Android app. E.g., when someone logs onto Facebook on a friend's computer to check their messages, you want everything to be smooth, including message search; but to provide message search, either the server needs access to the plaintext or the client needs to download the entire message history. With an app, the client already has the entire history, so that's no problem, but it is a problem with ephemeral JavaScript.

Another problem you run into is that Facebook can serve specially manipulated JavaScript just for you™ because you are an interesting target or something. For Android/iOS this would require an app update and would need to go through a third party, and I haven't heard that Google or Apple give you the option to push a specific app to a specific device.
You already have to log in with a username and password to access Facebook, unlike WhatsApp, where your phone is the sole key. Facebook could just construct the key at logon time from the user's password.
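A sketch of that idea, with a standard password KDF standing in for whatever Facebook would actually use (this is a hypothetical design, not anything they've announced): the key is derived client-side, so only ciphertext ever needs to reach the server. The obvious caveat is that anything that knows the password, like a login form Facebook itself serves, could derive the same key.

```python
import hashlib
import os

# The salt is per-user and not secret; it could live on the server.
salt = os.urandom(16)

# Derive a 256-bit key from the password client-side. The scrypt
# parameters here are common interactive settings, purely illustrative.
key = hashlib.scrypt(b"correct horse battery staple",
                     salt=salt, n=2**14, r=8, p=1, dklen=32)
```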
To enable end-to-end encryption, you need more than local storage, you need local computation using local execution of signed and reproducible code. Javascript in browsers is fundamentally not such a platform, mobile applications are - to the degree you trust Google, Apple etc.
I think end-to-end encryption is currently undergoing a crisis of definition. Within the security and cryptography communities, implementations of secure messaging like Whatsapp and iMessage are considered to be end-to-end encrypted communications. The philosophical intention of end-to-end encryption is to enable communication through infrastructure which you do not trust.
Beyond the technical considerations, different demographics have various expectations about what end-to-end encryption means. Sometimes their position is that end-to-end encryption does not exist without decentralization. Some want to have a fully federated protocol. Others believe that allowing an intermediary to broker key exchange invalidates the end-to-end confidentiality and authenticity assurances.
This often leads to nitpicking about what defines end-to-end encryption which, while a useful exercise in its own right, doesn't capture the heart of the grievances at play. In many cases it would be more productive to talk more directly about expectations regarding the security and privacy of a service or protocol rather than whether or not it fulfills an underspecified set of criteria.
This is to say that you can make a compelling argument that allowing a third party to broker your key exchange is insufficiently secure for you. But if you anchor that critique to whether or not the protocol satisfies end-to-end encryption, you're inviting rebuttals that don't substantially engage with your critique. Whether or not something satisfies end-to-end encryption is somewhat less important than whether or not you think it holistically satisfies what you consider to be strong confidentiality and authenticity assurances. If your problem is that you don't want a company like Facebook bootstrapping the key exchange for you, then you should defend that (valid) opinion by choosing a different set of criteria to work with.
Decentralization/federation and the existence of third-party brokers don't really come into play here. What is required is
a. a cryptographically secure protocol
b. a UI that is strict about checking that the keys and signatures match and is loud about notifying the user when they don't (see the sketch after this list)
c. an open source client
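As a sketch of (b), here is the trust-on-first-use pattern most clients implement under the hood: pin the first key you see for a contact and complain loudly whenever it changes. The file name and digest truncation are arbitrary choices for illustration.

```python
import hashlib
import json
import os

PIN_FILE = "known_keys.json"  # hypothetical local pin store

def fingerprint(public_key: bytes) -> str:
    """Short digest shown to users for out-of-band comparison."""
    return hashlib.sha256(public_key).hexdigest()[:16]

def check_key(user_id: str, public_key: bytes) -> None:
    """Trust-on-first-use: pin the first key seen, warn loudly on change."""
    pins = {}
    if os.path.exists(PIN_FILE):
        with open(PIN_FILE) as f:
            pins = json.load(f)
    fp = fingerprint(public_key)
    if user_id not in pins:
        pins[user_id] = fp  # first contact: pin silently
        with open(PIN_FILE, "w") as f:
            json.dump(pins, f)
    elif pins[user_id] != fp:
        raise RuntimeError(f"SECURITY WARNING: key for {user_id} changed "
                           f"({pins[user_id]} -> {fp}); verify out of band!")
```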
An open source client doesn't really get you much, since you would need to audit the entire source code and then build it yourself, which you probably won't do. If you aren't doing that, you're implicitly trusting others to have audited the source code and to provide builds that actually correspond to the source code. Now there are reasons to trust the open source community like this (lots of eyeballs, and people who care about security and privacy can inspect the source code and third-party builds), but there is also one advantage to commercial software (including closed source) over open source: you're more likely to have someone to sue if they lie or mess up.
An open source client is not required for end-to-end encryption.
Do you see what I'm getting at? We're quibbling about a technically precise definition instead of what you'd like to see in a secure messaging application.
Yes, this is what I meant. Unless we think of some magical way to verify that e2e is really happening (enter quantum voodoo or something similarly wild), the only way to verify is by actually inspecting the source code. Even this may not be enough, but it is a necessary precondition for now.
For sure if you are using any app or OS over which you don't have complete knowledge and control, and which isn't entirely unhackable, you are trusting someone somewhere.
Application companies will always be able to backdoor their apps.
What e2e encryption does do is make messages nearly impossible to be intercepted in transit or on a massive scale by anyone without the cooperation of the company.
> What e2e encryption does do is make messages nearly impossible to be intercepted in transit or on a massive scale by anyone without the cooperation of the company.
No, this is a non-standard definition of "e2e encryption" that I've never heard of. In fact, it's exactly counter to the whole point of e2e encryption. The reason "end" and "end" are specified is because it precludes anyone in the middle from getting the plain text of the message. End to end encryption is supposed to assume "cooperation of the company" as a threat model!
> What e2e encryption does do is make messages nearly impossible to be intercepted in transit or on a massive scale by anyone without the cooperation of the company.
I don't understand why you and others in this thread describe it this way. "end to end" in this description sounds a lot like "transport security" -- like what you get from TLS (https, e.g.). How is this version of "end to end" (where are the ends?) any better than TLS?
> I don't understand why you and others in this thread describe it this way. "end to end" in this description sounds a lot like "transport security" -- like what you get from TLS (https, e.g.). How is this version of "end to end" (where are the ends?) any better than TLS?
TLS is client-server oriented. When a messaging system uses "transport security", like Facebook Messenger, that normally means that your client's connection to Facebook's server is encrypted, but Facebook's server still has access to your message plaintext. Whereas an "end to end" encrypted system encrypts messages on your client so that they can only be decrypted by the client of the person you're talking to.
(I'm similarly skeptical about how much difference this makes in practice - I don't know what the threat model is where you trust a closed-source app and closed-source Google Play services but don't trust the same company's servers. But it is a real distinction in behaviour.)
You may have misunderstood my reply to mean that I don't understand what's different between TLS and end to end encryption.
In fact I don't understand the difference between TLS and -- let's call it E2E' (that which might be "end to end encryption"). If E2E' permits the message broker to intercept messages, does it satisfy the conventional definition of "end to end encryption"? No, certainly not. Is it any better than TLS? No, not in my opinion.
Here's what I quoted, which I believe to be E2E':
> What e2e encryption does do is make messages nearly impossible to be intercepted in transit or on a massive scale by anyone without the cooperation of the company.
> If E2E' permits the message broker to intercept messages, does it satisfy the conventional definition of "end to end encryption"?
By "broker" do you mean the server or are you including e.g. the company's code running on your device? "End to end" conventionally means "device to device" since few if any strong cryptosystems can be implemented by humans without mechanical assistance.
Key exchange is traditionally assumed away as outside the scope of analysis; we assume as a starting point that the users have a preshared secret key. So in theory E2E is very different from TLS. But in practice key exchange is very relevant.
There is still a very real practical distinction though: WhatsApp/Signal/... do not allow the server to passively intercept messages. There are active attacks that the server can perform against the key exchange process, but these would be very likely to be detected if performed on a large scale (even by insiders at the company).
It's also worth noting that a TLS approach leaves a much bigger attack surface for bulk attacks from outside the company: any security hole in the company's servers gives a single point at which an attacker can capture plaintext messages on a large scale (as the NSA is known to have done to GMail).
Yes, that does satisfy the definition of end-to-end encryption. The broker - and anyone else - can intercept messages, which is fully accounted for. That does not compromise the confidentiality or authenticity of the secure channel. What is explicitly disallowed is the intercepting party getting access to the plaintext. That includes the broker.
TLS establishes a secure channel between a client and a server. Both the client and the server have access to the plaintext.
E2EE establishes a secure channel between two clients who each have access to the plaintext, via an intermediating server which has no access to the plaintext.
The two clients are the "ends" in E2EE. E2EE does not mandate that the server is uninvolved in the key exchange.
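To make the "ends" concrete, here's a toy contrast (PyNaCl again, purely illustrative): in the TLS-style model the channel key is shared with the server, so plaintext exists there; in the E2EE model the key material is shared only between the two clients, and the server is a blind relay.

```python
from nacl.public import PrivateKey, Box
from nacl.secret import SecretBox
import nacl.utils

alice, bob = PrivateKey.generate(), PrivateKey.generate()

# TLS-style: the client shares a channel key with the *server*, so the
# server decrypts (and can read) every message before forwarding it.
channel_key = nacl.utils.random(SecretBox.KEY_SIZE)
at_server = SecretBox(channel_key).decrypt(
    SecretBox(channel_key).encrypt(b"meet at noon"))  # plaintext at server

# E2EE-style: the server only ever relays ciphertext it cannot open;
# only the other "end" can decrypt.
relayed = Box(alice, bob.public_key).encrypt(b"meet at noon")
assert Box(bob, alice.public_key).decrypt(relayed) == b"meet at noon"
```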
I have a hard time believing Zuckerberg would purposely exclude himself from the conversation.
You know what's bizarre to me? Most US states are either one-party or two-party consent states, meaning that to record a conversation, either at least one party or every party must consent to the recording.
So how does this not apply at all to private conversations online?
To me, all this end-to-end business sounds too much like marketing. As long as you have to use the clients provided by Facebook, it doesn't matter that the chats are end-to-end encrypted.
Facebook controls the clients and as such can do whatever it likes with your chats (or whatever you agreed to).
End-to-end encryption is about decomposing trusted parties and compartmentalizing untrusted infrastructure. There are meaningful differences between end-to-end encryption and server-side encryption. These differences are entirely orthogonal to the question of whether or not you can verify the client or the server.
This is what I was getting at in my other comment. If you’re going to reject end-to-end encryption because you can’t verify the client, you’re looking at a very different set of criteria to establish the confidentiality and authenticity assurances you want. In particular, you are at a point where it’s difficult to establish a secure channel unless you’re using a fully decentralized, federated protocol with a server you stood up yourself.
Yes, that's why you need independently developed and independently distributed client software. Otherwise there's no meaningful compartmentalization.
The parent poster is not rejecting end-to-end crypto itself, but how it's typically done (on a locked phone you don't really control, in an auto-updating app you don't control at all). Web-based end-to-end encryption is even more ridiculous (say, mega.nz), because then it's even more trivial to distribute different code to different users.
What application would you say has "real" e2e encryption? Signal and all the other apps have exactly the same problem, right? If you don't compare your keys offline, you're always at risk for this attack. You can't build cryptography out of sand.
Software that uses the user's public key as the user's identifier (or potentially something that uses Namecoin) does not have this issue; consider Tox and Ricochet, for example.
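A sketch of that design: the address users hand out is derived from the public key itself, so there is no directory whose honesty you have to trust for key lookup. The encoding here is an arbitrary illustration, not Tox's or Ricochet's actual identifier format.

```python
import hashlib
from nacl.public import PrivateKey

# Generate an identity; the *address* is just a digest of the public key.
sk = PrivateKey.generate()
address = hashlib.sha256(bytes(sk.public_key)).hexdigest()[:32]

# Anyone holding your address can check any key claiming to be yours,
# with no key server in the loop.
def key_matches(address: str, claimed_pub: bytes) -> bool:
    return hashlib.sha256(claimed_pub).hexdigest()[:32] == address

assert key_matches(address, bytes(sk.public_key))
```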
This isn't really a solution, though; it just moves the problem somewhere else. The problem then becomes things such as linking existing third-party identifiers (email, phone numbers, etc.) to the user's key, which most regular users want to be able to do. The idea of one key per user also becomes problematic with multi-device usage or a device compromise: you can't revoke access for a single device without throwing away your whole identity.
WhatsApp also notifies on rekeying. A yellow message is shown warning that "your conversation partner's security codes have changed" or something of the sort.
I guess it's fair to assume that when we say "the conversation's participants" we explicitly mean everyone except the broker. I think it's important in this day and age to accept that the broker is now a conversation participant as well. Maybe we should look more into P2P messaging software.
> the broker is now a conversation participant as well.
I don't agree. This is the antithesis of the intent of the term "end to end encryption." Otherwise you could just use TLS to secure each of the {client->broker} connections and then you could call it "end to end".
I don't see why my comment was downvoted. I made a valid argument that the expectation of the broker not being privy to the conversation is a bit unreasonable and that we should further explore P2P.