This retort does not address the fundamental point made in the Guardian piece:
> “[Some] might say that this vulnerability could only be abused to snoop on ‘single’ targeted messages, not entire conversations. This is not true if you consider that the WhatsApp server can just forward messages without sending the ‘message was received by recipient’ notification (or the double tick), which users might not notice. Using the retransmission vulnerability, the WhatsApp server can then later get a transcript of the whole conversation, not just a single message.”
This allows WhatsApp to MITM. WhatsApp can rekey both Alice and Bob, decrypt both their messages from that point onwards (including previously undelivered messages), and forward them re-encrypted with their real keys. The only notification might be that rekeying warning, if the users have turned it on. In this scenario even the double checkmarks are present.
This is contrary to WhatsApp's claim that even they cannot snoop.
PS: I just checked on my phone whether those notifications were turned on. They were not. And I'd never turn those off myself, which leads me to conclude that the rekeying notifications are off by default (in their Android app).
> This allows WhatsApp to MITM. WhatsApp can rekey both Alice and Bob, decrypt both their messages from that point onwards (including previously undelivered messages), and forward them re-encrypted with their real keys. The only notification might be that rekeying warning, if the users have turned it on. In this scenario even the double checkmarks are present. This is contrary to WhatsApp's claim that even they cannot snoop.
You've just described a "man in the middle" attack. It is endemic to any public key cryptosystem, including Signal and PGP, not just WhatsApp. The notification that you see in WhatsApp, Signal, SSH, PGP, or whatever is the defense.
> PS: I just checked on my phone whether those notifications were turned on. They were not. And I'd never turn those off myself, which leads me to conclude that the rekeying notifications are off by default (in their Android app).
Key change notifications are off by default in WhatsApp. That's probably going to be a fundamental limit of any application that serves billions of people from many different demographics all over the world.
Even if they were on by default, a fact of life is that the majority of users will probably not verify keys. That is our reality. Given that reality, the most important thing is to design your product so that the server has no knowledge of who has verified keys or who has enabled a setting to see key change notifications. That way the server has no knowledge of who it can MITM without getting caught. I've been impressed with the level of care that WhatsApp has given to that requirement.
I think we should all remain open to ideas about how we can improve this UX within the limits a mass market product has to operate within, but that's very different from labeling this a "backdoor."
I think it's fair to say that you are the world's thought leader on these matters right now.
One thing that the rest of us are wondering right now is:
> I've been impressed with the level of care that WhatsApp has given to that requirement.
To what degree do you really know that? Is there a place where we can read about your interactions with Facebook, the level of access they've given you, and the degree to which they have allowed your recommendations to shape the contours of their implementation?
Nothing less than the strength of dissent hangs in the balance of questions like these.
> I think we should all remain open to ideas about how we can improve this UX within the limits a mass market product has to operate within, but that's very different from labeling this a "backdoor."
I agree that the jump to scary terminology is dangerous.
However, at the end of the day, I think that many of us have been trying to make a simple point, which shows a sort of crossing of that line:
WhatsApp claimed that they were simply unable to intercept communications, and now we find out that, without any user interaction or approval, messages which haven't received the "double check" are re-transmitted when a new key is generated.
In some highly specific but easy-to-imagine scenarios (e.g., a journalist on the ground in Tahrir Square using WhatsApp to report on conditions, receiving no replies), WhatsApp is hugely vulnerable in a way that most of us didn't think it was.
So look: nobody here is trying to diminish your tireless work and your accomplishments in bringing freedom into the information age.
But there are nuances here that are important, and fleshing them out is a big part of what this community is about.
> But there are nuances here that are important, and fleshing them out is a big part of what this community is about.
The entire point of the crypto community is to maintain as little trust as possible unless you can be highly certain about things.
The media reaction of "OMG WHATSAPP IS FOR SURE NOT SAFE" is a HUGE overreaction. But in an industry where audits and open source are huge factors in trust... WhatsApp doesn't do a whole lot. Phrased better, the article could have done a great job of explaining how to secure yourself and enable the notifications, rather than just fear mongering.
Let's be honest: Facebook doesn't have a great privacy record. They're an advertising and data-harvesting company. I basically trust them zero. But I trust Moxie a lot (it's possible that he's been bought out by Facebook/the Egyptian government for billions of dollars, but I'm just gonna keep trusting him).
Honestly, Moxie saying that WhatsApp has a decent implementation of Signal does a lot more for my concerns than Facebook saying the exact same thing (though I too would love to know more about how much Moxie knows about WhatsApp). I don't use WhatsApp, but I'm less prone to go "oh yeah, you definitely don't want to use that, it's a Facebook product!" like I would for Skype/MS.
It's reassuring to know that if someone tried this, I could be notified of it, which means it seems like no one would really try this unless it was SUPER worth it (I don't think Facebook is going to try to MITM and expose themselves so they can hear about my weekend drinking plans). So for common folk, I think it would be pretty safe. And if you are talking about things that require crazy opsec, definitely turn the notifications on and verify those numbers.
I think that here you've made a great point. For many users, the level of privacy that WhatsApp gives is unnecessary, but if you are the person who needs to discuss mission-critical matters over WhatsApp, it gives you the possibility to do that safely.
The only problem would then be that they can MITM one message, even if they'd be caught that way. I doubt they'd do that for less than world-changing messages, but still that's the only problem if you enabled the notifications and checked the numbers.
What does trust have to do with this? The trade-off has been clearly explained. As it stands, WhatsApp is great for protecting sexts and low value conversations if you're not famous (99.99% of everyone), but if you're Snowden, or Hillary, there is no protection - contrary to what has been advertised.
To my understanding, that's simply not true. What you can accurately say is that with key change notifications turned on, any one* message could be exposed without any means of recourse, but subsequent exposures would require user error.
*Question for anyone: could this apply to a "batch" of messages? That is, could servers hold back the delivery of some number of messages and then the attack could be applied to all such undelivered messages? But once the attack took place, the double check would be displayed on the sender's phone and the notification of key change would appear. My understanding is that the answer to the question is 'Yes'.
Very good question, and I haven't seen a definitive answer to it yet.
The responses by Bob are presumably numbered, and some might be delivery receipts, or contain delivery receipts (e.g. a cumulative ACK, as in TCP). Could the server selectively suppress those receipts, or manipulate the cumulative ACK? If it simultaneously triggered rekeying on Bob's side, presumably yes. But I haven't seen a definitive statement on that.
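For whatever it's worth, here is a toy sketch of the flow being asked about (plain Python, no real crypto, and the client behavior is my assumption about the defaults being described, not WhatsApp's actual code): the server withholds delivery receipts for a batch of messages, then announces a new recipient key, and a non-blocking client quietly re-encrypts and re-sends everything still undelivered.

    class SenderClient:
        """Toy model of a sender whose client accepts key changes non-blockingly."""
        def __init__(self, recipient_key, block_on_key_change=False):
            self.recipient_key = recipient_key
            self.block_on_key_change = block_on_key_change
            self.undelivered = []            # messages still waiting for the double check

        def send(self, plaintext):
            msg = "enc[%s](%s)" % (self.recipient_key, plaintext)   # stand-in for real encryption
            self.undelivered.append(plaintext)
            return msg

        def on_delivery_receipt(self, plaintext):
            self.undelivered.remove(plaintext)                      # double check shown

        def on_key_change(self, new_key):
            if self.block_on_key_change:
                raise RuntimeError("key changed -- ask the user before re-sending")
            self.recipient_key = new_key                            # default: accept silently
            pending, self.undelivered = self.undelivered, []
            return [self.send(p) for p in pending]                  # re-encrypted to the new key

    # Malicious-server scenario: receipts for a batch are never forwarded,
    # then a server-controlled key is announced.
    alice = SenderClient(recipient_key="bob-key")
    for text in ["msg 1", "msg 2", "msg 3"]:
        alice.send(text)                 # server holds back all three receipts
    resent = alice.on_key_change("server-controlled-key")
    print(len(resent))                   # 3 -- the whole undelivered batch goes out again

With block_on_key_change=True the same call raises instead of re-sending, which, as I understand it, is closer to what Signal itself does.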
I've little to add to this, other than the point that the UK's IP Act allows GCHQ (and other UK government agencies) to abuse this issue individually or en-masse against anyone, anywhere, more or less at will.
That's the world we're in now. I respectfully disagree with Moxie's point about key verification. I think the point you raise about easy-to-imagine scenarios would have been laughed away years ago, but it is now not only realistic but distinctly possible.
WhatsApp told the original reporter that they had no plans to fix the issue. The question is, in light of mass spying by the intelligence services, what else will WhatsApp not fix?
> The notification that you see in WhatsApp, Signal, SSH, PGP, or whatever is the defense.
That defense, which happens to be the only defense, is turned off by default in WhatsApp.
You seem to argue they do so because it's bad UX to present such notifications by default. That's - in my humble opinion - like suggesting browsers should turn off TLS chain errors by default because it's bad UX, and just proceed with the connection as if nothing happened...
> That defense, which happens to be the only defense, is turned off by default in WhatsApp.
> You seem to argue they do so because it's bad UX to present such notifications by default. That's - in my humble opinion - like suggesting browsers should turn off TLS chain errors by default because it's bad UX, and just proceed with the connection as if nothing happened...
One thing we've learned over the years is that security warnings should not be displayed to consumers under "normal" (e.g. non-critical) circumstances, otherwise it creates a condition of "warning fatigue."
TLS certificate errors are not something that should happen under normal circumstances. When a TLS certificate fails to validate, something is really wrong. As we've gotten better about ensuring those conditions, browsers have made it harder and harder to get past the warnings, because they're not warnings anymore -- they're error conditions.
Key changes in a messenger are totally different. They happen under normal conditions, so putting them in people's faces by default has the potential to do more harm than good. If we can make them workable, systems like CONIKS or Key Transparency might be in our collective future, but if you don't like systems that are fundamentally "advisory" (don't tell you until after the fact), you're not going to like those new systems at all either.
For now, I think a fact of life is that most people will not verify keys whether the warnings are there or not, so I think what's most important is that the server can't tell who is and who isn't.
I'd love to hear other ideas about how to improve the UX of interactions like this, but I think they have to include a basis in the assumption that we can't fundamentally change human behavior and that we can't just teach everyone in the world to be like us.
Why not phrase the message differently, e.g. "It looks like (user) is chatting from a new device. Is this correct?"
Warnings about unusual account activity seem to be very common these days, so why not use them here?
The way the warnings are presented as part of the chat history (a very good idea) also means they could be used after the fact to figure out when an account was taken over, even if the warning was initially ignored. I figure even non-technical users would like to know that, after one of their contacts tells them their account was hacked.
Additionally, why is an ignored warning worse than a warning that is suppressed to begin with? That seems to me like a landlord that decides not to install smoke alarms because "the tenants could get used to the sound" - when most of the tenants are not even aware of the concept of "fire".
Finally, I don't find the "it's important the server doesn't know" argument convincing. If you conclude that the vast majority of people don't have the warnings enabled and the cost of hitting someone with warnings is low, snooping would still be a very low-risk activity.
Summing up, I think the very least consequence Facebook should take from this is to make the warnings on-by-default instead of off-by-default.
> Why not phrase the message differently, e.g. "It looks like (user) is chatting from a new device. Is this correct?"
Because of exactly what Moxie said in his post. This is a relatively common occurrence in practice. Someone gets a new device. Or uninstalls/reinstalls the WhatsApp app. Or wants to read messages on their laptop, too. And so on.
Warning everyone about this all the time leads to people becoming subconsciously blind to these notifications — even people who should care about them. The solution taken by WhatsApp is a great compromise in this situation. Not everyone will have it on, but the odds are good that someone whose messages they might want to intercept will. And if they can't know who has the notifications enabled and who doesn't, they run the risk of tipping their hand that they're doing it at all.
That's why you include a checkbox underneath with the label "Do not show me this warning in the future (insecure)". And then a setting to turn it back on. It's not rocket science.
This shit is really easy to armchair quarterback over the Internet where nobody wins and the points don't matter, but the reality is that figuring out how to design crypto applications in a way that keeps users secure without users disabling or ignoring sometimes-important security problems is a very hard problem. In fact, it may very well be the current hardest practical problem in information security.
So yeah, it is actually kind of like rocket science, and I guarantee you that Moxie has spent orders of magnitude more time thinking with, dealing with, and collecting data on this kind of problem than you or I combined.
And we're not Moxie's investor meeting or Senate hearing committee. This is a layman discussion thread that he decided to join and answer questions in. (Big respect to him for doing that.) So I believe even "stupid" questions should be allowed if they increase understanding or bring up new points.
Furthermore, this is an argument from authority [1]. Of course there are experts, but even an expert should explain and discuss his rationale in the interest of sharing knowledge (which Moxie is doing here) - otherwise problems like this will stay "hard" for a long time.
I did not chastise GP for asking questions. I chastised GP for his hubris in looking at this problem for all of five minutes and confidently asserting that he has a simple, obvious solution that somehow a literal expert in the field completely missed, then claiming offhand that the problem isn't "rocket science" when in fact it's, in my estimation, one of the hardest practical problems in the entire field. We know far, far more about building secure theoretical cryptosystems than we do about ensuring actual humans use them in a way that doesn't break the seal and void the warranty, so to speak.
And Moxie has explained his rationale in this thread. Argument from authority isn't always wrong — particularly in the case where the other side has no data or theory to back up their claims. For instance, I personally know only a little about the actual mechanisms behind anthropogenic climate change. What I do know supports the notion. But I'd be lying if I didn't acknowledge that the most compelling argument is the near-unanimous agreement by 99.9%+ of the actual experts in the matter.
Likewise, in the absence of any obviously compelling evidence validating GP's approach, combined with Moxie's explanation above and my own experience as a security engineer, I'm going to go with the guy with literally decades of both theoretical and operational experience here.
The people who are most likely to be snooped on are also more likely to have the notifications turned on, so I don't think it's such an easy choice for an attacker.
The entities this is designed to thwart are not going to want to risk leaving behind a trail of evidence, even if the risk is small.
It also prevents fishing expeditions, since the risk would quickly add up as more targets were added.
All that said, a one-time prompt to turn on the notifications for users that care about extra-strong security seems like a good idea to me.
The fact of the matter is that when you disable the only defense against MITM by default, you should not claim your stuff is secure and end-to-end encrypted, because it is not. It's really as easy as that.
Warning fatigue, "most" users not knowing how to do it or doing it wrong etc, are indeed hard problems to solve. There are indeed no easy answers to this, or else somebody would have come up with something already. But just because it's not easy does not mean you're entitled to just lie about the security properties of your system to your users.
>WhatsApp's end-to-end encryption ensures only you and the person you're communicating with can read what is sent, and nobody in between, not even WhatsApp. [...] All of this happens automatically: no need to turn on settings or set up special secret chats to secure your messages.
Given that the only defense against a WhatsApp MITM is turned off by default, the "not even WhatsApp"/"automatically: no need to turn on settings" part is just not true.
At one of my jobs the network team uses a thing called "Forcepoint's TLS inspection" (aka Websense) (aka Raytheon). My browser happily lets that network team MITM me all day long without a peep, and logs & archives all my TLS traffic for who knows how long.
The funny thing is a VM I setup from my same laptop tried to make an https:// connection and the browser outright refused, without any possible workaround until I imported the Forcepoint CA cert.
Security people must love us users so bad. Love you, too! xox
(Note: the same network team imaged the laptop in the first place, and it's against my contract to re-image it. Hence the Forcepoint CA cert's presence in my browser's root chain. I prefer to call this LAN-In-The-Middle.)
This is absolutely standard in the UK financial services industry, and ultimately required for compliance with financial regulators.
The alternatives are running agents on your machine that capture everything you do (which most shops I've been at do as well) and removing local administrative rights to prevent users from removing auditing software and deploying workarounds like your VM (also the norm now).
This has absolutely no bearing on the security of HTTPS/TLS as a whole, the chain of trust is working exactly as it's supposed to in this instance. It's distasteful as an end-user (and even more distasteful as one of the network engineers deploying it, wondering why it's not Information Security's job instead), but you can always quit that job and find another one (yep, that's what I did).
If you are in Europe (or at least some countries in Europe), it's illegal to read in-transit messages even if the recipient is at work and the interceptor is their employer.
Reference? I've worked at several companies claiming they are allowed to do this (which I don't necessarily believe, of course). Has it been tested in court?
Great link, thanks. However, it doesn't back up the claim you made. A few quotes:
"In Europe, there is technically no uniform body of “European law” that directly applies between employers and employees"
"Courts and scholars increasingly reference EU law, usually without clarifying whether the existence of a particular civil right protection in the EU Charter actually changed the legal situation as a matter of law, rather than as a matter of public policy."
There's a lot of fuzziness around implementation of a very loosely worded human rights clause, combined with prior national laws. Mostly aimed at protection from Government. Previous tests have mostly been cases where the individual did not consent or some such thing.
More directly, the EC data protection directive hinges on: 1) contractual obligation; 2) consent; 3) statutory obligations; 4) a balancing test. It seems highly likely that most businesses can legally MITM me if I sign the contract they want me to sign.
Most - but not all - of the private sector examples given (including Germany and France) hinge on the employer not following the correct process: either not notifying the employees, not gaining consent, or opting to allow private communications at work which are strictly forbidden from being monitored (in some countries).
That said, there is also:
"A number of EC member states, including Germany, Italy, the Netherlands, Spain, and the United Kingdom, strictly prohibit ongoing monitoring of employee communications and permit electronic monitoring only in very limited circumstances (e.g., where an employer already has concrete suspicions of wrong-doing against particular employees),265 subject to significant restrictions with respect to the duration, mode, and subjects of the monitoring activities"
It's not immediately clear if this applies to specific, targeted monitoring. The footnote gives an example where informing the employee of valid reasons for investigating is sufficient.
(Note: I made no claims, just jumped in to provide references about the state of affairs in some European countries)
The pages I gave are specific case studies of the law in Germany & France. You are right that there is not too much overarching EU level legislation about these things, it's generally in national legislation and up to each country.
Less than 2% of the total staff probably realize that all their https traffic is being intercepted. I find it odd that we try to teach everyone the difference between http and https, and then we do this.
Having started originally with Threema before I gave in to WhatsApp, I kind of like the trust levels they established in the UI. It might be an improvement for the WhatsApp UI to downgrade the trust level visually in case of unexpected key changes.
Besides that, and thinking through this comment by Moxie, I fear he is right. I have a bunch of dead keys listed in my Threema contact list, all from people who are in general quite tech savvy but were still too lazy to transfer their keys on phone changes. And I have already had to rescan (the QR code) quite a bunch of people whom I meet maybe once a year.
That's for my modest 20-something Threema contacts. Now think about the not very tech savvy average WhatsApp user with his 150+ contacts. Maybe about a third of them will change their phone or MSISDN throughout a year. If you see 50 alerts per year in your chats that something changed, how long will you keep verifying that those changes are valid?
I don't like those defaults chosen by WhatsApp, and once I knew about it I changed it. But at the scale of WhatsApp I understand the decision they made. You might also want to add the common argument that in the real world close to nobody will give a shit about the encryption. Since Snowden a few percent more care, but it's still a small minority. So bringing at least some security to the majority that do not care is still a win. Everyone else has to make informed decisions about their own configuration.
> Key changes in a messenger are totally different. They happen under normal conditions
This doesn't have to be the case. If you stop coupling a key to a device and instead couple a key to a person (generating a key deterministically from a password for example), they can be changed far more rarely.
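For illustration, a minimal sketch of that idea, assuming Python's hashlib plus the third-party "cryptography" package; the KDF parameters and the salt handling are placeholder assumptions, and it glosses over the obvious weak-password problem:

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def identity_key_from_password(password: str, salt: bytes) -> Ed25519PrivateKey:
        # scrypt is memory-hard, which slows down offline guessing of the password
        seed = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)
        return Ed25519PrivateKey.from_private_bytes(seed)

    def fingerprint(key: Ed25519PrivateKey) -> str:
        raw = key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        return raw.hex()

    # The same password and salt reproduce the same identity key on a new device,
    # so contacts would see no "key changed" warning after a phone swap.
    old_phone = identity_key_from_password("correct horse battery staple", b"per-user-salt")
    new_phone = identity_key_from_password("correct horse battery staple", b"per-user-salt")
    assert fingerprint(old_phone) == fingerprint(new_phone)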
That was just an example. You could also pair the key to a person by some other method, such as storing a copy of it on a storage medium other than their phone.
Requiring an external storage medium would kill the service. I think you have to separate a service made for the masses from a service with a focus on security/encryption. For WhatsApp there will be some instances where you have to choose between convenience and security, and they have chosen the former, which is only natural.
There is one passphrase I remember, five passwords, two PINs, and two phone numbers. My password manager and address book remember hundreds of passwords, phone numbers and emails each.
For some reason everybody uses an address book, and many people let browsers remember passwords, but almost everybody resists the idea of using a password manager and ends up with low-entropy passwords.
> What would these 'trivial' steps look like if a telephone gets stolen
Just as 'trivial' as it is for Facebook to swap your key at the request of a government. You should have to start from a blank slate (zero trust) in that situation.
Getting your phone stolen is an extraordinary event that warrants requesting some attention from your contacts, even if only to inform them of the old identity being compromised. And then you might as well have them verify a new key.
Buying a new phone and switching to it
Reinstalling your phone OS because "it's slow"
Reinstalling WhatsApp because "it crashes" or "it's slow"
Swapping a phone because the screen is broken or I dropped it in the toilet
I think it's romantic to think that 1 billion WhatsApp users can be taught about the risks of MITM attacks and how to do a key check.
This is what I do: I have the warnings turned on. When the key change warning appears, and if I care enough about the person and the discussions we have, I try to match the warning with a real-world event, so either I already know that something happened, or I try to remember to ask somehow whether the person repaired or changed the phone. If I can match the warning with such an event, I feel satisfied. Otherwise, I ask for a key check when I meet that person in real life.
It would help if WhatsApp provided a UI to show whether I have verified the current key of each user (something like a green check-mark next to the name) because it's hard to remember.
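Something like this could even be done purely client-side. A rough sketch of the bookkeeping I mean (hypothetical, not an existing WhatsApp feature; the file name and method names are made up):

    import json, time

    class VerifiedKeyStore:
        """Local record of which fingerprint I verified for each contact."""
        def __init__(self, path="verified_keys.json"):
            self.path = path
            try:
                with open(path) as f:
                    self.records = json.load(f)
            except FileNotFoundError:
                self.records = {}   # contact -> {"fingerprint": ..., "verified_at": ...}

        def mark_verified(self, contact, fingerprint):
            self.records[contact] = {"fingerprint": fingerprint, "verified_at": time.time()}
            with open(self.path, "w") as f:
                json.dump(self.records, f)

        def badge(self, contact, current_fingerprint):
            rec = self.records.get(contact)
            if rec is None:
                return ""                                  # never verified: no badge
            if rec["fingerprint"] == current_fingerprint:
                return "verified"                          # green check next to the name
            return "key changed since you verified it"     # exactly the case that's hard to remember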
That is basically how 2FA works with Apple devices. You use an old device to approve new ones. Sure, if you lose your cloud account, laptop and phone all at once you'll need to start from scratch. But under normal circumstances it reduces the amount of blind trust.
Moxie, what about showing a positive UI for users you have verified keys with? Something like the verified checkmark on twitter. I'm ok with this being client side of course, and possibly even lost if you reinstall the app (better than nothing).
I like to verify keys of my main professional contacts on WhatsApp, but it's hard to remember who you verified keys with, and then whether the key was changed since last time you verified it.
Threema offers that. You do a QR code scan, after which a contact is marked as verified (3 green dots). Since Threema has a fixed key per user, these verifications are persistent and most people transfer their keys when switching to a new phone.
It would be good to be able to check how long the present safety number has been in place. That would allow people who have become concerned about snooping to check for it back to that point (hey, when did you last change your phone?).
>TLS certificate errors are not something that should happen under normal circumstances. When a TLS certificate fails to validate, something is really wrong. As we've gotten better about ensuring those conditions, browsers have made it harder and harder to get past the warnings, because they're not warnings anymore -- they're error conditions.
Not paying Verisign your rent? That's an "error condition".
(Here of course referring to the choice of browser vendors to block access to web sites that offer secure end-to-end crypto via TLS, but merely haven't paid a browser-trusted CA to issue a new cert with a future expiration date.)
Would have been a fair statement a couple of years ago, but we live in a day when you can get free annual certs manually (StartSSL) and free 90-day certs automatically (LetsEncrypt).
The StartSSL CA is in the process of being blacklisted by major browser vendors because they issued a certificate for github.com to someone who clearly does not run github.com. [0]
LetsEncrypt just barely left beta (also this summer) and I'll admit that I haven't investigated it thoroughly, but it appears that some widespread devices are still incompatible (also consider the versions that accept LetsEncrypt; some of those are fairly recent, like CM 10). [1]
While some noble souls like LetsEncrypt have sought to remedy this rent-seeking behavior, it remains the fact that in most cases, a traditional CA is going to be required for a couple more years at least.
No, but they don't have to because (the vast majority of) users don't establish trust in website's TLS certificates themselves; instead, they use a trusted third party: the set of all trusted certificate authorities in their browser or operating system's root store. End-to-end encrypted messengers like Signal and WhatsApp don't rely on a trusted third party to establish trust, instead (rightly) leaving it up to users to establish trust between each other.
> That's probably going to be a fundamental limit of any application that serves billions of people from many different demographics all over the world.
Moxie, some of us are of the opinion that [that] (implied) goal is certainly noble but ill-considered.
Modern state surveillance has 2 general unstated goals:
1) Create an atmosphere of fear to effect self-censorship. Some states (such as China) announce this as a matter of state policy. Others (such as the US) drop hints. The UK is somewhere in between.
2) Identify emerging memes, clusters, and thought leaders. This information is then used to counter, disrupt, and discredit/isolate (respectively).
(And yes, the stated public goals are to prevent terrorism, child pornography, and crimes.)
From the political angle -- the activist angle, if you will -- the goal of "serving billions of people from many different demographics all over the world" is at minimum misguided and counterproductive, and at maximum a hazard.
I think you are wrong. When only a small portion of the population can use end-to-end encryption in their day-to-day communications, a state can declare it (e2e encryption) "suspicious" and achieve both goals far more easily.
I don't understand. How is it misguided, and who is it a hazard to? Are you saying the unstated goals of state surveillance are good ones which conflict with popular use of crypto, and therefore popular use of crypto is bad?
I THINK the commenter was saying that "serving billions of people from many different demographics all over the world" is inviting all of those different people together so you can betray them all at once.
> Key change notifications are off by default in WhatsApp. That's probably going to be a fundamental limit of any application that serves billions of people from many different demographics all over the world.
I'm not sure what exactly the reason for that is. Is it UX? Like, if someone gets a new phone and creates a new key pair, their friends will get scared by the warnings?
> Even if they were on by default, a fact of life is that the majority of users will probably not verify keys. That is our reality.
Another fact of life is bad password choices, which is why Gmail doesn't let you use "love", "sex" and "secret" as a password :)
Browsers, for instance, throw warnings when something is wrong with a cert. Even when 99% of the time it's some domain-name issue or expiration date, I think it's a nice default. By letting Facebook rekey at any time, you are (figuratively) making them a kind of CA. I don't think there is a good reason for that, especially not when WhatsApp claims that even they can't read your messages... it feels dishonest to me. But then again this is just a messaging app downloaded from Google Play running on Android, so my expectations aren't too high...
The problem with key notifications being off is for those users who really want to be secure, and downloaded WhatsApp because they wanted E2E, but didn't know they had to go into settings and turn the notifications on.
The problem with key notifications on-by-default is that regular users see warnings they don't understand and get warning-fatigue.
So how about making a default-on notification that is understandable for all users? Like:
::: It seems like Alice switched to a new phone (i)
where Bob can click the (i) for more info, or just ignore the notification. If Bob was security-conscious, he'd perk up at that message, while the majority would just go "meh" or congratulate them on their new phone.
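A tiny sketch of what I mean, with made-up wording and function names, just to show the default/detail split (the friendly one-liner is always shown; the fingerprint details only appear when the user taps the (i)):

    def key_change_notice(contact, old_code, new_code, show_details=False):
        # Friendly one-liner everyone sees by default; details only on request.
        notice = "It seems like %s switched to a new phone (i)" % contact
        if show_details:
            notice += ("\nThe security code for this chat changed."
                       "\n  previous: %s\n  current:  %s"
                       "\nTo be sure, compare the current code with %s in person."
                       % (old_code, new_code, contact))
        return notice

    print(key_change_notice("Alice", "12345 67890", "09876 54321"))        # what Bob sees by default
    print(key_change_notice("Alice", "12345 67890", "09876 54321", True))  # after tapping the (i)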
Which is OK, as the people you know would understand that you are that type of person.
It would also pop up when someone reinstalled the app.
That would be rather annoying, but it should be on by default, and the first time it pops up it should give clear information:
"WhatsApp is set to notify you when a contact's key changes; this could be due to a change of phone or reinstalling the app. If you do not wish to receive these notifications, click here."
Which would allow users who want it to keep it, and those who don't to turn it off after the first change.
Is there at least a record of what keys were used on both sides, so that I could verify later whether or not this has taken place?
> You've just described a "man in the middle" attack. It is endemic to any public key cryptosystem, including Signal and PGP, not just WhatsApp. The notification that you see in WhatsApp, Signal, SSH, PGP, or whatever is the defense.
I think it's still completely valid to say that WA should not claim to be unable to snoop. They can, and appear to be able to do so undetected with the default settings. Does the setup at least ask users if they want this feature on or off?
What would prevent WhatsApp from shipping a client where they control the rekeying notification setting remotely?
I suppose there must ultimately be some level of trust in WhatsApp that the client is doing what it says it is? Unless we're willing to sniff every piece of network traffic from it.
> That way the server has no knowledge of who it can MITM without getting caught.
What exactly do you think is the worst thing that could happen if you "catch" them doing this?
Now what do you think is the worst thing that could happen if they receive a subpoena or NSL or whatever that tells them to do this regardless of whether the user finds out or not (because the government wants the message contents that badly)?
> What exactly do you think is the worst thing that could happen if you "catch" them doing this?
[I'm not the OP, but my 0.02]:
Hopefully there would be an outcry, initially started by technically sophisticated communities like this, and credible articles in the Guardian, eventually causing significant user anger, and letting competitors gain against them. People running social networks care about mass user anger.
Hopefully that possibility keeps them honest.
Hopefully people don't cry wolf too many times, like today - slowly poisoning the watchdog!
> Now what do you think is the worst thing that could happen if they receive a subpoena or NSL or whatever that tells them to do this regardless of whether the user finds out or not (because the government wants the message contents that badly)?
This has got to primarily be a defense against ongoing mass surveillance.
If the government can compel them (via NSL or force or whatever) to change the service so that it just spies on a few targeted individuals, wouldn't it be easier to push those individuals a malicious client update, rather than MITM the encryption and hope they have notifications off?
Does anyone know how to build a massively adopted network that resists targeted NSLs? I'm grateful we appear to have one that is resistant to pervasive monitoring.
It's rather disappointing UX, and something that trains users to accept key changes from their contacts, that Signal doesn't support affirmations of key continuity.
If you get a new phone without having lost the old one, it would be good to have a feature where Signal on the new phone shows its public key as a QR code, you scan it with Signal on the old phone and Signal on the old phone generates a protocol message to contacts indicating legitimate key roll-over without "key changed but you don't know why" UX.
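A minimal sketch of what such a roll-over message could look like (Python with the third-party "cryptography" package; the message format and field names are made up, this is not Signal's actual protocol): the old device signs a statement endorsing the new public key, so contacts who already trust the old key can accept the new one.

    import json, time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)

    SIG_LEN = 64  # Ed25519 signatures are always 64 bytes

    def make_rollover(old_priv: Ed25519PrivateKey, new_pub_raw: bytes) -> bytes:
        """Run on the old phone after scanning the new phone's public-key QR code."""
        statement = json.dumps({"type": "key-rollover",
                                "new_key": new_pub_raw.hex(),
                                "timestamp": int(time.time())}).encode()
        return statement + old_priv.sign(statement)

    def verify_rollover(old_pub: Ed25519PublicKey, blob: bytes) -> bytes:
        """Run by a contact: returns the new key only if the trusted old key vouches for it."""
        statement, sig = blob[:-SIG_LEN], blob[-SIG_LEN:]
        old_pub.verify(sig, statement)        # raises InvalidSignature if forged
        return bytes.fromhex(json.loads(statement)["new_key"])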
>I think we should all remain open to ideas about how we can improve this UX within the limits a mass market product has to operate within, but that's very different from labeling this a "backdoor."
Then release your product in a manner that lets people improve the UX and correctly label what is and isn't a backdoor.
>You've just described a "man in the middle" attack. It is endemic to any public key cryptosystem, including Signal and PGP, not just WhatsApp. The notification that you see in WhatsApp, Signal, SSH, PGP, or whatever is the defense.
If you're operating under the assumption that users aren't going to check their peers' key fingerprints, then you could just give compromised keys from the beginning -- no rekeying necessary. There's no way to protect against that scenario. That's not a fault of WhatsApp.
> The only question it might be reasonable to ask is whether these safety number change notifications should be "blocking" or "non-blocking." In other words, when a contact's key changes, should WhatsApp require the user to manually verify the new key before continuing, or should WhatsApp display an advisory notification and continue without blocking the user.
You seem to be arguing that they should be blocking. While I agree that's definitely the best choice from a security perspective, I kinda doubt most users would appreciate having to manually re-verify keys every time someone reinstalls WhatsApp or changes their phone. In this case, WhatsApp decided to prioritize usability over security. You and I are of course free to criticize that choice, but it's hardly a "backdoor".
I'd add to the author's statement above and say another question we might reasonably ask is whether the notification should be on or off by default. While my gut reaction to that is "On, obviously!", upon giving it a bit more thought I think it's actually understandable why WhatsApp chose off instead.
They're not designing WhatsApp merely for security conscious people, but for the masses. The average user is unlikely to understand or care enough about this warning to manually re-verify keys every time they see it, especially when the vast majority of the time it's just going to turn out to be the result of something mundane, like one of their friends getting a new phone. Honestly, I could go either way on this one.
No, first and foremost, I argue the notifications should be ON by default and feature a lot more clear and understandable text than what is displayed now.
The auto-resending of undelivered messages is another issue, alongside the potential for MITM due to missing or unclear notifications about key changes.
Users could verify if they have been MITMed by verifying the Security Number (just tap on a contact, view contact details -> Encryption). This assumes the WhatsApp app doesn't just display the old safety number.
Indeed. I'm working under the assumption that the client itself is sound.
The question is: how many users will actually check the Security Number, and recheck all the security numbers now that they were made aware of having to turn on the notification.
What WhatsApp is doing here is like using a self-signed certificate for TLS (your own fault if you did not check the cert yourself using whatever out-of-band method is available), and on top of that the TLS client, by default, will not tell you when that self-signed cert changes (and you therefore need to recheck it), and for added bonus all traffic is routed through them.
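For anyone wondering what the Security Number check actually buys you, here is a rough sketch of the general idea (this is not WhatsApp's or Signal's exact derivation, just an illustration): both clients derive the same short code from the two identity keys, so a MITM key swap makes the codes stop matching.

    import hashlib

    def security_code(identity_key_a: bytes, identity_key_b: bytes) -> str:
        # Sort so both sides compute the identical code regardless of who is "A" or "B"
        digest = hashlib.sha256(b"".join(sorted([identity_key_a, identity_key_b]))).hexdigest()
        groups = [str(int(digest[i:i + 5], 16) % 100000).zfill(5) for i in range(0, 60, 5)]
        return " ".join(groups)               # 12 groups of 5 digits, like the in-app code

    alice_view = security_code(b"alice-identity-key", b"bob-identity-key")
    bob_view   = security_code(b"bob-identity-key", b"alice-identity-key")
    assert alice_view == bob_view             # a MITM using a different key breaks the match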
I'm confused. The way I understand it is that once the double checkmark appears, those messages are locked in and never rekeyed. That means once you see those checkmarks, you're guaranteed no one can snoop on that message anymore.
Assuming you have the notification on (Which anyone who cares about security could and should turn on), once you see a warning, you could just delete all messages that don't have the double checkmark (if any) and not send new ones.
One checkmark = message delivered /
Double checkmark = somebody read it. That somebody might as well be WhatsApp doing a MITM.
The double checkmark stuff the blog talks about just means you cannot retroactively rekey already sent (old) messages.
WhatsApp inserting itself as a MITM can, however, read (and double-checkmark) any new messages after it does the MITM rekeying.
It is actually slightly different. One checkmark means received by the WhatsApp server, two checkmarks mean received by the recipient, and two blue checkmarks mean read by the recipient.
> This is contrary to WhatsApp's claim that even they cannot snoop.
They claimed that once a message has been delivered (double check mark), it can no longer be intercepted.
> The only notification might be that rekeying warning
The rekeying warning is a fundamental part of WhatsApp's security. If it's disabled, there are many possible attacks, not just this one. The blog entry explains it pretty well.
He does address this:
Once the sending client displays a "double check mark," it can no longer be asked to re-send that message.
That means a user is able to verify visually that the end-to-end is working.
"users might not notice" doesn't seem to me as a strong argument to state this as a backdoor. This would imply not noticing that you don't have a green padlock on chrome is a backdoor too, and it clearly is not.
The "green padlock" was not considered enough because users would not be able to differentiate it from a big lock symbol within the page. Thus we got HSTS.
(There was a time when browsers would color the entire URL bar yellow to indicate https, but that went out of favor many years ago.)
Moxie deserves respect for the web vulnerabilities he discovered and raised awareness about years ago, and for his general competence at cryptography. But in recent years he's shown himself to be willing to make catastrophic sacrifices to make security applications popular and viable for the "lay person".
If the "lay person" ignores the non-obtrusive key changes, and difference between single and multiple checkmarks (and the timing of them, whether single changed to double after a key change), and just trusts that "I heard WhatsApp is secure, so I'm good to go", then so much is sacrificed that there wasn't any point in the exercise to begin with. Except that real solid systems, with direct user control over key continuity, and fully open-source, are undermined by the confusion with these "lay person" super-convenient closed-source systems.
> catastrophic sacrifices to make security applications popular and viable for the "lay person"
This isn't wrong, but it's unfair to bring it up without the most obvious counter-argument.
PGP provides absolutely zero security to the average person, because average people don't use it. HTTPS provides lots of security to the average person, whether or not they know what the green lock means, because lots of people use it. Adoption is a feature.
Of course both of these things are true. Security sacrifices for the sake of adoption suck. But let's not paint a picture of Signal as "desperate for popularity", as though that was a selfish and not security-minded goal. Be fair.
The problem is that most security technologies only provide protection against specific attack vectors and attackers under specific conditions.
Without understanding these technologies very deeply, they are all creating a false sense of security to some degree.
That doesn't make your statement false, just very difficult to apply. That's not to say it can never be applied. There are clearly cases in which people are deliberately misled.
But WhatsApp markets itself as a secure system when the client just blindly accepts re-keying from the server without notifying the user by default.
It could easily have the notification on by default, and when a user turns it off actually explain that you are no longer secure.
The very best would of course be to require the users to physically exchange keys whenever they get a new phone etc, but we all know this will never happen.
I agree that there is much room for improvement. Instead of simply turning warnings on or off, they could let users enable warnings for some contacts but not others.
But my point is that the current approach is not simply "false security". It is incomplete or optional security against specific threats and not others. Depending on a particular user's expectations it may amount to false security. You're right about that. But it's not clear to me that having this sort of security is worse than nothing for the average user.
Also, you have to consider that this sort of optional and partial security used by a very large number of people allows those with real security needs to hide in the crowd. Taking a clear all or nothing approach, as you suggest, would put a bullseye on the back of those who do need security.
I think this is good healthy criticism, but I think it's also difficult to strike the "right" balance.
I believe we got Signal end-to-end encryption in all of these messengers just barely, even as "compromised" as you may think it is. Google and Facebook (Messenger) didn't even enable it by default because they thought it was "too much" encryption.
So if it was even more difficult to use, it may have never been adopted by these services.
At the same time, I don't think we should allow all sorts of modifications to the protocol and to how this encryption system works just to cover a few niche use cases that would slightly increase those users' convenience.
Sending undelivered messages when the recipient is switching SIM cards, instead of just telling the sender that those messages can't be sent right then, is one of those niche use cases and compromises that shouldn't happen, especially if enabling such features could be turned into de facto "legal intercept".
I guess Moxie is saying here that this wouldn't be a de facto legal intercept, but I'm not so sure that's true, and the researcher who found the bug doesn't seem to agree either. I think, unlike others here, that it's very likely people wouldn't notice that the messages don't have a double check mark anymore.
>just trusts that "I heard WhatsApp is secure, so I'm good to go", then so much is sacrificed that there wasn't any point in the exercise to begin with. Except that real solid systems, with direct user control over key continuity, and fully open-source, are undermined by the confusion with these "lay person" super-convenient closed-source systems.
I totally disagree with this.
Security isn't some absolute thing, which you either have or don't have.
It's a series of threats, and counters, and usability tradeoffs you have to make so people still use your service.
You can always criticise someone selling a front-door lock - "what if the bad guy smashes the window"? But that doesn't mean front door locks are bad, or that we should all move into houses without windows.
I think tech folk treating security as a binary all-or-nothing thing, without thinking about the usability tradeoffs, are a big part of the problem, and why so much of what we have is so insecure. We have these "real solid systems, with direct user control over key continuity, and fully open-source" you mention which almost no one uses.
This makes them useless. Complaining that people should know better is also useless. Shipping software which dramatically increases security against a wide range of threats, on the other hand, because 1B people actually use it, is a positive contribution. Trying to blame that same software for the lack of adoption of "real solid systems" is lame. We've had the solid systems for years, but their lack of usability always sunk them.
Does WhatsApp encryption handle every threat? Of course not. There are always going to be unhandled threats. People could still come to your house and root your phone, or hit you with a wrench until you unlock it, for example.
But there's an apparently big threat (revealed in the Snowden leaks) of governments clandestinely, passively, mass-monitoring traffic on the wire, perhaps without the cooperation of the tech companies involved, which has compromised the privacy of vast numbers of entirely innocent people. WhatsApp's encryption seems to counter this.
Moxie et al's contribution thus deserves respect, in so far as it potentially protects a billion people from that class of threat.
It's possible that, due to the closed source nature of WhatsApp, they are actually snarfing everything. That would be big news, deserving of a mass outcry, or a leak. I'm hoping the potential commercial ramifications of them getting caught widely releasing a deliberately compromised client keeps them honest.
It's a risk, but security is always about risks and tradeoffs, not absolutes; and they have built a system that's actually usable enough that it has 1B users.
I agree. Perhaps a further version of the Signal protocol could implement a definitive solution that better addresses this kind of scenario, the way HSTS did for SSL certs. Combined with a friendly UI solution (like the new padlock | Secure string in Chrome), that would lead to easier detection of possible eavesdroppers by the lay person.
It sounds like you are saying this project's true motivating objective is to advance the author's personal beliefs about "UX" and to see wide adoption of this philosophy as embodied in some particular software. That he chose to use a large, politically-connected, centralized social media^W^W ad sales company as a distribution channel ensuring wide adoption by default. And that the author would make any trade-off in order to see wide adoption.
Should we be surprised?
If I am not mistaken the cryptography here is compliments of djb. (Best "UX" designer ever, IMHO.)
The Signal author's contribution is only a protocol and some "UX". His programming language of choice was Java.
A little. Without reading the Guardian piece, I wouldn't even guess that delivery notifications have anything in common with security properties.
In other words, the messages are secure only when there is a double checkmark, not just a single checkmark. How am I supposed to know that?!? I am not even sure what the checkmarks mean.
Frankly, that's not a "backdoor", that's just a poorly thought out GUI (in my opinion), that might eventually lead to backdoors with an evil server.
Absolutely. There is no evident connection to be drawn from the double checkmark to "secured communication".
But when you think about it, the double check is a delivery confirmation, so if we are in an end-to-end encrypted scenario and the message content was delivered, it means the recipient's device was able to decrypt it successfully.
This tells you that the recipient is still using the same set of keys that your device thought it had and used to encrypt the message.
The single/double checkmark is used with Signal's client as well. When you're sending unsecured texts, you get a single checkmark when you send it; when you are sending encrypted messages, a lock symbol and a single check when sent, and a double check when received.
Except that the obvious interpretation of the second checkmark not appearing is that the message was never received, not that it was decrypted by an attacker. Especially when there's a key change warning afterwards. I think this is still the correct interpretation in Signal proper, but who knows at this point?
Couldn't the server change keys after every message? The delay would be fairly small. Hold the original, change key, retransmit, change key back? Guess that is easily enough mitigated if the client won't allow switching back to the same key.
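A tiny sketch of that mitigation (hypothetical client logic, not something WhatsApp is known to do): remember retired keys per contact and refuse to accept any of them again, which blocks the hold-message / swap-key / swap-back trick.

    class KeyHistory:
        """Remember retired keys per contact and refuse to accept one of them again."""
        def __init__(self):
            self.current = {}    # contact -> current fingerprint
            self.retired = {}    # contact -> set of fingerprints used in the past

        def accept_key(self, contact, fingerprint):
            if fingerprint in self.retired.get(contact, set()):
                return False     # an old key is being re-introduced: reject or warn loudly
            old = self.current.get(contact)
            if old is not None and old != fingerprint:
                self.retired.setdefault(contact, set()).add(old)
            self.current[contact] = fingerprint
            return True

    history = KeyHistory()
    assert history.accept_key("bob", "key-1")       # first key: fine
    assert history.accept_key("bob", "key-2")       # legitimate-looking change: fine
    assert not history.accept_key("bob", "key-1")   # switching back to a retired key: refused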
I guess the point here is: Can there be a backdoor in whatsapp? Of course!
Is there a backdoor on Whatsapp as described on the guardian article? No.
Can the UX be improved to alert the smaller percentage of users that rely heavily on the encryption features when their communication is not being actively protected, without disturbing the UX of the rest of the users? Probably yes.
No, it's very different. Without further details, it sounds like it's possible for FB to switch keys and intercept everyone's messages, all the time.
> Can the UX be improved to alert the smaller percentage of users that rely heavily ... Probably yes
Hell yes! Imagine just changing the background color and showing the banner "somebody is spying on you." Right now it's something like "some checkmark somewhere looks a bit different," or whatever... I'm still not sure what means what.
But key changes are pretty frequent and normal things that happen between clients - this would be an awful and inappropriate warning almost every time it was shown.
A double-tick will appear once everyone has received the message. You can view details on a single message to see who has received the message on an individual basis (and also who has read it).
Regardless of the merits of this specific accusation-and-denial cycle, the fact remains that WhatsApp is closed-source crypto and there is no way in principle for the user to verify any security claims.
I happen to trust Moxie's principles, but not as much as I distrust the relationship-with-government imperatives implied by FB's vast business interests.
Whichever story you are talking about, this one or the guardian one, it doesn't address the closed-source point.
There is theoretically a way to verify WhatsApp even though it's closed source, but it's practically impossible. It's hard enough to verify software even when the source is open, you built it yourself and the whole platform and toolchain is trusted. A bunch of the potential NSA crypto backdoors were totally in the open.
The app could just be lying about resending old messages or even not encrypting them or any number of things much more subtle.
I'm sorry, but this simply isn't true. Software of far, far greater complexity than WhatsApp has been reverse engineered comprehensively by hobbyists and amateurs. Meanwhile, professionals have pretty sophisticated tools for doing this work at scale.
This seems to be the same angle played every time the analysis of crypto tools comes up on HN.
(Almost always) when someone mentions the 'impossibility of analysis' of closed-source programs they are actually referring to the difficulty in doing so -- not actually stating that it's impossible.
It is easier to look through source code.
Now, if we're progressing through this conversation according to script, it will be mentioned that open source projects have had tremendous security problems, too. (OpenSSL comes to mind..)
But that's beside the point expressed.
The only point, and it's the point that was originally expressed, is that open source code is easier to look through than a closed code base.
The hurdles posed by closed source, although not impossible to jump, significantly hinder the progress of analysis.
Not necessarily. People who spend their days writing source code tend to think in terms of source code because that's what they know. People who spend their days analyzing binaries don't. Be mindful of the difference between difficult and unfamiliar.
I'm sorry, but if you look upthread, the comment I responded to not only didn't say that verifying open source was easier, but actually made the extreme claim that there was in principle no way to verify closed source software at all.
Meanwhile, addressing your (different) argument directly: sure, reading C code is easier than reading assembly code, and reading Python is easier than reading C. The easier it is to read a program the easier it is to reason about it.
But:
* It's not terribly difficult to reason about the functionality of messaging software in any language.
* WhatsApp is an extremely high-profile target; it would be weird if people hadn't reversed it by now, since less well-known programs that are much harder to reverse have been productively (as in: findings uncovered) reversed.
* The particular things we're looking for in a program like WhatsApp fall into two classes: (1) basic functional stuff like data flow that is even more straightforward to discern from control flow graphs than the kinds of things we routinely use reversing to find (like memory corruption flaws), and (2) cryptographic vulnerabilities that are difficult to spot even in source code, because they're implemented in the mathematical domain of the crypto primitives regardless of the language used to express them to computers.
Sure, though. It is easier to spot backdoors in open source software. It's just not capital-H Hard to do it in closed-source software, so this open vs. closed debate about backdoors is usually a red herring.
> less well-known programs that are much harder to reverse have been productively
> It's just not capital-H Hard to do it in closed-source software, so this open vs. closed debate about backdoors is usually a red herring.
No, you are oversimplifying the problem a lot.
In an Open Source project it is possible to create transparency in the development process by making every commit public and allowing 3rd parties to mirror the source repositories, as well as perform reproducible builds, sign the artifacts and so on.
Once a project has been reviewed, it becomes pretty difficult to sneak in a backdoor later or deliver a backdoored build only to some specific targets.
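To make the reproducible-builds point concrete, here is a minimal sketch in Python (purely illustrative; the build.sh script and file paths are assumptions, and a real reproducible build also has to pin the toolchain and strip timestamps so anyone gets a byte-identical artifact) of how a third party could check that a shipped binary matches the published source:

    import hashlib
    import subprocess
    import sys

    def sha256(path):
        """Return the hex SHA-256 digest of a file."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical build command for a project that supports reproducible builds.
    subprocess.run(["./build.sh", "--reproducible"], check=True)

    local = sha256("dist/app-release.apk")            # what we just built from source
    published = sha256("downloads/app-release.apk")   # what the vendor/store shipped

    if local == published:
        print("OK: the shipped binary was built from the published source")
    else:
        sys.exit("MISMATCH: the shipped binary does not correspond to this source")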
In case of closed source smartphone applications it's very Hard to reverse engineer every single release simply because it takes a staggering amount of work.
It's also Hard to verify whether some unsuspecting users are receiving a "custom" APK, and to block such an update automatically.
>> it's very Hard to reverse engineer every single release
> Nope
Nope to what? Are you saying that binary diffing is possible or that the amount of effort required is (remotely) comparable with analyzing source code?
I would like to see evidence supporting the latter statement, if this is what you are saying.
Your argument is the same one as "nuclear submarines are impossible to build because I just thought about it for five minutes and can't build one". But Electric Boat Corporation from Groton, Connecticut delivers them regularly, on time and under budget (!). Googling around will tell you that these things exist and people do build them.
You can use Google to prove to yourself that either the infosec industry really exists (including skilled full time reverse engineers) or there is a vast conspiracy. Same as you would prove to yourself that nuclear submarines exist, without ever being allowed onboard one to inspect it.
Consider all the people who study closed source browsers (MSIE) and plugins (Flash) to write malware. Consider all the people who reverse engineer malware to write protections or ransomware decryptors.
The people who can do such work don't work exclusively for the NSA and Google, and you can probably hire them for $1000 a day. But none of them will do tricks for you for free just to prove that they exist. They're too busy making money.
I saw some of the work described in this [1] excellent paper on reverse engineering NSA's crypto backdoor in Juniper equipment being done live on twitter. People exchanging small pieces of code, piecing together all the changes that were made in order to allow passively decrypting VPN traffic.
Are you asking me to "back up" the claim that security researchers use BinDiff tools to reverse out vulnerabilities from vendor patches?
At one of the better-attended Black Hat USA talks last year, a team from Azimuth got up on stage and walked the audience through an IDA reverse of the iOS Secure Enclave firmware. Your argument is that it's somehow harder to reverse a simple iOS application?
You can demonstrate the presence of a vulnerability in closed source software but there's no way to demonstrate (or even provide evidence of) the absence of any vulnerabilities.
That's identically true of open-source software. To put it in the theoretical terms you're probably most comfortable with: the programming language used to represent a computer program has nothing fundamentally to do with whether it can be verified. Obviously some languages are easier to verify programs in than others, but the gap between assembly and C in ordinary compiled programs is surprisingly small.
Open vs. closed-source software is a concern orthogonal to verifiability.
I know this is a tough thing for people to get their heads around since it challenges a major open source orthodoxy. I like open source too. But the people who ratified it were not experts in this field, and this particular benefit of open source is overstated.
> the programming language used to represent a computer program has nothing fundamentally to do with whether it can be verified
That's not true. The design of a language can make it easier to verify with respect to certain properties. For example, it is much easier to verify that a typical Python program does not dereference dangling pointers than a typical C program.
It is true that open source does not help as much as some of its adherents like to think. But that doesn't mean that it doesn't help at all, and it is certainly not true that it cannot help substantially in principle even if it does not help much in current practice.
You're using a word, "easier", that is keeping us off the same page. I agree that Haskell programs are easier in many senses to verify than PHP programs. But our field does formal methods verification of assembly programs, for instance by lifting them to an IR.
The Skype client was obfuscated, encrypted, and riddled with anti debugging boobytraps, none of which prevented people from figuring out exactly what it did. (Not exactly a formal analysis, but probably news to the people who think messaging apps have never been reversed before.)
AFAICT that's (at best) research level stuff. I'd love to be proven (heh) wrong, though. I think what lisper was after was actual practical applications, e.g. something along the lines of the CompCert C compiler[1].
[1] Which I'll note was written and verified in Coq, a high-level proof-oriented language.
I don't know what "(at best) research level stuff" means. Here's a well-regarded LLVM lifter by a very well-regarded vuln research team that's open source:
There are a bunch of other lifters, not all of them to LLVM.
Already, with the idea of IR lifting, we're at a point where we're no longer talking about reading assembly but rather a higher-level language. But this leaves out tooling that can analyze IR (or, for that matter, assembly control flow blocks).
Someone upthread stridently declared that analyzing one version of a binary in isolation was hard enough, but that the work of looking at every version was "staggering", "capital-h Hard". But that problem is in some ways easier than basic reverse engineering, which is why security product companies and malware research teams have teams of people using BinDiff-like tools to do it. "BinDiff" is a deceptive name; "Bin" refers to compiled binaries, because the tools work based on graph comparisons of program CFGs.
Part of the problem I have talking about this stuff is that this isn't really my area of expertise --- not in the sense that I can't reverse a binary or use a BinDiffing tool, because most software security people can, myself included, but in the sense that I'm describing the state of the art as of, like, 6 years ago. I'm sure the tooling I'm describing is embarrassing compared to what our field has now.
Open vs. closed source is an orthogonal concern to verifiability.
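As a toy illustration of the graph-comparison idea behind BinDiff-style tools (this is not the real algorithm, just the flavor: fingerprint each basic block structurally, match fingerprints across two versions of a function, and flag whatever doesn't match):

    # Each function version is modeled as: block label -> (instruction count, successor labels).
    # Blocks whose structural fingerprint exists in only one version are flagged as changed.

    def fingerprints(cfg):
        out = {}
        for block, (n_instrs, succs) in cfg.items():
            out.setdefault((n_instrs, len(succs)), []).append(block)
        return out

    def diff(cfg_old, cfg_new):
        old, new = fingerprints(cfg_old), fingerprints(cfg_new)
        only_old = [b for k, bs in old.items() if k not in new for b in bs]
        only_new = [b for k, bs in new.items() if k not in old for b in bs]
        return only_old, only_new

    # Two hypothetical versions of the same function: v2 grew an extra block.
    v1 = {"entry": (4, ["check"]), "check": (2, ["ok", "fail"]),
          "ok": (6, []), "fail": (3, [])}
    v2 = {"entry": (4, ["check"]), "check": (2, ["ok", "fail"]),
          "ok": (6, []), "fail": (3, ["log"]), "log": (5, [])}

    print(diff(v1, v2))   # (['fail'], ['fail', 'log']): the patched region stands out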
> Open vs. closed source is an orthogonal concern to verifiability.
The evidence you have presented does not support this conclusion. All you've shown is that it is possible to reverse-engineer object code, but this was never in doubt. It is still an open possibility (indeed it is overwhelmingly probable) that it is a hell of a lot easier to audit code if you have access to the source. All else being equal, more information is always better than less, and, as I pointed out earlier, the constraints imposed by some languages can often be leveraged to make the verification task easier.
I'm sorry, but once again: this thread began with a different claim. I'm not interested in debating whether it's easier to audit C code or assembly code: I've repeatedly acknowledged that it is.
Then what exactly are we arguing about? My original claim was:
"You can demonstrate the presence of a vulnerability in closed source software but there's no way to demonstrate (or even provide evidence of) the absence of any vulnerabilities." [Emphasis added] (Also please note that I very deliberately did not use the word "prove".)
Your response was:
"That's identically true of open-source software."
If you acknowledge that it is easier to audit source code then it cannot be the case that anything is "identically true" of open and closed source software (except for uninteresting things like that they are both subject to the halting problem). If it is easier to audit source code (and you just conceded that it is) then it is easier to find vulnerabilities, and so it is more likely that vulnerabilities will be found, and so (for example) the failure of an audit conducted by a competent and honest agent to find vulnerabilities is in fact evidence (not proof) of the absence of vulnerabilities.
But if you have source code written in a language designed to admit formal proofs then it is actually possible to demonstrate the absence of certain classes of vulnerabilities. For example, code written in Rust can be demonstrated (maybe even proven) to not be subject to buffer overflow attacks.
Look at the original claim you made. You said it's possible to provide evidence of the existence of vulnerabilities in closed source software, but not of their absence. To the extent that's true, it's true of open source software as well. The dichotomy you presented, about absence of evidence vs. evidence of absence, is not about open source software but about all software built without formal methods --- which, regardless of the language used, is almost all software.
The point you made is orthogonal to the question of whether we can understand and evaluate ("verify") closed-source software.
Let me try to advance a different thesis then: it is possible to write software in such a manner that the source code is amenable to methods of analysis that the object code is not. Accordingly, for software written in such manners, it is possible to provide certain guarantees if the source code is available for analysis, and those guarantees cannot be provided if the source code is not available. Would you agree with that?
I think it's possible that that's true, but am uncertain: the claim depends on formal methods for software construction that defy decompilation. But the higher-level the tools used for building software, the more effective decompilation tends to be. I also don't think we're even close to the apex of what decompilation (even of the clumsy, lossy compiled languages we have today) will be able to do.
So, it's an interesting question, and one I have much less of a strong opinion on.
If you can't tell, my real issue here is the idea that closed-source software is somehow unknowable. I know you're not claiming that it is. But I think if you look over these threads, you'll see that they tend to begin with people who do believe that, or claim to.
Over the years interacting with you here on HN, I think this basically sums up the worldview that puts you and I at odds:
> Open vs. closed-source software is a concern orthogonal to verifiability.
Is there a place where you have written at length, defending this assertion?
I am open to it. But it does not resonate with my understanding, nor my (substantial, I think) experience in deployments of open- and closed-source software with specific respect to verifiability.
> But "verified" software often means formal mathematical verification, and that is orthogonal to if the source is open.
I think there may be cross-talk here related to who's doing the verifying too. I think the parent is assuming "verification" would imply that a 3rd party could verify the software in question. AFAIUI it's currently nowhere near practical for a 3rd party to verify closed-source software of any non-trivial size. (Correct me, if I'm wrong, obviously.)
It's still a research problem even for open-source unless the software is built with formal verification in mind (for example in Coq or Agda), but at least there's an existence proof that it's possible to do for non-trivial software (see CompCert C). That was still a multi-year effort and it's still a somewhat (architecturally) simple program as compilers tend to be.
Downthread tptacek talks about how his company does verification of binary images. I assume there are limits to what is possible, but that's really the same as any verification approach.
I do agree that 'who is verifying' is a valid way of looking at it too.
Regardless, it's pretty clear that tptacek means formal verification.
If WhatsApp allows a few experts or companies access to the code, they can always verify it. But the software is centrally hosted, and that is the issue, not open vs. closed. If I could build the software myself and host it on my own server, that might be better.
How do you look at the last 15 years worth of Microsoft Windows OS advisories and conclude that closed source has prevented hats of all colors from discovering vulnerabilities?
Why do you keep raising this straw man? It is obviously possible to reverse engineer object code and find vulnerabilities. But it is (equally obviously) easier to examine source code to find vulnerabilities.
This slippery slope goes in both directions. Software is also easier to audit when it's built in higher-level languages, but we don't have ideological objections about verifiability to software written in C.
I'm not sure I understand what you mean by that. Do you mean that people think that it doesn't matter what language code is written in as long as it is open source? I certainly don't believe that. It's pretty clear to me that C is a terrible language for writing secure code. (But coming up with something that is actually better than C is not so easy.)
All agree that higher level (e.g. C) is easier to reason about than lower level (assembly).
Now, you say that open source (e.g. C) is not only easier, but qualitatively different: open source good, closed source bad.
tptacek points out: higher level (e.g. Haskell) is easier to reason about than lower level (e.g. C) - maybe even qualitatively.
So, why are people only complaining about closed source, when they should (by analogous reasoning) be complaining about code written in C? Granted, it's possible to analyse, but it's obviously easier when it's written in Haskell!
Here might be something to look for among those advisories: how many of them were discovered in the source versus in the field.
If the vendor is lazy about verifying code, it being closed is a big disadvantage. "We're not combing the code for bugs, and neither is anyone else; if it's not reported to us, it doesn't exist."
EDIT: WTF people? Why is every response to this comment being downvoted into oblivion? The sibling comment to this one (https://news.ycombinator.com/item?id=13395657) was killed in a matter of minutes despite being (IMHO) a perfectly reasonable and constructive response.
Votes aren't why that comment is dead --- note that it doesn't say "flagged". There is stuff that happens behind the scenes that [deads] users, especially new accounts; I think some of it might be voting ring related but not sure.
(That comment is incorrect but I agree with you that it's constructive).
Open source is a red herring, as you say. Closed binary is the problem.
I need a way to verify that the binary I am installing is the same as the binary that has been thoroughly vetted by security researchers. In the modern mobile app ecosystem, on a major OS, running a major app, I can't carefully pick and choose which binary version to install. I get whatever the OS company's server pushes to me, and I can't downgrade to a known good version.
It's possible to discover this without looking at the code at all, possibly even by accident, because it's going to be obvious to anyone who changed their keys that messages were automatically reencrypted to the new key when they receive them. That doesn't mean that issues which aren't user-visible would be found, and it took a long time for anyone to spot this one.
The sense of decorum we all ought to have in participating in HN is called out in https://news.ycombinator.com/newsguidelines.html. As an example: "Be civil. Don't say things you wouldn't say in a face-to-face conversation. Avoid gratuitous negativity."
Well, I don't know if they're cognitive ones, but you seem to have some deficits in 'knowing how to engage people on forums' and 'knowing what security by obscurity means'. The content of your post that isn't rude is just inaccurate.
Look if WhatsApp wants to read your messages without you detecting, there's nothing you can really do to prevent it apart from not using WhatsApp.
For instance if you're on some list for message interception, they can give you MITMed keys when you first login. Or they can insert some subtle signal that tells the app on your specific phone to ignore key changes and avoid showing notification in some way you would struggle to check (closed source and obfuscated code) etc etc. They could even show you the right key if you attempt verification but use a compromised one for communication. This particular vuln. would be a ridiculously crude way to intercept messages.
In any closed source system where key distribution and message distribution are centralized, there is no way to protect against the service provider - and anyone who co-opts the service provider (eg. with a court order). The objective of the encryption is to protect against other actors snooping on you.
Doing those sorts of things would leave a trail of evidence, though, since the attacks have to be included in public app store releases. A typical user might not be able to catch trojans in an obfuscated binary, but there are people who can, and the compromised client would be available to anyone who wanted to dig in.
Yes, there would be a trace in the binary. Potentially detectable by maybe one person in a million. Is that supposed to keep WhatsApp from including malicious features? I do not think so.
What happens is:
1. researcher finds a malicious part in WhatsApp binary
2. WhatsApp declares it a bug and fixes with a new binary
3. we are back at square one
If you are concerned enough about the security and privacy of the app, then you should learn how to use the app. That includes learning about the indicators provided by the UI telling you about key changes, delivery notifications, and anything else the developers considered important enough to show the user.
This is design issue and has little to do with knowing how to use the app. It is poor design for a pending message to be delivered to a device whose key was changed after the message was sent.
It wouldn't be a conversation. The attacker would have to relay Alice's messages to Bob before switching the key. But if the attacker lets Alice (the target) receive Bob's messages, she will learn that hers got delivered and the attack would fail.
So it only works once against a string of messages with no replies. That's not a conversation.
* when the client is compromised, you're screwed anyways, so let's assume the client behaves as expected.
* now, with "proper" e2e, and Alice and Bob verifying key fingerprints, their messages can't be read even if the server gets compromised.
* as it stands now with WhatsApp, AFAI understand, the server could be compromised to take Alice's message, send it on to Bob, but withhold the "delivery receipt". It could also pass back Bob's answers, and so Alice could have what appears to be a normal conversation - except that Alice only sees single ticks, instead of double blue ticks.
* then, the server could send the "hey ho, new key" message, and Alice's client would re-encrypt and re-send all messages that it thinks haven't been delivered yet, the ones with a single tick. After that, it would display the "key changed" msg to Alice (if she had set that option). (A toy sketch of this flow follows below.)
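A rough sketch of the flow in those bullets (illustrative Python only; encrypt() is a stand-in and the client is a model of the non-blocking re-send behavior described above, not the actual Signal protocol or WhatsApp code):

    def encrypt(key, text):
        return f"enc[{key}]({text})"            # stand-in, not real encryption

    class SendingClient:
        def __init__(self, peer_key):
            self.peer_key = peer_key
            self.outbox = {}                    # msg_id -> [text, delivered?]

        def send(self, msg_id, text, server):
            self.outbox[msg_id] = [text, False]
            server.receive(msg_id, encrypt(self.peer_key, text))

        def on_delivery_receipt(self, msg_id):
            self.outbox[msg_id][1] = True       # double tick: never re-sent again

        def on_key_change(self, new_key, server):
            # Non-blocking behavior: re-encrypt and re-send everything not yet
            # marked delivered, then (optionally) show a notification afterwards.
            self.peer_key = new_key
            for msg_id, (text, delivered) in self.outbox.items():
                if not delivered:
                    server.receive(msg_id, encrypt(new_key, text))

    class MaliciousServer:
        def __init__(self):
            self.captured = []
        def receive(self, msg_id, ciphertext):
            self.captured.append(ciphertext)    # withholds delivery receipts

    server = MaliciousServer()
    alice = SendingClient(peer_key="bob-real-key")
    alice.send(1, "meet at 19:00", server)
    alice.send(2, "north tennis courts", server)
    alice.on_key_change("attacker-key", server)  # server announces a key it controls
    print(server.captured)  # both messages now also exist under "attacker-key"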
> It could also pass back Bob's answers, and so Alice could have what appears to be a normal conversation - except that Alice only sees single ticks, instead of double blue ticks.
No, it can't do this, because Bob's answers contain the "delivery receipt".
Hence, the attack doesn't work on conversations.
EDIT to reply: messages are sequential and "delivery receipts" are messages, so it would be visible if the attacker dropped some but not all messages. AFAICT.
So, given that, it would seem that a compromised server could pull a whole conversation (if people overlook the single tick mark), as claimed in the article?
Ah. If that is so (and it's not obvious - clearly you can get delivery or even read receipts without Bob sending an answer), then it would seem that a bad server could only intercept a long monologue, indeed, but not a conversation.
I think that just misconstrues how IM apps are used. You have conversations on IM apps: you send one line of text to a person, and then you don't send another until the person has at least seen the first one, if not yet responded. Otherwise you're being rude.
And, presuming you are seeing reply-messages from your peer and having a back-and-forth conversation, I don't think it's actually possible for those reply-messages to not implicitly also be ACKs of your own sent messages—the Axolotl ratchet underlying the protocol ensures that (I think. Crypto people chime in?)
So, yeah, you can probably get a retransmitted transcript of one person talking into a void without seeing any delivery ticks in response. You can't really get a conversation.
I think that, much of the time, in a journalistic setting, apps like WhatsApp are used to report. If WhatsApp is used to report conditions at an event of political upheaval in a totalitarian state - and the receiver isn't routinely and quickly replying, this is a damn serious problem.
WhatsApp seems to be built for a threat model where a single message (and you can think of a block of messages without reply as semantically equivalent to a single message, here) being compromised is no big deal; only a conversation being compromised is a problem.
If there are cases (like this) where a single-message compromise is a big deal, and WhatsApp cares about these cases, then the simplest solution would be for WhatsApp to add a preference in the client to switch on—as the article describes—a "blocking mode" for re-key notifications, where retransmissions aren't allowed by default.
And - as the article describes - such a blocking mode would immediately expose to WhatsApp which users had not enabled it, and who would therefore be safe(er) to MITM (because they probably don't verify key changes in any meaningful way).
Why? The UI on the receiver's end could show the warning instead of the message, while behaving normally in its dialogue with the WhatsApp servers. That way the user would be notified of rekeying, but the MITM attacker wouldn't learn that the user is security conscious.
t0: Client A sends message (1) to Server for Client B
t1: Server sends message (1) to Client B
t2: Client B fails to decode message (1) due to an outdated key. Sends failure notification to Server
t3: Server sends rekey notification to Client A
t4: (usually <400ms or >1 second) Client A sends rekeyed message (1') to Server for Client B
For t4, if the Server notices the response from Client A came faster than a user could ever have seen and responded to a notification, it now knows Client A is more susceptible to a MitM attack. If, on the other hand, Client A takes an appreciable amount of time to send the rekeyed message (say, 1 second or more) then we know that either there was network latency, or the user had to respond to a rekey notification before the message was resent. We should be more careful about launching a MitM attack on this class of user.
So, without changing the actual _actions_ that the Server sees, but just the pattern of the timing of those actions, we're still leaking important information.
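To make the timing point concrete, here is a minimal sketch of the server-side inference (the 400 ms threshold and the labels are assumptions, purely illustrative):

    import time

    # Hypothetical threshold: a human confirming a blocking notification can't
    # plausibly react this fast, while an automatic re-send usually does.
    AUTO_RESEND_THRESHOLD = 0.4   # seconds

    def classify_client(rekey_sent_at, resend_received_at):
        """Guess, from timing alone, whether the re-send needed user interaction."""
        latency = resend_received_at - rekey_sent_at
        if latency < AUTO_RESEND_THRESHOLD:
            return "likely automatic re-send (safer to MITM)"
        return "possibly user-confirmed (riskier to MITM)"

    # The server only sees its own send time and the client's response time,
    # yet that alone can leak who has a blocking option enabled.
    rekey_sent_at = time.time()
    resend_received_at = rekey_sent_at + 0.12     # simulated fast client response
    print(classify_client(rekey_sent_at, resend_received_at))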
Thanks for the extensive answer; indeed, if the receiver needs to react, then this is discoverable.
I was thinking more along the lines of discarding rekeyed messages and not showing the message at all to security-conscious receivers. Whether this behavior is good, I cannot say; UX tests could give more insight.
1) ANY one message can be intercepted even if the sender exhibits ideal levels of alertness [Whatsapp server drops message to recipient; sends a rekey request with a fake key; message is intercepted since fake key was generated by server. Sender will see a warning if they turned on that setting (default is to show no warning), but it's too late].
2) Only Whatsapp has this vuln, not Signal app.
3) Depending on sloppiness of sender, more extensive interception is possible. [E.g., server not supplying delivery reports + sender doesn't have warning for key changes + sender sloppy about noticing lack of double check mark => full transcript can be generated]
Best summary I've seen. There are two significant facts here that surprised me:
1. The double checkmark has security implications. How would a typical user know that?
2. Even if you are completely vigilant, follow best practices, etc, Whatsapp messages can be intercepted. They claim this is a "wontfix" UX choice. I'm skeptical why the non-default feature cannot even provide the protection that almost everyone assumed it would.
I think that's basically the main problem: there is no way to get a typical user to understand security implications of anything without having that user give up before reaching that point...
I think all this is by-the-by. The gist of The Guardian's article was that WhatsApp has full control of when, if and how your messages are encrypted, and if you're a dissident working against an oppressive regime and you use WhatsApp to collaborate with your allies, your ass is grass, because there isn't anything physically preventing security agencies from getting hold of your communications.
That such security agencies have the power to force WhatsApp (or anyone) to comply with their demands is without doubt. A really secure system for activists would be one that makes it impossible even for the provider to read your messages, under any circumstances. WhatsApp is not just not that, it is also ridiculously easy for them to read your messages if they so choose, so you use it at your own risk.
They are a US company and they control what version of the app is in the Play/Apple store. They could be forced to push a version with a flaw, and no one could verify it. The source looks good, but the app that has been distributed is not.
I was actually perplexed, after reading about signal, that I couldn't just download an APK.
Are play services required for signal? If so, can I even install signal on a cyanogenmod phone? Can you do so by rebuilding it yourself? Does the build match the shipped binary on the play store?
To me, Signal does look exactly in the same boat as whatsapp. The fact that WhisperSystems didn't cooperate harder to ship Signal in F-Droid is also a major let-down.
Just to clarify on mtreis76's bountysource link, that work has already been completed and submitted by 8bitkid, and we're just waiting for it to be merged into Signal. Discussion on this here: https://github.com/WhisperSystems/Signal-Android/pull/5962
Also - I don't think the naming issue with LibreSignal was necessarily a legal one per se. Moxie just expressed his opinion that he didn't like that they were using "Signal" in the name. He didn't specifically ask them to rename it. They offered to rename it, to which he replied that he'd appreciate that. (They didn't rename it since they just discontinued it instead).
I don't know about Canadian trademark law, but at least in the States, if you don't put effort into defending your trademark, you lose it. Completely different country, I know, but perhaps there's something similar in their law.
Wire uses the Signal protocol. [1] I am not sure if they require Google Play services, but I thought I saw something a while back on their GitHub regarding a fallback if Google services was not installed, and battery consumption.
They don't use the signal protocol, they don't even use X3DH or Double Ratchet. That citation of yours is just a download link, not an actual reference to your point.
The project is also kind of a mess. Check out their privacy policy, Wire maintains a server side copy of your entire contact list, all the groups that you're in, the plaintext metadata for your groups (membership, plaintext group title, plaintext group avatar).
Check out some of the code. They have broken voice encryption, and leak enough data to reconstruct the audio of your calls. They leak tons of plaintext directly back to themselves, like searches, and rolled their own messaging crypto.
They have been caught lying about what kind of encryption they provide[1], they lied about being open source for years, they lied about being based in switzerland. From what I can tell, the only people promoting Wire are usually on Wire's marketing team.
I'm surprised at your negative take. The article you link is from 2014. They do encrypt chats now, and group chats, and voice, and video, all e2e, as far as I can tell.
My understanding is that the developers sit in Berlin, but the legal entity is in Switzerland.
Their privacy policy [1] states that they retain logs for 72h, and not much else. Only hashed contact info (emails/phone numbers) are uploaded, after opt-in. It all sounds very reasonable.
Your argument seems to be that they're an untrustworthy mess - but I don't see much evidence of that, except for possibly some braggadocio in that old article.
> Check out some of the code.
Yes, it is on github [2], so you can do that, which is nice.
According to wikipedia "its instant messages with Proteus, a protocol that Wire Swiss developed based on the Signal Protocol" [1] so I guess it is just based on it.
Wire uses Firebase Cloud Messaging (formerly Google Cloud Messaging) by default, as it is a Google requirement. If you, as a developer, roll out your own solution, you simply get banned on Google Play. The claimed reason is that FCM/GCM delivers all your messages in batch, so it saves your phone power [0]. It is doubtful because your phone may enter Discontinuous Reception (DRX) state, so your cellular operator will deliver messages in batch anyway, but whatever. I think it is Google pushing developers to use their push servers while avoiding antitrust laws.
GitHub message about fallback to WebSocket you are referring to is here: [1]. So you can remove all Google services from your phone and install APK, it will still work, but now Wire server pings your phone whenever it wants. TLDR on the thread is that APK is not uploaded to F-Droid due to dependency (not really) on Google services.
I love this post for the in-depth explanation of the UX challenges around e2e encryption and why they made the decisions they did. It's educational.
I think Moxie highlights a very good point that is commonly underrated among "security Dunning-Krugers": Opening yourself to the possibility of an attack is often OK if the attack is easily detectable, and if the identity of the attacker would be obvious upon detection. Yes, Facebook could intercept and decrypt a message without your advance knowledge. However, you would be able to detect it after the fact. And if you detected an attack, the attacker could be no one other than Facebook. You could then expose them and ruin their reputation. Given this, it's unlikely that Facebook would risk carrying out such an attack in the first place.
Security is not binary, it's risk management. The goal is to minimize the risk of an attack, not to rule it out entirely (hint: you can't). I think WhatsApp has made the right choices here.
In most cases, I'd much rather disallow Facebook from MITMing my messages than try to "ruin their reputation" (hint: I won't, because Facebook already does far worse things on a regular basis without so much as an eye-bat from the world).
In other words: I care about the confidentiality of my messages far more than the promise of some sort of dubious ability to shame Facebook for simply fulfilling its business model. Sure, there are some cases where allowing security to be exploited in one area protects the security of another, but those are called "honeypots", and I sure as hell hope my private communications are not a part of that.
Transparency is a dependency of trust. WhatsApp is not transparent; therefore, it is not trustworthy. Simple as that.
At the end of the day, it comes down to trusting WhatsApp. Even without a backdoor in their protocol, they can easily do all kinds of things.
For instance, it could instruct specific clients to encrypt and send each message twice: one for the recipient, and one for the WhatsApp server. As long as this was off for 99.9% of users, it's unlikely that security researchers would ever detect this.
And anyone with sufficient access could push an update from the app store to a selected target that bypasses the normal security protocols of any given messaging app. Who checks their app store downloads against source code?
We're mostly talking about government agencies here that would force Facebook to act in that way. And even though they could force them to try to intercept messages of a user with an ESL, they can't force them to introduce a vulnerability for all users.
Especially as ESL are usually kept secret within the company, this would be too risky, developers would ultimately find such a backdoor (especially since it generates a lot of server traffic).
Right, I definitely wouldn't use WhatsApp for anything truly sensitive. I wouldn't use smartphone software at all, personally. But this particular "backdoor" seems bogus.
It's shared physical space. I'm OK if they video me. I'm OK if they share that video with law enforcement if there is a reason of substance to do so and as long as they make it publicly known they asked for it within a reasonable timeframe, say 45 days. I'm NOT OK with broad sweeping requests and would only allow them if circumstances required it and the request for the data was disclosed within 90 days.
Does Walmart actively deceive me? Not that I'm aware, but I don't shop there.
Never heard of Glencore or Phillip Morris.
As for Blackwater and Palantir, my impression from the media is they do exactly what they say. It's not like Palantir lies about harvesting data to give to government. I trust that they actually do do that.
None of those companies have posted fake news and altered the news algo with the express intent of manipulating users' mental states for reasons that basically boil down to "for the lols" and "let's see if we can make money from this".
The amount of passion in your comment and this gap in knowledge don't go well with each other. I encourage you to at least read about Philip Morris (or watch John Oliver's episode about them at least).
Cool, no worries. Philip Morris operates worldwide though, and I'll have to hear some awesome arguments to be convinced that facebook is more evil/less trustable than them.
I mean, they're probably disassembling the app, so they'd definitely notice _that_, but there are some truly subtle problems that pop up in security, so your general point about trust seems reasonable enough (certainly for any closed-source remote-updating system).
The joke is that if this specific "Backdoor" were used widely, it would generate a lot of noise (e.g. random key changes, keys that don't match) and would cause massively bad PR for WhatsApp. No way are they doing that.
>The WhatsApp clients have been carefully designed so that they will not re-encrypt messages that have already been delivered. Once the sending client displays a "double check mark," it can no longer be asked to re-send that message. This prevents anyone who compromises the server from being able to selectively target previously delivered messages for re-encryption.
Can this be verified? Can this be verified to be the case 100% of the time? Is there anything stopping the client from lying to a user [0] with this interface, saying one thing (i.e. "this will not be resent") and doing another (i.e. resending)?
[0] - Or being triggered to lie to a particular user at a particular time.
If your threat model includes using a malicious app to send messages then you lose anyway. Nothing can ever be done to send messages securely using whatsapp if the client is neither trusted nor verified. This is true for basically all software that you use.
And that's why we have 2FA on separate devices or even hardware tokens. They allow some security even if the computer isn't trusted, like protection against replay attacks at a minimum.
If you do not trust your hardware, 2FA is of no use. All communication and display can be MITMed on the untrusted computer. It gives the appearance of a normal login, but the behavior of the trusted site or system is emulated. The real username, password and 2FA authentication token are only sent to the target machine by the attacker.
That's not true. It prevents replay attacks, as I said.
More advanced tokens also do more. My bank uses a hardware token for signing transfers and other actions which can include a human readable message or part of the target account number, making MITM much harder.
I'd say the risk of replay attack is not proportional to the risk of authenticating and authorizing on a fully compromised machine. It's comparing a candle light with a blazing fire.
A definition issue with regard to "token": I sincerely do not think a device with display and keyboard used to sign transactions can be called "token". I'd say something like: trusted signing processor. But then again, I'm not a security specialist.
Well, the big thing is that it prevents future access to your account by the attacker. Let's say you have a simple 2FA device (no screen) and are using online banking. First you log in via a compromised machine. The attacker MITM'ed you, so they can see your account.
1. The bank should require a confirmation with your token to send money. If you don't send anything, the attacker can't either.
2. In the future, after the logout timeout, you know the attacker can't even read your account.
It greatly reduces the attack surface you need to worry about. Any attack they do must be right then.
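A minimal sketch of why even a simple challenge-response token defeats replay (illustrative only; real banking tokens use dedicated hardware and standardized schemes, not this code):

    import hmac, hashlib, secrets

    TOKEN_SECRET = secrets.token_bytes(32)   # lives only inside the hardware token

    def token_sign(challenge, amount, account):
        # The token shows amount/account on its display and signs them together
        # with the bank's fresh challenge, so a MITM can't silently alter the transfer.
        msg = f"{challenge}|{amount}|{account}".encode()
        return hmac.new(TOKEN_SECRET, msg, hashlib.sha256).hexdigest()

    def bank_verify(challenge, amount, account, response, used_challenges):
        if challenge in used_challenges:
            return False                      # replayed response: rejected
        msg = f"{challenge}|{amount}|{account}".encode()
        expected = hmac.new(TOKEN_SECRET, msg, hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, response):
            used_challenges.add(challenge)
            return True
        return False

    used = set()
    challenge = secrets.token_hex(8)          # fresh per transaction
    resp = token_sign(challenge, "100.00", "acct-1234")
    print(bank_verify(challenge, "100.00", "acct-1234", resp, used))  # True
    print(bank_verify(challenge, "100.00", "acct-1234", resp, used))  # False: replay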
Any open source project can allow you to verify binaries by making builds reproducible. The fact that most apps don't do this is indeed a security problem, but one that's far from unfixable.
But how often is that really done? And to be honest, it can be quite hard to spot critical bugs or backdoors, just look at http://www.underhanded-c.org/
The bitcoin development community solved the problem of mapping source code to a binary build. Checksums or signatures of binaries alone are not sufficient. Open source is meaningless if you cannot verify that the source maps to the build and you're running critical software with irreversible consequences, e.g. a bitcoin transaction, or a dissident being imprisoned because state surveillance eavesdropped on communications.
Certainly it's possible to remedy this situation simply by having the app author sign a checksum of binaries in the app store. Why this is not currently an option (to my knowledge) is a mystery to me.
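As a sketch of what that could look like on the user's side, assuming the developer published an Ed25519 public key out of band and a detached signature over the APK's SHA-256 (which app stores don't actually offer today; the helper and values here are hypothetical):

    import hashlib
    from nacl.signing import VerifyKey            # pip install pynacl
    from nacl.exceptions import BadSignatureError

    def verify_release(apk_path, signature_hex, publisher_pubkey_hex):
        """Check a developer's detached signature over the APK's SHA-256 digest."""
        digest = hashlib.sha256(open(apk_path, "rb").read()).digest()
        try:
            VerifyKey(bytes.fromhex(publisher_pubkey_hex)).verify(
                digest, bytes.fromhex(signature_hex))
            return True
        except BadSignatureError:
            return False

    # Hypothetical usage: the public key would come from the developer's site,
    # the signature from the release notes, the APK from the store.
    # verify_release("app-release.apk", signature_hex, publisher_pubkey_hex)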
It kinda is - you could add it to the description.
App stores seem to be getting progressively more hostile to this kind of thing though - you can't just download an APK / iOS app, you have to do it through a device. This lets the stores do "app slimming" (and per-country / per-carrier customized apks) to remove resources you don't need (like binaries that don't match your architecture), which would change the checksum. Which is useful, but inconvenient for this goal.
Something like fdroid may be supportive of this, which would be cool. But I wouldn't expect the mainstream ones (or Apple) to ever embrace it - it'd be a bad user experience / wasted UI space in the vast majority of cases.
It's been a while (a couple of years) since I last tried this, but I remember it being reasonably straightforward to get APKs of apps that were on the Google marketplace. Not through official user interfaces, of course. I can't remember if I ended up using some Chrome extension or a third-party app that needed root.
Not a particularly useful comment sorry, just "it was possible two years ago if you jumped through some hoops."
You can probably still get the one that'll install on your device(s), but if there's customization for e.g. carrier X in country Y (or app slimming) you're unlikely to know or be able to find it from the infinite "download APKs free now!" sites.
And the last time I looked, all the apk-downloaders required your device ID, because Google's API does (for customization reasons) - it's much more of a "you can do this if you emulate a device" than "you can download it". I'm also not sure if the ID works unless you have gplay installed, which you may not have if you're being careful/paranoid enough about security to manually validate apps.
You mean someone publishing source code and then falsely verifying the binary checksum?
I mean, at the end of the day, it's very easy to verify - if the binary doesn't match what whoever compiles it gets, there had better be a reason for it.
Regardless, I don't think this is the biggest problem facing open source.
To do this you must not only verify the open source code, but that the binary was built from this code, and that your operating system and every layer below it is also trustworthy.
I stand by my claim that using software written by a malicious developer is game over in the vast majority of contexts.
> I stand by my claim that using software written by a malicious developer is game over in the vast majority of contexts.
Sure...perfectly accurate...but in the context you are implying that WhatsApp is malicious. To which I think "hackuser" was interpreting as "closed source is malicious" and therefore offering open source (and by implication Signal) as an alternative.
> The argument here is that open source code can be verified where closed source is explicitly non-verifiable by nature.
This is not a belief that people who actually do software security audits hold. Verifying binary only software is table stakes to a security audit of a third party application as you cannot trust the source provided.
Having source is a bonus for security audits not a requirement.
>Can this be verified? Can this be verified to be the case 100% of the time? Is there anything stopping the client from lying to a user
No.
Open Whisper Systems' otherwise GPLv3, auditable libraries are closed source as used by WhatsApp and Facebook Messenger. Either Open Whisper Systems has elected not to enforce their copyright (in which case you could make a BSD/MIT/Apache fork of their software), or they have consented to let Facebook circumvent the license.
Either way, the code in WhatsApp is off limits to anyone not working with Facebook, so we'll never know. Closed source crypto is bad. We only have WhatsApp's word that the libraries as used by Facebook have no modifications.
No there's nothing stopping WhatsApp from lying to you, although if they are then the double checkmark seems like the least of your worries.
As far as I can tell this 'backdoor' is only relevant for the scenario where WhatsApp is not actively malicious, but gets taken over by a malicious entity, which wants to target someone who's disabled updates.
Isn't it possible (in fact trivial) to sniff the traffic generated by WhatsApp and verify that it is indeed the message transmitted, encrypted by the key on the device?
This might be possible, but I don't think it addresses the issue that such an exploit could be turned on and off remotely. The only way to be sure would be to do such sniffing all the time.
It actually doesn't matter. They are talking about compromising the servers. The government has the power to force a backdoor (remember Lavabit?). All WhatsApp has to do is update their client, and all the beautiful encryption schemes are ruined.
If you need a truly secure communication system, it has to be open source and self-hosted. You still have to trust the hardware though.
To expand on this argument a bit: if you think the government/WhatsApp are specifically out to get you (e.g. willing to mount an active attack against you specifically), then WhatsApp is probably the wrong messenger for you.
If on the other hand you want a reasonably secure Messenger that you can use to chat privately with everybody and their Grandma then maybe you should not expect that it does super complex security thingys that 99% of its users just don't care about and don't want to be bothered with..
You might be able to disguise it as debugging/development code that was mistakenly left in there. And instead of a hardcoded list of targets it could pull down the values in a more creative way. But at the end of the day that probably wouldn't stop a talented reverse engineer from figuring out what was going on.
Android apps can also contain native code. Indeed, WhatsApp includes such libraries, to help with Curve25519 encryption, video encoding, voice over IP, and other functionality.
But it should be straightforward enough to see whether text messages or UI elements (e.g., suppressing the key change notification) are being changed depending on the output of those libraries.
> That would leak information to the server about who has enabled safety number change notifications and who hasn't, effectively telling the server who it could MITM transparently and who it couldn't; something that WhatsApp considered very carefully.
I am not convinced. Why should this option exist at all? Even worse, it is disabled by default. Just enable notifications for everyone and demand verification. If you don't want to verify, just ticking "verified" without actual verification is not that bad; it is just the trust-on-first-use principle in action. Actually, that is how SSH works, and nobody complains about SSH being backdoored.
If we're talking about key change notifications, isn't SSH the thing that throws the following error when a key changes?
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
51:82:00:1c:7e:6f:ac:ac:de:f1:53:08:1c:7d:55:68.
Please contact your system administrator.
Add correct host key in /Users/user/.ssh/known_hosts to get rid of this message.
Offending RSA key in /Users/user/.ssh/known_hosts:12
RSA host key for 8.8.8.8 has changed and you have requested strict checking.
Host key verification failed.
Yeah, SSH is at the complete opposite end of the scale in how it handles unexpected key changes - it won't even let you connect unless you manually edit known_hosts, whereas WhatsApp automatically uses the new key without any possible way for the user to stop it from doing so until it's too late.
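The difference between the two policies, sketched as toy code (neither program's real logic):

    known_keys = {}   # trust-on-first-use store: contact -> last seen key

    def on_message(sender, key, strict=True):
        if sender not in known_keys:
            known_keys[sender] = key          # trust on first use
            return "accepted (first contact)"
        if known_keys[sender] == key:
            return "accepted"
        if strict:
            # SSH-style: refuse until the user verifies out of band.
            return "REFUSED: key changed, verify before continuing"
        # WhatsApp-style non-blocking: adopt the new key and carry on,
        # optionally showing a notification afterwards.
        known_keys[sender] = key
        return "accepted (key silently replaced)"

    print(on_message("bob", "key-A"))                 # first contact
    print(on_message("bob", "key-B", strict=True))    # blocked
    print(on_message("bob", "key-B", strict=False))   # silently accepted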
WhatsApp has a gazillion+ users, most of whom don't care about e2e security and definitely don't care enough to verify keys before they are allowed to chat with their friends again. The friction caused by a security popup that 90% of users simply ignore is real. It annoys users and causes fatigue that jeopardizes future security notifications.
I think both Signal and WhatsApp "trust on first use" like SSH does it.
The issue here is that:
1) the vast majority of users have those MITM notifications off by default (because WhatsApp decided it's best that way)
2) WhatsApp generates its own keys in some scenarios, like when people switch their SIM cards, so the "trust on first use" that worked on the original SIM is gone out of the window now, and the users won't even know it because the notification is off by default.
Actually, now that I think about it, this is why WhatsApp must have left the notifications off by default: they knew they would generate their own keys this way, which would trigger a lot of those notifications all the time.
Caveat: I have never used WhatsApp and do not know anything about its interface or options (default or otherwise).
>>[The choice to make these notifications "blocking" (i.e. to require manual verification) would] leak information to the server, etc., etc.
>Why should this option exist at all?
The option does not exist, and should not exist. That's the author's point there. You agree with him and with WhatsApp on that.
All you disagree on is implementation:
Author: "[Non-blocking defaults] provide transparent and cryptographically guaranteed confidence in the privacy of a user's communication, along with a simple user experience."
You: "Enable notifications for everyone and demand verification; if you don't want to verify, just tick "verified" without actually verifying."
How are these two substantially different? They look the same to me in terms of security and WhatsApp's implementation doesn't make you click anything.
Difference is that the WhatsApp client re-encrypts the message with the new key from the server and re-sends it without user intervention ("non-blocking"), so even if you cared, you can't prevent it.
With the alternative, people that don't care could tick "verified" with or without verifying, but you could also click "cancel" (with or without verifying).
ALICE: When would you like to meet?
BOB: Tomorrow, 19:00, by the north tennis courts.
ALICE: Sounds good.
!!! BOB's key has changed !!!
BOB: Actually, could we meet at my place? I'm going to be super busy tomorrow.
If Alice and Bob are doing something that needs to remain secure, Alice would be a fool to trust Bob's messages after the key change without manually verifying the new key with Bob. How does withholding messages help, aside from telling the server which people have enabled the setting and which people have not? [edit: I just realized you were talking specifically about the case where manual verification is enforced for all users; disregard the last phrase.]
Admittedly one angle I can kind of agree with is that layman users may not understand the implications of a key change and the importance of out-of-band verification, and blocking messages until verification would be a way of signaling the significance of the key change. But... counterpoint to that is that users interested in security are probably already dead in the water in that regard if all they have is a layman's knowledge of how crypto works.
> The option does not exist, and should not exist.
I also think the option should not exist, but according to The Guardian article [0], the option exists: "In WhatsApp’s implementation of the Signal protocol, we have a “Show Security Notifications” setting (option under Settings > Account > Security) that notifies you when a contact’s security code has changed."
The option The Guardian is describing there is something like this:
When my partner's key changes:
[ ] Show me a notification (y/n)
What I was talking about was an option like this:
When my partner's key changes:
[ ] Wait for my manual confirmation before delivering any messages from my partner that are dated after the key change (y/n)
To me it's up for debate whether or not the existence of the first option or the fact that it's disabled by default are good ideas, in terms of the behavior of the app matching consumer expectations of security. It'd be safer for it to be permanently enabled, but that's neither here nor there.
But the second option would be fundamentally broken and leak information to the server about how conscientious a user is about security, which is why the author, WhatsApp, and everyone in this sub-thread agrees it's a bad idea. Someone else gave a concise example elsewhere in the thread: https://news.ycombinator.com/item?id=13397118
edit
N.B.: Requiring manual verification all the time, from everyone, would not leak any information and would be the most secure. Allowing users to choose whether or not they want to manually verify is the leaky bit.
Well, there are two options: notification option and confirmation option.
Moxie correctly assumes that the confirmation option (require manual confirmation to resend if the key changes) should either be enabled for everyone or disabled for everyone, as its state can be determined passively by the server. But it depends on the notification option. His conclusion is that the confirmation option should be disabled for everyone, because if it is enabled, it is possible to leak the notification option's state. But that is wrong. A more secure solution exists: enable the notification option for everyone, and then enable the confirmation option for everyone.
I was complaining about why notification should be an option at all. Even worse, it is disabled by default.
Wire [0] just shows a "resend" button near the message when it is not delivered, and always shows notifications about key changes if you have verified devices. You can still ignore the verification option if you want and get no notifications.
Signal blocks with a message when key changes.
Both solutions are secure, Wire's is more convenient, Signal is less error-prone. WhatsApp solution is simply insecure.
Ahhhh, dependencies. You're right. This is more involved than I originally thought. Here, does the following look like an accurate summary of the situation? (For optional row/cols, "Optional (yes)" with a value of "secure" means "if the feature is optional, it's secure for users who have it enabled.")
If both notification and confirmation are optional and enabled, it is secure.
And I don't think WhatsApp is secure for anyone with its permanently disabled confirmation. Maybe it prevents mass surveillance, but it is still vulnerable to targeted attack. Surely it may cost facebook reputation and the attack will be detected in the end, but it is still possible.
Moxie claimed:
> "The choice to make these notifications "blocking" would in some ways make things worse. That would leak information to the server about who has enabled safety number change notifications and who hasn't, effectively telling the server who it could MITM transparently and who it couldn't; something that WhatsApp considered very carefully."
Surely if WhatsApp cared about the server not being able to detect this, they could just get the client to "retransmit" an encrypted blank message in place of the original under these circumstances. Then the server wouldn't be able to tell who has enabled blocking mode and who hasn't.
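A sketch of that idea (illustrative; it assumes equal-length ciphertexts are indistinguishable to the server and glosses over padding): whether or not the user has blocking notifications enabled, the client always transmits something of the same shape after a rekey.

    import os

    def encrypt(key, plaintext):
        # Stand-in, NOT real crypto: length-preserving so equal-length inputs
        # look identical in size on the wire (the key is ignored here).
        return b"ct:" + bytes(b ^ 0x5A for b in plaintext)

    def on_key_change(undelivered, new_key, blocking_enabled):
        """What the client transmits after a rekey, in either mode."""
        out = []
        for msg in undelivered:
            if blocking_enabled:
                # Hold the real message until the user verifies, but emit a
                # dummy of the same length so the server can't tell this
                # client apart from one that re-sent automatically.
                out.append(encrypt(new_key, os.urandom(len(msg))))
            else:
                out.append(encrypt(new_key, msg))
        return out

    msgs = [b"meet at 19:00", b"north courts"]
    auto = on_key_change(msgs, "new-key", blocking_enabled=False)
    cover = on_key_change(msgs, "new-key", blocking_enabled=True)
    print([len(x) for x in auto] == [len(x) for x in cover])   # True: same on the wire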
tl;dr to me seems: since users can change devices, they'll need to reissue key material, and this needs to be supported. WhatsApp optionally reports key changes to the user, but doesn't tell the server that it did so.
If WhatsApp tries to backdoor a channel and one of the users has key change notification, they'll find out about it, and WhatsApp has no idea whether the warning was shown.
> If WhatsApp tries to backdoor a channel and one of the users has key change notification, they'll find out about it
The problem is that they might not find out about retransmissions. You are trusting that the "double-tick" means that the message won't be resent, but presumably WhatsApp can indeed retransmit those messages with the new key under pressure from a state actor.
They need to specifically address this point; it's the only thing worth talking about. The rest is just a discussion of implementation of key exchanges generally.
That would require the client to be compromised though right? My understanding is that the client is making the decision whether to retransmit with the new key.
Now it's fair to question whether you can trust the client, but if you can't then there's no limit to what they could do.
"WhatsApp server has no knowledge of whether users have enabled the change notifications, or whether users have verified safety numbers."
If it's off by default, then the answer is likely to be near zero, as it often is with default options. This isn't an argument about the security of the app (which I have no background to know), more of a comment on relying on non-default behavior.
I believe that you get a key change notification, but by default it doesn't require any sort of confirmation and will just continue to work with the new key.
GP is correct: no notification of key changes by default.
Even if you enable the key change notification, it is "non-blocking" in WhatsApp as outlined by the blog post: when you get the key change notification, the WhatsApp client will automatically without user intervention resend undelivered messages (unlike Signal, btw, from what I understand).
Of course there is a backdoor. Why not? Under what law do WhatsApp and Whisper Systems live? The one with secret courts and secret court orders? How can you trust someone under that umbrella?
We need to spread technology companies. Everything but a bunch of things comes from this law.
And what starts in another country magically gets bought or dismissed. Take Symbian as an example...
'Given the size and scope of WhatsApp's user base, we feel that their choice to display a non-blocking notification is appropriate. It provides transparent and cryptographically guaranteed confidence in the privacy of a user's communication, along with a simple user experience. The choice to make these notifications "blocking" would in some ways make things worse. That would leak information to the server about who has enabled safety number change notifications and who hasn't, effectively telling the server who it could MITM transparently and who it couldn't; something that WhatsApp considered very carefully.'
Why not have every client appear to the server as if safety number change notifications are on, and just decide client-side whether to display them depending on user settings? I.e. if you have them off, no warning is displayed and the message is automatically resent using the new key?
You quoted the answer to that already. If these change notices are "blocking", then the sending device won't re-send the message until the user has verified it. If the user hasn't enabled the notification, then the sending device will re-send immediately. This makes it trivial for the server to figure out who's actually enabled the notifications and who hasn't, which means the server can be confident about when it's actually safe to MITM.
What is the user supposed to do when they get notified of a "safety number changed" message? How do they verify they've not just been MITM? Honest question... I don't use whatsapp or signal at all.
It's up to you to confirm what's going on using another channel (say, call them on the phone and compare the numbers). It's the same thing SSH does when server keys change, for instance. To me it's a reasonable way to handle such a situation.
You have to physically compare the numbers on the two phones (in real life), or send the numbers through a different trusted channel (PGP, USPS, Carrier Pigeon, etc).
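In case it helps, here's a toy Python illustration of why reading out the numbers works at all: both phones can derive the same short string from the pair of identity keys, so if the strings match over another channel, no third key has been slipped in. (This is NOT the actual Signal/WhatsApp safety-number derivation, which is more involved; it only shows the general idea.)

    import hashlib

    # Toy fingerprint: hash both parties' public identity keys in a fixed
    # order so either side computes the same 60-digit number.
    def toy_fingerprint(key_a: bytes, key_b: bytes) -> str:
        digest = hashlib.sha256(b"".join(sorted([key_a, key_b]))).digest()
        return "%060d" % (int.from_bytes(digest, "big") % 10**60)

    alice_view = toy_fingerprint(b"alice-pubkey", b"bob-pubkey")
    bob_view   = toy_fingerprint(b"bob-pubkey", b"alice-pubkey")
    print(alice_view == bob_view)  # True: both can read out the same number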
You can ask the other party to resend you a message from before (one that doesn't matter).
I.e. "Can you confirm this is you: send me a message from 5 messages back using the quote function." (You know what was said 5 messages back, so you can pick a non-sensitive one, and they can do the same to you.)
edit: nvm, if this is a man in the middle then that doesn't matter, because you're still exchanging messages with each other and it's not a hijack.
Sorry, I made a mistake.
There seems to be a pretty clear war going on between engineers and journalists lately.
- Chris Lattner [1] vs Business Insider [2]
- Elon Musk vs (Bunch of outlets)
- Moxie vs The Guardian
I feel like journalists want to write a compelling story and engineers are on the other side like "No, those aren't facts!" I don't follow a lot of media outlets but it seems like journalists either lack the skills or don't care about doing any technical due diligence.
Let's avoid our own bias of automatically believing the engineers are in the right; they are fallible people, no more or less honest or prone to error than journalists.
Every news story that breaks, involving any person or industry, gets the same response: It's false, they didn't ask us, etc. etc. Therefore, that response is not an indication that something is wrong (or right); the response tells us nothing in itself.
"they are fallible people, no more or less honest or prone to error than journalists."
As people perhaps, but there's a big difference in how prone to error someone is when speaking within their area of expertise than when speaking outside of that area.
The problem with most journalists is that they're required to write on many different subjects, write for an audience whose only exposure to the subject will be a few thousand word article, and write all of this on a deadline that is sometimes just a few hours or days. You can't really understand a subject under those constraints, and it's inevitable that misconceptions will creep in.
You're right. Everyone is fallible and I'm all for mistakes being made; we are all human. I also agree that the immediate snap response tells us nothing by itself.
Based on the BI story, I know a few people who have already uninstalled WhatsApp for fear of a backdoor. What I wish is that there were a better way for these two entities to communicate, rather than finger-pointing and name-calling, so that we as consumers of both media and technology could read a better, more comprehensive narrative.
Name calling results in reassigning fear. People are afraid of the unknown and so try to box it up in something digestible they can fear less. We tend to blame others because it's easy and cheap to do so. If you are wrong, that means I'm right and so then I wasn't wrong about it and don't have to think about it anymore. And you are wrong, so why would I think about it again?
I think it's interesting the entities are not two, but many and one at the same time. Elon Musk is an individual and he is also part of the press process. My rationale is that the press includes everyone in the press, including the people writing the stories and the people in the stories.
I don't think the Chris Lattner thing is a war at all. The journalist gave a reasonable effort to get a comment from Lattner, never got a response, so went with a story from a source they found trustworthy. Lattner issued a denial after the fact, and it's included near the top of the story.
I guess it's possible that the journalist completely fabricated the story, but I think it's a lot more likely that either someone at Apple overstated their relationship with Lattner to vent their own frustrations, or Lattner is trying not to burn bridges. At worst it's an avoidable inaccuracy, not a war.
"The journalist gave a reasonable effort to get a comment from Lattner, never got a response, so went with a story from a source they found trustworthy. "
Oh well, can't figure out the actual facts, better just publish whatever I do have?
Seriously. Also, the "I tried to contact you" line is clearly a BS defense; it wasn't a "reasonable effort". This is a reporter: they know people basically never respond to interview requests during what amounts to one of the busiest times of their lives, so the reporter asked just so they could say they tried. Otherwise, they would have held the story a week or whatever until they could get a comment, because it would be just as interesting then if it were any good. But nope, gotta get it out while anyone still gives a crap about the flavor-of-the-week story, because it isn't substantive enough for anyone to care otherwise.
Yes, if you make several requests for comment over several days and don't get a response, it is acceptable to run with what you have, as long as you note those facts in the article. Also, he's switching jobs, not landing on the moon. If he has time to tweet, he has time to check his email.
If you just assume that both players in this story are human beings trying to do their jobs, you'll understand that neither of them did anything wrong.
Several days? The news of his departure isn't even several days old. Even if they asked Lattner for comment five minutes after his email to the Swift list, that wasn't a reasonable amount of time to wait before publishing.
The same author published an article about Lattner leaving Apple on the 10th, and I suspect she asked for a comment at that time. Not getting one from Lattner, she probably asked other contacts she had at Apple and found one that gave her the story about the "culture of secrecy". She then would have reached out to Lattner through whatever means she had, including Twitter (https://twitter.com/Julie188/status/819216603086733312). I guess maybe "several" was a slight exaggeration, but 2 days is plenty for a story like this.
The dispute is really Facebook vs Tobias Boelter (https://tobi.rocks/), with Manisha Ganguly (freelance, so not really The Guardian) putting pressure on Facebook.
I think it's more that the media is biased against corporations, because positive information about corporations sounds like an advertisement or is instead attributed to the employee. Headlines like "Zuckerberg fires 100 employees" or "Wal-mart saves puppy" seem to be either rare or nonexistent.
It would be a war if all these people were allied together. Neither the engineers nor the outlets mentioned here are allied parties. They're disparate across the board.
Facebook's payments to use the Signal protocol pay the bills. It's the same reason Moxie was fine with Google disabling E2E by default so they could use chat content for ad targeting (after attacking other software for doing the same), and with Google tying enabling it to inconvenient behaviour, like disabling the chat log, that discourages all but the most paranoid from using it.
The real "Whatsapp Backdoor" is that, by default, the app stores a backup of all your messages on "teh cloud". On android, that's google.
So google can play "eve", and every run of the mill script kiddie that can get your google credentials may "restore" your messages. How convenient.
And that's the default settings. So, even if you turn it off, "mallory" can steal the credentials of your contact and snoop into your conversation that way.
As with all end-to-end encryption it stops at the "end". It is this unencrypted state, in which humans consume data, that can't be defended by crypto.
Therefore the only way to be completely safe is to make sure both you and your conversation partner don't decrypt the messages until they're on an offline device only you have access to.
But end-to-end encryption where the interface (mobile app/phone) is controlled by the parties you want to protect your data from is not possible. WhatsApp could send freaking screenshots back of the unencrypted data if they wanted. For nearly all other threat models, WhatsApp's encryption is a wonderful add-on.
It's remarkably hard to tell if a key has changed with a peer in Signal. It gives you some small faded text between messages, which is easily scrolled off-screen by the other party making a lot of comments.
Once it's off your screen, there is no way to tell that you've never authenticated the current key. No mark, not even buried in a menu.
So perhaps I should not be surprised to see the authors of the worst public key management security I've ever actually used defending even worse public key management security.
Change the keys to server-controlled keys once, then ask for retransmission of the entire history.
No. The WhatsApp client will not retransmit messages that have already been acknowledged by the recipient device.
Of course, you'll have to trust that this is the case; but then you'd have to trust that the WhatsApp client app isn't backdoored in the first place, so there is no change in security posture.
"The WhatsApp clients have been carefully designed so that they will not re-encrypt messages that have already been delivered. Once the sending client displays a "double check mark," it can no longer be asked to re-send that message. This prevents anyone who compromises the server from being able to selectively target previously delivered messages for re-encryption."
Maybe I don't get exactly what this means, but isn't it true that if I see a message that someone's signature changed, and the other person sees this also, and we both choose to ignore this message, then the man in the middle can read all our new conversations?
Of course, if one of us then checks the signature later and sees it is not correct, this would be very harmful for WhatsApp.
But this indeed doesn't sound like a backdoor. It's just the way it works. Which seems good enough.
I don't get the complaints here. WhatsApp is used by over a billion people and offers end-to-end encryption for them. A system that mainly targets people with little tech experience can never be kept 100% secure. If they made it harder to switch phones, people would stop using WhatsApp.
The only vulnerability seems to be that they could suppress delivery notifications. I'm sure most people would notice if they suddenly don't see the two ticks even though the other person answered. And if you want your conversation to be secret, that's a major red flag, now that this is known.
And if I get a phone change notification even though the other person didn't change their phones I'd also be confused at least. When I last changed my phone, a lot of people noticed because of the notification and asked me. And those were not tech savvy people, they were just wondering why I got a new phone.
Spying on conversations (especially by govt agencies) is only effective if the target doesn't know about it. It seems that Whatsapp has no way of enforcing that without the user noticing.
Yawn. What next, WhatsApp isn't going to monetize through advertising? Why don't we all just admit that our data is being plundered by corporations to make money and leave it at that? Seriously, nobody cares about their data being used by corporations for profit. Just be honest about it, and you will see that people continue to use WhatsApp or Signal or whatever the current fad is.
> The choice to make these notifications "blocking" would in some ways make things worse. That would leak information to the server about who has enabled safety number change notifications and who hasn't, effectively telling the server who it could MITM transparently and who it couldn't; something that WhatsApp considered very carefully.
If resending undelivered messages to new keys waited for confirmation of the new key when safety number change notifications are enabled, then users without those notifications enabled would continue to immediately resend undelivered messages to new keys while users with the notifications enabled would not resend until the user manually OKed the change. The WhatsApp servers know (or can know) whether users have outstanding undelivered messages and can observe whether users resend them immediately after a key change. As a result, if resends after key changes waited for user confirmation with security notifications enabled, whenever a user changed keys, WhatsApp would be able to tell whether any of their contacts who had undelivered messages to that user had the notifications enabled by observing whether those contacts immediately resent the messages or not.
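Just to spell out how cheap that observation would be for the server, here's a toy Python model (the numbers and names are invented); under a blocking design, whether an automatic resend follows a key change within a few seconds is essentially a one-bit oracle per sender:

    # Toy model of the leak: classify a sender by how quickly an undelivered
    # message is resent after the server rotates the recipient's key.
    def classify_sender(seconds_until_resend, threshold=5.0):
        if seconds_until_resend is None or seconds_until_resend > threshold:
            return "likely has notifications on -> risky to MITM"
        return "likely has notifications off -> can MITM silently"

    print(classify_sender(0.3))   # immediate automatic resend
    print(classify_sender(None))  # still waiting on user approval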
That's then a timing attack, but the client could replace the message with a placeholder and send that immediately, and the real message after the user approves. That way the server could not know.
I think that could mostly work. The placeholder message would have to be of identical length to the original and the resent message would have to use later sequence numbers/message ids (or whatever is used to identify individual messages) so the server couldn't tell that the placeholders were placeholders.
One issue is that it would mean that, in the case of an active attack where the server substituted a key they knew in place of a legitimate new key from the user, the server would be able to decrypt the possibly-placeholder resent message and determine whether the user had notifications on. If the user didn't, they'd know that they were safe to continue to attack the user (this attack is more risky than the passive one on the blocking resend without placeholder messages protocol, of course). So, this does improve the security of the protocol for users with security notifications enabled, at the cost of making users without those notifications less safe. I'm not sure how the tradeoff should be balanced here (just as I'm unsure if the UX tradeoff of having the option of not receiving security notifications is worth it...).
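Concretely, the decoy variant might look something like this toy Python sketch (purely hypothetical, not an existing WhatsApp feature). In practice both the decoy and the real message would be encrypted to the new key, so from the server's side the traffic pattern and ciphertext length look the same either way; the residual risk is exactly the one above, that an attacker who supplied the key can open the decoy and see that it's padding.

    import os

    # Toy sketch: if the user has blocking notifications on, emit a
    # same-length dummy immediately and the real message only after the
    # user has verified the new key.
    def resend_after_key_change(plaintext: bytes, blocking: bool):
        if blocking:
            decoy = os.urandom(len(plaintext))  # same length as the original
            yield ("immediately", decoy)        # mimics a normal auto-resend
            yield ("after user approval", plaintext)
        else:
            yield ("immediately", plaintext)

    for when, body in resend_after_key_change(b"meet at 6", blocking=True):
        print(when, len(body))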
To find out if a user has notifications on would be the same as actually doing the attack, and finding that out for all users would be the same as MITMing all users.
I'm not sure how the security of those who don't have notifications on is worse in this case than what they have now.
If I want to, I should be able to say: if somebody changes their key, I want to verify the new key first, before I send them a message.
Let's say you organise a big protest against some regime. I know who you are, and I know that you communicate with the number xyz. I redirect that number to mine. In the meantime you send me the list of names and addresses of the people in our group. I reconnect you with the real xyz number, and I've got the list and everything. Even if you get the notification, it's too late.
It isn't possible, in most cases, to trust open source software either. Have you verified that the binaries on your phone were indeed built from the source you can read on github or wherever?
Which fully open-source phone platform do you have in mind? I'm not aware of any.
On desktop and servers, however, it certainly is possible (and not-too-impractical) to verify binary blobs against known PGP signatures. See Debian's reproducible builds, for instance.
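As a simplified example of what that verification step amounts to (the file name and checksum here are placeholders, and in practice you would also check the PGP signature on the published checksum):

    import hashlib, sys

    # Hash a locally rebuilt artifact and compare it against the project's
    # published checksum before trusting the shipped binary.
    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    if len(sys.argv) == 3:
        # usage: python verify.py my-local-build.deb <published-sha256>
        print("match" if sha256_of(sys.argv[1]) == sys.argv[2] else "MISMATCH")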
Wouldn't a better defense be to not re-transmit messages encrypted with the new key unless the originating user clicks a button authorizing it AFTER they have been informed that the receiving user has a new key?
As Eike Kühl pretty well describes, this functionality only increases usability in a rare corner case: When you dump your phone in the ocean and you need a month to get a new one. Then everyone who has sent you a message during this period will not need to press an additional "OK" button.
Telegram made a different choice. Secret chats simply break if ephemeral key renewal fails, and queued messages won't be re-encrypted with the new key automatically. Of course, at times this is annoying.
I have been contemplating deleting my Facebook account for 6 months. I finally pulled the trigger just now. I don't trust the company at all. Too much smoke and mirrors in what they do.
So full-on paranoia mode on:
What would you do if you wanted to compare safety numbers? I'm guessing most people would call and read out the code.
How far are we from targeted interception of calls, with replacement of key phrases? Voice synthesis seems to be there more or less, if I understood Adobe's recent demo correctly, but real-time parsing of conversations to determine where to intervene is probably not close yet.
So if the security of Whatsapp's keys hinges so much on the key change notifications, why turn them off by default? Why allow them to be turned off at all?
No one (today) would get the idea to make https warnings optional even though that audience is even broader than Whatsapp's. (Possibly even because that audience is so broad)
> We believe that WhatsApp remains a great choice for users concerned with the privacy of their message content.
What about meta-data? Even Signal uses Google's push service to send your messages, and WhatsApp is even known to collect meta-data. (IIRC they changed their EULA recently)
If I worked for WhatsApp (Or conversely, if WhatsApp used an implementation of something I've made, thus making it hugely popular in the process) I'd certainly say there's no backdoor indeed.
It's probably not a backdoor, but it's still insecure. End-to-end encryption is not what they offer, yet they make you think you are safe. They should clearly state that in a big splash message.
Only once you scan the barcode, which contains an encryption key your phone uses to securely ship /its/ keys to the browser. Only encrypted messages are transmitted between the phone and the browser.
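For what it's worth, the general pattern being described looks something like this toy Python sketch; this is not WhatsApp Web's actual protocol, and it assumes the third-party PyNaCl library is installed (pip install pynacl):

    from nacl.public import PrivateKey, PublicKey, SealedBox

    # Browser side: generate an ephemeral keypair and put the public key in
    # the QR code shown on screen.
    browser_priv = PrivateKey.generate()
    qr_payload = bytes(browser_priv.public_key)

    # Phone side: scan the QR code and encrypt the session secret to it.
    phone_session_key = b"phone-session-secret"
    ciphertext = SealedBox(PublicKey(qr_payload)).encrypt(phone_session_key)

    # Only the browser holding the private key can recover the secret; the
    # relay server in between only ever sees ciphertext.
    print(SealedBox(browser_priv).decrypt(ciphertext) == phone_session_key)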