WhatsApp Encryption Said to Stymie Wiretap Order (nytimes.com)
153 points by danso on March 12, 2016 | 54 comments



What the article doesn't make quite clear is that OpenWhisperSystems, the creators of Signal, partnered with WhatsApp to bring their crypto to WhatsApp. See https://whispersystems.org/blog/whatsapp/ (this was when Signal was still split into TextSecure and RedPhone). The problem, of course (and the article touches on this), is that with a closed-source app like WhatsApp you have no way of knowing whether your communications are actually being encrypted; you have to take their word for it.


Open source is not a panacea here.

The situation is largely the same with Signal: Unless you manually build and install Signal yourself, there is no guarantee that the App Store or Play Store versions are actually built from the source provided by OpenWhisperSystems.

And even if you compile Signal yourself, there's always "Reflections on Trusting Trust".


If you trust OpenWhisperSystems, then on Android it is guaranteed to be the same build that was uploaded. All builds are signed by the developer, unlike on Apple (where you sign builds to prove to Apple that the build is yours, not to the user).


"If you trust OpenWhisperSystems" is a pretty huge condition – the exact condition we're discussing here. The advantage of open source is that you don't have to trust OWS, you have to trust the source code, which you can do by auditing it yourself.


> The advantage of open source is that you don't have to trust OWS, you have to trust the source code, which you can do by auditing it yourself.

No. That is not the advantage of open source. The advantage is that as long as somebody audits the code, you don't have to.

And in this case, you don't need to trust OWS to do the right thing, only to refrain from pushing rogue updates (ie, to only sign versions that are actual releases). You can still read the code yourself.


> And in this case, you don't need to trust OWS to do the right thing, only to refrain from pushing rogue updates (ie, to only sign versions that are actual releases). You can still read the code yourself.

You've missed the subtleties around the word "trust" in the context. You don't need to trust that the WhatsApp people are nice, or want to do what is right.

You need to trust that they and Google haven't been legally compelled to push a "rogue" update, perhaps only to you. Don't forget there is nothing technically stopping a uniquely compiled update from being pushed to a single account holder.

The only protection against this is third-party auditing plus checking the signatures yourself, with the third party located entirely in a jurisdiction where it is unlikely to be legally compelled to comply with an order that applies to WhatsApp and/or Google.


Oh I don't claim that opening up the source code is a magical way to fix everything. But without the four essential freedoms, it is much harder to be reasonably certain that nothing nefarious is going on.


I think the bigger problem is that WhatsApp doesn't even allow users to verify each other's fingerprints. WhatsApp could disable the end-to-end encryption and you would have no clue that they did.
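For illustration, here's a rough sketch in Python of what fingerprint verification amounts to; the digest format is made up and is not WhatsApp's or Signal's actual safety-number scheme:

    import hashlib, os

    # Stand-in for one party's long-term public key bytes (in a real app this
    # comes from the messaging client, not from os.urandom).
    alice_pub = os.urandom(32)

    # A "fingerprint" is just a short digest of the key, grouped so it can be
    # read aloud. Both parties compute it locally and compare it out of band
    # (in person, over a phone call). If the server substituted its own key to
    # man-in-the-middle the chat, the two fingerprints would no longer match.
    fp = hashlib.sha256(alice_pub).hexdigest()[:32]
    print(" ".join(fp[i:i + 4] for i in range(0, len(fp), 4)))

Without a UI that exposes something like this, users have no way to notice a key substitution.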


I've seen leaked UI screenshots that do have that feature. And Moxie has said that such a feature was always planned.

WhatsApp supports an ungodly number of platforms. Probably they don't want to expose encryption in the UI until the feature/platform matrix is complete and there are no edge cases where traffic is unencrypted.


But you do have a significant amount of confidence that (unless their signing keys have been compromised) the versions of the software in the app stores are the versions published by OWS.

If you don't trust OWS not to publish malicious copies of its software, why would you trust them not to subtly sabotage it?


> If you don't trust OWS not to publish malicious copies of its software, why would you trust them not to subtly sabotage it?

I don't believe OWS would do this voluntarily, unless compelled by the government.


One could trace the protocol between the phone and the network to see whether it conforms in general. Verifying the specifics is another issue.


Knowing all the details is key, though. For example, it's easy to vastly reduce the entropy of cryptographic keys, but still make them look completely random from the outside. The Debian OpenSSL bug is an extreme example of doing this by accident. It took a year and a half to discover that, and that was with the buggy source code available the whole time.
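To make the point concrete, here is a toy sketch (this is not the actual Debian bug, which crippled the seeding of OpenSSL's PRNG; it just shows why weak keys can look fine from the outside):

    import hashlib, os

    def weak_keygen(pid: int) -> bytes:
        # Seeded from a process ID: only tens of thousands of possible values,
        # so an attacker can enumerate the whole keyspace. Yet the output still
        # passes statistical randomness tests, because it is a hash digest.
        return hashlib.sha256(pid.to_bytes(4, "big")).digest()

    def honest_keygen() -> bytes:
        # A full 256 bits of entropy from the OS CSPRNG.
        return os.urandom(32)

    # From the outside, both look like 32 perfectly random bytes.
    print(weak_keygen(1234).hex())
    print(honest_keygen().hex())

Black-box observation of the traffic can't tell these two apart.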


So what? Why is it important that OpenWhisperSystems is partnered with WhatsApp? Can you trust these people?


Yeah, I trust Moxie. He's worked really hard on security for a long time. If the government came to his house and threatened to shoot him unless he sabotaged Signal and WhatsApp, I expect he would get shot.


I still can't find an official statement by WhatsApp that they are using his tech. Can you or anyone else?


Go to the OpenWhisperSystems blog. That's the most official thing I have seen.


I know that post - that's why I wrote "by WhatsApp".


Unless it's open source, it can't be trusted. Not saying that is enough, but it is a requirement. Beyond that, any smartphone OS maker (Google or Apple) can be ordered to install a Trojan by force and keylog everything. So you'd need something like http://www.replicant.us/ I guess, and compile it yourself, if you're completely paranoid. Even then, there could be a keylogger in the hardware talking directly to the BIOS.

tl;dr: if you want complete risk-free privacy, a smartphone is probably not what you want. I use a smartphone myself by the way, just saying.


That's not true: the strongest products ever made were high-assurance and closed source. There have also been strong FOSS apps. Most apps, open or closed, are insecure due to lack of rigor or specialist skill. The key things are proper development, review, and distribution.

I broke it down in the essay below: https://www.schneier.com/blog/archives/2014/05/friday_squid_...

Review and identification/possession of what was reviewed are important points. Mass publication or distribution of source hasn't proven to be a determining factor.


> strongest products ever made were high-assurance closed source

The fact that we never found the backdoors put in place by the developers does not mean they were not there, and without source access we have absolutely no reason to believe anyone who says there are none.


High assurance prevents that by full specs of all success and failure states, modular code, avoidance of dangerous constructs, covert channel analysis, testing of execution paths, pentesting with source, and so on.

You could conceivably slip a backdoor in with extra cleverness, sure. Yet, whether OSS or closed, you depend on the talent of reviewers to find it for you. High-assurance closed source has the most labor put into that; that's why it was more secure, aside from the activities described above, which were mandatory for high-assurance security but optional or ignored for the rest.


Compiling it yourself is not always necessary. If you are able to reproduce the build yourself (and the hashes match), you can be fairly certain that the binary actually corresponds to the published source even if you did not check it yourself. If the build is reproducible, some people will check it and point out when the hashes don't match.

You need a secure way to get the hashes for the build, though. One way is to fetch the hash from multiple sources, e.g. your mobile phone (not connected over WiFi, of course), your PC (connected over Ethernet/WiFi), and maybe a VPN. MITMing all of these connections at the same time is hard. If you are super paranoid you could always join an IRC channel and ask for the hashes. MITMing that traffic in real time is probably close to impossible (the attacker has to read every message and then decide whether to change it, rather than just serving a different static HTML page).
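A minimal sketch of that cross-checking in Python, with hypothetical file names and mirror URLs (both are made up):

    import hashlib
    import urllib.request

    def sha256_of(path: str) -> str:
        # Hash the downloaded artifact in chunks so large files are fine.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Fetch the published hash over as many independent network paths as
    # possible (mobile data, wired connection, VPN) to make a simultaneous
    # MITM harder.
    SOURCES = [
        "https://releases.example.org/app-1.0.apk.sha256",
        "https://mirror.example.net/app-1.0.apk.sha256",
    ]

    local = sha256_of("app-1.0.apk")
    published = {urllib.request.urlopen(url).read().decode().strip() for url in SOURCES}

    if published == {local}:
        print("all sources agree with the local build:", local)
    else:
        print("MISMATCH: local", local, "vs published", published)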

Compiling the source code yourself seems pretty useless to me. I do not want to read the 10,000 lines of code such an app might use. There might be a single line inside some auto-generated GUI code that is malicious and sends the unencrypted message to somebody's servers. And shipping malicious source code is not that hard, either.


You're essentially arguing for network-based verification.

Keybase is doing an interesting thing where they have a Merkle tree and put the root of the Merkle tree into the Bitcoin blockchain. When you fetch the tree, you can check its validity against the blockchain.

They could use public key pinning so that people are sure to always hit keybase.io, and then verify against the blockchain (I must check whether they actually have HPKP activated).

A viable attack on that is pretty much impossible without actually compromising the end-user device.
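For anyone unfamiliar with the construction, here is a toy Merkle root in Python (Keybase's actual tree format differs; this just shows why publishing one root commits you to every leaf):

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves: list[bytes]) -> bytes:
        # Hash the leaves, then repeatedly hash adjacent pairs; duplicate the
        # last node on odd-sized levels until a single root remains.
        level = [h(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    # Publishing just this root (e.g. embedding it in the Bitcoin blockchain)
    # commits the publisher to every record: changing any leaf changes the root.
    records = [b"alice:pubkey1", b"bob:pubkey2", b"carol:pubkey3"]
    print(merkle_root(records).hex())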


True, but this is a part of the issue that can be raised separately from what Obama and the DOJ are talking about.

I think we really should stay focused on the legal battle. If the government brings up federal level legislation mandating backdoors in IT companies' products, we're going to need a very educated public to understand how encryption works and what's really at stake. Right now, our head of state is spreading FUD that he believes to be accurate and we know is false. This must be corrected if we are to maintain public safety, and secondarily, a growing economy.


I'd be interested to find out how many actually legitimate terrorist plots have been foiled because of NSA/government eavesdropping. I'd bet it's staggeringly low, if not 0. I'd put money on single digits, easily. It seems like the vast majority of good, concrete, actionable intelligence in these cases is the result of good old-fashioned police work. I mean, they have already caught these people (or, as in the case of San Bernardino, the suspects are dead) -- so now they just want what? More evidence against them, links to other people/suspects, etc.? It seems lazy and/or a lie that they couldn't gather this info with, y'know, investigative police technique.


I think this is a very good question, but numbers given by officials will be difficult to verify.

It seems that the problem with foiling terrorist plots is not the amount of information that is available, but the inability to connect the dots:

http://www.theguardian.com/world/2015/nov/16/french-and-belg...

If anything, they should be asking (at least in Europe) for more personnel to keep better track of possible terrorists, because that is what has failed here. Another problem is that some countries have cut the budgets of deradicalization programs and local eyes and ears since the mid-2000s.

As we all know, terrorist attacks are just a convenient excuse for demanding backdoors. Each opportunity is exploited.


The long-term problem for broad adoption of end-to-end encrypted mobile messaging is closed software ecosystems; the government will just pass a law to force the Apple and Android app stores to stop distributing apps like WhatsApp that facilitate it. Game over.

add:

I suspect that code itself and the act of posting it on the internet could be interpreted as free speech. Even if not, it would be difficult to stamp it out from international sites or BitTorrent. Distributing via an App "Store", even for free, could be more likely to be construed as commerce, which is already heavily regulated, and for less important reasons than criminal/terrorism investigations. Google and Apple, as large public corporations, have a fiduciary duty to their stockholders to protect their profits, which the US government can easily threaten. So there's a big weak link (and an easy lever for government to pull on) in the closed distribution of secure communications code.


Such a law may well be unconstitutional. I would expect Apple and Google to forcefully petition for an injunction (or whatever) that would prevent FedGov from prohibiting them from distributing such software until the constitutionality case was decided.


I expect the government to eventually, under an NSL, demand the source code along with any necessary keys. The question at that point is whether the targets say no. This of course would create standing to challenge the constitutionality of secret courts and NSLs and the like, which I don't think the DOJ and FBI want to lose.


In end-to-end encryption the private keys are on the user's phone. The point of a wiretap is not to let the target know you're listening. Having the source code plus users' public keys from WhatsApp is of no help in decrypting.


Yup. E2E crypto with keys stored on the conversing devices shuts down all MitM attacks. It doesn't stop targeted attacks (warranted or unwarranted), but I expect that law enforcement considers passive data scraping to be much less serious than targeted surveillance.


The problem when considering a threat model is that you have to think about what the government would do, not what they could do. The government could shut down the internet, and then we are fucked.

However, the government would not do that because it would be too damaging. Passing laws as you suggest is not quite as unlikely, as we see from the Apple case, but it's still a pretty tall order.

My big worry is that they stop trying the legal way and start requiring backdoors through secret orders and such.


They wouldn't ever do anything that forward-facing to the public. What they do is pass laws to put pressure on the makers of apps like WhatsApp to build them their own personal backdoor.


Hmmm ... is that a real, statutory wiretap? Like, to an actual and regulated telecommunications provider? Or is DOJ skating by on "quacks like a duck" orders? CALEA was limited for a reason (I imagine they really want to change that).

AFAIK pen-register and similar wiretappy things are only applicable if you're a phone company, or very much like one. If you just ship an app and handle packets, that's a different story.


The article doesn't really provide many details so it's hard to tell. I don't think they can order a wiretap under CALEA since, like you said, that only applies to telecom providers. And even if they determined that WhatsApp was a de facto telecom, CALEA still has exceptions for end-to-end encryption.


Sad. I'd argue that end-to-end encryption is one of the strongest principles of privacy in the history of mankind. I would have loved to see widespread adoption. It is obvious that the state pushes against it. But we must win. We probably won't though.


Not with that attitude!

Contact your representatives, tell them how you feel. Share information with your friends and coworkers. The public wants data security. On balance, backdoors will make us less secure. A few representatives like Lindsey Graham have already changed their minds when presented with new information. [1]

[1] https://youtu.be/uk4hYAwCdhU?t=1m44s


I think it's surprising / disturbing that the discussion here is centered on the importance of open-source apps. What security does an open-source encryption app offer when it's running inside a closed hardware/software platform that has direct access to your messages in the clear, before they're encrypted and after they're decrypted? If Apple (not picking on them, I'm just an iPhone user) wants to, or is compelled to, monitor what you're doing on your phone, they can. Period. End of discussion.

As a thought experiment I think it's valuable to consider what secure messaging over an iPhone would look like. It's actually very simple, just very inconvenient. You encrypt your messages on another, air-gapped computer with known and trusted hardware (a Raspberry Pi works). From there you deliver your encrypted text over the phone via whatever channel is convenient: SMS, Facebook, Twitter, etc. On the receiving end you do the reverse.
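A minimal sketch of that workflow, assuming PyNaCl for the crypto (both keypairs are generated in one place only to keep the example self-contained; in reality each party generates theirs on their own air-gapped machine and exchanges public keys in person):

    from base64 import b64encode, b64decode
    from nacl.public import PrivateKey, Box

    alice = PrivateKey.generate()
    bob = PrivateKey.generate()

    # On Alice's air-gapped machine: encrypt, then carry the base64 blob to the
    # phone (QR code, SD card, retyping) and send it over any convenient channel.
    blob = b64encode(Box(alice, bob.public_key).encrypt(b"meet at noon")).decode()
    print("paste into SMS/WhatsApp/Twitter:", blob)

    # On Bob's side: bring the received blob back across the air gap and decrypt.
    print(Box(bob, alice.public_key).decrypt(b64decode(blob)))

The phone only ever sees ciphertext, so compromising it reveals metadata but not content.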

If you're not doing that, then your communication is theoretically and practically insecure. Discussion of the security of encrypted messaging apps is not only worthless, it's actually dangerous. At best it's security theater that takes the focus away from the actual issues; at worst it convinces people that insecure channels are secure.

I can't figure out if this is lost on most people, they're in denial or if there is actually a concerted effort to mislead. Smartphones -- and most computers -- are insecure by definition. People need to understand that and act accordingly if privacy is important to them.


You are missing one important aspect. The problem is that this is a war of resources. In the end a government can almost always get access to the end-user device.

What we have to fight first is the dragnet: the government just sucking up all communication. This can be achieved with end-to-end encryption even if the computers used are not secure. It forces the government to compromise every individual end-user device, which is impossible at the scale they operate at now.

This means they have to limit the number of people they attack. Once we have achieved that, we need to massively improve end-user device security. Secure elements and separate smart cards are definitely the future.

Having a trusted smart card for your encryption needs is definitely the way forward. I already use a YubiKey over NFC when sending email.

You need to stop thinking about absolute security and start thinking about the cost of mass surveillance.


Why does nobody talk about steganography? Even if there are laws forbidding cryptography, interested parties can use it today, as long as they have plausible deniability.

That's for the worst case, where the majority doesn't care, the minority isn't strong enough, and there isn't enough education about how to implement cryptography reliably in the absence of companies offering the product.
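For the curious, a toy least-significant-bit scheme in Python (assumes Pillow; this is trivially detectable by statistical analysis, so real steganography needs far more care, and you would encrypt the message before embedding it):

    from PIL import Image

    def embed(cover_path: str, message: bytes, out_path: str) -> None:
        # Hide a 4-byte length prefix plus the message in the lowest bit of
        # each color channel.
        img = Image.open(cover_path).convert("RGB")
        payload = len(message).to_bytes(4, "big") + message
        bits = "".join(f"{b:08b}" for b in payload)
        flat = [channel for pixel in img.getdata() for channel in pixel]
        if len(bits) > len(flat):
            raise ValueError("cover image too small for message")
        for i, bit in enumerate(bits):
            flat[i] = (flat[i] & ~1) | int(bit)
        img.putdata(list(zip(flat[0::3], flat[1::3], flat[2::3])))
        img.save(out_path, "PNG")  # must be lossless, or the bits are destroyed

    def extract(stego_path: str) -> bytes:
        flat = [c for px in Image.open(stego_path).convert("RGB").getdata() for c in px]
        bits = "".join(str(c & 1) for c in flat)
        length = int(bits[:32], 2)
        body = bits[32:32 + 8 * length]
        return bytes(int(body[i:i + 8], 2) for i in range(0, len(body), 8))

To an observer the stego image is just a vacation photo, which is where the plausible deniability comes from.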


I'm glad that steganography exists, but the fight we are fighting is about the general public. I have no doubt that with open hardware, open source, smart cards, Tor and a lot more, I will always be able to hide the most important messages I send to a similarly equipped friend.

However, that is not usable for most people, and such products will have a hard time winning majority market share.


>Jan Koum, WhatsApp’s founder, who was born in Ukraine, has talked about his family members’ fears that the government was eavesdropping on their phone calls.

And the Ukrainian government is not the only government that might be suspected of shenanigans, whether official or carried out by rogue agents.

Even our own FBI has been infiltrated by rogue agents, such as the case of an illegal immigrant who managed to become an FBI agent and was trying to get information on their investigation of Hizbollah, as documented on their own web site:

https://www.fbi.gov/news/stories/2007/november/prouty_111307

Of course the majority of the people there are great people doing a good job. But extending trust to them does not protect the users' interest in privacy and more importantly safety.


Are wiretap orders mandatory? Or is it just permission to wiretap...

If you can't succeed in wiretapping, you aren't penalized, right?


Didn't they do something like this with Skype years ago?


So, there are two end positions - Apple/WhatsApp on one end and the FBI on the other. Is there a third one we aren't looking at? Obama and the FBI director want a "middle ground". Is there a middle ground?

Since the US govt helped develop the technology behind WhatsApp encryption, now might be a good time to put money into developing a technology that creates a "middle ground".


You have to trust the government to have a middle ground. It has been shown again and again that we cannot trust them.

Even if we trust one government, it doesn't mean you can trust the next one.

If the USG forces US tech companies to provide a backdoor, then the technology is out there for non-US tech companies to pick up where they left off.

It is a battle the USG cannot and should not win.

Trust the maths. Thank you Snowden for showing us the light.


Bingo. Obama, Clinton and the rest keep asking for Silicon Valley to find some workable compromise and getting frustrated at the "absolutism". But they don't have enough self insight to see that they're guilty of the exact same thing.

In a world where governments were transparent and government employees were robustly held to account for transgressions of clearly written laws, I doubt many in the Valley would be putting so much effort into e2e encryption. It'd be like the world pre-Snowden, or the telcos, where companies just complied with warrants and orders and didn't see any big problem with that.

The problem is the Snowden affair showed that governments are utterly incapable of handling the power they temporarily obtained via subterfuge, utterly uninterested in holding people like Clapper to account, and generally don't seem to care about whether the people trust them or not. Standoffs like this are the inevitable end result.

If there was trust, there'd be a lot of technical solutions. Like just not employing E2E crypto to start with. SSL would be enough.


The supposed middle ground is that you force the naval engineers to, against their will, build the submarine with just a few extra holes.

Security is only as good as the weakest link. If you can bypass actual crypto and fall back to the world's justice systems, you've made a catastrophic compromise somewhere. In such a security system any crypto is pure deception. This is untenable in 2016.


I am deeply disappointed by President Obama's remarks:

> President Obama echoed those remarks on Friday, saying technology executives who were “absolutist” on the issue were wrong.

We can't always look at two extremes and try to go the middle way. If someone said asbestos causes cancer, and someone else said it is a good insulator and asked for approval to put ten feet of asbestos in the walls of a school, we wouldn't automatically say "well, OK, apparently it is toxic, so what about five feet?" No, we'd say we can't use asbestos because it is harmful.

It sounds very nice to be fair and balanced, but sometimes the middle ground is not a good outcome. Not to mention that seeking the middle ground in everything leaves us susceptible to Overton-window shifts. I don't know what to say other than that I am deeply disappointed.


Many efforts have been made to find a middle ground, for example key escrow. However, key escrow might not be that different from allowing almost anyone to pwn your communications and data; that was a huge argument in the mid-1990s.
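The basic splitting idea behind those 1990s proposals, sketched in Python (Clipper-era escrow split the device key between two agencies; this shows only the split, not the actual LEAF protocol):

    import os

    # The session key is split into two shares. Either share alone is
    # indistinguishable from random noise; both together (say, one per escrow
    # agent, released under a warrant) reconstruct the key.
    session_key = os.urandom(32)
    share_a = os.urandom(32)
    share_b = bytes(k ^ a for k, a in zip(session_key, share_a))

    recovered = bytes(a ^ b for a, b in zip(share_a, share_b))
    assert recovered == session_key

The crypto is the easy part; the argument was always about who holds the shares and how their release is policed.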


> Is there a middle ground?

Look at how anti-drug-cartel and anti-terrorism laws have been applied in practice in the US. We've basically trained law enforcement and prosecutors that their jobs get magically easier by acquiring superpowers as long as they whisper the magic words "terrorism" and/or "drugs" (or "kiddieporn") three times. See civil forfeiture, for an instance of law enforcement routinely abusing superpowers it gains by whispering "drugs" three times before grabbing something. This sort of mission creep/misuse is a social problem that's immune to technical solutions. No matter how perfect key escrow is, it's in the career interests of law enforcement and prosecutors to socially engineer their way around the protections.

We're not intentionally creating a Super Stasi, but we're accidentally creating lots of mini Super Stasis by law interpretations that allow bulk collection at an unprecedented level and poorly thought out career incentives for law enforcement and prosecutors. When there are no consequences for admittedly leaking the identity of an active undercover CIA asset for political reasons[0], there will be no consequences for leaking anyone's personal information for political reasons.

The magic words used to include "communist", which allowed the FBI[1] way too much investigative power into the civil rights and feminist movements simply by whispering "communist, communist, communist". The FBI's activities included actively interfering with the civil rights movement. There's also evidence that someone with access to the bug recordings from Dr. Martin Luther King Jr.'s hotel room tried blackmailing him into committing suicide[1]. Hopefully that was just one or two rogue agents.

The inability of government agencies to secure their own systems, along with commercial pressures on companies, means that once these investigative powers are created, they're also available to any medium sized country. If you're on the political left, this means homosexuals in many countries are at risk. If you're on the political right, it means many Christians and other religious minorities in many countries are at risk.

So, the middle ground is not about trusting the current FBI and current US intelligence services. It's about trusting all current and future FBIs and the intelligence and law enforcement agencies of all medium-sized countries from here forward.

As a minimum set of requirements, the middle ground would seem to require:

    1. Not allowing a rogue US LEO to blackmail MLK by whispering "communist"/"terrorist" three times
    2. Technically solving the LEO/prosecutorial social engineering problems of key escrow
    3. Not putting sexual, religious, and political minorities in foreign countries at risk
[0] https://en.wikipedia.org/wiki/Richard_Armitage_(politician)

[1] https://en.wikipedia.org/wiki/COINTELPRO

EDIT: Forgot to actually include footnote.


Couldn't these companies use a public/private key pair to sign a special message that their apps could receive, verify it's signed by the mothership, and then proceed to relay messages or send unencrypted traffic? I mean, there are technical ways to make this work in a secure way (maybe not using my example).
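Something like that is technically straightforward; here's a rough reading of the idea using PyNaCl's Ed25519 signatures (the names and message format are made up, and note that the signing key then becomes exactly the backdoor the rest of the thread is worried about):

    from nacl.signing import SigningKey
    from nacl.exceptions import BadSignatureError

    vendor_key = SigningKey.generate()   # held by the vendor ("mothership")
    verify_key = vendor_key.verify_key   # baked into the shipped app

    signed = vendor_key.sign(b"relay-plaintext-for:user123")

    # Inside the app: act on the instruction only if the signature checks out.
    try:
        command = verify_key.verify(signed)
        print("authentic control message:", command)
    except BadSignatureError:
        print("rejecting forged control message")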

This discussion seems old, and all the safeguards are supposed to be in the law (warrants and other legal devices), IN SOME COUNTRIES. And here's the key of this discussion: should FB/WhatsApp and others be above the law? Why is the government being so careful with them when it wouldn't be if this were, say, Verizon withholding phone records in a specific case where there was a warrant?


They are not "above the law" because there is no law that they've actually broken. As far as I can tell there's no law against providing secure end-to-end encryption to your customers, and no law requiring that encrypted connumication be tappable.



