We often hear the complaint here that nobody cares / cared about Snowden's revelations. But to me it seems he did provide a lot of the impetus for having HTTPS virtually everywhere and for a lot of instant messaging apps being end-to-end encrypted. Most of WhatsApp's users are as non-technical as it gets, and yet they use the kind of encryption that only computer enthusiasts were interested in just a couple of years ago. It's a great development (all the limitations and caveats notwithstanding) IMO.
BCP 188, "Pervasive Monitoring Is an Attack", establishes Best Current Practice for the IETF, saying that mitigating pervasive monitoring is appropriate because it's an attack on the network. So that's a pretty long way from "nobody cares".
It's very carefully written: it does not propose to make a moral judgement about whether the things Snowden revealed are evil, only to show that in a technical sense they were an attack, and so it made sense for the network to try to mitigate them. Work like DPRIVE (privacy for DNS) was driven by this concern, and of course it influenced a lot of other work, including QUIC.
While a BCP is an indication that “nobody cares” is false, it’s pretty far from even a majority of people caring. If BCPs mattered, IP spoofing wouldn’t be an issue on the Internet.
I like the explanation that captures "privacy" simply:
When you're going to the toilet, everybody knows where you're going and what you'll do there, but you still close the door (most of us, most of the time).
Sure, but people close the door out of modesty, not really privacy. If there were a machine that provided a written transcript of what someone did in the bathroom, with no video/audio, I don’t think people would mind.
Like when you’re in high-security areas and have to be monitored in the bathroom there might be a door between you and your guard but no real privacy.
Or how people loudly object to strip-searches at the airport, but the scanner that sees everything and then only shows a cutout highlighting suspicious areas to pat down is mostly fine.
I think it's a pretty good, workable analogy actually. People don't mind you knowing that they go to the toilet as an abstract thing, but once you start keeping a notebook of who goes to the toilet and when, it starts getting creepy and undesirable. And that's just collecting metadata! Imagine if someone actually intercepted your sewer and analysed the makeup of your turds; folks would be up in arms.
We need this information to correctly gauge the interest on different kinds of foods we should keep available to purchase in the cafeteria.
It's also helpful as we can notify you early if you have some undiagnosed medical issue. You could unknowingly spread your illness to your children without this early detection.
We're even able to reduce your monthly health insurance premium by providing this data to the insurance company!
This also enables us to find troubled individuals before it's too late and address developing drug issues before they become full addictions. We'll be able to get them the attention they need to get back on their feet and be productive members of our society. (Maybe not here, though.)
Similarly, you can still get falsely validated HTTPS certs via spoofing (not to mention older, easier validation exploits), and so it's possible all the newly encrypted traffic may result in most people having a significant false sense of security.
Ironically, Telegram markets itself as the most private and secure messenger, but in reality, it's much less private than WhatsApp or Viber: any regular (non-secret) Telegram chats are not end-to-end encrypted - if they were, you wouldn't be able to access them from a new device after authorization with a password.
This marketing message always confused me: my techie understanding was that Telegram is actually one of the least secure messaging choices. If you want security, my understanding is that your preferences should go Signal, Whatsapp, iMessage, Hangouts or whatever Google's flavor-of-the-month messaging app is these days, Telegram, and Facebook.
Google's security is still better than both Telegram's and Facebook's. It's not great, but that's why it's #4 on a list of 6. If you care significantly about privacy & security, I would not use anything worse than iMessage, and even that's borderline.
(Your opinion of whether Google or Telegram is better will likely also depend upon whether you think malice or incompetence is a bigger threat. Google's business model relies upon it snooping on you, but they have really, really good security people ensuring that nobody else snoops on you. Meanwhile, Telegram has less of an incentive to actively violate your privacy, but they may let other parties violate your privacy by passively fucking up their engineering. They've done stuff like roll their own crypto algorithms, which is a terrible no-no for anyone who cares about security.)
How is iMessage worse versus Hangouts? Is Hangouts even end-to-end encrypted? IIRC it isn’t, and neither is Google Chat (a product which is replacing Hangouts, from what I can tell); just Allo and Duo are.
1. iMessage uses RSA instead of Diffie-Hellman. This means there is no forward secrecy. If the endpoint is compromised at any point, it allows the adversary who has
a) been collecting messages in transit from the backbone, or
b) in cases where clients talk to server over forward secret connection, who has been collecting messages from the IM server
to retroactively decrypt all messages encrypted with the corresponding RSA private key. With iMessage the RSA key lasts practically forever, so one key can decrypt years' worth of communication.
I've often heard people say "you're wrong, iMessage uses a unique per-message key and AES, which is unbreakable!" Both of these are true, but the unique AES key is delivered right next to the message, encrypted with the public RSA key. It's like transporting a safe where the key to that safe sits in a glass box strapped to the side of the safe.
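To make the "glass box" concrete, here is a minimal sketch of that hybrid pattern in Python (using the pyca/cryptography library; this illustrates the general scheme described above, not Apple's exact wire format, and all names are made up):

    # Sketch of the hybrid-encryption pattern described above. Illustrative only.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Long-term recipient key pair: with iMessage this lasts for years.
    recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    def encrypt_message(plaintext: bytes):
        """Encrypt with a fresh AES key, then wrap that key with the RSA key."""
        aes_key = os.urandom(32)                 # unique per-message AES key
        nonce = os.urandom(12)
        ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)
        # The "glass box": the AES key travels right next to the ciphertext,
        # protected only by the long-term RSA key.
        wrapped_key = recipient_key.public_key().encrypt(aes_key, OAEP)
        return nonce, ciphertext, wrapped_key

    # An adversary who records traffic for years and later obtains the one
    # RSA private key can unwrap every per-message AES key retroactively:
    nonce, ct, wrapped = encrypt_message(b"hello")
    aes_key = recipient_key.decrypt(wrapped, OAEP)
    print(AESGCM(aes_key).decrypt(nonce, ct, None))  # b'hello': no forward secrecy

Forward secrecy would require the wrapping key to be ephemeral; a long-lived RSA key makes every recorded message retroactively decryptable.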
2. The RSA key strength is only 1280 bits, which is dangerously close to what has been publicly broken: on August 15, 2018, Samuel Gross factored a 768-bit RSA key.
A 1280-bit RSA key has 79 bits of symmetric security; a 768-bit RSA key has ~67.5 bits. So compared to what has been publicly broken, the iMessage RSA key is only 11.5 bits, i.e. 2^11.5 ≈ 2896 times, stronger.
The same site (keylength.com, see below) estimates that in an optimistic scenario, intelligence agencies can only factor about 1358-bit RSA keys in 2019. The conservative (security-conscious) estimate assumes they can break 1523-bit RSA keys at the moment.
(Sidenote: this is very close to the 1536-bit DH keys the OTR plugin uses; you might want to switch to OMEMO / the Signal protocol ASAP, at least until the OTRv4 protocol is finished.)
Per keylength.com, no recommendation suggests using anything less than 2048 bits for RSA or classical Diffie-Hellman keys. iMessage is badly, badly outdated in this respect.
3. iMessage uses digital signatures instead of MACs. This means that each sender of a message generates irrefutable proof that they, and only they, could have authored the message. The standard practice since 2004, when OTR was released, has been to use Message Authentication Codes (MACs) that provide deniability by using a symmetric secret shared over Diffie-Hellman.
This means that Alice, who talks to Bob, can be sure received messages came from Bob, because she knows she didn't write them herself. But it also means she can't show a message from Bob to a third party and prove Bob wrote it, because she also has the symmetric key that, in addition to verifying the message, could have been used to create the authentication tag. So Bob can deny he wrote the message.
Now, this most likely does not mean anything in court, but that is no reason not to use best practices, always.
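A minimal sketch of the deniability property, using only the Python standard library (names are illustrative; in a real protocol the shared key would come from a Diffie-Hellman exchange):

    # A MAC tag proves integrity to the two holders of the shared key, but
    # proves authorship to nobody else, because either holder can compute it.
    import hashlib, hmac, os

    shared_key = os.urandom(32)   # agreed via Diffie-Hellman in a real protocol

    def tag(message: bytes) -> bytes:
        return hmac.new(shared_key, message, hashlib.sha256).digest()

    msg = b"meet at noon"
    bob_tag = tag(msg)                              # Bob authenticates his message
    assert hmac.compare_digest(bob_tag, tag(msg))   # Alice verifies it's from Bob

    # ...but Alice can compute exactly the same tag herself, so it is worthless
    # as evidence to a third party. A digital signature, by contrast, could
    # only have come from Bob's private key.
    assert tag(b"meet at noon") == bob_tag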
4. The digital signature algorithm is ECDSA, based on NIST P-256 curve, which according to https://safecurves.cr.yp.to/ is not cryptographically safe. Most notably, it is not fully rigid, but manipulable: "the coefficients of the curve have been generated by hashing the unexplained seed c49d3608 86e70493 6a6678e1 139d26b7 819f7e90".
5. iMessage is proprietary: you can't be sure it doesn't contain a backdoor that allows retrieval of messages or private keys via some secret control packet from an Apple server.
6. iMessage allows an undetectable man-in-the-middle attack. Even if we assume there is no backdoor that allows private key / plaintext retrieval from the endpoint, it's impossible to ensure the communication is secure. Yes, the private key never leaves the device, but if you encrypt the message with a wrong public key (which you by definition need to receive over the Internet), you might be encrypting messages to the wrong party.
You can NOT verify this by e.g. sitting on a park bench with your buddy and seeing that they receive the message seemingly immediately. It's not as if the attack requires some NSA agent to hear eavesdropping phone 1 beep, read the message, and type it into eavesdropping phone 2, which then forwards the message to the recipient. The attack can be trivially automated, and it is instantaneous.
So with iMessage the problem is, Apple chooses the public key for you. It sends it to your device and says: "Hey Alice, this is Bob's public key. If you send a message encrypted with this public key, only Bob can read it. Pinky promise!"
Proper messaging applications use what are called public key fingerprints, which allow you to verify out-of-band that the messages your phone outputs are end-to-end encrypted with the correct public key, i.e. the one that matches the private key of your buddy's device.
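As a rough sketch of the idea in Python (the exact encoding here is made up; every app has its own fingerprint format):

    # Derive a short, human-comparable fingerprint from raw public key bytes.
    import hashlib

    def fingerprint(public_key_bytes: bytes) -> str:
        digest = hashlib.sha256(public_key_bytes).hexdigest()
        # Group into chunks that are easy to read aloud or compare in person.
        return " ".join(digest[i:i + 8] for i in range(0, 32, 8))

    # Both parties run this on the key their own device actually uses for the
    # conversation, then compare the output out-of-band (in person, on a call).
    print(fingerprint(b"...recipient public key bytes..."))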
When your buddy buys a new iDevice, like a laptop, they can use iMessage on that device. You won't get a notification about this. What happens in the background is that your buddy's new device generates an RSA key pair and sends the public part to Apple's key management server. Apple then forwards the public key to your device, and when you send a message to that buddy, your device first encrypts the message with the AES key, and then encrypts the AES key with the public RSA key of each of your buddy's devices. The encrypted message and the encrypted AES keys are then passed to Apple's message server, where they sit until the buddy fetches new messages for some device.
Like I said, you will never get a notification like "Hey Alice, looks like Bob has a brand new cool laptop, I'm adding the iMessage public keys for it so they can read iMessages you send them from that device too".
This means that a government that issues a FISA court national security request (a stronger form of NSL), or any attacker who hacks the iMessage key management server, or any attacker who breaks the TLS connection between you and the key management server, can send your device a packet that contains the attacker's RSA public key and claims it belongs to some iDevice Bob has.
You could possibly detect this by asking Bob how many iDevices they have, and by stripping TLS from iMessage and counting how many encrypted AES keys are output. But it's also possible Apple can remove keys from your device to keep iMessage snappy: they can very possibly replace keys on your device. Even if they can't do that, they can wait until your buddy buys a new iDevice, and only then perform the man-in-the-middle attack against that key.
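Here is a sketch of that per-device fanout in Python (pyca/cryptography again; the device-key list stands in for what the key server returns, and all names are hypothetical):

    # The sender wraps the message key once per recipient device, trusting
    # whatever public keys the key server hands back. Illustrative only.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    bob_phone = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    attacker = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # What the key server claims are "Bob's devices"; the injected key looks
    # exactly like a new laptop, and no client-side alert fires.
    device_keys = [bob_phone.public_key(), attacker.public_key()]

    aes_key, nonce = os.urandom(32), os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, b"hi Bob", None)
    wrapped_keys = [pub.encrypt(aes_key, OAEP) for pub in device_keys]

    # The attacker's copy decrypts like any legitimate device's copy:
    recovered = attacker.decrypt(wrapped_keys[1], OAEP)
    print(AESGCM(recovered).decrypt(nonce, ciphertext, None))  # b'hi Bob'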
To sum it up, like Matthew Green said[1]: "Fundamentally the mantra of iMessage is “keep it simple, stupid”. It’s not really designed to be an encryption system as much as it is a text message system that happens to include encryption."
Apple has great security design in many parts of its ecosystem. However, iMessage is EXTREMELY bad design, and should not be used under any circumstances that require verifiable privacy.
In comparison, Signal
* Uses Diffie-Hellman, not RSA (see the sketch after this list)
* Uses Curve25519, a safe curve with 128 bits of symmetric security, not 79 bits like iMessage
* Uses MACs instead of digital signatures
* Is not just free and open source software, but has reproducible builds so you can be sure your binary matches the source code
* Features public key fingerprints (called safety numbers) that allow verification that no MITM attack is taking place
* Does not allow key insertion attacks under any circumstances: You always get a notification that the encryption key changed. If you've verified the safety numbers and marked them as "verified", you won't even be able to accidentally use an inserted key without manually approving the new keys.
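As a sketch of the Diffie-Hellman core referred to above (Python with pyca/cryptography; Signal's actual X3DH + Double Ratchet layers much more on top, and its safety-number encoding differs from this):

    # X25519: both sides derive the same shared secret without it ever
    # crossing the wire. Fresh keys per session/ratchet step give forward
    # secrecy: compromising a device later doesn't decrypt recorded traffic.
    import hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()

    assert alice.exchange(bob.public_key()) == bob.exchange(alice.public_key())

    # A safety-number-style check: hash both public keys and compare the
    # result out-of-band. (Signal's real encoding is different.)
    def raw(key) -> bytes:
        return key.public_bytes(serialization.Encoding.Raw,
                                serialization.PublicFormat.Raw)

    safety = hashlib.sha256(raw(alice.public_key()) +
                            raw(bob.public_key())).hexdigest()[:20]
    print("compare out-of-band:", safety)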
This reminds me that in France, unless cryptography is used solely for authentication, it is considered a military weapon, and civilian usage is restricted in its key strength. Above a certain strength, you technically have to give your key to the government !!...!!!
I don't have a source, but fr.wiki [1] says that in 1999 the government allowed 128-bit keys to be used publicly without depositing them with the government. It also says that PGP was illegal in France until 1996 (considered a war weapon of category 2, whatever that means).
So I wouldn't be surprised if it were illegal over here in France to use key strengths above 2048 bits for end-to-end encryption...
TIL that the content of Wikipedia pages changes per language. I clicked 'English' in the left pane hoping to learn more about what you are saying, but the English version does not have the 'En Europe' section. Not so great. Thanks for your post.
They are in fact entirely parallel Wikipedia encyclopedias written in different languages. Not only will articles have different information and be organised in a different way, whole families of related articles may be organised in different ways from one language to another.
This seems pretty reasonable seen for the whole encyclopedia, but I suppose if you assume that the language change option will just translate the page you're currently looking at, then it's quite a surprise.
That’s not what they said; they said that iMessage has better security than Hangouts, and that this user wouldn’t use anything “worse”, i.e. further down their list, than iMessage.
It is worse in that Hangouts does not make false claims about its security: people who use it know that they are using it for the features enabled by the kind of security it does provide (only between the user and Google), like searching chat history across devices.
iMessage can also only guarantee security between the user and Apple due to Apple distributing the public keys (but to a lesser extent because it uses worse crypto), but it does not provide the usability features like searching full chat history across devices that Hangouts does.
Telegram does have some unique privacy-related features though that other platforms don't support. Examples are: ability to register without a mobile phone/app, open source library (tdlib), ability to edit messages, ability to delete messages for both sides (including images that are cached on the receiver's side), ability to clear chats for both parties, auto deleting messages.
They don’t claim end-to-end encryption by default though. You make it sound as if you’ve made some revelation here.
Telegram has faults, I would even argue it has many, but it’s clear that only “secret” chats and voice/video calls are end-to-end encrypted.
Whatsapp, however, does allow you to download all of your messages from your device using WhatsApp web, and they were recently shown to have an exploit/backdoor in the applications themselves. So in that context they’re comparable in my opinion.
"They don’t claim end to end encryption by default though."
They don't have to. The number of my peers (i.e. people who also major in CS) who think Telegram is more secure than e.g. WhatsApp is staggering. People don't really think about the protocol; they only think about what they hear on the news, or what their buddies who heard it on the news think.
And what they hear is "Telegram, the new encrypted messaging app, blah blah..." and then they hear the debate "Apple.. encryption.. LEA can't read messages". So they incorrectly count 1+1=3 and think Telegram is safe against LEA.
When you're online and you try to point out that Telegram uses a home-brew protocol, EXACTLY the same security architecture as Facebook (TLS), and that both were created by the Mark Zuckerbergs of separate nations, you'll very quickly drown in fanboys / sock puppets who come back with the following arguments:
"WELL TELEGRAM'S ENCRYPTION HAS NOT BEEN BROKEN IN THE WILD NOW HAS IT???" (no need when you can hack the server and read practically everything)
or
"NOT TRUE TELEGRAM HAS SECRET CHATS" (which only works between mobile clients, and one-on-one chats, just like Facebook Messenger's opt-in end-to-end encryption. Like this one guy on the internet I talked to so eloquently put it: "I don't use secret chats because when I open my laptop, I want to type with my keyboard and not take out my phone every time I want to reply")
or
"PAVEL DUROV ABANDONED MOTHER RUSSIA TO BRING YOU THIS SECURITY" (which tells you absolutely nothing about the protocol and is no proof of any motivation towards any direction. When you're as rich as Durov you can choose any other country in the world and I suspect Dubai isn't treating him too badly).
or
"DUROV REFUSED BACKDOOR SO THERE IS NO WAY TO GET USER DATA" (which is simply not true, it's not like government agents can't hack servers, if Durov could deliver such systems, he'd be making five figure hourly wage hardening Fortune500 companies' systems)
Telegram refused to provide decryption keys to the Russian, US, and Chinese governments. That is a great sign to me.
Meanwhile WhatsApp has a web interface (sic!) where law enforcement agents can request user-specific information, and probably chat logs, for whatever fake reasons they can come up with.
The Telegram founder also lies a lot. First he says that Telegram developers are not in Russia, out of the FSB's reach, but later proof emerged that they work from Russia, from the same office the VK developers worked in. Google Anton Rosenberg and his lawsuit. [1] Durov's public position ("this man is just a crazy freak") is very unconvincing, to say the least. I'd even suspect it is plausible that the Russian authorities have some leverage on Telegram, and that all this conflict with RosComNadzor is just a publicity stunt. After all, the only "loss" for the Russian government is RosComNadzor's reputation, which is bad anyway.
>The Telegram founder also lies a lot. First he says that Telegram developers are not in Russia, out of the FSB's reach, but later proof emerged that they work from Russia, from the same office the VK developers worked in.
I guess he has to protect his team. US government tried to bribe his programmers to weaken system security.
To add to the irony: Telegram has evolved into a bit of a darknet of its own, where people casually share content that would be near impossible to find on the surface web.
Accessing chats from a new device has no technical relation (or constraint) to the lack of end-to-end encryption. Wire encrypts all chats end-to-end and still syncs conversations to multiple devices on multiple operating systems. It does limit the sync to the last 30 days, but that’s mostly for cost reasons rather than technical ones.
Edit/correction: Neither Wire nor Signal sync conversations that have happened before the setup of a new device to the new device.
Signal also features multi-device end-to-end encryption.
This non-technical argument feels more and more like a shill talking point, because the claimed constraint is NEVER backed by technical arguments.
However, it feels intuitive to non-techies: "End-to-end means only one end and I have many devices therefore I have many ends so I can't end-to-end with every end, so better not end-to-end..."
If you can view your old conversations from a fresh installation on a new device, then this automatically implies that some 3rd party has access to your keys, i.e. your conversation cannot be considered truly private.
It can also imply syncing over an end-to-end encrypted (and verified, using QR codes at setup) channel between the devices being synchronised. I believe this is what Signal does, for example.
No it doesn't. The sync could be device-to-device, or the history could be encrypted in its storage on the intervening server, requiring the user to provide secrets on the new device.
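A sketch of the second option in Python (pyca/cryptography; the passphrase flow is an assumption for illustration; a real app might instead move a random key device-to-device via QR code):

    # Server-stored history the server cannot read: the encryption key is
    # derived from a secret only the user's devices ever see.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

    def derive_key(passphrase: bytes, salt: bytes) -> bytes:
        return Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(passphrase)

    # Old device: encrypt history before uploading it.
    salt, nonce = os.urandom(16), os.urandom(12)
    key = derive_key(b"correct horse battery staple", salt)
    blob = AESGCM(key).encrypt(nonce, b"...chat history...", None)
    # The server stores (salt, nonce, blob) but holds no key material.

    # New device: the user re-enters the same secret, so only it can decrypt.
    key2 = derive_key(b"correct horse battery staple", salt)
    print(AESGCM(key2).decrypt(nonce, blob, None))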
I wouldn’t loop backups into that criterion. iMessage syncs your message history to all of your devices in an end-to-end encrypted way: https://support.apple.com/en-us/HT202303
and WhatsApp allows users to back up/restore their messages with iCloud (unencrypted)
I continue harping on this point often. The usability, reliability and feature set of Signal are far behind Telegram or WhatsApp. If you want a platform that sometimes works, may be slow in delivering messages, may send false “device changed” notifications, and doesn’t allow a way to back up and restore chats (on iOS), then Signal is the one. If you don’t like any of these deficiencies, then Signal is the last thing to suggest. There’s no point using a so-called “secure messenger” if it’s going to numb users into accepting device change notifications without out-of-band verification, because the app and platform are buggy enough to generate those when nothing has changed. Yes, this is anecdotal, but I don’t trust that Signal promotes security or secure messaging practices.
Instead, use Matrix (with end to end encryption enabled) or Wire.
1. Bikeshedding has led to a reduction in security agility: any change has to be implemented first in the protocol, then in the SDKs, then in the clients. This process can take years.
2. Riot is the only client that delivers proper E2EE; the majority of clients don't feature it.
3. E2EE is still not enabled by default.
4. IRC-bridges will break E2EE
5. Decentralization does break up large silos and makes for less tempting targets, but now you have a bunch of server admins who have personal relationships with the people whose content (when not end-to-end encrypted) and metadata (always) they have access to.
6. Riot's key management and fingerprint verification have been a nightmare. Thankfully this is about to change.
Until all of these are fixed, i.e. until all clients enforce E2EE, until the protocol design is safe enough, until client vendors are required to keep up with security, until no bridges are allowed, and until fingerprints are trivial to compare, I will not use Matrix, and I think no one should.
Well, they DO have optional 2-factor auth, and yes, they definitely DON'T copy any keys between devices (like WhatsApp does when launching a web version).
Oh, come on! If Telegram can decrypt chats for a user, they can decrypt them whenever they really want to. Any other kind of encryption is irrelevant; against third-party attackers, TLS works well enough.
Dealing with endpoint security is a really tough problem, but I have a pet project that pushes the price per endpoint to just below $500: https://github.com/maqp/tfc
Agreed, Snowden is significant because he was able to encourage enough people to INSIST on strong privacy/encryption. Then it all comes down to basic game theory. Why would a company ever want to release any product without strong (end-to-end) encryption when users never complain about their data being encrypted? The only reason companies don't encrypt is when they have a vested interest in spying, either in their own interest or the government's. Anytime I see something without strong encryption, it is a red flag to me that something nefarious is up.
> Why would a company ever want to release any product without strong encryption (end-to-end) when users never complain about their data being encrypted. The only reason companies don't encrypt is when they have a vested interest in spying...
I think the second thought doesn't follow from the first. In my experience, the main reason companies don't encrypt is that it simply makes it that much harder to debug problems and consistently provide successful connections for users. HTTPS can fail in ways that HTTP does not.
If users aren't clamoring for encryption as a feature, the main reason not to provide it is simplicity and quality of service along the axis users appear to care about. If users want encryption enough that they're willing to tolerate that sometimes browser misconfiguration or server side error will cause the connection to fail because it cannot be trusted, then companies will implement it.
High-quality crypto libraries / systems lead to broader implementation, which makes it harder for elements in mostly-free societies to pressure implementers.
It's one thing for the NSA to quietly lean on AT&T (and only AT&T). It's a completely different thing for them to quietly lean on 1,000 different organizations and authors.
Similarly, it's easy to sneak a CALEA-alike amendment into national law when only PGP exists. It's harder when the narrative becomes "The government wants to take {beloved product used by millions} away."
I don't think I know many people who insisted on strong privacy/encryption. However, after the Snowden revelations people did consider it a preference. In that sense, it helped.
I think this has a lot more to do with Google punishing web results without https than it does with Snowden, couple that with Cloudflare and Let's Encrypt and you have an easy path.
This is the more important factor at play here. Browsers (Google Chrome mainly) are forcing everyone to go SSL lest they end up with an 'insecure site' warning for all their visitors. Most websites don't care about the NSA intercepting their data.
Yeah, Google has done much for privacy, and it's kind of ironic how people believe those who say it's malicious. The only crime Google ever committed was being successful.
- CPUs didn't have hardware acceleration for encryption (AES-NI) like they have today, so activating SSL on your webserver actually decreased your throughput a lot
- It was expensive and complicated to get a certificate for your website, now LetsEncrypt provides them freely and easily
Wasn’t the server load for SSL something like 3-5%? That doesn’t strike me as much of a factor compared to the complexity involved, especially with the confusion added by e.g. Thawte hawking their enhanced validation product.
"On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load" according to Google back in January 2010 [1]. This was about the same time as Intel introduced AES instructions, but the post suggests that this wasn't a big factor in their conclusion that TLS simply isn't computationally expensive.
> Wasn’t the server load for ssl something like 3-5%?
Depends on the packets per second being handled. I easily pegged a CPU core doing encryption just a bit over a decade ago due to high data rates. If you’re pushing >500Mb/sec without CPU-accelerated encryption (or NIC offloading), it puts a pretty hefty strain on resources.
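For a rough feel of the bulk-encryption cost on modern hardware, here's a quick single-core sketch in Python (pyca/cryptography calls into OpenSSL, which uses AES-NI where available; absolute numbers will vary by machine):

    # Rough single-core AES-256-GCM throughput benchmark. On AES-NI-capable
    # CPUs this typically reports on the order of GB/s, which is why TLS bulk
    # encryption is considered cheap today; the debated numbers predate AES-NI.
    import os, time
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    aead = AESGCM(os.urandom(32))
    nonce = os.urandom(12)
    chunk = os.urandom(1024 * 1024)       # 1 MiB
    n = 256

    start = time.perf_counter()
    for _ in range(n):
        aead.encrypt(nonce, chunk, None)  # nonce reuse is OK only for a benchmark
    elapsed = time.perf_counter() - start
    print(f"{n / elapsed:.0f} MiB/s on one core")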
Snowden played no role in HTTPS adoption, because people don't seriously expect HTTPS to defeat NSA. Most peoples' threat models are much more modest.
HTTPS adoption has a lot more to do with Google and Mozilla pushing it in their browsers, and Let's Encrypt making getting certificates easy. I have mixed feelings about that: the cost of easy certificates was making spoofing far easier.
It is a very fair point that Snowden's revelations definitely had an impact. However, the impact you note was mostly technical.
The public backlash to these revelations is what seems lacking. It had very small political effects, and seemingly very little effect on the NSA. They did not change their stance much, and there weren't really consequences for what the NSA was doing.
What’s more important though? The public can’t particularly change things at that level. They don’t live at that level. It’s our job to help them. Just as they help me on non computer related stuff all the time. A barber shouldn’t be in charge of web encryption. It’s on us.
Also, I think the public has no problem with the government spying on other people. They just don't want it spying on them. So in that regard, not opposing the policy but instead mitigating the risk to your own communications is an expected result.
Though that gets... interesting, when we remember that the revelations were that the NSA was spying on <insert person here>, since it was/is untargeted dragnet “surveillance”. People still didn’t care, though, politically.
What happens if an agency gets in deep with one of the common trusted authorities shipped with every browser, or is an authority, or just hacked their root keys, or bought access like they did with RSA? It seems like they could man in the middle all day and the only difference would be the cert issuer, which means it would be invisible if used in a limited fashion.
Well, I don't really trust random certs even when they're signed by a respected CA -- but I still prefer using HTTPS. Even if the cert is fraudulent, HTTPS is still encrypting stuff and will protect me from other random attackers.
Security is never a binary secure/insecure proposition. There are shades of gray. The key is to use what security you can, but never think "I'm secure now".
As an old mentor once told me: the moment that you think you're secure is the moment that you're at the greatest risk, but you should still lock your door.
There have been fraudulent certificates in the wild in the past, but the CAs issuing them usually get kicked out pretty quickly. That's what Google's Certificate Transparency project is for. And they are increasing requirements further and further. Hopefully one day we'll get to a state where the infrastructure of multiple independent companies in different countries needs to be compromised for a single successful forgery. But even now, Certificate Transparency has greatly reduced the number of entities able to forge certificates.
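For a single site you care about, you can also do a blunt pin check yourself. A sketch with the Python standard library (the pinned value is a placeholder; note that pinning the leaf certificate will also trip on legitimate rotation):

    # Fetch the certificate the server actually presents and compare its
    # SHA-256 against a previously recorded pin.
    import hashlib, socket, ssl

    def cert_sha256(host: str, port: int = 443) -> str:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)  # leaf cert, DER bytes
        return hashlib.sha256(der).hexdigest()

    EXPECTED_PIN = "...hex digest recorded earlier..."   # placeholder
    if cert_sha256("example.com") != EXPECTED_PIN:
        print("pin mismatch: MITM, mis-issuance, or legitimate rotation")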
“Verisign also operates a ‘Lawful Intercept’ service called NetDiscovery. This service is provided to ‘... [assist] government agencies with lawful interception and subpoena requests for subscriber records.’”
If you now try to search for NetDiscovery or LEA services for CAs, you won't find any, but I guarantee you they haven't disappeared anywhere.
The CA doesn't have anything that helps you do Lawful Intercept. They just vouch for people's identities.
If you can persuade them to fraudulently vouch for your agency as being some other subscriber then this unavoidably produces a smoking gun which everybody can see, just like when Israel produces fake passports so its agents can travel abroad to murder people.
It doesn't let them passively intercept. The CA could not, gun to its head, help you do that. The mathematics just doesn't work that way, any more than the US Federal Reserve could intervene to make three dollars twice as many as five dollars.
This means that fraudulently issued certificates either won't work, or will be recorded in public logs run by Google (or Google needs to be coerced by the authorities as well).
I don't consider a cert trustworthy just because it's signed by a CA, unless that CA is mine or one run by someone I personally know and trust. I came to this position before Snowden, though.
> A signed cert has to depend on someone you dont know.
No, it doesn't. If it's signed by my own CA, then I clearly know who signed it. Likewise if it's signed by a CA run by someone else I actually know.
The point of the signing is to have someone I trust validate that the cert they signed is trustworthy even if I don't know the entity that made the cert they signed.
I feel like Namecoin and Ethereum Name Service are the most promising replacements for certificate authorities that I'm aware of. Are you aware of any better suggestions?
> yet they use the kind of encryption that only computer enthusiasts were interested in
I hear this so often about WhatsApp - that they are end-to-end encrypted... But I really have no proof that it's true or that I should trust Zuck.
I am sure you can check the messages being communicated, and on the surface you'll confirm to yourself that messages are encrypted. But how do you know there are no weaknesses in the design? How do you know they didn't "flip the switch" to allow a backdoor?
Maybe a noob question, but are whatsapp's messages secure from Facebook? Would some motivated employee at Facebook be able to read everyone's messages if they wanted? If no, how do we know?
As much as I despise Facebook and its properties, the “how do we know” question can only be answered based on trust: that there would be at least one person in the company/team who would blow the whistle if the end-to-end encryption were removed (with their knowledge, and not through some state-sponsored hacking).
With that background, a motivated employee cannot read WhatsApp messages that they have not sent or received themselves because WhatsApp uses the Signal protocol implementation. Coming to your first question, WhatsApp does share metadata with Facebook. So the fact that content isn’t shared is a moot point because a lot can be inferred from metadata alone to target people for any purpose.
So WhatsApp is not really a secure messenger if Facebook is part of your threat model and is considered an adversary or an adversary who can be easily coerced or compromised.
If you’re talking about decrypting messages created and read via SSL (which is what they imply), it’s not possible unless you have the private key, as opposed to the widely available public key.
I doubt it’s lying around in Facebook’s repositories, but I’ve never worked there, so I cannot say that with certainty.
This is all assuming they are even using modern SSL and are careful with user data. Unfortunately, not a great track record there for FB.
>But to me it seems he did provide a lot of the impetus for having HTTPS virtually everywhere
HTTPS connections are full of 3rd-party surveillance systems that still have access to, and monitor, parts of the cleartext. WhatsApp is connected to the Facebook data vacuum (yes, just "metadata", but as the Snowden revelations you cite show, metadata is the desirable surveillance record).
If anything, this is a step backwards because it uses the pretense of security while providing none and really just being a fight for exclusive data across multiple corporate surveillance systems.
SSL was THE thing in 1995. End-to-end encrypted email has existed since 1991 (PGP), and for instant messaging you've been able to use OTR since 2004.
End-to-end encryption is what is needed, SSL is the bare minimum for everything. It's the seatbelt + airbag. You can't have a car without those anymore. E2EE is the ACC+AEB+ABS. You should not have a car without those anymore.
19 days before Snowden flew to Hong Kong, former FBI counter-terrorism agent Tim Clemente spilled the beans on CNN[0] (for context, information from a phone call between one of the Boston Marathon bombers and his wife had been leaked to the media):
>BURNETT: Tim, is there any way, obviously, there is a voice mail they can try to get the phone companies to give that up at this point. It's not a voice mail. It's just a conversation. There's no way they actually can find out what happened, right, unless she tells them?
>CLEMENTE: No, there is a way. We certainly have ways in national security investigations to find out exactly what was said in that conversation. It's not necessarily something that the FBI is going to want to present in court, but it may help lead the investigation and/or lead to questioning of her. We certainly can find that out.
>BURNETT: So they can actually get that? People are saying, look, that is incredible.
>CLEMENTE: No, welcome to America. All of that stuff is being captured as we speak whether we know it or like it or not.
This could be coincidental timing, but I've always wondered if the Snowden leak was a way of controlling the national discussion around the issue and putting an agent in place (Snowden) who could be a relatively moderate voice that the pro-privacy crowd could group around, while also creating a dramatic story with the potential for international espionage that allows pro-surveillance voices to distract from the they're-spying-on-us narrative by accusing Snowden of being a Chinese/Russian pawn. I don't feel comfortable saying Snowden is still working for the US government, but I'm certainly suspicious of him.
Snowden released a large collection of documents. Judging by his interview with Joe Rogan, he's a passionate advocate for encryption and says that the US is creating a tool for complete oppression. It's harder to get more apocalyptic than that.
If you want to win against a view, select a leader from your pocket, make him look plausible, and have him take control of the whole view. At any point you desire, let that person discredit himself and take the whole view down with him.
Well if Edward Snowden weren't a CIA asset, how would you know? If there's no way for us to know if Edward Snowden is a CIA asset or not and he has every appearance of an independent actor, why should we care?
I think that it's not actually possible to know anything, and your only real options after recognising that are to reject the pursuit of truth entirely or fall back onto probabilistic models instead of binary beliefs. Not knowing whether or not Snowden is a CIA asset leads me to the question, "how likely is it that Snowden is a CIA asset?" As to why we should care, here's another question: "if Snowden were a CIA asset, how would that change my future behavior?" If it wouldn't change your future behavior, carry on not caring. If it would, then ask yourself "given that I may be wrong about Snowden being a CIA asset, which way would I rather err for an optimal risk/reward ratio?" Then you consider all three answers, and decide how to act in the future despite never actually coming to a conclusion about whether Snowden is a CIA asset. Maybe you think the risk of him being an asset is so low that you don't mind risking the chance that he isn't, or maybe you think it's reasonably likely that he could be an asset while not believing that your personal risk from being wrong is worth worrying about. Or, maybe you change your behavior.
I listened to that interview, and purposely didn't include my take on it above because I didn't want my crazier belief to distract from my crazy belief. Let's dive into the deep end of my personal crazypool: he came off very politician-y to me, even down to the purposeful missteps. People trust a speaker more when the speaker says something a little bit offensive, probably because it's a sign that the person will speak their mind regardless of what anybody else thinks. That moment where he said that no judge would refuse to sign a warrant for "Abu Jihad" or "Boris Baddenov" looked just like this to me: it's offensive, but it immediately juxtaposes an offensive faux-Muslim name with an offensive faux-Russian name, and it's weirdly still socially acceptable to make fun of Russians this way while the Muslim statement is not socially acceptable, so any in-depth discussion of the faux pas gets bogged down unpacking the distinction between the two and making the Muslim caricature seem more understandable in context.

The bit where he keeps pulling Rogan away from specific lines of questioning in the beginning, not to avoid them but instead to tell his whole life story leading up to the leaks, seemed like the sort of thing an aspiring politician would do when presented with a microphone and a long format. He acted like his first book had to be autobiographical because the publisher insisted on it, but really, we all know Snowden could find a publisher for a non-autobiographical book. It's an obvious deception; he wants to familiarise the public with his personal story while also acting like he isn't trying to do that. If Snowden gets pardoned sometime between 2024 and 2032 and subsequently makes a run for President, I'm going to be scared that one of the TLAs has a mole at the top of the one-eyed pyramid.
I'm not making any conspiratorial claims about this part, but as an aside it was weird to me that he claimed cellphone IMEIs can't be changed. It's not normally done, but it can be. I wasn't sure if that was dumbed down for Rogan's audience, a misspeak, or actual ignorance on Snowden's part.
Wait did he seriously say IMEIs can’t be changed? Surely he knows better: they definitely can. It’s not easy, but it’s doable. Or was, a decade ago at least.
>Edward Snowden: (02:26:27)
They’re two globally unique identifiers that only exist anywhere in the world in one place. This makes your phone different than all the other phones. The IMEI is burned into the handset of your phone. No matter what SIM card you change to, it’s always going to be the same and it’s always going to be telling the phone network. It’s this physical handset.
>> I don't feel comfortable saying Snowden is still working for the US government, but I'm certainly suspicious of him.
Well. My deep belief is that individuals that are truly dangerous to the system (here, any system that is powerful enough but one can view it globally as a continuously evolving technology-driven wanna-be-AI) get separated from power asap and then directly eliminated if needed. One should be very naive to think that the following makes any sense: "a young boy from government family says that the whole world is controlled by a few; he wants to stop it and so gets a platform to alarm about it via main media channels, supposedly controlled by the same few". Another point of the whole move was to identify people (e.g., you and me) who will not buy this so they will likely avoid buying other incoming BS.
> One should be very naive to think that the following makes any sense: "a young boy from government family says that the whole world is controlled by a few; he wants to stop it and so gets a platform to alarm about it via main media channels, supposedly controlled by the same few".
I'm very sympathetic to this view, obviously; at first glance it seems too good to be true.
Of course we're supposed to believe that the media isn't controlled by these same few, that Operation Mockingbird ended by the time CIA Director George H.W. Bush announced in 1976 that the CIA would stop paying journalists, and that Operation Mockingbird was actually limited to a couple of wiretaps rather than the full-scale infiltration of the press previously reported through non-official channels (though they did admit that they paid journalists, they claimed that this was not done as part of Operation Mockingbird). Obviously the CIA still has people in media agencies, but we're crazy for believing that; it supposedly isn't the case. But even with a CIA-infiltrated media, I could see the story getting out. The CIA can't be everywhere; it's possible that by going through more than one media institution (including a British one, as if that mattered) and also contacting a documentary film maker who had a previous run-in with the federal government (she claims to have been put on the highest threat-level list the DHS has after making a film critical of the occupation of Iraq), he was able to make sure that the government couldn't stop the information from coming out through some channel or another. Or they could have been worried that by blocking it in the press they would provoke him to release unredacted versions that would reveal even more (yes, he claimed to not have these by the time he entered China, but he could have given copies to another still unknown source, or he could have been lying about not still having them in some format, perhaps steganographically hidden).
I could also actually see a whistleblower escaping capture/death by going to an area controlled by a foreign power and making the defection public immediately, so that any suspicious death would be seen as an obvious assassination without a fair trial. That seems plausible to me, whether or not it actually happened.