Hacker News

If you read the article, it's talking about not putting in a backdoor, not about Facebook saying "we have access to all encrypted messages, we're just not giving them to you". As it stands, they're end-to-end encrypted so not even Facebook can see your messages, and that's what Barr doesn't like.



> they’re end-to-end encrypted so not even Facebook can see your messages

Not quite. Facebook still controls the endpoints, so when you see the message so can they. This is obvious: you use their app to view the encrypted message, hence the app has access to the cleartext.
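To make that concrete, here's a toy sketch (illustration only; the XOR stream construction and all names are hypothetical, not real cryptography or anything Facebook actually ships). The point is structural: the ciphertext is opaque on the wire, but the viewing app necessarily holds the plaintext in order to render it, so whoever controls the app controls what happens to the plaintext next.

```python
# Toy sketch (NOT real cryptography): even with end-to-end encryption,
# the endpoint app must decrypt the message to display it.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic keystream from a shared key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream ciphers are symmetric

shared_key = b"agreed between the two endpoints"
wire = encrypt(shared_key, b"meet at noon")

# The network (and the server relaying `wire`) only ever sees ciphertext...
assert wire != b"meet at noon"

# ...but the recipient's app recovers the plaintext to show it on screen,
# so the app's author decides what else to do with it: display, log, upload.
shown_to_user = decrypt(shared_key, wire)
assert shown_to_user == b"meet at noon"
```

This is exactly the "endpoint security" point: the encryption can be sound and the endpoint can still betray you.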

https://en.wikipedia.org/wiki/Endpoint_security


This seems like an extreme argument.

If the app is not phoning home with the cleartext, this seems okay. You need some software to retrieve/read text anyway, so this becomes an exercise about trusting trust, etc.


An extreme argument? If you don't care about security, perhaps you are right. In any other case, using a closed-source endpoint from the company that promises you "encryption" is completely crazy. They "only" have the ability to decrypt all your messages, remove encryption altogether without notice, target ads based on your conversation history, and install targeted decryption backdoors. Yeah, no big deal at all, seems like something really trustworthy.

It's still better than nothing, although in the hands of Facebook it might actually be worse than nothing.


> This seems like an extreme argument.

Not at all. Good security often involves some black-and-white thinking, which not everyone is accustomed to.

If Facebook controls the endpoint, then they have the power to access the plaintext, full stop. Using their product (hopefully) implies a choice to trust them not to abuse such access.


Ok, what about the closed source hardware in the phones?

Would you argue against all encryption because clearly the CPU maker has a similar access to all decrypted content?


I'd argue for not trusting a cryptosystem that requires you to use a particular vendor's CPUs. Open standards and independent implementations at every level should be table stakes.


Although I'd argue the black-and-white "everyone is a potential adversary" thinking is misguided. Your threat model determines the requisite security measures, and you usually have to trust someone. (Although Facebook should probably not be that someone.)


> Using their product (hopefully) implies a choice to trust them not to abuse such access.

Which is what...they said?


The fear is that Facebook could push updates to targets that the US government is interested in and initiate a phone home. The update mechanism is the "front door" that could be used to implant a backdoor.


I agree that this is a risk for basically any networked app, but can't we distinguish whether this is an active concern or a hypothetical one?

In order to actually provide your messages to Facebook, the app needs to either call home when you view the message or write the cleartext somewhere on-device to send home later. If you view the message and then the app calls out with data we can't inspect, or writes something locally that we can't inspect, it could potentially be exfiltrating the message you viewed. If not... am I missing an attack vector, or is that message safe?

(To be precise: this would only prove safety for that viewing of that message. If we can't see the app's code, it could have testbench cutouts like Volkswagen or WannaCry, or more likely could trigger only for certain users or in certain cases à la Greyball.)


Yeah, this - as far as I can see, there's nothing that prevents FB / WhatsApp from "accidentally" shipping the private keys on my phone / machine to their server.


Also, where is the private key stored and when/where exactly is it passed into the decryption algorithm? When/where is the original private key generated and managed? Inside fb software somewhere?
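For the key-generation part, the usual design is a Diffie-Hellman-style exchange performed client-side (WhatsApp's published whitepaper describes using the Signal protocol with Curve25519). A toy finite-field sketch of the idea, in stdlib Python with a deliberately small prime (illustration only; real systems use large groups or elliptic curves):

```python
# Toy finite-field Diffie-Hellman (illustration only, NOT production crypto).
# The group parameters P and G are public; only the private exponents matter.
import secrets

P = 2**127 - 1  # small Mersenne prime, far too small for real use
G = 3

# Each device generates its private key locally and keeps it;
# only the public half ever needs to be transmitted.
alice_priv = secrets.randbelow(P - 2) + 1
alice_pub = pow(G, alice_priv, P)

bob_priv = secrets.randbelow(P - 2) + 1
bob_pub = pow(G, bob_priv, P)

# Both sides derive the same shared secret without either private key
# crossing the network -- *if* the client behaves as the protocol intends.
alice_secret = pow(bob_pub, alice_priv, P)
bob_secret = pow(alice_pub, bob_priv, P)
assert alice_secret == bob_secret
```

The protocol never requires the private key to leave the device, but that's exactly the parent's point: with a closed-source client, nothing but trust stops the app from uploading it anyway.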

I'm sure realistically the US gov could creatively accomplish what they want.


Why is everyone so confident Facebook hasn’t already backdoored it?


They have in a manner of speaking. The communication channel is still encrypted, they just run their surveillance algorithms directly on your device.

https://www.forbes.com/sites/kalevleetaru/2019/05/05/faceboo...


That Forbes article was wrong: https://news.ycombinator.com/item?id=20587643


Facebook says in an HN post that the article is wrong, but I have no real reason to trust Facebook, given their track record.


> if we ever did it would be quite obvious and detectable that we had done it.

It's running on your device; you don't have to trust them if you have the technical skill.


I have little desire to decompile every update and/or constantly analyse what their app is doing. They've proven to be untrustworthy on multiple occasions, and haven't given me much reason to think they've changed.

They might convince me if they open their client and server code, but even then they're yet another walled garden only interested in keeping their monopoly by building inferior products and using regulatory capture to prevent any competition from doing the same thing they did to MySpace.


You can't read the encrypted packets being sent by the app. The message telemetry could be bundled with other telemetry, and you wouldn't know despite your technical skill.


I don't consider a self-serving assurance from a company representative to be conclusive proof.


Only if you choose to believe Facebook that it was.


Don't trust closed source software for encryption.


Not sure the fallout of the public finding out would be worth it over the value-add of reading messages. Maybe I'm wrong, but that's how I would look at it.


People regularly reverse engineer the Facebook apps to see what’s inside. (For example: Jane Wong)



