
I don't know why so many people here are patting themselves on the back over this. This is not the kind of encryption people were talking about in the 90s and 00s. A lot of this encryption is not point-to-point. It merely secures users' interactions with some middleman (or their server). What would the numbers be if you subtracted all the traffic that can be snooped on by Google, Amazon and Cloudflare?



Several reasons:

- The good is not the enemy of the perfect.

- This eliminates an entire class of attacks, namely, man-in-the-middle.

- A lot of (most?) user interactions require the server to know what the user wants, and it's unclear how this can happen if the server can't view the user's data.


MITM is not mitigated at all by HTTPS. What makes you think that? Do you understand how certificate signing works?


How does certificate signing not mitigate man-in-the-middle? Say you have control of DNS and you can fully impersonate and replace any server. You present the server's valid certificate, which contains the public key the client uses to encrypt traffic. But the man-in-the-middle doesn't have the corresponding private key. You can convince the client it's talking to the right machine, but then you can't understand anything the client has to say, because it's encrypted to the legitimate public key.

Say you have control of the infrastructure and you forge a certificate. You'll have a hard time getting the client to trust the certificate unless you have compromised the signing key of a certificate authority and generated an apparently valid cert.

So, can it entirely prevent it? Can I get Verisign to issue me a certificate for G00GLE INC.? If you can alter the client's list of trusted authorities, you can make yourself an authority, but then you've already compromised the client. If you can get the server's private key, you've compromised the server. You can get creative, sure... probably you stand a better chance of beating the people in the chain than the technology... but the difficulty of doing so seems to amount to 'mitigation' at the least.
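
To make that concrete, here's a rough Python sketch (standard library only; the hostname is just an example) of what that check looks like on the client side. The default TLS context verifies the chain against the system trust store and checks the hostname, so a MITM presenting an untrusted or mismatched certificate fails before any application data is exchanged:

    import socket
    import ssl

    def fetch_cert_subject(host: str, port: int = 443) -> str:
        ctx = ssl.create_default_context()  # loads the system CA bundle
        with socket.create_connection((host, port), timeout=10) as sock:
            # Chain and hostname are verified during the handshake; an untrusted
            # or mismatched certificate raises ssl.SSLCertVerificationError here.
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        return dict(x[0] for x in cert["subject"]).get("commonName", "")

    print(fetch_cert_subject("news.ycombinator.com"))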


Parent isn't wrong... technically.

Certificate Transparency exists solely because any CA can issue an SSL cert for any domain and use it to MITM via a proxy.

You are trusting every CA out there, not just Verisign. That is the ultimate weakness. Any CA can issue a cert for any domain.

The Expect-CT header is the only thing protecting you from a MITM, and it's not really even a protection; it's trivial for the MITM to strip that header before proxying the response to the client.

How do you think mitmproxy[0] works?

[0] https://mitmproxy.org/
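
For anyone curious: mitmproxy sits in the middle and presents its own certificate for whatever domain the client asked for, signed by a CA the client trusts (in mitmproxy's case, its own CA cert that you install; in the attack described above, any public CA willing to issue the cert). Once a proxy is in that position, stripping a header is a one-liner. A rough sketch of a mitmproxy addon, purely illustrative:

    # Illustrative mitmproxy addon. This only works against clients that
    # already trust the proxy's CA certificate, which is the real barrier.
    # Run with: mitmproxy -s strip_expect_ct.py
    from mitmproxy import http

    def response(flow: http.HTTPFlow) -> None:
        # Remove the Expect-CT header before the response reaches the client.
        if "Expect-CT" in flow.response.headers:
            del flow.response.headers["Expect-CT"]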


...do you? Unless the attacker has access to the private key associated with the SSL certificate, they can't read any HTTPS traffic encrypted via that certificate - mitigating the ability of that bad actor to perform a MITM attack.


And even if they get a cert issued, it will show up in the CT Logs eventually and the attack becomes public.


The effectiveness of CT logs isn't a thing unless the website uses CT monitoring or is a huge company. A [delegated or non-delegated] DNS takeover, or an IP address release (e.g. cloud providers re-assigning an IP to another customer), could allow you to generate a certificate for some-forgotten-subdomain.medium-sized.company.com using the ACME HTTP challenge. Of course this is mitigated by properly managing your DNS, and CT monitoring is encouraged everywhere.


In other words, the effectiveness of the CT logs is a thing. There are multiple services which will do this for you for free (Cloudflare and Facebook at least make it trivial to get notifications for your domains) and it’s a level of visibility which almost nobody had just a few years ago.


crt.sh offers an RSS feed; I use that to track all certificates issued for my domain. It doesn't require anything expensive or complicated.

CT Logs don't mitigate any of the attacks but they make them very very very visible if they happen. Especially if a CA goes rogue, this will be immediately visible and provable.
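
If you'd rather script it than use RSS, crt.sh exposes the same data as JSON. A rough Python sketch (assumes the requests package; replace example.com with your own domain; the field names are what crt.sh returns today and may change; a real monitor would remember what it has already seen and alert only on new entries):

    import requests

    def issued_certs(domain: str):
        resp = requests.get(
            "https://crt.sh/",
            params={"q": f"%.{domain}", "output": "json"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    for entry in issued_certs("example.com"):
        print(entry["not_before"], entry["issuer_name"], entry["name_value"])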


Cert pinning mitigates this too, right?


IF you pin services to a key you control, this mitigates the problem of bad guys obtaining bogus certs, BUT now you need to manage the pinning application to ensure it knows about any new keys before they roll into use. This may force you to compromise on your rotation schedule: maybe you'd prefer to use new keys for the new cert you're buying this week, but alas the new app version is still waiting for Apple sign-off, so it'll have to be next year instead.

IF you pin to an intermediate key, which is under the control of a CA, then bad guys who obtain certs from that intermediate will not be inconvenienced by your pin, but these keys are intentionally long-lived (they are protected in an HSM) so the rotation issue isn't as fraught.
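
For reference, the pin check itself is tiny; the hard part is everything above (rotation, backup pins, shipping app updates). A rough Python sketch pinning the SHA-256 of the leaf certificate (the pinned value below is a placeholder; real deployments usually pin the SPKI hash and keep backup pins):

    import hashlib
    import socket
    import ssl

    PINNED_SHA256 = "00" * 32  # placeholder: hex SHA-256 of the expected leaf cert

    def matches_pin(host: str, port: int = 443) -> bool:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der_cert = tls.getpeercert(binary_form=True)  # leaf cert, DER bytes
        return hashlib.sha256(der_cert).hexdigest() == PINNED_SHA256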


Only if there's an Expect-CT header, which is trivial to strip.


Well, no, the CT Log will include any valid certificate presented, so any widespread attack will have to outright block access to the CT Logs or you're going to have a bad time.

Expect-CT only controls whether the browser will warn the user if the cert is not in the logs; it does nothing about certs being entered into the CT Logs themselves.


Yeah, I think they do, actually.

Two things...

Proxies are a thing, and stripping the Expect-CT header is trivial.

Any CA can generate a valid SSL cert for any domain.


The above poster is still technically correct, though: getting the cert is just one more obstacle in the way of the attack, and it isn't as much of an obstacle as one would think for some actors (see China).


Certificate transparency would make it blatantly obvious if Chinese CAs were issuing bogus certificates. (And if they issued certs without submitting them to CT logs they wouldn't be accepted by Chrome or Safari, so it wouldn't be very useful.)

Sure, they could do it, but it wouldn't be long until there were no Chinese CAs trusted by any browser.


An attack like this could still be done against CLI/library clients such as curl (i.e. server-to-server connections), none of which, as far as I'm aware, incorporate CT log verification.


You can add CT checks to such software with e.g. ctutlz (a Python library towards this end), but then you need to be on the sort of treadmill that browsers are on: regular updates, and disabling security features like CT checking when the software hasn't been updated in a timely fashion.

All the non-Google modern CT logs moved to rolling annual logs. Cloudflare's Nimbus, for example, is actually a set of logs named Nimbus2019, Nimbus2020, Nimbus2021 and so on. Nimbus2019 is for certificates that expire in 2019. Most of those are already expired 'cos it's November already, so it doesn't see a lot of updates; in January Cloudflare can freeze it and eventually decommission it, browsers will stop trusting it, but it won't matter because those certs already expired. If you go get yourself a new Let's Encrypt cert now, it'll probably be logged with Nimbus2020, and come January 2021 you won't care if Cloudflare freezes it and starts shutting it down.

As a result you need frequent (say, monthly seems fine) updates to stay on top of new logs being spun up and old ones shutting down, or else you'll get false positives.

For CLI or server software that has a regular update cadence anyway, I can see this as a realistic choice; for a lot of other software it'll be tough to do this without more infrastructure support.
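
The practical way to stay on that treadmill is to refresh a published log list on a schedule instead of hard-coding log URLs. A rough sketch against Google's published log list (the URL and field names below are my assumption about the current v3 format and may well change, which is exactly the treadmill problem):

    import requests

    # Assumed current location of Google's CT log list; check before relying on it.
    LOG_LIST_URL = "https://www.gstatic.com/ct/log_list/v3/log_list.json"

    def current_logs():
        data = requests.get(LOG_LIST_URL, timeout=30).json()
        for operator in data.get("operators", []):
            for log in operator.get("logs", []):
                yield operator.get("name"), log.get("description"), log.get("url")

    for op, desc, url in current_logs():
        print(f"{op}: {desc} ({url})")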


But the server can't do header checks for CLI tools prior to certificate negotiation, so it would be difficult to get away with. Not impossible, but it'd limit your targets to IPs that are exclusively non-CT clients. Any slip-up and you'd be busted.


Literally the only reason TLS uses certificates is to mitigate Man-in-the-Middle attacks.

Establishing a shared secret with another party over a public channel is not that hard (Diffie-Hellman, RSA). The hard part is to ensure the other party is who they say they are. Certificates tackle this by having a trusted party (CA) cryptographically bind the shared secret to an identity.

There are issues here, but if you can read and modify the traffic between my PC and the HN servers, you still won't be able to read and modify the traffic.
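
To illustrate the "not that hard" part, here's a sketch with the third-party cryptography package: two parties derive the same shared secret from nothing but exchanged public keys. Note that nothing in it tells either side who the other key belongs to; that binding is the part certificates and CAs provide.

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    # Each side generates a key pair; only the public halves cross the wire.
    alice_priv = X25519PrivateKey.generate()
    bob_priv = X25519PrivateKey.generate()
    alice_pub = alice_priv.public_key()
    bob_pub = bob_priv.public_key()

    # Both sides independently derive the same shared secret.
    assert alice_priv.exchange(bob_pub) == bob_priv.exchange(alice_pub)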


Technical corrections:

The binding is over a _public key_ not a _shared secret_.

Also that last sentence is confusing and I'm not sure how best to fix it. Maybe the last word should be 'meaning' not 'traffic' or maybe the word HTTP should be inserted?


If HTTPS does not mitigate MITM attacks, what is the purpose of it?


That is the purpose, and it's exactly what it fails at. Why do you think CT logs exist? For exactly this reason...


What are you talking about? I think you'd better look up how HTTPS/TLS works. Sure, you have to trust the certificate authority. Also, can you imagine the scandal that would erupt if Google or AWS was discovered to be eavesdropping on companies running things in their cloud? I don't think so.


I believe the OP is talking about encryption for user data, not merely for transport.

Google, Amazon, etc. still store user data uninhibited, and though they are often competent about security, they also often provide data to state actors as a normal course of action. The fact that a web browser communicates safely with an endpoint doesn't mean that endpoint isn't a bad apple itself. In some cases these endpoints are logging proxies to other servers and services, and though transport is again encrypted, the data is normally accessible to the operators of such services.

Cloud computing has taken away the ownership of data from individuals, and that feels like it has the seeds of some kind of revolution brewing.


Can you define "uninhibited"?


> can you imagine the scandal that would erupt if Google or AWS cloud was discovered to be eavesdropping on companies running things in their cloud

Remember the "SSL added and removed here" image?

https://thumbs.mic.com/MTBjNTQzNTMzZiMvbWVtejZOdjJsaUdUVkZEa...


That wasn't eavesdropping by Google. That was Google not using encrypted traffic on internal wires. And that changed a lot of years ago.


Yes, it was the US government eavesdropping on them without their consent, but the end result is basically the same.

Yes, that exact hole was patched, but the point is it wasn't the end of the world that the great-grandparent implied it would be.


Google Compute Engine didn't even exist at the time that slide was made, or at least was not publicly available. That slide was about government intercepting Google's traffic, not cloud customer traffic.


It was certainly smaller, but GCE was first publicly available in April/May 2013, and Snowden leaked things in June 2013. I'm not quite sure when this slide was released, but it was sometime after that.

Google moved to fix the problem after the start of the leaks. Pretty quickly (good for them), but after.


The slide was created long before Snowden leaked it, which is before GCE was publicly available. I said, "before the slide was made," not "before the slide was leaked."


I'm pretty sure RPC privacy boost was underway before the leaks. It was just launched more hurriedly after they came out.


I am pretty sure that this is a reference to cloudflare.


Google and AWS aren't eavesdropping directly. However, a lot of companies run unencrypted connections between their load balancers and their backend services, and we know from the Snowden documents that the US government does passive data collection there.


The USG does not need to look for weak points to do passive data collection.

Due to the third-party doctrine [0], they can simply demand access; they don't even need a warrant. Because there's no reasonable expectation of privacy for data you willingly gave to third parties.

[0] https://en.wikipedia.org/wiki/Third-party_doctrine


It's easier to do it quietly though. If there's unencrypted network traffic, they just need to demand access from someone with physical access to the switches, plant a listening device, and everyone with logical access will be blissfully unaware.

If they want to MITM encrypted traffic they need to demand access from somebody with access to the certificates, who is going to be higher paid and more likely to speak to at least a lawyer before granting access.


The point is that if you're communicating with someone via Google, encryption terminates at Google, not with the other party.


If that were discovered, nothing would happen or change. To some degree it has already happened with Windows 10 and Android/iOS for personal computing.

They wouldn't monitor it themselves, but they provide access to law enforcement agencies anyhow.



The encryption is still point-to-point, just that the website you are connecting to has chosen to make their "point" AWS or Cloudflare or whatever else. You could as easily host something in your own DC or from a machine under your desk.


You're not wrong, but the realistic alternative is having it the same way, just without any encryption.


Yes, but in that case caching proxies and other distributed approaches would still work out of the box as alternatives to CDNs. I am not sure what I have gained. Nobody cares about end to end email encryption. This would be a real benefit, but Google could not build profiles so easily...


> Nobody cares about end to end email encryption. This would be a real benefit, but Google could not build profiles so easily...

AFAIK Google states (in their privacy policy) that they do not do anything with the contents of your emails in a Gmail account.


Which is fine too, since not all communication needs to be secure (even on the internet).

These numbers are meaningless without proper context and can potentially create "security theater".


> not all communication needs to be secure

There are good reasons to make all communication, even trivial conversations, secure.

If we only secure "important" communications then we are unnecessarily broadcasting useful metadata to prospective attackers. Encrypted communications stand out, and that gives away who is sharing sensitive information, and when and where.

OTOH, if we secure all communication then we make the work of attackers or over-reaching governments much more difficult, because no single communication clearly says 'high value sensitive information'.


There are plenty of reasons to secure all communication as much as possible, regardless of the content.

Even if you don't care about what your ISP sees from a privacy standpoint, they can still inject ads or other content into your webpages if the connection isn't secured (at least, from the perspective of your ISP). And this helps prevent attacks against users in coffee shops or on other public, unsecured WiFi.


>Which is fine too, since not all communication needs to be secure (even on the internet).

There was just an article on the front page today about "I have nothing to hide" and why it's wrong.


An example may illustrate my point: download software zip/tar files from a non-secure link. Obtain the signature and checksum files over a secure link, and verify the integrity of the software offline.

Not every communication is about hiding personal stuff.
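
For what it's worth, the offline verification step is only a few lines. A rough Python sketch of the checksum part (file name and digest are placeholders; the expected digest is whatever you fetched over the secure link, and signature verification would additionally use something like gpg --verify):

    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "00" * 32  # placeholder: checksum obtained over the secure link
    if sha256_of("software-1.2.3.tar.gz") != expected:
        raise SystemExit("Checksum mismatch: do not install this archive.")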


And then find that your file doesn't match, because your ISP brokenly injected a human-targeted message at the start of your download, or some proxy corrupted it by stripping out the executable (yes, this happens)...

Absolutely nothing is lost by encrypting the downloaded data as well.


Moving the goalposts.


The acceleration of this global trend in recent years can reasonably be attributed to the actions of one person.



