Encrypted web traffic now exceeds 90% (netmarketshare.com)
802 points by gator-io on Nov 1, 2019 | 295 comments



We often hear the complaint here that nobody cares / cared about Snowden's revelations. But to me it seems he provided a lot of the impetus for having HTTPS virtually everywhere and for many of the instant messaging apps being end-to-end encrypted. Most of WhatsApp's users are as non-technical as it gets, and yet they use the kind of encryption that only computer enthusiasts were interested in just a couple of years ago. It's a great development (all the limitations and caveats notwithstanding) IMO.


BCP 188, "Pervasive Monitoring Is an Attack", sets Best Current Practice for the IETF, saying that mitigating pervasive monitoring is appropriate because it's an attack on the network. So that's a pretty long way from "nobody cares".

It's very carefully written: it does not propose to make a moral judgement about whether the things Snowden revealed are evil, only to show that in a technical sense they were an attack, and so it made sense for the network to try to mitigate them. Work like DPRIVE (privacy for DNS) was driven by this concern, and of course it influenced a lot of other work, including QUIC.


While a BCP is an indication that “nobody cares” is false, it’s pretty far from even a majority of people caring. If BCPs mattered, IP spoofing wouldn’t still be an issue on the Internet (BCP 38 has called for filtering spoofed traffic since 2000).


I think the reason the majority doesn’t care is that most people can’t imagine what could happen.

They’re reading my WhatsApp messages? So what?

I hardly have an answer for that, except: imagine you lived in an authoritarian state.


I like the simple explanation of "privacy": everybody knows that you're going to the toilet and what you'll do there, but you still close the door (most of us, most of the time).


Sure, but people close the door out of modesty, not really privacy. If there were a machine that provided a written transcript of what someone did in the bathroom, with no video/audio, I don’t think people would mind.

Like when you’re in high-security areas and have to be monitored in the bathroom there might be a door between you and your guard but no real privacy.

Or how people loudly object to strip-searches at the airport, but the scanner that sees everything and then only shows a cutout highlighting suspicious areas to pat down is mostly fine.


I think it's a pretty good, workable analogy actually. People don't mind if you know they go to the toilet as an abstract thing, but once you start keeping a notebook of who is going to the toilet and when, it starts getting creepy and undesirable. And that's just collecting metadata! Imagine if someone actually intercepted your sewer and analysed the makeup of your turds; folks would be up in arms.


We need this information to correctly gauge the interest on different kinds of foods we should keep available to purchase in the cafeteria.

It's also helpful as we can notify you early if you have some undiagnosed medical issue. You could unknowingly spread your illness to your children without this early detection. We're even able to reduce your monthly health insurance premium by providing this data to the insurance company!

This also enables us to find troubled individuals before it's too late and address drug issues before they build into full addictions. We'll be able to get them the attention they need to get back on their feet and become productive members of our society. (Maybe not here, though.)


Scary... but still very theoretical, and thus people can't really relate to it, I guess.


Similarly, you can still get falsely validated HTTPS certs via spoofing (not to mention older, easier validation exploits), and so it's possible all the newly encrypted traffic may result in most people having a significant false sense of security.


Ironically, Telegram markets itself as the most private and secure messenger, but in reality it's much less private than WhatsApp or Viber: regular (non-secret) Telegram chats are not end-to-end encrypted - if they were, you wouldn't be able to access them from a new device after authorizing with a password.


This marketing message always confused me: my techie understanding was that Telegram is actually one of the least secure messaging choices. If you want security, my understanding is that your preferences should go Signal, Whatsapp, iMessage, Hangouts or whatever Google's flavor-of-the-month messaging app is these days, Telegram, and Facebook.


I was following you until Hangouts...


Google's security is still better than both Telegram's and Facebook's. It's not great, but that's why it's #4 on a list of 6. If you care significantly about privacy & security, I would not use anything worse than iMessage, and even that's borderline.

(Your opinion of whether Google or Telegram is better will likely also depend upon whether you think malice or incompetence is the bigger threat. Google's business model relies upon it snooping on you, but they have really, really good security people ensuring that nobody else snoops on you. Meanwhile, Telegram has less of an incentive to actively violate your privacy, but they may let other parties violate your privacy by passively fucking up their engineering. They've done stuff like roll their own crypto algorithms, which is a terrible no-no for anyone who cares about security.)


How is iMessage worse than Hangouts? Is Hangouts even end-to-end encrypted? IIRC it isn't, and neither is Google Chat (a product which is replacing Hangouts, from what I can tell); just Allo and Duo are.


iMessage has several problems:

1. iMessage uses RSA instead of Diffie-Hellman. This means there is no forward secrecy. If the endpoint is compromised at any point, it allows the adversary who has

a) been collecting messages in transit from the backbone, or

b) in cases where clients talk to server over forward secret connection, who has been collecting messages from the IM server

to retroactively decrypt all messages encrypted with the corresponding RSA private key. With iMessage, the RSA key lasts practically forever, so one key can decrypt years' worth of communication.

I've often heard people say "you're wrong, iMessage uses a unique per-message key and AES, which is unbreakable!" Both of these are true, but the unique AES key is delivered right next to the message, encrypted with the public RSA key. It's like transporting a safe with the key to that safe sitting in a glass box strapped to its side.
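
To make that concrete, here's a rough sketch of the RSA-wrapped-AES pattern described above, in Python with the pyca/cryptography library. It's an illustration of the general hybrid scheme, NOT Apple's actual implementation:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Long-lived recipient key pair (iMessage reportedly uses 1280-bit RSA)
    recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=1280)

    def encrypt_message(plaintext, recipient_pub):
        aes_key = AESGCM.generate_key(bit_length=128)  # fresh per-message AES key
        nonce = os.urandom(12)
        ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)
        # The per-message key travels right next to the ciphertext, wrapped
        # with the long-lived RSA key -- so whoever obtains that one RSA
        # private key later can unwrap every AES key: no forward secrecy.
        wrapped_key = recipient_pub.encrypt(
            aes_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None))
        return wrapped_key, nonce, ciphertext

    wrapped, nonce, ct = encrypt_message(b"hello", recipient_key.public_key())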

2. The RSA key strength is only 1280 bits. This is dangerously close to what has been publicly broken. On August 15, 2018, Samuel Gross factored a 768-bit RSA key.

To compare these key sizes, we use https://www.keylength.com/en/2/

A 1280-bit RSA key has 79 bits of symmetric security; a 768-bit RSA key has ~67.5 bits. So compared to what has publicly been broken, the iMessage RSA key is only 11.5 bits, or about 2896 times, stronger.
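
For anyone wondering where the 2896 figure comes from: security levels are exponents, so the gap in bits converts back to a work-factor ratio:

    # difference in symmetric-security bits, as a work-factor ratio
    print(2 ** (79 - 67.5))  # ~2896.3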

The same site estimates that in an optimistic scenario, intelligence agencies can only factor about 1358-bit RSA keys in 2019. The conservative (security-conscious) estimate assumes they can break 1523-bit RSA keys at the moment.

(Sidenote: this is very close to the 1536-bit DH keys the OTR plugin uses; you might want to switch to the OMEMO/Signal protocol ASAP, at least until the OTRv4 protocol is finished.)

Per e.g. keylength.com, no recommendation suggests using anything less than 2048 bits for RSA or classical Diffie-Hellman. iMessage is badly, badly outdated in this respect.

3. iMessage uses digital signatures instead of MACs. This means that each sender of a message generates irrefutable proof that they, and only they, could have authored the message. The standard practice since 2004, when OTR was released, has been to use Message Authentication Codes (MACs) that provide deniability by using a symmetric secret shared over Diffie-Hellman.

This means that Alice, who talks to Bob, can be sure received messages came from Bob, because she knows she didn't write them herself. But it also means she can't show a message from Bob to a third party and prove Bob wrote it, because she also holds the symmetric key which, in addition to verifying the message, could have been used to sign it. So Bob can deny he wrote the message.

Now, this most likely does not mean anything in court, but that is no reason not to use best practices, always.
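
A minimal sketch of the deniability property, using Python's standard hmac module (illustrative only, not the OTR/Signal wire format):

    import hmac, hashlib

    shared_key = b"derived via Diffie-Hellman"  # both Alice and Bob hold this

    def tag(message):
        return hmac.new(shared_key, message, hashlib.sha256).digest()

    # Alice verifies a (message, tag) pair from Bob by recomputing the tag.
    # It proves authenticity to HER, because she knows she didn't make it --
    # but since she holds the same key, she COULD have. So the tag proves
    # nothing to a third party, and Bob can deny authorship.
    msg = b"meet at noon"
    assert hmac.compare_digest(tag(msg), tag(msg))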

4. The digital signature algorithm is ECDSA, based on NIST P-256 curve, which according to https://safecurves.cr.yp.to/ is not cryptographically safe. Most notably, it is not fully rigid, but manipulable: "the coefficients of the curve have been generated by hashing the unexplained seed c49d3608 86e70493 6a6678e1 139d26b7 819f7e90".

5. iMessage is proprietary: you can't be sure it doesn't contain a backdoor that allows retrieval of messages or private keys via some secret control packet from an Apple server.

6. iMessage allows an undetectable man-in-the-middle attack. Even if we assume there is no backdoor that allows private key / plaintext retrieval from the endpoint, it's impossible to ensure the communication is secure. Yes, the private key never leaves the device, but if you encrypt the message with the wrong public key (which you by definition need to receive over the Internet), you might be encrypting messages to the wrong party.

You can NOT verify this by e.g. sitting on a park bench with your buddy and seeing that they receive the message seemingly immediately. It's not as if the attack requires some NSA agent to hear their eavesdropping phone 1 beep, read the message, and then type it into eavesdropping phone 2, which forwards the message to the recipient. The attack can be trivially automated, and it is instantaneous.

So with iMessage the problem is, Apple chooses the public key for you. It sends it to your device and says: "Hey Alice, this is Bob's public key. If you send a message encrypted with this public key, only Bob can read it. Pinky promise!"

Proper messaging applications use what are called public key fingerprints, which allow you to verify out-of-band that the messages your phone outputs are end-to-end encrypted with the correct public key, i.e. the one that matches the private key of your buddy's device.
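
The idea is simple enough to sketch: a fingerprint is just a digest of the public key, rendered so two humans can compare it out-of-band. This is a toy illustration, not Signal's actual safety-number derivation:

    import hashlib

    def fingerprint(public_key_bytes):
        digest = hashlib.sha256(public_key_bytes).hexdigest()
        # show the first half in 4-char groups, easier to read aloud
        return " ".join(digest[i:i+4] for i in range(0, 32, 4))

    # If the blocks Alice's phone displays match the blocks Bob reads out
    # from his phone, no man-in-the-middle has substituted a key.
    print(fingerprint(b"...bob's public key bytes..."))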

7. iMessage allows undetectable key insertion attacks.

When your buddy buys a new iDevice like a laptop, they can use iMessage on that device. You won't get a notification about this, but what happens in the background is that the new device generates an RSA key pair and sends the public part to Apple's key management server. Apple will then forward the public key to your device, and when you send a message to that buddy, your device will first encrypt the message with the AES key, and it will then encrypt the AES key with the public RSA key of each of your buddy's devices. The encrypted message and the encrypted AES keys are then passed to Apple's message server, where they sit until the buddy fetches new messages on some device.

Like I said, you will never get a notification like "Hey Alice, looks like Bob has a brand new cool laptop, I'm adding the iMessage public keys for it so they can read iMessages you send them from that device too".

This means that a government issuing a FISA court national security request (a stronger form of NSL), or any attacker who hacks the iMessage key management server, or any attacker who breaks the TLS connection between you and the key management server, can send your device a packet that contains the attacker's RSA public key and claim that it belongs to some iDevice Bob has.

You could possibly detect this by asking Bob how many iDevices they have, stripping TLS from iMessage, and seeing how many encrypted AES keys are being output. But it's also possible Apple can remove keys from your device to keep iMessage snappy, and quite possibly replace keys on your device. Even if they can't do that, they can wait until your buddy buys a new iDevice, and only then perform the man-in-the-middle attack against that key.

To sum it up, like Matthew Green said[1]: "Fundamentally the mantra of iMessage is “keep it simple, stupid”. It’s not really designed to be an encryption system as much as it is a text message system that happens to include encryption."

Apple has great security design in many parts of its ecosystem. However, iMessage is EXTREMELY bad design, and should not be used under any circumstances that require verifiable privacy.

In comparison, Signal

* Uses Diffie Hellman, not RSA

* Uses Curve25519, a safe curve with 128 bits of symmetric security, not 79 bits like iMessage

* Uses MACs instead of digital signatures

* Is not just free and open source software, but has reproducible builds so you can be sure your binary matches the source code

* Features public key fingerprints (called safety numbers) that allow verification that no MITM attack is taking place

* Does not allow key insertion attacks under any circumstances: You always get a notification that the encryption key changed. If you've verified the safety numbers and marked the safety numbers "verified", you won't even be able to accidentally use the inserted key without manually approving the new keys.

So do yourself a favor and switch to Signal ASAP.

[1] https://blog.cryptographyengineering.com/2015/09/09/lets-tal...


Very interesting post, thank you for sharing!

> 2. The RSA key strength is only 1280 bits.

This reminds me that in France, unless cryptography is used only for authentication, it is considered a military weapon, and civil usage is restricted in its key strength. Above a certain strength, you technically have to give your key to the government!

I don't have a source, but fr.wiki [1] says that in 1999 the government allowed 128-bit keys to be used publicly without depositing them with the government. It also says that PGP was illegal in France until 1996 (considered a war weapon of category 2, whatever that means).

So I wouldn't be surprised if it were illegal over here in France to use key strengths above 2048 bits for end-to-end encryption...

[1] https://fr.wikipedia.org/wiki/Chiffrement#En_France


TIL that the content of Wikipedia pages changes per language. I clicked 'English' in the left pane hoping to learn more about what you are saying, but the English version does not have the 'En Europe' section. Not so great. Thanks for your post.


Different Wikipedia language editions have completely different people working on them, with completely different biases and politics behind them.


They are in fact entirely parallel Wikipedia encyclopedias written in different languages. Not only will articles have different information and be organised in a different way, whole families of related articles may be organised in different ways from one language to another.

This seems pretty reasonable seen for the whole encyclopedia, but I suppose if you assume that the language change option will just translate the page you're currently looking at then it's quite a surprise.


That’s not what they said; they said that iMessage has better security than Hangouts, and that this user wouldn’t use anything “worse”, i.e. further down their list, than iMessage.


It is worse in the sense that Hangouts does not make false claims about its security, so people who use it know they are using it for the features enabled by the kind of security it does provide (only between the user and Google), like searching chat history across devices.

iMessage can also only guarantee security between the user and Apple, due to Apple distributing the public keys (though to a lesser extent, because it uses worse crypto), but it does not provide usability features like searching full chat history across devices the way Hangouts does.


Sorry about that, you’re correct — I was missing the context of the parent-parent that was referred to.


Keybase is pretty decent now and is up there with the most secure apps.


Telegram does have some unique privacy-related features, though, that other platforms don't support. Examples: the ability to register without a mobile phone/app, an open source library (tdlib), the ability to edit messages, to delete messages for both sides (including images cached on the receiver's side), to clear chats for both parties, and auto-deleting messages.


They don’t claim end-to-end encryption by default, though. You make it sound as if you’ve made a revelation here.

Telegram has faults, I would even argue it has many, but it’s clear that only “secret” chats and voice/video calls are end to end encrypted.

WhatsApp, however, does allow you to download all of your messages from your device using WhatsApp Web, and it was recently shown to have an exploit/backdoor in the applications themselves. So in that context they’re comparable, in my opinion.


> They don’t claim end to end encryption by default though. You make it sound as if there is a revelation you made here.

They don't claim e2e encryption by default; they just use some very tricky words that non-technical users will read as meaning encryption.

From telegram.org:

"Private: Telegram messages are heavily encrypted and can self-destruct."

"Secure: Telegram keeps your messages safe from hacker attacks."

"Encrypt personal and business secrets."


"They don’t claim end to end encryption by default though."

They don't have to. The number of my peers (i.e. people who also major in CS) who think Telegram is more secure than e.g. WhatsApp is staggering. People don't really think about the protocol; they only think about what they hear on the news, or what their buddies who heard it on the news think.

And what they hear is "Telegram, the new encrypted messaging app, blah blah..." and then they hear the debate "Apple.. encryption.. LEA can't read messages". So they incorrectly count 1+1=3 and think Telegram is safe against LEA.

When you're online and you try to point out that Telegram uses a home-brew protocol, EXACTLY the same security architecture as Facebook (TLS), and that both were created by the Mark Zuckerbergs of separate nations, you'll very quickly drown in fanboys / sock puppets who come with the following arguments:

"WELL TELEGRAM'S ENCRYPTION HAS NOT BEEN BROKEN IN THE WILD NOW HAS IT???" (no need when you can hack the server and read practically everything)

or

"NOT TRUE TELEGRAM HAS SECRET CHATS" (which only works between mobile clients, and one-on-one chats, just like Facebook Messenger's opt-in end-to-end encryption. Like this one guy on the internet I talked to so eloquently put it: "I don't use secret chats because when I open my laptop, I want to type with my keyboard and not take out my phone every time I want to reply")

or

"PAVEL DUROV ABANDONED MOTHER RUSSIA TO BRING YOU THIS SECURITY" (which tells you absolutely nothing about the protocol and is no proof of any motivation towards any direction. When you're as rich as Durov you can choose any other country in the world and I suspect Dubai isn't treating him too badly).

or

"DUROV REFUSED BACKDOOR SO THERE IS NO WAY TO GET USER DATA" (which is simply not true, it's not like government agents can't hack servers, if Durov could deliver such systems, he'd be making five figure hourly wage hardening Fortune500 companies' systems)


Telegram refused to provide decryption keys to the Russian, US, and Chinese governments. That is a great sign to me.

Meanwhile, WhatsApp has a web interface (sic!) where law enforcement agents can request user-specific information, and probably chat logs, for whatever fake reasons they can come up with.

Telegram has 300 million users and is growing.


The Telegram founder also lies a lot. First he said that Telegram developers are not in Russia, out of the FSB's reach, but later proof emerged that they work from Russia, from the same office where the VK developers worked. Google "Anton Rosenberg" and his lawsuit. [1] Durov's public position ("this man is just a crazy freak") is very unconvincing, to say the least. I'd even say it is plausible that the Russian authorities have some leverage on Telegram, and all this conflict with RosComNadzor is just a publicity stunt. After all, the only "loss" for the Russian government is RosComNadzor's reputation, which is bad anyway.

[1] https://medium.com/@anton.rozenberg/friendship-betrayal-clai...


>The Telegram founder also lies a lot. First he said that Telegram developers are not in Russia, out of the FSB's reach, but later proof emerged that they work from Russia, from the same office where the VK developers worked.

I guess he has to protect his team. The US government tried to bribe his programmers into weakening the system's security.


WhatsApp also doesn't encrypt backups to iCloud, which it nags you to turn on.


Also, Google backups on Android are not encrypted. That's so bad on so many levels... :(


To add to the irony: Telegram has evolved into a bit of a darknet of its own, where people casually share content that would be near impossible to find on the surface web.


how do I get in on this?


Install the telegram app, register, and use the search button to find whatever it is that you want to.

It isn't a darknet in terms of anonymity; it's a wildnet in terms of content.


First rule of fight club


Tell all your friends while pretending that it's a secret?


Accessing chats from a new device has no technical relation (or constraint) to the lack of end-to-end encryption. Wire encrypts all chats end-to-end and still syncs conversations to multiple devices on multiple operating systems. It does limit the sync to the last 30 days, but that's mostly for cost reasons rather than technical ones.

Edit/correction: Neither Wire nor Signal sync conversations that have happened before the setup of a new device to the new device.


Signal also features multi-device end-to-end encryption.

This non-technical argument feels more and more like a shill talking point, because the claimed constraint is NEVER backed by technical arguments.

However, it feels intuitive to non-techies: "End-to-end means only one end and I have many devices therefore I have many ends so I can't end-to-end with every end, so better not end-to-end..."


If you can view your old conversations from a fresh installation on a new device, then this automatically implies that some 3rd party has access to your keys. I.e., your conversation cannot be considered truly private.


It can also imply syncing over an end-to-end encrypted (and verified, using QR codes at setup) channel between the devices being synchronised. I believe this is what Signal does, for example.


No it doesn't. The sync could be device-to-device, or it could be encrypted in its storage on the intervening server and require that the user provide secrets on the new device.


Is it syncing through a centralized server or are they synced between devices?


I wouldn’t loop backups into that criteria. iMessage syncs your message history to all of your devices in an end to end encrypted way: https://support.apple.com/en-us/HT202303

and WhatsApp allows users to back up/restore their messages with iCloud (unencrypted)


Glaringly, Signal still doesn’t have a way to back up and restore chats on iOS.


Telegram is not at all secure; the only really secure product is Signal, which is what Snowden actually recommended.


I continue harping on this point often. The usability, reliability and feature set of Signal are far behind Telegram or WhatsApp. If you want a platform that sometimes works, may be slow in delivering messages, may send false “device changed” notifications, and doesn’t allow a way to backup and restore chats (on iOS), then Signal is the one. If you don’t like any of these deficiencies, then Signal is the last thing to suggest. There’s no point using a so called “secure messenger” if it’s going to numb users to accept device change notifications without out of the band verification because the app and platform are buggy to generate those when nothing has changed. Yes, this is anecdotal, but I don’t trust that Signal promotes security or secure messaging practices.

Instead, use Matrix (with end to end encryption enabled) or Wire.


Matrix is not usable as it is:

1. Bikeshedding has led to a reduction in security agility: any change has to be implemented first in the protocol, then in the SDKs, then in the clients. This process can take years.

2. Riot is the only client that delivers proper E2EE; the majority of clients don't feature it.

3. E2EE is still not enabled by default.

4. IRC-bridges will break E2EE

5. Decentralization does break up large silos and makes for less tempting targets, but now you have a bunch of server admins who have personal relationships with their users, and who have access to the content (when it's not end-to-end encrypted) and the metadata (always).

6. Riot's key management and fingerprint verification has been a nightmare. Thankfully this is about to change.

Until all of these are fixed, i.e.

until all clients enforce E2EE, until the protocol design is safe enough, until client vendors are required to keep up with security, until no bridges are allowed, and until fingerprints are trivial to compare, I will not use Matrix, and I think no one should.


Are you saying that when you log into Telegram on a new phone it downloads the chat logs from somewhere? And that this only requires a short password?

At best this'd mean the logs are encrypted using the password as the key...

Are you sure the user isn't copying anything between devices? Chat logs and a keyfile maybe?


Well, they DO have optional 2-factor auth, and yes, they definitely DON'T copy any keys between devices (like WhatsApp does when launching a web version).


>if they were, you wouldn't be able to access them from a new device after authorization with a password.

Telegram isn't great, but if your password were used to derive the encryption key, that feature would be entirely feasible.
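
For what it's worth, that's straightforward to do with any password KDF. A sketch of the general idea (this says nothing about what Telegram actually does server-side):

    import os, hashlib

    password = b"correct horse battery staple"
    salt = os.urandom(16)  # stored alongside the ciphertext, not secret

    # Memory-hard KDF; only someone who knows the password can re-derive key.
    key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

    # The server could store only salt + ciphertext; a new device re-enters
    # the password, re-derives 'key', and decrypts the synced history.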


They are not end-to-end encrypted, but they are encrypted. Also, I read that WhatsApp is going to switch to the same mode for user convenience: https://bgr.com/2019/07/29/whatsapp-update-to-bring-multi-pl...


Oh, come on! If Telegram can decrypt chats for a user, they can decrypt them whenever they really want to. Any other kind of encryption is irrelevant - against third-party attackers, TLS works well enough.


Yeh "endpoint security" is hard. You need some trusted specialized hardware or network.

https://en.wikipedia.org/wiki/End-to-end_encryption#Endpoint...


Dealing with endpoint security is a really tough problem, but I have a pet project that pushes the price per endpoint just below $500: https://github.com/maqp/tfc


Agreed, Snowden is significant because he was able to encourage enough people to INSIST on strong privacy/encryption. Then it all comes down to basic game theory. Why would a company ever want to release a product without strong (end-to-end) encryption when users never complain about their data being encrypted? The only reason companies don't encrypt is when they have a vested interest in spying, either in their own interest or the government's. Anytime I see something without strong encryption, it's a red flag to me that something nefarious is up.


> Why would a company ever want to release a product without strong (end-to-end) encryption when users never complain about their data being encrypted? The only reason companies don't encrypt is when they have a vested interest in spying...

I think the second thought doesn't follow from the one before it. In my experience, the main reason companies don't encrypt is that it simply makes it that much harder to debug problems and consistently provide successful connections for users. HTTPS can fail in ways that HTTP does not.

If users aren't clamoring for encryption as a feature, the main reason not to provide it is simplicity and quality of service along the axis users appear to care about. If users want encryption enough that they're willing to tolerate that sometimes browser misconfiguration or server side error will cause the connection to fail because it cannot be trusted, then companies will implement it.


Encryption in the last decade has also become a hell of a lot easier to implement, so "why wouldn't we just do it" has less opposition


I think this is underappreciated.

High-quality crypto libraries / systems lead to broader implementation, which makes it harder for elements in mostly-free societies to pressure implementers.

It's one thing for the NSA to quietly lean on AT&T (and only AT&T). It's a completely different thing for them to quietly lean on 1,000 different organizations and authors.

Similarly, it's easy to sneak a CALEA-alike amendment into national law when only PGP exists. It's harder when the narrative becomes "The government wants to take {beloved product used by millions} away."


I don't think I know many people who insisted on strong privacy/encryption. However, after the Snowden revelations people did consider it a preference. In that sense, it helped.


I think this has a lot more to do with Google punishing web results without https than it does with Snowden, couple that with Cloudflare and Let's Encrypt and you have an easy path.


This is the more important factor at play here. Browsers (Google Chrome mainly) are forcing everyone to go SSL, lest you end up with an 'insecure site' warning for all your visitors. Most websites don't care about the NSA intercepting their data.


Practically speaking, I think you're right - the "not secure" badge of shame in Chrome is the most important convincing factor for website owners.

But getting that through at Google must have taken some convincing of various execs, and I'm sure Snowden helped with that.


Yeah, Google has done much for privacy, and it's kind of ironic how people believe those who say it's malicious. The only crime Google ever committed was being successful.


I think you can separate their punishing of non-HTTPS sites from the lack of privacy in their core products.


Snowden was a factor, but not the only one:

- CPUs didn't have hardware acceleration for encryption (AES-NI) like they have today, so activating SSL on your webserver actually decreased your throughput a lot

- It was expensive and complicated to get a certificate for your website, now LetsEncrypt provides them freely and easily


Wasn’t the server load for SSL something like 3-5%? That doesn’t strike me as much of a factor compared to the complexity involved, especially with the confusion added by e.g. Thawte hawking their extended validation product.


"On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load" according to Google back in January 2010 [1]. This was about the same time as Intel introduced AES instructions, but the post suggests that this wasn't a big factor in their conclusion that TLS simply isn't computationally expensive.

[1] https://www.imperialviolet.org/2010/06/25/overclocking-ssl.h...


> Wasn’t the server load for ssl something like 3-5%?

Depends on the packets per second being handled. I’ve pegged a CPU core easily doing encryption just a bit over a decade ago, due to a high data rate. If you’re pushing >500Mb/sec without CPU-accelerated encryption (or NIC offloading), it puts a pretty hefty strain on resources.
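
If you want a rough feel for the numbers on your own hardware, here's a quick-and-dirty benchmark sketch (illustrative only; results vary wildly with AES-NI support and library build):

    import os, time
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key, nonce = AESGCM.generate_key(bit_length=128), os.urandom(12)
    data = os.urandom(64 * 1024 * 1024)  # 64 MiB of random plaintext

    t0 = time.perf_counter()
    AESGCM(key).encrypt(nonce, data, None)
    print(f"{64 / (time.perf_counter() - t0):.0f} MiB/s")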


My P4 could do >400Mbps AES128 in 2003. (We tested encrypted connections over FireWire.)


I feel Firesheep deserves at least some of the credit too: https://en.wikipedia.org/wiki/Firesheep


Firesheep co-author here. Thanks and agreed :)


The latest Firefox tells me that the site http://codebutler.github.io/firesheep/ is insecure

I got there from the official blog https://codebutler.com/2010/10/24/firesheep/

Maybe force HTTPS when requesting HTTP?

https://drive.google.com/file/d/1maSpqYfFoBoCyao14VKzLKPMlm9...


Kudos for your work!


Definitely. Considering the plethora of unencrypted wifi SSIDs that were out there back then, it was huge. Massively reported on in the MSM, too


Snowden played no role in HTTPS adoption, because people don't seriously expect HTTPS to defeat the NSA. Most people's threat models are much more modest.

HTTPS adoption has a lot more to do with Google and Mozilla pushing it in their browsers, and with Let's Encrypt making it easy to get certificates. I have mixed feelings about that - the price of easy certificates is that spoofing became far easier.


It is a very fair point that Snowden's revelations had an impact. However, the impact you note was mostly technical.

The public backlash to these revelations is what seems lacking. They had very small political effects, and seemingly very little effect on the NSA. It did not change its stance much, and there weren't really consequences for what the NSA was doing.


What’s more important though? The public can’t particularly change things at that level. They don’t live at that level. It’s our job to help them. Just as they help me on non computer related stuff all the time. A barber shouldn’t be in charge of web encryption. It’s on us.


Also, I think the public has no problem with the government spying on other people. They just don’t want it spying on them. So in that regard, not opposing the policy but instead mitigating the risk to your own communications is an expected result.


Though that gets... interesting, when we remember that the revelations were that the NSA was spying on <insert person here>, as it was/is untargeted dragnet “surveillance”. People still didn’t care though, politically


What happens if an agency gets in deep with one of the common trusted authorities shipped with every browser, or is an authority itself, or just hacks their root keys, or buys access like they did with RSA? It seems like they could man-in-the-middle all day, and the only difference would be the cert issuer, which means it would be invisible if used in a limited fashion.


They could definitely do that. They could also mandate the use of "national security certificates".

https://en.wikipedia.org/wiki/Kazakhstan_man-in-the-middle_a...


It is still better than a situation where everyone uses HTTP and naively believes that the authorities will respect their right to privacy.


I'd almost prefer to be on HTTP knowing I was insecure than be on HTTPS wrongly believing I was secure.


Well, I don't really trust random certs even when they're signed by a respected CA -- but I still prefer using HTTPS. Even if the cert is fraudulent, HTTPS is still encrypting stuff and will protect me from other random attackers.

Security is never a binary secure/insecure proposition. There are shades of gray. The key is to use what security you can, but never think "I'm secure now".

As an old mentor once told me: the moment that you think you're secure is the moment that you're at the greatest risk, but you should still lock your door.


They teach that adage in business school too. That when you think you have full control of an organization is when you have the least control.


If people really listened to Snowden, they wouldn't be relying on CAs for certificates.


There have been fraudulent certificates in the wild in the past, but the CAs issuing them usually get kicked out pretty quickly. That's what Google's Certificate Transparency project is for. And they are increasing requirements further and further. Hopefully one day we'll get to a state where the infrastructure of multiple independent companies in different countries needs to be compromised for a single successful forgery. But even now, certificate transparency has greatly reduced the number of entities able to forge certificates.


The problem is, companies like VeriSign offer LEA services.

Quoting https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1591033

“Verisign also operates a ‘Lawful Intercept’ service called NetDiscovery. This service is provided to ‘... [assist] government agencies with lawful interception and subpoena requests for subscriber records.’

If you now try to search for NetDiscovery or LEA services for CAs, you won't find any, but I guarantee you they haven't disappeared anywhere.


The CA doesn't have anything that helps you do Lawful Intercept. They just vouch for people's identities.

If you can persuade them to fraudulently vouch for your agency as being some other subscriber then this unavoidably produces a smoking gun which everybody can see, just like when Israel produces fake passports so its agents can travel abroad to murder people.

It doesn't let them passively intercept. The CA could not, gun to its head, help you do that. The mathematics just doesn't work that way, any more than the US Federal Reserve could intervene to make three dollars twice as many as five dollars.


That paper is from April 2010 which is a different age in terms of internet encryption. Just for comparison, Google only started offering an encrypted version of its service in May 2010: https://googleblog.blogspot.com/2010/05/search-more-securely...

They quickly realized the problems that you describe. In Nov 2011, the Certificate Transparency project by Google had its initial commit: https://github.com/google/certificate-transparency/commit/6a...

In Chrome they have since enforced CT compliance for certificates: https://groups.google.com/a/chromium.org/forum/#!msg/ct-poli...

CT requires that each certificate issued needs to be contained in both a Google log and a non-Google log: https://github.com/chromium/ct-policy/blob/master/ct_policy....

This means that fraudulently issued certificates either won't work, or will be contained in public logs run by Google (or Google needs to be coerced by the authorities as well).


State actors.


I don't consider a cert trustworthy just because it's signed by a CA, unless that CA is mine or one run by someone I personally know and trust. I came to this position before Snowden, though.


In the CA model, is anything 100% yours? A signed cert has to depend on someone you don't know.


> A signed cert has to depend on someone you dont know.

No, it doesn't. If it's signed by my own CA, then I clearly know who signed it. Likewise if it's signed by a CA run by someone else I actually know.

The point of the signing is to have someone I trust validate that the cert they signed is trustworthy even if I don't know the entity that made the cert they signed.


Unless it's self-signed. Presumably you do know yourself well enough?


I feel like Namecoin and Ethereum Name Service are the most promising replacements for certificate authorities that I'm aware of. Are you aware of any better suggestions?


> yet they use the kind of encryption that only computer enthusiasts were interested in

I hear this so often about WhatsApp - that it's end-to-end encrypted... But I really have no proof that it's true, or that I should trust Zuck.

I am sure you can inspect the messages being communicated, and on the surface you'll confirm to yourself that they are encrypted. But how do you know there are no weaknesses in the design? How do you know they didn't "flip the switch" to allow a backdoor?


WhatsApp does control the clear text on both ends. They can do whatever they want with it.


Maybe a noob question, but are WhatsApp's messages secure from Facebook? Would some motivated employee at Facebook be able to read everyone's messages if they wanted? If not, how do we know?


As much as I despise Facebook and its properties, the “how do we know” question can only be answered based on the trust that there would be at least one person in the company/team who would blow the whistle if the end-to-end encryption were removed (with their knowledge, and not through some state-sponsored hacking).

With that background, a motivated employee cannot read WhatsApp messages that they have not sent or received themselves because WhatsApp uses the Signal protocol implementation. Coming to your first question, WhatsApp does share metadata with Facebook. So the fact that content isn’t shared is a moot point because a lot can be inferred from metadata alone to target people for any purpose.

So WhatsApp is not really a secure messenger if Facebook is part of your threat model and is considered an adversary or an adversary who can be easily coerced or compromised.


If you’re talking about decrypting messages created and read via SSL (which is what they imply is the case), it’s not possible unless you have the private key, as opposed to the widely available public key.

I doubt it’s lying around in Facebook’s repositories, but I’ve never worked there so I cannot say that with certainty.

This is all assuming they are even using modern SSL and are careful with user data. Unfortunately, not a great track record there for FB.


>But to me it seems he did provide a lot of the impetus for having HTTPS virtually everywhere

HTTPS connections are full of 3rd-party surveillance systems that still have access to, and monitor, parts of the cleartext. WhatsApp is connected to the Facebook data vacuum (yes, just "metadata", but as the Snowden revelations you cite show, metadata is the desirable surveillance record).

If anything, this is a step backwards, because it uses the pretense of security while providing none, really just being a fight for exclusive data among multiple corporate surveillance systems.


We need to promote the use of messaging apps which default to SSL. It shows a user-focused product and perhaps a healthy company.


SSL was THE thing in 1995. End-to-end encrypted email has existed since 1991 (PGP), and for instant messaging you've been able to use OTR since 2004.

End-to-end encryption is what is needed; SSL is the bare minimum for everything. SSL is the seatbelt + airbag: you can't have a car without those anymore. E2EE is the ACC+AEB+ABS: you shouldn't have a car without those anymore.


[flagged]


19 days before Snowden flew to Hong Kong, former FBI counter-terrorism agent Tim Clemente spilled the beans on CNN[0] (for context, information from a phone call between one of the Boston Marathon bombers and his wife had been leaked to the media):

>BURNETT: Tim, is there any way, obviously, there is a voice mail they can try to get the phone companies to give that up at this point. It's not a voice mail. It's just a conversation. There's no way they actually can find out what happened, right, unless she tells them?

>CLEMENTE: No, there is a way. We certainly have ways in national security investigations to find out exactly what was said in that conversation. It's not necessarily something that the FBI is going to want to present in court, but it may help lead the investigation and/or lead to questioning of her. We certainly can find that out.

>BURNETT: So they can actually get that? People are saying, look, that is incredible.

>CLEMENTE: No, welcome to America. All of that stuff is being captured as we speak whether we know it or like it or not.

This could be coincidental timing, but I've always wondered if the Snowden leak was a way of controlling the national discussion around the issue and putting an agent in place (Snowden) who could be a relatively moderate voice that the pro-privacy crowd could group around, while also creating a dramatic story with the potential for international espionage that allows pro-surveillance voices to distract from the they're-spying-on-us narrative by accusing Snowden of being a Chinese/Russian pawn. I don't feel comfortable saying Snowden is still working for the US government, but I'm certainly suspicious of him.

[0]: http://transcripts.cnn.com/TRANSCRIPTS/1305/01/ebo.01.html


Snowden released a large collection of documents. Judging by his interview with Joe Rogan, he's a passionate advocate for encryption and says that the US is creating a tool for complete oppression. It's harder to get more apocalyptic than that.


If you want to win against a view, select a leader from your pocket, make him look plausible, and have him take control of the whole view. At any point you desire, let that person discredit himself and take the whole view down with him.


Well, if Edward Snowden were a CIA asset, how would you know? And if there's no way for us to know whether Edward Snowden is a CIA asset, and he has every appearance of an independent actor, why should we care?


I think that it's not actually possible to know anything, and your only real options after recognising that are to reject the pursuit of truth entirely or fall back onto probabilistic models instead of binary beliefs. Not knowing whether or not Snowden is a CIA asset leads me to the question, "how likely is it that Snowden is a CIA asset?" As to why we should care, here's another question: "if Snowden were a CIA asset, how would that change my future behavior?" If it wouldn't change your future behavior, carry on not caring. If it would, then ask yourself "given that I may be wrong about Snowden being a CIA asset, which way would I rather err for an optimal risk/reward ratio?" Then you consider all three answers, and decide how to act in the future despite never actually coming to a conclusion about whether Snowden is a CIA asset. Maybe you think the risk of him being an asset is so low that you don't mind risking the chance that he isn't, or maybe you think it's reasonably likely that he could be an asset while not believing that your personal risk from being wrong is worth worrying about. Or, maybe you change your behavior.


I listened to that interview, and purposefully didn't include my take on it above because I didn't want my crazier belief to distract from my crazy belief. Let's dive into the deep end of my personal crazypool: he came off very politician-y to me, even down to the purposeful missteps. People trust a speaker more when the speaker says something a little bit offensive, probably because it's a sign that the person will speak their mind regardless of what anybody else thinks. That moment where he said that no judge would refuse to sign a warrant for "Abu Jihad" or "Boris Baddenov" looked just like this to me -- it's offensive, but it immediately juxtaposes an offensive faux-Muslim name with an offensive faux-Russian name, and it's weirdly still socially acceptable to make fun of Russians this way while the Muslim statement is not socially acceptable, so any in-depth discussion of the faux pas gets bogged down unpacking the distinction between the two and making the Muslim caricature seem more understandable in context. The bit where he keeps pulling Rogan away from specific lines of questioning in the beginning, not to avoid them but instead to tell his whole life story leading up to the leaks, seemed like the sort of thing an aspiring politician would do when presented with a microphone and a long format. He acted like his first book had to be autobiographical because the publisher insisted on it, but really, we all know Snowden could find a publisher for a non-autobiographical book. It's an obvious deception; he wants to familiarize the public with his personal story while also acting like he isn't trying to do that. If Snowden gets pardoned sometime between 2024 and 2032 and subsequently makes a run for President, I'm gonna be scared that one of the TLAs has a mole at the top of the one-eyed pyramid.

I'm not making any conspiratorial claims about this part, but as an aside it was weird to me that he claimed cellphone IMEIs can't be changed. It's not normally done, but it can be. I wasn't sure if that was dumbed down for Rogan's audience, a misspeak, or actual ignorance on Snowden's part.


Wait did he seriously say IMEIs can’t be changed? Surely he knows better: they definitely can. It’s not easy, but it’s doable. Or was, a decade ago at least.


Here's a transcript[0]:

>Edward Snowden: (02:26:27) They’re two globally unique identifiers that only exist anywhere in the world in one place. This makes your phone different than all the other phones. The IMEI is burned into the handset of your phone. No matter what SIM card you change to, it’s always going to be the same and it’s always going to be telling the phone network. It’s this physical handset.

[0]: https://www.rev.com/blog/joe-rogan-edward-snowden-podcast-in...


>> I don't feel comfortable saying Snowden is still working for the US government, but I'm certainly suspicious of him.

Well. My deep belief is that individuals who are truly dangerous to the system (here, any system that is powerful enough, though one can view it globally as a continuously evolving, technology-driven wanna-be AI) get separated from power ASAP and then directly eliminated if needed. One would have to be very naive to think that the following makes any sense: "a young boy from a government family says that the whole world is controlled by a few; he wants to stop it and so gets a platform to alarm about it via main media channels, supposedly controlled by the same few". Another point of the whole move was to identify people (e.g., you and me) who will not buy this, so they will likely avoid buying other incoming BS.


> One would have to be very naive to think that the following makes any sense: "a young boy from a government family says that the whole world is controlled by a few; he wants to stop it and so gets a platform to alarm about it via main media channels, supposedly controlled by the same few".

I'm very sympathetic to this view, obviously; at first glance it seems too good to be true.

Of course we're supposed to believe that the media isn't controlled by these same few, that Operation Mockingbird ended by the time CIA Director George H.W. Bush announced in 1976 that the CIA would stop paying journalists, and that Operation Mockingbird was actually limited to a couple of wiretaps rather than the full-scale infiltration of the press previously reported through non-official channels (though they did admit that they paid journalists, they claimed that this was not done as part of Operation Mockingbird). Obviously the CIA still has people in media agencies, but we're crazy for believing that; it supposedly isn't the case. But even with a CIA-infiltrated media, I could see the story getting out. The CIA can't be everywhere; it's possible that by going through more than one media institution (including a British one, as if that mattered) and also contacting a documentary filmmaker who had a previous run-in with the federal government (she claims to have been put on the highest threat-level list the DHS has after making a film critical of the occupation of Iraq), he was able to make sure that the government couldn't stop the information from coming out through some channel or another. Or they could have been worried that by blocking it in the press they would provoke him to release unredacted versions that would reveal even more (yes, he claimed to not have these by the time he entered China, but he could have given copies to another still-unknown source -- or he could have been lying about not still having them in some format, perhaps steganographically hidden).

I could also actually see a whistleblower escaping capture/death by going to an area controlled by a foreign power and making the defection public immediately, so that any suspicious death would be seen as an obvious assassination without a fair trial. That seems plausible to me, whether or not it actually happened.


I don't know why so many people here are patting themselves on the back over this. This is not the kind of encryption people were talking about in the 90s and 00s. A lot of this encryption is not point-to-point. It merely secures the user's interaction with some middleman (or their server). What would the numbers be if you subtracted all the traffic that can be snooped on by Google, Amazon, and Cloudflare?


Several reasons:

- Don't let the perfect be the enemy of the good.

- This eliminates an entire class of attacks, namely, man-in-the-middle.

- A lot of (most?) user interactions require the server to know what the user wants, and it's unclear how this can happen if the server can't view the user's data.


MITM is not mitigated at all by HTTPS. What makes you think that? Do you understand how certificate signing works?


How does certificate signing not mitigate man-in-the-middle? Say you have control of DNS and can fully impersonate and replace any server. You present a valid certificate for the server. It has the public key, which the client uses to encrypt traffic. The man-in-the-middle doesn't have the private key. You convince the client it's talking to the right machine, but then you can't understand anything the client has to say, because it uses the legitimate public key.

Say you have control of the infrastructure and you forge a certificate. You'll have a hard time getting the client to trust the certificate unless you have compromised the signing key of a certificate authority and generated an apparently valid cert.

So, can it entirely prevent it? Can I get Verisign to issue me a certificate for G00GLE INC.? If you can alter the client's list of trusted authorities, you can make yourself an authority, but then you've already compromised the client. If you can get the server's private key, you've compromised the server. You can get creative, sure... probably you stand a better chance of beating the people in the chain than the technology... but the difficulty of doing so seems to amount to 'mitigation' at the least.
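
For the record, the check that makes this hard is baked into every TLS client. Roughly this, using Python's standard library (the hostname is just an example):

    import socket, ssl

    ctx = ssl.create_default_context()  # loads the system's trusted CA roots

    with socket.create_connection(("news.ycombinator.com", 443)) as sock:
        # wrap_socket verifies the chain AND that the cert matches the
        # hostname; a MITM without the right private key fails right here.
        with ctx.wrap_socket(sock, server_hostname="news.ycombinator.com") as tls:
            print(tls.getpeercert()["subject"])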


Parent isn't wrong... technically.

Certificate Transparency exists solely because any CA can issue an SSL cert for any domain and use it to MITM via a proxy.

You are trusting every CA out there, not just Verisign. That is the ultimate weakness. Any CA can issue a cert for any domain.

The Expect-CT header is the only thing protecting you from a MITM, and it's not even real protection; it's trivial to strip that header as the MITM before proxying to the client.

How do you think mitmproxy[0] works?

[0] https://mitmproxy.org/


...do you? Unless the attacker has access to the private key associated with the SSL certificate, they can't read any HTTPS traffic encrypted via that certificate - mitigating the ability of that bad actor to perform a MITM attack.


And even if they get a key, they will show up in the CT Logs eventually and the attack becomes public.


The effectiveness of CT logs isn't a given unless the website uses CT monitoring or is a huge company. A [delegated or non-delegated] DNS takeover, or an IP address release (e.g. cloud providers re-assigning an IP to another customer), could allow someone to generate a certificate for some-forgotten-subdomain.medium-sized.company.com using the ACME HTTP challenge. Of course this is mitigated by properly managing your DNS, and CT monitoring is encouraged everywhere.


In other words, the effectiveness of CT logs is a thing. There are multiple services which will do this for you for free (Cloudflare and Facebook at least make it trivial to get notifications for your domains), and it's a level of visibility which almost nobody had just a few years ago.


crt.sh offers an RSS feed; I use that to track all certificates issued for my domain. It doesn't really need anything expensive or complicated.

CT logs don't mitigate any of the attacks, but they make them very, very visible if they happen. Especially if a CA goes rogue, this will be immediately visible and provable.
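
For anyone who wants the same thing without RSS, crt.sh also has a JSON endpoint you can poll and diff against the certs you actually issued. A sketch (I'm assuming the ?output=json interface here):

    import json, urllib.request

    domain = "example.com"
    url = f"https://crt.sh/?q={domain}&output=json"

    with urllib.request.urlopen(url) as resp:
        entries = json.load(resp)

    # Anything listed here that you didn't issue yourself deserves a look.
    for e in entries[:5]:
        print(e["not_before"], e["issuer_name"], e["common_name"])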


Cert pinning mitigates this too, right?


IF you pin services to a key you control, this mitigates the problem of bad guys obtaining bogus certs, BUT now you need to manage the pinning application to ensure it knows about any new keys before they roll into use. This may force you to compromise on your rotation schedule: maybe you'd prefer to use new keys for the new cert you're buying this week, but alas, the new app version is still waiting for Apple sign-off, so it'll have to be next year instead.

IF you pin to an intermediate key, which is under the control of a CA, then bad guys who obtain certs from that intermediate will not be inconvenienced by your pin, but these keys are intentionally long-lived (they are protected in an HSM), so the rotation issue isn't as fraught.
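
A rough sketch of what the pinning check itself looks like -- hash the server's SPKI and compare against a baked-in set (the pin value below is a placeholder, not a real hash):

    import base64, hashlib, ssl
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    PINS = {"<base64 SHA-256 of an SPKI you control>"}  # placeholder pin set

    def spki_pin(hostname):
        pem = ssl.get_server_certificate((hostname, 443))
        cert = x509.load_pem_x509_certificate(pem.encode())
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo)
        return base64.b64encode(hashlib.sha256(spki).digest()).decode()

    # Reject the connection unless the pin matches, whatever any CA says.
    assert spki_pin("example.com") in PINS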


Only if there's an Expect-CT header, which is trivial to strip.


Well, no, the CT log will include any valid certificate presented, so any widespread attack would have to outright block access to the CT logs, or you're going to have a bad time.

Expect-CT only controls whether the browser will warn the user if the cert is not in the logs; it does nothing about certs being entered into the CT logs themselves.


Yeah, I think they do, actually.

Two things...

Proxies are a thing, and stripping the Expect-CT header is trivial.

Any CA can generate a valid SSL cert for any domain.


The above poster is still technically correct, though; getting the cert is just one more obstacle in the way of the attack, and it isn't as much of an obstacle as one would think for some actors (see China).


Certificate transparency would make it blatantly obvious if Chinese CAs were issuing bogus certificates. (And if they issued certs without submitting them to CT logs they wouldn't be accepted by Chrome or Safari, so it wouldn't be very useful.)

Sure, they could do it, but it wouldn't be long until there were no Chinese CAs trusted by any browser.


An attack like this could still be done for CLI clients/library clients such as curl (ie. server-to-server connections), none of which I'm aware of incorporate CT log verification.


You can add CT checks to such software with e.g. ctutlz (a Python library towards this end), but then you need to be on the sort of treadmill browsers are on: regular updates, and disabling security features like CT checking when the software hasn't been updated in a timely fashion.

All the non-Google modern CT logs moved to rolling annual logs. Cloudflare's Nimbus, for example, is actually logs named Nimbus2019, Nimbus2020, Nimbus2021 and so on. Nimbus2019 is for certificates that expire in 2019. Most of them are already expired 'cos it's November already, so it doesn't see a lot of updates; in January Cloudflare can freeze it, and eventually they can decommission it. Browsers will stop trusting it, but it won't matter because those certs already expired. If you go get yourself a new Let's Encrypt cert now, it'll probably be logged with Nimbus2020; come January 2021 you won't care if Cloudflare freezes it and starts shutting it down.

As a result you need frequent (say, monthly seems fine) updates to stay on top of new logs being spun up and old ones shutting down, or else you'll get false positives.

For CLI or server software that has a regular update cadence anyway I can see this as a realistic choice, for a lot of other software it'll be tough to do this without more infrastructure support.


But the server can't do header checks for CLI tools prior to certificate negotiation, so it would be difficult to get away with. Not impossible, but it'd limit your targets to IPs that are exclusively non-CT clients. Any slip-up and you'd be busted.


Literally the only reason TLS uses certificates is to mitigate Man-in-the-Middle attacks.

Establishing a shared secret with another party over a public channel is not that hard (Diffie-Hellman, RSA). The hard part is to ensure the other party is who they say they are. Certificates tackle this by having a trusted party (CA) cryptographically bind the shared secret to an identity.
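A toy illustration of the easy half, using X25519 from the Python cryptography library (the variable names are mine; real TLS then feeds the shared secret into a key schedule):

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    # Each party generates an ephemeral key pair; only the public halves
    # ever cross the (potentially hostile) wire.
    client_priv = X25519PrivateKey.generate()
    server_priv = X25519PrivateKey.generate()

    # Each side combines its own private key with the peer's public key
    # and arrives at the same shared secret.
    assert (client_priv.exchange(server_priv.public_key())
            == server_priv.exchange(client_priv.public_key()))

Notice what's missing: nothing here tells the client whose public key it actually received, which is exactly the gap certificates fill.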

There are issues here, but if you can read and modify the traffic between my PC and the HN servers, you still won't be able to read and modify the traffic.


Technical corrections:

The binding is over a _public key_ not a _shared secret_.

Also that last sentence is confusing and I'm not sure how best to fix it. Maybe the last word should be 'meaning' not 'traffic' or maybe the word HTTP should be inserted?


If HTTPS does not mitigate MITM attacks, what is the purpose of it?


That is the purpose, and it's also where it can fail. Why do you think CT logs exist? Exactly for this reason...


What are you talking about? I think you'd better look up how HTTPS/TLS works. Sure, you have to trust the certificate authority. Also, can you imagine the scandal that would erupt if Google or AWS were discovered to be eavesdropping on companies running things in their cloud? I don't think so.


I believe the OP is talking about encryption for user data, not merely for transport.

Google, Amazon, etc. still store user data uninhibited, and though they are often competent about security, they also often provide data to state actors as a normal course of action. The fact that a web browser communicates safely with an endpoint doesn't mean that endpoint isn't a bad apple itself. In some cases these endpoints are logging proxies to other servers and services, and though transport is again encrypted, the data is normally accessible by operators of such services.

Cloud computing has taken away the ownership of data from individuals, and that sounds like it has seeds of some kind of a revolution brewing.


Can you define "uninhibited"?


> can you imagine the scandal that would erupt if Google or AWS cloud was discovered to be eavesdropping on companies running things in their cloud

Remember the "SSL added and removed here" image?

https://thumbs.mic.com/MTBjNTQzNTMzZiMvbWVtejZOdjJsaUdUVkZEa...


That wasn't eavesdropping by Google. That was Google not using encrypted traffic on internal wires. And that changed a lot of years ago.


Yes, it was the US government eavesdropping for them without their consent, but the end result is basically the same.

Yes, that exact hole was patched, but the point is it wasn't the end of the world that great grandparent implied it would be.


Google Compute Engine didn't even exist at the time that slide was made, or at least was not publicly available. That slide was about government intercepting Google's traffic, not cloud customer traffic.


It was certainly smaller, but GCE was first publicly available in April/May 2013, Snowden leaked things in June 2013. I'm not quite sure when this slide was released but sometime after that.

Google moved to fix the problem after the start of the leaks. Pretty quickly (good for them), but after.


The slide was created long before Snowden leaked it, which is before GCE was publicly available. I said, "before the slide was made," not "before the slide was leaked."


I'm pretty sure RPC privacy boost was underway before the leaks. It was just launched more hurriedly after they came out.


I am pretty sure that this is a reference to cloudflare.


Google and AWS aren't eavesdropping directly. However a lot of companies are running unencrypted connections between their load balancers and their backend services. And we know from the Snowden documents the US Government does passive data collection there.


The USG does not need to look for weak points to do passive data collection.

Due to the third-party doctrine [0], they can simply demand access, don't even need a legal warrant. Because there's no reasonable expectation of privacy for data you willingly gave to third parties.

[0] https://en.wikipedia.org/wiki/Third-party_doctrine


It's easier to do it quietly though. If there's unencrypted network traffic, they just need to demand access from someone with physical access to the switches, plant a listening device, and everyone with logical access will be blissfully unaware.

If they want to MITM encrypted traffic they need to demand access from somebody with access to the certificates, who is going to be higher paid and more likely to speak to at least a lawyer before granting access.


The point is that if you're communicating with someone via Google, encryption terminates at Google, not with the other party.


If that were discovered, nothing would happen or change. To some degree it already has happened with Windows 10 and Android/iOS for personal computing.

They wouldn't monitor it themselves, but they'd provide access to law-enforcement agencies anyhow.



The encryption is still point-to-point, just that the website you are connecting to has chosen to make their "point" AWS or Cloudflare or whatever else. You could as easily host something in your own DC or from a machine under your desk.


You're not wrong, but the realistic alternative is having it the same way, just without any encryption.


Yes, but in that case caching proxies and other distributed approaches would still work out of the box as alternatives to CDNs, so I am not sure what I have gained. Nobody cares about end to end email encryption. This would be a real benefit, but Google could not build profiles so easily...


> Nobody cares about end to end email encryption. This would be a real benefit, but Google could not build profiles so easily...

AFAIK Google states (in their privacy policy) that they do not do anything with the contents of the emails in a Gmail account.


Which is fine too, since not all communication needs to be secure (even on the internet).

These numbers are meaningless without a proper context and can potentially create a "security theater".


> not all communication needs to be secure

There are good reasons to make all communication, even trivial conversations, secure.

If we only secure "important" communications then we are unnecessarily broadcasting useful meta information to prospective attackers. Encrypted communications rise to the foreground in visibility and that gives away who and when and where sensitive information is shared.

OTOH, if we secure all communication then we make the work of attackers or over-reaching governments much more difficult because no communication clearly says "high value sensitive information"


There's plenty of reasons to secure all communication as much as possible, regardless of the content.

Even if you don't care about what your ISP sees from a privacy standpoint, they still can inject ads or other content into your webpages if the connection isn't secured (at least, from the perspective of your ISP). And this helps prevent attacks against users in coffee shops or other public, unsecured WiFi.


>Which is fine too, since not all communication needs to be secure (even on the internet).

There was just an article on the front page today about "I have nothing to hide" and why it's wrong.


An example may illustrate my point: download software zip/tar files from a non-secure link. Obtain the signature and checksum files over a secure link, and verify the integrity of the software offline.

Not every communication is about hiding personal stuff.
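Concretely, the offline verification is just a hash comparison. A minimal Python sketch (file names are hypothetical; the .sha256 file is the one fetched over the secure link):

    import hashlib

    # Expected digest, obtained over a secure channel.
    expected = open("release.tar.gz.sha256").read().split()[0]

    # Hash the artifact that came over plain HTTP, in chunks.
    h = hashlib.sha256()
    with open("release.tar.gz", "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)

    assert h.hexdigest() == expected, "checksum mismatch: corrupted or tampered"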


And then find that your file doesn't match, because your ISP brokenly injected a human-targeted message at the start of your download, or some proxy corrupted it by stripping out the executable (yes, this happens)...

Absolutely nothing is lost by encrypting the downloaded data as well.


Moving the goalposts.


The acceleration of this global trend in recent years can reasonably be attributed to the actions of one person.


Good news for sure, but note that this isn't a total Internet scan:

> We collect data from the browsers of site visitors to our exclusive on-demand network of analytics and social bookmarking products.

More details about their samples: https://netmarketshare.com/methodology

I would be more inclined to trust sources like https://transparencyreport.google.com/https/overview and Firefox Telemetry which come directly from the browsers. But even these do not count data from mobile apps (most of which have to be encrypted now I think), embedded applications, scripts, and APIs.


It's especially weird that their methodology reports "0% secure" traffic as recently as June 2016.


> from mobile apps (most of which have to be encrypted now I think)

Since the end of 2016 on iOS and since Android v9, apps have to communicate over HTTPS. I guess you can technically visit HTTP sites via a browser, but I'd bet that >90% of the traffic from smartphones is over HTTPS.


> since Android v9, apps have to communicate over HTTPS

That isn't true. It is the default but Android lets you override the defaults and use unencrypted traffic both in WebViews and in networking APIs.


It’s not true in iOS either. It’s possible for an app to whitelist specific domains.


Do iOS or Android have any requirements vis-à-vis HSTS or HPKP?


Banking apps require them anyway (because of PCI-DSS, etc.).


Also likely because in the past the internet was really diverse. One would possibly visit 20 different sites during one session.

Today, the landscape looks more like: You visit Google, click some links that open in AMP (still Google), visit some social networks (primarily Twitter and FB-owned properties). These companies already operate TLS-only, which helps these numbers.


Right. Encrypted web traffic at 90% is different than encrypted web sites at 90%.


When Netflix is half of all traffic (torrent traffic included), add in Google/Facebook/FAANG properties and you arrive at 90% easily.


Don't forget porn.


I thought torrent traffic was generally not encrypted.


Not to mention the world-class encryption used by torrents is RC4. It would be hard to pick a worse cipher (the encryption protocol was designed in 2006).


That's good. One structural issue with the internet down, many more to go. There is essentially no guarantee that cloud providers don't snoop through memory and steal your keys and sift through your data, for instance. There are just some big companies that we implicitly trust. Whenever the endpoint for encryption /decryption is under the control of a 3rd party, any guarantee of data safety isn't real. We have devices that constantly go out looking for new code to run, with proprietary blobs in firmware, which means they're 3rd party controllable. Control of the internet is in the hands of organizations that can't be held accountable for abuse of power over individuals.

I guess I'm just saying I don't have faith that a system (I'm talking about the intersection of technology, government, and business here) which puts so little power in the hands of individuals will do an adequate job of serving their interests in the long term.


We do need HTTP because sometimes public WiFi networks need you to agree to terms before any requests stop being redirected. I recently found http://neverssl.com

That being said, those public WiFi networks shouldn't be redirecting sites in the first place, because for HTTPS sites browsers won't even let you see the page.


There are things like detectportal.firefox.com, which is used by Firefox to detect whether a captive portal is in effect.
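The check itself is simple: fetch a known plain-HTTP URL and see whether the expected body comes back. A sketch (the URL and the "success" body are Firefox's convention, as far as I know):

    import requests

    # A captive portal will intercept this plain-HTTP request and
    # return its own login page instead of the expected body.
    r = requests.get("http://detectportal.firefox.com/success.txt", timeout=5)
    if r.text.strip() != "success":
        print("captive portal (or other interception) detected")
    else:
        print("direct connection")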



I personally prefer http://example.com, as it is explicitly http-only and administered by IANA.

It boggles my mind that we haven't yet agreed on a mechanism at the AP level (DHCP?) for signaling captive portals, as this seems to be quite a common use case.


Awesome! Any idea how much of that is attributable to LetsEncrypt and HTTPSEverywhere?


It's probably more attributable to browsers marking non-HTTPS sites as 'Not Secure' than anything, but LetsEncrypt has definitely had a substantial impact in making that change possible.


> It's probably more attributed to browsers marking non-https as 'Not Secure'

Browsers couldn't have done that if https wasn't free and simple for servers.


Never thought I'd see a world where that was the reality; it's nice to stop and appreciate it, really.


Pretty much this.

I ran into a local store taking credit cards a while back, no TLS, weird, so I go to the store owner in person. I explain the problem and he insists that can't be the case; he's mad at me. "See! It's got a lock on the website!"... on the homepage. I direct him to the store and now it says Not Secure.

That did more to explain the situation than my attempts at explaining TLS and HTTPS and certs. He was able to call his web guy and say "It says not secure, Jerry! Fix it".

It was such a simple addition to (at least in Firefox) use the words Not Secure that it's crazy no one thought of it before.


If that doesn't work, there's also the argument that "credit card providers require it, and could stop you from taking credit cards until you fix it".


You're right, but I didn't have to. This guy, once he could get past being mad at me, knew that was against the rules. Also, even if it were allowed, no one wants to shop at a place that says Not Secure.

Side topic, but I've been trying to explain to our terrible CFO for years that PCI / PCI-DSS is a real thing. He thinks it's the type of regulation that only giant companies have to deal with.


Feel free to report your org to your merchant processor if necessary: if it's not meeting compliance requirements and thinks it can get away with it, and you can do so without compromising yourself.


Even if it says "secure", that doesn't mean it really is. I worked at a place in the '90s that hosted some sites taking credit cards through HTTPS. You know what they did? They sent emails, in clear text, to people at the store who would enter / process the cards manually.


Even scarier than PCI compliance mumbo jumbo is customers not giving you any money.


this is what gave us the pay.reddit.com loophole back when reddit https was for people with gold only


did you check the url the form submits to for https? it was a somewhat common pattern once upon a time to load the form in http but submit it in https. not great, but better than nothing (nobody should do this nowadays btw).


I did check. And it wasn't.

Doesn't matter though, the reality for him was that customers saw a Not Secure and that was a problem. All the crypto, certs, forms, probability of issue, technical things didn't matter, just the perception.


Also Google mentioned they would begin ranking HTTPS sites higher.


That was always a canard.


I don't think any of my personal websites would be HTTPS without LetsEncrypt. It's great for that use case.


The whole reason we started tracking HTTPS vs HTTP was LetsEncrypt. Love that they ended the need to pay ridiculously overpriced fees to generate certs.


I'd imagine that a lot of this is attributable to Firesheep calling attention to how anyone on a public Wi-Fi network could snoop on your Facebook traffic.

Most of the major websites fast-tracked HTTPS shortly after that.


Thanks! That was the goal. For those who aren't familiar, the original slides and our response blog posts are still up: https://codebutler.com/projects/firesheep/


There was a link a month or two back that showed Let's Encrypt is used by 30% of domains, but if this is by volume of data rather than domain share, then it'd be a lot less than 30%.


It is amazing how HTTPSEverywhere has revealed how incompetent people are regarding TLS/SSL.

I can't count the number of times I've seen the extension page during sign-ups or logins. Oracle Cloud just triggered it the other day during signup and initial login. Most times when I email asking why an email marketing link or an embedded token-login email link sends me through an HTTP URL, the person on the other end tells me they don't know and that it's unexpected, or a result of outsourcing their marketing/email/whatever.

In one case, their marketing mail provider supposedly just blanket intercepted all links and, unknown to their customers, passed them through an HTTP redirect. Stunningly unprofessional.


Is a LetsEncrypt certificate "just as secure" as other certs? I have to imagine the answer is "no" simply because LetsEncrypt is free and the other certs aren't -- what more do you get by paying for a cert?


They are just as secure. Here is an article explaining it deeper: https://www.troyhunt.com/on-the-perceived-value-ev-certs-cas...


It used to be you paid because everyone who was trusted by browser manufacturers charged a fee, not because signing a certificate is actually hard or could be done in a "non secure" way. A signature is a signature.

LetsEncrypt signatures are now trusted by the browsers, so there's usually no need to pay for the service.


Thing is, even if LetsEncrypt were less secure (I don't really think it is, but let's assume), that would hurt the security of every website.

If you use a paid CA, someone trying to impersonate you could still go to Let's Encrypt and get a certificate there. In other words, the system is only ever as secure as its weakest link. It doesn't matter what link you chose; it matters what link a potential attacker would use.

All of this is because failure of a CA only means false certificates are issued. It's not like Let's Encrypt could ever get access to any of your private key material.


It is just as secure, you get nothing more by paying.


Nothing of value, but there usually is some (silly, IMO) justification, such as badges and warranty (practically useless). Funnily, Comodo starts its list of key features with "value" [0]. They also seem more expensive now than I remember them to be (hundreds of USD per certificate/year), and still call both X.509 and TLS "SSL".

Sometimes I hear about people just looking for "SSL certificates" because somebody told them that they should have one, and search engines would lead them to those websites; probably that's how it still works.

[0] https://ssl.comodo.com/sslcomodo-ov-wildcard


So for my personal projects, I use Let's Encrypt. As far as I know (and I could be wrong now, haven't checked in a while), their certs are only good for 3 months. Which is simple enough to get around: run a script on your box that renews the cert every 90 days automatically.

At work, we use a paid certificate that is good for a longer period of time (normally a year). So that's one benefit to paying, I suppose.

As far as encryption technologies and security go, the traffic encrypted by a Let's Encrypt cert is just as secure as the traffic secured by a paid-for CA-signed cert.
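If you go the scripted route, checking how long a cert has left is a few lines of Python (the host is a placeholder; you'd run something like this from cron and alert below a threshold):

    import socket, ssl, time

    HOST = "example.com"  # hypothetical
    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()

    # getpeercert() exposes notAfter as a string;
    # ssl.cert_time_to_seconds converts it to a Unix timestamp.
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    print("days until expiry:", int((expires - time.time()) // 86400))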


The fact that Let's Encrypt certificates expire quickly is a feature, not anything to do with paid vs. non-paid.

Let's Encrypt could have just as easily generated certificates good for a year or more. But the point of Let's Encrypt is to force you to do this in an automated way, using scripts like you suggest.

You're not getting around anything. The choice was by design.

https://letsencrypt.org/2015/11/09/why-90-days.html


They have a built-in command for their 'certbot' CLI now that you can use to have your certificates renew automatically.

(It's been a bit since I went through it, but I think it may be as simple as an extra flag in the command that generates the initial cert.)


Usually you set up auto-renewal with Let's Encrypt. Easier than remembering to renew every year.


I am sad that this question is down-voted. It seems honest enough, it's slightly off-topic but not dramatically so.

There are two halves to the first answer but they're both "Yes, Let's Encrypt is just as secure".

1. Most elements of TLS security have nothing whatsoever to do with certificates. This is easier to grasp in TLS 1.3 than earlier versions (all the encryption in TLS 1.3 is working before anybody sends any certificates anywhere) but it has always been true.

Even without certificates eavesdroppers can't see what was communicated, and nobody can change it en route between client and server. For these things even no certificate at all would be fine...

Certificates do add a vital thing though: Identity. A certificate from Let's Encrypt is a signed document from Let's Encrypt vouching for the identity of your site. Cryptography (with a "private key") lets you prove this certificate belongs to you and nobody else can do that. Without Identity somebody in a position to be an eavesdropper could just pretend to be you and intercept everything (a "Man in the Middle"). So even though it's a small aspect it's vital.

2. To go around issuing people with Certificates you need some way to know who is who. Until a few years ago there weren't many hard and fast rules about how to do this, and so a lot of rather dubious procedures were used by people who charged a pretty penny. Some of them would argue that charging validated the purchaser but that's not so smart, plenty of crooks are willing to spend money to make more.

So, Let's Encrypt actually helped write actual formal rules for how you can make sure you're issuing certificates to the real owners of the names they're certificates for. These are known as the Ten Blessed Methods, because there were once exactly ten of them and each is a method that the certificate issuer is allowed to use to do this Domain Validation. None of them are utterly foolproof, and there is ongoing work to further improve them or get rid of the least effective ones, but at least now there are written rules.

Having helped write these rules it should be no surprise that though they represent a significant tightening up of things for some of the incumbent for-profit issuers, Let's Encrypt was already doing everything required.

Partly this is actually helped by not taking money for certificates. Since Let's Encrypt doesn't make a profit from giving you a certificate, they've no incentive to do so unless they're sure.

Now: For the second part, I have written lengthy answers elsewhere, there are a lot of reasons you might pay somebody money. None of these reasons make Let's Encrypt any worse, and many of them are real niche cases, you'd know about it if you've hit those. Like if you make web sites for Nintendo's obsolete WiiU video game console - Let's Encrypt doesn't help you because the WiiU web browser doesn't have the right trust store for that. Or if you need S/MIME certificates for your corporate email system for some reason, Let's Encrypt don't offer that. If you need a special relationship with your issuer under contract (like Facebook has) then Let's Encrypt can't help you. And so on. For most people it doesn't matter.


While this milestone is wonderful, don't forget that "can't be decrypted" only holds for now. IMO we trust contemporary encryption algorithms too much, putting too much data through the wires that will only increase in value. We aren't at the end of the evolution either: we still don't have really secure random generators everywhere, and we are still using key exchange methods that aren't quantum-proof. And of course, computer programs (as well as hardware) still have security bugs.


Encryption is worthless without properly enforcing it. How easy is it to trick your victim's bank into granting you access with a SIM swap? We need 2FA everywhere and stop relying on SMS for authentication.


I agree that we need 2FA and that we shouldn't rely on SMS. That said, saying encryption is worthless because other threat vectors exist is a bit hyperbolic. Security is all about defense in layers. There's several orders of magnitude difference in the difficulty of performing a SIM swap attack vs sucking up passwords on coffee shop wifi.


It's not worthless. It shrinks the attack surface and makes attacks more costly to execute. There's always an arms race though :P


All banks should adopt U2F and hopefully sooner than later :)


This doesn't mean 90% of all websites. It simply means 90% of web traffic, a good chunk of which I assume comes from a handful of services such as Netflix and YouTube.


most of the traffic is people watching people play videogames


No, it didn't. NetMarketShare has a very limited view into these things. Actual data from browser makers:

Firefox - 80% https://letsencrypt.org/stats/

Google -- 88% on Android; 84% on Windows; 91% on Mac; 73% on Linux https://transparencyreport.google.com/https/overview?hl=en


I actually just set up SSL on my EC2 instance after reading this comment section. It was stupid easy, and I can't believe I didn't do it before


Nice. Remember the days when IT professionals would exclaim that this was a bad idea?

Seems like it's a cyclical thing. DNS over HTTPS is now the big bad technology.


DoH /is/ a bad technology on a technical level. On a modern network DNS requests come in pretty much constantly, and I've never seen so many DNS timeouts and slow lookups as when I tried running a DoH proxy for my LAN. The head-of-line blocking of HTTP/TCP is horrible, and my router was running at 100% CPU with all the TLS overhead.

I'm all for authenticated and encrypted DNS but routing it over HTTPS is just a nasty hack.


It seems like it's a problem with your router not being able to handle TLS. Old equipment doesn't last forever.

HTTP is the internet, and the number of requests a client makes is magnitudes greater than for DNS.

Like, Google's or Cloudflare's DoH isn't slow.
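For anyone who hasn't tried it, a DoH lookup is just an HTTPS request. A sketch against Cloudflare's JSON flavour (the RFC 8484 wire format uses binary DNS messages instead; the endpoint and parameters here are as Cloudflare documents them, to my knowledge):

    import requests

    r = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "example.com", "type": "A"},
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    r.raise_for_status()
    for answer in r.json().get("Answer", []):
        print(answer["name"], answer["data"])

With connection reuse the per-query overhead largely disappears, which is how the big resolvers keep this fast.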


> Nice. Remember the days when IT professionals would exclaim that this was a bad idea?

It has made some things more difficult. In the old days when I had problems with a remote IMAP server I could watch each command and response going over the wire. It made troubleshooting dead simple. When a POP3 mailbox got hung up on a single huge message you could just telnet in and delete the offending message in a few seconds. It's crazy to suggest that encrypting everything hasn't made things more complicated than they were. It hasn't been an insurmountable problem, and in an age where everyone wants to sell your browsing habits the rewards have been greater than the pain, but it did make things harder.


> I could watch each command and response going over the wire.

AFAIK, Wireshark supports decrypting TLS traffic if you give it the private keys.

> When a POP3 mailbox got hung up on a single huge message you could just telnet in

Use “gnutls-cli” or “openssl s_client” – transparent TLS for your terminal. Both those commands also have options supporting protocols’ use of STARTTLS.


For a modern TLS session Wireshark will need the session keys, which will need to be exported separately for each connection made because they change every time.

Private keys in modern TLS are used only to prove who you are, they aren't used to decrypt anything. Instead random ephemeral secrets are chosen by both sides and a Diffie-Hellman (ECDH) key agreement method is used to agree a shared secret based on those ephemeral secrets.

As a result of this design the connection is encrypted and delivers integrity and confidentiality protection before either side knows who they're talking to.
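In practice the usual way to export those session secrets is a key log file, which Wireshark can consume. Browsers do it via the SSLKEYLOGFILE environment variable, and Python (3.8+) exposes the same mechanism directly; a sketch:

    import socket, ssl

    ctx = ssl.create_default_context()
    ctx.keylog_filename = "/tmp/tls-keys.log"  # hypothetical path; point Wireshark at it

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
            tls.recv(1024)  # this session is now decryptable from the capture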


No, haha. When was that a thing?


In the early 2000s almost any traffic that didn't involve financial services or ecommerce was plain HTTP. Gradually, HTTPS became an option (remember encrypted.google.com?) and more sites used it for login (but not on all pages, even with cookies).

This meant that MITMs were a lot more effective. Hell, even today Comcast and some other ISPs will MITM you to send notifications when they can do so on a plaintext HTTP connection.

A lot of IT departments also used this to be able to block unwanted traffic and perform monitoring. Now a lot of that relies on DPI techniques like analyzing SNI, or intercepting DNS. DoH and encrypted SNI work together to close both gaps, and widespread deployment of them would largely kill the ability to MITM or monitor consumer devices without modifications.

In modern times the cost of TLS certificates and the overhead of TLS encryption has dropped to effectively zero, so that ship has sailed, and nobody even remembers there was any concern to begin with. Maybe this time, it will be different, due to the lack of other options for MITM.

I imagine in the future there will be similar concerns about protocols that encrypt session layer bits like CurveCP.


I can't find anything specific at the moment but anecdotally I remember seeing this and being told it hurt performance to encrypt everything. The "solution" was to only encrypt sensitive pages like forms for credit cards.

I'm sure there was some substance to it at the time when computers, networks and browsers were slower but I also completely ignored that advice at the time and always used SSL everywhere on sites I set up.

I've never managed a very high traffic site, so any extra overhead from SSL was negligible for us.


When people were concerned about HTTPS overhead? Both in terms of increased latency when establishing a connection and AES overhead for the duration of the connection. Hardware TLS accelerators used to be a thing.


Back in the 1990s and early 2000s, it was very common to have "transparent proxies": your router or the ISP's router was configured to transparently redirect all connections to TCP port 80 to a Squid caching proxy or similar running on a nearby server. This meant that images, CSS, JS, or even whole pages (the web was much less dynamic back then) were transparently cached and shared between all users of that router. That could save a lot of bandwidth. Encrypting the HTTP connections completely bypassed the caching proxy; to make it worse, IIRC some popular browsers didn't cache content from encrypted connections as well, so every new page view would have to come from the origin server. Obviously, the IT professionals who set up these caches didn't like it when most sites started switching to HTTPS, since it made the caches less useful.


A common problem with those caches back then was that in their usual configuration they would limit the maximum upload size to a few megabytes... which would manifest itself as a broken connection when such an upload was attempted.

We regularly had to tell customers "can you try whether uploading works with this HTTPS link? now it suddenly works? okay, use that link from now on and complain to your network admin/isp"


Verizon used to tell people that until this year.


They never said it was a bad idea; the concern was that, at the time, it took too much processing power to scale.


The 50-50 point appears to have only been June 2017, so the cutover rate is really quite rapid. I wonder how quickly we'll see 95%, 99%, and how long the long tail will be.

https://netmarketshare.com/report.aspx?options=%7B%22filter%...


One thing we recently started looking at is outbound traffic, i.e. links that people click within our website/app. We're going to publish some results (and join forces) soon, but I wanted to share here because I feel outbound links are usually ignored, yet they can account for a significant part of total Internet traffic.

So, upgrading outbound links from HTTP to HTTPS (where possible) can be another way to contribute to getting 100% of web traffic encrypted.


The Federal Government may not like this, but things are heading where they should. Sometimes the government needs to be saved from itself!


I would say there is a 50/50 chance that the government has access to whatever certificates it needs to crack any HTTPS session it would like to crack. The Patriot Act created secret courts to enable this type of stuff. They're well known to rubber-stamp any warrant that comes through.


And it is not unimaginable that the US government can crack RSA. That would explain why they are not requiring people to use short keys, yet still collect data worldwide.


Governments can force CAs to give them certs. HTTPS only stops non-government attackers.


They would be killing the CA by doing this, since all certs have to be publicly logged in order to be trusted by Chrome or Safari: https://en.wikipedia.org/wiki/Certificate_Transparency

If a minor CA suddenly issued a cert for, say, mail.google.com, they'd be distrusted by every browser/OS within days. If a government made a habit of doing this, there'd soon be no trusted CAs in their jurisdiction.

The US probably has the best chance of getting away with this since they also have all the major OS/browser vendors in their jurisdiction. But if Mozilla/Apple/Microsoft/Google all mysteriously decided not to distrust a CA that was issuing bogus certs for high-profile sites, it would be pretty conspicuous.


CAs don't have the private keys to the certificates they sign, so this doesn't compromise issued certs.

The ability for CAs to issue extra certs to governments to enable MITM has been reduced a lot by CAA and HPKP.


Excellent progress, and great credit to LetsEncrypt and others that brought free certs to the masses. There's almost no excuse not to encrypt anymore. The "not secure" shaming of non-HTTPS sites by major browsers also applied some needed peer pressure.


Does this include Netflix traffic? That would skew the results.


Wouldn't the result be skewed if it didn't include Netflix?


Depends on whether we're looking at bytes or request count. I don't see the latter being overly skewed by any site.


Thank you LetsEncrypt.


Thank you Edward Snowden


The trend is going in the right direction.


[flagged]


Typically very little. Most current CPUs integrate some variation of the Advanced Encryption Standard instruction set (AES-NI etc.), so just like H.264 or H.265 decoding, it can be very, very efficient with custom instructions (orders of magnitude faster / less power-consuming than without).
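A rough single-machine sanity check using the Python cryptography library (absolute numbers will vary by CPU; the point is only that bulk AES is nowhere near a bottleneck):

    import os, time
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aead = AESGCM(key)
    nonce = os.urandom(12)
    data = os.urandom(64 * 1024 * 1024)  # 64 MiB of random bytes

    start = time.perf_counter()
    aead.encrypt(nonce, data, None)
    elapsed = time.perf_counter() - start
    print(f"AES-GCM: {len(data) / elapsed / 1e9:.2f} GB/s")  # typically several GB/s with AES-NI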

This is why most devices are equally capable of rendering http or https (you never have to 'revert' to http because a website is too slow in https... it's just not a thing). A stupid background app may consume 10x or 100x your encryption budget.


I wonder how much metal is spent making door locks. Eh, probably better leaving all doors unlocked.

I wonder how much steel is in a bank's safe. Eh, better leave the safe open.

I wonder how much time people spend typing in their passwords. Eh, better remove passwords from all sites.


On a modern CPU with encryption instructions in the ISA, a vanishingly small amount of power compared to the databases and JavaScript involved in the requests.


I bet if you stress test a server via http and via https, the CPU time won't be as different as you might think.

The main efficiency loss is that you can no longer have big shared cache networks for everybody, but those were a security risk anyway.


The somewhat surprising main problem is not CPU load but the additional round trips TLS requires to establish the handshake. One of the main goals/draws of HTTP/3 is eliminating these extra steps.


By encrypting all traffic, you protect sensitive traffic better because an adversary can't tell sensitive from non-sensitive communications.


Not much, since all major processor vendors do this in hardware.

Many entities have published their numbers, and the overhead is in the low single-digit percentages.


Efficiency is overrated. Encryption stops injection or modification of data in flight in its tracks; that alone makes it worth it. Otherwise, how would you know you received what the sender sent you?


Not much with modern processors. And it's worth it; you can't put a price on privacy and the right to it.


Everything has its price.


great!


I wonder how much of this is due to Cory Doctorow's novel Little Brother.


To the 90%: if you've got nothing to hide then why are you encrypting your traffic?


Because I can't connect directly to news.ycombinator.com; my request is first proxied through Verizon and Comcast and others. Without HTTPS it is super easy for them (or a lesser-known snooper) to add malware or whatever they want to the messages. It's useful because of the data integrity verification, not the encryption.


Probably sarcasm but...why shut the door when you're in the bathroom?


I like this analogy. We all know what goes on inside a bathroom, it's not really a secret. But it is private. There is a difference between secrecy and privacy, and this analogy captures the difference well.

I think I first heard the analogy in Cory Doctorow's presentation The Coming Civil War over General-purpose Computing, which was ironically given at Google. I highly recommend people watch it.


Or "why lock your car doors when stealing and hotwiring is a crime"? Sure, people have convertibles and jeeps with no doors, but they also won't leave anything valuable out in the open.



Because I know broken middleboxes will meddle with the traffic unless it's encrypted and authenticated.


Because privacy is a basic human right and not a privilege


Comcast.


Sorry about the downvotes. Poe's law is a bitch.


Oh, so finally most of the porn sites are defaulting to https?


It's been at over 90% for more than a month, from what I can tell


if you look at the trendline, it was 88.7% last month:

https://netmarketshare.com/report.aspx?options=%7B%22filter%...


Great. Now nobody can see what you do except companies who sell everything you do...


I'm sorry, but there are a lot of smart people here. Why is everyone assuming HTTPS means no one is snooping? I presume someone is snooping no matter what.


Snooping on HTTPS requires significant resources; snooping on HTTP is very cheap. A good analogy is locking your door. Sure, it can be defeated, but most would-be criminals won't go further than twisting the door knob.


I know this is good and all, but it does bum me out that Netscape 4.8 works much worse than it did even a few years ago. I prefer it to iCab, which might fare slightly better. Any suggestions for Mac OS 7.6 web browsers that support the minimum encryption required these days?


My suggestion is to set up a proxy. This has long been necessary to run browsers like Mosaic on the modern web, since many websites are inaccessible over HTTP/1.0 (or earlier!).

It's funny how ever-moving Internet standards mean an Apple II from 40 years ago is more functional than an iMac from 20 years ago.


This is good for keeping moderate bad guys away from your data, but not so much the NSA. The NSA already captures traffic end to end, including the key negotiation, and can break the rest: https://arstechnica.com/information-technology/2015/10/how-t...


Even when it was first discovered, only 8.4% of the top million web sites were estimated to be vulnerable to the Logjam attack:

https://arstechnica.com/information-technology/2015/05/https...

Now I would expect the number to be much closer to 0%.


That was for 1024-bit keys with specific primes; going to 2048-bit will not scale, even with $11B.


This statement would be more meaningful had it been phrased something like this: "encrypted web traffic, which most adversaries cannot snoop on, exceeds 90%".

There will always be an adversary, far more powerful than you, with the ability to snoop on your traffic - be it your ISP, the other endpoint, or the owners of infrastructure that you consume but do not control.


You portray encryption as a kind of magical energy. To the best understanding of cryptanalysis research, current TLS is secure. Hypothetically it could be broken without that being publicly known, but this is not a matter of "power".

> the other endpoint

It's not sensible to say encrypted web traffic is snooped on by an actor with direct access to the plaintext.


Moxie has good points about the problems TLS has. And they are not about breaking TLS via cryptanalysis: https://www.youtube.com/watch?v=UawS3_iuHoA


The simple statement made in the OP does not capture the complexity of operational security, which is very difficult to get right. I was merely trying to illustrate that.

For example, even though TLS is end-to-end secure (and I don't doubt that), a website that uses CloudFlare front [1] is susceptible to its secure traffic being intercepted by CloudFlare, because by-design TLS would be terminated at CloudFlare servers. However, note that the end-user does not notice that; rather, he sees his traffic end-to-end encrypted.

[1] https://support.cloudflare.com/hc/en-us/articles/200170416-E...


> a website that uses CloudFlare front [1] is susceptible to its secure traffic being intercepted by CloudFlare, because by-design TLS would be terminated at CloudFlare servers

Keep in mind, this is also true of cloud providers. By running the hypervisor, AWS has full access to your instance's RAM and could snoop on traffic if they pleased.

A compromised service provider is a risk you're accepting unless you own and physically control the hardware terminating TLS. Whether this is an acceptable risk comes down to your threat model. (As do so many things in infosec.)



