Increasing HTTPS Adoption (chromium.org)
140 points by inian on July 14, 2021 | 194 comments



What is sorely lacking today is an encryption solution for the intranet. When you are transferring confidential data (such as salary info) over the network in an intranet situation, we need to encrypt the information to prevent casual snooping using tools such as Wireshark. We don't need to verify the identity of the server because that's typically not a problem on the intranet.

Self-signed certificates used to be the solution in this situation. But browser makers have made it significantly harder, if not impossible, to use self-signed certificates, by not allowing the user to visit sites that have self-signed certificates.

We need a simple solution for this -- a solution that works even for small businesses that do not have an IT department. (That means installing certificates on each end-user's machine is not a reasonable solution.)


> we need to encrypt the information to prevent casual snooping using tools such as Wireshark. We don't need to verify the identity of the server because that's typically not a problem on the intranet.

If somebody is on your network snooping on packets with Wireshark, they can easily be poisoning ARP or DNS to redirect and man-in-the-middle all your traffic too.

Your complaints come down to certificate management. Spend some time looking at group policy and cert management tools--there are tons of things in this space. Or... hire an IT person to do it; that's how business works. X.509 certs are well established and trusted as a way of securing internal networks. They aren't perfect and they have some warts, but deployed properly they are secure.


I think that's exactly what parent is getting at though: it's a pain to manage certs for your own internal stuff and even worse for third party stuff in your environment. Rather than say "well, just hire more IT people" we should really come up with a simpler and easier solution if we want to actually see adoption.


Like I said, spend some time investigating the space. There are lots of tools to ease the burden of running your own internal CA. For example: https://smallstep.com/certificates/ But really you probably want to use your OS's group machine-management system, like AD on Windows, to handle this with lots of machines.


That looks really nice, bookmarked for later. In general I agree with the previous commenter, Let’s Encrypt made TLS easy on the public net, but for small teams/organizations TLS in local networks with local host names is still a pain.


As someone who works with everyone from SMEs to the Fortune 50: internal cert management is a total disaster. Yes, a few companies have it taken care of better than others. But on average, most companies have not bought the tools, nor trained staff to deal with this in an enterprise-wide fashion.


How come there is none?

Set up DNS in whatever way you prefer. Give each service a name.

Create a self-signed root cert, push it to all machines via whatever administrative update tool you use, or just walk around a small office with a flash drive.

Create an intermediate cert. From it, create certs for all services that need one, and push them.
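
For concreteness, a minimal sketch of those steps with openssl (bash syntax; every name here is made up, and note that modern browsers insist on a subjectAltName):

  # self-signed root CA
  openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
    -keyout ca.key -out ca.crt -subj "/CN=Example Office Root CA"

  # key + CSR for one internal service, signed by that CA
  # (an intermediate is created and used the same way)
  openssl req -newkey rsa:2048 -nodes -keyout wiki.key -out wiki.csr \
    -subj "/CN=wiki.office.example"
  openssl x509 -req -in wiki.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 365 -out wiki.crt \
    -extfile <(printf "subjectAltName=DNS:wiki.office.example")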

Voilà, you can use https without warnings.

Am I missing something?


> Create a self-signed root cert,

The OP was specifically talking about a small business with no IT. I'd say it applies equally to a home.

I'm sure there are some ways to manage all the non-domain-connected computers, phones, tablets, chromebooks and other stuff in my house (which is going to vary by ecosystem) and "push" a root cert out, but I sure don't want to spend the time to learn all that just to secure a web app I have running.

The magic of Let's Encrypt is you run something on the server you're securing, and a few minutes later you're done -- it works with everything.


If self-signed certificates are too much, then perhaps this is, too - you can use Let's Encrypt SSL certs locally for the Intranet/LAN as well. Get a wildcard SSL cert for a LAN-reserved subdomain of a TLD (e.g. local.mytld.com) through a DNS API (e.g. Cloudflare - any other DNS API works, too). Override DNS entries in your router to redirect to local services, e.g.:

cloud.mytld.com -> 192.168.0.10

gitlab.mytld.com -> 192.168.0.11

You can have your services individually query Let's Encrypt (ACME), or have your router get the wildcard cert and deploy it locally (e.g. ssh, ftp). No one external can reach your services, and no one external knows what your services are called (in the case of wildcard certs). You do not need to open any port for this.
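
Roughly, the wildcard issuance looks like this with acme.sh and the Cloudflare DNS API (the token is a placeholder; other DNS providers have equivalent plugins):

  # API token for the DNS provider (placeholder value)
  export CF_Token="cf-api-token-here"
  # issue a cert covering the subdomain and everything under it
  acme.sh --issue --dns dns_cf -d local.mytld.com -d '*.local.mytld.com'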


Inspired by this thread I just did this, except not with the wildcard.

I have a droplet at Digital Ocean, so I used their DNS service. I configured some CNAME records for the local services, pointing to the internal names[1]. I then configured my PiHole with a local DNS CNAME record, pointing service.local.example.com to service.localdomain.

Since my PiHole is not the DHCP server, I had to add a local DNS record for service.localdomain which matched the static IP that my router gives out. It seems the conditional forwarding done by PiHole happens earlier in the resolution process.

I could then configure my services to use the DNS challenge for service.local.example.com, using the DO plugin for certbot[2] or just acme.sh[3], depending on what was available.
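
The certbot variant is roughly (credentials file path is illustrative):

  certbot certonly --dns-digitalocean \
    --dns-digitalocean-credentials ~/.secrets/certbot/digitalocean.ini \
    -d service.local.example.com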

I didn't get it to work immediately on Android devices, until I discovered that Android only uses IPv6 DNS servers if it has an IPv6 address, and I hadn't configured that in my router. So I added the static ULA address of the PiHole lease to the DHCPv6 DNS server announcement[4].

It was a bit of fiddling since I'm a networking noob, but it went smoother than I had feared.

Not sure how to best distribute certificates though, if I had found a way I could let the router do all the renewals.

[1]: Not sure if this is needed but didn't bother experimenting with removing it yet.

[2]: https://certbot-dns-digitalocean.readthedocs.io/en/stable/

[3]: https://github.com/acmesh-official/acme.sh/wiki/dnsapi#20-us...

[4]: https://openwrt.org/docs/techref/odhcpd#dhcp_section


> pointing service.local.example.com to service.localdomain

Not sure if I understand correctly, but service.local.example.com must point to your internal IP. You do not need a localdomain, and the SSL certs will only work for what they were generated for (service.local.example.com). However, you can very much point your local DNS server's entry for service.local.example.com to any local IP, resolving these services internally. For that matter, you can just as easily edit the `hosts` file and add overrides.

For the ACME certs I suggest using the fullchain cert that you get from Let's Encrypt for service.local.example.com (e.g. in the nginx reverse proxy). Firefox/Chrome will typically not complain if you do not serve the intermediate CA certs, but it is better to provide the full chain.
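
A sketch of what that looks like in an nginx vhost (paths are the certbot defaults; the upstream address is an example):

  server {
    listen 443 ssl;
    server_name cloud.local.example.com;
    # fullchain.pem = leaf cert + intermediates, as issued by Let's Encrypt
    ssl_certificate     /etc/letsencrypt/live/cloud.local.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cloud.local.example.com/privkey.pem;
    location / {
      proxy_pass http://192.168.0.10:8080;
    }
  }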

> Not sure how to best distribute certificates though, if I had found a way I could let the router do all the renewals.

My router is pfsense, I added a hook that stores SSL certs to a local NAS folder via script:

Action List: `sh /conf/acme/store_certs_nas.sh`.

From there, it is easy to pull certs through cronjobs on services.
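
The script itself can be as dumb as (all paths hypothetical; adjust to wherever your acme package writes certs and wherever the NAS is mounted):

  #!/bin/sh
  # copy freshly renewed certs/keys to the NAS share
  cp /conf/acme/*.crt /conf/acme/*.key /mnt/nas/certs/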


> Not sure if I understand correctly, but service.local.example.com must point to your internal IP.

Yes, that is handled by the PiHole Local DNS configuration I mentioned later. Of course this only works internally in my network, if you ask DigitalOcean's nameservers for service.local.example.com they return NXDOMAIN (along with the CNAME). This way I don't need to keep the public DNS records in sync with my local IP addresses as defined by the static DHCP leases.

I'm not sure if I really need the CNAME entry to complete the DNS challenge, will do some more testing later.

> I added a hook that stores SSL certs to a local NAS folder

That's similar to what I had in mind. The acme.sh plugin for OpenWRT seems to be missing the ability to run post-renewal scripts though, unless I'm blind. Seems ripe for a contribution.


Typo:

gitlab.mytld.com and cloud.mytld.com

should have been: gitlab.local.mytld.com and cloud.local.mytld.com


At home you either don't need to secure it, or if you do, you'll have to learn it.

And when you are in a small business with no IT and want to share salary data, well, you can print it and give it to the employee in an envelope. No need to go digital when doing it analog is not a big problem.


I don't like this solution, mainly because installing custom CA certificates increases the chance of a MITM attack involving other services.

Any trusted "intermediate" certificate in X.509 is allowed to be a CA, so it can be used to sign certificates for any domain. If an attacker gets the private key for that certificate which has been manually installed on everyone's PC, they can impersonate any website, including external web mail, etc, until someone manually removes the certificate again from all the PCs.

In the past when we've used custom CAs for test environments, I've preferred to just deal with the TLS warnings instead of trusting a non-standard CA on the computers I use.


You just run your own CA and install that as a trusted root. Then you aren't vulnerable to external parties and you don't have to deal with alarm bells over self-signed certs.


Perhaps the CA could be constrained to only sign certs for *.foo.local?


As far as I know, X.509 doesn't currently have such a concept, and web browsers don't have it either.

If you install a certificate as a CA, it is a CA for any domain. If a CA signs another certificate with the "Certificate Sign" flag (intermediate certificate), that other certificate is also a CA for any domain. The subject name fields are not used when validating authority as a CA.


It does; they're called "name constraints".

This page https://systemoverlord.com/2020/06/14/private-ca-with-x-509-... claims they're supported by "all current browsers".

x509 extensions can be marked "critical" or not. If the name constraint was marked critical, then AIUI old software should (in theory) fail secure, and reject everything signed by the CA.
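
For reference, a sketch of creating such a constrained root with OpenSSL 1.1.1+ (names made up; the leading dot permits any subdomain of foo.local):

  openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
    -keyout ca.key -out ca.crt -subj "/CN=foo.local CA" \
    -addext "basicConstraints=critical,CA:TRUE" \
    -addext "nameConstraints=critical,permitted;DNS:.foo.local"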



Thanks, I wasn't aware of that. I guess I'd feel a bit more comfortable installing a CA with those constraints, though I'd wonder how many administrators actually know to do it—seems like a feature that's not currently well known and something that can be easily overlooked.

There would also be risks to users if installing these custom CAs became common practice. It might be safe to do it for certificates with suitable name constraints, but until certificate installation UIs add something special for this ("this certificate has authority over X, Y, Z"), users aren't going to distinguish between safe/unsafe (constrained/unconstrained) CAs.



However, note that some relatively modern versions of Safari (because of course) don't support this constraint.

This might be no problem for you, or, it might be a complete showstopper.


> We need a simple solution for this...

I think that "simple" is the key word here.

Running your own DNS server isn't necessarily as easy as just using something like Namecheap or another domain name provider with a DNS manager. There probably are somewhat easy DNS servers out there, but in the case of BIND, the configuration format feels obtuse, things break in interesting ways, and you'll probably need at least 2 of those servers for resiliency.

Furthermore, you are assuming that there even is an administrative tool that can push certificates out in the first place, which isn't necessarily true in many environments. And then you also run into the fact that you need to manage a PKI with all of the certificates and CSRs, and you'll also spend some of your time renewing all of the certificates.

Contrast all of those processes with just having the following in your Caddy web server's configuration (for public sites):

  my-public-site.com {
    reverse_proxy 10.0.4.5:8080
  }
We'd need something equally simple: https://caddyserver.com/docs/automatic-https


BIND is just the wrong scale. Think tinydns or dnsmasq.
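
With dnsmasq the whole "DNS server" can be one line of config (name and IP are examples):

  # resolve this name (and anything under it) to a local address
  address=/wiki.corp.example/192.168.1.10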


This is exactly right. PKI is for everyone. It's really quite simple once you learn how it works!

Here's a great explanation that covers a ton: https://smallstep.com/blog/everything-pki/


One SmallStep for a LAN, one giant leap for LANkind?


This requires a level of expertise that is not available at most small businesses. Going around to each machine with a flash drive is not practical either, especially since this is not a one time thing (because machines die and are replaced).


This is a good call.

I thought Windows Server (or maybe just Pro?) makes this easy enough: AD already provides DNS and can provide a CA, and any DC should be able to push startup scripts to every workstation to install the certificates, set hostnames, etc.

For a more diverse office, with a mix of Macs and Windows machines, maybe there is a niche for an app-server box/VM, or even router software, that also handles that.


If the expertise isn't there then they should use SaaS solutions.

There's plenty that aren't expensive. And they don't need to roll their own software.


> If the expertise isn't there then they should use SaaS solutions.

Or browser makers can invent a solution for this scenario. That's what software companies do — identify problems that need solving, then solve them. Strive constantly to make things easier.


Using a proper domain and real certificates is literally the right thing to do. It prevents anyone squatting on your domain and breaking internal shit.


For purely internal services that are not supposed to be exposed to the internet?


Buying a domain and getting a wildcard cert is not hard.


What's hard is not forgetting to renew it :)


> Create a self-signed root cert, push it to all machines via whatever administrative update tool you use, or just walk around a small office with a flash drive.

Congratulations, you've just installed a massive security risk in an organisation that barely understands the word certificate


A DNS server might be overkill; mDNS should work fine for a small org.


> flash drive

Make sure to get the mobile devices too.

But yeah, that is the solution. Works great on all platforms.

---

The only time you run into problems with tech is if you try MITM/corporate proxy shenanigans.


If you have a really small office with 10 desktops / laptops, you may consider that deploying puppet / chef / saltstack / whatever is overkill. Walking around with a flash drive is just faster.


But it's not more consistent if you need to change something. Having declarative desired state configuration is useful for any environment, even when n = 1.

I would definitely deploy a configuration management solution for 10 hosts; it takes a couple hours of work to do properly and you can do it without a server initially (though a central server provides other benefits).


There is a good solution, but everyone is pushing the craptastic / megasize management solutions - so I don't think there is anything out there yet for the 15-machine non-AD (or light-AD) domain-type setups with third-party stuff.

The solution would be to allow self signed + pinning.

You click through to a new box on your network with a self-signed cert. Instead of basically blocking you (yes, you can flip flags, go to Advanced, etc.), it says: hey, this is a self-signed cert, do you want to pin it? You say yes. It pins that cert to that internal IP.

Now you've got encryption.

Let's say in a rather far out case someone spoofs internal resolution. You are generally already owned at that point, but if they redirect IP X to a new cert, the browser could have a popup saying - heads up, the cert for this IP now mismatches. Be very careful, and do the current basic block/lockout approach.


That's already what Firefox does. Despite not saying it clearly, if you accept a self-signed certificate, it will be used as long as it does not change.


> Let's say in a rather far out case someone spoofs internal resolution.

Seems trivial. Why wouldn't they?

> You are generally already owned at that point

Well, you might be, since your network is relying on self signed certificates with no PKI.

> heads up, the cert for this IP now mismatches.

Good thing everyone is already trained to click through certificate warnings.

Or you could just do PKI properly and avoid the entire issue, or not have an intranet and avoid the entire issue.


> Good thing everyone is already trained to click through certificate warnings.

With a proper UI there would be different messages when a cert mismatches compared to when a new cert is seen or the previous cert was about to expire. (Quite like Gemini browsers)


In the past, there were messages that specified exactly what was different about the situation, and they were by no means "proper UI":

https://docs.jboss.org/jbossas/guides/webguide/r2/en/html/im...

In fact, in the past, browsers used to helpfully give you warnings whenever you started using SSL:

http://acc6.its.brooklyn.cuny.edu/~core51/labs/extralab_file...


Different messages don't matter if people are just clicking through no matter what the message says.

Even people who should know better. Even with other protocols, like how SSH should work. We have clients who send us PII and other data via SFTP. They do much to make themselves look paranoid: insisting on whitelisting source addresses, in some cases mandating PGP, insisting we pass certain audit requirements including things they may unilaterally make up in future, etc. But do they actually care enough to follow procedures and verify things before clicking through (or hitting the "Y" key)? No. How can I be so sure? I gave out the wrong host key fingerprint for a while (accidentally: I sent a dev/test host's fingerprint instead of the production host's) and no one came back to say "hey, this doesn't match, we can't proceed, could you look into why the target host identity doesn't match ASAP?" before I noticed my error.

If supposedly paranoid data management and security professionals are not Doing It Right and are just clicking through no matter what, you can bet your arse & mine that pretty much the rest of the world isn't doing things right, and probably never will, either.


TOFU like SSH or Gemini?


Exactly - this is actually pretty established already in SSH land.


You can get a Let's Encrypt certificate for an internal subdomain: https://security.stackexchange.com/questions/103524/lets-enc...


One of the problems with that is that now you need to list all your internal IP addresses in a public DNS server. Theoretically it shouldn't be an issue, but it might be stuff you don't want to just lay out there in public view, just in case.

Also you might just not want a domain like "my-top-secret-project.amazon.com" to actually exist and resolve to an IP (even if an intranet IP) in public DNS, but you might still want an easy-to-remember intranet hostname that employees can use.


You don't need to if you use the ACME DNS challenge. I have split horizon where the ACME client creates the temporary challenge record, Let's Encrypt sees it, and then the record is removed. There is never a public A record, and the window for enumerating the challenge record is short.

That said, I never really understood worrying about exposing DNS records because someone has to brute-force enumerate the domain names, and because it just obfuscates. I do split horizon for different reasons and am currently moving away from it.

Regardless, if you do care, you should be concerned about certificate transparency logs. The “workaround” for that is wildcard certs.


LE publishes all issued certs so you might as well just keep it public anyway in that case.


Yes, I said this. Wildcard certs, as I mention, largely avoid the issue because it’s just one cert for a higher-level domain and largely uninteresting from an information disclosure perspective, at least compared with individual domain certs.


Wildcard certs?


Yep. The CN is something like "*.edoceo.com"

And you can make self-signed too.


>One of the problems of that is now you need to list all your internal IP addresses in a public DNS server.

You don't actually. You can just have them resolve in your local DNS server alone.

My current setup is that the public DNS records for my internal network are just CNAME records, so "mail.myserver.com" has a CNAME record pointing to "mail.myserver.internal", and my private DNS server has an A record for "mail.myserver.internal".
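
In zone-file terms it's just (the internal IP is an example):

  ; public zone: only a CNAME, no internal IPs exposed
  mail.myserver.com.       CNAME  mail.myserver.internal.
  ; private DNS server: the actual A record
  mail.myserver.internal.  A      192.168.1.20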


I’m going to steal this one, I’ve never thought about this.

Also useful for local development with mDNS. E.g. MacBook-X.local as a CNAME: "MacBook-X.internal.hultner.se CNAME MacBook-X.local" should just work for LE on my local dev machine when roaming between home, the office, or on the go, even as the actual IP changes wildly.


Still, you're exposing your infrastructure.

It's kind of nice to know that there is a mongo1.profiles.company.com somewhere.


You can use wildcard certificates so you're not exposing any infrastructure.

Also, you should really consider all your subdomains to not be private information. It's risky to build security based around that assumption.

Tons of DNS services actually sell their queries, and those can be used to reconstruct these "private" subdomains.


With this said, wildcards are massive security risks in themselves when devices start getting compromised. You can then chain together things like DNS poisoning and ARP attacks to MITM further devices on the network.

And confirming what you said. DNS leaks everywhere, so don't consider its obscurity a form of security.


You can just obfuscate your names if you really need to keep it hidden.

profiles.company.com? coffee.company.com.

mail.company.com? tomato.company.com.

passwords.company.com? tincan.company.com.

If you're really paranoid, you can just flood a bunch of misdirects in there as well. If you need 10 internal subdomains, add 100 subdomains with random CNAMEs.


It's worth noting that split-horizon DNS can cause problems with DoH. If your "public" DNS entry has an A/AAAA record, DoH-enabled browsers will resolve your domain to that external IP instead of fetching the internal DNS record.

So if you do split-horizon, don't put an A/AAAA entry on the internet-facing side; instead use the ACME DNS challenge as described in the sibling comments.


Just set up letsencrypt to give you a wildcard cert and copy it where it's needed. I do it for my home network and it's a script I run every three months.
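
A rough sketch of such a script, assuming certbot with a DNS plugin (domain, paths and destination host are examples):

  #!/bin/sh
  # renew the wildcard via the DNS challenge, then push it where it's needed
  certbot certonly --dns-cloudflare \
    --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
    -d '*.home.example.com' --quiet
  scp /etc/letsencrypt/live/home.example.com/fullchain.pem \
      /etc/letsencrypt/live/home.example.com/privkey.pem \
      nas.home.example.com:/etc/ssl/private/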


Yeah, this. I have a dedicated domain for my internal network (it's just 9€ per year so might as well) and just use an LE certificate with DNS verification, so there's no need for any external web traffic.


If you have servers in your intranet, but don’t have enough expertise to install a CA server and a CA certificate on end-users’ machines, maybe move to a cloud based solution instead of hosting your own.


There are many use cases for accessing local services on the network that shouldn't need the 25-ton behemoth of a roll-your-own PKI ecosystem.

I have a NAS, Home Assistant, some random local-only IoT-type stuff, Plex, Pi-hole, and a handful of other web applications running. Running my own PKI and having to manually distribute the root cert to my non-domain PCs, let alone think about how to install root certs on the various Android/iOS stuff I have: no thanks.

I can think of several scenarios where a small business would have local servers but no dedicated IT (and where a full PKI infrastructure is a big burden): for example, poor internet connectivity, or very high bandwidth costs.


Strongly disagree.

You can choose to run Pi-holes, Plex, and all that crap in your intranet and pretend you don’t need a CA. It’s not that hard to set up.

Either learn how shit works or don’t do it.

To be clear, I’m still talking about a scenario where you are running a company and process data for your customers. Hobbies are different.


This is the hard truth. With a business and internal data, _especially_ customer data, you need to think of the liability if any of that data leaks or is accessed inappropriately. It will not look good in a lawsuit if a judge asks why data was so easily stolen and your response is that you didn't know some exploit was possible and didn't want to pay for experts to secure access to it. Managing certs, securing an internal network, etc. are just part of an ever-evolving security and threat landscape. You need to dedicate resources like time and money to constantly stay on top of it.


Many vendors of internal tools don't have cloud offerings, and even if they did, I wouldn't trust them considering their current security record. It would be a good CYA strategy, but that's about it.


Yes, it's far more sensible to use something from a company that really understands security like Solarwinds rather than run your own nagios install. /s


Self-signed certs on an internal network are more secure than a CA-signed cert on a cloud.


> Self-signed certs on an internal network are more secure than a CA-signed cert on a cloud.

Not if your threat model is someone who already has access to the local network (which has no one managing it) snooping on traffic.


It's still encrypted. They would have to man-in-the-middle it and hope that the user has not already accepted a cert. Exactly like most SSH servers.


I work for a Fortune 500 company with a huge IT department, and even we have trouble with this. Some intranet sites will work on certain browsers but not others because of certificate issues. So, it is clearly a problem for both large and small companies.


The way I have dealt with this in the past was to buy a wildcard subdomain cert and only use that subdomain internally. The downside is that you have to delegate that subdomain from your public DNS to your internal DNS, or do split horizon/split view. Both have caveats and require some thinking ahead. This is not a free solution, but I would consider it affordable for any small business. It is certainly much less complicated than setting up an internal CA; it avoids the opex of managing an internal CA and greatly simplifies managing certificate expiration.

e.g.

  *.internal.some.tld
You may even be able to find a CA that will sell you a SAN cert with multiple wildcard subdomains listed. They lose money doing this, and most will require you to buy a different wildcard cert for each subdomain. There are no technical or CA/B Forum limits on wildcard-subdomain SAN certs, only business policy per CA. If the CA/B Forum has added any recent restrictions that I am unaware of, it would be for purely monetary reasons, as the SAN RFCs have no restrictions on the number of names, last I checked.

e.g. one wildcard SAN cert with names like

  *.sfo1.dev.some.tld
  *.uk1.dev.some.tld
  *.dev.some.tld


> This is not a free solution

FWIW I use Let's Encrypt for this, for free. The only downside is having a central set of HTTPS routers (e.g. nginx) that hold the wildcard certs to terminate TLS. If you spread your hosts across sites it can be annoying.


Name your internal services after some valid DNS domain that you own. Options include

- use split DNS, where hostnames like "hr.example.com" resolve on your internal network but not externally

- use split DNS for a subdomain, so everything under "corp.example.com" points at your internal network and you use https://hr.corp.example.com (which might be easier to manage - you just reserve "corp.example.com" on your public DNS)

- buy a completely separate domain name like examplecorp or something

- go full beyondcorp and make https://login.corp.google.com available on the public internet, and rely on SSL + strong authentication (like two-factor) instead of people being on the internal network (as a bonus, this approach to IT makes things more straightforward if, hypothetically, there was a global pandemic forcing people to not be in the office for over a year, idk, that might not be a realistic business risk)

Now that your services are running on a DNS name you control, you can get real, valid SSL certificates for them. Assuming you didn't go with that last option, you can

- use Let's Encrypt DNS challenges, by temporarily adding TXT records mapping to your internal names (see the sketch after this list)

- use Let's Encrypt HTTPS challenges, by temporarily (or permanently) running a web server in external DNS that corresponds to your internal names

- pay for a wildcard certificate from a vendor, which costs under $100/year
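
For the DNS-challenge route, the minimal interactive sketch is (domain is an example):

  # certbot prints a TXT record to create at _acme-challenge.hr.corp.example.com,
  # waits for you to add it, then validates and issues the cert
  certbot certonly --manual --preferred-challenges dns -d hr.corp.example.com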

The one risk here is that names of all real SSL certificates are logged publicly (for Certificate Transparency, to help track mis-issuance). So if you happen to enjoy running web servers named things like secret-plan-to-acquire-foobarcorp-on-july-30.corp.example.com, then you want to go with the wildcard option. If the names you have are things like "hr" or "wiki", you'll be fine with the Let's Encrypt option.

This approach requires no installing of certificates on end-user machines and no asking users to click through any dialogs and no risk of people on the internal network spoofing websites and doing a man-in-the-middle attack (which is only ever so slightly harder than using Wireshark).


I thought about that, but have not implemented it. I wrote an application designed to be private and one-to-one. This particular application allows sending texts and file-system segments directly from the browser. Encryption would just be a matter of a one-time public key exchange. I can do that using just the Node crypto API. No central authority needed... yet.

The problem is that identities can be faked easily if the application allows anybody to spin up a new instance/identity with a few commands. The standard solution to this is centrally managed certificates (a certificate authority). A user should always challenge the identity of the remote party against an agreed-upon third party to verify trust before any encryption occurs. This is more challenging.


> We don't need to verify the identity of the server because that's typically not a problem on the intranet.

Sorry, not a security expert. What is the point of encrypting if you're not also sure you're sending the encrypted data to the right entity?


Encryption without signing is arguably more useful than not encrypting at all, although there's an argument that non-signed encryption gives a false sense of security that encourages bad practice.

However,

> we need to encrypt the information to prevent casual snooping using tools such as Wireshark.

Given this goal, signing is necessary. If the connection isn't authenticated, an attacker can use a tool like Wireshark combined with Bettercap to man-in-the-middle the encrypted session, and then there's no point.


It helps against passive observers. This might not be very important on wired networks, but on WPA-PSK setups, knowing the network password allows you to eavesdrop on communications from any of the computers.


Why would an attacker in your intranet who's looking at your network traffic be passive? When people talk about passive attackers they're talking about the NSA/ your ISP, not someone who's hands-on-keyboard sniffing traffic.

There's virtually no reason to do encryption without authentication in your intranet.


It sounds like you just want to encrypt individual files. Most major office software suites like MS Office will have an encryption feature so that you can password-protect spreadsheets. The other easy solution requires certs, but not the kind you're thinking of: if you want to protect individual files and know who should have access, you could encrypt them using openssl (maybe write an app to wrap it up nicely) and S/MIME certs, which could also encrypt your internal email. If you know what you're doing, you can provision out those keys and keep a copy in escrow for data retention policy and accountability.
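
A sketch of the openssl side, assuming the recipient's cert and key are at hand (file names are examples):

  # encrypt a file to the recipient's S/MIME cert...
  openssl smime -encrypt -aes256 -in salary.xlsx -out salary.p7m -outform DER alice.crt
  # ...which only the holder of the matching key can decrypt
  openssl smime -decrypt -in salary.p7m -inform DER -inkey alice.key -out salary.xlsx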


> Most major office software suites like MS Office will have an encryption feature so that you can password-protect spreadsheets.

I don't know about now, but in the .xls days that wasn't encryption. You could bypass it by rewriting the file's header.


This works if you're emailing random files around for all information transfer, but it feels like GP is talking about the case where such information is accessed through intranet websites?


Yup, but I'd go even simpler. Imagine an OS-level switch to put arbitrary network traffic over TLS.

I'm sure you'd need various switches for edge cases, custom keys, etc., but... imagine. Quick and dirty network apps? Half the security battle is done. Crusty endpoint-security solutions downgrading your encryption? No need. Malware sending encrypted traffic? Stands out like a sore thumb, because it's the only thing not sending cleartext to the socket.

Yes, if your OS is compromised, your processes' encryption is too. Guess what - that's already the case.


That's what Tailscale does. I use it to encrypt communication between my devices (both on home network and when I'm away) and it works like a charm.


Direct p2p over Wireguard or similar is a solution here.


I imagined it and it looks like SSH.


> Self-signed certificates used to be the solution in this situation. But browser makers have made it significantly harder, if not impossible to use self-signed certificates, by not allowing the user to visit sites that have self-signed certificates

That comes with other perils, which is why the browser behaviour these days is like this. In browsers you can still trust the certificate by adding a manual exception if you want to persist with this route.

Better would be to create an in-house CA (easyca maybe) so that the CA cert can be added rather than lots of individual ones.

Let's Encrypt works when you are able to prove you own the name in the CN. They have a few auth plugins that try to verify this ownership in an automated way, but none of them work (without some workarounds) for internal hosts: on your intranet you're using private IP blocks and potentially made-up DNS zones you don't necessarily own, e.g. you want a certificate for 192.168.1.1 with the domain foo.localdomain.


The simple solution is a local certificate authority which is added to the users’ browsers. It can deliver certificates automatically or manually.

This is one of the services provided by Microsoft Windows servers, or you can roll your own. It’s not hard and scales whatever way you want.

Expecting browser vendors to weaken security to avoid this tiny amount of work is backwards and will never happen.


> The simple solution is a local certificate authority which is added to the users’ browsers. It can deliver certificates automatically or manually.

I wish it was possible to add a certificate authority and restrict its usage to non-Internet servers (e.g. *.company.example).

(This is theoretically possible if the CA bakes that in, though not all TLS stacks support that. It isn't possible via local browser configuration.)


If you are the authority yourself, you can just commit to not giving out certificates outside a specific domain.


If you're the authority you can restrict the certificate authority to a given set of domains, but that's if you're the authority.

I'd like browsers to give users control over that, to trust a CA in a limited fashion for only a subset of domains without trusting it to MITM the web.


My "internal" (local hosts + server nodes) are on its own VPN with wireguard. Even with a VPN, I've letsencrypt provided certs on some of em because it's nice, but it's not strictly necessary. People being able to know that I've things running in my own /8 is not a major security problem.


"What we need is ... blah blah blah ... technology" is rarely what you want when you start on this path.

In my company (I'm the MD), salary info is pretty open and I believe income is proudly and publicly published in Sweden for all individuals but I do understand your reticence.

Do you actually need encryption or even a computer to get the message across? What about whispering?

If you want to ensure confidentiality then you need to agree on your standards with your partners. You have no choice in this matter. Rapidly you are in a technical debate on standards. Apart from the channel, are you sure you are safely getting the data in and out? Think about things like side-channel attacks, e.g. someone listening to your keyboard. Get the tin foil out!

I'll give you this for free, right now: TLS is a very decent way of creating a tunnel between you and your partner. So are IPsec, probably WireGuard, and an SSL tunnel.

There is no such thing as a simple way to do confidential comms, just like there is no simple way to build a suspension bridge. I studied Civil Engineering at Plymouth Polytechnic in Devon and the building industry collapsed in the UK as I graduated in 1991. Building a bridge is quite complicated but for the end user it is a piece of piss to drive their car across it.

You say that self-signed is broken, but all browsers and modern OSes allow you to import CA certs. You can generate a CA cert with an openssl one-liner, and if you are careful you can be pretty sure things are secure. I trust my own CAs far more than anyone else's.

You cannot have simple solutions for confidential comms. If there were such a thing then James Bond would use it too, and he doesn't.


You can use a VPN, e.g. an overlay network.

As an added bonus, it keeps working regardless of your location


Not to mention that if you set up an internal CA, it is a huge pain to add the CA to browsers, since most browsers don't use the system trusted CAs.


Though most (all?) do support enterprise policies via GPO that would cover the vast majority of enterprise use cases.


Which has to be managed separately for every combination of OS and browser. If all employees use Chrome on Windows, that's not too bad, but if you have employees using a variety of different setups, it is more complicated to set up.


Which ones other than Firefox don't? (And Firefox has a config option, security.enterprise_roots.enabled, to trust the OS certificate store.)


I think that if you are worried about someone sniffing salary info in your org, there are deeper problems to sort out.

There isn't really such a thing as "casual sniffing". It has to be done deliberately and with full intent, not to mention with the right timing or a long duration and non-trivial technical knowledge. It isn't really possible to protect against such an adversary (for example, even with self-signed certs, he can MITM the connection and re-sign it himself), and therefore, imo, it's better not to give a false sense of security.


Make your own organisational CA, and use that.

This covers both passive AND active attackers with access to your network.


> We need a simple solution for this -- a solution that works even for small businesses that do not have an IT department. (That means installing certificates on each end-user's machine is not a reasonable solution.)

I don't get it. Why even have an intranet then? Use Google Drive to share files, or Dropbox, or whatever. Why do you have an intranet at all if you don't want to manage an intranet?


Because Active Directory.


AD has a built-in certificate authority, and installs the root CA on domain-joined machines....


Could you elaborate? There's Azure AD, or you could do yourself a massive favor and not use AD at all. And like... why would you want AD and an intranet but not someone to manage it at all? Recipe for disaster.

The simplest solution to this problem, by far, is to just avoid it entirely. Don't have an intranet. Don't manage AD. Just don't do those things.

Pick an identity provider, pick a file sharing service, pick a video chat service, and set up SSO for them. It's 0 maintenance, far safer, and far easier to use.


> We don't need to verify the identity of the server because that's typically not a problem on the intranet.

Casual snooping with Wireshark isn't typically a problem on an intranet either. The difficulty of both attacks is at a somewhat similar level; it really doesn't make sense to care about one but not the other, imo.


"But browser makers have made it significantly harder, if not impossible to use self-signed certificates, by not allowing the user to visit sites that have self-signed certificates."

What business are browser makers in. How do they make money. Why are browser makers the ones deciding what users should and should not be allowed to do.

Which goal is more important to the browser maker:

1. protect the user against the possibility of the wrong recipient receiving an encrypted message (which she cannot decrypt)

2. require that the user must use the browser maker's software to "verify" the server (via a certificate)^1

The underlying assumption with #2 is that the user has no possible means of verifying a certificate other than the browser maker's software. On a corporate intranet or a home LAN, I use another program, a proxy bound to the loopback, to verify certificates.

Chrome throws a warning "Your connection is not private. Attackers might be trying to steal your information from ____". That's a guess and it's wrong. The connection is from the browser to the loopback. The server certificate has already been verified by the proxy. I do not expect Chrome to figure that out, the warning is fine (the keylogger is creepy), but I do expect an option in the browser to turn off certificate verification. This is not an uncommon option in other TLS-enabled software.

It is very difficult for me to believe that Google's self-interest does not factor into these design decisions. It is always possible to spin the explanation for any design decision at a "tech" company as being for the benefit of users. That does not necessarily mean it is the whole truth. It may not be full disclosure. We have no way of knowing. Facebook and Twitter collected phone numbers under the guise that they needed them for "authentication", but it was later shown they were using them for advertising. Then 533 million Facebook user accounts, many of which included phone numbers, became available to the public, not just "friends".

https://www.wired.com/story/twitter-two-factor-advertising/

1. Consider that we just witnessed the browser maker with the most users, who also has the majority of search engine users, video hosting users, webmail users, online ad services users, etc., etc., launch an experiment and proposal to start tracking users through its browser instead of via cookies. Ideally, it would be best for users if this company, which effectively controls much of the world's web traffic through its search engine, did NOT also make a web browser.


I'm not sure I follow your argument, because, first, no major browser maker is in the business of selling their web browser software, and second, if you're using the browser maker's software to verify the server, you already have access to it. So even if they were selling it, you've already paid for it.

Google's self-interest does factor into these decisions, though: Google's interest is in making more people use and trust the internet. The more people who feel comfortable buying things online, for instance, the more ads they'll click from online sellers. The more businesses who feel comfortable selling things online, the more ads they'll buy, and thus the more profit Google will get.

So it's important to Google that the internet remains perceived as a secure and trustworthy place to do business. If users are in the business of making judgment calls about certificates, some fraction of the time they'll be wrong - and they aren't going to say "Oh, I made a mistake," they're going to say (correctly, even) "Oh, doing secure transactions on the web is too difficult."

There are a whole lot of ways in which Google's interests are badly biased against end users, and as you mention, FLoC is a perfect example. Wanting the web to be secure is not actually one of them, though.


I hope HTTPS-First mode becomes the default, so that the full-page warning can finally convince my classmate to adopt HTTPS on their website that "does not contain any private info so it doesn't need encryption".


Is it a web app or just a static site? I still haven't seen a good argument for why static sites (blogs, personal sites, etc. that process no user information) should implement HTTPS.


Excluding things like zero-day exploits, the biggest problem with allowing any unencrypted traffic is cache-poisoning.

This was noticed when a Google engineer went on holiday and stayed at a hotel with dodgy Wi-Fi that copy-pasted ad scripts into anything that looked like jQuery. Said engineer realized that his laptop was still getting hit with the hotel's ads for months afterwards, because the hotel Wi-Fi had managed to poison his cached copy of one of those "JavaScript CDNs" that a lot of other sites use.

This is, of course, an attack - a hotel that can get an ad script onto arbitrary sites by rewriting one unencrypted request can also add a script that, say, siphons information off of any other site it got included into.


Sounds like Chrome is finally taking steps to combat that, as the post mentions they plan to "Restrict how, and for how long, Chrome stores site content provided over insecure connections"

PoisonTap is a particularly good example of how devastating this type of attack can be: https://github.com/samyk/poisontap


Thankfully the impact of this is limited in modern browsers as the cache is partitioned by site.


Which, incidentally, also removes the last fringe benefit of those free "JavaScript CDN" services. They are a strict net-negative now.


Though small, isn't convenience a benefit? I think a lot of new developers especially find it marginally easier to copy a script-tag.


For a small test or personal site it's fine, but otherwise you're trading developer convenience for a longer load time for every user.


I use HTTPS on my blog, which is a static website with no comments, because my blog contains information that I want people to be able to trust, and so I don't want an MITM to be able to modify it.

There are a whole bunch of ways that they can do that. The obvious way is that a lot of my blog is about programming, and so I have code on my blog people can copy/paste. If a MITM can modify that (perhaps by injecting something with font-size zero), that directly harms my readers, and selfishly, it reflects poorly on me - it makes it look like I'm trying to harm my readers.

I also have prose blog posts where I express advice or opinions. If I write about, say, security advice, and that advice has been modified to be bad, that also harms my readers and reflects poorly on me. Why would someone do that? I don't know, there are lots of trolls on the internet. More interestingly, I also write about my religious beliefs. If someone modifies a post to make me look like I'm one of the most egregiously bigoted people of my religion, that would also be harmful to my readers and reflect poorly on me, and the casual reader might not notice that the post is out of character, and there are a lot of people on the internet who are angry at my religion.

Also, even if I didn't have any such information, a MITM could add a cryptominer or something to my blog - something that accesses no private information but still consumes my visitors' CPU and battery - which would harm my readers and reflect poorly on me.


Do you want ISPs or other intermediaries to be able to inject ads into static sites?


Why is this something the sites have to care about? This is an issue to take with your ISP.


My ISP is fine. But no way am I going to let anyone who happens to be upstream of my visitors make arbitrary changes to my site!


Sure, but what I asked wasn't about your own site, but why it is something sites have to care about when the actual issue is with ISPs (in the case where ISPs are injecting ads or other stuff). There are *WAY* more sites than ISPs, and the party that is wrong here is the ISP, not the site.


Even though there are far more sites, it's a matter of incentives:

* ISPs and other intermediaries have the wrong incentives: reading and modifying plaintext traffic can be very profitable.

* Sites have the right incentives: they don't want to be messed with or snooped on.


What makes you think HTTPS is going to prevent that? You can, without much effort, generate your own SSL certificate and MITM HTTPS traffic [0]. Not sure why, to win an argument, you stop just short of the place where your argument would fall apart, and not a single step further.

https://www.charlesproxy.com/documentation/proxying/ssl-prox...


Of course you can MITM HTTPS if you get the end user to install a custom CA; the point is that those are extra steps that few users will take (and if my ISP ever required that, I would switch to a different one immediately, since that's shady as hell).


And how prevalent is the practice of ISPs injecting packets into non-HTTPS traffic? Seems like OP is trying to argue against HTTP just because of a few ISP bad actors. HTTP is simpler, faster, and requires much less initial configuration to set up. It also seems to me that HTTPS would be a great way for an evil tech monopoly (Google?) to solve the user-attribution problem much more accurately in a cookie-less world (if you control the browser "Chrome" and the server "AMP", you just need to make sure the link between the two is encrypted to identify the user). So I'm always worried whether opponents of HTTP have not been somewhat indoctrinated.


> And how prevalent is the practice of ISPs injecting packets into non-HTTPS traffic?

Is there anything preventing page alteration on unencrypted connections? There's certainly an incentive to do so.


could that be argued to be a violation of the DMCA?


Do you think a complaint from all three customers in your area who understand the issue is going to change anything, especially when options are limited and your only choices of ISP are engaging in the same behavior?

On unencrypted connections, there's nothing preventing an intermediary from altering a page. Assume it happens.


A complaint by three people wouldn't do much, but if it is just three people in an entire country then the issue doesn't matter much in the first place.

On the other hand, a complaint by all the customers of the service over the entire country who understand the issue could make a difference.


No, but if that's the only benefit, I'll happily give that up in exchange for not having to deal with HTTPS.


As a whole, https makes the internet better for sure, but what's wrong with the classmate's argument in particular?


- It still reveals what content you browse. For example, your ISP may be profiling you. (There are other leaks that reveal the domain to the ISP, but hopefully those are slowly being removed as well.)

- A man-in-the-middle may replace your innocent content with something unpleasant. This is both bad for the viewer and harmful to your reputation (even if it wasn't your fault).


The good old malware.cx is completely secure and trustworthy as long as it has an https cert.


> In particular, our research indicates that users often associate this icon with a site being trustworthy, when in fact it's only the connection that's secure.

I had the idea that browsers were showing a grayed-down padlock for standard HTTPS certificates (i.e. "connection is encrypted") vs a full-blown green icon with the company name next to it for the HTTPS certificates that also validate identity (DV? EV? I don't recall the meanings and acronyms).

I guess that's where we should go now: make HTTPS the default (thus showing a standard icon that doesn't call for any attention), a big red ugly icon alerting on non-encrypted connections, and a green one with identity attached, meaning you can indeed trust this particular site to really be your bank.


Yes, the goal was that EV certificates would fill this gap. However, research showed that they didn't meaningfully affect user behavior[1], it was easy to get CAs to issue EV certs for company names that misled the user into thinking the phishing site was secure[2], and it was even possible to issue colliding EV certificates simply by registering your company in a different jurisdiction[3]. So in 2019, Chrome, Safari and Firefox all removed the "special" treatment of EV certificates. (In Safari it's still distinguished by a green vs grey lock icon, I believe)

[1] https://chromium.googlesource.com/chromium/src/+/HEAD/docs/s...

[2] https://typewritten-archive.cynthia.re/writer/ev-phishing/

[3] https://arstechnica.com/information-technology/2017/12/nope-...


Also, this feature (EV certificates) exists because the CAs wanted to sell a product with a higher ticket price, and it shouldn't be mistaken for something engineers designed to actually deliver any security.

For example, suppose you go to https://som.example/ which is the web site for "Somex Ample" products. You don't trust "mere" DV certificates for som.example, which you believe may be purchased by bad guys, but you're comfortable because "Somex" has purchased an expensive EV certificate for "Somex Ample" of Springfield.

You fill out a form on the secure web page and hit submit. But, unlike you, your web browser intentionally has no idea who "Somex Ample" is and no interest in whether they spent a lot of money on their certificate. When the server it reaches has a boring DV certificate for som.example, that's fine: the browser compares this name to the name in the HTTPS URL and it matches exactly. The browser sends your form data to this server, gets back a 3xx redirect, and then (maybe after some more bounces) gets a fresh web page to show you. This page might come with one of those shiny EV certificates you like, or it might not. Either way, the form data you were careful to only fill out on the "safe" EV page went to a server without an EV certificate.

So, getting rid of the separate UI indication for EV was largely reflecting a reality that already existed. The DNS name is correct because the browser always verifies that it matches at every step, but if you're relying on something else, that's on you.


I don't think your example illustrates an actual security issue, and I don't think it's useful to users to expect EV certificates to change how the same origin policy works. Personally, as a user, the value of EV certificates was not that I "didn't trust DV certificates", but that (at their best) EV certificates validated the link between a known corporate entity and a domain name. Once I know that "som.example" and "Somex Ample" are the same entity, there's no reason to worry about "downgrades" or not trust DV certificates for the same domain name.


>it was easy to get CAs to issue EV certs for company names that misled the user into thinking the phishing site was secure

The other way was an issue as well. We could only get an EV cert for our real/primary company name, but not the subordinate name we had registered for our B2C business. That would mean that our customers would see the correct URL, but the EV would show the name of a company they didn't know.


I think most browsers have stopped displaying EV certificates differently for a while.


This is also still a slightly different concern. EV ensures that you are talking to the company that you think you are talking to (in theory; in practice company names are not unique and not a good identifier), not that the company you are talking to has good security.

Put another way users think "This site is secure" not "I am securely connected to the site I think I am".


I think that there should be some maintained whitelist of important websites. I know that many people won't like this idea, but I think that for most users it would be better if Google domains, Facebook domains, etc. were displayed differently, for example with a green background, because that's what the overwhelming majority of users are visiting every day and that's what most scams are targeting. Something like the top 1000 websites in every region should qualify for that treatment.

Also I would include government websites to this list.


This is already a thing; it's called the HSTS preload list.

The sites on it are not set apart visually for the same reasons that browser vendors stopped distinguishing EV certificates.


The HSTS preload list is an absolutely different thing. It has nothing to do with reputation; anyone can add their website to that list. And yeah, the entire point is to set important sites apart visually, so scams are easier to detect.


Why doesn't EV fit the bill?


I think I recently read that browsers plan to stop making any visual distinction between an EV certificate and a normal one.


I don't understand the holy war against http. Let those who want https use it. Forcing the additional friction of certificates on every site and use case is dumb.

Not even touching on the fundamentally flawed trust model behind https, here's a sample of recent stories about expired certificates:

https://news.ycombinator.com/item?id=25132182

https://news.ycombinator.com/item?id=24237400

https://news.ycombinator.com/item?id=24187920

https://news.ycombinator.com/item?id=22227266

https://news.ycombinator.com/item?id=18649932

https://news.ycombinator.com/item?id=16541235


it's not a "war against http". it's a shift from indicating that http is the normal state and https is the abnormal state, to https being the normal state and non-https being abnormal. which is just a reflection of reality.


1. Can you give a use case for a website that needs no security what so ever?

2. Don't. Your web host should be doing it for you.


> 1. Can you give a use case for a website that needs no security what so ever?

http://neverssl.com


But, when was the last time you needed that?

My phone and my laptops both seem to correctly notice (via services for the purpose) when there isn't proper Internet access and give me access to some awful Captive Portal to fix that, for which they do not need neverssl.com. This desktop never leaves my home, which doesn't have any such nonsense.

The services they use to do that do involve plaintext HTTP, but importantly they aren't trying to just be something you type into a web browser, and so aren't affected by automatic upgrades of stuff you type into a web browser.


> But, when was the last time you needed that?

Sunday, thanks to a broken middlebox


1. http://n-gate.com/

2. You sure place a lot of trust into third parties.


The site links a Patreon, probably not something you want messed with in transit.


Actually I think that website's owner will be delighted if a MITM leads to you wasting your money by sending it to the wrong person.


Removing the lock icon is a very good idea. I’m not surprised that Chrome’s team found that only 11% of participants in a survey understood what it really means.


not only is removing the lock icon a good idea, replacing it with a dropdown indicator is a great idea. there's all kinds of useful stuff in that menu, currently hidden behind something that looks like a status indicator rather than an interactive element.


That means they should wait until 90% of them know what it means before removing it.


I don't see the logic here.


Can anybody suggest what might be the motivation for this? Beyond the silly "bad people might tamper with the cat picture you're shown" one that is always given? Chrome hates http with such a passion that there must be some evil motive behind it that I'm not seeing.

Because they just keep making life more difficult for websites that don't need SSL.

So now, in addition to seeing a scary icon in the URL bar with a scary message, my users are going to have to click past an interstitial banner just so they can visit a website and read silly travel stories. Chromium will try its best to convince them to leave, lest some nefarious agency on their home wifi substitute alternate silly travel stories that somehow cause them harm. In the 20 years the site has been live, I'm skeptical that this has happened often enough that we need to get Google involved.

It's frustrating.


I don't know Google's motivation but frankly I welcome TLS everywhere on the public-facing web now that certs are free thanks to LetsEncrypt.

For me, it comes down to two things:

1. Privacy. When I'm on non-private networks, it's nice to have assurance that other people aren't able to see the specific contents of what I'm viewing.

2. ISP bad behavior. A number of ISPs have been doing things like injecting ads or other trackers into plain-HTTP sites. https://www.infoworld.com/article/2925839/code-injection-new...


Hopefully, LetsEncrypt is and will always be an incorruptible organization. There was never a case of a community project being taken over by greedy capitalists. Oh wait...


I personally have put a lot of faith in them but there's also a lot of alternatives in case they were to flop or become corrupted:

1. https://zerossl.com/letsencrypt-alternative/

2. Cloudflare will issue you free TLS certs (assuming you're okay with them doing TLS termination). I assume other CDNs / caching proxies with free plans will also do this.

3. https://www.buypass.com/ssl/products/acme

4. https://www.sslforfree.com/

I'm sure there's more, but that's just what I could quickly find.


There are a lot of safeguards like certificate transparency logs. They would be caught.


If you tell 99% of people to go to website.com, they will type website.com into the address bar. That sends an unencrypted HTTP request on port 80, even if the site supports HTTPS. This initial navigation can be intercepted by a MITM and redirected, spoofed, whatever (coffee shop WiFi is a great example of where this could be dangerous). Making the default navigation use HTTPS negates this attack.

As a small side bonus, it also reduces navigation latency by several RTTs, as there is no longer a need for a connection to port 80 that always just gets redirected to HTTPS.
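
A minimal sketch of that first plaintext round trip, using website.com from above as a stand-in (the exact response will of course vary):

    import http.client

    # The request a browser sends when you type a bare hostname:
    # plaintext on port 80, readable and forgeable by anyone on the path.
    conn = http.client.HTTPConnection("website.com", 80)
    conn.request("GET", "/")
    resp = conn.getresponse()
    # A well-configured site answers with a redirect to HTTPS, e.g.
    # "301 https://website.com/" -- but a MITM on coffee-shop WiFi
    # could answer with any Location it likes.
    print(resp.status, resp.getheader("Location"))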


On relatively modern browsers, this is obviated if the DNS name you type into the browser has HSTS preload.

As well as preloading your corporate web site, HSTS preloading can be done hierarchically. For example, all of .google, .dev, and .foo are HSTS preloaded; sites in those TLDs don't have plaintext HTTP in modern browsers. Perhaps some day the US government will preload .gov.

(Plaintext HTTP still works for these names, and a tool like curl doesn't obey HSTS; it just can't be done in your web browser, because the request will get upgraded to HTTPS.)
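
For reference, opting a site in to HSTS (and making it eligible for the preload list at hstspreload.org) comes down to serving one response header. A minimal sketch with Python's http.server, with the caveat that in practice the header only counts when served over HTTPS:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # One-year max-age, applied to all subdomains, plus the
            # `preload` token the preload list requires.
            self.send_header("Strict-Transport-Security",
                             "max-age=31536000; includeSubDomains; preload")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"hello\n")

    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()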


Mozilla has taken similar steps to encourage websites to implement HTTPS. I can't say whether you believe they share Google's evil motivations, but it's a clear trend across browser vendors.

Certificates are free now and not necessarily any more difficult to install and maintain than any of the other software you need to run even a basic website. It's something relatively simple I can do to ensure the integrity of the content I want people to look at, so why not?


> that there must be some evil motive behind it that I'm not seeing.

Google wants to control the entire Internet, and that means centralising power. HTTPS requires certificates issued by a small list of CAs, and they can revoke them too.

Combine that with the efforts of Google and others to lock down the whole stack (all in the name of "security", of course), and you can see just how much power to censor "inconvenient" information they've accumulated.

People like to cheer on the companies fighting the "bad guys", not giving a second thought to letting them have more power --- nor considering that this power may one day even be used against them. Of course, this power will initially only be used for fighting malware and phishing. Then they'll start revoking sites for "misinformation"...

There's a good reason why SciHub and LibGen are accessible via plaintext HTTP.

"Those who give up freedom for security deserve neither", as the classic saying goes.


> Can anybody suggest what might be the motivation for this?

The push for HTTPS everywhere started around the time of HR4681[1]; both were reactions to the Snowden revelations.

> Beyond the silly "bad people might tamper with the cat picture you're shown" one that is always given?

Someone who is able to tamper with the cat picture you are seeing can also inject arbitrary code into your computer, so there is that.

Also, thanks to the push for HTTPS everywhere, a large volume of traffic now goes through CloudFlare, enabling them to do that on behalf of state actors etc. (I am not claiming they do it, but it is kind of ironic that the lowest-friction way for me to enable HTTPS on a whole bunch of sites was to put a MITM in front of them ;-)

Totally love CloudFlare BTW. The free level covers what I need. Still, it is ironic.

[1]: https://www.nu42.com/2014/12/https-everywhere-and-hr4681.htm...


Why single out HR4681? The IETF's Best Common Practice #188, "Pervasive Monitoring Is an Attack", is months earlier as a reaction to Snowden and is a direct trigger for this work, and it doesn't involve stoner thinking ("What if, like, the government snooping is actually reverse psychology to trick us into hiding stuff from them?").


Because, at the time, HR 4681 was heralded as _limiting_ the U.S. government's ability to monitor and retain communications of U.S. citizens when, in fact, coupled with omnipresent encryption, it explicitly made it legal to retain such communications indefinitely.

Copying the plain language here:

"... Limitation on retention.–A covered communication shall not be retained in excess of 5 years, unless– ... (iii) the communication is enciphered or reasonably believed to have a secret meaning;"


> Someone who is able to tamper with the cat picture you are seeing can also inject arbitrary code into your computer, so there is that.

Someone can just inject arbitrary code into your computer using a site with an SSL cert. It's very easy to get a free one nowadays, judging from the other threads. Why are middle-men so important to protect against? The site owners are the usual source of attacks.


> Chrome hates http with such a passion that there must be some evil motive behind it that I'm not seeing.

I think it is just that Google wants exclusive access to your data. With http, your ISP can also understand your browsing habits, which devalues the ad profile Google has painstakingly created. So Google is pushing for https, which excludes ISPs and other intermediaries from profiling you. Note that Google's own ability to spy on you is uncurtailed, since you are using Chrome and even signing in with your Google account.


if you want the cynic's version, https will prevent ISPs or other nefarious operators between you and the website's hosting provider from swapping out the website's ads with their own ads. google's customer is the website operator, so they want to protect that relationship.

(the less cynical interpretation is that all the other non-ad content on that website is also valuable and shouldn't be modified in transit)


Widespread mass surveillance (US PRISM, China's GFW) is part of the reason.


> In particular, our research indicates that users often associate this icon with a site being trustworthy, when in fact it's only the connection that's secure.

Never really thought about that, but I guess it's pretty obvious. I can totally see my folks downloading/buying god-knows-what from a site because they see that lock icon.


They started realizing this when HTTPS was becoming common thanks to Let’s Encrypt, and slowly started removing any greenness and the “Secure” label from the URL bar in favor of doing the opposite: marking insecure websites in red.

This last change is obvious and needed, but was probably left as a last step because people have been taught to look for the indicator. Glad to see it gone, even just ’cause Safari puts it in the middle of the URL bar and I keep clicking on it by mistake. Hopefully they’ll drop it too by 2024.


No mention of ECH (Encrypted Client Hello).

Current status: https://www.chromestatus.com/feature/6196703843581952


ECH is still under development. Draft 12 https://datatracker.ietf.org/doc/html/draft-ietf-tls-esni-12 is like a week old.


Yes, though Firefox has support for it now, so current status is still interesting.


My home network includes a router and several WiFi access points. They are managed through a browser, which means they have a built-in web server. I have them configured so they are only visible from internal IP addresses, and I've changed the usernames and passwords from the built-in defaults, but there's no way to install a certificate on them, let alone force them to use https. So whenever I use Chrome to reconfigure one of these devices I get warnings of impending doom. A big PITA.


Interesting that "Linux" is the platform with the lowest observed adoption of HTTPS ... implies some kind of bias in the way Linux users use Chrome. ChromeOS, which is also Linux but I assume not included in the data with the Linux label, has by far the highest fraction of HTTPS.


I assume it's mostly devices and servers. You probably visit the HTTPS site for Big Electronics Co., but your Big Electronics Co. "smart" television uses HTTP for the same reason it added a slow, clunky "Welcome" page with video adverts: you are a victim, not a customer.

The type of server software developer who years ago searched Stack Overflow for how to "fix" the problem of certificate errors now just uses plaintext HTTP calls where possible, and thus avoids needing to "fix" the problem by not having any security.

And there's a bunch of completely automated stuff that genuinely does need plaintext HTTP on purpose. OCSP requests, for example, are plaintext HTTP. When you realise what they're for, this is obvious: if I need an OCSP check to do HTTPS, but I need HTTPS to do an OCSP check, then I have infinite recursion.
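
You can see this for yourself by pulling the OCSP responder URL out of a certificate's Authority Information Access extension. A sketch using the third-party cryptography package (the host is just an example; any HTTPS site will do):

    import socket
    import ssl

    from cryptography import x509
    from cryptography.x509.oid import (AuthorityInformationAccessOID,
                                       ExtensionOID)

    host = "example.com"

    # Fetch the leaf certificate the server presents.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    aia = cert.extensions.get_extension_for_oid(
        ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
    for desc in aia:
        if desc.access_method == AuthorityInformationAccessOID.OCSP:
            # Typically an http:// URL, for exactly the reason above.
            print(desc.access_location.value)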


Has anyone found scheduling information about this? When can we expect this in Chrome, for example?

Edit: Oh, here we go: https://chromiumdash.appspot.com/schedule

This is for Chromium and not Chrome, though:

...

Feature Freeze Thu, Jul 29, 2021

...

Stable Cut * Tue, Sep 14, 2021

Stable Release Tue, Sep 21, 2021

...


HTTPS is not secure if someone has a root cert, and it wastes energy; if you need encryption you should roll your own.

I used https://datatracker.ietf.org/doc/html/rfc2289 for login, which is simpler and uses less energy than public/private-key encryption, and is quantum safe out of the box.
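
For the curious, the core idea in RFC 2289 is a Lamport-style hash chain. A heavily simplified Python sketch (this omits the RFC's seed handling, 64-bit folding, and six-word encoding, and swaps in SHA-256 for the RFC's older hash choices):

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def chain(secret: bytes, n: int) -> bytes:
        out = secret
        for _ in range(n):
            out = h(out)
        return out

    secret = b"hypothetical pass phrase"
    n = 100
    stored = chain(secret, n)      # server stores hash^n at enrollment

    otp = chain(secret, n - 1)     # client sends hash^(n-1) to log in
    assert h(otp) == stored        # server hashes once and compares,
    stored = otp                   # then stores the OTP for next time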

Google of course has a root cert, and is making sure fewer people can make web sites by building more protocol extensions than the average joe can afford to keep up with.

I expect to be severely downvoted, but it's ok, I'm used to it by this point. Truth is always downvoted by vested interests to a higher degree than average joes are willing to upvote it.


Instead of just making the claims above, can you demonstrate with examples or evidence that they are true? I'm somewhat concerned about the state of matters myself, but why and how is it that you think Google is making website creation less accessible?

Your comment would have been fine without the last bit, albeit light on justification for the arguments it makes.

Attempting to guilt-trip the people you want to reach is not an effective strategy, and getting downvoted does not validate your position.


The police station in Stockholm can read visitors' HTTPS traffic over their WiFi in clear text, and they show it to you when you are there. They simply substitute their own root cert; your browser behaves like normal, only they can decrypt your HTTPS traffic.

Certificates are a bamboozle of power (who/why/how some entity gets a root cert) and the waste they involve is simply not worth it.

---

- HTTP/2 has head-of-line blocking issues = it's not better than HTTP/1.1.

- HTTP/3 has adoption issues and ossification of a protocol is THE feature.

- WebSockets are a similar ordeal.

I use HTTP/1.1 Comet Stream and it works very well, it's simpler and can scale "joint parallel" on multiple cores.

---

I'm a bit weary that, after 5 years of telling HN to require a comment upon downvote, nothing has happened.

Your downvote needs to be official; otherwise it's unclear who thinks what.

Eventually the HN database will leak and then it will be pretty clear who has downvoted what, so it's only a matter of time anyway.


> The police station in Stockholm can read visitors' HTTPS traffic over their WiFi in clear text, and they show it to you when you are there. They simply substitute their own root cert; your browser behaves like normal, only they can decrypt your HTTPS traffic.

If you have actual proof of this (e.g. a copy of one of the rogue certificates issued by this CA) you can get them banned from all major browsers by emailing said proof to dev-security-policy@mozilla.org

My guess though is that you're simply incorrect; the root cert they use is most likely not trusted by modern browsers. Stockholm police can't intercept visitors' HTTPS traffic without their browser displaying an error page, or without those visitors manually marking the Stockholm police's CA as trusted.


I know it's just another test, but the constant changes to the lock icon and other indicators that a connection is secure are becoming annoying...


HTTPS is complicated.

python -m http.server → you have a web server that you can use for ad-hoc needs.

Coding TLS into a web framework is hard. Ah, I should use a proxy? But installing TLS on a proxy is hard. Ah, I should use caddy with LE? Sure (I've used it for years), but now how do I do that for 10.2.3.10?

I understand why HTTPS is useful (to encrypt your traffic, certainly not "to know you are on the right server"), but it has been a failure from the start, usability-wise.
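
For what it's worth, the ad-hoc case can be covered with a few lines of the standard library, assuming you already have a cert.pem/key.pem pair on hand -- which is, of course, exactly the part that is painful for a host like 10.2.3.10:

    import http.server
    import ssl

    # HTTPS equivalent of `python -m http.server`, serving the current
    # directory on port 4443.
    server = http.server.HTTPServer(
        ("0.0.0.0", 4443), http.server.SimpleHTTPRequestHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="cert.pem", keyfile="key.pem")
    server.socket = ctx.wrap_socket(server.socket, server_side=True)
    server.serve_forever()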


The "hit piss" (https) first thing is all very well but there are times when "hit pip" (http) is fine. You don't generally use an Enigma machine at home.

We generally live in an RFC1918-and-stuff world, which describes "internal" and "external". IPv6 focusses the boundary between you and me in a different way.

Why should my browser decide what I do on my own home network?

Why should a mere tool pontificate about stuff that I know more about than the kids who developed it? Fine, I should probably develop my own browser in ASM but I don't speak nonsense. I sort of know what a processor register is but it would probably bully me.

I am increasingly seeing top down decisions from monstrously huge corporations "for my own good" and I am increasingly getting worried. I rant at my elected government officials because that is what they are for (I don't really) but commercial corps are increasingly insinuating themselves into important discussions and their moral stance is undecipherable.


100% agree. These authoritarian corporations are out to "secure" their control over the population, squeezing and herding them however they want.

It's particularly telling when the most downvoted comments contain the most truth.


> Why should a mere tool pontificate about stuff that I know more about than the kids who developed it?

The decision might not be for your own good specifically, but ultimately the defaults affect 1B+ people. And the huge majority of those people do not understand many of the words in your post. You are an outlier, and that means product decisions will often not be focused on you.



