I couldn’t reproduce the attack with a pair of my own domains, so I think it might be even narrower in scope than the initial post suggests. But I suppose we will just have to wait to see what the CA says.
> Out of an abundance of caution, we have disabled domain validation method 3.2.2.4.14 that was used in the bug report for all SSL/TLS certificates while we investigate.
I'm on the team at Let's Encrypt that runs our CA, and would say I've spent a lot of time thinking about the tradeoffs here.
Let's Encrypt has always self-imposed a 90 day limit, though of course with this ballot passing we will now have to reduce that to under 47 days in the future.
Shorter lifetimes have several advantages:
1. Reduced pressure on the revocation system. For example, if a domain changes hands, then any previous certificates spend less time in the revoked state. That makes CRLs smaller, a win for everyone involved.
2. Reduced risk from certificates which aren't revoked but should have been, perhaps because a domain's new holder didn't know that a previous holder still had a valid certificate for it, or because of an attack of any sort that led to a certificate being issued that wasn't desired.
3. For fully short-lived certs (under 7 days), many user-agents don't do revocation checks at all, because that's a similar timeline to our existing revocation technology taking effect. This is a performance win for websites/user-agents. While we advocate for full certificate automation, I recognize there are cases where that's not so easy, and doing a monthly renewal may be much more tractable.
Going to shorter than a few days is a reliability and scale risk. One of the biggest issues with scale today is that Certificate Transparency logs, while providing great visibility into what certs exist (see points 1 and 2), will have to scale up significantly as lifetimes are cut.
Why is this happening now, though? I can't speak for everyone, and this is only my own opinion on what I'm observing, but: One big industry problem that's been going on for the last year or two is that CAs have found themselves in situations where they need to revoke certificates because of issues with those certificates, but customers aren't able to respond on an appropriate timeline. So the big motivation for a lot of the parties here is to get these timelines down and really drive a push towards automation.
When I first set up Let's Encrypt I thought I'd manually update the cert once per year. The 90 day limit was a surprise. This blog post helped me understand (it repeats many of your points): https://letsencrypt.org/2015/11/09/why-90-days/
It's a decision by Certificate Authorities, the ones that sell TLS certificate services, and web browser vendors. One benefits from increased demand on their product, while the other benefits by increasing the overhead on the management of their software, which increases the minimum threshold to be competitive.
There are security benefits, yes. But as someone that works in infrastructure management, including on 25 or 30 year old systems in some cases, it's very difficult to not find this frustrating. I need tools I will have in 10 years to still be able to manage systems that were implemented 15 years ago. That's reality.
Doubtless people here have connected to their router's web interface using the gateway IP address and been annoyed that the web browser complains so much about either insecure HTTP or an unverified TLS certificate. The Internet is an important part of computer security, but it's not the only part of computer security.
I wish technical groups would invest some time in real solutions for long-term, limited access systems which operate for decades at a time without 24/7 access to the Internet. Part of the reason infrastructure feels like running Java v1.3 on Windows 98 is because it's so widely ignored.
It is continuously frustrating to me to see the arrogant dismissiveness which people in charge of such technical groups display towards the real world usage of their systems. It's some classic ivory tower "we know better than you" stuff, and it needs to stop. In the real world, things are messy and don't conform to the tidy ideas that the Chrome team at Google has. But there's nothing forcing them to wake up and face reality, so they keep making things harder and harder for the rest of us in their pursuit of dogmatic goals.
It astounds me that there's no non-invasive local solution for going to my router's (or whatever other appliance's) web page without my browser throwing warnings and calling it evil. Truly a fuck-up (purposeful or not) by all involved in creating the standards. We need local TLS without the hoops.
Simplest possible, least invasive, most secure thing I can think of: QR code on the router with the CA cert of the router. Open cert manager app on laptop/phone, scan QR code, import CA cert. Comms are now secure (assuming nobody replaced the sticker).
The crazy thing? There are already two WiFi QR code standards, but neither includes the CA cert. There's a "Wi-Fi Easy Connect" standard that is intended to secure the network for an enterprise, and there's a random Java QR code library that made up its own standard for just encoding an access point and WPA shared key (and Android and iOS both adopted it, so now it's a de-facto standard).
End-user security wasn't a consideration for either of them. With the former they only cared about protecting the enterprise network, and with the latter they just wanted to make it easier to get onto a non-Enterprise network. The user still has to fend for themselves once they're on the network.
This is a terrible solution. Now you require an Internet connection and a (non-abandoned) third party service to configure a LAN device. Not to mention countless industrial devices where operators would typically have no chance to see QR code.
The solution I just mentioned specifically avoids an internet connection or third parties. It's a self-signed cert you add to your computer's CA registry. 100% offline and independent of anything but your own computer and the router. The QR code doesn't require an internet connection. And the first standard I mentioned was designed for industrial devices.
Not only would that set a questionable precedent if users learn to casually add new trust roots, it would also need support for new certificate extensions to limit validity to that device only. It's far from obvious that would be a net gain for Internet security in general.
It might be easier to extend the URL format with support for certificate fingerprints. It would only require support in web browsers, which are updated much faster than operating systems. It could also be made in a backwards compatible way, for example by extending the username syntax. That way old browsers would continue to show the warning and new browsers would accept the self signed URL format in a secure way.
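For illustration, here's how a browser might parse such a hypothetical pinned-URL syntax. The `algo;digest` convention in the username field is invented for this sketch, not an existing standard:

```python
from urllib.parse import urlsplit

def extract_pin(url: str):
    """Parse a hypothetical pinned-URL syntax that reuses the userinfo
    field, e.g. https://sha256;2f1a9b...@router.local/ and return
    (algo, digest), or None for a plain URL. An old browser would treat
    the pinned form as an ordinary URL with a username and keep warning,
    which gives the backwards compatibility described above."""
    username = urlsplit(url).username
    if username is None or ";" not in username:
        return None
    algo, digest = username.split(";", 1)
    return algo, digest
```

In practice the digest would have to be hex or base64url, since characters like `/` aren't allowed in the userinfo part of a URL.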
Your router should use ACME with a your-slug.network.home domain name (a communal suffix would be nice, but more realistically some vendor-specific domain suffix that you could CNAME), and then you should access it via that, locally. Ideally your router should run split-brain DNS for your network. If you want, you can check a box and make everything available globally via DNS-SD.
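The split-brain part is a one-liner in something like dnsmasq; a sketch, with a placeholder domain and LAN address:

```
# dnsmasq.conf sketch: answer queries for the router's public name with
# its LAN address when resolved locally, regardless of the public zone.
address=/router.your-slug.network.home/192.168.1.1
```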
All my personal and professional feelings aside (they are mixed) it would be fascinating to consider a subnet based TLS scheme. Usually I have to bang on doors to manage certs at the load balancer level anyway.
I’ve actually put a decent amount of thought into this. I envision a Raspberry Pi sized device with a simple front-panel UI. This serves as your home CA. It bootstraps itself with a generated key and root cert and presents on the network using a self-issued cert signed by the bootstrapped CA. It also shows the root key fingerprint on the front panel. On your computer, you go to its web UI and accept the risk, but you also verify the fingerprint of the cert issuer against what’s displayed on the front panel. Once you do that, you can download and install your newly trusted root. Do this on all your machines that want to trust the CA. There’s your root of trust.
Now for issuing certs to devices like your router, there’s a registration process where the device generates a key and requests a cert from the CA, presenting its public key. It requests a cert with a local name like “router.local”. No cert is issued but the CA displays a message on its front panel asking if you want to associate router.local with the displayed pubkey fingerprint. Once you confirm, the device can obtain and auto renew the cert indefinitely using that same public key.
Now on your computer, you can hit local https endpoints by name and get TLS with no warnings. In an ideal world you’d get devices to adopt a little friendly UX for choosing their network name and showing the pubkey to the user, as well as discovering the CA (maybe integrate with dhcp), but to start off you’d definitely have to do some weird hacks.
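The verification step against the front panel is just a fingerprint comparison; a minimal Python sketch (the function names and panel display format here are made up for illustration):

```python
import hashlib
import hmac

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded cert as colon-separated hex,
    i.e. the kind of string a front panel might display."""
    digest = hashlib.sha256(cert_der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def matches_panel(cert_der: bytes, panel_text: str) -> bool:
    """Compare the root cert you downloaded against what the CA device
    shows on its front panel, in constant time."""
    return hmac.compare_digest(fingerprint(cert_der), panel_text.strip().upper())
```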
What can I say, I am a pki nerd and I think the state of local networking is significantly harmed by consumer devices needing to speak http (due to modern browsers making it very difficult to use). This is less about increasing security and more about increasing usability without also destroying security by coaching people to bypass cert checks. And as home networks inevitably become more and more crowded with devices, I think it will be beneficial to be able to strongly identify those devices from the network side without resorting to keeping some kind of inventory database, which nobody is going to do.
It also helps that I know exactly how easy it is to build this type of infrastructure because I have built it professionally twice.
Why should your browser trust the router's self-signed certificate? After you verify that it is the correct cert you can configure Firefox or your OS to trust it.
Because local routers by definition control the (proposed?) .internal TLD, while nobody controls the .local mDNS/Zeroconf one, so the router or any local network device should arguably be trusted at the TLS level automatically.
Training users to click the scary “trust this self-signed certificate once/always” button won’t end well.
Honestly, I'd just like web browsers to not complain when you're connecting to an IP on the same subnet by entering https://10.0.0.1/ or similar.
Yes, it's possible that the system is compromised and it's redirecting all traffic to a local proxy and that it's also malicious.
It's still absurd to think that the web browser needs to make the user jump through the same hoops because of that exceptional case, while having the same user experience as if you just connected to https://bankofamerica.com/ and the TLS cert isn't trusted. The program should be smarter than that, even if it's a "local network only" mode.
I wonder what this would look like: for things like routers, you could display a private root in something like a QR code in the documentation and then have some kind of protocol for only trusting that root when connecting to the router and have the router continuously rotate the keys it presents.
Yeah, what they'll do is put a QR code on the bottom, and it'll direct you to the app store where they want you to pay them $5 so they can permanently connect to your router and gather data from it. Oh, and they'll let you set up your WiFi password, I guess.
I wonder if a separate CA would be useful for non-public-internet TLS certificates. Imagine a certificate that won't expire for 25 years issued by it.
Such a certificate should not be trusted for domain verification purposes, even though it should match the domain. Instead it should be trusted for encryption / stream integrity purposes. It should be accepted on IPs outside of publicly routable space, like 192.168.0.0/16, or link-local IPv6 addresses. It should be possible to issue it for TLDs like .local. It should result in the usual invalid-certificate warning if served off a public internet address.
In other words, it should be handled a bit like a self-signed certificate, only without the hassle of adding your handcrafted CA to every browser / OS.
Of course it would only make sense if a major browser would trust this special CA in its browser by default. That is, Google is in a position to introduce it. I wonder if they may have any incentive though. (To say nothing of Apple.)
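The "only honoured on non-public addresses" rule is easy to express with the stdlib `ipaddress` module; a rough sketch, treating "not globally routable" as the eligibility test:

```python
import ipaddress

def eligible_for_local_cert(addr: str) -> bool:
    """Rough version of the scoping rule described above: the long-lived
    local certificate is only honoured on addresses that aren't globally
    routable (RFC 1918 ranges, loopback, link-local, IPv6 ULAs, ...)."""
    return not ipaddress.ip_address(addr).is_global
```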
But what would be the value of such a certificate over a self-signed one? For example, if the ACME Router Corp uses this special CA to issue a certificate for acmerouter.local and then preloads it on all of its routers, it will sooner or later be extracted by someone.
So in a way, a certificate the device generates and self-signs would actually be better, since at least the private key stays on the device and isn’t shared.
The value: you open such a URL with a bog-standard, just-installed browser, and the browser does not complain about the certificate being suspicious.
The private key of course stays within the device, or anywhere the certificate is generated. The idea is that the CA from which the certificate is derived is already trusted by the browser, in a special way.
Compromise one device, extract the private key, have a "trusted for a very long time" cert that identifies like devices of that type, sneak it into a target network for man in the middle shenanigans.
A domain validated secure key exchange would indeed be a massive step up in security, compared to the mess that is the web PKI. But it wouldn't help with the issue at hand here: home router bootstrap. It's hard to give these devices a valid domain name out of the box. Most obvious ways have problems either with security or user friendliness.
Frankly, unless 25 and 30 year old systems are being continually updated to adhere to newer TLS standards then they are not getting many benefits from TLS.
Practically, the solution is virtual machines with the compatible software you'll need to manage those older devices 10 years in the future, or run a secure proxy for them.
Internet routers are definitely one of the worst offenders because originating a root of trust between disparate devices is actually a hard problem, especially over a public channel like wifi. Generally, I'd say the correct answer to this is that wifi router manufacturers need to maintain secure infrastructure for enrolling their devices. If manufacturers can't bother to maintain this kind of infrastructure then they almost certainly won't be providing security updates in firmware either, so they're a poor choice for an Internet router.
Or, equivalently, it's being pushed because customers of "big players", of which there are a great many, are exposed to security risk by the status quo that the change mitigates.
It might in theory but I suspect it's going to make things very very unreliable for quite a while before it (hopefully) gets better. I think probably already a double digit fraction of our infrastructure outages are due to expired certificates.
And because of that it may well tip a whole class of uses back to completely insecure connections because TLS is just "too hard". So I am not sure if it will achieve the "more secure" bit either.
It makes systems more reliable and secure for system runners who can leverage automation, for whatever reason. By the same token, it adds a lot of barriers for things like embedded devices, learners, etc. that might not be able to automate certificate renewal.
Putting a manually generated cert on an embedded device is inherently insecure, unless you have complete physical control over the device.
And as mentioned in other comments, the revocation system doesn't really work, and reducing the validity time of certs reduces the risks there.
Unfortunately, there isn't really a good solution for many embedded and local network cases. I think ideally there would be an easy way to add a CA that is trusted for a specific domain, or local ip address, then the device can generate its own certs from a local ca. And/or add trust for a self-signed cert with a longer lifetime.
This is a bad definition of security, I think. But you could come up with variations here that would be good enough for most home network use cases. IMO, being able to control the certificate on the device is a crucial consumer right
There are many embedded devices for which TLS is simply not feasible. For remote sensing, when you are relying on battery power and need to maximise device battery life, then the power budget is critical. Telemetry is the biggest drain on the power budget, so anything that means spending more time with the RF system powered up should be avoided. TLS falls into this category.
Unless I misunderstood, GP mentions that the problem stems from WebPKI's central role in server identity management. Think of these cert lifetimes as forcefully being signed out after 47 days of being signed in.
> easier for a few big players in industry
Not necessarily. As OP mentions, more certs mean bigger CT logs, and more frequent renewals mean more load. Like with everything else, this seems like a trade-off. Unfortunately for you and me, as customers of cert authorities, 47 days is where the agreed cut-off now is (not 42).
I think a very short lived cert (like 7 days) could be a problem on renewal errors/failures that don't self correct but need manual intervention.
What will Let's Encrypt do with 7-day certs? Will it renew them every day (6 days of reaction time) or every 3 days (4 days of reaction time)? Not every org is set up for 24/7 staffing; some people go on holidays, some public holidays extend into long weekends, etc. :) I would argue that it would be a good idea to give people a full week to react to renewal problems. That seems impossible for short-lived certs.
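The reaction-time arithmetic above can be sketched as follows (a hypothetical helper, assuming a fixed renewal cadence):

```python
from datetime import timedelta

def reaction_window(lifetime_days: int, renew_every_days: int) -> timedelta:
    """How long humans have to notice and fix a broken renewal before an
    outage: after the first failed renewal attempt, the previously issued
    cert still has (lifetime - renewal interval) of validity left."""
    return timedelta(days=lifetime_days - renew_every_days)
```

With 7-day certs, daily renewal leaves 6 days to react and renewing every 3 days leaves only 4; neither gives you the full week argued for above. Today's common pattern of renewing 90-day certs at 60 days leaves a 30-day window.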
There's no such thing as an ideal world, just the one we have.
Let's Encrypt was founded with a goal of rapidly (within a few years) helping get the web to as close to 100% encrypted as we could. And we've succeeded.
I don't think we could have achieved that goal any way other than being a CA.
Sorry was not trying to be snarky, was interested in your answer as to what a better system would look like. The current one seems pretty broken but hard to fix.
In an ideal world where we rebuilt the whole stack from scratch, the DNS system would securely distribute key material alongside IP addresses and CAs wouldn't be needed. Most modern DNS alternatives (Handshake, Namecoin, etc) do exactly this, but it's very unlikely any of them will be usurping DNS anytime soon, and DNS's attempts to implement similar features have been thus far unsuccessful.
People who idealize this kind of solution should remember that by overloading core Internet infrastructure (which is what name resolution is) with a PKI, they're dooming any realistic mechanism that could revoke trust in the infrastructure operators. You can't "distrust" .com. But the browsers could distrust Verisign, because Verisign had competitors, and customers could switch transparently. Browser root programs also used this leverage to establish transparency logs (though: some hypothetical blockchain name thingy could give you that automatically, I guess; forget about it with the real DNS though).
.com can issue arbitrary certificates right now, they control what DNS info is given to the CAs. So I don't quite see the change apart from people not talking about that threat vector atm.
This is a bad faith argument. Whatever measures Google takes to prevent this (certificate logs and key pinning) could just as well be utilized if registrars delegated cryptographic trust as they delegate domains.
It is also true that these contemporary prevention methods only help the largest companies which can afford to do things like distributing key material with end user software. It does not help you and me (unless you have outsourced your security to Google already, in which case there is the obvious second hand benefit). Registrars could absolutely help a much wider use of these preventions.
There is no technical reason we don't have this, but this is one area where the interest of largest companies with huge influence over standards and security companies with important agencies as customers all align, so the status quo is very slow to change. If you squint you can see traces of this discussion all the way from IPng to TLS extensions, but right now there is no momentum for change.
It's easy to tell stories about shadowy corporate actors retarding security on the Internet, but the truth is just that a lot of the ideas people have about doing security at global Internet scale just don't pan out. You can look across this thread to see all the "common sense" stuff people think should replace the WebPKI, most of which we know won't work.
Unfortunately, when you're working at global scale, you generally need to be well-capitalized, so it's big companies that get all the experience with what does and doesn't work. And then it's opinionated message board nerds like us that provide the narratives.
This is a great point. For all of the "technically correct" arguments going on here, this one is the most practical counterpoint. Yes, in theory, Verisign (now Symantec) could issue some insane wildcard Google.com cert and send the public-private key pair to you personally. In practice, this would never happen, because it is a corporation with rules and security policies that forbid it.
Thinking deeper about it: Verisign (now Symantec) must have some insanely good security, because every black-hat nation-state actor would love to break into their cert issuance servers and export a bunch of legit signed certs to run man-in-the-middle attacks against major email providers. (I'm pretty sure this already happened in the Netherlands.)
This isn't about the cert issuance servers, but DNS servers. If you compromise DNS then just about any CA in the world will happily issue you a cert for the compromised domain, and nobody would even be able to blame them for that because they'd just be following the DNS validation process prescribed in the BRs.
> every black-hat nation-state actor would love to break into their cert issuance servers and export a bunch of legit signed certs to run man-in-the-middle attacks
I might be misremembering but I thought one insight from the Snowden documents was that a certain three-letter agency had already accomplished that?
This was DigiNotar. The breach generated over 500 certificates, including certificates for Google, Microsoft, MI6, the CIA, TOR, Mossad, Skype, Twitter, Facebook, Thawte, VeriSign, and Comodo.
That only works for high profile domains, a CA can just issue a cert, log it to CT and if asked claim they got some DNS response from the authoritative server.
Then it's a he said she said problem.
Or is DNSSEC required for DV issuance? If it is, then we already rely on a trustworthy TLD.
I'm not saying there isn't some benefit in the implicit key mgmt oversight of CAs, but as an alternative to DV certs, just putting a pubkey in dnssec seems like a low effort win.
It's been a long time since I've done much of this though, so take my gut feeling with a grain of salt.
This isn't a contest of two vibes, one pro-DNSSEC and one anti-. You can just download the Tranco list and run a for loop over it checking for DS records. DNSSEC adoption in North America has actually fallen sharply since 2023 (though it's starting to tick back up again), and every chart you'll find shows a pronounced plateauing effect across all TLDs --- damning because the point at which the plateau flattens is such a low percentage.
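The for loop really is that simple; a sketch that shells out to `dig` (fetching the Tranco list itself is left out, and you can inject a stub lookup to run offline):

```python
import subprocess

def ds_adoption(domains, lookup=None):
    """Fraction of `domains` that have a DS record in the parent zone,
    i.e. that have DNSSEC enabled. By default shells out to
    `dig +short DS <domain>`; pass a stub `lookup` to test offline."""
    if lookup is None:
        lookup = lambda d: subprocess.run(
            ["dig", "+short", "DS", d],
            capture_output=True, text=True).stdout
    signed = sum(1 for d in domains if (lookup(d) or "").strip())
    return signed / len(domains)
```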
Right. 'You can't "distrust" .com.' is probably not true in that situation. (If it were actually true, then "what happens" would be absolutely nothing.) I think you're undermining your own point.
A certificate authority is an organisation that pays good money to make sure that their internet connection is not being subjected to MITMs. They put vastly more resources into that than you can.
A certificate is evidence that the server you're connected to has a secret that was also possessed by the server that the certificate authority connected to. This means that whether or not you're subject to MITMs, at least you don't seem to be getting MITMed right now.
The importance of certificates is quite clear if you were around on the web in the last days before universal HTTPS became a thing. You would connect to the internet, and you would somehow notice that the ISP you're connected to had modified the website you're accessing.
> pays good money to make sure that their internet connection is not being subjected to MITMs
Is that actually true? I mean, obviously CAs aren't validating DNS challenges over coffee shop Wi-Fi so it's probably less likely to be MITMd than your laptop, but I don't think the BRs require any special precautions to assure that the CA's ISP isn't being MITMd, do they?
Nobody has really had to pay for certificates for quite a number of years.
What certificates get you, as both a website owner and user, is security against man-in-the-middle attacks, which would otherwise be quite trivial, and which would completely defeat the purpose of using encryption.
Much as I like the idea of DANE, it solves nothing by itself: you still need to protect the zone from tampering by signing it. Right now, the dominant way to do that is DNSSEC, though DNSCurve is a possible alternative, even if it doesn't solve the exact same problem. For DANE to be useful, you'd first need to get that set up on the domain in question, and the effort to get that working is far, far from trivial. Even then, the process is so error prone and brittle that you can easily end up making a whole zone unusable.
Further, all you've done is replace one authority (the CA authority) with another one (the zone authority, and thus your domain registrar and the domain registry).
Certificate pinning is probably the most widely known way to get a certificate out there without relying on live PKI. However, certificate pinning just shifts the burden of trust from runtime to install time, and puts an expiration date on every build of the program. It also doesn't work for any software that is meant to access more than a small handful of pre-determined sites.
Web-of-trust is a theoretical possibility, and is used for PGP-signed e-mail, but it's also a total mess that doesn't scale. Heck, the best way to check the PGP keys for a lot of signed mail is to go to an HTTPS website and thus rely on the CAs.
DNSSEC could be the basis for a CA-free world, but it hasn't achieved wide use. Also, if used in this way, it would just shift the burden of trust from CAs to DNS operators, and I'm not sure people really like those much better.
Certificate pinning is suicide in an environment where certificates expire in max 47 days. You'll have to rebuild and push your app at least that often and probably sync your devops with your certificate management.
Only if you pin a CA/Browser Forum-approved certificate. But you don't have to do that.
You can instead pin a self-signed or private CA-signed certificate, and then it can have the maximum lifetime you're comfortable with and that the software supports. A related option is to ship your app with a copy of your private CA certificate(s) and configure the HTTPS client to trust those in addition to, or instead of, the system-provided CAs.
I'm not sure how viable these approaches are on more locked-down platforms (like smartphones) and, even if they are viable today, whether they will remain viable in the future. It's also only good for full apps; anything that uses the system browser has to stick with the system CAs.
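For a non-browser client, pinning by leaf fingerprint can look roughly like this with Python's stdlib `ssl` (a sketch, not a hardened implementation; the pinned digest would come from your own deployment):

```python
import hashlib
import hmac
import socket
import ssl

def pin_matches(cert_der: bytes, pinned_sha256: bytes) -> bool:
    """Constant-time check of a leaf certificate (DER bytes) against a pin."""
    return hmac.compare_digest(hashlib.sha256(cert_der).digest(), pinned_sha256)

def connection_matches_pin(host: str, pinned_sha256: bytes, port: int = 443) -> bool:
    """Connect, skip chain and hostname validation entirely, and rely on
    the pin instead: fetch the presented leaf cert and compare digests."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # trust comes from the pin, not the chain
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return pin_matches(der, pinned_sha256)
```

Note this is exactly the pattern that breaks when the pinned cert rotates, which is the 47-day problem discussed above; pinning your own long-lived private CA instead of the leaf avoids that.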
How relevant is that, since we don't live in such a world? Unless you have a way to get to such a world, of course, but even then CAs would need to keep existing until you've managed to bring the ideal world about. It would be a mistake to abolish them first and only then start on idealizing the world.
What alternatives come to mind when asking that question? Not being in the PKI world directly, web of trust is what comes to mind, but I'm curious what your question hints at.
I honestly don’t know enough about it to have an opinion. I have vague thoughts that DNS is the weak point anyway for identity, so can’t certs just live there instead? But I’m sure there are reasons (historical and practical).
I love the push that LE puts on industry to get better.
I work in a very large organisation and I just don't see them being able to move to automated TLS certificates for their self-managed subdomains, inspection certificates, or anything else for that matter. It will be interesting to see how short-lived certs are adopted in the future.
Based on that Wikipedia article, no. This is just more of the same friendless PKI geeks making the world unnecessarily more complicated. The only other people that benefit are the certificate management companies that sell more software to manage these insane changes.
Could you explain why Let's Encrypt is dropping OCSP stapling support, instead of keeping it only for must-staple certificates and letting those of us who want must-staple deal with the headaches? I believe that resolving the privacy concerns raised about OCSP did not require eliminating must-staple.
Must-staple has almost zero adoption. The engineering cost of supporting it for a feature that is nearly unused just isn’t there.
We did consider it.
As CAs prepare for post-quantum in the next few years, it will become even less practical as there is going to be pressure to cut down the number of signatures in a handshake.
> Must-staple has almost zero adoption. The engineering cost of supporting it for a feature that is nearly unused just isn’t there.
> We did consider it.
That is unfortunate. I just deployed a web server the other day and was thrilled to deploy must-staple from Let's Encrypt, only to read that it was going away.
> As CAs prepare for post-quantum in the next few years, it will become even less practical as there is going to be pressure to cut down the number of signatures in a handshake.
Please delay the adoption of PQAs for certificate signatures at Let's Encrypt as long as possible. I understand the concern that a hypothetical quantum machine with tens of millions of qubits capable of running Shor's algorithm to break RSA and ECC keys might be constructed. However, "post-quantum" algorithms are inferior to classical cryptographic algorithms in just about every metric as long as such machines do not exist. That is why they were not even considered when the existing RSA and ECDSA algorithms were selected before Shor's algorithm was a concern. There is also a real risk that they contain undiscovered catastrophic flaws that will be found only after adoption, since we do not understand their hardness assumptions as well as we understand integer factorization and the discrete logarithm problem. This has already happened with SIKE and it is possible that similarly catastrophic flaws will eventually be found in others.
Perfect forward secrecy and short certificate expiry allow CAs to delay the adoption of PQAs for key signing until the creation of a quantum computer capable of running Shor's algorithm on ECC/RSA key sizes is much closer. As long as certificates expire before such a machine exists, PFS ensures no risk to users, assuming the key agreement algorithms are secure. Hybrid schemes are already being adopted to do that. There is no quantum Moore's law that makes it a foregone conclusion that a quantum machine able to use Shor's algorithm to break modern ECC and RSA will ever be created. If such a machine is never made (due to the sheer difficulty of constructing one), early adoption in key signature algorithms would make everyone suffer from the use of objectively inferior algorithms for no actual benefit.
If the size of key signatures with post quantum key signing had been a motivation for the decision to drop support for OCSP must-staple and my suggestion that adoption of post quantum key signing be delayed as long as possible is in any way persuasive, perhaps that could be revisited?
Finally, thank you for everything you guys do at Let's Encrypt. It is much appreciated.
All of that is in case the previous owner of the domain attempts a MITM attack against a client of the new owner, which is a remote scenario. In fact, has it happened even once?
Realistically, how often are domains traded and then suddenly put to legitimate use (that isn't some domain-parking scam) such that (1) and (2) are actual arguments? Lol
Domain trading (regardless of whether the previous use was legitimate) is only one example, not the sole driving argument for why the revocation system is in place or isn't perfectly handled.
The whole "customer is king" doesn't apply to something as critical as PKI infrastructure, because it would compromise the safety of the entire internet. Any CA not properly applying the rules will be removed from the trust stores, so there can be no exceptions for companies who believe they are too important to adhere to the contract they signed.
How would a CA not being able to contact some tiny customer (surely the big ones all can and do respond in less than 90 days?) compromise the safety of the entire internet?
And if the safety of the entire internet is at risk, why is 47 days an acceptable duration for this extreme risk, but 90 days is not?
> surely the big ones all can and do respond in less than 90 days?
LOL. Old-fashioned enterprises are the worst at "oh, no, can't do that, we need months of warning to change something!", while also handling critical data. A major event in the CA space last year was a health-care company getting a court order against a CA to prevent it from revoking a cert that, under the rules for CAs, it had to revoke. In the end they got a few days' extension, everyone grumbled, and the CA was told to please write its customer contracts more clearly, but the idea is out there, and nobody likes CAs doing things they are not supposed to, even if through external force.
One way to nip that in the bud is making sure that even if you get your court order preventing the CA from doing the right thing, your certificate will expire soon anyway, so "we are too important to have working IT processes" doesn't work anymore.
I have a feeling it'll eventually get down even lower. In 2010 you could pretty easily get a cert for 10 years. Then 5 years. Then 3 years. Then 2 years. Then 1 year. Then 3 months. Now less than 2 months.
I can’t see freely available intermediates ever happening. The first three reasons I can think of are here, but there’s more I’m sure.
1. There's no way to enforce what the issued end-entity certificates look like, beyond name constraints. X.509 is an overly flexible format, and a lot of the ecosystem depends on only a subset of it being used, which is enforced by policy on CAs.
2. Hiding private domains wouldn’t be any different than today. CT requirements are enforced by the clients, and presumably still would be. Some CAs support issuing certs without CT now, but browsers won’t accept them.
3. Allowing effectively unlimited issuance would likely overwhelm CT, and the whole ecosystem would collapse.
That's a fair point, though CT is only strictly enforced by Chromium-based browsers at the moment. There would need to be some resolution to this issue, but the CT problem doesn't seem to be insurmountable in a 5 year timeframe if the relevant parties are motivated to solve it.
I think the main reason is that it allows easier access to Tor hidden services with a "regular" web browser. Consider a wifi network that exposes .onion domains via normal DNS, or a VPN, or other similar mechanisms. It's not as good as Tor Browser, but it may be a lot more accessible.
Right; it's imperfect, as everything is. But of course, it's also a huge bit of leverage for the root programs (more accurately, a loss of leverage for CAs) in killing misbehaving CAs; those programs can't be blackmailed with huge numbers of angry users anymore, only a much smaller subset of users. Seems like a good thing, right?
Any leverage against the root programs in the form of angry users is also leverage against the browser devs, is it not? Either way you look at it, the root programs/browser devs receive less pushback and gain more autonomy.
My biggest concern is long-term censorship risk. I can imagine the list of trusted CAs getting whittled down over the next several decades (since it's a privilege, not a right!), until it's small enough for all of them to collude or be pressured into blocking some particular person or organization.
Yes. It would add another potential point of failure to the process of publishing content on the Web, if such a scenario came to pass. (Of course, the best-case scenario is that we retain a healthy number of CAs who can act independently, but also maintain compliance indefinitely.)
To add to this, the EU's 2021 eIDAS (the one with the mandatory trust list) was a response to a similar lack of availability. Contrary to what most HNers instinctively thought, it wasn't about interception: the EC was annoyed that none of the root programs is based in the EU, which causes 100% of trust decisions to be made on the other side of the Big Water. The EC felt a need to do something about it; given that TLS certificates are needed for modern business, healthcare, finance, etc., it saw this as an economic sovereignty issue.
My point is, a lack of options, aka availability, is (or may be perceived as) dangerous on multiple layers of WebPKI.
No, eIDAS 2.0 was an attempt to address the fact that the EU is not one market in ecommerce, because EU citizens don't like making cross-border orders. The approach to solving this was to attach identity information to sites, à la EV certificates. The idea for this model came from the trust model for digital document signatures in PDFs.
That's an orthogonal problem. eIDAS had to solve many problems to create a full solution. You're right that we have many TSPs (aka CAs), and NABs as well. The EU has experience running a continent-wide PKI for e-signatures that are accepted in other countries. But there were no EU-based root programs in WebPKI; the root programs were essentially unaccountable to the EU, yet a critical link in the chain of an end-to-end solution. There was no guarantee that browser vendors wouldn't establish capricious requirements for joining root programs (i.e. ones that would be incompatible with EU law and would exclude European TSPs). Therefore the initial draft stated that browsers must import the EU trust list wholesale and are prohibited from adding their own requirements.
(That of course had to be amended, because some of those additional requirements were actually good ideas, like CT; there should be room for legitimate innovation, like shorter certs; and it's also OK for browsers to do sufficient oversight of TSPs breaking the rules, like the ongoing delayed-revocation ("delrev") problem.)
1) From a eurocrat's POV, why build a browser when you can regulate the existing ones instead? The EU's core competence is regulating, not building, and they know it.
2) You don't actually need to build a browser to achieve this goal; you just need a root program, and a viable (to some extent) substitute already exists, cf. all the "Qualified" stuff in EU lingo. So again, why do the work and risk spectacular failure if you don't need to.
3) Building an alternative browser for EU commerce that you'd have to use for the single market, but that likely wouldn't work for webpages from other countries, would be a bad user experience. I know what I'm saying: I use Qubes and I've got different VMs with separate browser instances for banking etc. I'm pretty sure most people wouldn't like a similar setup even with a working clipboard.
There are things you can't achieve by regulation, e.g. Galileo, the GPS replacement, which you can't regulate into existence. Or national clouds: GDPR, DSA, et al. won't magically spawn a fully loaded colo. Those surely need to be built, but another Chromium derivative would serve no purpose.
If you're talking about technical capability, yeah, no contest here.
But if the EC can legislate e-signatures into existence, then it follows that they can also legislate browsers into accepting Qualified certs, can they not?
Mind you, they did exactly that with document signing. They made a piece of paper say three things: 1) e-signatures made by private keys matching Qualified™ Certificates are legally equivalent to written signatures, 2) all authorities are required to accept e-signatures, 3) here's how to get qualified certificates.
Upon reading this enchanted scroll, 3) magically spawned and went off to live its own way. ID cards issued here to every citizen are smartcards preloaded with private keys, for which you can download an X.509 cert good for everyday use. The hard part was 2), because we needed to equip and retrain every single civil servant; a big number of them were older people not happy to change the way they work. But it happened.
So if the hard part is building and the easy part is regulating, and they have prior art already exercised, then why bother competing with Google, on a loss leader, with taxpayer funds? And with a non-technical, regulatory feature, which would most likely cause the technical aspects like performance and plugin availability to be neglected.
If there were just one CA, there would be no CA/Browser Forum and users would have no leverage. This is the situation in DNSSEC. I don't think it's that bad, as one can always run one's own root zone (".") and use QName minimization, but still, com. and such TLDs would be very powerful intermediate CAs themselves. And yet I still like DNSSEC/DANE, as you know, except maybe I'm liking the DANE+WebPKI combo more. And I don't fear "too few CAs" either, because the way I figure it, if the TLAs compromise one CA, they can and will compromise all CAs.
Well, it's u/LegionMammal978's novel take, I just riffed on it.
> Personally: I'm for anything that takes leverage away from the CAs.
You can automate trusted third parties all you want, but in the end you'll have trusted third parties one way or another (trust meshes still have third parties), and there. will. be. humans. involved.
Non-browser clients shouldn't be expected to crib browser trust decisions. Also, the (presumably?) default behavior for a non-browser client that consumes a browser root store but is unaware of the constraint behavior is to not enforce the constraint. So they would effectively continue to trust the CA until it is fully removed, which is probably the correct decision anyway.
To me that's an odd position to take; ultimately, if the user is using Mozilla's root CA list, then they're trusting Mozilla to determine which certs should be valid. If non-browser programs using the list are trusting certs that Mozilla says shouldn't be trusted, then that's not a good result.
Now of course the issue is that the information can't be encoded into the bundle, but I'm saying that's a bug and not a feature.
Mozilla’s list is built to reflect the needs of Firefox users, which are not the same as the needs of most non-browser programs. The availability/compatibility vs security tradeoff is not the same.
> the information can't be encoded into the bundle
Can it not? It seems like this SCTNotAfter constraint is effectively an API change to the root CA list, which downstream users have to incorporate in some way if they want their behavior to remain consistent with upstream browsers.
That doesn't necessarily mean full CT support – they might just as well choose to completely distrust anything tagged SCTNotAfter, or to ignore it.
That said, it might be better to intentionally break backwards compatibility as a forcing function, so downstream clients make that decision deliberately, as failing open doesn't seem safe here. But I'm not sure the Mozilla root program list was ever intended to be consumed by non-browser clients in the first place.
> That doesn't necessarily mean full CT support – they might just as well choose to completely distrust anything tagged SCTNotAfter, or to ignore it.
That's what the blog post I linked in the top comment suggests is the "more disruptive than intended" approach. I don't think it's a good idea. Removing the root at `SCTNotAfter + max cert lifetime` is the appropriate thing.
There's an extra issue of not-often-updated systems too, since now you need to coordinate a system update at the right moment to remove the root.
> Removing the root at `SCTNotAfter + max cert lifetime` is the appropriate thing.
Note that Mozilla supports not SCTNotAfter but DistrustAfter, which relies on the certificate's Not Before date. Since this provides no defense against backdating, it would presumably not be used with a seriously dangerous CA (e.g. DigiNotar). This makes it easy to justify removing roots at `DistrustAfter + max cert lifetime`.
On the other hand, SCTNotAfter provides meaningful security against a dangerous CA. If Mozilla begins using SCTNotAfter, I think non-browser consumers of the Mozilla root store will need to evaluate what to do with SCTNotAfter-tagged roots on a case-by-case basis.
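The arithmetic for a root-store consumer can be made concrete. A sketch, assuming a 398-day maximum certificate lifetime (the current baseline; the ballot discussed elsewhere in the thread will shrink it) and hypothetical function names:

```python
from datetime import datetime, timedelta

# Current baseline-requirements maximum validity; an assumption for illustration.
MAX_CERT_LIFETIME = timedelta(days=398)

def distrust_after_removal_date(distrust_after: datetime) -> datetime:
    # DistrustAfter compares against the cert's Not Before date, so any cert
    # the constraint accepts was (nominally) issued before the cutoff; after
    # cutoff + max lifetime, no accepted cert can still be within its validity.
    return distrust_after + MAX_CERT_LIFETIME

def sct_not_after_removal_date(sct_not_after: datetime) -> datetime:
    # SCTNotAfter compares against CT log timestamps, which a CA cannot
    # backdate, so the same arithmetic holds with a stronger guarantee.
    return sct_not_after + MAX_CERT_LIFETIME

cutoff = datetime(2025, 11, 30)
print(distrust_after_removal_date(cutoff))
```

The difference is not in the date math but in the trust model: with DistrustAfter, the removal date is only safe if the CA never backdated a Not Before; with SCTNotAfter, it holds even against a misbehaving CA, as long as clients require CT.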
> But I'm not sure if the Mozilla root program list ever intended to be consumed by non-browser clients in the first place.
Yet that is the thing that goes around under the name "ca-certificates" and practically all non-browser TLS on Linux everywhere is rooted in it! Regardless of what the intent was, that is the role of the Mozilla CA bundle now.
I wonder how much I should be concerned about Mozilla's trust store's trustworthiness, given their data grab with Firefox? I've switched to LibreWolf over that (more in protest than thinking I'm personally being targeted). But I'm pretty sure LibreWolf will still be using the Mozilla trust store?
I haven't thought through enough to understand the implications of the moneygrubbing AI grifters in senior management positions at Mozilla being in charge of my TLS trust store, but I'm not filled with joy at the idea.
WebPKI is a maze of fiefdoms controlled by a small group of power tripping little Napoleons.
Certificate trust really should be centralized at the OS level (like it used to be), not every browser having its own incompatible trusted roots. It's arrogance at its worst and it helps nobody.
When are you imagining this "used to be" true? This technology was invented about thirty years ago by Netscape, which no longer exists but in effect continues as Mozilla. They didn't write an operating system (then or now), so it's hard to see how this was "centralized at the OS level".
It was true for at least Chrome until around 2020: Chrome used to not ship with any trusted CA list and default to the OS for that.
Firefox has their own trusted list, but still supports administrator-installed OS CA certificates by default, as far as I know (but importantly not OS-provided ones).
Ah, so you're referring to Chrome, which shipped Chrome 1.0 at the tail end of 2008, and was in effect exercising normal root programme behaviour by the time I was on the scene in 2015 (but presumably also before).
So your presumptions (no, Chrome did not have its own root program in 2015 or before; they announced [1] it in 2020, as I wrote, and blocking individual CAs out of it on a case-by-case basis does not quite make a full root program) and your unwillingness to spend one minute fact-checking them are a lack of perspective on my side?
My point was that Chrome (arguably not an insignificant browser) did use the OS’s root CA lists for 12 years (or 7, if you really want to count individual CA bans as a program; arguably both not an insignificant time span).
I think this is, at best, a lack of understanding of what's really going on, and at worst just obstinacy.
As Ryan explains this was basically the same thing with new paint on it, it reminds me of the UK's "Supreme Court". Let me digress briefly to explain:
On paper historically the UK didn't have an independent final court of appeal, significant appeals of the law would end up at the Lords of Appeal in Ordinary ("Law Lords" for short) who were actual Lords, in principle unelected legislators from the UK's upper chamber. Hundreds of years ago they really were just lords, not really judges at all. The US in contrast notionally has an independent final court of appeal, completely independent from the rest of the US government, much better. Except in reality the Law Lords were completely independent, chosen among the country's judges by independent hiring process while the US Supreme Court are just appointed hacks for the President's party, not especially competent or effective jurists but loyal to party values.
So the fresh coat of paint gave the UK a "Supreme Court" in 2009 by designating a building and sending exactly the same factually independent jurists to go work in that building with their independent final court of appeal and stop requiring them to notionally be Lords (although in practice they are all still granted the title "Lord") who met in some committee room in the Palace of Westminster. The thin appearance changed to match the ideals the Americans never met, the reality was exactly the same as before.
And that's what Ryan is writing about in that post. In theory there was now a Chrome root programme; in practice there already was one. In principle you now need a sign-off from Ryan's team; in practice you already did. In theory you're now discussing inclusion on m.d.s.policy because you want Chrome trust programme approval; in practice that's already a big part of why you're there.
When Chrome 1.0 shipped it was important to deliver compatibility to gain market share. Soon it didn't matter, they could and did choose to depart from that compatibility to distinguish Chrome as better than the alternatives.
7 years for one browser compared to 30 years for all browsers does I'm afraid seem a long way from "like it used to be" and much more "how I wrongly thought it should work".
HTTPS certificate trust is basically the last thing I think about when I choose an OS. (And for certain OSes I use, I actively don't trust their authors/owners.)
The only thing that is genuinely weird is having four different certificate stores on a system, each with different trusted roots, because the cabals of man-children that control the WebPKI can't set aside their petty disagreements and reach consensus on anything.
Which makes sense, because that would require them all to relinquish some power to their little corner of the Internet, which they are all unwilling to do.
This fuckery started with Google: dissatisfied with not having total control over the entire Internet, they decided to rewrite the book on certificate trust in Chrome only (turns out that after capturing the majority browser market share and holding a de facto monopoly, you can do whatever you want).
I don't blame Mozilla for having their own roots, because that is probably just incompetence on their part. More likely they traded figuring out how to interface with OS crypto APIs for upkeep of 30-year-old Netscape cruft. Anyone who has had to maintain large-scale deployments of Firefox understands this lament and knows what a pain in the ass it is.
that’s not what he meant, and you know it. he means use the OS store (the one the user has control over), instead of having each app do its own thing (where the user may or may not have control, and even if he does have it, now has to tweak settings in a dozen places instead of one). they try to pull the same mess with DNS (i.e. Mozilla’s DoH implementation)
> I don't understand, because the user has control over the browser store too.
i already mentioned that ("may or may not"). former or latter, per-app CA management is an abomination from security and administrative perspectives. from the security perspective, abandonware (i.e. months old software at the rate things change in this business) will become effectively "bricked" by out-of-date CAs and out-of-date revocation lists, forcing the users to either migrate (more $$$), roll with broken TLS, or even bypass it entirely (more likely); from the administrative perspective, IT admins and devops guys will have to wrangle each application individually. it raises the hurdle from "keep your OS up-to-date" to "keep all of your applications up-to-date".
> As an erstwhile pentester
exactly. you're trying to get in. per-app config makes your life easier. as an erstwhile server-herder, i prefer the os store, which makes it easier for me to ensure everything is up-to-date, manage which 3rd-party CAs i trust & which i don't, and cut 3rd-parties out-of-the-loop entirely for in-house-only applications (protected by my own CA).
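as a concrete illustration of the single-store model: Python's stdlib will report where its OpenSSL looks for trust anchors and which environment variables override that location (exact paths vary by distro and build; the Debian path in the comment is an example, not a guarantee):

```python
import ssl

# Where this interpreter's OpenSSL looks for trust anchors by default,
# and which environment variables override that location.
paths = ssl.get_default_verify_paths()
print(paths.openssl_cafile_env)  # override variable, i.e. SSL_CERT_FILE
print(paths.openssl_cafile)      # compiled-in default bundle path

# create_default_context() loads roots from those locations, so pointing
# SSL_CERT_FILE at the OS-managed bundle (e.g. Debian's
# /etc/ssl/certs/ca-certificates.crt) gives every such program one store,
# managed in one place, instead of per-app trust settings.
ctx = ssl.create_default_context()
```

the same SSL_CERT_FILE / SSL_CERT_DIR variables are honored by most OpenSSL-linked tools, which is what makes "keep the OS store up to date" a single admin task.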
It's a good question! When you're testing websites, you've generally got a browser set up with a fake root cert so you can bypass TLS. In that situation, you want one of your browsers to have a different configuration than your daily driver.
Really depends on who you ask. You still need v4 to be "globally reachable", but v6 is optional.
AWS seems to finally be feeling the pinch of IPv4 exhaustion and is pushing v6 support everywhere now, and starting to charge for v4.
Mobile networks already have, and many are natively IPv6, with NAT64/464XLAT or other tech for bridging to v4. Apple's App store requires apps to support IPv6-only networks.
CDNs and clouds etc mean that websites don't even really need to worry about their own IP allocation, and just let their provider figure out exposing things worldwide.
> Apple's App store requires apps to support IPv6-only networks.
I read that and thought "huh, is that recent?" and found posts about it that were 9 years old. I guess apps just have to work on an IPv6-only network, but I'm honestly surprised mine do. I don't test on IPv6, my home network has it disabled, and most of my servers don't have anything set up for IPv6 that I know of. Odd.
For longer, I expect. Email has long been partially centralised: for most real people and a lot of systems, mail goes out through a specific host (or a small number of hosts) on the edge of their network or completely outside it (individuals sending via services like Gmail, systems using services like SendGrid, and so forth), so the need to push for IPv6 is less apparent for mail sending than for a number of other things. There are orders of magnitude fewer hosts sending mail than, say, making HTTP(S) requests.