Should you use Let's Encrypt for internal hostnames? (shkspr.mobi)
238 points by edent on Jan 5, 2022 | 190 comments



Several comments here mention running your own CA. Maybe that could be a signed intermediate CA with the Name Constraint extension [0] (and critical bit?), but one roadblock on this path is that allegedly Apple devices do not support that extension (edit: actually this was fixed! see reply). You there, @ LetsEncrypt?

To address the article: a recent related discussion, "Analyzing the public hostnames of Tailscale users" [1], indicates in its title one reason you might not want to use LE for internal hostnames. There was a discussion about intermediate CAs there as well [2], with some more details.

[0]: http://pkiglobe.org/name_constraints.html

[1]: https://news.ycombinator.com/item?id=29579806

[2]: https://news.ycombinator.com/item?id=29614971


Apple devices support the Name Constraint extension just fine. I've deployed a bunch of internal CAs with Name Constraints, and Apple's macOS/iOS/iPadOS block certs that are signed for anything outside of the constraints. As is intended.

From what I can find online, the Apple bug was fixed in macOS 10.13.3. [1]

[1]: https://security.stackexchange.com/questions/95600/are-x-509...


That's great to hear! I'd only heard secondhand, so I updated my comment to reflect this detail.

Also I found https://bettertls.com publishes details about which TLS features are supported on different platforms over time, and it appears that the latest test in Dec 2021 shows most platforms support name constraints.

With that roadblock evaporated, I think this would be the perfect solution to a lot of organization- and homelab-level certificate woes. I'd really like to hear from a domain expert on how feasible it would be to automate this for free public certs, ACME-style.


I'm happy to see that a big name like Netflix is getting behind this. I've been wishing for better Name Constraints support ever since learning how certificates work. Almost every situation where someone currently uses a wildcard could be done better with a name constrained CA cert.

I would love for this to become as widely supported as wildcards so those who choose to use them could do so easily.


Wish it were that simple - ultimately, having a name-constrained and publicly-trusted CA is the same as having any publicly-trusted CA, and comes with a ton of wonderful burdens like audits.

You're essentially running a public CA at that point, and that isn't easy.


This is not a technical limitation though. It's a policy limitation.

In theory, a name-constrained intermediate for `.example.com` has no more authority and poses no greater risk than a wildcard leaf certificate for `*.example.com`. In both cases the private key can be used to authenticate as any subdomain of `example.com`.

But, name constraints are verified by relying parties (the clients and servers that are actually authenticating remote peers using certificates). It's hard to be certain that everything has implemented name constraints properly. This is, ostensibly and as far as I know, the reason CA/Browser forum hasn't allowed name constrained intermediates.

At some point it probably makes sense to just pull the bandaid off.
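
For reference, here's roughly what such a constraint looks like in an OpenSSL extensions config (just a sketch; the section name and domain are placeholders):

  [ v3_constrained_intermediate ]
  basicConstraints = critical, CA:TRUE, pathlen:0
  keyUsage = critical, keyCertSign, cRLSign
  # only allow the intermediate to issue for example.com and its subdomains
  nameConstraints = critical, permitted;DNS:example.com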


CABF and root programs allow NC'd CAs - they're just a pain to operate.

The infra itself, keeping up with compliance and root program changes (which happen with more frequency now!), CT logging, running revocation services (not easy at scale). Plus then things to consider like rotation of the NC'd CA. You'd have to rotate at least once a year, perhaps more often than that, given domain validation validity periods. You'd also likely need to have the chain ('chain' used loosely, we know it's not really a linear chain) be four deep like: root->CA->your NC'd CA->leaf, 'cos the root should be offline, and unless you're only doing these in low volume I assume you'd want to automate issuance rather than gather your quorum of folk to sign from the offline roots. That might not be an issue for many, but it certainly is for some.

(Full disclosure: I've worked for a CA for almost 2 decades and have pretty intimate knowledge of this area, sadly.)


It's interesting to hear that NC'd CAs are already allowed today, but most in this thread are aiming at "should" not "can". The point is that a 90-day, name-constrained CA has no more authority than a 90-day wildcard cert if both are issued via DNS-01 validation (modulo nested subdomains), so it shouldn't be subject to the same regulations as an unrestricted public CA (which require CT logging, audits, revocation services, security requirements, etc. as you enumerated), or really any more restrictions than those necessary to be issued a wildcard cert. This would be very beneficial for private networks and would have even better security properties than wildcards. Is there any reason why this shouldn't be possible?


> Several comments here mention running your own CA.

You know, I feel like more people wouldn't have a problem with actually doing this if it weren't so challenging and full of sometimes unpleasant CLI commands. To me, openssl and similar packages have the UX of tar compared to the docker CLI, where the former is nigh unusable, as humorously explained here: https://xkcd.com/1168/

In comparison, have a look at Keystore Explorer: https://keystore-explorer.org/screenshots.html

Technically you can use it to run a CA, I guess, but in my experience it has mostly been invaluable when dealing with all sorts of Java/other keystores and certificates, as well as doing certain operations with them (e.g. importing a certificate/chain into a keystore, or generating new ones, or even signing CSRs and whatnot).

Sure, you can't automate that easily, but for something that you do rarely (which may or may not fit your circumstances), not struggling with the text interface but rather having a rich graphical interface can be really nice, albeit that's probably a subjective opinion.

Edit: on an unrelated note, why don't we have more software that uses CLI commands internally that correspond to doing things in the GUI, with the option to copy the CLI commands when necessary (say, the last/next queued command being visible in a status bar at the bottom)? E.g. hover over a generate-certificate button, get a copyable full CLI command in the status bar.

Of course, maybe just using Let's Encrypt (and remembering to use their staging CA for testing) and just grokking DNS-01 is also a good idea, when possible. Or, you know, any other alternatives that one could come up with.


I never got why people think using tar is hard. Specify your archive File with f. Want to eXtract it? Add an x. Want to Create it? Add a c. Want it to be Verbose while doing that? Add a v. If it's gZipped, add a z. Granted, j for bzip2 and t for listing are less obvious, but with that it's about everything you need for everyday usage, and that more than suffices to disarm that bomb.


Here's an example of better UX (subjectively):

  zip my-archive.zip my-directory
  unzip my-archive.zip
(disclaimer: zip/unzip won't be a reasonable alternative for all of the use cases of tar)

Good software doesn't need that much explanation. And when it does, either "--help" or just the command with no parameters, e.g. "zip" or "unzip", should provide what's necessary. I don't believe that tar does that; instead it overwhelms the user, and "tar --usage" is just as overwhelming.

Here's another comment of mine which serves a precise example of why tar is problematic in my eyes: https://news.ycombinator.com/item?id=29339018

I don't feel like it follows the UNIX philosophy that well either, though I won't argue that it should be much smaller (because it is powerful, although someone might argue that), but that its commands should be grouped better.

That said, maybe things would be more tolerable if we used the full parameters instead of memorizing silly mnemonics. Here's an excerpt from the linked comment:

  $ tar --verbose --create --gzip --file=new-archive.tar.gz ./files-i-want-to-archive


tar -xzvf filename.tar.gz

The mnemonic I use:

x - extract

z - ze

v - vucking

f - files


I'm biased because I'm the founder of the company, but you should check out the certificate management toolchain (CA[1] and CLI[2]) we've built at smallstep. A big focus of the project is human-friendliness. It's not perfect (yet) but I think we've made some good progress.

We also have a hosted option[3] with a free tier that should work for individuals, homelabs, pre-production, and even small production environments. We've started building out a management UI there, and it does map to the CLI as you've described :).

[1] https://github.com/smallstep/certificates

[2] https://github.com/smallstep/cli

[3] https://smallstep.com/certificate-manager/


I really want to try and deploy smallstep at home but one stumbling block I always hit is deploying the CA (or ideally the mTLS certificate!) to end user devices like phones, laptops etc. Maybe I'm missing something entirely but I think I'd need a full MDM profile or setup for phones/mobile devices. Is this theoretically a lot easier than I'm making it? I'd just need an iPad, iPhone and MacBook.

Apart from that, thank you so much for what you've done and provided for the open-source community. The smallstep toolkit is truly fantastic.


GP's post prompted me to look into LE's ACME server implementation, Boulder [1], but it's pretty apparent that Boulder is not suitable for small scale deployments. But the smallstep "certificates" project seems to be a lot more reasonable for this use-case. Thanks for sharing, I'll definitely check it out!

[1]: https://github.com/letsencrypt/boulder


We have an internal certificate authority for internal domains at my job. We add the root CA certificate to each desktop or server through an endpoint agent that runs on every machine. That agent is used for monitoring, provisioning users, and even running arbitrary commands.

The article mentions BYOD (bring your own device) but we don't allow personal devices to connect to internal services, so this isn't an issue for us.

You can use something like EasyRSA to set up an internal certificate authority and generate server certificates signed by it. I started out using plain old OpenSSL for generating certificates, which EasyRSA uses under the hood, but in hindsight I would rather have used EasyRSA in the first place.

By the way, EasyRSA still isn't that easy, but it's better than using OpenSSL directly.
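
For anyone curious, a minimal sketch of the raw OpenSSL flow that tools like EasyRSA wrap (hostnames and filenames are placeholders):

  # self-signed root, valid ~10 years (keep the key offline)
  openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
    -keyout root.key -out root.crt -subj "/CN=Example Internal Root CA"

  # per-server key + CSR, then sign it with the root, adding a SAN
  openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
    -subj "/CN=service1.internal.example.com"
  openssl x509 -req -in server.csr -CA root.crt -CAkey root.key -CAcreateserial \
    -sha256 -days 365 -out server.crt \
    -extfile <(printf "subjectAltName=DNS:service1.internal.example.com")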


> We have an internal certificate authority for internal domains at my job. We add the root CA certificate to each desktop or server through an endpoint agent that runs on every machine.

One challenge to this is some software doesn't use the operating system's CA chain by default. A lot of browsers use their own internal one and ignore what the OS does (by default).


Chrome, Edge, Safari and (god forbid) IE will use system certificate stores.

Firefox was a challenge. But my understanding is that now, on Windows, it will import enterprise root certificates from the system store automatically.

https://bugzilla.mozilla.org/show_bug.cgi?id=1265113

https://support.mozilla.org/en-US/kb/how-disable-enterprise-...
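
For reference, that behaviour can also be forced via an enterprise policies.json (or the security.enterprise_roots.enabled pref); a sketch:

  {
    "policies": {
      "Certificates": { "ImportEnterpriseRoots": true }
    }
  }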


On Linux Firefox imports system certificates automatically, but shows a warning that the certificate is not trusted by Mozilla.


It is also troublesome when you have to manage cert loading not just on end devices but ephemeral VMs and containers as well.
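
For the Linux VM/container side at least, baking the root in is usually just a couple of lines in the image build (paths vary by distro; a sketch):

  # Debian/Ubuntu
  cp internal-root.crt /usr/local/share/ca-certificates/internal-root.crt
  update-ca-certificates

  # RHEL/Fedora
  cp internal-root.crt /etc/pki/ca-trust/source/anchors/
  update-ca-trust extract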


The big-co I work for handles this via some tooling that checks for browsers and sees if the cert is installed, or by having the CA page signed regularly and having people self-install. "Your site looks weird? You're likely missing the CA." It's not solved-solved, but it's mostly solved. The browsers that come with the image on the enterprise release cadence all have the cert. The people adding other browsers are usually devs or technically savvy enough to add a CA.


> By the way, EasyRSA still isn't that easy, but it's better than using OpenSSL directly.

The trouble with EasyRSA (and similar tools) is that they make decisions for you and restrict what's possible and how. For example, I would always use name constraints with private roots, for extra security. But you're right about OpenSSL; to use it directly requires a significant time investment to understand enough about PKI.

I tried to address this problem with documentation and templates. Here's a step by step guide for creating a private CA using OpenSSL, including intermediate certificates (enabling the root to be kept offline), revocation, and so on: https://www.feistyduck.com/library/openssl-cookbook/online/c... Every aspect is configurable, and here are the configuration templates: https://github.com/ivanr/bulletproof-tls/tree/master/private...

Doing something like this by hand is a fantastic way to learn more about PKI. I know I enjoyed it very much. It's much easier to handle because you're not starting from scratch.

Others in this thread have mentioned SmallStep's STEP-CA, which comes with ACME support: https://smallstep.com/docs/step-ca/getting-started That's definitely worth considering as well.

EDIT The last time I checked, Google's CA-as-a-service was quite affordable https://cloud.google.com/certificate-authority-service AWS has one too, but there's a high minimum monthly fee. Personally, if the budget allows for it, I would go with multiple roots from both AWS and GCP for redundancy.


I have created a script that mimics most of the modern CA and intermediate CA infrastructure for testing HTTPS/Content Security Policy and more at OrgPad, where I work. TLS Mastery by Michael W Lucas https://mwl.io/nonfiction/networking#tls helped me a lot.

Having an internal CA is a lot of work if you want to do it properly and not just for some testing. It is still rather hard to set up HTTPS properly without resorting to running a lot of infrastructure (DNS/VPN or some kind of public server) that you wouldn't need otherwise.


> but we don't allow personal devices to connect to internal services, so this isn't an issue for us.

You now have a hard dependency between whatever endpoint-agent snake oil you use and how you provision TLS certificates for your servers, congrats.


I will never understand the obsession people have with hiding their private server names.

If somebody gets any access to your local network, there are plenty of ways to enumerate them, and if they can't get access, what's the big deal?

I get that you may want to obfuscate your infrastructure details, but leaking infrastructure details on your server names is quite a red flag. It should really not happen. (Instead, you should care about the many, many ways people can enumerate your infrastructure details without looking at server names.)


Ingrained practices are the sort of thing that change one funeral at a time (see constant password rotation).

It's a reasonable mitigation for certain environments and does leak information that makes structuring attacks easier, but it's certainly not a hard wall of any sort. The main problem for most people is articulating the realistic threat models they are trying to address, and because that rarely resolves well (assuming the conversation is had at all), there is little rational pushback against "everything and the kitchen sink" approaches based on whatever blog the implementer last read.

Personally I tend to advocate assuming your attacker knows everything about you except specific protected secrets (keys, passphrases, unique physical objects) and working back from there, but that's a lot of effort for organizations where security is rarely anything but a headache for a subset of managers.

You'll see similar opinions about things like port-knocking puzzles and consumer ipv4 NAT, which provide almost zero security benefit but do greatly reduce the incidence of spurious noise in logs.


One of the examples given wasn't a server name; it was leaking potentially confidential information via the domain olympics-campaign.staging.example.org - in many environments it's fine if people know project names, but NDAs are a thing, and you could end up in hot water if you accidentally leak a partnership between two companies before it's been announced.


Well, if instead of making a lot of effort in hiding your names you just didn't, you wouldn't use a name like that.

Every single person that connects to any of your networks (very likely the sandboxed mobile one too) can find that name. Basically no place hides it internally. There is very little difference between disclosing it to thousands of the people that care the most about you and disclosing it to everybody in the world.


The other examples are better. Say a never-before-seen name appears, cisco520.internal.foo.bar. Suddenly, a well-formed email appears, “Re: Cisco Support Ticket #7779311” about some additional steps to provision your new appliance. It is trivial to automate that phish by crawling the CT log.


Is this valuable enough to resist every real advancement in network security since the late 00's? Because for each one of them it's certain that people will pop-up making a lot of noise about hidden server names.

It's mostly because of them that DNS is still not reliable. Well, at least this article isn't against certificate transparency, just about how to avoid it.


I don’t think anyone is arguing that Certificate Transparency defeats “every real advancement in network security”. If you want to avoid your internal hostnames, and maybe Subject and SAN, ending up in CT logs, then you’re free to run your own CA.

But getting back to your parent post, maybe we can see a nontrivial real-world list of a big network to make sure it’s leaking nothing of value?


It is a perfectly valid concern. Internal domain names can contain confidential information. They become vectors for attack (especially if running vulnerable software). Obfuscation doesn't mean perfect security but it still goes a long way towards it.


I think many of us, myself included, have been conditioned to be paranoid—just because I can’t think of/don’t know of any way some data could be abused doesn’t mean I’m going to make it public.


It's mainly mitigating exposure. Some possible vulnerabilities would be social engineering (e.g. it'd be easier to send a targeted phishing URL to gain recon on an employee of a company if you know an internal domain), or injection into a public-facing service that has access to internal services.


So, security through obscurity?


Security is not boolean. What’s local can be public some day. Everything should be disclosed on a need to know basis.


This seems like a perfect use case for wildcard certs, especially if you have internal sites on a different (sub)domain from your prod servers. Yes, multiple servers have the same private key, but when the alternative is self-signed or no encryption, that is an easy trade-off for me.


> perfect use case for wildcard certs

I don't like distributing wildcard certs, as you then have a bigger problem if the cert is leaked.

When the cert is host-specific you immediately know where the leak comes from, and the scope of the leak is restricted.


Yes, the scope of the leak would be limited. But if a privkey.pem file from one of the hosts of my network is leaked, how do I “immediately” know which host the leak came from?


I don't know how LE does it, but at least with DigiCert (and I assume other commercial CAs), servers sharing the same wildcard cert don't have to share a private key. You generate a separate CSR from each server, and then request a duplicate copy of the wildcard cert using that CSR. That way they can have different SANs as well.


When multiple CSRs [and thus multiple private keys] are involved you end up with multiple wildcard certificates. There is no sharing, technically speaking, but obviously the hostnames in all the wildcards are the same. However, that doesn't really buy you much in terms of security as any one of those wildcards can be used in an active network attack against any matching service if compromised.

That is, unless you're using some sort of public key pinning, but that's very rare to find today and works only in a custom application or something that supports DNSSEC/DANE.


They also say the "duplicate" "wildcards" have different SANs. Their whole narrative makes no technical sense, but presumably the situation is that they've technically got a very limited understanding of what they're doing and the people selling the product have understandably limited enthusiasm for trying to educate suckers who are buying a product. What's the line from Margin Call? Sold to willing buyers at the current fair market price.


Sorry? I'm not sure why you're calling me a sucker, but the wildcard certificates that we purchase from DigiCert can be reissued as many times as we want using separate CSRs, and, yes, with different SANs. DigiCert calls this a "duplicate", but yes, obviously it is technically a new certificate. What is the problem with that?


A wildcard is a name consisting of a single asterisk (matching any label) instead of the first label of a DNS name inside an eTLD+1. [Historically some other wildcards existed but they're prohibited today]

But SANs are just names (that's even what it stands for, "Subject Alternative Name" the word alternative is because this is for X.509 which is part of the X.500 directory system, in which names are part of the X.500 hierarchy, while these names are from the Internet's naming systems DNS and IP addresses which could be seen as an alternative to that hierarchy)

So in changing both the names, and the keys, you're just getting a completely different certificate, maybe the pricing is different for you than purchasing more certificates, but these certificates aren't in any technical sense related to the other certificate.

It's a problem to use nomenclature that's completely wrong in a technical discussion like this. If you call the even numbers "prime" you shouldn't be surprised at the reaction when you claim "half the natural numbers are prime" in a thread about number theory.

[Edited to fix eTLD to eTLD+1 obviously we can't have people issuing wildcards directly inside an eTLD]


Wildcard certs are (only?) issued from DNS-01 challenges. As long as the requester can satisfy the DNS challenge ACME doesn't care about key uniqueness.
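
e.g. with certbot, something roughly like this (a sketch; a DNS plugin for your provider would avoid the manual step):

  certbot certonly --manual --preferred-challenges dns \
    -d '*.internal.example.com' -d internal.example.com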


With Digicert, you do a different API call “duplicate certificate” to avoid buying another cert unnecessarily.

I would consider it to be a best practice to keep unique keys as an SOP as it discourages bad behaviors, like keeping private keys accessible on file servers or even mail.


Right. If you control the DNS, you can point names at any IP address and get appropriate certs for them. Therefore, you must protect your DNS infrastructure.


Isn't the need to protect your DNS infrastructure pretty obvious anyways even when ignoring certificate validation?


Besides, if I can change your DNS, I can change your HTTP responses as well. So control over DNS already lets me get a Let's Encrypt cert for you anyway. Though it is slightly easier to notice if someone changes your DNS to point to a different server than if someone adds a TXT record. I say slightly, because if I change your DNS to point at my server I can just proxy requests to your old server, so everything still looks like it works.

Heck, even with most other certificate issuers I can get a cert in similar ways when controlling DNS.


How often does one monitor their zone files and updates?

Would you be able to catch new subdomains being created under your watch?


Obvious, but it tends to be missed on small deployments.



Chrome is giving me certificate errors. NET::ERR_CERT_DATE_INVALID


…that’s why I said to have your LAN on a different domain or subdomain, so it can’t be a valid cert for your prod traffic.


My company uses Let's Encrypt extensively for many thousands of customers' edge devices which live in their own LANs. As long as the hostnames are random, or at least not too telling, there's pretty much nothing that you're leaking. Except for the internal IP address (10.x, 192.168.x) and how many servers you have. If you can live with that then it's perfectly fine.

I wrote about it a few years ago: https://blog.heckel.io/2018/08/05/issuing-lets-encrypt-certi...


If you have split DNS you're not even leaking internal addresses, the public name record just has to exist.


> […] the public name record just has to exist.

Specifically a TXT record for _acme-challenge has to exist for the requested hostname. Or a CNAME of the requested hostname pointing somewhere else that you control:

* https://dan.langille.org/2019/02/01/acme-domain-alias-mode/

* https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mo...

* https://www.eff.org/deeplinks/2018/02/technical-deep-dive-se...

No A (or AAAA) records needed.
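
So in zone-file terms the public side can be as little as (a sketch; names are placeholders):

  ; public example.com zone
  _acme-challenge.internal.example.com.  IN  CNAME  acme-challenges.example-delegated.net.
  ; no A/AAAA record for internal.example.com required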


I read somewhere a while ago that LE are working on what’s called “intermediate CA” [0] which would solve the problem. Apparently from a regulatory standpoint there are some questions around abuse that need to be answered before they can go ahead. The basic idea is that you can issue your own certificates based on the LE CA that is already recognised by the browsers.

EDIT [0] https://community.letsencrypt.org/t/does-lets-encrypt-offer-...


We're a long way away from name-constrained intermediates being viable from a regulatory and technical perspective. I'd explain, but a commenter in a thread linked from the one you posted has a pretty detailed explanation already: https://community.letsencrypt.org/t/sign-me-as-an-intermedia...


From that it looks like the main issue is regulatory requirements that force CAs to log all issued certificates via CT (certificate transparency) logs. Given that this is the very thing we're trying to avoid with a private CA ("CT" and "leaking internal hostnames" are functionally synonymous) we seem to be at an impasse at the level of base requirements.

Maybe an IP constraint that restricts certs to only be valid in private IP space (10.0.0.0/8, 192.168.0.0/16, etc.)?


I wouldn't say that's even the main issue, but it _is_ probably one of the more difficult ones to solve assuming that just logging all certs publicly the same way every other CA does isn't an acceptable solution for you.

The bigger issue right now is this:

> under current BRs, a name constrained subordinate has to meet all the same requirements an unconstrained subordinate does, which means secured storage and audits

Basically, even a name constrained intermediate CA is subject to all the same regulatory requirements as a trusted root CA. From a regulatory compliance perspective it'd be pretty much equivalent to operating your own globally trusted root CA, with all the auditing and security requirements that go along with that. And if you ever screw up, Let's Encrypt, as the root CA your CA is chained to, would be held responsible for your mistakes as required by the current BRs.

Basically, it's not happening anytime soon without some serious changes to the Baseline Requirements and web PKI infrastructure.


Yeah the auditing, logging, and security requirements seem to be the main blockers.

But practically I don't see a difference between a name constrained CA with a 90 day life and a wildcard cert with a 90 day life from the perspective of the requirements listed above. There are only benefits, because now you can scope down each service to a cert that is only valid for that service.


You can still use wildcard certificates to avoid leaking the entirety of your private hostnames, while providing transparency around the "authority" portion of your domains.


First, that has its own security drawbacks, because now every service has access to a wildcard cert that is valid for any conceivable subdomain. Second, how is that better than an intermediate CA with a short life where the CA cert is CT logged? The cert path would still include that logged CA cert...


But then your constrained CA doesn't get you anything you couldn't get from the parent CA. You could save yourself the trouble as well.


I've used https://smallstep.com/docs/step-ca/ as a CA internally, works well.


What I'd want is an internal CA, like step-ca, but have the certificates signed by a "real" CA, so I don't have to distribute my own root CA certificate.


The dream would truly be an internal CA backed by a publicly trusted subordinate cert (limited to the domain you control). But afaik that can’t happen until the Name Constraint Extension is enforced by “all” clients.


> But afaik that can’t happen until the Name Constraint Extension is enforced by “all” clients.

For those curious about this extension, see RFC 5280 § 4.2.1.10:

* https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.10


You really don't actually want this. This intermediate CA would still be subject to the same extensive CAB Forum / vendor root program requirements (audited yearly via WebTrust) as a root CA. There are a ton of requirements, including mandatory response times, that inevitably makes this require a fully staffed team to operate.


That would be a violation of the real CA's duty to only sign certs that they have some basis for believing are correct. (This basis almost always boils down to "controls the DNS".)


Out of curiosity, what's the problem with distributing your own root CAs? Is it security? Or is it "just a PITA"?


Mostly the second.


This would be called an "Intermediate CA" to those for whom this is unclear.


Wouldn't that allow you to issue certificates for google.com? Correct me if I've misunderstood, but for the sake of discussion pretend cert pinning doesn't exist; use another example domain if it's easier.


I'm not 100% sure how certificates work. What I imagined would be possible is having a certificate for mydomain.com, which can be used to sign certificates for subdomains.


You can put "name constraints" on an intermediate that, in theory, restrict the intermediate to only signing certs for a particular subdomain. In theory, a name-constrained intermediate certificate for `.example.com` would have no more authority than a wildcard certificate for `*.example.com`.

But, name constraints are enforced by "relying parties" -- HTTPS/TLS clients & servers that are validating certificates and authenticating remote peers. In practice, there's a risk that a broken/misconfigured relying party would trust a cert for google.com signed by an intermediate that's name constrained / only trusted to issue for `*.example.com`.


Yeah, that is the major drawback.


I've been using it too and it works well, particularly with Caddy to do automatic certificates with ACME where possible

Plus all my services go through Tailscale, so although I am leaking internal hostnames via DNS, all those records point to is 100.* addresses


I'm a fan of both Caddy and Tailscale; any chance you have any devnotes to share on your setup?


My notes were pretty rough but I've tried putting them into a gist here:

https://gist.github.com/mojzu/b093d79e73e7aa302dde8e335945b2...

Which covers using step-ca with Caddy to get TLS certs via ACME for subdomains, and protecting internal services using client certificates/mtls

I then install Tailscale on the host which is running the docker containers, and configure the firewall so that only other 100.* IP addresses can connect to ports 80/443/444. The combination of VPN+MTLS mitigates most of my worries about exposing internal subdomains on public DNS
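
For anyone wanting to try the same, the Caddy side is roughly this (a sketch; the ACME directory URL assumes step-ca's default "acme" provisioner, and the hostnames/paths are placeholders):

  {
    # point Caddy at the internal step-ca ACME endpoint and trust its root
    acme_ca https://ca.internal.example.com/acme/acme/directory
    acme_ca_root /etc/ssl/certs/internal-root.crt
  }

  service1.internal.example.com {
    reverse_proxy 127.0.0.1:8080
  }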


Awesome, thanks!


Tailscale+TLS: isn't it two strong layers of encryption?


Yeah, it's probably overkill, but I think the multiple layers would help in cases where I misconfigured something, or if an account someone uses to log into Tailscale was compromised. For example, when I ran the containers on a Linux host I discovered later that Docker was bypassing the firewall rules and allowing all connections, but it probably wasn't a big deal because of the mTLS (and the server was behind a NAT router anyway, so it was only addressable within the local network).


This is the correct answer ;)

If you're going to run a serious internal network, you'll need the basic things like NTP, DNS, a CA server, and, yes, some kind of MDM to distribute internal CA certificates to your people. The real PITA is when you don't have these in place.


I've been setting this up for my homelab under .home.arpa, seems to work pretty well so far.


You should not use wildcards or Let's Encrypt for internal authentication, as it's insecure for a few reasons.

0. Implicit reliance on an internet connection means any loss of ACME connectivity to the Let's Encrypt CA makes renewal of the cert or OCSP problematic. If the internet goes down, so does much of the intranet that isn't otherwise reliant upon it.

1. Wildcard certs make setting up an attack on the network easier. You no longer need an issued cert for your malicious service, you just need to find a way to get/use the wildcard. You should know your services and the SANs for the certs. These should be periodically audited.


1. Renewal is scripted to try every day, starting 30 days in advance, with most common utilities. If Let's Encrypt and all other ACME hosts are down for 30 days, I think you have bigger issues.

2. If you can't secure a wildcard cert, how does the same problem not apply to a root CA cert, which could also then do things like sign google.com certs that your internal users trust, which feels strictly worse. (I know there are cert extensions that allow restricting certs to a subdomain, but they're not universally supported and still scoped as wide as a wildcard cert).


If an organisation I work for requires me to trust their CA, that trust will go into a VM where the only things allowed to run are internal to the org. This will hamper my productivity, but only for a short time until my notice period runs out, at which point I will be working for another, saner organisation.


I don't go that extreme - my employer is free to install their own root CA on devices they own and supply.

I understand some startups are a bit more "go get your own computer". I think if they paid for it, it's still their device, but once you pay for it out of your own cash, yeah, MDM or root certs are a no-go.


Right.

I should note that I'm a contractor and I always bring my own tools, which includes the computer. That said, I still prefer to use my own device where I can. It's got the tools I use, configured how I like them, and I'm very familiar with all its quirks which means I have less context switching.

I have worked for clients with tighter regulation controls where I was required to use designated devices for certain tasks but that's been pretty much all of it.

I would rather not have to carry 2 computers around just because an organisation can't trust me to use my own computer, despite having hired me for a substantial amount of money to operate their production infrastructure.


I find having a separate machine has its advantages; the problem is when IT starts managing it, they typically do not understand developers, and 'standard users' like accountants have totally different needs.


OCSP is still a problem, as you'll need to either proxy a local OCSP response during outages or disable validation entirely. Microservices in an AWS partial outage, for example, would suffer here.

A root CA cert is stored in a Gemalto or other boutique special HSM. It has an overwhelming security framework to protect it (if it's ever online): security officers to reset PINs with separate PINs, and an attestation framework to access its functions through 2 or more known agents with privileges separated. Even the keyboard connected to the device is cryptographically authenticated against the hardware to which it connects.

Commonly your root is even offline, unavailable (locked in a vault), and only comes out for new issuing CAs.


> A root CA cert is stored in a Gemalto or other boutique special HSM. It has an overwhelming security framework to protect it (if it's ever online): security officers to reset PINs with separate PINs, and an attestation framework to access its functions through 2 or more known agents with privileges separated. Even the keyboard connected to the device is cryptographically authenticated against the hardware to which it connects.

There are many organisations not large enough to justify this setup, for which Lets Encrypt is clearly safer than a custom root CA.


If you're making your own root cert, you should use name constraints and restrict which DNS names it can issue for.

https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1....

https://wiki.mozilla.org/CA:NameConstraints

Although... I have no idea if browsers/applications/openssl/etc actually verify this - but they should.

(Disclaimer I work at LE)
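
One way to spot-check a particular stack: with OpenSSL at least, verify rejects a leaf outside the permitted subtree (a sketch; filenames are placeholders):

  # expected to fail with "permitted subtree violation" when the intermediate
  # is constrained to example.com and the leaf is for some other domain
  openssl verify -CAfile root.crt -untrusted constrained-intermediate.crt leaf.crt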


> (I know there are cert extensions that allow restricting certs to a subdomain, but they're not universally supported and still scoped as wide as a wildcard cert).

I even mentioned that in my post ;)


It seems like the easiest self-managed alternative is several orders of magnitude more complicated, though. Managing a local CA is trivial in a homelab, but pushing self-signed certs to every machine and service that needs them quickly grows quite complex as you need to manage more of them and they grow more heterogeneous. Every stinking system has a different CA management tool with different quirks and different permissions models, and the technological complexity can pale in comparison to the organizational complexity of getting access to the systems in the first place. If you even can: especially in the case of services, they might Just Not Work with private CAs, and now inventing a proxy service is part of your private-CA-induced workload. On top of that, if you want to do a comparably good job of certificate rotation and expiry notification to letsencrypt, you're going to need infrastructure to make it happen.

Is there a tool that solves (some of) this that I just don't know about?

I've seen big companies do it manually, but it's a full time job, sometimes multiple full time jobs, and the result still has more steady-state problems (e.g. people leaving and certs expiring without notification) than letsencrypt.


> Is there a tool that solves (some of) this that I just don't know about?

There's a company called Venafi that makes a product that lives in this space. It tries to auto-inventory certs in your environment and facilitates automatic certificate creation and provisioning.

From what I hear, it's not perfect (or at least, it wasn't as of a few years ago); yeah, some apps do wonky things with cert stores, so auto-provisioning doesn't always work, but it was pretty reliable for most major flavors of web server. And discovery was hard to tune properly to get good results. But once you have a working inventory, lifecycle management gets easier.

I think it's just one of those things where, if you're at the point where you're doing this, you have to accept that it will be at least one person's full-time job, and if you can't accept that... well, I hope you can accept random outages due to cert expiration.


It really depends on your risk tolerance and capability.

I built out a PKI practice in a large, well-funded organization - even for us, it is difficult to staff PKI skill sets and commercial solutions are expensive. Some network dude running OpenSSL on his laptop is not a credible thing.

Using a public CA is nice as you may be able to focus more on the processes and mechanics adjacent to PKI. You can pay companies like Digicert to run private CAs as well.

The other risks can be controlled in other ways. For example, we set up a protocol where a security incident would be created if a duplicate private key was detected during scans that hit every endpoint at least daily.


Can Let's Encrypt issue multiple wildcard certs for different subdomains, like *.banana.example.com and *.grapefruit.example.com?

Then you could give each server a different wildcard cert without exposing the full name to the certificate log: exchange.banana.example.com, log4j.grapefruit.example.com.

Ugly, but functional.

Alternatively should the certificate transparency log rules be changed to not include the subdomain? Maybe what matters is that you know that a certificate has been issued for a domain, when, and that you have a fingerprint to blacklist or revoke. Knowing which actual subdomain a certificate is for is very convenient, but is it proportionate?


> Alternatively should the certificate transparency log rules be changed to not include the subdomain? Maybe what matters is that you know that a certificate has been issued for a domain, when, and that you have a fingerprint to blacklist or revoke. Knowing which actual subdomain a certificate is for is very convenient, but is it proportionate?

That was a big debate in the CA/B Forum when CT was created; the current behavior is a deliberate choice on the part of the browser developers, which they will probably not want to revisit.


Running your own private CA is a great way to cause problems for yourself down the road (just ask anyone with a 5 year and 1 day old Kubernetes cluster). But I also don't want to be dependent on a 3rd party for my internal services. I want a better solution: not as annoying as a private CA, and not dependent on 3rd parties.

I want to deploy apps that use certs that don't expire. When they should be rotated, I want to do them on my own time. And I want a standard method to automatically replace them when needed, that is not dependent on some cron job firing at the correct time or everything breaks.

Cert expiration is a ticking time bomb blowing up my services just because "security best practice" says an arbitrary, hard expiration time is the best thing. Security is not more important than reliability. For a single external load balancer for a website, we deal with it. But when you have thousands of the little bastards in your backend, it's just ridiculous.


> Security is not more important than reliability.

Yes, it is. In most cases Confidentiality > Integrity > Availability. Systems should fail safe.

There are some scenarios such as medical devices where integrity or availability trump confidentiality. But most information systems should favor going offline to prevent a breach of confidentiality or data integrity.


I've done it with a few key services like Home Assistant, using split-horizon DNS, and considered it less than ideal.

However the alternatives suck as far as I know. I don't want to install my own CA certificate on all the various devices in the home, for instance, and keeping that up to date.

With browsers making self-signing a PITA, what choices do I have?


Not just browsers, but also iOS/android.


This is one area where I think AWS does a huge disservice by making their Private CA so expensive ($400 a month + cost of certificates). This ends up pushing people to use public domains instead of private ones, or relying on other solutions outside of AWS. If the cloud companies would make it as easy to get private certificates as public ones you wouldn't see as many issues like this.


My life experience has taught me that it's better to have an imperfect, but simple solution with known limitations (in this case LetsEncrypt), than an ideal solution that you can't configure correctly and do not fully understand (internal CA for a small team).

The former give you known limitations, the latter work fine for a while and you get a great feeling, and then disaster strikes out of the blue.

The same problem plagues IoT solutions and home networking - there are no industry-accepted frameworks to enable encryption on the LAN like we do on the real internet. There is no way to know that I'm connecting to my home router or NAS when I type in its address.

This is an area where we have kind of failed as an industry.


> But there is a downside. The CT logs are public and can be searched. Firstly, [...]

This bit me recently. I have a certificate for homelab.myname.com, and as any public-facing IP address, I get the expected brute force ssh login attempts for users 'root', 'git', 'admin', etc...

But I was terrified (until I remembered about the public cert) to find attempts for users 'homelab' and 'myname' -- which, being my actual name, actually corresponds to a user.

It's obviously my fault for not thinking this through, and it's not a terrible issue, but thinking I was under a targeted attack was quite the scare!


Sadly, the answer is probably no (for the information leakage mentioned in the article).

But having an internal (even ACME API-supporting) CA is no walk in the park either. If you can swallow the trade off and design with publicly-known hostnames, I would highly recommend it.

There’s always some annoying device/software/framework requiring their own little config dance to insert the root cert. Like outbound-proxy configuration, but almost worse.

I don’t even want to imagine what would happen if/when the root key needs to be rotated due to some catastrophic HSM problem.


> Sadly, the answer is probably no (for the information leakage mentioned in the article).

Eh, even in large organisations of expert IT users, the internal CA ends up training users to ignore certificate warnings.

Sure, maybe the certificate is set up right on officially issued laptops - but the moment someone starts a container, or launches a virtual machine, or uses some weird tool with its own certificate store, or has a project that needs a raspberry pi, or the boss gets himself an ipad? They'll start seeing certificate errors.

IMHO the risks created by users learning to ignore warnings are much greater than the risks from some outsider knowing that nexus.example.com exists.


If you have a large organization, your containers are based off the org's base containers, which have the CA in them. Same with VMs, Java, .NET, etc.


Maintaining golden container/VM images with root cert customizations is a pretty complex task that needs constant maintenance and customization for new runtimes. Also, this does nothing for unofficial devices (BYO laptops, BYOD smartphones, the CEO's iPad, guest laptops).


Hard disagree on that one. For 99% of your software it's: stick the cert in your distro's trust store and use the result as your base VM / container templates. Adding to the Java trust store is a 10-line script that never needs to be touched again. It's not that it's completely trivial, but it's so much less complex than every other thing your infra team does.

Random BYO devices I can understand but in your cloud / datacenter it’s so easy just because you control everything.
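
e.g. the Java part is essentially one keytool call (a sketch; -cacerts needs Java 9+, otherwise point -keystore at the cacerts file):

  keytool -importcert -cacerts -storepass changeit -noprompt \
    -alias internal-root -file internal-root.crt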


> Maintaining golden container/vm images […]

I was under the impression that 'golden images' aren't generally encouraged as a Best Practice™ nowadays. The general momentum seems to me to be use a vendor-default install image (partitioning however you want), and then go in with a configuration management system once it's on the network.

Basically: you keep your config 'recipes' up-to-date, not your image(s).


At least at Apple, the Dockerfiles for their images were < 50 lines - just light tweaks on the AL2 image. Every 60 days you need to pull it in. So you are right: they just keep it up to date and stick to the vendor images as well.


I run my own internal CA.

Would not recommend to anyone that they use publicly-valid letsencrypt certs for internal hostnames, since certificate issuance transparency logs are public and will expose all of the hostnames of your internal infrastructure.


The article answers that: use wildcard certs instead


I'd put using wildcard TLS for all your internal stuff in the category of unacceptably weird and unnecessary risk


Don't say that! I managed to sign up to Starbucks Rewards before it launched here in New Zealand by looking at the staging certificates that were issued ;)

Lots of fun stuff is possible, but yeah, it's definitely something you should consider. Let's Encrypt allows wildcard certs, from memory, so you should probably use one of those per subdomain.


Why not just be your own signing authority for internal domains? You can propagate your toplevel public cert with most enterprise network provisioning tools.


Not only is running your own CA a pain, there is also minimal support for restricting CA scope validity, so anyone that needs to communicate with you effectively ends up trusting your CA for anything and everything. For most anyone except your own trusting partners or coworkers that's a complete non-starter.


Running your own PKI is fairly straightforward, particularly with tools like cfssl at your disposal.

But running your own PKI properly is quite hard.

Let's Encrypt gives you top tier PKI management for $0.


How do you define "properly"? What are some of the things someone can do wrong that Let's Encrypt does correctly?


Root certificate stored on offline HSM and intermediates on secure infrastructure. FIPS compliance. (Relatively) reliable revocation services. [See note 0]

The result is security of issuance, that is near complete confidence that certificates will only be used for controlled domains (not necessary if you want to MITM of course).

Also, ACME is generally easier and more reliable than other certificate rollover processes I've seen. I'm not sure if there's in-house PKI tools supporting it?

Depends on your organisation size though. Maybe your in-house PKI is fine, but it's not for everyone!

[Note 0] Revocation is of course a mess. Let's Encrypt isn't without fault either, particularly when used internally, since OCSP responders will need to be accessible from client devices.


I mean, sure, but an org doesn't really need that much security. If you're not taking that much care with your API keys and db passwords then you probably don't need it for certs either. Keep your root CA offline and in an air-gapped backup, issue team-specific intermediates with medium lifetimes, and keep your endpoint certs short-lived.

You need as much security on your CA as the accounts in your org with the authority to replace them with your provisioning tools.


That reasoning goes back around. If you don’t need that much security and are fine with exposing internal hostnames via CT logs, then Let’s Encrypt can be nicer (no internal CA to maintain).

It’s just that very specific bit in the middle, where you don’t want to expose the internal hostnames but don’t need top-tier security where having a private CA is worthwhile (assuming outbound internet connectivity to Lets Encrypt is allowed).


A business case for Let's Encrypt is to support internal hosts which are not visible on the internet (Let's Encrypt can check that) and omit the hostnames from the Certificate Transparency Logs.

Let a business pay $100/year for 10 internal hostnames.


I'm fairly certain LE is required to emit signed certificates to CT by the CA/B forum baseline requirements, with no "internal only" exception.

In other words, if they do this they will be untrusted in browsers. They could offer this service on a secondary untrusted root if they wanted.


They could augment the CT spec, such that only a hash of the domain needs to be made public.

Would be a great way to fund LE :)


> Let's Encrypt gives you top tier PKI management for $0.

Ok, but it fails at one of the requirements.


I'm using a wildcard certificate and CNAME records to internal hostnames; it works pretty nicely for my use case. I don't need to leak out a map of my hostnames and I don't need to do full split-horizon DNS.

So if I want to encrypt traffic to "service1.example.com", "service2.example.com" and "service3.example.com" that all run on server A, I'll make three CNAME records that all point to "server-a.internal", and I'll just resolve "server-a.internal" in my local network. Obviously, anyone can query what "service1.example.com" points to, but they won't figure out anything beyond "server A".
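
In zone-file terms, roughly (a sketch using the names above):

  ; public example.com zone
  service1  IN  CNAME  server-a.internal.
  service2  IN  CNAME  server-a.internal.
  service3  IN  CNAME  server-a.internal.
  ; server-a.internal only resolves on the local network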


> OK, so you decide to have an internal DNS - now the whole world knows you have doorbell-model-xyz.myhome.example.com!

Uhm, or you use split horizon DNS? Who in their right mind would leak all their internal DNS names into a public DNS zone?


Sorry for the poor wording on my part. I meant that if you issue a LE Cert for your doorbell, and give it a "sensible" name, the name will appear in the CT Log.


That's in the article: Let's Encrypt leaks them for you if you use it for your intranet.


Named certs have the hostnames they’re valid for in the Certificate itself.

“View Certificate” in a browser, or openssl s_client on the CLI, will show you.


I don't bother with split horizon DNS for my home network.


I just finished writing a long proposal: https://github.com/WICG/proposals/issues/43

PKI is fairly awful and bad for internal anything, unless you have a full IT team and infrastructure.

A much simpler solution would be URLs with embedded public keys, with an optional discover and pair mechanism.

Browsers already have managed profiles. Just set them up with a trusted set of "paired" servers and labels, push the configs with ansible (it's just like an old-school hosts file!), and don't let them pair with anything new.

If you have a small company of people you trust (probably a bad plan!), or are a home user, just use discovery. No more downloading an app to set up a smart device.

The protocol as I sketched it out (and actually prototyped a version of) provides some extra layers of security: you can't connect unless you already know the URL, or discovery is on and you see it on the same LAN.

We could finally stop talking to our routers and printers via plaintext on the LAN and encrypt everywhere.

We already use exactly this kind of scheme to secure our keyboards and mice, with discovery in open air not even requiring being on the same LAN.

We type our sensitive info into Google docs shared with an "anyone with this URL" feature.

It seems we already trust opaque random URLs and pairing quite a bit. So why not trust them more than the terrible plaintext LAN services we use now?


So people seem to be conflating the requirements of a CA (the thing that signs certificates and is considered the authority) and an RA (the registration authority).

Running a CA that issues certificates isn't that hard. There are off-the-shelf solutions and wraparounds for openssl as well.

Running an RA is hard. That's the part that has to check who is asking for a certificate and whether they're authorized to get one and what the certificate restrictions etc are.

Then there's the infrastructure issue on the TLS users (clients & servers) that need to have the internally trusted root of the CA installed and need the RA client software to automagically request and install the necessary leaf and chain certificates.

AWS has private CAs for $400/month, but if you want a root and then some signing intermediates, that's $400 for each (effectively the PCA is just a key stored in an AWS HSM and an API for issuing certificates).

A real HSM will cost roughly a year of that service, but the management of that hardware and protecting it and all the rigmarole around it is very expensive.

Every mobile phone and most desktops have a TPM that could be used for this, but having an API to access it in a standard way isn't that available.


DNS names are public by nature. Split horizon, private roots, private CAs etc are a sign you are trying to bend things backwards. Just don't use sensitive DNS names.


Disagree on that - it's entirely possible to have an openssl private root CA and private DNS that doesn't talk to the internet at all and exists in RFC1918 IP space with no gateway or route to the outside world. Not just a matter of ACLs on things like DNS servers, but those same servers/VMs not even having interfaces that have any way to get traffic to a global routing table.

Split horizon I agree is risky.


Maybe a dumb question, but isn't it the case that not only Let's Encrypt uses the Certificate Transparency logs, but all the other providers too?

If so, then the decision is more like, whether to use a public or private certificate for an internal service.


Yep, we recently moved from DigiCert to LE and someone was alarmed at the certificate transparency logs, until we scrolled down the page to reveal the same logs from DigiCert.

Wildcards hide it somewhat, but DigiCert charges per subdomain now, and every user thinks they need their own subdomain for some reason. So LE it is.


This is an interesting topic, for me.

I write iOS apps, and iOS requires that all internet communications be done with HTTPS.

It is possible to use self-signed certs, but you need to do a bit of work on the software, to validate and approve them. I don't like doing that, as I consider it a potential security vector (you are constantly reading about development code that is compiled into release product, and subsequently leveraged by crooks).

I am working on a full-stack system. I can run the backend on my laptop, but the app won't connect to it, unless I do the self-signed workaround.

It's easier for me to just leave the backend on the hosted server. I hardly ever need to work on that part.


For the project I'm working on currently, I use Charles Proxy's "Map Remote" function to map our UAT server's HTTPS URL to my local machine's HTTP URL.

Also ngrok.com works really well if you need to give other people access to your dev environment.


> I use Charles Proxy's "Map Remote" function to map our UAT server's HTTPS URL to my local machine's HTTP URL.

This looks really interesting. Thanks! I'll see if I can get away with it.


If you create a custom SSL CA, you can add that CA to your iOS devices and simulators, and they will trust your backend served with an SSL certificate issued by your custom CA, no app modifications needed. (On modern Android this does not work out of the box - it requires the custom CA fingerprints to be added to a network security configuration file embedded in the app - but you could always use Gradle flavors and only add it to your debug/development builds.)


> I write iOS apps, and iOS requires that all internet communications be done with HTTPS

What if the app is on the same network as the server?

I've got a Denon A/V receiver that has an HTTP interface and the Denon iOS app is able to talk to it. I've watched this via a packet sniffer and it definitely is using plain HTTP.


> I've got a Denon A/V receiver that has an HTTP interface and the Denon iOS app is able to talk to it. I've watched this via a packet sniffer and it definitely is using plain HTTP.

That's interesting. I wonder why Apple let that go by. I've had apps rejected, because they wouldn't use HTTPS. Maybe it's OK for a local WiFi connection. Even then, Apple has been fairly strict.

That said, I think that there are ways to register for exceptions.


Yeah, same with a couple of apps I use; WLED and Home Assistant both work over HTTP.


So the problem is that our naming scheme is insecure, so we ask untrustworthy third-party entities to vet our certificates. The CA mafia isn't gonna give up their hard-earned monopoly easily (remember CAcert?), and most client companies are happy to have for-profit CAs for insurance/policy compliance. Something like DNSSEC+DANE [0] is more reasonable but unfortunately unsupported by most programs.

[0] https://datatracker.ietf.org/doc/html/rfc6698
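
For the curious, deploying DANE mostly boils down to publishing a TLSA record alongside DNSSEC. Roughly (a sketch with placeholder names, using the common "3 1 1" parameters, i.e. DANE-EE / SPKI / SHA-256):

    # hash the cert's public key (SPKI) for the TLSA record
    openssl x509 -in server.crt -noout -pubkey \
      | openssl pkey -pubin -outform DER \
      | openssl dgst -sha256 -hex

    # then publish something like (zone-file syntax, placeholder name):
    # _443._tcp.www.example.com. IN TLSA 3 1 1 <hex digest from above>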


No, the reasoning here is broken. Even with a secure naming scheme, you'd still need certificates, because you have to verify the authenticity of the secure channel you bring up (usually TLS), not just the security of the name. Any way you slice it, you end up with a third party vouching for your TLS certificate.


In the case of DNSSEC keys are distributed with the zone, so the trust anchor is the DNS root. Of course, your parent zone could lie about your keys (just like it could lie about your other records), but don't you think since DNS is already an attractive attack vector (as it can vouch for CAs to publish a trusted certificate), relying on it exclusively for certificate distribution would reduce the overall attack surface?

I'm not saying no trusted parties is the end goal (though Tor's onion or the GNU Name System work in this area), but maybe giving dozens of corporations/institutes the power to impersonate your server (from a client UX perspective) isn't the best we can do.


DNSSEC does the same thing (tradeoff: fewer corporations, but most of the important ones de jure controlled by governments, and no transparency logs†) --- you're just handing the CA role off to TLD operators.

To the extent DNS is an attractive attack vector, DNSSEC doesn't actually do much to mitigate those attacks. Most DNS corruption doesn't come from the on-the-wire cache corruption attacks DNSSEC was designed to address, but from attacks directly on registrars. There's nothing DNSSEC can do to mitigate those attacks, but not having CAs tied directly to DNS does mitigate them: it means the resultant misissued certificates are CT-logged.

If there was a huge difference in security from switching to DANE, this would be a different story. But in practice, the differences are marginal, and sometimes they're in the wrong direction.

Two really big things happened in the last decade that influenced the calculus here:

1. WebPKI certs are now reliably available free of charge, because of LetsEncrypt and the market pressures it created.

2. Chrome and Mozilla were unexpectedly successful at cleaning up the WebPKI, to the extent that some of the largest CAs were summarily executed for misissuance. That's not something people would have predicted in 2008! But WebPKI governance is now on its toes, in a way that DNS governance is unlikely ever to be.

(Cards on the table: I'd be a vocal opponent of DANE even if 1 & 2 weren't the case.)

Not only is there no CT for DANE, but there's unlikely ever to be any --- CT was rolled out in the WebPKI under threat of delisting from Mozilla and Chrome's root cert programs, and that's not a threat you can make of DNS TLD operators.


> not having CAs tied directly to DNS does mitigate them: it means the resultant misissued certificates are CT-logged. (...) Not only is there no CT for DANE, but there's unlikely ever to be any

Since DNS is public data that anyone can archive, isn't it easy to build a CT log from that for a list of domains? I mean, regularly probing for DANE records on your domains can be done fairly easily in a cron job. I'm personally very skeptical of trusting CT logs from CAs in the first place and would much rather welcome a publicly auditable/reproducible system.

> (Cards on the table: I'd be a vocal opponent of DANE even if 1 & 2 weren't the case.)

Why, and what's the alternative? Is your personal recommendation to use a specific CA you trust over all the others and set up CAA records on your domain? Otherwise I believe DNS remains a single point of failure, and hijacking it would make it easy to obtain a certificate for your server from pretty much any CA, so I don't see any security benefits. I do see the downside that any CA can be compelled (by legal or physical threat) to produce a trusted certificate for a certain domain, which of course could be said of TLD operators as well, but I believe reducing the number of critical operators your security relies on is always a good thing.

If you have a link to a more detailed read on your thoughts on this topic, I'd be happy to read some lengthier arguments.


The premise of certificate transparency is that the CAs themselves (generally) submit certificates to be logged. CAs generate pre-certificates, which are signed by CT logs, generating SCTs, which accompany the certificate in the TLS handshake. The SCT is a promise from a CT log (not the CA) that the cert has been recorded; the CT logs themselves are cryptographically append-only. The system is designed not to simply trust the CA to log.

You can't replicate that clientside by monitoring domains. A malicious authority server can feed different data selectively.

Could you replicate this system in the DNS? Well, it'd be impossible to do it with DNSSEC writ large (because there's no way to deliver SCTs to DNS clients), but you could do it with extensions (that don't exist) to DANE itself, and tie it into the TLS protocol. But that system would require the cooperation of all the TLD operators, and they have no incentive to comply --- just like the commercial CAs didn't, until Mozilla threatened to remove them from the root certificate program unless they did. But Mozilla can't threaten to remove .COM from the DNS.

So, no, the situations aren't comparable, even if you stipulate that DANE advocates could theoretically design something.

I'm hesitant to answer the second question you pose at length, because you have some misconceptions about how CT works, and so we're not on the same page about the level of transparency that exists today.


A public CA exists so that a third-party entity vouches for identities and two different parties do not need to trust each other directly. So the answer is no. Why would you even consider this for internal communication?


Installing a root CA on devices is risky.

From the article:

> It means your employees aren't constantly fighting browser warnings when trying to submit stuff internally.

If your employees get into the habit of ignoring certificate warnings, then you have much bigger problems than leaking internal domain names.


Clients should not ignore the certificate warnings. You install the certificates on the client machines.


It's possible to configure DNS to make a public domain point to an internal IP and register a certificate for that domain.

For example, you can register a certificate for local.yourcompany.com and point local.yourcompany.com to 127.0.0.1 to get HTTPS locally. The same could be done for internal network IPs.

It wouldn't work well with Let's Encrypt's HTTP validation, because their bot would just end up talking to itself in this scenario (DNS validation sidesteps that).

Of course you could also use my side project (expose.sh) to get an HTTPS URL in one command.


Is it that hard to set up an internal CA? I have no idea what I'm doing, and I managed one for years until we moved offices and ditched our LAN.


The hard part is getting the root certificate in the trust store on every device in your organization.


Worse, it is often not the trust store on every device. It is often multiple trust stores on a device.

The OS might have one. Each browser might have its own. For a developer, each language they use might need separate configuration to get its libraries to use the certificate.


Active Directory for Windows, MDM for macOS and phones, a custom package for Linux.


That should worry the hell out of you.

If you could install CAs that are only valid for a certain domain (defaulting to the name constraints, but actually enforced in the browser/OS), that would be fine. But installing a CA gives anyone with access to that CA the ability to make pretty much any valid cert, and any weakness in how that CA is secured should raise flags.


I like the wildcard certificate option; however, I have not been able to find an easy way to distribute those certificates to every host I have internally. Is this usually done manually? Is there some equivalent to acme.sh?

The kinds of hosts I have are an OPNsense router, Traefik servers, a UniFi controller, etc.


My method is manual-ish¹. One VM is in charge of getting the wildcard certificates. Other than answering DNS requests for validation and SSH it has no public face.

Each other machine picks up the current outputs from there via SFTP weekly and restarts whatever services need it. I'm not running anything that I need near-perfect availability on at the moment, so it is no more complex than that. If you want to avoid unnecessary service restarts, check for changes and only do that part if needed, and/or use services that can be told to reload certs without a restart.
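
Roughly, the per-host job is something like this (a sketch: host name, paths and the reload command are placeholders; scp shown for brevity over the same SSH/SFTP setup):

    #!/bin/sh
    # weekly cron job on each host: fetch the current wildcard cert/key from
    # the central cert VM, swap it in and reload only if it changed
    set -e
    cd /etc/ssl/private
    scp -i fetch_key certs@certhost:/srv/certs/fullchain.pem fullchain.pem.new
    scp -i fetch_key certs@certhost:/srv/certs/privkey.pem privkey.pem.new

    if ! cmp -s fullchain.pem.new fullchain.pem; then
        mv fullchain.pem.new fullchain.pem
        mv privkey.pem.new privkey.pem
        systemctl reload nginx
    else
        rm -f fullchain.pem.new privkey.pem.new
    fi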

This does mean I'm using the same key on every host. If you want to be (or are required to be) more paranoid than that then this method won't work for you unmodified and perhaps you want per-name keys and certs instead of a wildcard anyway. For extra carefulness you might even separate the DNS service and certificate store onto different hosts.

Not sure how you'd do it with unifi kit, my hosts are all things I can run shell scripts from cron on running services like nginx, Apache, Zimbra, … that I can configure and restart via script.

¹ “manual” because each host has its own script doing the job, “ish” because once configured I don't need to do anything further myself


> acme.sh

Another shell-based ACME client I like is dehydrated. But for sending certs to remote systems from one central area, perhaps the shell-based GetSSL:

> Obtain SSL certificates from the letsencrypt.org ACME server. Suitable for automating the process on remote servers.

* https://github.com/srvrco/getssl

In general, what you may want to do is configure Ansible/Puppet/etc, and have your ACME client drop the new cert in a particular area and have your configuration management system push things out from there.


For any device that has a web interface, and no way of updating the cert automatically built in or via an API, you'd probably have to automate the process with something like puppeteer.

https://www.npmjs.com/package/puppeteer


At my last job I implemented the certificate generation as a scheduled job, which pushes the generated certificates to a private S3 bucket.

Then, our standard Ansible playbooks set up on each node a weekly systemd timer which downloads the needed certificates and restarts or reloads the services.


If you have root SSH on each machine you can make rsync cron jobs. IMO it's reasonably secure if you spend the time setting up SSH keys and disabling password auth afterwards.
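
A minimal sketch of one such job, run from the machine that holds the certs (key path, target host, paths and the reload command are all placeholders):

    #!/bin/sh
    # e.g. /etc/cron.weekly/push-certs on the cert machine
    # /etc/letsencrypt/live/ holds symlinks, hence -L to copy the real files
    rsync -azL -e "ssh -i /root/.ssh/cert_push" \
        /etc/letsencrypt/live/example.com/ \
        root@web1.internal:/etc/ssl/example.com/
    ssh -i /root/.ssh/cert_push root@web1.internal "systemctl reload nginx"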


Vault makes a pretty decent internal PKI tool, even if the UI is limited. (https://www.vaultproject.io/docs/secrets/pki)


Interesting question. Quite complex. IMHO there is no clear right or wrong here.


Another nuisance is that unencrypted port 80 must be open to the outside world to do the ACME negotiation (LE servers must be able to talk to your ACME client running at the subdomain that wants a cert). They also intentionally don't publish a list of IPs that Let's Encrypt might be coming from [1]. So opening firewall ports on machines that are specifically internal hosts has to be part of any renewal scripts that run every X days. Kinda sucks IMO.

[1] https://letsencrypt.org/docs/faq/#what-ip-addresses-does-let...

UPDATE: Apparently there is a DNS-based solution that I wasn't aware of.


As these are internal hostnames, you're probably doing a DNS-01 challenge rather than HTTP-01. With DNS-01 you don't need to open up any ports for incoming HTTP connections; you just need to place a TXT record in the DNS for the domain.
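
For example (a sketch assuming acme.sh with its Cloudflare DNS hook; other DNS providers have their own hooks and environment variables):

    # DNS-01 issuance: nothing needs to be reachable from the internet,
    # the client only has to publish a TXT record in the zone
    export CF_Token="..."   # placeholder API token able to edit the zone
                            # (may also need CF_Account_ID / CF_Zone_ID
                            # depending on the token's scope)
    acme.sh --issue --dns dns_cf -d internal-app.example.com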


That's not true. You can validate domains using DNS-01, without exposing hosts.


And even with the HTTP challenge you don't have to expose the host directly; you can, for example, copy the challenge response to a public webserver from the internal host or from a coordinator server.


But it is possible to have initial certificates without opening anything: https://gruchalski.com/posts/2021-06-04-letsencrypt-certific...

From there, it’s possible to use HTTPS negotiation.


This looks kind of interesting. I might try this. Thanks.


Only true if you're using HTTP validation. Use DNS validation instead and this isn't an issue.


Fair enough. Although that seems rather complicated for those of us just trying to get a quick cert for an internal host. The LetsEncrypt forums are full of this discussion:

[1] https://community.letsencrypt.org/t/whitelisting-le-ip-addre... [2] https://community.letsencrypt.org/t/whitelist-hostnames-for-... [3] https://community.letsencrypt.org/t/letsencrypt-ip-addresses...


Internally, even small companies should be doing some PKI for device enrollment and email security. If you manage your own DNS you need a PKI infrastructure as well, IMO.


I bought an extra domain for our internal network.

$COMPANY_SHORT_FORM.network

Works really well, and we no longer have the issue of deploying root certs to devices.


Last time I considered using it, it was a PITA for IIS servers because you had to manually renew every 30 days. Has this changed?


I think using DNS for the intranet nodes is generally a good idea. It allows HTTPS to be seamless.


> The only real answer to this is to use Wildcard Certificates. You can get a TLS certificate for *.internal.example.com

Does Let's Encrypt support Subject Alt Names on the wildcard certs?

My experience suggests that wildcard certs work, but require a SAN entry for each "real" host because browsers don't trust the CN field anymore. E.g., my *.apps.blah cert doesn't work unless I include all of the things I use it on - homeassistant.apps.blah, nodered.apps.blah, etc.

Do Let's Encrypt certificates have something special that negates this requirement? Or am I completely wrong about the SAN requirement?


This sounds like something is broken in your client (or maybe server config)?

I use Let's Encrypt wildcard certs quite extensively, both in production use at $dayjob and on my home network, and have never encountered anything like this. The only "trick" to wildcard certs is one for *.apps.blah won't be valid for apps.blah. The normal way to handle this is to request one with SANs *.apps.blah and apps.blah.

Similarly, it won't work for sub1.sub2.apps.blah. I don't run setups like this myself, but if you need it I'd recommend using a separate *.sub2.apps.blah cert for that, mainly due to the potential for DNS issues when LE is validating. Same thing with multiple top-level domains. The reason is that when renewing, if one of N validations fails, your certificate gets re-issued without the failed domain, which then means broken SSL. If you have completely separate certificates and validation of one fails, the old (working) version stays in place. With normal renewals happening 30 days before expiry, this means you have 29 days for this to resolve on its own, be manually fixed, etc., and LE even emails you a few days before expiry if a certificate hasn't been renewed.
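
The issuance side of that looks roughly like this (a sketch using acme.sh with a DNS hook; substitute your own client, hook and zone):

    # one cert covering the bare name plus everything one level under it;
    # wildcards require the DNS-01 challenge with Let's Encrypt
    acme.sh --issue --dns dns_cf -d apps.blah -d '*.apps.blah'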


Wildcard certs from LE work fine for internal domains. I've been using one for a while now. I had to set up some cron jobs to copy them around and restart some services, but it seems to be working well.


The whole point of a wildcard certificate is that you don't have to exhaustively list all covered hostnames.


"Or am I completely wrong about the SAN requirement?"

Not w/r/t Chromium.

https://web.archive.org/web/20170611165205if_/https://bugs.c...

https://web.archive.org/web/20171204094735if_/https://bugs.c...

In tests I conducted with Chrome, the CN field could be omitted in self-signed server certs without any problems.
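
A quick way to check which names a given cert actually covers (assuming OpenSSL 1.1.1+ for the -ext flag; older versions can grep the full -text output):

    openssl x509 -in cert.pem -noout -ext subjectAltName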


I use a few wildcard certs from Amazon, and they work well on Firefox, Safari and Chrome.



