You Shouldn't Use Public CAs for Internal Infrastructures (smallstep.com)
76 points by kiyanwang on Dec 18, 2022 | 56 comments



Counterpoint: In many simple cases it absolutely is okay to use public CA certificates for internal hosts.

Point 1 will incentivize you to not create complexity where no complexity is needed. Just do what Let's Encrypt does; it'll be fine.

Point 2 is true, but you can just as well use public DNS names.

Point 3 is something to be aware of, but it's likely also just a reminder that security by obscurity is not a good idea and never was.

The advantages of using a public CA: it makes a lot of things simpler for you. You don't have to bother understanding the various complexities of X.509 and a PKI structure. Other people will make sure your certificates are well formed, your keys comply with common recommendations, and so on. And you don't have to worry about how to get that root certificate onto a variety of devices.
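
To illustrate how little that leaves on the client side: with a publicly trusted cert, a stock HTTP client verifies the chain against the system roots with zero configuration. A minimal Go sketch (the internal hostname is hypothetical):

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // http.Get verifies the server certificate against the system
        // trust store by default; with a public CA there is nothing to
        // configure and no root to distribute.
        resp, err := http.Get("https://build.internal.example.com/")
        if err != nil {
            fmt.Println("TLS or network error:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("verified OK:", resp.Status)
    }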


Security by Obscurity is often misused.

SBO is bad, or at least insufficient, when it's the only control you have; when it's not, it's just called OPSEC….

There is absolutely no reason to leak unnecessary information that describes how your system is built and how many components there are, or to give out information such as internal service names, which makes exploiting vulnerabilities such as SSRF easier.

Heck, you can look up just how many bounties Google has paid out because everyone is aware these days of what "dogfood" means and that it's the nomenclature for their internal test environments/endpoints.

You will never have a system that has no design or implementation flaws.

The less information an attacker can gain about it the less likely they’ll be able to either discover a flaw or successfully exploit it.

Relying on external CAs opens you up to additional supply-chain attacks. It's not uncommon for highly regulated industries, or high-security systems in general, to not only use an internal CA but to break the public chain of trust altogether by removing the public certs from the system.

As far as the complexity goes: yes, running a CA adds another component, but there are plenty of good solutions out there that abstract the complexity. Most of the "complexity" is around constructing the CSRs and getting the certificates securely deployed during the deployment or post-deployment configuration management stages anyhow, which a 3rd-party public CA doesn't solve.
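
For illustration, a minimal Go sketch of that CSR-construction step (the service name is made up). Note that nothing here depends on whether an internal or a public CA ends up signing it:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "os"
    )

    func main() {
        // Generate a fresh key and a CSR for an internal service name.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        csr, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
            Subject:  pkix.Name{CommonName: "payments.internal.example.com"},
            DNSNames: []string{"payments.internal.example.com"},
        }, key)
        if err != nil {
            panic(err)
        }
        // The hard part (getting this signed and the result securely
        // deployed) starts after this line, whoever the CA is.
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: csr})
    }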

And tools like Istio and other service meshes usually let you automate all of that fairly easily, regardless of whether you use an internal or external CA.

Lastly, if you can't figure out how to manage root CAs on your endpoints, you shouldn't be doing anything with PKI, regardless of who issues your certificates.


> It's not uncommon for highly regulated industries, or high-security systems in general, to not only use an internal CA but to break the public chain of trust altogether by removing the public certs from the system.

Yeah, I work with internal PKI systems professionally, and that's usually the biggest, or one of the biggest, motivating factors, especially these days with TrustCor and the Turkish and Kazakh examples of public CA sussiness in the news. From the headline I assumed the article would be about this.

I guess you could theoretically dump the bundle and explicitly only trust the public CA used to issue your internal certs, but I've never seen that configuration in the wild; public CA usually implies bundle.
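
For what it's worth, a sketch of what that rarely-seen configuration could look like in Go: a pool containing only the issuing root, in place of the system bundle (file and host names are hypothetical):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "net/http"
        "os"
    )

    func main() {
        // Trust exactly one CA; the public bundle is never consulted
        // for connections made by this client.
        pemBytes, err := os.ReadFile("issuing-root.pem")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(pemBytes) {
            panic("no certificates parsed from PEM")
        }
        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{RootCAs: pool},
            },
        }
        resp, err := client.Get("https://app.internal.example.com/")
        if err != nil {
            panic(err)
        }
        resp.Body.Close()
    }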


Trawling open sources for operational intelligence on internal details that were never meant to be public is often recommended to both attackers and defenders.


There is also the open network movement where work services are available from the internet directly without a VPN.


BeyondCorp doesn't mean that there are no private systems. You still have jump boxes….


Just like security can mean "carefree" or "hypervigilant" depending on your POV, obscurity can mean different things.

It can mean trivially or fatuously odd; it can mean hidden; it can mean removed or at a distance; it can mean friction. While these differ when evaluating a security posture, to the outside observer they all amount to much the same thing: removal from view, consideration, or observation. The outside observer is not preeminently qualified to pass judgement on the nature of obscurity from their "privileged" position, no matter how certain they are of their inability to observe.


I mean, point 2 has parallels to IPv6. Addressable does not mean routable. Just own the domain you're using internally or it will become a headache.


Nobody verifies certificate policies anyway; you can't rely on them. So point 1 is moot.


> security by obscurity

I’ve often wondered at a more abstract level, where’s the line? A password or a private key are security by obscurity.


> A password or a private key are security by obscurity.

No, they're not. You could think that if you're trying to guess what the term means, but it means something specific: the opposite of basing your security on a single secret that is decoupled from all implementation concerns.

>> Security by obscurity is the reliance on design or implementation secrecy as the main method of providing security to a system or component.


It's not that black and white. Port knocking, for instance, is basically extending the password. Is the port configuration "implementation"?


You mean "it's not always black and white" and you are right. However the grandparent's case is clear-cut.


They said "it means", i.e. they were defining a rule. The point of my original comment is that the boundaries are fuzzy, so I wasn't commenting on specific examples but on the concept of the rule.


Yes and no, passwords are an implementation detail.

There’s a reason why bank cards can securely use 4 digit numeric pins but banking websites need much longer passwords.


> Yes and no, passwords are an implementation detail.

Passwords are not an "implementation detail" unless they are hardcoded (and a hardcoded password is very arguably security by obscurity).

> There’s a reason why bank cards can securely use 4 digit numeric pins but banking websites need much longer passwords.

Yes, because the bank card provides some additional form of authentication, typically in the form of a chip on the card.


Not quite. Bank cards can be revoked after unsuccessful attempts while doing that with usernames is problematic.

You need a carefully constructed system to make the use of passwords viable.


> Not quite. Bank cards can be revoked after unsuccessful attempts while doing that with usernames is problematic.

The same happens with websites; have you never been locked out of a website (or had your IP temp-banned) for failing too many login attempts?

Again, the reason that bank cards only require a PIN from you is that _the card_ provides the other half of the credentials.

If a bank card simply contained the equivalent of a username (say your legal name and address), then it would not use just a PIN.


The reason you can get away with a 4-digit PIN is that you can only try the PIN if you're in physical possession of the card. And if an attacker is already in physical possession of the card, you want to revoke the card in any case.

Websites, however, cannot just revoke the username: it would allow for trivial denial-of-service attacks. I could just enumerate all account numbers for my bank and lock out all customers. So the best that's available is a temporary lockout, and then the attacker gets to try again.


Plus, all you can lose is money (not privacy / control / data), and the CC companies take the hit as a cost of doing business.


A temporary ban is just a rate limit by another name. Blocking IPs is all well and good, except that multiplying attempts across thousands of IPs means a 4-digit numeric password would always get cracked eventually.


Security through obscurity is a great idea and works to slow down, hinder, and even stop an attack for all kinds of systems, whether it be an animal and its camouflage or a computer system holding PII with a vacuous name. Where the utility breaks down is when that is your ONLY means or layer of security. For example, my house is hard to see from the road, so an opportunistic thief might never see it and just pass on by. But say this criminal finds my house by some fortuitous means. Well, now he has to get past my locks and, if I am home, the bullets that may fly his way. And then he will have to try to get away before the police arrive.


All security relies on some sort of secret.

When brute-force guessing of that secret gives a motivated attacker, or even a random person, a realistic chance of stumbling upon it, that's the failure mode of security by obscurity.

With passwords and private keys we can mathematically verify that brute-force attacks won't work. At that point we're not relying on obscurity, because we're assuming the attacker has knowledge of everything except the secret itself.
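
To put rough numbers on that (the guessing rate is an arbitrary assumption, just for scale):

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        const guessesPerSecond = 1e9 // assumed attacker throughput
        pinSpace := math.Pow(10, 4)  // 4-digit PIN
        keySpace := math.Pow(2, 128) // 128-bit key
        fmt.Printf("PIN space exhausted in %g seconds\n", pinSpace/guessesPerSecond)
        fmt.Printf("key space exhausted in %g years\n", keySpace/guessesPerSecond/(3600*24*365))
    }

The first prints 1e-05 seconds; the second roughly 1.08e+22 years. That gap, not secrecy of design, is what the security rests on.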


A castle on the hill is hard to hide but also hard to approach, and requires hauling breaching tools and weaponry uphill.


In online fora, “security by obscurity” is when you disagree with someone.

In reality, it’s when the primary control used is restricted, shared knowledge. Famously back in the day if you knew a phone number and AT&T lingo, you could compromise the phone system.

In the case of a public CA, if your hostnames aren’t license plates, the names of servers may provide context useful for an attacker. They also provide an attack vector - the control panel for the CA.


Quite BS points. Downsides only; where are the upsides?

This is from a company which offers an enterprise cert management product. Is this a coincidence?

Come on, this is not HN-quality material. Pure marketing.

Discuss PKI complexity, CA distribution, non-domain clients where you can't deliver your custom CA.

Discuss monitoring.

Discuss the governance of your own CA. Who is going to hold the keys? Are you going to issue sub-CAs? Who's going to handle and monitor those?

Come on. Private CAs are a hassle. Welcome public CAs everywhere!


Google, Microsoft, Apple, Facebook, NSA, etc. do agree with your view. I personally do not welcome public CAs everywhere, and I don't welcome IPv6 everywhere. I don't want people knowing any more about my private systems than I choose to divulge.


Having talked with PKI folks at the first three of those: no, there is considerable weight at Google and Apple, at least, behind the view that non-public certificates do NOT belong in the public PKI. There's reasonable evidence that many of the problems public CAs have had with content and algorithm agility have stemmed from the use of public CA infrastructure for internal or private purposes that never belonged in the public space. Things like OU fields, custom OIDs, and old algorithms were all difficult to move away from because large customers of public CAs were busy using them for purposes better suited to internal infrastructure.

This is one reason many of the browsers have started enforcing things like CT stamps for roots with clear exceptions for enterprise CAs. They want to encourage people to move their internal uses to internal PKIs instead, so that it’s easier to make clear rules about the content of public certs.


> I don't welcome IPv6 everywhere

Huh? What about it? As in the v4 case, you should put your network behind a firewall, hiding it from outside access. If you're referring to NAT, you can use it with v6 too, though I don't think that's reasonable: NAT doesn't add much security and shouldn't be used as a security mechanism.


Adding to the point on flexibility: not that anyone would do this, but SAN certs technically have no restrictions on nested sub-domain wildcard names, yet public CAs will not allow more than one sub-domain wildcard level, because they would lose money doing so.

With a private CA one could (not that one would) add as many wildcard sub-domain levels as they wish.

e.g.

    *.some-name.private-tld
    *.*.some-name.private-tld
    *.*.*.some-name.private-tld
and so on. Again, not that anyone would do this, but public CAs will not permit it, for financial reasons, not technical ones. They would instead have one buy a restricted signing cert bound to an apex domain+TLD, and I only know of a couple of CAs that even have that option; it is not an official offering. One has to be a partner of that CA. More common is to have customers buy one wildcard sub-domain cert per sub-domain level.

Another reason one might use an internal CA is to remove all public CA trusts on hardened systems and document this in SOC1/SOC2 controls as one of many mitigating controls for data leakage. Intra-datacenter communication can be forced through a MitM proxy in addition to VPNs, denying anything that is not specifically signed with the internal CA. This is not a common setup, and only a handful of orgs would need such a control, but they do exist.


Full disclosure: I work for Smallstep.

I love your point about being able to limit trust on hardened systems to your own CA. For servers, in many cases you don't need any CAs in the trust store, because a lot of services will only trust the roots you've explicitly configured (if you're using client authentication).

I've also noticed that Linux container distros generally ship with empty trust stores. So, a container distro can be a nice starting point for this.
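
A minimal sketch of that pattern for a mutual-TLS server in Go (file names are placeholders): the only root it consults for client certs is the internal one it is explicitly handed, so an empty system store costs nothing:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "net/http"
        "os"
    )

    func main() {
        caPEM, err := os.ReadFile("internal-root.pem")
        if err != nil {
            panic(err)
        }
        clientCAs := x509.NewCertPool()
        if !clientCAs.AppendCertsFromPEM(caPEM) {
            panic("no certificates parsed from PEM")
        }
        server := &http.Server{
            Addr: ":8443",
            TLSConfig: &tls.Config{
                // Only client certs chaining to the internal root are
                // accepted; the system trust store plays no part here.
                ClientCAs:  clientCAs,
                ClientAuth: tls.RequireAndVerifyClientCert,
            },
        }
        panic(server.ListenAndServeTLS("server.crt", "server.key"))
    }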


> Setting up your own private CA gives you complete control and flexibility to customize your certificates to suit your needs.

This is probably the second strongest point in the article, but one that is probably also of limited necessity. Many servers standardize on common TLS formats and algorithms. At least on my internal network, I've not had a use case that I couldn't use Let's Encrypt's PKI for.

> If you decide to do the same, it means that you can’t use publicly trusted CAs to issue certificates to your internal web apps and services, DevOps build servers, testing resources, IoT devices, or other entities with domain names that are only valid in the context of an internal or private network (e.g. example.local, 10.0.0.0/8, or even localhost). The solution is to get your own CA.

You shouldn't be using non-routable local domains either. Internal domains should be owned externally and the authoritative servers should not propagate externally.

> The solution is to set up your own CA to issue certificates for your internal infrastructure. That way, you can keep your internal hosts from showing up in a public CT log.

The last point is the strongest, imo. Transparency means that you have created maps of your network, past and present. That said, if you're relying on secrecy alone, that is no form of security. Obscurity these days buys you time by wasting the attacker's time, but it's not a great strategy, imo. It'd be like if the military assumed nobody had ever seen a base from above.

She goes on to talk about configuring your own internal CA. Is there anything prohibitive about using LE PKI to run an internal CA? Maybe short expiry? I wish they'd spent the meat of the article on this rather than concluding with "don't".


This seems like a marketing blog entry from a company selling private CAs. My current job is at a company that is 100% BeyondCorp: there are no internal networks (except stuff like building IoT, etc). It's the best thing ever from a user-experience perspective, and it's not even mentioned.


"This seems like a marketing blog entry from a company selling private CAs."

Anyone can set up their own private CA, no need to buy anything: https://www.digitalocean.com/community/tutorials/how-to-set-...
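
And the core really is small. A hedged sketch of minting a self-signed root with nothing but the Go standard library (the name and the 10-year validity are arbitrary choices, and a real setup would keep this key well away from production disks):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "Example Internal Root CA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageCRLSign,
            BasicConstraintsValid: true,
        }
        // Self-signed: the template is its own parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }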


Sure, though ideally you'd not have the CA root private key lying around on your disk for a production use case.


Here are a few points to add to that:

1. You should not keep more than one internal CA.

CA infrastructure tends to sprawl, multiply, and creep around the organization unless properly pruned, and the CAs will quickly diverge in certificate issuing rules. Better to use the one you hopefully already have, and secure it well; lots of eggs are in that basket anyway. The article is a commercial for their product, but there are plenty of mature ones out there. FreeIPA is one.

2. You should renew certificates in good time, well before they expire.

As long as the certificate is issued internally and fulfills the rules for allowed certificates, let Ansible/Puppet or a similar tool renew it. Just make sure applications get restarted when the certificate is rotated. Defining it as a configuration item helps everyone.

3. Any certificate that hits disk should generate monitoring.

Because renewals can fail, and you really want to know in advance if certificates haven't been rotated properly. There will always be special cases and externally created certificates too.
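
A minimal sketch of such a check in Go (the host and the two-week threshold are assumptions):

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // nil config means system roots; for internally issued certs
        // you would point RootCAs at your internal root instead.
        conn, err := tls.Dial("tcp", "app.internal.example.com:443", nil)
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        leaf := conn.ConnectionState().PeerCertificates[0]
        remaining := time.Until(leaf.NotAfter)
        if remaining < 14*24*time.Hour { // alert two weeks out
            fmt.Printf("WARNING: cert expires in %v\n", remaining)
        }
    }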


I think private CAs are a mistake if only because of all the time wasted on certificate errors from improperly configured platforms. Don't forget that some languages don't respect system stores.


If those platforms haven't been configured with the most basic things, such as the company CA, there are going to be lots of other headaches too, including security.

How do you enforce authentication standards in that kind of environment, let alone an internal user directory?

No software can enter an environment completely unconfigured and be expected to work.


> But, setting up a private CA is no easy feat

The number of hours I've seen companies spend trying to get private CAs to work with all their dev tools (artifact repositories, etc.) often outweighs the benefits.

If you have a highly centralized process for setting up developers' machines it can work, but you will probably dramatically underestimate how much time and troubleshooting will be involved.


Point 1 doesn't make sense. One doesn't recommend GIMP or ImageMagick to Paint users just because you'll eventually run into limitations as you want more control from your image editor. You can still start to use those things if and when you run into limitations. It's as much work to switch as it is to do the more complicated thing in the first place, so you might as well start with the simple default.

Point 2 is the same as 1: once you want to do something and it turns out you cannot, you can always still go for an internal CA.

Point 3 states the obvious: if you send internal hostnames to third parties, third parties will know of them. Perhaps it could be made more obvious that your domain names will end up in public logs (protip: wildcard), but how many people run publicly reachable services that are meant to be accessed by systems outside their control (ones not sharing a common internal CA, like a friend or business partner) yet aren't meant to be used by the general public? This issue isn't specific to internal infrastructures.

Users clicking past certificate warnings makes life as a security tester so much easier that I would say: please use a valid certificate, and don't listen to anyone saying you absolutely need an internal CA and shouldn't use a public one, if that means you'll put off the task or if there are any end-user systems where the CA won't be deployed (e.g. pentesters who are part of a larger organisation will often have a corporate laptop with Windows and all the policies, and a separate system running *nix for doing their job, which typically wouldn't have the CA).


Meta: does HN have any way to vote for a particular domain to be marked as... I'm blanking on the right word here, but maybe "shallow"?

Every time I see a link to smallstep, it is ALWAYS some contrarian FUD specifically engineered to generate clicks, ending in an upsell for their (uninteresting) commercial product.


They forgot that you can create internal certs with expiration dates longer than a year.

You'd be hard-pressed to find an organization that hasn't had operations taken down at least once by expired certs.

Renewing all of the certificates in the org every year is just a time consuming hassle.

Obviously auto-renew setups are preferred, but in the instances where this is not possible, I prefer 20-year certs over hoping someone will remember to change the cert for the next 20 years.

It’s all internal traffic anyway. You could use all of the time saved to harden your network instead, which would provide greater security than cert rotation.


You would think, with the frequency this happens, some adults in the room would have introduced a "soft expire" mode by now, where it just loudly complains instead of flatly refusing to work, particularly given how marginal a "security feature" expiry is anyway.


Sounds to me that a "soft expire" is just shifting the problem slightly. If things continue to work, then people will ignore the soft expiration until it becomes a hard expiration. If things stop working, then there's no point in having the soft expiration.

I'm sold on the LetsEncrypt philosophy of having short expiration dates so that organisations are more or less forced to properly automate their renewal processes.


This is a great point.

There's an issue in Docker Desktop where some certificate expires and then Docker Desktop fails to start. The worst part is that it's a self-signed certificate that expires in a year. Why just 1 year when it's self-signed and could be 20 years?

Internal traffic shouldn't require changing a certificate every year. Even an automated rotation process can fail, and having more frequent rotations allows more chances for failure.


Full disclosure: I work for Smallstep.

The recent trend toward "Zero Trust" security has come about in the wake of attacks on internal infrastructure, where having a firewall wasn't enough. There can be lots of ways into internal networks. And attacks can come from the inside. Of course, every environment is different and every threat model is different. Internal CAs are not for everyone. But, a lot of orgs have a threat model that demands authenticated encryption for internal traffic. And at that point, you may want an internal CA.

As for validity periods... as with many things, there are tradeoffs. We advocate for short-lived certs (a few minutes to a few days). Short-lived certs can greatly simplify the revocation process, in the case where an attacker steals a private key. You often don't need the complexity of CRL or OCSP, because you can just let the cert expire and tell the CA to disallow renewal.

And, if you have a policy where certs last for 7 days and are renewed every day, it forces the organization to use good hygiene around automated renewals and monitoring.
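
As a sketch, the renewal side of that policy is only a few lines wherever it runs (the file name and threshold are illustrative, and the actual renewal call to an ACME or step client is left as a comment):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "log"
        "os"
        "time"
    )

    func main() {
        pemBytes, err := os.ReadFile("leaf.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM data in leaf.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // With 7-day certs, renewing whenever less than 6 days remain
        // amounts to renewing roughly daily, and leaves several failed
        // attempts' worth of slack before anything actually expires.
        if time.Until(cert.NotAfter) < 6*24*time.Hour {
            log.Println("renewing now")
            // hypothetical renewal step: invoke your ACME or step client here
        }
    }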

However, there are scenarios where long-lived certs make a lot of sense. For example, if the private key is hardware-bound and is non-exportable, then it makes sense to use longer validity periods. In this case, a successful attacker might be able to use the key, but they cannot steal it. So, you can get away with a longer-lived cert here. But, all certs do eventually expire and you still have to have some answer to that.


Not sure why this is getting downvoted. I upvoted it because I'm not sure people understand the complexity of internal operations and all of the uses of certs and opportunities for things to go wrong. But 20 years? Geesh. Couldn't you have just advocated for staggered expirations?


The CA that issues the 20-year certs stays offline, so the only way to get a private key is to hack each machine.

If the machine is compromised to the point where you can extract the key, the cert does very little since you already own the endpoint.


I manage an internal CA at home with Caddy, which uses Smallstep. I distribute the root crt to my devices with Ansible. Works great.


Not only that, but you should verify the CA… the _exact_ CA. If you're bringing external information into a system, the source needs to be cryptographically authenticated.

Do you store identities in an external system (LDAP, for example)? If you don't verify an exact CA (or TLS key), it just means your system is open to a DNS rebinding attack.


Another thing I see far too often is using decorated domain names for public hosts. This pattern weakens the public's ability to separate out real domains from fake ones.

It's one of those rules that everyone makes then immediately breaks, like don't combine your company logo with ad-hoc graphics.


What's a decorated domain name? (Given the season I'm imagining bells.lights.tree.yourcompany.example.com)


If I'm understanding correctly, I think they're referring to a case like this:

https://www.experian.com/ is the primary domain of the company.

https://www.experianidworks.com/ is a "decorated" domain for a service offered by the same company.

Anyone could register experian*.com. So, if I want to determine whether the decorated domain is actually part of Experian or not, I'd have to go to Experian's website and dig around for a link to it. What makes it even worse in this case is that Experian is an incredibly high value target.


Every certificate should root in self-attestation, and all PKI sigs should be attached/revoked after the fact. But that GPG-like scheme is probably never gonna happen, because money.


Why? Operating an internal CA is a liability. Any concerns about hostnaming can easily be solved.


How is point 2 a problem? I can buy a domain and use DNS challenges to get certs.



