Google Open-Sourced Kubernetes to Boost Its Cloud (wired.com)
84 points by kalgen on June 10, 2015 | 35 comments



I've been working with Kubernetes a fair bit (I'm contributing to porting it to AWS). It's an exciting time: I believe that we'll be running everything in containers within the next few years, and Kubernetes solves some of the big missing pieces in actually running Docker in production across a cluster. There are other competing systems trying to do the same thing (including Docker Inc itself), but I think Kubernetes has a huge advantage in having years of experience of what did and didn't work well in Borg.

Most importantly of all though, I have found it a great project to use, and a really great project to contribute to. I think that contributions are the lifeblood of open-source projects, and I give Kubernetes 10/10 for their community and processes, which I think bodes very well for the future.


I am a little surprised that Google is going this route. Making Google Cloud into a huge business must be a high priority. I used Borg for a while in 2013, and it was amazing, especially the logging for tracking down runtime problems.


Why? Google hands out all sorts of papers and such a few years after the fact.

This is just another example, an open source version of Borg that will always be a few years behind.


While it is true that Borg is 10 years ahead of Kubernetes, we have the benefit of knowing where Borg arrived rather than following the same winding course.


Of course, but unless there is a massive maintainability flaw in Borg, it's unlikely you can duplicate 10 years of effort AND maintain pace with the progress of Borg and its successor(s).

Well, at least not without a huge influx of developers to the open source side.


First, there are a LOT of people working on Kubernetes right now.

Second, of course it will be asymptotic for a long time. But keep in mind that we don't want or need everything that Borg has, and Borg does not have everything that Kubernetes does. Kubernetes is not a clone of Borg - it is inspired by the lessons and experiences we got by doing Borg. Often that means we learned how NOT to do something.

If we can get 75% as functional as Borg in 1/3 the time, we'll be doing pretty darn well. I think we're on track for that.


> If we can get 75% as functional as Borg in 1/3 the time, we'll be doing pretty darn well. I think we're on track for that.

I'm sure you can do that. That doesn't invalidate my statement that it isn't surprising Google would do this precisely because it'll only be 75% as functional as Borg at any given point in time.


Is it just me, or does anyone else interpret the recent wave of open sourcing as a trend to not create open standards anymore, but instead to open source the technology altogether (without even creating a standard)?

Have standards failed?


My 2c, not speaking as a Google rep but as myself, is that if you have a problem only a single actor needs to solve, you're going to end up with software, not a standard. To get a standard, you have to have a situation where the value of cooperation is higher than the cost.

There are plenty of people now who need to solve the container problem, but Googlers have been working on this shit for years, before it was really on anyone else's horizon. Google employees incepted the cgroup feature way back in 2006, to solve problems that were already being felt acutely at that time within Google. Folks had been working on this stuff for a long time before it mattered to anyone else, and that's why what's coming out is software rather than standards. There is no way a big company is going to delay solutions to an urgent strategic problem in order to be part of a democratic process for the sake of a few people's ideals. Maybe if they'd seen it coming five or ten years in advance, to give enough time for the standardization process to occur, but Google was far too small and the future far too uncertain in 2001 to predict what might be needed in 2006.
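
For the curious, the interface that eventually landed is deliberately mundane: cgroup v1 exposes everything as files under a special filesystem. A rough Go sketch of what that looks like from user space (the "demo" group name is made up, and it assumes a standard cgroup v1 mount plus root privileges; nothing here is Google-specific code):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Hypothetical control group under the cgroup v1 memory controller.
        group := "/sys/fs/cgroup/memory/demo"

        // Creating the directory asks the kernel to create the cgroup.
        if err := os.MkdirAll(group, 0755); err != nil {
            panic(err)
        }

        // Cap the group at 256 MiB; the kernel enforces this from now on.
        limit := []byte("268435456")
        if err := os.WriteFile(filepath.Join(group, "memory.limit_in_bytes"), limit, 0644); err != nil {
            panic(err)
        }

        // Move this process into the group; its memory use is now accounted
        // against (and limited by) the cgroup.
        pid := []byte(fmt.Sprint(os.Getpid()))
        if err := os.WriteFile(filepath.Join(group, "cgroup.procs"), pid, 0644); err != nil {
            panic(err)
        }
    }

Container runtimes like Docker and LXC are, at this layer, largely careful orchestration of files like these plus kernel namespaces.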


Almost every problem that containers solved was solved with EclipseBSD (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.25.8...) way back in 1998.


Yes, the concept of resource control wasn't new. Getting it from a research project to the form where it was production-ready and supported for inclusion in the mainline tree of Google's kernel of choice took a bit longer.


I used EclipseBSD, it was production ready. The community wasn't receptive at the time for getting it into the mainline FreeBSD tree.

For this I applaud Google for getting it into the mainline, even when the masses don't see the point.


Standards are great when multiple companies want to provide the same service, which must communicate with software written by other companies.

Compiler toolchain, devops, sysadmin, languages, runtimes, etc don't really fit that picture.


This is a very good debate on exactly this topic: https://www.youtube.com/watch?v=kRVWjC6osuw

TLDR: we are at a stage where we don't yet know what functionality needs to be supported; it is not a good time to form standards, it is a time for exploring possibilities and finding the best technical solutions.


This is an interesting question. Really, open source and standardization are not competitors. Ideally you'd have both at once; multiple open-source implementations of a standard specification.

That said, there are times when the business model behind an open-source project is ostensibly at odds with standardization. The situation between Docker and CoreOS' app container spec comes to mind. The Docker container spec was defined by the implementation, not the other way around, and CoreOS took that opportunity to define an actual spec (and an implementation). Heated debates erupted.

In the area of cloud orchestration, which Kubernetes seems to fill, I think it's still in a "discovery" stage. Early on the Kubernetes devs said they wanted to focus first on identifying the right abstractions. I imagine standardized specs might come out of it once things stabilize.


Why would you want your code private?

Instead, let anyone work on it and benefit.


The point of open protocols and standards isn't to keep the code private; there are plenty of open source SMTP or IMAP servers.


Kubernetes is alpha software right now. I was at a meetup last month and there are a lot of missing features from what I recall. They're planning to fix one of these features with some project called Ubernetes.

While people are raving about containers, there are still security issues with containers, no?

I think VMs will be here to stay for a long while, and while we might have to pay a performance and memory hit for them, they offer better isolation.


While I have no doubt someone will pop out with some wild example of how Docker lowers your security bar, wrapping your application in a container shouldn't really hurt you. In fact, adding the need to escape a container should be a net benefit to security; it's what happens next that is the concern.

So you've removed a major barrier to deploying more than one app per server. You no longer need to worry about dependency hell and you've made moving services around super easy. You decide you can save a crap ton of money by sharing resources; this is where the problem lies. If you run multiple apps on the same server without a container layer, you'll still have the same app isolation concerns, only attackers now don't have a container to escape from, and you might have dependency problems.

So the point is, you can't rely on Docker isolation instead of VMs from a security point of view, but if you stick with a single Docker container per VM, you'll still get the deployment benefits, such as the ability to create idempotent binaries and deploy those. That is, in my opinion, an improvement over trying to reproduce builds on different platforms or scp'ing your builds hoping all the required packages are in your vendor directory, etc. Maybe not a big deal if you're deploying Go, but a really nice thing when working with PHP, Ruby, Python, etc.


> there are still security issues with containers no?

In Linux, perhaps. However, FreeBSD jails and illumos zones are rock-solid. There's this crazy hype around containers these days and people just ignore the stable, secure, and tried technology; I don't understand it at all!

FreeBSD and illumos are not Linux, but they're still Unix-like; it's not like you'd have to use OpenVMS. Plus you'd get other benefits too, like DTrace and ZFS. And on illumos now you can even run Linux binaries in a zone.

So why do people simply pretend these secure technologies don't exist? Can someone explain?


> In Linux, perhaps.

If you generalise from Docker. There are other container models on Linux -- LXC, lmctfy, Rocket, Garden, etc. have different security tradeoffs.


Because there are a lot more Linux users / developers than BSD ones.


I don't think that "X has more users than Y" is a valid complete argument in itself; otherwise nothing would ever change. Of course it matters in the grand scheme of things, but the causal relationship is more complex, involves many degrees of freedom and goes both ways.

Before docker came, Linux had LXC, which wasn't as popular as docker is now, but it was certainly known and used by people. So when docker came, LXC had more users than docker, and yet docker surpassed LXC in popularity in weeks, so the "X has more users than Y" state can be changed by various factors and it's not enough to keep the system in equilibrium.

So yes, the fact that Linux is more widely used than FreeBSD and illumos in the developer community is certainly an important factor, but I don't see anyone ever saying "FreeBSD is great but we want to use something supported by a larger community", or "illumos is great, but we don't have expertise with it", which are certainly important arguments to consider when making a decision.

But I hardly see anyone making these arguments, or any other arguments really. It's like these systems don't even exist. At first I attributed this to the "X has more users than Y" factor, but then I see people having particular problems with Linux container technology, in areas such as security, virtual networking, etc. And these are problems already solved by FreeBSD and illumos. Surely when you have a problem you look for alternative solutions that don't have these problems?

But I don't see people looking over the alternatives at all. As I said, there are many valid reasons not to use these other technologies, but I am very perplexed that people refuse to even acknowledge the existence of them.

And now that illumos can run Linux inside a zone (and FreeBSD did this too 15 years ago, and still does for 32-bit binaries, I believe work is well underway to extend this to 64-bit as well), I think the "I only know Linux" argument loses some potency, you can run Linux after all...


LXC and Docker are operations technologies: they take the same codebase and just generate a different product from it.

Switching to a different OS, however, requires different development-time technologies. Some third-party dependencies like libraries you're used to using might not even exist on the other platform.

Effectively, developers are locked into a pretty tiny development-time ecosystem: all the devs I know develop on Ubuntu (or on OSX with testing on Ubuntu, if they can get away with it.) They depend on the apt package graph, including PPAs.

Half of the renewed enthusiasm behind containers isn't about security; it's about the fact that a lot of operations people prefer RPM-based distros, and it was always really annoying to try to keep a given piece of Linux software portable between deb-based and RPM-based distros. You needed to figure out how to specify operation-time dependencies against at least two package graphs, and also compile-time dependencies using autotools or similar.

In contrast, Docker and similar ops software are interesting (from one perspective) precisely because they let devs learn fewer things: you develop your software on your Ubuntu machine, create a container that basically replicates your development environment, "install" your software in there, and then distribute that. Now your software can be run on some other dev's (Ubuntu, OSX) machine, or deployed to a production (RHEL) machine. The other deployment scenarios are pretty minor in comparison.

Or, in short: devs and ops are separate jobs. Containers make ops people do more work/learn more things, but let devs do less work/learn fewer things. That's why devs are enthusiastic about them: it pushes the work of packaging their software (or writing autoconf scripts) for various platforms off their plate.

Devs are interested in learning one thing—e.g. how to write a Dockerfile—that lets them drop an entire stream of continuous work/learning/keeping-up they have to do, e.g. handling changes in the multiple platforms their software supports.


> Now your software can be run on some other dev's (Ubuntu, OSX) machine, or deployed to a production (RHEL) machine.

What about dependencies between user-space and the host kernel? Aren't all containers forced to use the same kernel as the host?

The packaging scenario that you describe has existed for years with VMs, where the VM can have a kernel or even an OS version that is different from the host.


The main difference (besides overhead) is that a VM contains a running collection of OS services along with your app. Because of this, the ops team needs to be involved in keeping the VM up to date, the VM usually needs to be "managed" with orchestration much like a physical machine, and all in all it's a big interdependent mess where the devs can't just forget about deploy-time concerns.

Idiomatic usage of containers forces one particular solution for this: a container contains one service; multiple services means multiple containers, and the orchestration of those containers is up to the ops people and their software.
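
To make the "one service per container" idiom concrete, here's roughly what that unit looks like when expressed against the Kubernetes Go API types. This is only a sketch: it uses today's k8s.io package layout, which postdates this thread, and the pod name, container name and image are invented for illustration.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // One unit of deployment = one service. A database or cache would be
        // its own pod, wired up by the orchestrator rather than baked in here.
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "web"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "web",
                    Image: "example/web:1.0", // illustrative image name
                    Ports: []corev1.ContainerPort{{ContainerPort: 8080}},
                }},
            },
        }
        fmt.Println("would hand this spec to the orchestrator:", pod.Name)
    }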

VMs can also be done this way. EC2 ephemeral instances work great for doing a CoreOS-like "upgrade by starting up instances of new AMIs and killing the old ones" strategy.

However, since ops people can't be guaranteed that random VMs they're handed do not, in fact, have arbitrarily-old services running in them with possible security vulnerabilities, they have to be conservative about deploying random VMs created by devs. Thus, VMs don't tend to get created by devs; thus, the devs still have to solve the deploy-time problem some other way to get the ops people something they can build into a VM. This isn't as much of a problem with containers.

Unikernel VMs, on the other hand, are effectively equivalent to containers: they both provide "just one service in a sandbox"-level granularity that ops can then manage as they please. If unikernels had come around 10 years ago—if Linux had been factored into a rump kernel, for example—I don't think we'd be nearly as interested in containers today.


Any references to the publications containing the real blueprint?



This has nothing to do with the article, but I absolutely hate the way wired underlines their article links. It might fit with their magazine style design but it's so incredibly distracting. I always want to bump it 1px up or down!


Google open source projects in general are just shitty. The reason is that they don't open source the whole thing and the code ends up being full of assumptions that don't hold because you are either not running things at google scale or are missing certain key bits of the "secret sauce".

In any event google compute is a terrible user experience compared to the likes of AWS and other cloud providers. Heck, even the shittiest VPS providers tend to be better than google compute. So the open-sourced "secret sauce", as the article puts it, is still missing key bits, so I don't know how many people actually fall for the good will part.


disclaimer: i am a founder of the Kubernetes project and did the article with Cade at Wired. i also was product lead for compute engine back in the day fwiw :).

I am not sure which projects you have looked at from Google in terms of Open Source, but in the case of Kubernetes we have worked pretty hard to engage a community outside of Google and work with the community to make sure that Kubernetes is solid. One of the things that I like about it is that many of the top contributors don't work at Google. People like Red Hat have worked very closely with us to make sure that (1) Kubernetes works well on traditional infrastructure, (2) it is a comprehensive system that meets enterprise needs, and (3) the usability is solid. People like Mirantis are working to integrate Kubernetes into the OpenStack ecosystem. The project started as a Google thing, but is bigger than a single company now.

Another thing worth noting: building a hosted commercial product (Google Container Engine) in the open by relying exclusively on the Kubernetes code base has helped us ensure that what we have built is explicitly not locked into Google's infrastructure, that the experience is good (since our community has built much of the experience), and that the product solves a genuinely broad set of problems.

Also consider that many of our early production users don't run on Google. Many do, but many also run on AWS or on private clouds.

-- craig


I'd be interested to see whether Google follows Pivotal's lead and donates its IP to an independent foundation, as happened with Cloud Foundry.

Disclaimer: I work for Pivotal, in Pivotal Labs.


What key bits do you think are missing from Kubernetes?

I'm sorry you seem to have had a bad experience with GCE, but please know that Kubernetes runs on several other clouds, too, with no crippleware or anything. It is 100% open.


And 100% on-track for awesome.

Yes, sometimes development/testing for new Kubernetes features 'feels' like it's focused first on GCE functionality (before other platforms), and earlier on it had some hooks that weren't great (like GCE-only external load balancers and storage). But hey, it's not even v1.0 yet - and all those things are either fixed or being worked on already.

And as a non-GCE user, you aren't a second-class citizen. It works everywhere.

We've deployed successfully in AWS, vagrant and bare-metal (in the garage), so far. All with 'one-command' automated deployment and re-use of our pod & service specs throughout.

Roadmap/Architecture-wise, it would be good to see a more 'pluggable' approach for 3rd party integration (more like an OpenStack model), but again, we're still pre-v1.0...

Also, I think the google-folk here are being very 'reasonable' in their replies. Your comment was mis-directed & ill-informed. Go do some reading or watch Kelsey Hightower's presentation from a couple of months ago:

http://chariotsolutions.com/screencast/philly-ete-2015-16-ke...


This is in no defence of Google, although it might seem so, but why aren't Amazon or other big cloud providers opening up their stuff? It's all SDKs and agents.

Google have dedicated developers who are hacking on a lot of open source projects - not just Kubernetes - which takes a significant amount of time.

After all - this is for all open source users out there - it's all Open Source - you don't have to use it.



