
I think it'll be quite interesting to see how the smaller players organize themselves around the multitude of cluster resource management tools emerging as a natural reaction to Kubernetes growing out of the work Google's done on Borg.

I am curious to see how long a shake-out period we'll see before there's either a de facto stack of "compute resource" tooling, or whether there will always be a highly fragmented and diverse set of ways to accomplish your goals. Just off the top of my head (and there are way more), I'm thinking of Tectonic[1], Mesosphere[2], Rocket[3], and Kismatic[4] as a few examples.

As a technologist and a planner, it's been challenging to see far enough into the future to decide which tools to devote myself to learning at this point. I do think we're certainly in a "post-public-cloud" timeline where we're getting good enough (or will be in 6-12 months) at abstracting virtualization right up to a millimeter or two below the application layer of our stacks. How we choose to do so currently seems to be up in the air.

In my mind, this opens up the possibility of compute as a resource much wider than had previously been possible. We'll be less reliant upon Azure, AWS, and GCP's mixture of PaaS and IaaS and much more interested in compute as a resource, likely from bare-metal or private cloud providers.

I'm looking forward to the increased efficiency (in both compute power and cost) and security available in moving from application-level virtualization to operating-system-level virtualization.

[1] https://coreos.com/blog/announcing-tectonic/ [2] https://github.com/mesosphere [3] https://github.com/coreos/rkt [4] https://github.com/kismatic



Disclosure: I work at Google and was a co-founder of the Kubernetes project.

I think your observations are interesting. From my (somewhat biased) viewpoint I don't think we will enter into a 'post cloud' world. There are very real efficiency gains from running at public cloud provider scale, and the economics you see right now are not what I would consider 'steady state'. Beyond that the systems we are introducing with Kubernetes are focused on offering high levels of dynamism. They will ultimately fit your workload precisely to the amount of compute infrastructure you need, hopefully saving you quite a lot of money vs provisioning for peak. It will make a lot of sense to lease the amount of 'logical infrastructure' you need vs provisioning static physical infrastructure.

There are however legitimate advantages to our customers in being able to pick their providers and change providers as their needs change. We see the move to high levels of portability as a great way to keep ourselves and other providers honest.

-- craig


Since we have someone who worked on these projects here, there was a report a couple of years ago about Borg and its successor, then called Omega. Is Kubernetes related to / a renamed Omega?

Edit: Wired story: http://www.wired.com/2013/03/google-borg-twitter-mesos/


Omega is a separate system from both Borg and Kubernetes.

Kubernetes is heavily inspired by both Borg and Omega and incorporates many of the ideas from both, as well as lessons learned along the way. And many of the engineers who work on Kubernetes at Google also worked on Omega and Borg.


Hi Craig!

Please feel free to respond to me at your leisure, but are you *sure* we will never enter a post-cloud world?

Not to say that there will be no cloud infrastructure, per se, just as mainframes still exist today.

On the other hand, I imagine someday we will have "datacenter in your pocket" type devices. The challenge will be who has the data -- obviously Google has already identified this as a key strategic advantage. The challenge will *not* be who has enough resources to compute it.

These pocket devices seem natural as a way to place strong AI at your fingertips, Siri-like agents, autonomous robots, etc. The first ones, which we have now, either use a data connection or are optimized to have small data sets, but the need for larger data sets is obvious. Once it becomes the primary limiter, I think it will only be a matter of time before "big data" is decoupled from the cloud and personal computing retakes its dominant position. Some will use laptops, some will use phones, but the effect will be the same.

There are also the privacy benefits from managing large datasets on your own device -- solutions are already available for things like how to back up your data, how to sync large sets of common data among a network of untrusted peers, and how to curate that data.


Cloudlets might be the herald of the post-cloud world.

http://elijah.cs.cmu.edu/

http://elijah.cs.cmu.edu/DOCS/satya-ieeepvc-cloudlets-2009.p...

http://www.akamai.com/cloudlets

Disclaimer: I work on Google Cloud but not Kubernetes or GKE. Also, Satya was my PhD advisor.


Good AI tends to run on massive clusters. Barring some quantum leap in computing technology, I don't see how computation on local devices would meet our computing requirements.


Can you comment a little bit more on where you see the steady state economics of public cloud going? From where we are today, what factors (other than the dynamic provisioning you mentioned) will lead to better economics?

Thanks for commenting on this thread!


Yeah, I think that, sadly, there is going to be a bit of an inevitable equivalent to the Unix wars of the early '80s. The sooner we can reach a standard, the better it's going to be for the container community and for developers more generally.

One of the reasons that I pushed hard to get Kubernetes open sourced is the hope that we could get out in front of this and allow the developer community to rally around Kubernetes as an open standard, independent of any provider or corporate agenda.


Disclosure: co-founder of Kismatic

We've spent a lot of time working with the Kubernetes community. I can only speak to our experience, but Brendan, Craig, and the rest of the team at Google have 100% lived up to the commitment of treating the Kubernetes project as truly open and independent.

Our Kubernetes dashboard was recently merged into Kubernetes [1]. We brought our own vision of a web UI to the project, and we could have gotten bogged down defending technology decisions and philosophical nits. Instead, the response from Google, Red Hat, and others in the community was basically "Awesome! How soon can we get it in?"

All of the key players have the right approach, and that gives me confidence in the project's longevity.

[1] UI Demo video - https://www.youtube.com/watch?list=PL69nYSiGNLP2FBVvSLHpJE8_...


"allow the developer community to rally around Kubernetes as an open standard, independent of any provider or corporate agenda"

I look forward to Kubernetes becoming an independent project outside of Google then :)


I'm curious, @caniszczyk, why would it need to become independent outside of Google? It's already an Apache-licensed open-source project hosted on GitHub.


In essence, having diversity in ownership can help the project have a long life instead of being governed by one entity. There's a lot of risk that the main entity in charge will act in its own interest instead of the interest of the project (and its constituency) over the long term.

Independent ownership and proper governance will set up the project for long-term success, and as a small company, you should prefer it to be that way.


Disclosure: co-founder of Kismatic

I'm extremely pleased that Kubernetes has been open sourced by Google. It truly seems to me that the developer community is, and will remain, able to rally around Kubernetes as an open standard both today and in the future without fear of any outside agendas, as Brendan so eloquently stated. I for one applaud Google's level of transparency when it comes to the future of the project and the overall product vision.


I'm wondering if it was intentional or subconsciously accidental that you went with the "I, for one" construction... which is of course usually suffixed with "welcome our new [adjective] overlords".


"I, for one" was a common construction before, and remains common outside of, that text-meme.


What's the largest k8s deployment you guys have observed? You're not using k8s for anything major yet, right?

Thanks for building k8s! Even if it doesn't "win" in the end, it's been an extremely useful and reliable solution for my needs.


There needs to be a compelling way to run it on AWS for this to happen.


Please check out https://github.com/GoogleCloudPlatform/kubernetes/blob/maste... for turn-up instructions on AWS. It's as easy as:

export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash


Ugh, to think people responsible for running large infrastructure installations are still piping random webpages to their shell in 2015.

It's not just them - doing things this way makes it seem like this is in any way acceptable. It's not. Stop it.

No wonder it's so easy for TAO.


Just download the script and check it first before running it then.
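Download-and-inspect can also be made mechanical: record the script's SHA-256 digest once you've actually read it, and refuse to execute anything that doesn't match. A minimal sketch of that workflow -- the `run_verified` helper and the `kube-up.sh` filename are illustrative, not part of the Kubernetes tooling:

```shell
# run_verified: execute a downloaded script only if its SHA-256 digest
# matches the one you recorded after reviewing it.
run_verified() {
    script=$1      # path to the downloaded script
    expected=$2    # digest you recorded after reading the script
    actual=$(sha256sum "$script" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        bash "$script"
    else
        echo "checksum mismatch for $script -- refusing to run" >&2
        return 1
    fi
}

# Typical flow (network steps shown as comments for context):
#   wget -O kube-up.sh https://get.k8s.io   # download instead of piping
#   less kube-up.sh                         # actually read it
#   sha256sum kube-up.sh                    # pin the digest you reviewed
#   run_verified kube-up.sh <digest>
```

This doesn't protect you from a malicious script you failed to spot while reading, but it does stop the server from silently serving different bytes the second time around.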


It's not me; I don't run scripts written by people who think it's okay to exec shit from the network.

It's for the people who don't know any better and see this anti-pattern everywhere and thereby begin to think it's okay or accepted. It's not.


What is a better alternative?



You mean like the work that Meteor is doing and hiring for https://www.meteor.com/jobs/core-developer-cloud-systems-eng... ?

Disclaimer: I work on Google Cloud but not Kubernetes or GKE.


It's important to note that some of the items in your list complement Kubernetes rather than replace it.

Think of a cluster of VMs running CoreOS + Tectonic as an alternative to Google Container Engine.

Kismatic apparently calls itself "the Kubernetes Company."

Disclaimer: I work on Google Cloud but not Kubernetes or GKE.


I'm also very curious which direction things will move. I think I'm less convinced than you are that it'll be away from AWS and the like, though; they're innovating at least as fast as the open-source container-cluster tools (at least it seems that way to me).

I can imagine a future where it gets easier and more common to build an arbitrarily complex backend by just hooking together AWS services, using Lambda (or something that evolves from it) to write all your custom business logic without ever thinking about a server, VM, or container. I'm working on a greenfield app and very seriously considered this route, but we ended up deciding the uncertainty wasn't quite worth it versus doing it the way we know. It feels very close to the tipping point to me, though.

Either way, it's definitely an exciting time.


>just hooking together AWS services, using Lambda (or something that evolves from it) to write all your custom business logic without ever thinking about a server, VM, or container.

You're risking awakening the ghost of the Application Server.


There are quite a few different tools in this space. I made a list of them at http://datacenteroperatingsystem.io/; feel free to add your own via pull request.



