
I think Google's interests are aligned with developer interests here. The main benefit to GCP that k8s success brings is that migration between clouds becomes easier (e.g. moving from EKS to GKE would presumably be easier than moving from Elastic Beanstalk to App Engine). Less lock-in benefits everyone except the current market leader.


K8s only gives you theoretical leverage in negotiations with "cloud providers". As long as you can't reasonably run it on your own hardware (because it's simply way too complex and you'll have trouble finding experts, even at obscene salaries), you won't realistically be able to migrate to another "cloud provider" either.

It's an old trick from the playbook of ERP software vendors: make your software absurdly configurable so that all the meat is in the configuration.

Worse (and I know this isn't popular and will hurt): k8s, for all intents and purposes, just runs Docker images, which are pointless wrappers that add nothing but points of failure and vulnerabilities.


A container is just a Linux process with special restrictions. It's not just a pointless wrapper.
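To make that concrete, here is a minimal Go sketch (my illustration, not anything from Docker's codebase; it assumes Linux and root) that starts a shell as a plain process with a few extra clone(2) flags:

    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // An ordinary interactive shell, wired to our own stdio.
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        // New UTS (hostname), PID, and mount namespaces: the same kernel
        // primitives Docker builds on, with no image or daemon involved.
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

Inside that shell, echo $$ prints 1 (it is PID 1 of its own namespace), yet on the host it shows up in ps like any other process.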


I don't think the term "container" is really well-defined.

The "container" that docker and others implement is actually a collection of different kernel namespacing features. I assume the one you are referring to are cgroups. I think a better description would be that each process in a linux system is part of (many) cgroup hierarchies. And you can have more than one process in each of the groups.

I think what the parent meant is that you can get all of these really nice isolation features for your service without using "Docker". It is trivial to enable them using Linux command-line utilities, or with something like systemd, which can also do it for you.
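As a rough sketch of how little is involved (assuming a cgroup v2 mount at /sys/fs/cgroup with the memory controller enabled, root privileges, and a group name "demo" I made up for the example), capping a process's memory is a couple of file writes, with no container runtime required:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Create a fresh cgroup v2 group.
        cg := "/sys/fs/cgroup/demo"
        if err := os.MkdirAll(cg, 0o755); err != nil {
            panic(err)
        }
        // Cap memory at 64 MiB for everything in this group.
        if err := os.WriteFile(filepath.Join(cg, "memory.max"), []byte("67108864"), 0o644); err != nil {
            panic(err)
        }
        // Move ourselves (and any future children) into the group.
        pid := []byte(fmt.Sprint(os.Getpid()))
        if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), pid, 0o644); err != nil {
            panic(err)
        }
        fmt.Println("now running under a 64 MiB memory.max")
    }

systemd will happily do the equivalent for you, e.g. via properties like MemoryMax= on a unit or a systemd-run invocation.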


Docker and cgroups/namespaces really only isolate you from mixed-shared-library situations, by essentially pinning your libs inside the Docker image. That is, by ignoring the problem of stale/insecure shared libs, even though avoiding exactly that is the entire point of using shared libs in the first place.

Docker doesn't isolate you from resource exhaustion (out of memory or file handles, infinite loops) or from incompatibilities between the host kernel and Docker version bumps (so your shiny image isn't guaranteed to work on newer kernels and Docker versions), and it makes it impossible to use host user identities and permissions. Thus projects tend to avoid plain regular file access, using databases, block devices, and whatnot as workarounds.

IMHO Docker is an anti-pattern to "solve" an incidental problem of your own making.


I meant all the namespacing features.

My main argument was against a container being something "more" than just a standard Linux process, in the sense that "more" would be extra software that wouldn't otherwise be run.


Of course, Docker is just the popular brand name for those isolation features; "image", "container", and all the related terminology exist independently of Docker.


Docker is also a cross-platform command-line tool that helps manage said images and automates a lot of work that would otherwise need to be done by hand. For me it is kind of like Dropbox: yes, you can piece it together yourself, but using it is very convenient and you can spend your time elsewhere.


In my opinion it's on its way out. CNCF and k8s have already basically replaced it with the CRI initiative and containerd [0][1]. runc, rkt, and a multitude of other tools run containers; img, orca-build, and others can build them.

[0] I realize containerd is from Docker as well, but I feel it supports my point that Docker, Inc. themselves are shedding baggage to stay relevant.

[1] https://kubernetes.io/blog/2018/05/24/kubernetes-containerd-...


I'll have to look into it, thanks. I'd like to not end up needing different tools to run the same container on Windows/Mac and Linux. Currently my workflow is easy: I create my Dockerfiles on a Mac, do all of the building and testing there, and then just tell devs on Windows to pull a repo and run docker-compose up --build. I hope this won't grow into tool hunting in the future.


You'll probably be able to continue using the same tools across platforms to run containers.

I personally feel that Docker tries to do too much; it's almost the systemd of the container world. I believe having alternative container runtimes and build systems decoupled from Docker (both the running program and the company) will be best in the long run.

With or without Docker, your workflow will remain the same. It's the image itself (the OCI image spec) that makes that cross-platform magic work. I myself do my development on a Windows machine, ship a tar.gz off to Google's Cloud Builder to build the image and publish it to a registry, and the result then gets tested and debugged on a Linux host.


Stateless services (and cloud-native stateful ones) become portable; the new lock-ins are the value-added data stores (Spanner, Aurora, Cosmos).


Well, I agree that the interests are aligned insofar as Google is trying to dethrone AWS, which currently has a near-monopoly on cloud. But I'd still rather live in a world where the compute fabric of the internet is not centralized in the hands of a few corporations. I'd much rather get paid a salary from money that would otherwise have gone to a monthly *-Cloud payment. And I believe that in a lot of cases the non-Cloud version of the product would actually be a better, cheaper, more reliable solution. But that will become a harder and harder sell the more successful the cloud providers' marketing is.



