OK, so... what? The notion here is to take the compose file, which is basically a declaration of "this is what I want the state of Docker to look like", and turn it into a generic application specification that multiple orchestration engines can contribute to and consume... And... how would this work? Are we going to end up with a smorgasbord of definitions in compose, where each orchestration system supports only some subset of them? Because that would be completely useless.
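For reference, here's the kind of definition we're talking about: a bare-bones compose file (the service names and images are purely illustrative, but the `deploy` section is exactly the sort of thing only some engines have honored):

```yaml
# A bare-bones compose file: a declaration of desired state.
# Service names and images here are made up for illustration.
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 5      # historically swarm-only; classic docker-compose ignored it
  db:
    image: postgres:15
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```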
So unless there is a genuine commitment that every orchestration engine will, unreservedly and guaranteed, take any definition expressible in docker-compose and deploy it, there is no value to be gained here. None. Without that guarantee, for every system you attempt to deploy a compose file to, you must DEEP DIVE into its internals to understand its limitations, costs, and tradeoffs.
That gigantic timesink of a process is something I have always had to do, and I always discover all kinds of things that would have absolutely blocked a practical deployment. This is the real thing that's bothering me, not whether k8s and Azure and Docker can all sorta read some subset of a configuration. I don't care about that at all. It's not useful.
I'm not saying this will happen, but one way I'd like to see it play out is something like the following.
I have a bunch of apps. Some of them talk to each other, some do not. In general, if I deploy 5 copies of something, you should assume I mean "never on the same box" by default. (I'm looking at you, Kubernetes. WITAF.)
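For the record, here's roughly what "never on the same box" costs you in Kubernetes today, spelled out per workload (the app name, labels, and image are made up; the topology key is real):

```yaml
# Hard anti-affinity: the scheduler refuses to put two replicas on one node.
# "myapp" is an illustrative name; kubernetes.io/hostname is the real topology key.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest   # placeholder image
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: myapp
              topologyKey: kubernetes.io/hostname
```

All of that boilerplate for what should be the default.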
You could, as a vendor, do a lot of sophisticated work on the spanning tree to reduce network traffic in favor of loopback traffic. You might try to spread out unshared volumes across as many motherboards as possible. You could differentiate on how to migrate most efficiently and/or stably from the previous config to the new one, or back. You could do a bunch of knapsack work, including (and this is a pet peeve of mine) pre-scaling for cyclical traffic patterns.
If you've ever looked at the Nest thermostat, one of its defining features is that it figures out the thermal lag in your heating and cooling system, so it can have the house at the correct temperature at a particular time rather than waiting until the appointed time to do anything. If a hockey puck on my wall can do shit like this, why doesn't every cloud provider do it on day 1?
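The closest you can get today, as far as I know, is bolting this on yourself, e.g. with KEDA's cron scaler, where you do the thermal-lag math by hand and shift the start of the window earlier than the peak (the names, schedule, and replica counts below are made up):

```yaml
# Pre-scaling by hand: say traffic peaks at 9:00 on weekdays and scale-up
# takes roughly 30 minutes, so the window starts at 8:30. All values illustrative.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: myapp-prescale
spec:
  scaleTargetRef:
    name: myapp
  triggers:
    - type: cron
      metadata:
        timezone: America/New_York
        start: 30 8 * * 1-5
        end: 0 18 * * 1-5
        desiredReplicas: "20"
```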
Tack onto this some capacity-planning infographics and a system for scheduling bulk, low-priority operations around resource contention, and I could probably get you a meeting with my last boss, my current boss, and likely at least the one after that.
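Kubernetes does at least have a primitive for the low-priority half of that, PriorityClass, though wiring it into actual contention-aware scheduling is still on you (the class name and value here are illustrative):

```yaml
# Pods in this class get preempted when higher-priority work needs capacity.
# A negative value puts bulk work below the default pod priority of 0.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: bulk-batch
value: -100
globalDefault: false
description: "Bulk, low-priority operations that should yield under contention."
```

But that's a knob, not the planner I'm describing.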