> "Despite the importance of a shared standard, after six months of effort the Open Container Initiative (OCI) body has yet to decide whether it should or should not develop and standardize an image format. Today, the primary focus of the OCI community is creating standards for the container runtime environment, rather than the container image. Specs for container runtime features are also a worthy discussion, but we think there is a more urgent need – and a more open, industry-wide upside – for a standard container image specification."
I haven't been following the OCI at all, but could somebody shed some light as to why the runtime is the most important part to standardize? Also, any insight as to whether or not the container image format should be standardized?
I want to assume good faith, but OCI is starting to look like standard-washing. What good is runC if you have to use docker pull to download images first?
That isn't adequate. The reason LXC languished and Docker took off is because of immutable images, layers, Dockerfiles, and push/pull. If OCI has none of those then it is pointless.
I work with containers all day, every day, and the image format is really important.
I have code on my laptop and I need to package it for many environments:
- A development VM
- A CI service
- Pushed and stored at rest in a registry
- Run on multiple runtime services like AWS ECS and Lambda
It would be really nice if one image worked everywhere. Having to rebuild is inefficient at best, and sometimes disastrous if a different dependency or language version sneaks in.
The Docker image format and registry API are becoming ubiquitous. We have also been using tarballs for builds for decades.
Does it need to be standardized? Probably not.
But it would be nice if we can do a better job of cooperating this generation than we did with VM images.
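To make the "one image everywhere" point concrete: a Docker image really is just a tarball of layers plus metadata, so the same artifact can move between all the environments listed above without a rebuild. A rough sketch (the `myapp` name and registry URL are placeholders):

```shell
# Build once on the laptop (assumes a Dockerfile in the current directory;
# "myapp" is a placeholder image name).
docker build -t myapp:1.0 .

# The image is a tarball at rest: export it for a CI service
# or for archival in any blob store.
docker save myapp:1.0 | gzip > myapp-1.0.tar.gz

# Import the exact same bytes on any other host with a Docker daemon --
# no rebuild, so no different dependency or language version can sneak in.
gunzip -c myapp-1.0.tar.gz | docker load

# Or push to a registry so runtime services (e.g. AWS ECS) can pull it.
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0
```

Standardizing the image format is essentially standardizing the contents and metadata of that tarball, which is why the same bits can be reused across every stage of the pipeline.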
OCI focuses on the intermediary runnable format (think "ELF for containers") for a simple pragmatic reason: to accommodate pre-existing container packaging systems which were not compatible with each other. A good analogy would be designing a common VM that different programming languages can compile to, without forcing all developers to adopt the same syntax (a hopeless task).
Ironically, this was specifically to accommodate the ACI format pushed by CoreOS, which is completely incompatible with Docker images. We designed OCI and runc to allow both Docker and ACI to target a common format and low-level runtime. The reasoning was "let's deliver a solid building block which everybody can use instead of boiling the ocean with a giant over-reaching spec".
“I believe in the rkt model,” said Lennart Poettering, systemd lead developer. “Integrating container and service management, so that there’s a 1:1 mapping between containers and host services is an excellent idea. Resource management, introspection, life-cycle management of containers and services – all that tightly integrated with the OS; that’s how a container manager should be designed.”
I think we're seeing a shift in focus in the container world from container runtimes such as Docker, to container orchestration systems such as Kubernetes. At some point the container runtime becomes just an implementation detail.
Unless Docker finds a way of moving up the stack, they are going to have a hard time defending their current valuation. Their current efforts provide close to zero monetizable value.
Orchestration isn't monetizable either. Amazon is giving it away for free on ECS.
Fleet, Mesos, Swarm, and Kubernetes are effectively different implementation details too.
Operating these things as infrastructure services with high SLAs is a money making business.
I hope there are other ways to make this stuff a good business because CoreOS, Docker, Hashicorp, and Mesosphere et al are doing pretty darn excellent engineering.
Congratulations to CoreOS and the rkt team. I've been waiting for this release to really dig into rkt; I'm a big fan of how CoreOS has been approaching this project, and I'm eager for a container system that is not Docker.
Security is good, but it isn't a big problem for my current local container apps. However, I've found Docker clumsy in various areas. Does this improve on the design any?
Also, is there a PPA planned for Ubuntu, or plans to get it into Debian soon, now that it has reached 1.0?
I can only speak for myself, but for me it does, by forcing you to do stuff differently.
Eventually you realise that Dockerfiles are fine and dandy, but this mechanism isn't really needed and it can be an obstacle. You realise that a good package manager is your real friend. So now I use wonderful xbps-install from Void Linux to create a complete rootfs + actool to make an ACI file and that's it. A basic webserver can work as a repo for your xbps packages and your ACI images. No need to use Docker Hub or Quay, etc.
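For the curious, a minimal sketch of that workflow (package names, repository URL, and the manifest contents are illustrative; assumes xbps and the appc `actool` are installed):

```shell
# Build a complete rootfs with the Void Linux package manager.
# -r sets the target root dir, -S syncs repo indexes, -R adds a repository.
mkdir -p aci-layout/rootfs
xbps-install -y -S -R https://repo-default.voidlinux.org/current \
    -r aci-layout/rootfs base-system nginx

# An ACI layout is just the rootfs plus a manifest file.
# This is a bare-bones example manifest; adjust name/exec for your app.
cat > aci-layout/manifest <<'EOF'
{
    "acKind": "ImageManifest",
    "acVersion": "0.8.10",
    "name": "example.com/nginx",
    "labels": [{"name": "os", "value": "linux"},
               {"name": "arch", "value": "amd64"}],
    "app": {"exec": ["/usr/bin/nginx", "-g", "daemon off;"],
            "user": "0", "group": "0"}
}
EOF

# Pack the layout into an ACI file. Any static file server (or a plain
# webserver, as mentioned above) can then host it for rkt to fetch.
actool build --overwrite aci-layout nginx.aci
```

The nice property is that every step is an ordinary file operation: the rootfs is a directory, the ACI is a tarball, and the "registry" is whatever can serve files over HTTP.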
If I'm developing on OS X, would it still be possible to use rkt?
How are the tools for managing your rkt deployments? Since Hashicorp supports it, I'm starting to think that I would be better off using their tooling to abstract myself from the specific container implementation.
Yes, support exists today; it's just not complete. It's close, but it doesn't have full feature parity with Docker as the runtime. When it gets there, we will all be happy.
So I should have been more clear: you can specify rkt today, but many things won't work. A lot of things have improved for the upcoming 1.2 k8s release, but it's still not perfect. Hence what I meant by "released": something that can be viewed as a complete replacement for the Docker runtime.
I don't have a list handy, but here's one example: a simple thing that was missing was managing /etc/resolv.conf (see the rkt 1.0 release notes; this was added, so now Kubernetes can take advantage of it going forward).
Without it, you either have to jump through some hoops (not impossible) to manage it yourself, or your system just won't work (e.g. try using the GCE metadata server from within GCE without being set up for GCE's DNS server).
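As a sketch, with a sufficiently recent rkt the pod's DNS configuration can be passed explicitly so rkt writes /etc/resolv.conf itself (flag availability depends on your rkt version; the resolver address and search domain below are illustrative of a GCE-style setup):

```shell
# Let rkt generate the pod's /etc/resolv.conf instead of managing it by hand.
# 169.254.169.254 is the GCE metadata server, which also answers DNS;
# the search domain follows GCE's zone/project naming convention.
sudo rkt run \
    --dns=169.254.169.254 \
    --dns-search=c.my-project.internal \
    docker://nginx
```

Before this landed, an orchestrator like Kubernetes had no clean hook for injecting cluster DNS into rkt pods, which is exactly the gap described above.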
I have CoreOS building rkt stage0 for ARM64, but have just done minimal runtime testing so far. It is on my todo list to do more work with it to get full ARM64 support. Contributions welcome! https://github.com/glevand/coreos--coreos-overlay/tree/maste...
Good to see they are confident enough to cut a 1.0 release. We have been happily mixing the cgroup and KVM/Clear Containers runtimes for a couple of months now.
TPM support caught my eye. Setting aside the controversy surrounding EFI Secure Boot, the TPM is the under-appreciated "Secure Element" in business laptops and high-end servers.