This isn't doing devops; this is release management, which is something traditional sysadmins do all the time. Making sure that the image file and the repository information match up is pretty basic, and so is making sure a release is deployed correctly. For a project like this, I'm surprised they don't have tools like Nagios constantly checking that downloads are working and that checksums and the like all match up, preferably on the servers before whatever load-balancing system you have points at them. Deployment can, and should, be as atomic as possible, regardless of who is pushing the go button.
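The pre-deployment check described above can be sketched in a few lines. This is a minimal, hypothetical example (the file path and published checksum are assumptions, not anything from a real project) of verifying a release artifact's SHA-256 before a server is put behind the load balancer:

```python
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large release images fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_release(path, published_checksum):
    """Return True only if the on-disk artifact matches the published checksum."""
    return sha256_of(path) == published_checksum.strip().lower()
```

A monitoring system like Nagios could run a wrapper around `verify_release` on each server and refuse to mark the host healthy until it passes.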
Docker's messaging is specifically anti-devops. They market containers as a "separation of concerns" between dev and ops, where dev is responsible for everything that's in the container, and ops is responsible for deploying the black box.
It's true that Docker helps separate dev concerns from ops concerns. But it doesn't prevent dev and ops teams from collaborating, or the same team from wearing both hats - the most common aspects of a "devops" methodology in my experience.
In fact, separation of concerns makes collaboration more efficient, because everybody knows who is responsible for what. So you could argue that Docker actually facilitates "doing devops", if that's the methodology you choose.
You make a good point that "doing devops" is a methodology that you can choose to use or not. There is no moral hazard in not using it.
That said, having ops excluded from the architecture decisions that go into building containers is absolutely antithetical to "doing devops."
As for your assertion that people know who is responsible for what in your model: I'd argue that devs usually aren't thinking ahead to the fact that they'll be the ones on call to fix whatever breaks in production in the middle of the night, because they're the only ones who know what's inside the container. Ops can't be responsible for fixing whatever is inside an artifact it had no role in creating.
We're going to have the inverse of the little-girl-smiling-at-the-house-fire meme: "Kubernetes Is Running Fine, Dev Problem Now."
> That said, having ops excluded from the architecture decisions that go into building containers is absolutely antithetical to "doing devops".
My point precisely. Just because there is clean separation of concerns doesn't mean anyone needs to be excluded.
The methodology I've seen work best is one where people are not divided by skillset (dev/ops) but instead by area of responsibility (app/infrastructure). Then you embed people from different functional areas into each app team: devs of course, but also a security engineer, an operations specialist, and various domain experts. From IT's point of view, you're influencing the design of the app before development starts, to make sure it follows best practices.
A second important point is that, just because you're running a container built by someone else doesn't mean you can't enforce good operations practices. For example, you can mandate that all the containers on your production swarm expose health information at a specific url prefix and pass CVE scanning - or they will not be deployed.
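The health-endpoint mandate above can be enforced with a simple admission probe. This is a hypothetical sketch: the `/health` prefix, the JSON `{"status": "ok"}` contract, and the pass/fail gate are all assumptions standing in for whatever policy your platform actually mandates:

```python
import json
from urllib import error, request

HEALTH_PREFIX = "/health"  # assumed: the URL prefix your platform mandates


def is_healthy(body, status):
    """Gate logic: admit a container only on HTTP 200 with a JSON status of 'ok'."""
    if status != 200:
        return False
    try:
        return json.loads(body).get("status") == "ok"
    except ValueError:
        return False


def probe(base_url, timeout=2.0):
    """Hit a container's mandated health endpoint before routing traffic to it."""
    try:
        with request.urlopen(base_url.rstrip("/") + HEALTH_PREFIX,
                             timeout=timeout) as resp:
            return is_healthy(resp.read().decode(), resp.status)
    except (error.URLError, OSError):
        return False  # unreachable containers fail the gate, same as unhealthy ones
```

A deployment pipeline could run `probe` (alongside a CVE scan step) against each freshly started container and reject the deploy if either check fails, regardless of who built the image.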
DevOps isn't (just) Dev doing Ops and Ops doing Dev, though. It's about understanding each team's domain and facilitating communication. Nothing about Docker limits that, per se.
e.g:
> I argue that devs are usually not thinking ahead that they're going to be the ones on-call to fix whatever breaks in production in the middle of the night
That's not a Docker issue. That's just a DevOps culture that is incomplete.
Maybe you are being downvoted because your comment is too short, but it's something that crossed my mind when reading the end of the parent comment.
The best thing is, this might end up being the best proof of why you need to embrace devops methodologies and maybe take advantage of tools like Docker while doing so :)