
I have a few contentions with the study.

First, if you look at their own analysis, the number drops from 30% to 23% when limited to only the latest tagged images in the official repository. I'd expect to see a higher rate of vulnerabilities in previous versions...that's why you rebuild. Find me a Linux admin who would accept that their OS is vulnerable when you're citing old, unpatched versions.

Second, virtually _all_ of them seem to be package vulnerabilities. These would, ostensibly, reach parity with whatever the target distro is by simply updating packages on a rebuild.

Finally, I think one would be hard pressed to lay any vulnerabilities traced to updated, current packages at the feet of docker. That fault would seem to lie squarely with distro package maintainers.

So, two simple rules would seem to bring the security of container deployment in line with standard bare metal deployment (by the metrics applied in this research):

1. Don't use old shit

2. Rebuild your selected docker container to ensure packages are up to date (a sketch follows below). Why? See rule #1.
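A minimal sketch of rule #2, assuming a Debian-based image (image names here are placeholders):

    # Dockerfile: start from the upstream image, then pull in current packages
    FROM debian:latest
    RUN apt-get update && apt-get -y upgrade

Then rebuild without the layer cache, so stale package layers aren't reused:

    docker build --pull --no-cache -t myapp:rebuilt .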




I thought the point of using docker containers was that they were pre-packaged apps. Not so you had to continually rebuild the container with your own updated packages. Doesn't having to rebuild the container to fix security vulns defeat one of the major reasons to have versioned docker images released for use? You could very well end up breaking dependencies.


You're sort of combining two things: 1) Docker makes it super simple for anyone to package software and run it, and 2) Dockerhub makes it simple to share the software you've packaged with other people.

Personally, my biggest gripe with Dockerhub is that a Dockerfile isn't required in order to upload to the hub, and that the hub doesn't show the Dockerfile that produced each version. The fact that people can create fundamentally unreproducible binaries is nasty (there's also the issue of not specifying versions in the apt/yum steps used in the Dockerfiles, but that's just a general problem with the way package management software is designed).
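To illustrate the version-pinning point, an apt step can pin exact versions (package names and version strings below are invented for the example):

    # pinned versions make the build reproducible, at the cost of
    # having to bump them manually when security updates land
    RUN apt-get update && apt-get install -y \
        nginx=1.6.2-5 \
        curl=7.38.0-4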

None of that's a problem with Docker itself though.


Ahh got it. I have only really used lxc, so not super familiar with docker other than it being a container tech. Thanks for the explanation :D


I would say the primary benefit of docker is that you can build once, run the same everywhere.

E.g. you have a consistent, reproducible application environment which _should_ be vetted through a gauntlet of continuous integration, testing, etc., and which, once created, will run identically on any host running docker.

If you have a "trusted source" to do all the grunt work for you, fine. But docker's promise isn't guaranteeing a trusted source. It's providing a consistent, invariant application target from developer laptop -> production host.
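Roughly, the flow looks like this (registry and image names are placeholders):

    # on the laptop / CI server: build and publish one artifact
    docker build -t registry.example.com/myapp:1.0 .
    docker push registry.example.com/myapp:1.0

    # on any docker host: run that exact artifact
    docker pull registry.example.com/myapp:1.0
    docker run -d registry.example.com/myapp:1.0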


Just like Java. We've seen how this one ends :)


Well, sort of. Java was never really like Docker, and in fact always struggled architecturally to provide a good container abstraction for applications. The "servlet container" idea was (and is) a failure. Java never had the equivalent of the Docker daemon, and it only (relatively) recently got something like Dockerhub via Maven--and Maven repos aren't integrated with the JVM or the (non-existent) Java daemon.


Great points!

Just to clarify, our article was not meant to blame any particular party, but rather to provide awareness of the security vulnerabilities that exist even in the latest official images on Docker Hub.

As you point out, this study specifically focused on the OS package vulnerabilities -- including application-level packages and/or other types of vulnerabilities would increase the percentage of vulnerable images.

As we also mention in the article, rebuilding is a great way to solve some of the problems. However, rebuilding comes at a cost -- the overhead of redeploying the container infrastructure, managing audit trails, potential instability introduced to developer applications, etc. These need to be balanced against the benefits of rebuilding constantly.


My primary contention with your post is that docker doesn't provide a package-manager-like way of finding out whether or not you're running older images. Everyone has their own homegrown way of doing it.

1. Don't use old shit

2. Docker should provide a way to tell you you're not running the latest tagged image, so you stop running old shit (a homegrown version is sketched below)

3. Don't use base images whose maintainers can't be bothered to rebuild when security updates hit
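As a sketch, one such homegrown check just diffs image IDs around a pull (image name is a placeholder):

    # pull and compare image IDs to detect that a newer 'latest' exists
    before=$(docker images -q myimage:latest)
    docker pull myimage:latest
    after=$(docker images -q myimage:latest)
    if [ "$before" != "$after" ]; then
        echo "myimage:latest has been updated -- time to redeploy"
    fi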


Well, all docker images are content-hashed and can be version tagged. If you do a pull and run the 'latest' tag, you'll always get the most recent build (think of it as the HEAD of the image's history).

This is assuming you want to trust some 3rd party with the maintenance and security of your production environment.

Docker containers are, usually, just operating systems running a single logical application service. I don't think Docker promises a free Sys Admin. ;)
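The hashes are at least inspectable if you want to see what you're running (exact fields vary by docker version):

    # local image ID (content hash)
    docker inspect --format '{{.Id}}' debian:latest

    # registry digest(s) the image is known by, if pulled from a registry
    docker inspect --format '{{.RepoDigests}}' debian:latest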


My complaint is primarily that there's no mechanism to let you know "hey, there's an update to this" in the same way as apt, yum, and other systems do.

It's not about trusting a 3rd party with the maintenance and security of your production environment as much as it is "Docker should provide a way to let the people handling the maintenance of your production environment know shit may be happening". Rebuilding from the 'latest' tag is great -- if you know you have to rebuild, and that there's an update available.


Does AWS do this with their AMIs? Everything you listed can be applied in virtually the same way to VM images, and there are community-based AMIs with all sorts of vulnerabilities and outdated code; people just know not to use them, or they build their own.


Well, no. Everything I listed can be applied in virtually the same way to openstack images or AMIs or whatever... except that the intended use case of those includes regularly updating packages, which docker does not.


So if you rebuild your docker containers every time you deploy, and you deploy daily, security updates should happen on a daily basis. Correct?


Correct!

And if you have a continuous integration environment building and validating artifacts on every developer commit with a regular, vetted release cycle that catches any regression bugs...

Well, now you're on the right track.
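A minimal sketch of that daily cadence, assuming a hypothetical myapp image and leaving out the testing and rollback a real pipeline would add:

    #!/bin/sh
    # nightly-rebuild.sh -- run from cron; names are placeholders
    set -e
    cd /srv/myapp
    # --pull and --no-cache force a fresh base image and fresh package layers
    docker build --pull --no-cache -t myapp:latest .
    docker stop myapp >/dev/null 2>&1 || true
    docker rm myapp >/dev/null 2>&1 || true
    docker run -d --name myapp myapp:latest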



