I don't know if I'd say they're too simple. I think they're too simple in some ways and too complex in others. For me, a ton of unnecessary complexity comes from isolating per build step rather than per pipeline, especially when you're trying to build containers.
Compare a GitLab CI build with Gradle. In Gradle, you declare inputs and outputs for each task (step) and they chain together seamlessly. You can write a task with a very specific role, and you don't find yourself fighting the build system to wire up the inputs/outputs you need. For containers, an image is the output of `docker build` and the input to `docker tag`, etc. Replicating this should be the absolute minimum for a CI system to be considered usable IMO.
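To make the Gradle side concrete, here's roughly what that chaining looks like from the CLI (hypothetical Java project with the standard `jar` task; output abridged). Gradle reruns a task only when its declared inputs or outputs changed, and upstream tasks run automatically:

    $ ./gradlew --console=plain jar
    > Task :compileJava
    > Task :jar

    $ ./gradlew --console=plain jar   # nothing changed, everything is skipped
    > Task :compileJava UP-TO-DATE
    > Task :jar UP-TO-DATE

That's the property I want from CI: each step declares what it consumes and produces, and the tool wires them together.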
If you want a more concrete example, look at building a Docker container on your local machine vs. on a CI system. Locally, using the Docker daemon, you'll do something like this:
- docker build (creates image as output)
- docker tag (uses image as input)
- docker push (uses image/tag as input)
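Concretely, with made-up image and registry names:

    # build: the image is this step's output
    docker build -t myapp:dev .

    # tag: consumes that image, produces a new ref
    docker tag myapp:dev registry.example.com/myapp:1.2.3

    # push: consumes the tagged ref
    docker push registry.example.com/myapp:1.2.3

Each command picks up the previous one's output through the shared local daemon, with no registry round-trips in between.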
What do you get when you try to put that into modern CI?
- build-tag-push
Everything gets dumped into a single step because the build systems are (IMO) designed wrong, at least for anyone who wants to build containers. Builds should be isolated, or at least have the option to be isolated, per pipeline, not per build step.
For building containers it's much easier, at least for me, to work with the concept of having a dedicated Docker daemon for an entire pipeline. Drone is flexible enough to mock something like that up. I did it a while back [1] and really, really liked it compared to anything else I've seen.
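Sketched with plain Docker commands rather than the actual Drone config from [1] (image names, host, and port are illustrative, and dind's TLS is disabled for brevity):

    # start one throwaway daemon for the whole pipeline
    docker run -d --privileged --name pipeline-dind \
      -e DOCKER_TLS_CERTDIR= -p 2375:2375 docker:dind

    # every step talks to that same daemon, so an image built
    # in one step is directly available as input to the next
    export DOCKER_HOST=tcp://localhost:2375
    docker build -t myapp:ci .
    docker tag myapp:ci registry.example.com/myapp:1.2.3
    docker push registry.example.com/myapp:1.2.3

When the pipeline finishes, the daemon and all of its state get thrown away with it.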
The biggest appeal was that it allows much better local iteration. I had the option of:
- Use `docker build` like normal for quick iteration when updating a Dockerfile. This takes advantage of all local caching and is very simple to get started with.
- Use `drone exec --env .drone-local.env ...` to run the whole Drone pipeline, but bound (actually proxied) to the local Docker daemon; see the sketch after this list. This also takes advantage of local Docker caches and is very quick while being a good approximation of the build server.
- Use `drone exec` to run the whole Drone pipeline locally, but using docker-in-docker. This is slower and has no caching, but is virtually identical to the build that will run on the CI runner.
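The env file is what does the redirection in that second option. I won't pretend this is the exact file, but the gist is the standard `DOCKER_HOST` variable pointing at the local daemon (the setup in [1] actually proxies the socket, which this glosses over):

    # .drone-local.env -- illustrative, not the real file
    # point the pipeline's docker CLI at the local daemon
    # instead of a fresh docker-in-docker instance
    DOCKER_HOST=tcp://localhost:2375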
That's not an officially supported method of building containers, so don't use it, but I like it more than trying to jam build-tag-push into a single step. Plus I don't have to push a bunch of broken Dockerfile changes to the CI runner as I'm developing/debugging.
I guess the biggest thing that shocks me with modern CI is people's willingness to push/pull images to/from registries during the build process. You can literally wait 5 minutes for a build that would take 15 seconds locally. It's crazy.
1. https://discourse.drone.io/t/use-buildx-for-native-docker-bu...