Yeah, version control of Jenkins itself has always scared me. There seems to be a pattern that we go through.
(in the beginning, there was light...)
* Create a small, tight, single-purpose Jenkins job
* Add a small tweak to it
(repeat adding tweaks)
(realize the Jenkins job now contains MANY different configuration options and the job itself is now a shell script in its own right)
* Sweep the "job" into a shell script. Check in said shell script
* Back up the Jenkins config, and hope no one asks why something's happened.
I now have a plugin that automatically checks the Jenkins config into source control, but again it doesn't solve the problem of matching up a particular Jenkins artifact with exactly what built it, and why.
At my work we're running all Jenkins jobs in Docker containers using some simple scripting [1].
Works really great. Jobs can run on any slave, there are no snowflakes, and the full CI config is versioned in the repo along with the code. The Jenkins job only points to a single script and that's it.
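As a rough sketch of that split (the script path, image name, and build commands here are made up, not our actual setup): the Jenkins job's only build step runs a script checked into the repo, and the script does the Docker wrapping itself.

    #!/usr/bin/env bash
    # ci/build.sh -- lives in the repo next to the code it builds.
    # The Jenkins job's single build step is just: ./ci/build.sh
    set -euo pipefail

    # Run the build in a throwaway container so any slave can pick up
    # the job and no slave accumulates snowflake state.
    docker run --rm \
        -v "$(pwd)":/workspace \
        -w /workspace \
        example/build-image:latest \
        make test package

With that setup, changes to the build go through the same commits and review as the code, and the Jenkins job itself never needs to change.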