> Salsa's shared CI runners seem to be about 10 times slower than that, so it is completely infeasible to even run one full build in CI.
I maintain the CI/CD pipeline for my ~3 million LOC C++ project (plus a handful of non-C++ components for online services), and this hits hard. None of the existing services are designed for building large projects, and all of them require custom work to maintain.
It's really quite painful and a big barrier to adoption of modern best practices. I would love it if there were services we could utilise to speed up these builds (like a hosted networked ccache, preferably one that runs on Windows for us too).
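For anyone in the same boat: recent ccache releases can already talk to a shared cache server over HTTP or Redis, so a self-hosted "networked ccache" is possible even if nobody sells it as a service. A minimal sketch, assuming ccache 4.7+ and a hypothetical internal cache host:

```shell
# Point ccache at a shared HTTP cache (host name is a placeholder).
# remote_storage needs ccache >= 4.7; older releases call it secondary_storage.
mkdir -p "$HOME/.config/ccache"
cat > "$HOME/.config/ccache/ccache.conf" <<'EOF'
# Local cache stays on disk; misses fall through to the shared server.
max_size = 20G
remote_storage = http://ccache.internal.example:8080|read-only=false
EOF
cat "$HOME/.config/ccache/ccache.conf"
```

The Windows story is the weak point, as the comment says; this setup is straightforward on Linux/WSL runners but takes more work for MSVC toolchains.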
An Incredibuild build farm is certainly a solution, but definitely not an out-of-the-box one, and it requires some pretty serious cash. Off the top of my head, an agent license is $1500/year, plus $300 per seat (including each of your build requesters). If you're on Windows you also have to pay the Windows Server per-core licensing.
It doesn't "burst" like cloud capacity (so you pay annual licensing on your peak usage), and your build system needs to support it. It's almost certainly going to be cheaper (and likely easier to manage) to use the fattest single server you can find instead of Incredibuild.
That said, if you are in an office with 10/100Gb cabling and money is no object, giving developers an 8-core i9, backed by Incredibuild and local linking, is an incredible workflow to provide.
Is cross-compiling from Debian to Windows via mingw32 under WSL an option? That way you could keep Win32 desktops and (at least) have a build process that takes advantage of a shared ccache directory or HTTP service.
(not an out-of-the-box solution, but perhaps useful in addition to what you already have, for checking basic build integrity, for example)
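The suggested setup can be sketched as a small wrapper script. A rough outline, assuming Debian's `g++-mingw-w64-x86-64` and `ccache` packages inside WSL (the shared cache path is hypothetical):

```shell
# Generate a build wrapper: ccache fronts the mingw-w64 cross compiler,
# so identical translation units become cache hits across CI runs.
cat > cross-build.sh <<'EOF'
#!/bin/sh
set -e
# Shared cache directory so every checkout reuses the same object cache.
export CCACHE_DIR=/mnt/shared/ccache
# mingw-w64 cross compiler targeting 64-bit Windows from Debian/WSL.
ccache x86_64-w64-mingw32-g++ -O2 -o app.exe "$@"
EOF
chmod +x cross-build.sh
```

This only checks that the code compiles and links with a GCC-family toolchain; MSVC-specific breakage would still slip through, which is why it fits as an integrity check rather than a release build.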
I've not looked into that approach yet. Our next optimisation step is to change our compile checks to check every changed file individually, rather than doing a full build on every change, and to catch the cascaded changes on a regular schedule. My best guess is this will reduce our CI costs by around 25%, at the cost of writing it ourselves.
I can't speak for anyone else, but I help maintain a large-ish codebase that releases on all the major desktops. Compile times aren't great but they only account for around 15% of the total CI time. Cross compiling wouldn't really help us, since we need to run end to end and integration tests on actual machines.
>The Linux package also takes a lot of resources to build; around 80 minutes on the fastest PC I have at home
I was under the impression some of the bigger kernel teams have access to large amounts of cloud power. Is that only at kernel upstream level (rather than the distro's kernel team)?