I was amazed to learn that the Linux kernel supports 1,400 distinct 32-bit ARM targets!
That's ... a scary amount, and it's easy to see that automated testing might be a good thing, there.
I think the combinatorial explosion happens at least in part because, even though there's a limited number of actual ARM cores, the peripherals which Linux needs to support are often vendor-defined and thus different for each system-on-a-chip (or at least different for each device series from a particular manufacturer). I didn't dig through the sources to verify this, but I've heard of the problem before: ARM doesn't define a standard way for the CPU core to learn about its peripherals at run-time.
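The workaround the kernel ecosystem settled on is the device tree: since the hardware can't be discovered at run-time, each board ships a static description of its peripherals that's compiled separately and handed to the kernel at boot. A made-up fragment (the "acme" names are purely illustrative) looks something like:

```dts
/* Hypothetical board: a vendor UART at a fixed address, which the
   CPU core has no way to discover on its own. */
/ {
    compatible = "acme,example-board";

    serial@10009000 {
        compatible = "acme,example-uart";
        reg = <0x10009000 0x1000>;   /* base address, size */
        interrupts = <0 15 4>;
    };
};
```

The kernel matches the `compatible` strings against its drivers at boot, which is why one kernel image can in principle cover many boards, but also why every new SoC needs its own description added.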
Keep in mind that's mainly because the ARM world is not standardized the way the PC world is. In practice the differences between most of these targets will be confined to the very early initialization code and things like the clock hierarchy; beyond that, most of the code is shared across many variants.
Imagine having to use a different kernel every time you upgrade your desktop; that's basically how the ARM world works so far.
Personally, I find it amazing that a project the size of Linux has relatively few automated tests run by its own maintainers, which has led to projects such as this one (and the LTP, etc.) coming about to actually ensure ongoing quality.
How many other major projects the size of Linux have as little upstream testing?
The bet I usually make about hardware vendors is that, since their core competency is hardware, in many cases software is an afterthought.
Not to pick on hardware companies. Nearly every type of company has a few select areas that they focus on, hire for, and are truly good at. With everything else, they do what it takes to get by. Not because they don't care but because it takes a concerted effort to develop your organization into one that has high competency in any particular area.
Maybe PC hardware vendors have some automated kernel testing. I think it's different with embedded. The SoC company I worked for a couple of years ago didn't have anything like that.
That's because SoC companies for the most part simply never update the kernel. Whatever kernel they were using when the chip taped out is the kernel they're still using when the products hit EOL. Wireless routers with Broadcom 802.11ac chips are all running a kernel branch from 2010.
This project could use a downloadable script which would automatically compare "some machine you have which runs Linux" to the hardware configurations currently available within the CI platform, to see if it would be a useful contribution.
That could be really interesting. Grabbing the system configuration from lshw should be relatively simple; what could be more interesting is the backend that would tell you whether your machine is an interesting one or is already covered.
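The client half really is simple. A rough Python sketch of the local side, walking the tree that `lshw -json` emits into a flat hardware summary (the coverage backend it would feed is the hypothetical part, so it's left out):

```python
import json
import subprocess


def hardware_summary(node):
    """Recursively collect (class, product) pairs from an lshw JSON tree."""
    entries = []
    if node.get("product"):
        entries.append((node.get("class", "unknown"), node["product"]))
    for child in node.get("children", []):
        entries.extend(hardware_summary(child))
    return entries


def local_hardware():
    """Summarize this machine's hardware. Needs lshw installed and
    usually root for complete output; newer lshw versions may wrap
    the output in a top-level JSON list rather than a single object."""
    out = subprocess.run(["lshw", "-json"],
                         capture_output=True, text=True, check=True)
    data = json.loads(out.stdout)
    root = data[0] if isinstance(data, list) else data
    return hardware_summary(root)
```

The summary could then be POSTed to whatever endpoint the CI platform exposes; deduplicating against already-covered boards server-side is where the real work would be.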
Whack in a build/regression test for the NVidia driver against the latest kernel sources. I spent so long today trying to get those compiled on Debian against the latest sources, and I feel dumber as a result. Which is probably fair enough; I'd also had a few beers.
It is all as it was foretold. The mighty Beowulf cluster reawakens, summoned by its true calling. Servants of the Dark File, bring forth your abandoned and dying devices, that they may be blessed with an IP in this new Mecca of logic and crystal.
Wow, looks like the definition got cleaned up a bit over the decades. Back when I was a kid, it was just every computing device the group could get their hands on, networked. Running simple delayed-echo servers, in our case.