I remember every other book on Windows programming saying "Process creation/destruction is expensive, use thread pools (or at least process pools) instead, that's the way to go on Windows". Perhaps this mindset is ingrained for the Windows QA team too - they don't have [enough] test cases for such scenarios.
Seems like a feedback loop. Because it's expensive, most apps avoid doing it, so there's less need to check for performance regressions. And if a regression does slip in, it further increases the incentive for developers to avoid process creation, so in the future even fewer apps will do it, making it an even more unusual use case, and so even less need to test for it...
This is probably also why Cygwin, and WSL in general, are a lot slower when running more complex shell scripts, which typically spawn tons of processes.
I wrote a pretty simple shell script to test WSL process spawn speed: it loops over a simple echo piped to a grep, adding 1 to a counter until it reaches 1000.
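Something along these lines (a minimal sketch; the exact strings echoed and grepped are my own placeholders, since I didn't keep the original script):

```shell
#!/bin/sh
# Spawn-speed benchmark: each iteration forks an echo and a grep,
# so the total time is dominated by process creation overhead.
i=0
while [ "$i" -lt 1000 ]; do
    echo "hello" | grep -q "hello"   # two short-lived processes per loop
    i=$((i + 1))
done
```

Run it under `time` to get the real/user/sys figures below.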
On my Windows machine, in a Linux VM, I consistently get times like this:
real 0m1.381s
user 0m0.073s
sys 0m1.472s
On the same machine in WSL, I get results like this consistently:
real 0m14.878s
user 0m0.469s
sys 0m12.109s
That is about 10 times slower... I don't have Cygwin installed anymore, but when I tested it initially while trying out WSL, it was even slower...