A less precise borrow checker can usually still be satisfied by adding workarounds, at the cost of performance, of course. Adding redundant reference counting, or extending the lifetime of data and the duration of critical sections, makes the program slower. In this sense, improving the borrow checker is analogous to adding new analysis and optimization passes to a compiler.
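To make the "redundant reference counting" workaround concrete, here is a minimal sketch (my own illustration, not from any particular codebase): when the checker cannot prove that two long-lived handles to the same data are safe, a common fallback is shared ownership with runtime checks, trading compile-time proof for reference-count and borrow-flag overhead.

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Shared ownership via Rc, interior mutability via RefCell:
    // both are safe, but both cost something at runtime.
    let data = Rc::new(RefCell::new(vec![1, 2, 3]));

    // A second, runtime-counted handle to the same vector.
    let alias = Rc::clone(&data);

    // Mutation is checked dynamically instead of statically.
    alias.borrow_mut().push(4);

    assert_eq!(*data.borrow(), vec![1, 2, 3, 4]);
    println!("{:?}", data.borrow());
}
```

A sharper borrow checker that could prove the two handles never conflict would let this be written with plain references and no counting at all.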
OK, I suppose I have to dig deeper into Rust to determine whether I really disagree with that, or maybe the claim is too vague. The question is: who applies your workarounds? If it is always the compiler, then I agree, but if the programmer has to do any work, then your analogy fails.
Yes, the programmer has to apply the safe workarounds. They always work, but improvements to the borrow checker's heuristics can make them redundant, and removing those workarounds improves performance. A certain minimum set of heuristics is by now part of the documented language spec, and that set is going to grow with time. If a significant fraction of the ecosystem comes to depend on heuristics that only one implementation accepts, then Gccrs will have to catch up.
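A well-known instance of heuristics making a workaround redundant is the move to non-lexical lifetimes (NLL) in Rust 2018. The sketch below compiles today, but the older lexical checker rejected the push because it considered the shared borrow live until the end of the scope; the usual workarounds were to clone the element or to wrap the borrow in an inner scope.

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // Shared borrow of v. Under the pre-NLL lexical checker, this
    // borrow was treated as live until the end of the enclosing
    // scope, so the mutation below was an error.
    let last = v.last();
    if let Some(&x) = last {
        println!("last = {}", x);
    }

    // With NLL, the borrow ends at its last use above, so the
    // mutation is accepted without cloning or an extra scope.
    v.push(4);
    assert_eq!(v, vec![1, 2, 3, 4]);
}
```

Once the stricter compiler version is no longer supported, programmers can delete the clone or the artificial scope, which is exactly the kind of performance-relevant cleanup described above.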
In the PR dug out by a sibling commenter, the improvement to the compiler was indeed to apply the workaround that the programmer previously had to write by hand. I believe the churn of new heuristics is going to slow down, as presumably most of the low-hanging fruit has been picked by now.