In my observation, the problem is recompilation avoidance. If you miss the warnings the first time around (maybe you're just trying to get the thing to run), they won't show up again until something forces those files to recompile - which usually means a full rebuild. You can set up a build system that reprints them on every run, but it takes some care.
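A minimal illustration of the effect, assuming GNU make's built-in rules and gcc (foo.c and its unused variable are hypothetical):

    $ make foo.o CFLAGS=-Wall
    cc -Wall   -c -o foo.o foo.c
    foo.c:3:9: warning: unused variable 'x' [-Wunused-variable]
    $ make foo.o CFLAGS=-Wall
    make: 'foo.o' is up to date.

The second run never invokes the compiler, so the warning vanishes until foo.c itself changes.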
Why avoid recompiling? Why this reluctance to do a full rebuild?
I started programming in a world where you had to wait a day to get 300 lines of Fortran built. Now I routinely build the whole 44 GB of community ports of a major Linux distro in three days on a three-hundred-buck box at home.
There are 24 hours in a day, and your full rebuild will go just fine while you sleep, so long as you DON'T use -Werror.
I don't want to eat a 24h overnight full rebuild every time I fix a single typo. I do want to verify it still builds (i.e., that I haven't missed one of the uses of a renamed variable).
The shorter the feedback loop, the less mental context I have to rebuild when an error turns up, the faster I can fix it, and the more efficient I am.
I'm not so far along that I make much use of the red squiggly lines generated by my IDE to highlight syntax errors before I even hit save - I use too many languages that can't be adequately parsed that fast for them to be terribly accurate - but it's a sign of just how much people want to shorten that feedback loop.
Have the CI server do full rebuilds overnight? Sure. Although at times I still have to bug my coworkers into paying enough attention to the CI server to even notice it's gone red from their changes. Convincing them to read through the warning logs is a nonstarter.
That's a good idea for a makefile hack - somehow force files with warnings to rebuild every time. You'll see the warnings on every run, and get motivated to fix them.
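A minimal sketch of one way to do it, assuming GNU make, gcc, and a POSIX shell; the .warn files and the backdating trick are my own illustration, not an established idiom:

    # Pattern rule: compile as usual, but capture the compiler's stderr.
    # If it emitted anything, print it and backdate the object so it is
    # older than its source: the .o still exists (so linking works), but
    # make will recompile it - and reprint the warnings - on every run
    # until they're fixed. (Recipe lines must start with a tab.)
    %.o: %.c
    	@$(CC) $(CFLAGS) -c $< -o $@ 2> $*.warn || { cat $*.warn; exit 1; }
    	@if [ -s $*.warn ]; then \
    		cat $*.warn; \
    		touch -t 197001020000 $@; \
    	else \
    		rm -f $*.warn; \
    	fi

Every file that carries warnings then recompiles on every build - which is exactly the nag you want - and drops back out of the rebuild set the moment the warning is fixed.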