The variable-size int, unfortunately, made a lot of sense in the early days of C. On processors like the x86 and 68000, it made sense for int to be 16-bit, so you don't pay for the bits you don't need. On newer systems, it makes sense for int to be 32-bit, so you don't pay to throw away bits.
The variable-sized word made more sense when writing code to work across machines with 16-bit and 18-bit words, or 32-bit and 36-bit words. This is also why you get C99's uint_least32_t and friends, so you're not accidentally forcing a 36-bit machine to have 32-bit overflow behavior everywhere.
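To make that concrete, here is a minimal C99 sketch of the distinction (the variable names are just for illustration, and the printed widths depend entirely on the target):

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Exact-width: exactly 32 value bits, no padding. Optional in C99;
           a 36-bit machine may simply not provide it. */
        uint32_t exact = 0;

        /* Least-width: the smallest type with at least 32 bits. Always
           provided; on a 36-bit machine it can just be the native word. */
        uint_least32_t least = 0;

        /* Fast-width: at least 32 bits, chosen for speed rather than size. */
        uint_fast32_t fast = 0;

        printf("uint32_t:       %zu bits\n", sizeof exact * CHAR_BIT);
        printf("uint_least32_t: %zu bits\n", sizeof least * CHAR_BIT);
        printf("uint_fast32_t:  %zu bits\n", sizeof fast * CHAR_BIT);
        return 0;
    }

Only the first line is guaranteed to print 32 (assuming it compiles at all); the other two are allowed to be wider, and that slack is exactly what spares the 36-bit machine.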
Before the mid-to-late 1990s, programmers rarely needed to worry about the difference in size between 32-bit and 36-bit words.
The problem is simple: there are still systems out there like that, and people are still buying them and writing C code for them. They're just in the embedded space, where day-to-day programmers don't encounter them.
Then, when those systems are maintained, their legacy C code could be corrected to "fixed-C" (which shouldn't amount to that much work in real life anyway).
It would be possible to do a quick-and-dirty job with preprocessor definitions. The real problem comes when you want to write "fixed-C" with a legacy compiler: you realize it does so many things without telling you that you would need a really accurate warning system to catch them all. I was told GCC can report all integer promotions; is that true?
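For what it's worth, a quick-and-dirty shim along those lines might look roughly like this (a sketch only: the header name and typedef names are made up, and the width checks assume a fairly conventional byte-oriented target where long is 32 bits):

    /* fixedint.h -- hypothetical shim for a pre-C99 compiler without
       <stdint.h>. Verify the assumptions below against the real target. */
    #ifndef FIXEDINT_H
    #define FIXEDINT_H

    #include <limits.h>

    #if UCHAR_MAX == 0xFF
    typedef unsigned char   uint8;
    typedef signed char     int8;
    #else
    #error "no 8-bit byte on this target; adjust the shim"
    #endif

    #if USHRT_MAX == 0xFFFF
    typedef unsigned short  uint16;
    typedef short           int16;
    #else
    #error "no exact 16-bit type found; adjust the shim"
    #endif

    #if ULONG_MAX == 0xFFFFFFFFUL
    typedef unsigned long   uint32;
    typedef long            int32;
    #else
    #error "no exact 32-bit type found; adjust the shim"
    #endif

    #endif /* FIXEDINT_H */

Note that on a 36-bit machine the #error branches fire, which is exactly the problem being discussed: the fixed widths simply aren't there to typedef.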
You shouldn't fix it by making users choose what bit width their ints are. That's not the right answer for performance (they don't know what's fastest) or correctness (the only choices are incorrect).
If you have a variable whose values are 0 to 10, then its type is an integer that goes from 0 to 10, not from 0 to 255 or -127 to 127.
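C has nothing like that built in, but as a rough sketch of the idea (the type and function names here are invented), a checked 0-10 type might look like:

    #include <assert.h>

    /* Hypothetical range-constrained type: the compiler would be free to
       pick any representation wide enough for 0..10; the range itself is
       the contract. */
    typedef unsigned int score_t;

    static score_t make_score(unsigned int v) {
        /* A real range type would enforce this in the type system rather
           than with a runtime check. */
        assert(v <= 10);
        return (score_t)v;
    }

    int main(void) {
        score_t s = make_score(7);   /* fine */
        (void)s;
        /* make_score(42) would trip the assertion at run time. */
        return 0;
    }

The representation (8, 16, or 36 bits) then becomes the compiler's problem rather than the programmer's.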