On a TI DSP I used this millennium, sizeof(char) == sizeof(short) == sizeof(int) == sizeof(float) == sizeof(double) == sizeof(void *) == 1. Every type was 32-bit, and each memory address pointed to a unique 32 bits, i.e. (int *)0 and (int *)1 did not overlap on that system. Since sizeof measures addressable units, not octets, they're all size 1.
Even on more conventional systems, int is 4 bytes under the ILP32 and LP64 data models, but ILP64 is a thing too, making int 8 bytes.
Nope. On 8-bit platforms int is 16 bits wide, and on many 16-bit platforms too. And this is not just history; embedded toolchains are set up to handle code as if it were platform-independent. The C99 uint32_t type is defined as 'unsigned int' on most sane platforms (not sure about ILP64; maybe it's unsigned short there, but then what's uint16_t?). But in the arm-none-eabi toolchain it's defined as 'unsigned long', because the same code is assumed to build on both 8-bit and 32-bit platforms, and only 'long' guarantees the 32-bit range. And to avoid format-string warnings, printf format strings should use the PRI*32 macros like PRIu32 instead of the raw %u / %lu.
The crazy one is `long`, which is 32 bits on some platforms and 64 bits on other platforms.
I solved that problem by never using `long`, opting instead for `int` for 32 bits and `long long` for 64. `long` should be deprecated on 32- and 64-bit platforms; it's not fixable.
In D, we use `int` for 32 bits, `long` for 64 bits, and `size_t` for a pointer index. All the craziness just melts away. You can port the code back and forth between 32- and 64-bit platforms, and it just works. All those `int32_t` types go in the bin along with the whiteout.
I suspect the impetus for suffixing the number of bits is partly a backlash against C, where you never know how many bits a type has. That has caused C programmers a lot of trouble and extra work.
So it's easy to say: I don't know.