Interesting discussion. I can honestly say that I have never seen any of these come up.
In case anyone is worried: the sizeof('i') one looks, at first glance, like the most serious.
However, it is less serious than it might look. Given "char c;", sizeof(c) evaluates to 1 in both languages; it is only a bare character constant like 'i' that causes a possible problem.
For anyone curious why: I'm not sure why character constants are int in C, but in pure C, apart from sizeof itself, I think it is impossible to tell that a character literal is an int (there is no function overloading or type deduction to reveal the type).
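A quick way to see the difference is a snippet in the common C/C++ subset, so the same file can be fed to both compilers (the 4 below assumes a typical platform where sizeof(int) is 4):

    #include <stdio.h>

    int main(void) {
        char c = 'i';
        /* 1 in both C and C++: sizeof(char) is 1 by definition */
        printf("sizeof(c)   = %zu\n", sizeof(c));
        /* 'i' has type int in C but char in C++, so a C compiler
           typically prints 4 here while a C++ compiler prints 1 */
        printf("sizeof('i') = %zu\n", sizeof('i'));
        return 0;
    }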
In C++ however, with function overloading, we can tell. In particular, we really want:
std::cout << 1 << 'i' << std::endl;
to print the number 1, followed by the letter i (not two numbers). Therefore we need 'i' to be of type char, rather than another int.
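The same overload resolution can be seen without iostream's machinery; here's a minimal sketch (show is a made-up function, not anything from the standard library):

    #include <iostream>

    void show(char) { std::cout << "char overload\n"; }
    void show(int)  { std::cout << "int overload\n"; }

    int main() {
        show('i');  // C++ picks show(char), because 'i' has type char
        show(105);  // picks show(int)
    }

In C, with no overloading, both calls would necessarily go to a single function taking int.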
That answer feels like a cheat to me - aren't both sizeofs implementation defined, so it would be possible (although arguably silly) to have an implementation where char and int were the same size?
sizeof(char) has to be 1 in both C and C++ (5.3.3, para 1), and char is the smallest addressable unit, so it should be a byte on most machines. "Plain ints have the natural size suggested by the architecture of the execution environment", which on most modern machines is 4 or 8 bytes, not 1. Also, an int must be able to represent values up to INT_MAX, which the C standard requires to be at least 32767.
So you could have sizeof(char) == sizeof(int), but only on a machine whose smallest addressable unit is at least 16 bits and whose natural word size is also 16 bits. Possible, but very unusual.
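The relevant values are easy to inspect; on a typical 64-bit desktop this sketch prints 1, 4, and 2147483647:

    #include <climits>
    #include <iostream>

    int main() {
        std::cout << "sizeof(char) = " << sizeof(char) << '\n'   // always 1
                  << "sizeof(int)  = " << sizeof(int)  << '\n'   // 4 on most modern machines
                  << "INT_MAX      = " << INT_MAX      << '\n';  // at least 32767
    }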
The compiler for the TI C55x family [1] of fixed-point DSPs is implemented that way: sizeof(char) == 1 and sizeof(int) == 1, both 16 bits wide. A practical consequence is that you can't easily share struct header files with a more conventional platform, for example. Another is that ASCII strings take twice the space.
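As a hypothetical illustration of the header problem (packet_header is made up), the same declaration describes two different layouts:

    /* hypothetical header shared between platforms */
    struct packet_header {
        char tag;     /* one addressable unit: 16 bits on the C55x, 8 elsewhere */
        char flags;
        int  length;  /* 16 bits on the C55x, commonly 32 elsewhere */
    };
    /* sizeof(struct packet_header) is 3 units on the C55x, but
       typically 8 bytes on a conventional 32-bit platform (padding included) */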
Well, sizeof(char) is 1 by definition, so that will never change. sizeof(int), however, is indeed implementation-defined (only a minimum range is guaranteed), so technically two different C or C++ implementations might give different results.
It's even possible for sizeof(int) to be 1 on a certain architecture; in that case, both C and C++ would behave in the same way. It might happen on an architecture that doesn't support byte access and has 16-bit-wide chars and ints.
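Code that quietly assumes the common case can at least document the assumption, so such a platform fails at compile time instead of misbehaving; a minimal C++11 sketch:

    // Fail the build on a platform where int is no wider than char,
    // e.g. a word-addressable DSP with 16-bit chars.
    static_assert(sizeof(int) > sizeof(char), "assumes int is wider than char");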
It's nice to see an HN submission of an on-topic stackoverflow question (instead of all the off-topic ones that get submitted, followed by comments complaining about the question being closed on SO).