Even for fixnums, integer overflows and underflows of any kind should ideally result in an exception by default. I think it's a pity that C(++) doesn't have support for this. A lot of bugs and weaknesses could have been prevented (for example, CWE-680: http://cwe.mitre.org/data/definitions/680.html).
I know this decision (to simply wrap around on over/underflow) was probably performance-driven, but on the other hand, if the common languages had required trapping, CPUs would have better support for it...
Edit: Some googling shows that Microsoft has a SafeInt class in common use that reports under/overflow: http://safeint.codeplex.com/ . Still, it feels like a kludge that this is not part of the core language.
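To make the CWE-680 connection concrete, here's a minimal sketch (not the real SafeInt API) of the classic pattern: an unchecked size computation wraps, the allocation comes out too small, and a later copy overruns it. The checked variant assumes GCC/Clang's __builtin_mul_overflow.

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>

    // CWE-680 pattern: count * elem_size silently wraps, so the buffer
    // ends up far smaller than the caller expects.
    void *alloc_array_unchecked(size_t count, size_t elem_size) {
        return malloc(count * elem_size);
    }

    // Checked variant: report the overflow instead of wrapping.
    void *alloc_array_checked(size_t count, size_t elem_size) {
        size_t bytes;
        if (__builtin_mul_overflow(count, elem_size, &bytes))
            return nullptr;
        return malloc(bytes);
    }

    int main() {
        if (alloc_array_checked(SIZE_MAX / 2, 16) == nullptr)
            puts("overflow detected, allocation refused");
        return 0;
    }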
I don't think I'd necessarily want to trap on all (signed) integer overflows. As you say, it might break working code. There's just too much C/C++ code around to change that retroactively. And for modular arithmetic and such it is desirable for integers to wrap around.
But a "trap on overflow" signed and unsigned int type would be nice.
Recent gcc versions use the fact that signed overflow is undefined to do some unexpected optimizations (in particular, gcc assumes a + b < a can never be true if a, b are ints, and will optimize such a check away). I don't think -ftrapv is going to cause many additional errors, but I haven't actually tried it. (Also, http://embed.cs.utah.edu/ioc/ looks interesting.)
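For reference, a sketch of the pattern in question and a check that doesn't rely on undefined behaviour (the limit-based test is my own rewrite, not anything gcc mandates):

    #include <climits>

    // May be folded to (b < 0) at -O2, because gcc assumes signed
    // addition never overflows: the overflow test silently disappears.
    bool overflows_bad(int a, int b) {
        return a + b < a;
    }

    // Compare against the limits before adding; no undefined behaviour.
    bool overflows_ok(int a, int b) {
        return (b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b);
    }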
> (in particular, gcc assumes a + b < a can never be true if a, b are ints, and will optimize such a check away)
Code of exactly that form in the patch for this bug made me do a double take. Fortunately, they'd also changed a and b from plain ints to size_t, so it was OK: unsigned overflow is defined to wrap, so the check actually works there.
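For anyone wondering why the size_t version is fine: unsigned arithmetic is defined to wrap modulo 2^N, so the same expression really is a legitimate wraparound test there, e.g.:

    #include <cstddef>

    // Well-defined for unsigned types: true exactly when a + b wrapped.
    bool wraps(size_t a, size_t b) {
        return a + b < a;
    }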