
Stack Overflow question illustrating the problem:

http://stackoverflow.com/questions/7682477/why-does-integer-...

"Principle of Least Surprise" is that an integer will wrap, because that's what the hardware does in pretty much all cases. Any clever optimizations or undefined behavior should be happening due to explicit flags--which is exactly what the guy in that bug report wanted.

The thing is that, for like the last half-century, we've expected integers to overflow and wrap around; that's just how they work. Ignoring that kind of expectation is asinine.
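
For the curious, the kind of code at issue looks roughly like this (a minimal sketch in the spirit of the linked question, not its exact snippet):

    #include <stdio.h>

    int main(void) {
        int i = 0x10000000;  /* 2^28 */
        int c = 0;
        do {
            c++;
            i += i;  /* doubling; the third doubling overflows a 32-bit int */
        } while (i > 0);
        printf("%d iterations\n", c);
        return 0;
    }

Without optimization, i wraps to INT_MIN on two's-complement hardware and the loop exits after three iterations. At -O2, gcc may assume signed overflow never happens, conclude that doubling a positive i keeps it positive, and turn this into an infinite loop.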


There is an explicit flag, he's compiling with '-O2'. As he noted, without -O2 the output is correct. gcc does exactly what you're saying it should do in this instance, so I don't see what you're unhappy about.
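
(Tangent: -O2 isn't the only knob, either. If I'm reading the gcc docs right, you can keep the optimizer and still pin down overflow behavior explicitly; prog.c here is just a placeholder:)

    $ gcc -O2 prog.c          # signed overflow assumed never to happen
    $ gcc -O2 -fwrapv prog.c  # signed overflow wraps, two's-complement style
    $ gcc -O2 -ftrapv prog.c  # signed overflow aborts at run time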


Consulting the original bug report, the performance benefit of the optimization is hardly clear. Note also that, again, an optimization that breaks 50 years of numerical reasoning is probably not a good default behavior (even at -O2, and especially without clear benchmarks proving its utility!).


50 years?

Two's complement didn't predominate until the late 1970s and early 1980s; before that, ones' complement was the norm.

And there are plenty of processors today which only use sign-magnitude, in particular floating-point-only CPUs. On such hardware, compilers must emulate two's-complement wrapping for unsigned arithmetic, so signed arithmetic is significantly faster.
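
Concretely, the emulation amounts to reducing every result mod 2^N. A hypothetical 32-bit helper to make the point:

    #include <stdint.h>

    /* Hypothetical illustration: unsigned addition "emulated" by explicit
       reduction mod 2^32. On hardware that wraps natively this mask is
       free; on hardware that doesn't, it's extra work on every result. */
    uint32_t add_u32(uint64_t a, uint64_t b) {
        return (uint32_t)((a + b) & 0xFFFFFFFFu);
    }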

The C standard is what it is for good reason. It's not anachronistic. Rather, there are now a million little tyrants who can't be bothered to read and understand the fscking standard (despite it being effectively free, and despite it being 1/10th the size of the C++ standard) and who are convinced that the C standard is _obviously_ wrong.

Which isn't a comment on this friendly-C proposal. But the vast majority of people have no idea what the differences are between well-defined, undefined, implementation-defined, and unspecified behavior, or why those distinctions exist.
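
For anyone hazy on those four categories, a rough illustration (my reading of C11; subclause numbers from memory):

    #include <limits.h>
    #include <stdio.h>

    static int f(void) { printf("f "); return 1; }
    static int g(void) { printf("g "); return 2; }

    int main(void) {
        unsigned u = UINT_MAX + 1u;  /* well-defined: unsigned arithmetic
                                        is reduced mod 2^N, so u == 0
                                        (C11 6.2.5) */
        int r = -1 >> 1;             /* implementation-defined: right shift
                                        of a negative value (C11 6.5.7) */
        int s = f() + g();           /* unspecified: whether f or g is
                                        called first */
        /* undefined: INT_MAX + 1 -- the compiler may assume signed
           overflow never happens and optimize on that assumption */
        printf("u=%u r=%d s=%d\n", u, r, s);
        return 0;
    }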


I think the point is that (integral) numbers stored in hardware naturally wrap, and that this behaviour is not restricted to two's complement. For that matter it's not even restricted to binary: the mechanical adding machines based on odometer-like gear wheels, operating in decimal, would wrap around in much the same way, from the largest value back to the smallest... and these were around for several centuries before computers: http://en.wikipedia.org/wiki/Pascal%27s_calculator
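
The analogy is easy to make precise, too: a fixed set of decimal digit wheels is just arithmetic modulo a power of ten, the same way a fixed-width binary register is arithmetic modulo a power of two. A toy sketch:

    #include <stdio.h>

    /* Toy six-wheel decimal counter, Pascal-calculator style: stepping
       past 999999 rolls over to 000000, the decimal analogue of a
       fixed-width register wrapping past its maximum value. */
    int main(void) {
        unsigned counter = 999999;
        counter = (counter + 1) % 1000000;  /* wraps to 0 */
        printf("%06u\n", counter);          /* prints 000000 */
        return 0;
    }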
