How was it goofy? Personally, what the compiler did made perfect sense to me. If you assume integers can't overflow, then 'b' must be larger than 'a'. So why would the compiler bother evaluating 'c=(b>a)' when it's 'obvious' that the result is just going to be 'c=1'?
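For the curious, here's a minimal sketch of the kind of check being discussed (hypothetical variable names and constant, not the exact code from the bug report):

    #include <limits.h>
    #include <stdio.h>

    /* Hypothetical reduction of the pattern in question. */
    int check(int a) {
        int b = a + 100;   /* signed overflow here is undefined behavior */
        return b > a;      /* so an optimizer may fold this to 1, reasoning
                              that the addition above can never wrap */
    }

    int main(void) {
        /* Without optimization this typically prints 0 (b wraps negative);
           with -O2 the compiler is allowed to print 1; with -fwrapv it
           must print 0. */
        printf("%d\n", check(INT_MAX));
        return 0;
    }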
That said, looking at that page, the guy who filed this bug was being more than a little annoying, and the person responding to it was fairly civil, all things considered.
You're complaining about UB, but it's a necessary evil in C. Making signed integer overflow undefined was perhaps a bad choice by the standards makers, but the fact still stands that even if GCC did the 'right' thing, there's no guarantee that clang or any other compiler would do the same. The code would still be broken; it would just be harder to figure that out. If you want wrapping integer overflow, use the compiler flag for it and write non-standard code.
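For GCC specifically, the flag I mean is '-fwrapv' (or '-fno-strict-overflow'), which makes signed overflow wrap the way the hardware does. Roughly ('overflow.c' being whatever file holds the example above):

    # default: the optimizer may assume signed overflow never happens
    gcc -O2 overflow.c -o overflow

    # non-standard, but defined: signed overflow wraps (two's complement)
    gcc -O2 -fwrapv overflow.c -o overflow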
IMO, the bigger problem is that people write their code, compile it with gcc, and then assume it's standards compliant because it 'works' with gcc.
"Principle of Least Surprise" is that an integer will wrap, because that's what the hardware does in pretty much all cases. Any clever optimizations or undefined behavior should be happening due to explicit flags--which is exactly what the guy in that bug report wanted.
The thing is that, for like the last half-century, we've expected integers to overflow and wrap around; that's just how they work. Ignoring that kind of expectation is asinine.
There is an explicit flag: he's compiling with '-O2'. As he noted, the output is correct without -O2. gcc does exactly what you're saying it should do in this instance, so I don't see what you're unhappy about.
Consulting the original bug report, the performance benefit of the optimization is hardly clear. Note also that, again, an optimization that breaks 50 years of numerical reasoning is probably not a good 'default' behavior (even at -O2, and especially without clear benchmarks proving its utility!).
Two's complement didn't predominate until the late 1970s or early 1980s; before that, ones' complement was the norm.
And there are plenty of processors today which only use sign-magnitude, in particular floating-point-only CPUs. On those, compilers must emulate two's complement for unsigned arithmetic, so signed arithmetic is significantly faster.
The C standard is what it is for good reason; it's not anachronistic. Rather, there are now a million little tyrants who can't be bothered to read and understand the fscking standard (despite it being effectively free, and a tenth the size of the C++ standard) and who are convinced that the C standard is _obviously_ wrong.
Which isn't a comment on this friendly-C proposal. But the vast majority of people have no idea what the differences are among well-defined, undefined, implementation-defined, and unspecified behavior, or why those distinctions exist.
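To make those categories concrete, a rough sketch (my own examples, not taken from the proposal):

    #include <stdio.h>

    int f(void) { return printf("a"); }
    int g(void) { return printf("b"); }

    int main(void) {
        /* well-defined: unsigned arithmetic wraps modulo 2^N by definition */
        unsigned u = 0u - 1u;              /* UINT_MAX, guaranteed */

        /* implementation-defined: right-shifting a negative value; the
           compiler must pick a documented behavior and stick to it */
        int j = -8 >> 1;

        /* unspecified: order of evaluation of f() and g(); "ab" and "ba"
           are both conforming, and the compiler need not document which */
        int k = f() + g();

        /* undefined: signed overflow; the standard places no requirements
           at all on what happens (this is the case in the bug report) */
        /* int boom = INT_MAX + 1; */

        printf("\n%u %d %d\n", u, j, k);
        return 0;
    }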
I think the point is that (integral) numbers stored in hardware naturally wrap, and that this behaviour is not restricted to two's complement. For that matter, it's not even restricted to binary: the mechanical adding machines built from odometer-like gear wheels, operating in decimal, would wrap around in much the same way, from the largest value back to the smallest... and those were around for several centuries before computers: http://en.wikipedia.org/wiki/Pascal%27s_calculator