But undefined behaviour isn't disallowed by the standard, and these things are allowed to happen! If they weren't, the standard wouldn't even bother to mention any of it, and certainly wouldn't bother to suggest that one option is for things to behave "during translation or program execution in a documented manner characteristic of the environment". (See the C11 draft standard, 3.4.3p2; the wording is basically the same in C99, I think.)
It seemed obvious to me from the moment I first heard about this stuff that undefined behaviour is there to avoid binding implementations' hands too tightly. It's a way of allowing as wide a range of implementations as practical to be standard-compliant, by not forcing the compiler to patch over every last difference between systems or provide missing functionality. But is it a way to let gcc do whatever it likes, having proven your program invalid on a technicality? Well... I'm less sure about that one.
The suggestions that performance improvements brought by compiler optimizations are meaningless bother me, though.
First, because hardware isn't getting faster that quickly anymore: Moore's Law hasn't meant for a long time that CPUs actually double their per-thread performance every 18 months, so the "1/10th as effective" figure from your first link, which refers to compilers hypothetically doubling performance every 18 years, starts to look more and more attractive.
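(Back-of-envelope: doubling every 18 months works out to roughly 60% per year, while doubling every 18 years is roughly 4% per year - an order of magnitude apart, which is presumably where that figure comes from.)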
Second, because while in C/C++ code the programmer can often avoid useless machine code, newer languages such as Rust and Swift tend to do more work implicitly (safety checks, reference counting), work which a Sufficiently Smart Compiler could often eliminate - increasing the need for good optimizations. (I think that this somewhat mirrors C++'s early history compared to C, but I was too young then to have any personal experience.) Of course those languages also tend to have no undefined behavior, so it's a bit different...
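To make the "implicit work" point concrete, here's a hand-written C analogue of the per-access bounds check a language like Rust or Swift inserts for you (names and structure made up purely for illustration). Inside the loop the check is provably dead - the loop condition already guarantees i < len - so an optimizer is free to inline it and delete the branch, which is exactly the kind of win I mean:

    #include <stdio.h>
    #include <stdlib.h>

    /* The "implicit" safety check, written out by hand. */
    static int checked_get(const int *buf, size_t len, size_t i)
    {
        if (i >= len)   /* the bounds check a managed language would insert */
            abort();
        return buf[i];
    }

    static int sum(const int *buf, size_t len)
    {
        int total = 0;
        for (size_t i = 0; i < len; i++)
            total += checked_get(buf, len, i);  /* check is redundant here: i < len always holds */
        return total;
    }

    int main(void)
    {
        int data[] = {1, 2, 3, 4};
        printf("%d\n", sum(data, sizeof data / sizeof data[0]));
        return 0;
    }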
Third, because I don't think undefined behavior is as evil and hard to avoid as people think it is. I think there are a few weird points (you can cast pointers into malloced buffers to any type as long as you're consistent, but there's no way to do that for a static buffer without fully static layout), and it would be nice to have more control over things like aliasing - both to loosen rules and to tighten them (i.e. more flexible restrict-like functionality). But despite being a fun topic, it doesn't seem to come up that often in practice from what I've seen, so the performance gain is close to free. And when you're, say, fighting for 60fps in a CPU limited scenario, it's hard to turn down even a small free performance gain.
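Here's a minimal sketch of that first parenthetical, as I read the effective-type rules (C11 6.5p6-7) - treat it as my interpretation, not gospel:

    #include <stdlib.h>

    void heap_reuse(void)
    {
        void *p = malloc(sizeof(double));
        if (!p) return;
        *(int *)p = 42;      /* fine: malloc'd storage has no declared type, so this
                                store gives it an effective type of int */
        int x = *(int *)p;   /* fine: read back with the same type ("be consistent") */
        (void)x;
        *(float *)p = 1.0f;  /* still fine: a later store can re-type the storage */
        free(p);
    }

    static _Alignas(double) unsigned char buf[sizeof(double)];

    void static_reuse(void)
    {
        *(int *)buf = 42;    /* UB under a strict reading: buf's declared type is
                                unsigned char[] and never changes, so an int access
                                breaks the aliasing rules */
    }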
Fourth, because as someone who reads assembly frequently, I find idiotic-looking assembly aesthetically bothersome even when it doesn't have much performance impact. I speak in particular of reloading struct fields over and over when the data is already in a register and anyone looking at the code can see there's no sensible way it could have changed in memory since it was loaded - but the compiler isn't smart enough to prove it can't alias... Sure, in individual cases it's easy to cache the value in a local variable to stop this from happening, but in the large it's hard to avoid. Strict aliasing improves the situation somewhat, which is one reason I like it.
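A concrete (made-up) example of the reloads and the local-variable workaround:

    struct mesh {
        int    count;
        float *verts;
    };

    /* Without strict aliasing, the compiler has to assume the float stores might
     * modify m->count or m->verts, so it reloads them from memory on every
     * iteration; with strict aliasing it may assume a float store can't touch an
     * int or a float*, and keep both in registers. */
    void scale(struct mesh *m, float factor)
    {
        for (int i = 0; i < m->count; i++)
            m->verts[i] *= factor;
    }

    /* The per-case workaround: cache the fields in locals.  Works regardless of
     * the aliasing rules, but tedious to do everywhere. */
    void scale_cached(struct mesh *m, float factor)
    {
        int    n = m->count;
        float *v = m->verts;
        for (int i = 0; i < n; i++)
            v[i] *= factor;
    }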
(Obligatory links: http://blog.metaobject.com/2014/04/cc-osmartass.html, http://robertoconcerto.blogspot.co.uk/2010/10/strict-aliasin...)