People contribute to compilers because they need features or performance improvements, not "to kick that other compiler's ass". Sometimes (as here) contributions come from CPU vendors who want to make certain that compilers show off the features of their new hardware. There were tons of great performance improvements in every new version of GCC before clang, and there continue to be today. You just can't model an open source compiler the way you model some closed source word processor :(.
The real question that should be asked is whether, now that there are two compilers to contribute features to instead of one, the progress of either compiler is actually faster than it was with just one. Again: people aren't going to stop improving the compiler just because there is only one of them, as improvements come from contributors who need the better compiler, not from some abstract notion of "have the bestest compiler". Having multiple open source projects in the same space is a detriment--it is a cost--that hopefully is justified by the benefits.
What happens in a world of "competition" is that now Intel has to ask whether it is more valuable to add this feature to GCC or to clang; maybe they have time for both, but probably not. That this work ends up in one compiler and some other work ends up in a different compiler is unfortunate. It isn't even clear who is "competing" or what they are "competing" for. It certainly isn't the performance optimization itself: you can't credit the GCC team (whatever that might even mean) for that, as this work is from Intel. GCC is an open source project contributed to by a vast number of independent actors, not centrally developed by a small handful of people.
The argument for clang and LLVM being helpful is therefore not "competition" (which would make absolutely no sense), but instead "GCC has accumulated years of entrenched architecture that, in 2010 as opposed to 1980, we would design differently: LLVM is thereby designed in a way that is easier to modify; so, even though effort is now forked (meaning fewer contributions to each compiler), a smaller number of contributions will have a more powerful effect on the state of compilers". If this is true (and many think it is), then that's great, but it isn't "competition" in the way people normally consider that concept.
Stallman's reaction¹ to LLVM was noisy and possibly childish, but it gave the FSF another important goal: not only should it supply a "libre" compiler but it should also be worth using over the first alternative that comes along. That second goal is a moving target, unlike the first, and it supplies a real motivation to make the best compiler they can.