The disclaimer about non-Intel processors is necessary, but sad. The dirty tricks they pulled with their compiler creating slow code on AMD processors are really underhanded.
They add this page/slide to everything connected with Intel CPUs (probably after that case with the AMD mis-optimization, though I'm not sure whether that was its first appearance). For example, if they release a new instruction set, Intel adds this page even when there's no AMD equivalent at all. Treat it as boilerplate.
It's 100% boilerplate. The whole "handicapping" debate that pops up every ICC thread is likely several years out of date (not that I don't expect to see it again and again for the next decade or two).
Case in point, take bench completion time on a Xeon supercomputer [1]. Compare that to relative completion time on an Opteron supercomputer [2] (for both, lower is better). If that's Intel's compiler handicapping AMD performance, I wonder what GCC is trying to do?
Well it cost them some business, because some companies wouldn't use it due to performing poorly on other processors. I can't see how it would be beneficial for them to cripple AMD performance.
"Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel"
Right, because Intel should spend their finite resources buying every imaginable Intel-compatible CPU from other vendors just to test and qualify them for the Intel compiler.
That's not what people are asking. Removing the code that sabotages performance on non-Intel processors would be trivial for Intel to do and would benefit users enormously.
Traditionally (with ICC) the criticism has been that it checks the vendor string for "GenuineIntel" instead of actually checking for feature support with CPUID. People just want the option to check for feature support alone.
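For reference, here's roughly what the two checks look like using GCC/Clang's __get_cpuid helper (a minimal sketch; ICC's actual dispatcher isn't public, so this is purely illustrative):

    #include <cpuid.h>   /* GCC/Clang wrapper around the CPUID instruction */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* Leaf 0: the vendor string comes back in EBX, EDX, ECX
           ("GenuineIntel", "AuthenticAMD", ...). This is what the
           dispatcher reportedly keys on. */
        char vendor[13] = {0};
        __get_cpuid(0, &eax, &ebx, &ecx, &edx);
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);

        /* Leaf 1: feature flags. ECX bit 0 = SSE3, bit 9 = SSSE3, etc.
           Checking these bits is what people want instead. */
        __get_cpuid(1, &eax, &ebx, &ecx, &edx);
        int has_sse3  = (ecx >> 0) & 1;
        int has_ssse3 = (ecx >> 9) & 1;

        printf("vendor: %s  SSE3: %d  SSSE3: %d\n", vendor, has_sse3, has_ssse3);
        return 0;
    }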
A given "feature" does not behave identically on different CPUs. To generate optimal code, you need to know which model is being targeted to make codegen decisions. It is simply not reasonable to expect Intel to do the work on their compiler to figure that all out and test it for competing vendor chips.
If you want more, use a different compiler, period. $VENDOR spends money on their compiler to support their chips, not to appease every whining, butt-hurt user on the Interwebz that thinks $BIGCO owes them something on a competing chip.
So they should just believe whatever a third-party vendor's CPU says, and ship the compiler and hope for the best?
Not what I would do. I would only enable specific optimizations when I could guarantee the output would be optimal for a given model. "Hope is not a plan."
I remember earlier AMD processors supporting SSE1 (with the CPUID bit on), but it was so slow as to be unusable in practice, slower than the general-purpose instructions.
In this case, the optimal thing to do was not to use these instructions despite the processors claiming support.
I'd argue that the optimal thing for a compiler to do is to use SSE when the processor supports it... and then the optimal thing for customers to do is not buy processors that run reasonable code slowly.
> So they should just then believe what another third-party vendor CPU says,
They're already doing that by checking whether it says it's a genuine Intel. If you can't trust it when it says that it supports a feature, why trust it when it reports a manufacturer?
It's questionable whether Intel could actually enforce their trademark there. The case would not be too dissimilar from Sega v. Accolade, which Sega lost:
These are not "specific optimizations", they are things like "use SSE3 iff SSE3 is available on this CPU". Except that they actually code it as, "use SSE3 iff SSE3 is available on this CPU and it is an Intel CPU".
It's not that the engineers behind ICC failed to take the architectures of non-Intel processors into account; it's that the compiler intentionally uses a slow path on non-Intel processors that cripples performance. We're talking "assume the CPU doesn't support SIMD at all" bad here. If you use virtualization to make your CPUID vendor string look like "GenuineIntel", or modify the executable to remove the Intel processor check, the "Intel optimized" builds run just as fast (barring relative processor strengths) on comparable AMD processors.
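To make that concrete, the pattern being described looks roughly like this (a hypothetical sketch; the real ICC dispatch runtime is closed source, so every name here is invented):

    /* Hypothetical vendor-gated dispatcher, for illustration only. */
    extern int  cpu_is_genuine_intel(void);  /* CPUID leaf 0 vendor string check */
    extern int  cpu_has_sse3(void);          /* CPUID leaf 1, ECX bit 0          */
    extern void memcpy_sse3(void *d, const void *s, unsigned long n);
    extern void memcpy_generic(void *d, const void *s, unsigned long n);

    void dispatched_memcpy(void *d, const void *s, unsigned long n)
    {
        /* The complaint: the fast path is gated on the vendor string as well
           as the feature bit, so a non-Intel CPU that reports SSE3 support
           still falls through to the slow generic path. */
        if (cpu_is_genuine_intel() && cpu_has_sse3())
            memcpy_sse3(d, s, n);
        else
            memcpy_generic(d, s, n);
    }

    /* What people are asking for is simply:
       if (cpu_has_sse3()) memcpy_sse3(...); else memcpy_generic(...); */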
This matters because:
Until this became public, Intel gave no mention or disclaimer of suboptimal performance for ICC-generated code on non-Intel processors.
News outlets did and probably still do use ICC-generated binaries for benchmarking, giving an unfair marketing advantage to Intel processors ("Look how poorly this AMD CPU runs this binary we specifically designed to function horribly on their CPU! Buy Intel today!")
1) "Finite resources", when talking about Intel, is a relative concept.
2) As mentioned above, Intel has a history of undermining compilation on non-Intel processors. It's not just a question of passively not spending time testing on other CPUs.
Yeah, it'll definitely be cool to see OpenMP in LLVM/Clang. It's one of the biggest areas where GCC still wins out. It's not a panacea by any means, but I know there are a lot of image-processing libraries that use it.
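The typical image-processing use is just a parallel loop over rows of pixels, something like this (a generic sketch, not taken from any particular library; if the compiler lacks OpenMP support the pragma is simply ignored):

    /* Brighten an 8-bit grayscale image in place. With OpenMP enabled
       (e.g. gcc -fopenmp), the outer loop is split across threads; without
       it the loop just runs serially. */
    void brighten(unsigned char *pixels, long width, long height, int delta)
    {
        #pragma omp parallel for
        for (long y = 0; y < height; ++y) {
            for (long x = 0; x < width; ++x) {
                int v = pixels[y * width + x] + delta;
                pixels[y * width + x] = (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
            }
        }
    }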
This is exactly the kind of thing GNU was worried about with GCC (and thus used the GPL), right? They didn't want a company using all their parsing code with a proprietary back-end.
I know Clang and LLVM were made with licenses explicitly designed to allow such extension (used, for example, to allow Xcode to integrate so tightly).
But in the end, Clang and LLVM may get some contributions back to the open source project, which will benefit everybody.
If not now, then later: it's very common that a project doesn't get the traction its owners were expecting, and then they open it up to at least get some good marketing out of it.
There's also the whole network effect: hiring people who get to know an open source project, who can then contribute back to it or get hired by someone else who does contribute back.
I think, in the end, BSD/MIT-licensed open source is likely to produce better results for the open source project as a whole.
I think GNU made a lot of sense when Stallman gave birth to it, because the software "ecosystem" was very hostile to openness back then; it required strong leadership and action. I think it makes less sense nowadays; you know, even Microsoft is opening up their stuff!
I think we now need to think more about open data, because there's a lot of sensitive stuff sitting behind black boxes and paywalls.
Anyone notice the weird first two chars of the title of this post? Looks like: "". Hexdump says it's twice "ef bf bc". That's not a unicode BOM is it?
The UTF-8 tool (http://www.ltg.ed.ac.uk/~richard/utf-8.html) says it's U+FFFC OBJECT REPLACEMENT CHARACTER, which is intended to act as a placeholder for images, media, etc.
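You can check the arithmetic by hand: "ef bf bc" is a three-byte UTF-8 sequence, and stripping the length/continuation prefix bits leaves exactly 0xFFFC:

    #include <stdio.h>

    int main(void)
    {
        /* UTF-8 three-byte form: 1110xxxx 10xxxxxx 10xxxxxx -> 16 payload bits */
        unsigned char b[3] = {0xEF, 0xBF, 0xBC};
        unsigned int cp = ((b[0] & 0x0F) << 12)  /* low 4 bits of the lead byte      */
                        | ((b[1] & 0x3F) << 6)   /* 6 bits from the 1st continuation */
                        |  (b[2] & 0x3F);        /* 6 bits from the 2nd continuation */
        printf("U+%04X\n", cp);                  /* prints U+FFFC */
        return 0;
    }

(The UTF-8 encoding of the BOM, U+FEFF, is "ef bb bf", so no, it isn't a BOM.)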
I thought it was a clever logo, given the subject matter (i.e. compilers producing object code; even the MS compiler has (had?) *.obj as its output extension).
Intel has put a lot of effort into building a compiler that emits very tightly optimized machine code, oftentimes by exploiting quirks of its own processors (and, as other posters are pointing out, sometimes by intentionally crippling it for other CPUs). Clang has invested a lot of effort into producing helpful programmer-friendly errors & diagnostic messages. Put them together and you get a compiler that's both fast and developer-friendly.
You can already use ICC on OS X as it is, so that presumption doesn't really fit. It certainly seems like they intend to use the Mac edition of Composer as a trial run for eventually ditching their own frontend code. Which, given the current pace of C++ standards development, isn't an unreasonable proposition.
But since this devmeeting poster is the only public information in existence, only Intel knows.