Intel Clang-based C++ Compiler [pdf] (llvm.org)
70 points by Someone on April 28, 2014 | 44 comments



The disclaimer about non-Intel processors is necessary, but sad. The dirty tricks they pulled by having their compiler generate slow code on AMD processors were really underhanded.


They add this page/slide to everything connected with Intel CPUs (probably after that AMD mis-optimization case, though I'm not sure whether that was its first appearance). For example, if they release a new instruction set, even one with no AMD equivalent, Intel adds this page. Treat it as boilerplate.


It's 100% boilerplate. The whole "handicapping" debate that pops up in every ICC thread is likely several years out of date (not that I don't expect to see it again and again for the next decade or two).

Case in point: take benchmark completion time on a Xeon supercomputer [1]. Compare that to relative completion time on an Opteron supercomputer [2] (for both, lower is better). If that's Intel's compiler handicapping AMD performance, I wonder what GCC is trying to do?

[1]: http://www.nersc.gov/users/computational-systems/edison/perf...

[2]: http://www.nersc.gov/assets/_resampled/resizedimage730421-ho...


Well, it cost them some business, because some companies wouldn't use it due to it performing poorly on other processors. I can't see how it would be beneficial for them to cripple AMD performance.


It's a non-issue. This compiler is for OS X only, and I can't even remember the last time Apple released an OS X machine with an AuthenticAMD CPU.


They still do this.


"Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel"

Right, because Intel should spend their finite resources buying every imaginable Intel-compatible CPU from other vendors just to test and qualify them for the Intel compiler.


> Right, because Intel should spend their finite resources buying every imaginable Intel-compatible CPU from other vendors just to test and qualify them for the Intel compiler.

That's not what people are asking. Removing the code that sabotages performance on non-Intel processors would be trivial for Intel to do and would benefit users enormously.


Not generating model-specific code for a processor that doesn't match your CPUID table is hardly what I'd call "sabotaging."


Traditionally (with ICC) the criticism has been that it just checks the vendor string for "GenuineIntel" instead of actually checking for feature support with CPUID. People simply want the option to check for feature support alone.


There are such options. Just use -mavx instead of -xAVX and it will compile for any AVX-compatible processor.


Will that make the compiler code harder to maintain?


A given "feature" does not behave identically on different CPUs. To generate optimal code, you need to know which model is being targeted to make codegen decisions. It is simply not reasonable to expect Intel to do the work on their compiler to figure that all out and test it for competing vendor chips.

If you want more, use a different compiler, period. $VENDOR spends money on their compiler to support their chips, not to appease every whining, butt-hurt user on the Interwebz that thinks $BIGCO owes them something on a competing chip.


Ignoring the results of feature checking when on a competitor's CPU is, though, and that's what they do.


So they should just believe whatever a third-party vendor's CPU says, ship the compiler, and hope for the best? Not what I would do. I would only enable specific optimizations when I could guarantee the output would be optimal for a given model. "Hope is not a plan."


Yes, they should.

If your CPU lies about its feature support, there are far, far, far bigger problems for the end user than whether a binary is optimal or not.


I remember earlier AMD processors supporting SSE1 (with the CPUID bit on), but it was so slow as to be unusable in practice, slower than the general-purpose instructions.

In this case, the optimal thing to do was not to use these instructions despite the processors claiming support.


I'd argue that the optimal thing for a compiler to do is to use SSE when the processor supports it... and then the optimal thing for customers to do is not buy processors that run reasonable code slowly.


"So they should just then believe what another third-party vendor CPU says"

Of course. This would be obvious if you had any idea what you were talking about.

"I would only enable specific optimizations when I could guarantee the output would be optimal for a given model."

You can do that with compiler flags; ever heard of them?


> So they should just then believe what another third-party vendor CPU says,

They're already doing that by checking whether it says it's a genuine Intel. If you can't trust it when it says that it supports a feature, why trust it when it reports a manufacturer?


Because they would be infringing Intel's trademark if they claimed to be GenuineIntel.


It would be questionable if Intel could actually enforce their trademark there. The case would not be all too dissimilar to the Sega v. Accolade case, which Sega lost:

http://en.wikipedia.org/wiki/Sega_v._Accolade


These are not "specific optimizations", they are things like "use SSE3 iff SSE3 is available on this CPU". Except that they actually code it as, "use SSE3 iff SSE3 is available on this CPU and it is an Intel CPU".


http://www.agner.org/optimize/blog/read.php?i=49

It's not that the engineers behind ICC failed to take the architectures of non-Intel processors into account; it's that it intentionally uses a slow path on non-Intel processors that cripples performance. We're talking "pretend the CPU doesn't support SIMD at all" bad here. If you use virtualization to make your CPUID vendor string look like "GenuineIntel", or modify the executable to remove the Intel processor check, the "Intel optimized" builds run just as fast (barring relative processor strengths) on comparable AMD processors.

This matters because:

- Until this became public, Intel gave no mention or disclaimer of suboptimal performance for ICC-generated code on non-Intel processors.

- News outlets did, and probably still do, use ICC-generated binaries for benchmarking, giving an unfair marketing advantage to Intel processors ("Look how poorly this AMD CPU runs this binary we specifically designed to function horribly on their CPU! Buy Intel today!")

- Almost no one is aware of this.


1) "Finite resources", when talking about Intel, is a relative concept.

2) As mentioned above, Intel has a history of undermining compilation on non-Intel processors. It's not just a question of passively not spending time testing on other CPUs.


One bit that interests me is the contribution of clang-omp: http://clang-omp.github.io/

Looks promising; I can't wait to try this in trunk.


Yeah, it'll definitely be cool to see OpenMP in LLVM/Clang. It's one of the biggest areas where GCC still wins out. It's not a panacea by any means, but I know there are a lot of image processing libraries that use it.


This is exactly the kind of thing GNU was worried about with GCC (and thus used the GPL), right? They didn't want a company using all their parsing code with a proprietary back-end.

I know Clang and LLVM were made with licenses explicitly designed to allow such extension (used, for example, to allow Xcode to integrate so tightly).


But in the end, clang and LLVM may get some contributions back to the open source project that will benefit everybody.

If not now, then later: it's very common that a project doesn't get the traction its sponsors expected, and then they open it up to at least get some good marketing out of it.

There's also the whole network effect: hiring people who get to know an open source project and can contribute back to it, or who get hired by someone else who does contribute back.

I think in the end, open source with BSD/MIT licenses is likely to have better results for the open source project as a whole.

I think GNU made a lot of sense when Stallman gave birth to it, because the software "ecosystem" was very brutal about openness back then; it required strong leadership and action. I think it makes less sense nowadays. You know, even Microsoft is opening their stuff!

I think we now need to think more about open data, because there's a lot of sensitive stuff behind black boxes and paywalls.


Anyone notice the weird first two chars of the title of this post? Looks like: "". Hexdump says it's twice "ef bf bc". That's not a unicode BOM is it?


Nope, the BOM is U+FEFF, which encodes as EF BB BF in UTF-8.


The UTF-8 tool (http://www.ltg.ed.ac.uk/~richard/utf-8.html) says it's U+FFFC OBJECT REPLACEMENT CHARACTER, which is intended to act as a placeholder for images, media, etc.


What's the symbol at the beginning of the title supposed to be? It looks like [OBJ][OBJ] on my computer.


This is from https://en.wikipedia.org/wiki/Unicode_Specials : U+FFFC, the object replacement character. I have no idea where this came from.


I would guess it came from copying and pasting from the PDF in something like Acrobat Reader.


I thought it was a clever logo, given the subject matter (i.e. compilers producing object code, even MS compiler has (had?) *.obj as an extension for output)



Can someone explain why it's a good (or bad) thing? I don't use C++ so I'm a bit lost as to why they would do this.

As I understand it, LLVM produces some low level byte-code that is then compiled.

Is it that LLVM is better at parsing and the Intel compiler better at performance, so they combined the two?


Intel has put a lot of effort into building a compiler that emits very tightly optimized machine code, oftentimes by exploiting quirks of its own processors (and, as other posters are pointing out, sometimes by intentionally crippling it for other CPUs). Clang has invested a lot of effort into producing helpful programmer-friendly errors & diagnostic messages. Put them together and you get a compiler that's both fast and developer-friendly.


Great news; Intel does make surprisingly effective optimizations.


This looks great!

Is this for OS X only?


Probably because Clang is the default compiler on Mac OS X, but not on Windows or Linux.


You would presumably use ICC on Windows or Linux.


You can already use ICC on OS X as it is, so that presumption doesn't really fit. It certainly seems like they intend to use the Mac edition of Composer as a trial run for eventually ditching their own frontend code. Which, given the current pace of C++ standards development, isn't an unreasonable proposition. But since this devmeeting poster is the only public information in existence, only Intel knows.



