Hacker News

"Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel"

Right, because Intel should spend their finite resources buying every imaginable Intel-compatible CPU from other vendors just to test and qualify them for the Intel compiler.




> Right, because Intel should spend their finite resources buying every imaginable Intel-compatible CPU from other vendors just to test and qualify them for the Intel compiler.

That's not what people are asking. Removing the code that sabotages performance on non-Intel processors would be trivial for Intel to do and would benefit users enormously.


Not generating model-specific code for a processor that doesn't match your CPUID table is hardly what I'd call "sabotaging."


Traditionally (with ICC) the criticism has been that it just checks the vendor string for "GenuineIntel" instead of actually checking for feature support with CPUID. People just want the option to check for feature support alone.


There are such options. Just use -mavx instead of -xAVX and it will compile for any AVX-compatible processor.


Will that make the compiler code harder to maintain?


A given "feature" does not behave identically on different CPUs. To generate optimal code, you need to know which model is being targeted to make codegen decisions. It is simply not reasonable to expect Intel to do the work on their compiler to figure that all out and test it for competing vendor chips.

If you want more, use a different compiler, period. $VENDOR spends money on their compiler to support their chips, not to appease every whining, butt-hurt user on the Interwebz that thinks $BIGCO owes them something on a competing chip.


Ignoring the results of feature checking when on a competitor's CPU is, though, and that's what they do.


So they should just believe whatever a third-party vendor's CPU says, and ship the compiler and hope for the best? Not what I would do. I would only enable specific optimizations when I could guarantee the output would be optimal for a given model. "Hope is not a plan."


Yes, they should.

If your CPU lies about its feature support, there are far, far, far bigger problems for the end user than whether a binary is optimal or not.


I remember earlier AMD processors supporting SSE1 (with the CPUID bit set), but it was so slow as to be unusable in practice, slower than the general-purpose instructions.

In this case, the optimal thing to do was not to use these instructions despite the processors claiming support.


I'd argue that the optimal thing for a compiler to do is to use SSE when the processor supports it... and then the optimal thing for customers to do is not buy processors that run reasonable code slowly.


"So they should just then believe what another third-party vendor CPU says"

Of course. This would be obvious if you had any idea what you were talking about.

" I would only enable specific optimizations when I could guarantee the output would be optimal or a given model."

You can do that with compiler flags, ever heard of them?


> So they should just then believe what another third-party vendor CPU says,

They're already doing that by checking whether it says it's a genuine Intel. If you can't trust it when it says that it supports a feature, why trust it when it reports a manufacturer?


Because they would be infringing Intel's trademark if they claimed to be GenuineIntel.


It is questionable whether Intel could actually enforce their trademark there. The case would not be too dissimilar from Sega v. Accolade, which Sega lost:

http://en.wikipedia.org/wiki/Sega_v._Accolade


These are not "specific optimizations", they are things like "use SSE3 iff SSE3 is available on this CPU". Except that they actually code it as, "use SSE3 iff SSE3 is available on this CPU and it is an Intel CPU".
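The two dispatch policies being contrasted can be written out as predicates (illustrative names only, not ICC's actual source):

```c
/* What users expect: gate SSE3 code paths on the SSE3 feature bit alone. */
int use_sse3_fair(int sse3_bit) {
    return sse3_bit;
}

/* What the dispatcher is criticized for: additionally requiring the
   "GenuineIntel" vendor string, so a competing CPU that truthfully
   reports SSE3 support still gets routed to the slow fallback path. */
int use_sse3_criticized(int sse3_bit, int is_genuine_intel) {
    return sse3_bit && is_genuine_intel;
}
```

Under the second policy, `use_sse3_criticized(1, 0)` is 0: the feature check is performed but its result is discarded on non-Intel hardware.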


http://www.agner.org/optimize/blog/read.php?i=49

It's not that the engineers behind ICC failed to take the architectures of non-Intel processors into account; it's that it intentionally uses a slow path on non-Intel processors that cripples performance. We're talking "assume the CPU doesn't support SIMD at all" levels of bad here. If you use virtualization to make your CPUID vendor string look like "GenuineIntel", or modify the executable to remove the Intel processor check, the "Intel optimized" builds run just as fast (barring relative processor strengths) on comparable AMD processors.

This matters because:

1) Until this became public, Intel gave no mention of or disclaimer about suboptimal performance for ICC-generated code on non-Intel processors.

2) News outlets did, and probably still do, use ICC-generated binaries for benchmarking, giving an unfair marketing advantage to Intel processors ("Look how poorly this AMD CPU runs this binary we specifically designed to function horribly on their CPU! Buy Intel today!")

3) Almost no one is aware of this.


1) "Finite resources", when talking about Intel, is a relative concept.

2) As mentioned above, Intel has a history of undermining the performance of compiled code on non-Intel processors. It's not just a question of passively not spending time testing on other CPUs.



