Rivals Intel and AMD Team Up on PC Chips to Battle Nvidia (wsj.com)
348 points by MekaiGS on Nov 6, 2017 | 201 comments



This is like, "Hell has frozen over" kind of news

Am I the only one who smells this as something very "Apple" wanted?

I don't think AMD will be giving up any GFX secrets; more likely this is AMD shipping Intel a mobile GFX die to be integrated within the same CPU package.

But in any case, why not just have Intel ship a mobile CPU without an iGPU, plus a separate GPU?

And AMD, why now? When Zen is doing great, has a great roadmap and potential, along with much better GFX than Intel's. Why?

Edit: OK, I didn't read carefully; while this is WSJ, it is still a rumor, nothing has been confirmed... yet.

Edit2: It is confirmed now.

Edit3: Yes AMD will be shipping die to Intel, and it is EMIB at work. https://www.pcworld.com/article/3235934/components-processor...


This is actually a smart move for AMD, and here's why.

These chips from Intel are going to be the super-high top end mobile chips in laptops that cost serious money, and have a rumoured 45w TDP.

Raven Ridge on the other hand is a traditional AMD APU, albeit with a decent CPU component - it's not going after the same market segment at all, it's going after the segment where you want to maybe do some light e-sports gaming.

This way, AMD gets both bites of the cherry - they get to sell their APUs with Zen, and they also get to target the super top end.


Also a smart move for Intel. It frees them up to concentrate on CPU functionality and clean offloading to accelerators. This, IMHO, is the key: without something better than CAPI, CCIX, or NVLink, these are just accelerators. Putting them in as actual compute devices on the same level as the CPU will take some actual guts and innovation.


Kyle Bennett leaked this almost a year ago:

https://hardforum.com/threads/from-ati-to-amd-back-to-ati-a-...


If you don't read hardforum you should!



> But in any case, why not just have Intel ship a mobile CPU without an iGPU, plus a separate GPU?

Power, manufacturing costs, integration and testing costs, design size...

> Am I the only one who smells this as something very "Apple" wanted?

Why would Apple care about NVIDIA? They're already beating them in the kind of performance that matters on products Intel will be competing for.


My theory on why Apple prefers AMD over nVidia:

AMD hardware at a specific price point is more powerful than nVidia hardware at the same price point. However, nVidia has superior drivers that eliminate the difference.

And since Apple prefers to use their own drivers, nVidia loses their main point of differentiation.

But of course the "Apple" drivers for video cards are basically vendor drivers with Apple doing QA & release management. The driver is nVidia's secret sauce; they're not going to show it to Apple, since Apple is now a very competitive GPU manufacturer. At some point Apple will probably put their own GPUs inside Macs, so nVidia doesn't want to give them a head start.

AMD cares less about giving Apple a head start, because they care more about the short term than the long term, and about competing against nVidia.


AMD forked over the driver code; NVIDIA doesn't let Apple handle the drivers, and pushed CUDA, which Apple didn't want anything to do with.

As for the performance-per-buck thing, it's not accurate. For gaming and 3D applications AMD has a huge bottleneck in their geometry pipeline, which causes them to underperform in games, especially pre-Vega. NVIDIA also used tile rasterization and on-die shader output caching to increase performance; this is something that AMD adopted only with Vega.

Pure flops don’t mean much even for compute.


The new Vega boards lag nvidia's 1080 Ti, apart from some edge cases of interest to cryptocurrencies.


Though to be fair, they also are significantly cheaper than the 1080ti. Well, if you can find one at MSRP at least.


Coding with Vulkan directly seems to eliminate the nvidia perf advantage in many cases, so...is it still the hardware or the software lagging?


Vs. coding for nvidia or vs. coding for DirectX independent of card-maker?


Quite; they are all being snapped up by miners.


Looking at AMD's server strategy, they are going directly at NVidia; nvidia just doesn't know it yet. The Epyc line is built for heavy PCIe comms and low comms overhead from NIC to GPGPU. Of course, guess which GPGPU works best with them?


Apple wants low powered, thin, but powerful chips.


In other words, exactly what everyone who uses chips wants.


That's not true at all. Cost is king. Only flagship products care about pushing size down.


> And AMD, why now? When Zen is doing great...

Well, last time AMD had the upper hand on the CPU's technical features, they tried taking Intel into a fight and couldn't handle it. Why would the same thing have a different result now?

It is much smarter to not have a full-on direct fight.


They could handle it, and beat them at several key points (the 1 GHz barrier, first to ship x86-64, first to integrate the memory controller, not to mention eating the P4 for lunch performance-wise), but Intel cheated and our justice department let them. The US refused to do anything, and the EU went at it a decade after the fact.

I agree with your conclusion that fighting head on right now would not be smart when they have nvidia on one side and arm on the other. But not with your assertion that "AMD failed to defeat Intel products in the market" the last time.


How they failed is of little relevance, since there is no reason to believe this time things would be different. (In fact, I would expect the US gov to be more corrupt now, not less.)


Yes it is of relevance, since the EU made it clear if Intel tried it again not only would they act much much more swiftly, but they would also be very tough.


This.

I'd love AMD to bloody Intel's nose (Intel deserves it after their payola anti-competitive behavior against AMD in the 2000's), but it's clear that Intel is too big to fail.

So AMD can win by helping Intel win.

It is not good in the long run for consumers, but it's probably the right move for AMD today.


I assume AMD is making these moves to get their products back out there where consumers read their name even when (web-)stores predominantly stock Intel products (I think it's called "mindshare").

I also hope this is the start of a long journey where AMD chips away bit by bit at Intel (near-)monopoly in the CPU market.


Yes, definitely both mindshare and piggybacking on Intel's existing product pipelines. Playing sidecar to Intel means getting in and getting the spotlight with every major computer manufacturer, where right now they are struggling to get put into pre-assembled systems.


Intel wouldn't be willing to do something like this without the now all-too-plausible threat of a customer moving to a full AMD stack. There are still reasons to prefer Intel in some cases, but nobody has to feel like Intel is the only choice any more.


It has been rumored for quite a while. Chiplets from multiple vendors and on-package memory (HBM) will be the way all future high-end chips are built. The key to getting this right is getting the handoff between devices right at a systems level and extending the virtual memory fabric cleanly to these devices. The win is that you get the best hardware out there on a high-bandwidth, low-latency substrate. AMD has better GPU hardware than Nvidia (Nvidia just has better drivers); Intel still has an edge on CPU physical implementation, so why not combine? I fully expect to see custom accelerators (TPU-like things) incorporated as well in the near future. If companies like IBM and Arm are smart, they'll be entering the space too, selling IP to put over EMIB. I suspect IBM is already there :), Arm probably not.


> And AMD, why now? When Zen is doing great, has a great roadmap and potential, along with much better GFX than Intel's. Why?

Why not now? Why not generate additional income from the sale of these Intel/AMD CPUs instead of relying on just Zen?

This way, AMD makes money whether a Zen or a Core CPU is sold.


Yep, hell has indeed frozen over.


No, it is not. Source: I know someone who worked at AMD, and this is not a remote possibility: employees have been talking about this possibility openly inside the company. Why? Simply because the GPU is a HW black box that anybody can use (including the competitors). At least that's how I understood it.



This seems like a lose lose and a net negative for everyone.

1) AMD must be preventing Intel from building similar integrated GPUs in the future using anything like AMD's GFX patent portfolio.

2) Intel will be able to completely stop vendors from moving to Ryzen for presumably faster integrated graphics.

3) Nvidia doesn't have an x86 CPU and last time I looked - all PCs and Mac laptops were using x86.

As some have speculated maybe this is Apple telling their suppliers to jump and Intel and AMD said how high...

I'm going to be interested to see if we ever get a jump to Apple using ARM and internally designed GPUs; I have a feeling that Jony Ive must be wetting himself over making a reasonably powerful laptop that thin.


No, I think it is win-win.

1) AMD can sell chips in high end systems where it can't compete with Intel on CPUs or Nvidia on GPUs.

2) Intel can ship a high end graphics experience without dGPU which gets them a level of graphics in a form factor they couldn't otherwise achieve.

3) Ryzen is a low/mid range chip. Even if it could match Intel's performance it could never match Intel's brand, and Intel doesn't want to match AMDs price so they will stay in different market segments. Ryzen sales will not be hurt.

4) AMD gets valuable brand recognition by getting Radeon into more premium devices which could actually boost sales for cheaper Ryzen/Radeon devices down the line.

5) Shafts Nvidia which is a win for both sides.

6) Opens lines of communication for possible future merger or fab deal, which is not so much an issue from an anti-trust standpoint when you look at the total competitive landscape of ARM, Nvidia, Apple, Qualcomm, etc and the shrinking relevance of x86 in the big picture.


Regarding 3): most benchmarks I've seen quote 2-7% difference in IPC to Intel. I wouldn't call that low/mid-range. Am I misunderstanding something?


From the parent: "Even if it could match Intel's performance it could never match Intel's brand".


Not with that attitude.

(i.e. argued that way this would be more like "setting up for failure" on AMDs part.)


Brand perceptions can definitely change but the reality right now is that for the past 5 years AMD has been shipping non-competitive parts and has had zero premium design wins and very little money spent on advertising.

For the average consumer, every high end system they have seen in recent memory is Intel, and every ad they have seen is Intel, and if they have seen AMD at all it has always been positioned as a budget product.

That is going to take a long time for AMD to turn around assuming they can sustain performance parity with Intel.

That is also why AMD is focused on servers and semi-custom where they are selling to technical people who are evaluating based on price/performance and not based on brand perception.


I am pretty sure the current PS4 and Xbox count as premium design wins for AMD.


There's nothing "premium" about those gaming consoles, maybe the PS4Pro/XboneX could be considered "premium" for the console sector.

Imho the parent was talking about the PC sector, and in that regard AMD has sadly been trailing way behind Intel/Nvidia for quite a while now. AMD has struggled to oppose Intel's i5/i7 dominance. Similarly, they still have no real competition for the high-end NVidia GPUs, like the 1080.

If money is not a limiting factor, then you will be hard-pressed to come up with a build that does not include an Intel CPU and at least one Nvidia GPU.


> If money is not a limiting factor, then you will be hard-pressed to come up with a build that does not include an Intel CPU

If money is not a limiting factor, you probably look at AMD EPYC now.


Yeah, the PCIe lanes alone are killing it, never mind the increase in CPU performance. Intel doesn't have much to offer to oppose this dominance, and they're at a wall with process technology (can't go much smaller than 7nm without massive power leakage) with a 3-to-4-year pipeline just to get a new chip out. Making one chip takes 9 months end to end, assuming the design is ready now, hence the long pipeline.


That looks promising but is aimed at the server market. I haven't been keeping up that well since Ryzen but imho AMD has been really struggling with the premium desktop market, especially CPU wise.

Ryzen helped some with that but afaik Intel's chips are still way ahead in terms of single-thread performance, which is something many desktop users (like gamers) are looking for.

On one hand, it's cool to have i5's/i7's last so long, on the other hand, it's never a good situation when there's no real competition. Too bad Intel didn't pull through with Larrabee, a third player in the dedicated GPU market would have made this whole situation way more interesting/dynamic.

Though it could just as well have resulted in AMD going belly up; trying to compete against both Intel and Nvidia sounds like it would have been a worse deal for AMD than the current situation.


And consumers don't say, I want the same processor as the Xbox.


> And consumers don't say, I want the same processor as the Xbox.

XBox One's processor is from a pre-Ryzen generation, so this should not be a concern. :-)


APUs pre-Ryzen weren't too bad, used to have a VM box running an A10 before I put a 1500X in it. Price wise they're very competitive, and AMD is happy to fab semi-custom silicon for anyone, hence how they were the primary vendor for all consoles for years.


Yeah, I know...it's just in light of this argument the timing seems strange.

AMD are on the brink of offering products that, for the first time in forever, at least have the potential to build up their brand vs Intel, to actually have a significant differentiator that they can market. It seems really harsh to kill that before it even begins. But yeah, it could actually be a sign that AMD knows it couldn't go the whole nine yards in this game anyway.


A 10% IPC advantage compounded with up to 20% higher clock speeds in top-of-the-line desktop CPUs (when OCed, at least).


I don't think OC is a fair baseline to qualitatively distinguish between CPU classes (at least low/mid vs high-end). OC is surely a large factor in purchasing decisions for a subset of high-end users, but not all.


I was talking max OC on an R7 (4.0-4.1 GHz) vs max OC on a 7th-gen i7 (5.0-5.1 GHz), though. Which is a good indicator of the max clocks each platform can reach.
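Taking the figures quoted in this subthread at face value (rough forum numbers, not benchmarks), the compounding is simple arithmetic:

```python
# Back-of-the-envelope single-thread comparison: perf ≈ IPC × clock.
# All numbers below are the rough figures quoted in this thread, not measurements.
intel_ipc_advantage = 1.10   # ~10% IPC advantage claimed for Intel
intel_max_oc_ghz = 5.0       # max OC on a 7th-gen i7, per the comment above
amd_max_oc_ghz = 4.0         # max OC on a Ryzen R7, per the comment above

relative = intel_ipc_advantage * (intel_max_oc_ghz / amd_max_oc_ghz)
print(f"Intel relative single-thread performance: {relative:.2f}x")  # ~1.38x
```

So under those (generous-to-Intel) assumptions, the gap at max OC is closer to a third than to 10%.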


Brand is more important than benchmarks for most buyers and even if AMD chips were better Intel would still be the premium brand.


Are people actually looking to buy a computer for an Intel CPU? I have honestly never met anyone that said "I need my computer to be Intel Inside™ ". They just say they want fast / modern / able to do X task and 99% of people have no idea what a CPU even is.


> Are people actually looking to buy a computer for an Intel CPU? I have honestly never met anyone that said "I need my computer to be Intel Inside™ ".

I've had friends go out to buy a computer and insist on an i7 because "it is the best."

Same people who buy the newest Samsung Galaxy every year.


> I've had friends go out to buy a computer and insist on an i7 because "it is the best."

But i9 is the best... ;-)


My dad went to Best Buy to replace his 10+ year old laptop last year. He actually said he wanted Intel instead of AMD because he heard AMD processors were very slow. I had to tell him the Intel N-whatever and Celerons were also very slow so that he'd stick with one of the i-whatevers.


Basically anything they sell him will be faster than his current laptop. Many of the low power i series aren't actually much better than a celeron, but they're very common in new laptops.

CPUs aren't the bottleneck though; disk I/O, and I/O in general, are where most "slowness" issues come from.


To most consumers, AMD sounds like a knockoff brand. Have you even seen an AMD commercial in the last decade?


I would say yes. Intel are better at marketing than AMD (think decades of TV ads and the little Intel Inside stickers).


And then there are the ones in the UK calling the i3 processor "high powered". I am looking at you, PC World :-)


> think decades of TV ads and the little Intel Inside stickers

When I see "Intel Inside", I immediately think "Intel Inside - Idiot Outside". ;-)


You might have a good point for the home market - I couldn't say either way. I do know for the corporate market, businesses want as low a number of SKUs to support as possible, and so they will often only buy Intel (and only specific CPU models at that). This is particularly true in the datacenter / cloud world.


I work in heavily optimized compute, and it does matter there. We write assembly that is supported by one architecture only (for now), and frontier Intel chips are easier to come by. So our code runs on Intel’s processor extensions a generation or two before AMD, and that represents a high switching cost.

But this is server compute stuff which this news doesn’t seem to be about.


Agree, that seems like a very 2003 idea. The whole "Intel Inside" thing doesn't have the cachet it used to.


Clocks matter too.


I am deeply curious about point 6. If Intel bought AMD, I think it would lead to a massive change in how systems are built. I know there's the danger of a full x86 monopoly, but nvidia puts a lot of pressure on Intel at the high end, and if the Windows-on-ARM thing ever takes off I'm sure nvidia will have a laptop chip out in no time.

If there is a merger, I hope this will make a lot of AMD dream projects like the HPC APU take off, but I also fear that it might lead to a more stagnant intel+amd in a few years time.


They could never do a merger for anti-trust and dual sourcing concerns...

Having said that, it MIGHT open up the potential for AMD running (some, all?) chips through Intel's fabs and actually enabling a nearly competitive playing field.

The best thing that the (US) government could do for the market would actually be to force Intel to split into a fabrication company and a chip design company. That would enable the military to contract fabrication of higher-end validated CPUs on US soil from whichever sources they wanted...


How is this not duopoly collusion to push out Nvidia? Smells just like 'competing gas stations' on opposite corners with the same exact prices.

AMD can be Aldi, Intel can be Whole Foods.


What Whole Foods are you shopping at that's as cheap as Aldi??!


1) Intel does not ship GPUs on high-end CPUs.


Regarding 3. It's rumored in the chip design community that NVidia designed an x86 CPU and got as far as engineering sample silicon but the project was cancelled because they couldn't work out the licensing.


>As some have speculated maybe this is Apple telling their suppliers to jump and Intel and AMD said how high...

That sounds a bit far fetched IMO. They aren't that dominant in the PC market (esp. compared to the Big Three (Lenovo, HP, Dell)) for Intel and AMD to warrant such an "out there" cooperation just at the behest of Apple...


Lenovo/HP/Dell might sell more units, but every single Apple product has a high end i5/i7 or extreme low power part in it. The margins for Intel are likely an order of magnitude higher from Apple than Dell.


I think you overestimate the heft of Apple in x86 world.

a) Sure, Apple doesn't have a low end and as such higher overall margins, but: The Big Three all have a sizable mid-/highend/ultrabook/business range as well with just the same i5/i7/core m CPUs (only updated more often). Apple surely is a good customer, but I severely doubt that the overall margin difference for Intel between them and Apple is even in the remote vicinity of an order of magnitude. What is an o.o.m. higher though is the market share of those three (~60% vs ~6%).

b) I just don't see a bargaining chip. What's the "or else" from Apple's side? Full Ryzen? Would actually be cool, but Ryzen is still completely unproven in the mobile world, and this would be all the more reason for AMD not to enter the deal. ARM? Yeah...I wouldn't hold my breath.

You have to consider that this isn't a simple internal SKU customizing, but a significant and delicate licensing deal with THE direct competitor in x86 world. (The tech is actually the simpler part, seeing that it's not an integrated solution but a MCM.) Apple would have to have severe leverage that they would be the main driver for Intel for such a move. Or they are paying a real buttload for this tech - but then you can be sure they want exclusivity. And the statement really doesn't sound like it.


Don't almost all Apples use the mobile versions, with only the Pro using HEDT parts?


It depends; for some massively parallel applications (workstations etc.) Threadripper has the edge on price/performance, even more so if they bring out Threadripper 2 next year and replace the two dummy dies.


Isn't Apple market-share of x86 tiny?


About 4%.


Some thoughts from anandtech: "The agreement between AMD and Intel is that Intel is buying chips from AMD, and AMD is providing a driver support package like they do with consoles. There is no cross-licensing of IP going on: Intel likely provided AMD with the IP to make the EMIB chipset connections for the graphics but that IP is only valid in the designs that AMD is selling to Intel"[1]

[1]https://www.anandtech.com/show/12003/intel-to-create-new-8th...


If EMIB is that valuable, then they should expand it to main system memory as well as NVMe flash/XPoint storage.


EMIB requires that every component fit on a die that can be connected to the other components, like the CPU, via an embedded channel through the substrate. Expanding the substrate to accommodate other components is not without costs.


For laptop purposes, wouldn't Intel just stack 8 or 16GB of main memory DRAM, like they would HBM?


Umm, probably, but NVMe is still connected via PCIe. The whole point of this deal is the EMIB connecting RAM and GPU. And it's hard; EMIB connections are mostly about power dissipation and throughput.

I am probably out of my depth guessing why companies aren't doing this with NVMe. Maybe it's just that the market does not exist for it, or maybe it's just the cost...


Intel + AMD should get together in offering a CUDA alternative/compatibility layer for deep (reinforcement) learning/AI, where NVidia has been experiencing exponential growth for the past few years while Intel and AMD are simply non-existent, full of half-baked efforts.


AMD doesn't care about Deep Learning.

This is a quote:

"Are we afraid of our competitors? No, we're completely unafraid of our competitors," said Taylor. "For the most part, because—in the case of Nvidia—they don't appear to care that much about VR. And in the case of the dollars spent on R&D, they seem to be very happy doing stuff in the car industry, and long may that continue—good luck to them. We're spending our dollars in the areas we're focused on."

"Car stuff" being self-driving cars, while "the areas we're focused on" is VR. From http://arstechnica.co.uk/gadgets/2016/04/amd-focusing-on-vr-...

AMD has made numerous press releases about supporting deep learning, sometimes via OpenCL, sometimes via cross compiling CUDA or sometimes something else.

I used to get excited about it.

Now, I have a rule: don't get excited about AMD (or Intel, or any new hardware) until they are winning at training neural networks on an absolute speed basis. (Note: vendors will frequently release benchmarks showing how they beat Nvidia. Almost inevitably these are for inference, and often on a speed per watt or speed per dollar, or you can't actually buy the hardware.)


AMD and Google IIRC are already working on a CUDA implementation for AMD GPUs.


How long do I need to wait until TensorFlow can run stably, and at the same speed as on cuDNN, on AMD hardware? I can't even contemplate buying AMD right now (gaming is not very important to me).


AMD's performance deficit is about to get a lot worse. Nvidia's upcoming Volta architecture is massively optimised for deep learning - they're touting a 12x performance increase for training and 6x for inferencing over Pascal.

I think Intel have a better chance of catching up with Nvidia at this stage. They've been on an acquisition spree and have picked up a huge amount of DL-related IP. They have immense R&D and fab resources at their disposal.

https://wccftech.com/nvidia-volta-tesla-v100-gpu-compute-ben...


That's only if you use Volta's Tensor cores, however.

AMD's 16-bit packed performance with Vega is more than respectable vs NVidia's 16-bit packed performance in Pascal.

In the future, all AMD needs to catch up to Volta's Tensor cores is to build Tensor cores themselves. That doesn't seem like a major technical hurdle. I'm fairly certain that Google would be the primary patent holder on Tensor-cores.
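For context on what a tensor core actually computes: each Volta tensor core performs a fused 4x4 matrix multiply-accumulate, D = A×B + C, with fp16 inputs and fp32 accumulation. A sketch of that math in plain NumPy (an illustration of the operation, not of the hardware interface):

```python
import numpy as np

# The per-cycle operation of a Volta tensor core: D = A @ B + C on 4x4 tiles.
A = np.arange(16, dtype=np.float16).reshape(4, 4)  # fp16 input operand
B = np.eye(4, dtype=np.float16)                    # fp16 input operand
C = np.ones((4, 4), dtype=np.float32)              # fp32 accumulator input

# Inputs are fp16, but products are summed at fp32 precision, which is why
# "16-bit packed" flops on regular shader ALUs are not directly comparable.
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D)
```

Building a unit that fuses this whole tile operation is the "Tensor core" part; the surrounding comment's point is that the operation itself is not exotic.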


Google open sourced CUDA support for Clang/LLVM, which has been upstream a while now (years) and is kept up to date with various CUDA versions. This has not been a collaboration with AMD.

I haven't kept up to date on what AMD has done with that work, but I believe they use it as part of their compatibility story.


This. CUDA gives Nvidia a huge advantage.


AMD has HIP. HIP allows developers to convert CUDA code to portable C++; the same source code can be compiled to run on NVIDIA or AMD GPUs: https://github.com/ROCm-Developer-Tools/HIP
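For a sense of how mechanical much of that conversion is, here's a toy sketch of the kind of API renaming HIP's hipify tools perform (the real tools also handle kernel-launch syntax, headers, and library calls; this is only an illustration):

```python
# Toy illustration of hipify-style source translation: the HIP runtime API
# deliberately mirrors CUDA's, so much of the port is a mechanical rename.
RENAMES = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",          # also catches cudaMemcpyHostToDevice etc.
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def toy_hipify(src: str) -> str:
    for cuda_name, hip_name in RENAMES.items():
        src = src.replace(cuda_name, hip_name)
    return src

cuda_src = "cudaMalloc(&d_x, n); cudaMemcpy(d_x, x, n, cudaMemcpyHostToDevice);"
print(toy_hipify(cuda_src))
# hipMalloc(&d_x, n); hipMemcpy(d_x, x, n, hipMemcpyHostToDevice);
```

The one-to-one naming (hipMalloc, hipMemcpy, hipMemcpyHostToDevice are all real HIP identifiers) is what makes "portable C++ from CUDA" plausible at all.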


> a CUDA alternative

this is what OpenCL is, afaik.


Apples to oranges. The two APIs aren’t comparable, and being forced to use OpenCL is a hindrance itself. But unfortunately it has “Open” in the name so it must be better than that proprietary single vendor CUDA nonsense... /s


...a very underwhelming one when it comes to Deep Learning


Well mainly because Nvidia gimped their OpenCL drivers and there was no customer pushback.


OpenCL vendors never provided a competing tooling alternative to CUDA.

Drivers are only part of the story.

Even Google preferred to create their own Renderscript dialect than supporting OpenCL.


That would still be fine if OpenCL performed similarly on Intel/AMD and were a first-class citizen of the various deep learning frameworks. OpenCL is just an afterthought there, sadly.


They could leverage OpenCL and they both already have implementations.


Then they'd better provide SYCL and similar backends for modern languages.


Intel and AMD had the opportunity in 2014, but they made a series of bad decisions that have put them waaaay far behind. Nervana was a bad move for Intel and they've figured it out by now. AMD is deeply mismanaged and shareholders should lobby for change. OpenCL has been such a missed opportunity.

Nvidia will be hard to beat; they're going to be the next Intel.


> AMD is deeply mismanaged and shareholders should lobby for change

I don't know about that... Sure, they've had to liquidate some things, but that let them survive long enough for Ryzen and Vega to come out, which has enabled them to claw their way out of the red and into the black for the first time in forever.

PS: Am I right to assume that by "bad decisions" you mean things like the "Bulldozer" architecture with that wonky resources-shared-between-cores-thing?


By "bad decisions" I mean they completely overlooked the enterprise market and machine learning when it should have been obvious. Ryzen and Vega have such small profit margins compared to Nvidia's enterprise lineup. And that's their other problem: they keep chasing low value markets. AMD's life in the black will be short lived. Maybe Intel will just end up buying them, but then what's in it for intel?


Intel most likely would not be allowed to buy AMD, something something monopoly, although a real lawyer could probably tell you more.


> OpenCL has been such a missed opportunity.

Why? It's really a shit API, hard to program for and hard to make efficient. I think AMD's problem is that they didn't make their own tooling pipeline (and then, hopefully, make an open standard out of it). Instead they stuck with a crappy open standard with poor tooling because, well, it was the standard.


I should be more specific: Their strategy around getting people to use their cards for GPGPU is a missed opportunity, their OpenCL strategy being a component of that.


Ah, agreed.


This is the whole point: CUDA only got this far because OpenCL sucks, forcing everyone to use a C dialect.

They had to be beaten to finally start providing a bytecode format similar to PTX, for multiple languages, while accepting that most researchers want to use C++ or migrate their Fortran code to GPUs.


OpenCL 2.1 was released two years ago and introduced a C++ version of the kernel language.


AMD only supports OpenCL 2.0 today.

So practically speaking, OpenCL 2.0 is the best you can get, unless you want to run on Intel's iGPUs or Intel's AVX 512 on their CPUs.

AMD does have support of C++ in OpenCL 1.2 as an optional extension, but their support of C++ in OpenCL 1.2 doesn't work when you enable OpenCL 2.0 for some reason. Also, the CodeXL debugger only works on OpenCL 1.2 at the moment...

As far as I can tell, AMD's implementation of OpenCL 2.0 is still early-stage. It's fine if you're cool with debugging via "printf" statements.


I know, most of the drivers still don't support it properly after two years, and there are no good debuggers available.

Meanwhile on the CUDA C++ side,

"Designing (New) C++ Hardware” - https://www.youtube.com/watch?v=86seb-iZCnI


That was an interesting talk.


Worth noting: Skylake and Kaby Lake have been horror shows for Intel integrated GPU driver crashes. To add to this, some paths (like DXVA) are actually slower than on Haswell.

I wonder how much of this is a fix for GPU reliability rather than GPU performance.


> Skylake and Kaby Lake have been horror shows for Intel integrated GPU driver crashes

My experience with Intel iGPUs on skylake and kaby lake is quite different. They work very well on Linux.


I guess I should have clarified: Win7


Skylake and Kaby Lake aren't supported by Windows 7: https://arstechnica.com/information-technology/2017/03/micro...


Skylake is supported, as mentioned in the article you linked to.

Kaby Lake's GPU is not a change from Skylake, so I expect the same problems, just in Win10 instead of 7.


>only security fixes for Skylake systems running Windows 7 and 8.1

I don't think a security fix is going to address your GPU issues. You should update to a supported operating system.


It's only been two months since the last Intel driver was released in Windows Update on Win7. The GPU is supported just fine thanks.

Also, we build systems which run our application suite controlled by a custom control panel, and we have yet to manage to fully tame Win10's intrusiveness into the operator's workflow. Even if we do fully tame it, maybe the next version will change the list of things needing taming. I'd use Windows Server instead, but it isn't supported by ASUS on desktop motherboards. We're in a dead end and seriously thinking about Linux at this point.


I suggest just using the superior OS at this point: Linux.


I'm not super familiar with the law in this area, but doesn't AMD+Intel teaming up get into antitrust territory? The x86 architecture still basically has a monopoly on the desktop, laptop, and server space. Could someone a bit more familiar in this area elaborate?


I think this would be Intel plus the Radeon division, not the AMD CPU side; Intel would still compete with AMD's CPU division.


Not a lawyer, but: in theory, yes. However, we don't really deal with antitrust issues anymore, as any reasonable fine is pocket change for these companies. Not to mention that if there's a legitimate threat, they just fund a startup and point to it as independent competition, which will go bankrupt the second people stop talking about the antitrust case.


Intel has paid Nvidia billions in the past to use Nvidia's tech at the expense of AMD..

https://newsroom.intel.com/news-releases/intel-announces-new...

Since both Nvidia and Intel are monopolies in their respective fields.. that was far more egregious towards AMD than this is towards Nvidia.


Any chance this is try before you buy for Intel?


Close to zero chance of Intel being allowed to buy AMD, unless they open up x86 for others. Although if alternative architectures like ARM start gaining some actual ground on desktops/laptops, say in 20 years, then it might not trigger anti-monopoly bells as hard.

Then there's the option of Intel buying just the Radeon parts of the company. However seeing as the GPU market seems to be growing fast, it would have to be quite the offer for shareholders to agree.


They might be allowed to buy Radeon Technologies Group, though.


For those who are excited about the AR/VR Mixed Reality headsets that Microsoft recently released, I think being able to use them with more laptops, or maybe even thin-and-light laptops, will be a great option. If you haven't tried it, seeing a floating 2D desktop of the computer you're using inside a 3D VR environment is an interesting experience.


AMD should focus on getting great support for their existing tech. If their current GPU lineup would be better supported by Tensorflow & the likes that would greatly increase their adoption. Also, more RAM per GPU guys. Memory is a big bottleneck for many models I use.


Check out the Radeon SSG. It's a GPU with a directly addressable 2TB SSD built into the card.


Next new desktop / graphics card purchase (hell, throw in a VR headset/gun-style controller, an android horse[0] and a real Mongolian saddle[1]) = Mount & Blade: Bannerlord[2], IMHO set to be the best game in decades. No bandwidth for thought until then. :)

[0] Played a cowboy game on one of these in a Shenzhen VR house, most fun I've had in years! Just don't look at the video of yourself playing afterwards!

[1] Cheap realism.

[2] https://www.taleworlds.com/en/Games/Bannerlord/


Is this really going to be Intel with an AMD GPU, or Intel simply paying AMD for patents instead of Nvidia, while still making their own GPUs?

Somehow I doubt AMD wants to give away their APU competitive advantage to Intel.


https://arstechnica.com/gadgets/2017/11/intel-will-ship-proc...

AMD GPU Die + Intel CPU die on one package.


Interesting. Given it's Intel and AMD, I hope it will work seamlessly on Linux, unlike the Optimus horror.


PRIME offloading on laptops with the same combination has worked flawlessly for years. It obviously works on desktops as well, between GPUs with open source drivers.
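For anyone who hasn't used PRIME offload: on Mesa it's selected per process with an environment variable. A rough sketch of the usual workflow (these are the standard xrandr/Mesa tools; output varies per machine, and the commands are guarded so the snippet degrades gracefully without a display):

```shell
# List the render providers (iGPU + dGPU) the X server knows about.
command -v xrandr >/dev/null && xrandr --listproviders || true

# DRI_PRIME=1 asks Mesa to run a single program on the discrete GPU;
# anything launched without it stays on the integrated GPU.
command -v glxinfo >/dev/null && DRI_PRIME=1 glxinfo | grep "OpenGL renderer" || true

# Typical use: offload just the game, e.g.
#   DRI_PRIME=1 steam
echo "PRIME offload is selected per-process via DRI_PRIME"
```

This per-process model is exactly what the Optimus proprietary stack struggled to offer on Linux for years.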


Not on the combination of Intel + Nvidia, though.


Genuinely curious: I wonder how a partnership between competitors like this usually starts off? Does someone near the top (i.e. management) of each company say, "Hey, we are behind on this. Can we borrow it?" Or do their staff talk and suggest it upward? Or something else? Does this happen all the time, with only a few making it through? Or do these discussions rarely happen?


This proves how dominant Nvidia is: Intel and AMD are literally shaking in their boots to even consider partnering up to attempt to defeat a common enemy.


Is Nvidia that dominant? I don't have any numbers, but I'd be surprised if they had more GFX cards installed than Intel. In the high-end market, sure, but that's not a very big part of the whole story.

And as for game consoles, both the Xbox One and PS4 use AMD chips. Only the Switch uses Nvidia.


Comparing dedicated graphics cards to integrated graphics and then claiming Intel has more chips installed than Nvidia is asinine.


There is nobody paying $500+ for an Intel GPU, let alone $5000+ in the pro market.


More likely it proves how financially weak AMD is.


"Literally"? Are you sure that's the right word here?


OK, I haven't read the article (I'm on mobile), but Intel and AMD shouldn't be aligning on anything. This sounds extremely anti-competitive.

Nvidia should get an automatic x86 license.

Actually, I have a proactive anti monopoly idea. Any company that is the predominant player in a market cannot use patents to limit the ability of a competitor to make a compatible product.

This would mean anyone could make x86 chips w/o a license.


The reason why Intel has iGPUs in their CPUs in the first place is a result of Intel licensing Nvidia tech. So you're telling me after Intel has paid Nvidia billions to use their tech, now it should be illegal for AMD to get in bed with Intel?

Source: https://newsroom.intel.com/news-releases/intel-announces-new...

Good luck with that argument.


There are rumours and speculation about this, but my reading into it is that this is going to be a single SKU (maybe 2 or 3 variants), possibly for a single customer.

My best guess is Apple -- they were already using Intel CPUs with AMD GPUs, and they probably opened up their checkbook to make this collaboration happen. It should give them performance and power benefits, and Intel and AMD sell the same number of CPUs and GPUs, respectively, as before.

Not sure why you think this is so anti-competitive. Who knows if NVIDIA was offered the same deal and declined? They are much less prone to making "semi-custom" parts for third parties, while AMD already has a track record of doing this for Sony with the PS4 and Microsoft for the Xbox One.


Given that gaming was specifically mentioned, I'm assuming it probably isn't Apple. It's probably for products more similar to the Razer Blade Stealth. The concept of something like that is cool, but the integrated graphics are definitely the bottleneck.


I doubt gaming is really their endgame here, there are many reasons to want integrated GPU and CPU on the same die beyond gaming that would be relevant to Apple for desktop and mobile (integrated pipelining, higher memory bandwidth)


Define "predominant in a market." Is intel competing in the x86 market, or the computing market? Intel could rightly point out that vastly more ARM cores are sold than x86 cores, so really it should be open season on ARM, not Intel.


You are arguing using a hypothetical. Translate to Latin and become profundis.


Couldn't they just create a small company with 1 "employee" and patent everything under that company and just "share" their secrets?


That should be treated like an SEC violation. Life is not a rulebook to be gamed. Those that manipulate the law should be shown the door.


This is fantastic for AMD! Intel has the largest GPU market share thanks to its integrated graphics. While I haven't looked into it too much, it sounds like an opportunity for AMD to ride on top of the market share Intel already has. Let's just hope that Intel doesn't pull any legal tomfoolery and steal AMD's IP.


Oh, so still not confirmed officially.

Why would AMD agree to this? This would massively eat into Raven Ridge laptop sales.


Probably the next Apple release cycle is the main motivation, but who knows?


I'm actually quite surprised Apple didn't already go with Ryzen/Threadripper in their Mac Pros, considering what huge multi-thousand-dollar margins that would have offered them (at least if they went with the same absurd prices as the Xeon Mac Pros). And they could've still claimed a significant performance boost for the Mac Pro compared to the old Intel chip in the last generation.

They could've also replaced all of their dual-core laptops with quad-core Ryzen APUs for about 2-2.5x increase in both CPU multi-thread performance and GPU performance. And I don't think it would've cost them more, or not significantly more at least. AMD seems to price their cores at 50-60% of Intel's cores.


I would guess that Apple/Intel have a multi-year contract in place. Apple probably has a legal obligation to keep using Intel for the next couple of years. There are many benefits to both sides that can be put into a contract, different in each case.


Pricing and volume, most likely.


Isn't thickness and therefore heat dissipation and power consumption paramount in Apple's laptop line? AMD struggles in all those areas as far as I know.


AMD's latest chips and APUs are more efficient than Intel's. You should do a hard reset on everything you know about AMD's chips that is older than a year.


I thought Ryzen was finally catching up in those areas? They certainly seem to be competitive in the desktop space.


I would have liked to see that too, but I assume one of the reasons could be porting the BIOS/UEFI and chipset drivers while keeping the quality the same as what they already have. AMD has done well on the hardware side, but they are not famous for stability in drivers and software.


Considering how successful AMD has been in the laptop space for the last... decade, I don't think they have much market share to lose here.

Laptops have always been the most profound demonstration of Intel's monopoly, because they are tightly integrated products that end users never get to customize deeply. So it's really easy for Intel to just "persuade" notebook vendors to only put AMD chips in garbage models, if at all.


I just got a G702ZC - basically a desktop Ryzen in a somewhat portable package.

The keyboard is crap. It has a dedicated ROG key ("Look! This is my current system load. You can even connect a mobile to see it on your phone!") where a Num Lock would be. It doesn't have an End key, which infuriates me. There's a power button in the top right of the keyboard which I WILL press by accident in the future. It's just a failure waiting to happen. The keyboard is utterly flawed.

Well... at least I have a Ryzen 7 with 8 cores, right? Yeaaaaah... right now the braindead EFI interface (you can switch between "ez" (sic) and advanced mode) doesn't seem to expose a method to enable AMD-V. So... yeah. I have 8 cores (16 threads) to... I don't know. Watch YouTube, I guess.

AMD and Asus really dropped the ball here. The keyboard is unacceptable. The AMD-V issue is... low. So fucking low.

Don't buy this laptop?


Non-Paywall link http://archive.is/X3vJN


So, this might seem like strange bedfellows, but recall that AMD and Intel are both competitors and each other's customers.

AMD licenses x86 from Intel, and Intel licenses x64 from AMD (because Itanium failed to win the 64 bit market).


First of all, Intel and AMD are not rivals. The only reason AMD is alive is because Intel lets it live. Intel would pretty much become a monopoly in the server line if AMD were to disappear.


> The only reason AMD is alive is because Intel lets it live.

Didn't Intel play dirty with AMD by forcing OEMs to use Intel chips instead of AMD's?


That was a long time ago when AMD might have been a threat.


And the stock goes wild in exactly 10m on market opening.


AMD has a decent bump, up over 6% so far today (markets have been open for 15 min). I was going to go in but read this news too late. AMD is too volatile and random to bet that in the short term they'll go up more than the current 6% bump.


Superman carrying Kryptonite with him?

P.S. AMD's K series was named after kryptonite, as Intel's Pentium series was considered to be Superman.


Great news. The only way we will see progress with OpenGL, and eventually Vulkan, as well as Wayland and its litany of Wayland-enabled compositors, is with open source drivers.

Intel and AMD seem to have seen the light of Linux. I wonder who's next: Microsoft, perhaps, ditching Windows for Linux and building a super GUI on top of it to make an OS X killer?


I honestly can't see this going anywhere; Intel have no skin in the discrete graphics game, so what do they have to gain from this? They're not really competing with nVIDIA, except perhaps in the emerging AI hardware market?


"Our collaboration with Intel expands the installed base for AMD Radeon GPUs and brings to market a differentiated solution for high-performance graphics".

Intel wants/needs better GPUs than its own for its laptop offerings, a fairly impressive admission of incompetence, and (as others noted) AMD lacks laptop CPUs and wants to sell more laptop graphics solutions, which are supposed to be its specialty. Good products can be expected, but I suspect Intel might strike a similar deal with Nvidia: whatever thin laptop the public buys, it's going to have an Intel CPU.


Let's not forget Raven Ridge (Ryzen Mobile).. It targets lower power and lower performance, but it's still 3x faster than Intel's current solutions when it comes to graphics.


Intel has already displaced Nvidia in most laptops because its integrated graphics got "good enough". Some OEMs continue to add low-end Nvidia graphics to Intel-based laptops, even though they have the same or less performance than Intel's graphics, simply because they don't want Intel to think it's their exclusive supplier and can raise prices at will.

What this deal will do is allow Intel to become the de-facto leader in the growing "laptop gaming" market, displacing both Nvidia and AMD from that market. AMD had a chance to dominate that market now with multi-core Ryzen CPUs and its dedicated GPUs, but it seems they've just decided to hand that market over to Intel on a silver platter.

Such a huge mistake from AMD. This is why I would rather AMD would be acquired by someone like Samsung or Broadcom to give AMD the money it needs than do stupid deals like this one with Intel because it's so strapped for cash.


I guess AMD is betting on EPYC; desktop is not progressing much, and they are practically non-existent on laptops, possibly blocked by Intel at the vendor level, so it's better for them to get a few % of profit from this segment instead of none.


Exactly my thoughts. I posted in a comment below (also responding to mtgx). Essentially, the laptop chips AMD can produce are still slightly better (well, cheaper); it's just that they can't get into the market for a few years, until vendors come on board again.


> Some OEMs continue to add low-end Nvidia graphics to Intel-based laptops, even though they have the same or less performance than Intel's graphics

https://www.youtube.com/watch?v=h0MeI0sQfy4

https://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmar...

Intel's highest end can't match Nvidia's lowest-end embedded parts, which can be had at a very affordable price.


> Some OEMs continue to add low-end Nvidia graphics to Intel-based laptops, even though they have the same or less performance than Intel's graphics

The only competitive part from Intel is Iris Pro 580 with the Crystal Well cache, and those parts are capital-E expensive.


Preventing obliteration in the ultra thin laptop segment.


Intel would've been wiped out in this segment by AMD; this was the only choice to survive.


Why doesn't NVIDIA build a CPU? They'd have to license x86 from Intel/AMD (since most Windows apps are built for x64), but AFAIK CPUs are far less complex (in sheer number of transistors) than GPUs.


So the way I understand it is: GPUs are really just a lot of really dumb CPUs, and they toss out all but the simplest interlocking. So Nvidia gets to skip a lot of legacy complexity and focus on building chips that are just lots of copies of the same thing. The complexity is pushed downstream. Software also has to be rewritten for GPUs, and since this is for people chasing speed, they're more likely to rewrite; these people couldn't do what they wanted without the new tech.

CPUs need to make legacy code up to 40 years old run faster, so there is a ton of complexity in the hardware. They are chasing people who want modest speed bumps without large changes.

Kinda like how Apple was able to pull off a performant phone/tablet after Microsoft failed a bunch: because they got people to rewrite (or create) apps for their platform instead of shoehorning Windows apps into a different form factor. Much bloat was cut, usability was redone. It's an analogy, so don't go silly over the differences.
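To make the "lots of dumb lanes, simplest interlocking" point concrete, here's a toy Python sketch (purely illustrative, not any real GPU API, and `simt_branch` is a made-up name): in a SIMT design every lane executes the same instruction stream in lockstep, so a branch is handled by running both sides and masking, rather than with per-core branch machinery.

```python
def simt_branch(cond, then_fn, else_fn, lanes):
    """Toy SIMT execution: all lanes run BOTH sides of a branch;
    a per-lane mask decides which result each lane keeps. This is
    roughly why divergent branches are expensive on GPUs, and why
    the hardware can skip a CPU's per-core control complexity."""
    mask = [cond(x) for x in lanes]          # one compare, all lanes at once
    then_res = [then_fn(x) for x in lanes]   # "then" side, executed by all lanes
    else_res = [else_fn(x) for x in lanes]   # "else" side, executed by all lanes
    return [t if m else e for m, t, e in zip(mask, then_res, else_res)]

lanes = list(range(8))
out = simt_branch(lambda x: x % 2 == 0, lambda x: x * 10, lambda x: -x, lanes)
print(out)  # [0, -1, 20, -3, 40, -5, 60, -7]
```

Note that the branch costs the sum of both sides regardless of how the lanes diverge, which is why code often needs restructuring, not just recompiling, to go fast on a GPU.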


Fewer transistors doesn't mean less complex.

If it were easy to build a CPU competitive with Intel, AMD would do it more consistently.


Because x86 is a patent minefield. AMD has a license, but other companies would have to negotiate with Intel over those patents before they could do x86.


And then AMD for x64.


Yes, that's why I mentioned that in the question.

As an aside: I know it's not the done thing to complain about down votes, but I do find an honest, polite question - not even a statement - being modded to oblivion somewhat unusual.


They do, ARM based ones.


It's important to note their ARM CPUs aren't anything to write home about: mostly slightly modified versions of ARM's own reference designs.


This will probably go down as AMD's biggest strategic mistake in the past decade (other than the Bulldozer architecture).

Did they at least revise their licensing deal where Intel basically adds a requirement that AMD can't be sold to other companies? If not, then AMD's leadership must be clueless. They should've revised that clause the first chance they got to make another deal (like this one!) with Intel.


I can't imagine they would do this without a new licensing deal. Essentially, they are handing over their APU technology. I still think their Ryzen chips are likely cheaper to produce than and equally performant to Intel's, so overall they are still a solid option.

This just helps them make inroads with Intel, who has already locked up much of the laptop market. I'm pretty convinced there was no way AMD could enter that market in any meaningful way for a few years regardless.


As far as some sources report, AMD will be shipping dies to Intel, so the IP of the GPUs is not revealed. I also think we will hear that AMD gets to use some Intel patents/technology as part of the deal (such as AVX-512).


As a "strategic mistake" counter-argument: It is possible that AMD would go under without this deal.


Leading AMD must be emotionally exhausting. Can you imagine leading a company that has been fighting for survival for so long?


This is basically every startup. Many of the quiet successes take decades.


This is most businesses in general where healthy and creatively destructive competition takes place.


Given the entangled licensing situation, what would happen to Intel if AMD went bankrupt? Would they not lose their x86_64 license, unless they bought what remained of AMD? And if they did, would that raise some major antitrust issues?


Depends on the license terms. E.g. if their license agreement says it's irrevocable, or "irrevocable unless <some action that blatantly violates the spirit of the contract>", then a future AMD buyer would not be able to retract the license just because they think they could get a better deal now. I'd imagine Intel's lawyers are good enough to get such a deal.


Is that reasonable though? I thought AMD was doing great with Ryzen and Epyc, and Vega is selling out as much as they can make.


AMD is BARELY free cash flow positive. They have been on the brink of running out of cash as long as memory serves me.

https://ycharts.com/companies/AMD/free_cash_flow


Depends how long



