AMD vs. Intel CPU Market Share: 7nm Makes Landfall as Price War Begins (tomshardware.com)
61 points by ItsTotallyOn on Nov 7, 2019 | 57 comments


Wait... what?

> Intel CFO George Davis implied during an interview with Barron's this week that the company is digging in its heels for the long haul, saying "What we’ve said though, the delay in 10 nanometer means that we’re going to be a little bit disadvantaged on unit cost for a period of time." Davis then noted the company expects to return to revenue growth and margin expansion in 2023 when it overcomes the late ramp to 10nm. As a result, it's rational to expect lower pricing on Intel's upcoming 10th-Gen Comet Lake processors, too.

If everything goes right for Intel, they don't expect revenue growth until 2023?

By the time Intel hits 10nm in stride, TSMC plans to be rolling out their 3nm... [1]

[1] https://fuse.wikichip.org/news/2567/tsmc-talks-7nm-5nm-yield...


Intel plans to start shipping 7nm in 2021, and 5nm would be coming online perhaps 2023. See https://www.anandtech.com/show/14312/intel-process-technolog... as a source.


This article is from May, and it doesn't mention 5nm. The 7nm plan was based on the Intel investor meeting press release, published on the same day by Intel PR.

10nm CPU Ships in June; 7nm Product in 2021

https://newsroom.intel.com/news/2019-intel-investor-meeting/...

And it's probably very optimistic about deadlines, since June was a while ago ;)

> Davis then noted the company expects to return to revenue growth and margin expansion in 2023 when it overcomes the late ramp to 10nm.

Intel isn't mentioning 7nm anywhere anymore. They are mentioning 10nm for 2021.

5nm was mentioned by Samsung for 2023, TSMC had 3nm planned for then.

-----------------

That means AMD still has > 3 years to capture significant market share. That's plenty of momentum. The worst case is staying at the current 5% they have now without Ryzen 3.

Q3 2016 was 9.1% share; Q3 2019 is 18%. And they have free play at least until 2023, since the 4th and 5th generations of Ryzen seem to be on schedule, according to AMD :) .

AMD is also not alone; it gains momentum thanks to TSMC and TSMC's clients (Nvidia, Apple, ...).

-

Previously I thought 2021, but I'm adjusting that to 2023, as Intel doesn't seem to have a real competitor for AMD yet, just the next iteration of its existing chips.


The thing is, Intel can't progress to their 7nm node if they can't even spin up the 10nm one.

Siltronics (Intel's wafer supplier) is said to be having a field day now. Intel hasn't opened any new 14nm fabs, and those were maxed out years ago. The only explanation for why Intel not only expanded its orders to Siltronics but even entered new negotiations with other suppliers is that it's spending a whole lot of wafers on something other than existing 14nm manufacturing.

Either their GPUs are actually being taped out in extreme secrecy, or the yield on 10nm is so low that they are "brute-forcing" it.


My bet is on the latter, considering early 10nm CPUs had the iGPU fused off and fairly abysmal, inconsistent overclocking performance.


10nm has been a trainwreck and is late; 7nm and 5nm have been removed from quite a few timelines they publish, presumably because they suspect the same issues as with 10nm (and if 10nm isn't ready until 2020, how are they going to ship 7nm in 2021?).


They use different lithography technology and different teams are working on them. So 7nm does not depend on the success of 10nm.


I would still bet on 7nm being the same trainwreck of a failure.


It might be. I don't think we can draw any conclusions based on how 10nm is going though.


Similar to the Siltronics (Intel's wafer supplier) comment above, we can certainly get some hints from ASML, the only supplier of EUV lithography systems. There is already a backlog of orders to fill, and Samsung and TSMC seem to hold the majority of those orders, according to their investor meeting notes.

So unless ASML has some EUV systems sitting around waiting for Intel to pick up, Intel's 7nm ramp won't be anywhere near TSMC's or even Samsung's.

So 7nm could just be another 10nm: it "shipped" in July, but it's still nowhere to be seen in November.


This illustrates one of the truisms in semiconductors: you need to hold an advantage for 18 months before you have enough momentum to actually move the needle. If I were AMD's marketing team, I would start hammering on ECC memory in desktops and laptops, anything with 16GB or more of main memory. Intel has held that as a Xeon differentiator for a long time, and it adds something like 20% margin to the part. That AMD can push PCIe 4.0 and ECC at 'desktop' prices is pretty neat.


Outside of our bubble, people don't care about ECC memory on their laptop or desktop. It also adds ~12% (iirc) more memory cells, meaning higher memory cost, which is going to be a hard sell given that it's a solution to a problem that most customers don't even know about, nor would most care much if it was explained to them. From a user perspective, memory is fine as it is, and if memory corruption does occur, it's probably blamed on buggy software. That would make any such push look like a desperate "we need a differentiator to tell people why they should buy our CPUs and upsell them in the process" move. Competing on price, speed, and (for laptops) power consumption would appear much more promising.


People don't care about a lot of things until they're told it matters. I mean, look at the "gaming" and "X with RGB" market and tell me people name economically reasonable choices when buying computer hardware. I'm sure marketing could sell quite a few ECC sticks.


Actually, you can have ECC without an extra chip, and 4th-gen DDR actually has quite low single-bit error rates because it has error-correcting PHYs.


It needs to be marketed as the reliability improvement it is. Lots of people simply don't know about it at all but would love better stability.


This is mass market retail we’re talking about. Unless it’s easily demonstrable, people are going to buy the prettier box on the shelf or the one that’s cheaper.

I’d bet money that when company X adds ECC and calls it “super stable memory”, company Y adds a button in the settings menu that calls fsck and calls that “hyper stable memory” and undercuts company X by 12%.


I agree with your view. I just want to add that, from recent Intel and AMD server purchases, the price difference in processors, though considerable, is overshadowed by the cost of memory sticks; for a fully loaded server, the cost savings from going with AMD look small next to the cost of the memory.

PCIe 4.0, more mem channels, and faster memory (3200) does make things very attractive.


IIRC, all AMD CPUs support ECC, but motherboard support is flaky.


Yes. Most (all?) Asrocks support it. Some ASUSs do. Most (all?) Gigabytes don't.


That is correct; it requires that the BIOS set up the appropriate machine check vectors and initialize RAM during POST. My current desktop with 32GB gets 2-3 single-bit errors a month. (It has a Xeon; I will be replacing it with a Ryzen 3000 series relatively soon, with 128GB of RAM.)


Sadly 128GB is easy without ECC, but hard with ECC. There are 16GB unbuffered ECC DIMMs around for $100 or so, but no 32GB ECC ones that I could find.

So 64GB ECC is easy, 128GB ECC is not... unless you upgrade to one of the "P" Epycs like the 7302P or 7402P. That gets you 16 cores/32 threads (vs. the Ryzen 3900X's 12 cores/24 threads) but also 4 times the memory bandwidth.


Threadripper is an option there also. The main issue I've run into is that I can't find anything faster than 2133 in ECC.


M391A4G43MB1-CTD


ECC UDIMM RAM is way too expensive compared to LRDIMMs, and you can barely get 32GB ECC UDIMM modules. Not going to happen...


I've lost track of all the different kinds of RAM going these days. Does anyone have a good link to a primer of what's current?


You underestimate my willingness to overspend on things :-).


Sounds like Epyc to me.

7302P (16c/32T), 7402P (24c/48t), 7502p (32c/64t), or 7702P (64c/128T).

You also get 4 times the memory bandwidth and motherboards guaranteed to work with ECC.

Or you could go halfway: Threadripper comes out tomorrow or so, and the TRX40 parts have twice the memory bandwidth of Ryzen (half of Epyc). There's also TRX80, which uses the same socket/memory bandwidth as Epyc.


Yeah, I was trying to figure out how to cool a Supermicro dual-socket board in my desktop case. It's a challenge; they really want the big datacenter-type rack fans blowing through the system, and that is a bit too noisy for me. On the plus side, it could have way more RAM. :-)


Indeed, for noise and cost reasons I'd just go with a single socket. Any decent case with 2x140mm fans in front and 1x140mm in back should easily handle a single socket quietly.


Dual 7702P with 4TB ECC RAM and 4x RTX 8000 would be a dream ;-) At the price of a Corvette...


And there it is; ASUS Prime TRX40 Pro, 128GB ECC memory, and a 3950x CPU.


That's because there is no demand. If AMD were to push ECC, demand for those ECC DIMMs would increase, and prices would come down as more are produced.


EOY and the beginning of the year are when most server purchases come in, too. I suspect we will see AMD spike in the next six months.

I know I'm also waiting to buy a laptop until the 7nm mobile chips are out. That'll be a massive battery life increase (a couple of hours), to the point that people will flock to AMD.


Chances are, whatever laptop gets made with that new chip will just have a thinner battery and the same battery life unfortunately :(


I think AMD's marketing is based more on partnerships (and custom solutions) than on big ads.

They should push more for partnerships with OEMs, though.


> ECC memory in the desktop and on laptops

Wow, and here I thought RGB lighting was the pinnacle of pointlessness.


Non-ECC RAM bit flips are pretty common at modern densities. They can cause everything from a stray pixel on screen, to crashes, to corrupted files and file systems.

At $dayjob we only have like 200 laptops but our help desk staff find 1-3 bad non-ECC DIMMs a year causing crashes or borked files.
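For the curious, the trick ECC hardware relies on can be sketched in a few lines: a Hamming(7,4) code that locates and flips back any single corrupted bit. (Real ECC DIMMs use a wider SECDED code over 64-bit words in silicon; this Python version is purely illustrative.)

```python
# Toy Hamming(7,4) code: 4 data bits + 3 parity bits, able to detect
# and correct any single-bit flip in the 7-bit codeword.

def encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):
    """c: 7-bit codeword, possibly with one flipped bit -> corrected data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity check over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the bad bit, 0 = clean
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = encode(data)
word[4] ^= 1                  # simulate a cosmic-ray bit flip
assert correct(word) == data  # the single-bit error is corrected
```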


> find 1-3 bad non-ECC DIMMs

The problem of faulty memory is separate and can be addressed by memory testing (which can even be run while computers are online).

My understanding is that ECC's main purpose is to guard against corruption of non-faulty memory, due to ionising radiation for instance. It's rare for this to be a problem in normal scenarios and use cases (not in space or at high altitude).
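A minimal sketch of the kind of online memory test meant above: write a known pattern, read it back, report mismatches. (Real testers like memtest86+ hammer physical addresses with many different patterns; this toy only touches whatever pages the allocator hands the process, so treat it as an illustration of the idea, not a usable tester.)

```python
# Toy pattern test: write a known bit pattern into a buffer,
# then read it back and report any words that don't match.
import array

def pattern_test(n_words, pattern=0xA5A5A5A5):
    buf = array.array("I", [pattern] * n_words)            # write phase
    return [i for i, w in enumerate(buf) if w != pattern]  # verify phase

# Healthy RAM should read back exactly what was written:
assert pattern_test(1 << 16) == []
```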


Well, you said it... 3 crashes a year vs. the cost of ECC RAM for 200 machines (say, a PC refresh every 3 years). So over the lifetime of those machines you will experience 9 crashes. Is it really cost-effective? Also, file versioning and automatic backups should be standard in any serious business.

So in reality no work would be lost to a crash, only a bit of time.


The problem is that backups don't work if the file is already corrupted when it's written; most filesystems aren't very robust against on-disk corruption (and even those that are, like ZFS, have quite a few fun bugs when memory corruption is present). Even versioning will not save you if you version already-corrupted data.


I have been programming for over 10 years; that has never happened to me, and I have never heard of anyone who experienced something like that. Your Words/Excels save a copy of the file upon opening (to handle crashes or corruption).

Is it worth entertaining this scenario? Unless you are running a space probe or a nuclear facility PC, that's a non-issue not worth any time.


Processors use ECC in their on-board caches for a reason.

Transient memory errors are not uncommon when you have many terabytes of collective RAM in use.

Google's study reports that 8% of DIMMs see a transient error per year. https://ai.google/research/pubs/pub35162


Oh, I've experienced such things; it's a matter of scale. If you have 1000 machines running, the probability of one eating important project data due to cosmic bit flips starts to matter.
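To make the scale argument concrete, here's a back-of-envelope using the ~8% per-DIMM annual figure from the Google study cited upthread (the 4-DIMMs-per-machine count is my own assumption):

```python
# Rough odds of seeing at least one transient memory error per year,
# assuming errors hit DIMMs independently at ~8%/year (Google study).
p = 0.08     # per-DIMM annual probability of a transient error
dimms = 4    # assumed DIMMs per machine (illustrative)

for machines in (1, 10, 1000):
    p_none = (1 - p) ** (dimms * machines)   # no DIMM sees an error all year
    print(f"{machines:4d} machine(s): P(>=1 error/year) ~ {1 - p_none:.4f}")
```

For a single machine that works out to roughly a 28% chance per year; at 1000 machines an error somewhere in the fleet is essentially certain.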


I have two questions for people at Intel and the peanut gallery:

- If I want a high performance CPU that's not "workstation" class: why should I buy an i7 over a 3700x or 3800x?

- How meaningful is the "intel inside" brand when IT/GIS departments buy machines in bulk for their enterprise users?


Intel Inside is not that important anymore unless you have a big Intel AMT architecture for handling deployments / etc.

What is more important for IT depts is pricing, shipping timelines, available configurations, enterprise deployment options, etc.

There's no reason these can't all be at-par with AMD-based machines, but because of Intel's deep relationships with the vendors, sometimes you simply can't get their AMD machines at a price point that makes sense. Consider the fact that desktop processing performance was good enough 8+ years ago for anything most office workers need to do now; what IT wants in many industries is cost-effective machines that are easy to manage. The component brands inside don't matter that much.


I think the reasons why you'd want to buy an Intel now are:

- Laptops (battery life is better on Intel CPUs), but AMD has this covered next quarter.

- Intel NUC (it's very practical).

- Single-core-optimized applications (e.g. some games), when you don't care about the total price at all. Intel's single-core performance was better, and many games don't take advantage of lots of cores. I think the next Ryzen will handle this.

- When AMD is sold out completely and you need a CPU now :p

Other than that, I think it's all AMD (e.g. ECC memory, price, performance, and buying the "underdog").

Except if someone just wants to pay the highest price for show; then you could still buy an Intel.

Intel Inside is an OEM partnership that I hope AMD will break :) .


When I was building a PC last December, I considered AMD until I saw that https://github.com/mozilla/rr requires an Intel processor.

> rr currently requires an Intel CPU with Nehalem (2010) or later microarchitecture.

I don't even use rr at the moment, but I hate the idea of building a monster machine only to find that I'm left out of some cutting-edge technology because I went off-brand.


They have been working on AMD support, see for example: https://github.com/mozilla/rr/issues/2034

The problem is that AMD CPUs have lots of bugs in their performance counter implementation. Intel CPUs don't have those bugs. With newer generations of AMD CPUs, AMD has fixed some of these bugs, but others remain.

I wonder why AMD hasn't stepped up to get involved in this project. There might be nothing that can be done with currently shipping silicon, but at least AMD could make sure that on their future silicon it all works. Fixing their performance counter bugs would likely benefit other projects as well.


>With newer generations of AMD CPUs, AMD has fixed some of these bugs

Or rather, broke something. There are suitable counters working correctly on late Bulldozer iterations, but not on Zen.


Considering Zen is a ground-up new architecture, it's hard to call that "breaking" it; it just wasn't correctly implemented in the new design.


Isn't this a sign that AMD's test suites are incomplete?

If they implement a feature correctly then break it in the new design, it suggests their test suites didn't exercise it properly, otherwise (you'd think) they would have caught the regression in the new design and fixed it before release.


I find it kind of funny that 5 (?) decades of Intel-AMD competition still haven't persuaded you not to call AMD "off-brand".

It's a bit like calling Pepsi "off-brand", in my opinion :-)


It's got nothing to do with how long they've been competing, it's got to do with dominance and quality. If AMD overtakes Intel in price/performance, capability, and market share in 10 years (and I think it's totally possible) then I would call Intel off-brand.

If I buy a CPU from company A for the purpose of running programs, but there are some useful programs it cannot run (like rr), not because of some anti-competitive proprietary nonsense by their rival B but rather because of some genuine capability that A lacks, then I call A off-brand.


AVX-512


It's nuts how Intel is using its monopoly. I don't have any other explanation for why more OEMs changed to AMD for desktop and Intel only recently dropped prices.

For laptops, we'll see with the next iteration, as battery life is the most important aspect there and the next, much-awaited version is coming soon.

Interesting times ^^


You mean new types of batteries?


Improved Ryzen for laptops, so battery life would be better ;)

The current iteration is "not good enough" in that aspect (= battery life).



