> Intel CFO George Davis implied during an interview with Barron's this week that the company is digging in its heels for the long haul, saying "What we’ve said though, the delay in 10 nanometer means that we’re going to be a little bit disadvantaged on unit cost for a period of time." Davis then noted the company expects to return to revenue growth and margin expansion in 2023 when it overcomes the late ramp to 10nm. As a result, it's rational to expect lower pricing on Intel's upcoming 10th-Gen Comet Lake processors, too.
If everything goes right for Intel, they don't expect revenue growth until 2023?
By the time Intel hits 10nm in stride, TSMC plans to be rolling out their 3nm... [1]
This article is from May, and it doesn't mention 5nm. The 7nm plan was based on the Intel investor press release, which was published the same day by Intel PR.
And it's probably very optimistic about deadlines, since June is a while ago ;)
> Davis then noted the company expects to return to revenue growth and margin expansion in 2023 when it overcomes the late ramp to 10nm.
Intel isn't mentioning 7nm anywhere anymore. They are mentioning 10nm for 2021.
5nm was mentioned by Samsung for 2023, TSMC had 3nm planned for then.
-----------------
That means AMD still has over 3 years to capture significant market share. That's plenty of momentum. Worst case, they hold the roughly 5% share they had without Ryzen 3.
Q3 2016 was 9.1% share; Q3 2019 is 18%. And they have free rein at least until 2023, as the 4th and 5th generations of Ryzen seem to be on schedule, according to AMD :) .
AMD is also not alone; they gain momentum thanks to TSMC and TSMC's other clients (Nvidia, Apple, ...).
-
Previously I thought 2021, but I'm adjusting that to 2023, as Intel doesn't seem to have a real competitor for AMD yet, just another iteration of the same chipset.
The thing is, Intel can't progress to their 7nm node if they can't even spin up their 10nm one.
Siltronic (Intel's wafer supplier) is said to be having a field day. Intel hasn't opened any new 14nm fabs, and those were maxed out years ago. The only explanation for why Intel not only expanded its orders with Siltronic but even entered negotiations with new suppliers is that they are burning through a whole lot of wafers on something other than existing 14nm manufacturing.
Either their GPUs are actually being taped out in extreme secrecy, or the yield on 10nm is so low that they are "brute-forcing" it.
10nm has been a trainwreck and is late; 7nm and 5nm have been removed from quite a few timelines they post, presumably because of the same issues as with 10nm (and if 10nm isn't ready until 2020, how are they going to ship 7nm in 2021?).
Similar to the Siltronic (Intel's wafer supplier) comment above, we can certainly get some hints from ASML, the only supplier of EUV lithography systems. There are already backlogs to fill, and Samsung and TSMC seem to hold the majority of those orders, according to their investor meeting notes.
So unless ASML has some EUV systems sitting around waiting for Intel to pick up, Intel's 7nm ramp won't be anywhere near TSMC's or even Samsung's.
So 7nm could just be another 10nm, which shipped in July but is still nowhere to be seen in November.
This illustrates one of the truisms in semiconductors: you need to hold an advantage for 18 months before you have enough momentum to actually move the needle. If I were on AMD's marketing team I would start hammering on ECC memory in desktops and laptops, anything with 16GB or more of main memory. Intel has held that as a Xeon differentiator for a long time, and it adds something like 20% margin to the part. That AMD can push PCIe 4.0 and ECC at 'desktop' prices is pretty neat.
Outside of our bubble, people don't care about ECC memory on their laptop or desktop. It also adds ~12% (iirc) more memory cells, meaning higher memory cost, which is going to be a hard sell given that it's a solution to a problem that most customers don't even know about, nor would most care much if it was explained to them. From a user perspective, memory is fine as it is, and if memory corruption does occur, it's probably blamed on buggy software. That would make any such push look like a desperate "we need a differentiator to tell people why they should buy our CPUs and upsell them in the process" move. Competing on price, speed, and (for laptops) power consumption would appear much more promising.
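For what it's worth, the ~12% figure lines up with standard SECDED ECC DIMMs, which store 8 check bits for every 64 data bits (a 72-bit-wide module instead of 64). A quick sketch of that arithmetic:

```python
# The capacity overhead of standard SECDED ECC: 8 extra check bits
# for every 64 data bits (a 72-bit wide DIMM instead of 64).
data_bits = 64
check_bits = 8
overhead = check_bits / data_bits
print(f"ECC cell overhead: {overhead:.1%}")  # 12.5%, matching the ~12% above
```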
People don't care about a lot of things until they're told it matters. I mean, look at the "gaming" and "X with RGB" market and tell me people make economically reasonable choices when buying computer hardware. I'm sure marketing could sell quite a few ECC sticks.
Actually, you can have ECC without an extra chip, and 4th-generation DDR actually has quite low single-bit error rates because it has error-correcting PHYs.
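As a toy illustration of the single-error correction being discussed, here is Hamming(7,4), a much smaller cousin of the (72,64) SECDED code real ECC DIMMs use. This is a sketch of the principle only, not of any shipping memory PHY:

```python
def hamming74_encode(data):
    """Encode 4 data bits into a 7-bit Hamming codeword
    (parity at positions 1, 2, 4; data at positions 3, 5, 6, 7)."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(code)
    # Each check recomputes parity over the positions it covers;
    # together they spell out the (1-based) position of the bad bit.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[4] ^= 1                      # simulate a single bit flip
assert hamming74_decode(code) == data
```

Any single flipped bit, including a flipped parity bit, is corrected; two flips in one word defeat it, which is why SECDED adds one more bit for double-error detection.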
This is mass market retail we’re talking about. Unless it’s easily demonstrable, people are going to buy the prettier box on the shelf or the one that’s cheaper.
I’d bet money that when company X adds ECC and calls it “super stable memory”, company Y adds a button in the settings menu that calls fsck and calls that “hyper stable memory” and undercuts company X by 12%.
I agree with your view. Just want to add that, from recent Intel and AMD server purchases, the price difference in processors, though considerable, is overshadowed by the cost of memory sticks: for a fully loaded server, the cost savings from going to AMD look like a small number next to the cost of the memory.
PCIe 4.0, more memory channels, and faster memory (3200) do make things very attractive.
That is correct; it requires that the BIOS set up the appropriate machine check vectors and initialize RAM during POST. My current desktop with 32GB gets 2-3 single-bit errors a month. (It has a Xeon; I'll be replacing it with a Ryzen 3000-series relatively soon, with 128GB of RAM.)
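On Linux, corrected-error counts like those can typically be read from the EDAC sysfs interface. A minimal sketch, assuming the edac driver is loaded and exposes the usual `/sys/devices/system/edac/mc` layout:

```python
from pathlib import Path

def ecc_error_counts(edac_root="/sys/devices/system/edac/mc"):
    """Sum corrected (ce) and uncorrected (ue) ECC error counts across
    all memory controllers exposed by the Linux EDAC driver."""
    totals = {"ce": 0, "ue": 0}
    for mc in Path(edac_root).glob("mc*"):
        for kind in totals:
            counter = mc / f"{kind}_count"
            if counter.is_file():
                totals[kind] += int(counter.read_text())
    return totals

# On a box without ECC (or without the edac driver) this just reports zeros.
print(ecc_error_counts())
```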
Sadly, 128GB is easy without ECC but hard with ECC. There are 16GB unbuffered ECC DIMMs around for $100 or so, but no 32GB ECC ones that I could find.
So 64GB ECC is easy, 128GB ECC is not ... unless you upgrade to one of the "P" Epycs like the 7302P or 7402P. That gets you 16 cores/32 threads (Ryzen 3900X = 12 cores/24 threads) but also 4 times the memory bandwidth.
7302P (16C/32T), 7402P (24C/48T), 7502P (32C/64T), or 7702P (64C/128T).
You also get 4 times the memory bandwidth and motherboards guaranteed to work with ECC.
Or you could go halfway: Threadripper comes out tomorrow or so and has twice the memory bandwidth of Ryzen (half of Epyc) in the TRX40 parts. There's also a TRX80 which uses the same socket and memory bandwidth as Epyc.
Yeah, I was trying to figure out how to cool a Supermicro dual-socket board in my desktop case. It's a challenge; they really want the big datacenter-type rack fans blowing through the system, and that is a bit too noisy for me. On the plus side, it could have way more RAM. :-)
Indeed, for noise and cost reasons I'd just go with a single socket. Any decent case with 2x140mm fans in front and 1x140mm in back should easily handle a single socket quietly.
That's because there is no demand. If AMD were to push ECC, demand for ECC DIMMs would increase and the price would go down as more are produced.
EOY and the beginning of the year are when most server purchases come in, too. I suspect we will see AMD spike in the next six months.
I know I’m also waiting to buy a laptop until the 7nm mobile chips are out. That’ll be a massive battery life increase (a couple of hours), to the point that people will flock to AMD.
Non-ECC RAM bit flips are pretty common at modern densities. They can cause everything from a stray pixel on screen, to crashes, to corrupted files and file systems.
At $dayjob we only have like 200 laptops but our help desk staff find 1-3 bad non-ECC DIMMs a year causing crashes or borked files.
The problem of faulty memory is separate and can be addressed by memory testing (which can even be run while computers are online).
My understanding is that ECC's main purpose is to guard against corruption of non-faulty memory, due to ionising radiation for instance. It's rare for this to be a problem in normal scenarios (not in space or high-altitude), and use cases.
Well, you said it... 3 crashes a year vs. the cost of ECC RAM for 200 machines (say a PC refresh every 3 years). So over the lifetime of those machines you will experience 9 crashes. Is it really cost-effective?
Also, file versioning and automatic backups of files should be standard in any serious business.
So in reality no work would be lost to a crash, only a bit of time.
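A back-of-envelope version of that comparison; only the crash counts come from the thread, the dollar figures are assumptions for illustration:

```python
# Rough cost comparison from the exchange above. Only the crash counts
# are from the thread; the dollar figures are assumptions.
machines = 200
years = 3                       # assumed PC refresh cycle
crashes_per_year = 3            # reported above
crashes = crashes_per_year * years          # 9 over the machines' lifetime

ecc_premium_per_machine = 40    # assumed: roughly a 12% premium on a 16GB config
cost_per_crash = 500            # assumed: help-desk time plus lost work

ecc_cost = machines * ecc_premium_per_machine
crash_cost = crashes * cost_per_crash
print(f"ECC up-front: ${ecc_cost}, expected crash cost: ${crash_cost}")
```

With these assumed numbers the ECC premium comes out higher, which is the commenter's point; different premiums or incident costs flip the conclusion.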
The problem is that backups don't work if the file is already corrupted when it's being written; most filesystems aren't very robust to on-disk corruption (and even those that are, like ZFS, have quite a few fun bugs when memory corruption is present). Even versioning will not save you if you version already-corrupted data.
I have been programming for over 10 years; that has never happened to me, and I have never heard of anyone who experienced something like that.
Word and Excel save a copy of files upon opening (to handle crashes or corruption).
Is it worth entertaining this scenario? Unless you are running a PC on a space probe or in a nuclear facility, that's a non-issue not worth any time.
Oh, I've experienced such things; it's a matter of scale. If you have 1000 machines running, the probability of one eating important project data due to cosmic-ray bit flips starts to matter.
Intel Inside is not that important anymore unless you have a big Intel AMT architecture for handling deployments, etc.
What is more important for IT depts is pricing, shipping timelines, available configurations, enterprise deployment options, etc.
There's no reason these can't all be at-par with AMD-based machines, but because of Intel's deep relationships with the vendors, sometimes you simply can't get their AMD machines at a price point that makes sense. Consider the fact that desktop processing performance was good enough 8+ years ago for anything most office workers need to do now; what IT wants in many industries is cost-effective machines that are easy to manage. The component brands inside don't matter that much.
- Laptops (battery life is better on Intel CPUs), but AMD has this covered next quarter.
- Intel NUC (it's very practical).
- Single-core-optimized applications (e.g. some games) when you don't care about the total price at all. Intel's single-core performance was better, and support for many cores is lacking in games. I think the next Ryzen will handle this.
- When AMD is sold out completely and you need a CPU now :p
Other than that, I think it's all AMD (e.g. ECC memory, price, performance, and buying the "underdog").
Except if someone just wants to pay the highest price for show; then you could still buy an Intel.
"Intel Inside" is an OEM partnership that I hope AMD will break :) .
When I was building a PC last December, I considered AMD until I saw that https://github.com/mozilla/rr requires an Intel processor.
> rr currently requires an Intel CPU with Nehalem (2010) or later microarchitecture.
I don't even use rr at the moment, but I hate the idea of building a monster machine only to find that I'm left out of some cutting-edge technology because I went off-brand.
The problem is that AMD CPUs have lots of bugs in their performance counter implementation. Intel CPUs don't have those bugs. With newer generations of AMD CPUs, AMD has fixed some of these bugs, but others remain.
I wonder why AMD hasn't stepped up to get involved in this project. There might be nothing that can be done with currently shipping silicon, but at least AMD could make sure that on their future silicon it all works. Fixing their performance counter bugs would likely benefit other projects as well.
Isn't this a sign that AMD's test suites are incomplete?
If they implement a feature correctly then break it in the new design, it suggests their test suites didn't exercise it properly, otherwise (you'd think) they would have caught the regression in the new design and fixed it before release.
It's got nothing to do with how long they've been competing, it's got to do with dominance and quality. If AMD overtakes Intel in price/performance, capability, and market share in 10 years (and I think it's totally possible) then I would call Intel off-brand.
If I buy a CPU from company A for the purpose of running programs, but there are some useful programs it cannot run (like rr), not because of some anti-competitive proprietary nonsense by their rival B but rather because of some genuine capability that A lacks, then I call A off-brand.
It's nuts how Intel is using its monopoly. I don't have any other explanation for why more OEMs have only recently changed to AMD for desktop, and why Intel only recently dropped prices.
For laptops, we'll see in the next iteration, as battery life is the most important aspect there and the next awaited version is coming soon.
[1] https://fuse.wikichip.org/news/2567/tsmc-talks-7nm-5nm-yield...