Intel: 10nm Product Era Has Begun, 7nm on Track (anandtech.com)
208 points by frutiger on Oct 29, 2019 | 177 comments



Daily reminder: Intel is having various technical problems and arguably made some bad business decisions, but semiconductor manufacturing processes are no longer directly comparable using the "x nanometer" number. A comparison should mention the actual technical differences.

"x nanometer" has became a technically meaningless trademark solely exists for indicating new generations of process for marketing since ~2009's introduction of FinFET. It has no actual relation to gate length, metal pitch or gate pitch. When some people say "5 nm" is the early-stage of nanotechnology, remember that the gate length actually stays at 25 nm since 2009.

GlobalFoundries' 7 nm process is similar to Intel's 10 nm process. TSMC and Samsung's 10 nm processes are only slightly denser than Intel's 14 nm in transistor density. They are actually much closer to Intel's 14 nm process than they are to Intel's 10 nm process. I'm not saying that Intel is 100% honest either: the 14 nm in the original ITRS roadmap was described by Intel as "10 nm". Also, DRAM chips are yet another can of worms; the process "number" for DRAM chips is not directly comparable with a CPU's "number" either.

As conventional Moore's Law reaches its end, the semiconductor firms have spent a lot of effort making things as confusing as possible. I'm not a semiconductor engineer, just an ordinary programmer, and I don't fully understand the funny business going on since 2009, so any correction is welcome.

In other words, Intel is still at the state of the art of semiconductor manufacturing, in line with GlobalFoundries and TSMC, not significantly better or worse, contrary to what the "x nanometer" number would make you believe, but they do have a lot of production issues.


Like you said, I'm starting to ignore the "nm" units and just think of these new processes in terms of generation id numbers.

The thing is, I don't really care that they're fudging the numbers for marketing, because at the end of the day the end result for me as a consumer is awesome.

High-end products like my iPhone X, AirPods, NVIDIA RTX 2080, or the upcoming AMD Threadripper chips are simply mind-blowing to me. The ridiculously huge transistor density has put crazy amounts of computing power in my hands and I absolutely love it.

I'm constantly excited for what the future will bring, and from what I can tell it'll just keep on getting ever more spectacular.

If some foundry dumps $20B of investment and man-centuries of cutting edge research into technology this awesome, they can call it whatever they want.


On the other hand, the top AMD Ryzen (3950X, $749) is considerably faster than the top Intel CPU (10980XE, $1000).

https://pcper.com/2019/10/ryzen-9-3950x-benchmark-i9-10980xe...

Technically meaningless, including thinking that Intel is superior to AMD. They are losing the desktop wars hard to AMD with Ryzen, and starting to lose ground on the server front.


> including thinking that Intel is superior to AMD

I never stated that.

Edit: I updated the original comment to clarify that.

> They are losing the desktop wars hard to AMD with Ryzen, and starting to lose ground on the server front.

I haven't purchased any new Intel CPUs (or AMD CPUs, for that matter) since Ivy Bridge. If I had to build a new x86 PC today, I'd definitely buy AMD.


I'm curious, do you still use the old Ivy Bridge or have you moved on to something else, non-x86?


Sorry if it is disappointing: I'm still using the original Ivy Bridge machine ;-)

I found it's a fairly capable machine for day-to-day use; even building a relatively large project is not a problem. There is a performance hit after Meltdown/Spectre, especially when you disable hyperthreading.

But yes, I'm actively looking to try a non-x86 platform as my desktop, and I'm watching this PowerPC laptop [0]. Another option is the MNT Reform ARM laptop that offers an interesting, hackable design [1]. For desktops, HoneyComb LX2K is an ARM motherboard that offers workstation-level specs [2]. But the first two projects are still in development. HoneyComb looks good, but the 550 USD price tag requires me to consider carefully (the same money may buy me more interesting hardware in the near future).

For POWER, the Raptor Talos II workstation [3] is the first choice that comes to mind, but it's POWER, and the 3000+ USD price tag is not for the faint-hearted. For RISC-V, it costs around 2000+ USD (1000 USD for the CPU board [4], another 1000 USD for the PCI-E board [5]) to build a desktop from the current development system. I'm still waiting for an affordable RISC-V SoC that can be used as a 400-500 USD desktop.

I can pay 2000 USD if I decide it's the One True machine that can be used as my daily driver, but it's obviously not the case for now. So... waiting, waiting, waiting, and decision, decision, decision...

[0] https://www.powerpc-notebook.org/en/

[1] https://news.ycombinator.com/item?id=21231031

[2] https://www.solid-run.com/nxp-lx2160a-family/honeycomb-works...

[3] https://www.raptorcs.com/

[4] https://www.sifive.com/boards/hifive-unleashed

[5] https://www.crowdsupply.com/microsemi/hifive-unleashed-expan...


Why do you think your [2] looks good when their errata list [1] shows that all you get for $550 is a board lacking what rev. 1.4 will bring, by then, I assume, at the regular $750?

[1] https://developer.solid-run.com/knowledge-base/lx2160a-cex7-...

OTOH 2x SFP+ @ 10GbE is nothing to sneeze at. Regarding the 'workstation-level' specs, which GPU would you use in there, assuming you'd want more than a simple frame buffer? I mean one which is fully supported by open-source drivers on that architecture, with its special platform quirks?

Edit: writing this from some specced out Core i7-640LM docked into assorted cybertrash. Asking because experience taught me I can throw anything at old x86/x64, while that isn't the case with anything else, even if the physical slots/interfaces ARE there. At least not for now. Catch-22.


Can't speak for GP, but can say that for me, I'd been on a 4790K since launch and only recently jumped to a Zen 2 X570 setup when my old system was acting up. Waiting for a 3950X to displace the 3600 I'm using temporarily.

I've looked at generational numbers, and only in the past year or two has it gotten kind of bad. I'd been itching to upgrade since last November, but held out for a 16c Ryzen because I wanted more Linux stability, then pulled the trigger early since, as I said, my old system was acting up.

In the end, it's quite a jump just to an R5 3600 from the i7-4790K ... I don't know if/when I'll feel the need to upgrade again after next month.


Part of this is that AMD is using TSMC's '7nm' process, which is comparable to Intel's delayed 10nm. Development of that process was funded by Apple (as they use it for the A12/A13). So the pairing of AMD and Apple can outspend and out-develop Intel.


My brother has a bankrupt aftermarket motorbike company; that company, combined with Apple, can outspend Intel.


Is this really true? It seems very unlike Apple to let another company profit from their funding. Surely TSMC already had the tech and they are just licensing it to both AMD and Apple?


It's developed by TSMC, but you can argue that they developed it primarily to offer it to Apple. In that sense you can say Apple paid for it, just by singlehandedly making the investment worth it.


Exactly. Apple said "if you do X we will pay Y". In that sense, Apple put the money up to cover development costs.


Apple has TSMC build their chips. AMD has TSMC build their chips.

Apple doesn't quite have the clout to tell TSMC not to build chips for anyone else.


It also doesn't benefit them in any way to do so. AMD doesn't make any products that compete directly with anything they make, and is their primary provider of dedicated Macintosh graphics hardware.


To top it off, AMD competing well with Intel reduces the price of the parts Apple puts in Intel Macs.


good point, it is probably a win win to have more money in the node.


AMD no longer has a fab so can't compete with Intel on process (though they can in theory choose someone else's process other than GlobalFoundries').

Clearly this restriction isn’t hurting AMD right now! But process is the topic under discussion.


AMD is currently using TSMC for their new line of CPUs. I think this gives AMD a huge advantage over Intel, as they can always just go to whoever has the best foundry.


You’re taking the current situation and crafting a narrative around it. If Intel were ahead, you would be extolling the tight integration of fab and design that only Intel offers. Apple is one current company that’s bucking your narrative by bringing more and more in house and is doing well.

The truth is that both models work and Intel’s problems are created by their management, not by their operating model.


I can't see why you are getting down-voted - I see people do this kind of post-factum 'reasoning backwards' all the time, and it's all make-believe.

If these theories had real predictive power, they would allow you to make a killing on the stock market.


Thanks, I hadn't realized that!


In theory, so could Intel.


We just saw on Hacker News today that AMD machines can't even boot and are having trouble getting their microcode updated. How is that a "huge advantage?"


Do you have a link to that?

Anecdotally, I'm using two Ryzen 3000 systems right now (a 3900X and a 3200U) and both are working just fine.



Thank you. That's less of a generic "can't even boot" issue and more of an incompatibility with installed software (wireguard) and the kernel workaround in recent linux versions for the microcode bug.


Can you recommend a good AMD laptop? What is a reasonable spec AMD CPU for that market for a developer laptop, e.g. comparable to i7 intel? Looking at the ThinkPad X1 Carbon and the Dell XPS 13 at the moment but can't seem to find either with AMD.


Unfortunately AMD's laptop chips are a generation behind the desktop. I have the AMD Thinkpad E485, it's significantly better at gaming (which I don't do on my laptop anyway) and a bit cheaper, but Intel laptops offer a bit better performance and significantly better battery life right now. Wait a couple months for Zen 2 to reach laptops.

And i7 is a meaningless brand in laptops, it covers a wide variety of power levels and core counts.


The current Thinkpad E495/E595 have Ryzen 3X00U CPUs.


Reviews indicate that the AMD variant of Microsoft's Surface Laptop 3 is actually a decent machine, as AMD and Microsoft have worked together to provide a good experience in terms of battery life, etc.

Hopefully the outcome of that work will trickle down to other laptop manufacturers. (BIOS, drivers, etc.)

Note that AMD's mobile "Renoir" chips are expected early next year, which will be based on the up-to-date Zen 2 microarchitecture used in the desktop Ryzen 3000/3000X series. (The 3000G series uses the older Zen+ from Ryzen 2000/2000X, just like the 3000-series laptop chips.)


I read that AMD is likely aiming for a CES 2020 release for "Renoir" [0]. How long from release does it usually take to start seeing released chips in new laptops?

Also, I've read conflicting information on whether "Renoir" will use Vega or Navi. Various articles and comments from earlier this year seem to think Vega, while I'm seeing more mentions of "Renoir" and Navi together now that we're closer to its release. Is there any solid information on which of the two it will use?

[0] https://wccftech.com/exclusive-next-generation-amd-7nm-mobil...


> How long from release does it usually take to start seeing released chips in new laptops?

I seem to remember reading about a January launch (i.e. CES as you say) with shipping laptops expected towards the end of Q1 2020.

> Vega vs Navi: Is there any solid information on which of the two it will use?

I don't think there's any solid information; watchers seemed to interpret recently added device IDs in Windows and Linux drivers as belonging to Vega-based APUs with new video engines backported from Navi. Whether that's Renoir or another APU line remains to be seen. There's also rumours that the next batch of APUs might just be a Zen+ die shrink to 7nm rather than a full upgrade to Zen 2.

Given that mobile CPUs and SFF business desktop systems with integrated graphics are where the volume markets are these days, it seems a little odd that AMD lags their APUs behind the standalone desktop and server CPUs so much. The next desktop APUs are only expected mid 2020. I can only assume they are still battling idle power draw issues (especially if Zen 2's PCIe 4.0 is hard to swap out for the more power efficient PCIe 3.0) or that the integration of CPU and GPU via Infinity Fabric is difficult to pull off in practice.


Thanks for the clarifications. You seem to be much more familiar with hardware than I am. I've been thinking about getting a new laptop for a while since I built my desktop back in 2012 and my current laptop is a bulky 17" Toshiba that's even older. The hardware landscape seems to have changed so much since I was last in the market for a computer or parts. If you were in the market for a new mobile machine, would you be going Intel or AMD?

I'm currently eyeing Dell's Precision workstations due to their extensibility, but the XPS 13 Developer Edition also looks quite nice and more portable. And they both offer Ubuntu, which is nice. I think at this point almost anything I get will be better than what I already have.


> If you were in the market for a new mobile machine, would you be going Intel or AMD?

I'd be basing my decision primarily around which models fulfil my needs - be that in terms of portability, battery life, display quality, performance (CPU and GPU), ergonomics, etc. Right now, there is much less choice of AMD-based laptops, and they currently tend to be on the low end of the scale in terms of everything but performance. After years of integrating and tuning systems with only Intel CPUs, the laptop makers have yet to get the hang of making high-quality AMD-based laptops. I suspect this will change over time (Microsoft's aforementioned Surface Laptop 3 is a good sign in this regard).

If you're keen to support the underdog and can wait a few months, you can certainly wait for Renoir to be launched. I suspect AMD are itching to get a piece of that giant laptop revenue pie, so even if it ends up being Zen+ with Vega, AMD might have spent that time instead fine tuning idle power draw and thermals. Performance of their APUs is already decent, so such fine-tuning might be enough to get some high quality AMD-based thin & light laptops to market.

If you need a powerful (discrete) GPU and would prefer to go with AMD over NVIDIA for that, the upcoming Navi-based Radeon RX 5500M is one to watch, be it paired with an Intel or AMD CPU.

Disclosure: I skew towards AMD for both GPUs (in preference to nVidia) and CPUs (in preference to Intel) when it makes sense. I've got a Ryzen-based desktop system, but there are more Intel-CPU-based systems in my office, to a large extent because I do Mac-based development professionally - I think I bought my last AMD-based laptop in 2005.


Those are all great points, regarding portability, battery life, etc. It's easy to get caught up in the technical specs on paper and ignore the tangible differences. The Radeon RX 5500M is definitely on my watch list in addition to the next AMD mobile CPU lineup. I'm thinking, at the very least, it will be best to wait till Q1 2020 just to see what kind of improvements AMD have achieved, and if there will (hopefully) be more AMD options in the laptop market. I agree that the Surface Laptop 3 is a good sign for the future.


It's unfortunately not AMD, but I recently got the 2019 ThinkPad X1 Carbon and love it. It was spendy (~$1300 for 512GB SSD, 16 GB RAM, 1440p screen and 8th gen i7 w/ vPro), but the build quality is excellent, and battery life on Mint 19.2 x64 is insane (~13 hours web browsing @ 50% brightness).

Only negative is the Ethernet jack requires a $30 proprietary adapter from Lenovo. Apparently to keep the chassis thin they couldn't fit the full-size RJ-45 port.


There are now Surface laptops with AMD inside.


If you really want a good AMD laptop, I would wait until Q1/2020 when the Zen2 4000 mobile chips come out.


Closest seems to be the X395.


T495 is closer. But a X1 Carbon with a Zen 2 would be great.


9980XE can run closer to 35K physics score (Physics is the only part of Firestrike that actually tests primarily the CPU).

3950X is going to be good at productivity but Intel is still faster per-core and the 10980XE packs in 2 more cores. It's also more expensive, of course - there's a place for the 3950X here, just don't expect to see the 3950X handily best a 10980XE despite being short by 2 cores.

The 3950X is just a 3900X with more cores, and we already know Zen2 pretty much just matches Intel in the best case and loses in many other cases. There is not enough architectural lead there to account for a (32,082 / 25,838) x (18/16) ≈ 1.40, i.e. ~40%, per-core performance lead in favor of AMD. Guaranteed.
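
A minimal sketch of that back-of-envelope arithmetic, using the leaked Fire Strike Physics scores quoted in this thread (purely illustrative):

    # Implied per-core lead from the leaked scores (assumed: 3950X ~32,082 with
    # 16 cores, 10980XE ~25,838 with 18 cores).
    amd_score, amd_cores = 32_082, 16
    intel_score, intel_cores = 25_838, 18
    per_core_ratio = (amd_score / amd_cores) / (intel_score / intel_cores)
    print(f"implied per-core lead: {(per_core_ratio - 1) * 100:.1f}%")  # ~39.7%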

It's an erroneous test. It happens a lot with pre-release leaks, Zen has had a lot of timer bugs at launch. Or it's an LN2 run on the 3950X and an engineering sample 10980XE, or something like that.

In a more realistic set of tests, with a 50% core deficit and mitigations enabled, Intel's geomean score is 6% lower than AMD's. They're not 40% behind AMD in per-core performance.

https://www.phoronix.com/scan.php?page=article&item=3900x-99...


At the same time you're comparing a HEDT CPU with a regular one so wait for Thread Ripper and check again.

They might've slashed prices and they are kind of similar, but they target 2 different categories.


It wasn't my comparison. I'm not saying buy one vs the other.

I'm simply responding to the grandparent comment's claim that a 16C 3950X will beat an 18C 10980XE by 25%, which, no, it won't.

The performance of these products is already known +/- a few percent, they are both just evolutions of existing products.


Or, as we've seen in many benchmarks, AMD's SMT implementation gets much higher gains than Intel's.


The math just isn't close to a 40% per-core lead in literally any single test let alone that being representative across the board.

Real-world, AMD is still losing in per-core performance on average. Close, sometimes tying, sometimes leading, but not leading by 40%.

Again: we already know what Zen2 performance looks like. Cascade Lake-X is basically Skylake-X+, a new stepping with some tweaks and (much) higher clocks. These are not new products when it comes to expected performance.

It's straightforward to see that 40% per-core lead is purely wishful thinking from the AMD set here. Don't get your hopes up, you'll just crash the hype train yet again.

I don't know why the hype train is such a thing for the AMD crowd, this spring it was supposedly going to be 5.1 GHz and 6C/12T for $99 and that crashed into the brick wall of reality hard too.


>this spring it was supposedly going to be 5.1 GHz and 6C/12T for $99

Did a reasonable person actually claim this, or are you citing the most extreme of the most extreme AMD fanboys? That group isn't anywhere near representative of normal AMD customers.


Yes, a very popular and (at the time) well-reputed tech analyst named AdoredTV claimed that was going to be the lineup.

https://www.youtube.com/watch?v=PCdsTBsH-rI

Obviously his reputation is in the toilet ever since (also due to him frankly melting down about how this was really AMD's fault, and that they must have changed the lineup since he published his leak, and if his leak wasn't right they should have told him). But at the time he was extremely revered among the AMD fanbase for breaking a fair few scoops, like the IO Die design for Zen2 server and desktop chips, and he claimed this was a very well-sourced leak that he was confident in.

People also leaned on statements from Kyle at HardOCP, another long-time analyst as well as statements from Der8auer (an overclocker who works at CaseKing) that 5 GHz was quote "very realistic".

There has been a concerted effort to retcon that and pretend that he was just some crazy who nobody ever believed, but this shit completely took over hardware discussion for the month before CES, and had a strong contingent of believers all the way up to the actual launch.

Contemporary discussion:

https://www.reddit.com/r/Amd/comments/a34nnm/ryzen_3000_rade...

https://www.reddit.com/r/Amd/comments/a3kteb/amd_ryzen_3000_...

https://www.reddit.com/r/Amd/comments/ahxxpg/der_8auer_think...


Well, yeah, people on the AMD subreddit are going to eat this stuff up, obviously. That subreddit doesn't represent the average AMD customer.


Intel badly needed good competition, because they grew arrogant, lazy and stopped evolving (ECC, PCIe 4.0, more PCIe lanes anybody?). What they got is even better, and they are OK with admitting the current superiority of AMD on desktops in most use cases.

Is the situation still bad with Intel regarding side-channel attacks? The fixes for those used to drop performance by up to 30%, which renders them pathetic compared to AMD, which doesn't suffer from those issues. This is real performance, not some synthetic benchmarks.


Cascade Lake-X has a new set of hardware mitigations for side-channel attacks (including some which are not hardware-mitigated on AMD architectures, specifically Spectre v2 - AMD claims they are "difficult to exploit" but still enables software mitigations).

Regardless, Firestrike Physics is not a benchmark that is substantially affected by mitigations. End-user tasks in general are not affected very strongly, generally ~1-3%, up to 5% in some cases.

Phoronix tested across a spectrum of workstation tasks (not gaming) and found a geomean of about 12% performance impact for Intel and 4.5% for AMD. Linked earlier, here again:

https://www.phoronix.com/scan.php?page=article&item=3900x-99...

Generally, server tasks are hit hardest because they involve lots of context switching. Gaming is typically not affected much at all. Workstation tasks fall somewhere in the middle.


>I don't know why the hype train is such a thing for the AMD crowd

Because AMD doesn't deliver consistently good results. They just drop a bombshell every 10 years and call it a day.


I agree, but it could be that it's just a subjective feeling.

In any case, if I remember correctly AMD was the first to use a 64-bit arch in consumer CPUs (which I think is why the arch is called "AMD64"), and the same goes for offering multicore CPUs ( https://www.pcworld.com/article/117654/article.html ), but Intel was then always able to quickly catch up (and, in the end, present better products).

In the case of Zen, I read some time ago that the lead architect left AMD and ended up working for Intel ( from https://en.wikipedia.org/wiki/Jim_Keller_(engineer) ):

In August 2012, Jim Keller returned to AMD, where his primary task was to design a new generation microarchitecture called Zen. After years of being unable to compete with Intel in the high-end CPU market, the new generation of Zen processors is hoped to restore AMD's position in the high-end x86-64 processor market. On September 18, 2015, Keller departed from AMD to pursue other opportunities, ending his three-year employment at AMD. In January 2016, Keller joined Tesla, Inc. as Vice President of Autopilot Hardware Engineering. In April 2018, Keller joined Intel

That doesn't give me a good feeling about the future of AMD chips, but on the other hand AMD's CEO (Lisa Su) gives me the impression of being a "no bull*hit"-person, so I do still have hope that they won't mess up things in future revisions of the architecture :)


Intel's hyperthreading nets you an extra 20%, depending on workload. AMD's Zen1 SMT, in comparison, was more like 40%. Add in something like the branch prediction working better on this problem pattern, or the cache being too small / wrong shape on the Intel, and another 20% is easy.

Honestly, it looks like you're arguing with data at this point. It really does do 40% better per core.
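
Spelling out the arithmetic this line of reasoning seems to rest on (all factors are the hypothetical numbers above, not measurements):

    # Multiplying the hypothesized factors from the comment above.
    ht_intel, smt_amd = 1.20, 1.40   # claimed threading gains: Intel HT vs Zen SMT
    other_uarch = 1.20               # claimed branch-prediction / cache effect
    print((smt_amd / ht_intel) * other_uarch)  # ~1.40, i.e. the disputed ~40% per-core gap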


> Intel's hyperthreading nets you an extra 20%, depending on workload. AMD's Zen1 SMT, in comparison, was more like 40%. Add in something like the branch prediction working better on this problem pattern, or the cache being too small / wrong shape on the Intel, and another 20% is easy.

This is already measured in the existing benchmarks. You don't take the benchmarks and then add 60% to them arbitrarily.

Broadly speaking, Zen2 is slightly slower per-core than Coffee Lake (eg 3700X/3800X vs 9900K). Including whatever architectural features you care to name - that's built into the result. Skylake-X is slightly slower per-core than Coffee Lake due to the mesh architecture, let's say 5%, so perhaps roughly on par with Zen2. Nowhere near 40% different.

And Fire Strike, specifically, is not one of the things that Intel suffers on (not to mention this architecture has hardware mitigations for most of the vulnerabilities). FireStrike is supposed to resemble physics processing for a game, there is not a lot of context switching (which is what Intel suffers on).

--

> Honestly, it looks like you're arguing with data at this point. It really does do 40% better per core.

Well, I'm not the one arguing that we need to be taking the benchmarks and adding 60% so it matches an anomalous data point ;)

Yes, this is absolutely an outlier or anomaly. No, it's not "arguing with data" to point out when a data point bucks a larger trend in the data as a whole.

We already know the relative per-core performance of Zen2 and Skylake-X, the difference is not 40%.

I'm not saying not to buy it. I'm just saying, this thread is going to look real embarrassing in a month or two when the 3950X doesn't have a magic 40% per-core performance gain over the existing Zen2 chips. 10980XE will be more expensive, and more power hungry... but it will be faster. Marginally.


Losing the wars? I think sales numbers would disagree with that


Have a look at the sales numbers yourself:

https://www.extremetech.com/computing/297785-amd-sales-are-b...

Consumer purchases are very often influenced by brand and general 'knowledge'. These shifts take time to fully take effect.


>They are losing the desktop wars hard to AMD with Ryzen and starting to lose ground on the server front.

So they're losing the domain that matters less (since laptops are where it's at, and this is just a flash in the pan anyway; AMD won't be able to sustain it, as they historically never have), while the server market will also end up with more ARM, not more AMD.


While I am all for AMD, the new Skylake-X part probably offers better performance (at a much higher price point)?

https://www.anandtech.com/show/14980/the-intel-core-i9-9990x...


Probably, but they are predicted to sell less than 100 in a year, so it's really more of a fluke part than anything you can reliably get.


At much higher power and thermals as well.


AMD is only superior when it comes to performance for parallelized tasks, single thread performance is still owned by Intel.

Also, your link is biased because it is well documented that AMD performance significantly improves with faster RAM and they used much faster RAM in the test only for AMD.

That RAM is very expensive so you need to factor that into the total cost if you wanted to be fair when doing the price comparison as well.


> AMD is only superior when it comes to performance for parallelized tasks, single thread performance is still owned by Intel.

Strongly disagree. Zen 2 gave AMD better IPC and comparable clock speeds. Zen 2 is also more power efficient than what Intel offers per unit of compute.

Intel offers a few processors with higher frequencies, but only by having absurdly high power consumption and uncompetitive price.

Source: https://www.anandtech.com/show/14605/the-and-ryzen-3700x-390...

Look at the last chart at the bottom and the summary:

> Normalising the scores for frequency, we see that AMD has achieved something that the company hasn’t been able to claim in over 15 years: It has beat Intel in terms of overall IPC.

The web tests are also very lightly threaded benchmarks:

https://www.anandtech.com/show/14605/the-and-ryzen-3700x-390...

Definitely no “ownage” there from Intel. AMD processors did great. The Intel 9900K generally did worse, and the 9700K only did slightly better for some interesting reasons discussed at the end of the page.

Making claims that AMD’s processors only perform better in highly threaded tasks is either disingenuous or based on significantly outdated information.

Zen 2 brings AMD up to parity with Intel in IPC, and the clock speed difference isn’t substantial in most cases.

AMD also offers substantially more (equally strong) cores and stronger threads at a better price. They also have products that offer substantially more PCIe 4.0 lanes than anything Intel has, among other nice features, especially when comparing server processors.

Intel will dump billions and billions of dollars into becoming competitive again, but right now... Intel’s main advantage is in their low idle power consumption in laptops. No one should be buying Intel processors for anything else.


> Also, your link is biased because it is well documented that AMD performance significantly improves with faster RAM

This was more true in prior generations of AMD. Newer ones aren't as RAM dependent (largely due to increased L3 cache and a more efficient core interconnect topology). Of course any CPU will still perform some amount better with better RAM.

> and they used much faster RAM in the test only for AMD.

This is a bit of Intel's own fault. AMD officially supports 3200 MHz while Intel officially only supports 2666 MHz; anything higher is considered overclocking the memory controllers, even though both are perfectly capable of going much higher than that. Also note the Intel has quad channel while the AMD has dual channel. Quad channel has higher bandwidth, but the same RAM is going to have to run at lower speeds than if it were dual channel.
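
A rough illustration of that channel/speed trade-off, theoretical peak bandwidth only (the example speeds are assumptions, ignoring timings and the exact kits used in the review):

    # Theoretical peak DDR4 bandwidth: transfers/s * 8 bytes per transfer * channels.
    def peak_bw_gbs(mt_per_s, channels):
        return mt_per_s * 8 * channels / 1000  # GB/s

    print(peak_bw_gbs(3200, 2))  # dual-channel DDR4-3200 -> 51.2 GB/s
    print(peak_bw_gbs(2666, 4))  # quad-channel DDR4-2666 -> ~85.3 GB/s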

> That RAM is very expensive so you need to factor that into the total cost if you wanted to be fair when doing the price comparison as well.

The link to the bench now shows "not found" so I don't know what the exact RAM used was but you can get 16 GB of well timed 3600 MHz RAM for $80 so whatever difference there is isn't coming close to affecting that $250 price gap.

Intel certainly does still have an edge in single thread performance, but the interesting thing is that it's an ever-shrinking lead at an increasing cost.


One more thing about AMD vs Intel: I was recently shopping for professional workstations and there are almost no AMD offerings, so whatever they are doing in the consumer market does not seem to impact the pro market much at all.


Their consumer stuff allows ECC, hardware RAID, and now PCIe 4.0.


Irrelevant if you cannot buy the hardware in an enterprise setting.


Is custom built not an option?


Nope. In enterprise you need to purchase through predefined vendors.



You can do a specification-driven auction like government authorities do.


You post a bunch of lies / outdated statements that are not true anymore (which are super easy to check, and the whole community has been raving about them for the last few months, so your motivations look a bit shady, to be polite), and when all of them are debunked you start claiming that your preferred corporate shops don't build PCs with AMD. What kind of argument for CPU performance is that?

Go buy whatever you can, even if it's subpar these days, nobody else cares, but currently the most performance is with AMD. Performance per dollar at the high end is squarely with AMD.


I agree on performance, but I believe GP is correct on availability. Search for a threadripper-based workstation from HP, Dell, Lenovo — you won’t find any, they only offer models with Xeon and i9. Search for a fast AMD laptop — you can only get one if you order a custom-spec model using their configuration tools, they take quite some time to deliver.


> AMD is only superior when it comes to performance for parallelized tasks, single thread performance is still owned by Intel.

It depends on the pricing tier.

My 3700X costs about 20% less than a 9700K but is a bit faster in single core, and about 30% faster in multi core.

https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+7+3700X&i...

https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-9700K...


FYI, the 9700k can be found for ~300USD at Microcenter which makes the comparison far more favorable in terms of price/performance. It was literally the difference for me in terms of deciding between the 3700x & the 9700K.


Passmark is a really bad benchmark.

Real-world test with mitigations:

https://www.phoronix.com/scan.php?page=article&item=3900x-99...

Or https://www.techpowerup.com/review/amd-ryzen-7-3700x/22.html

9700K is still solidly faster in per-thread performance. Having SMT gives the 3700X a boost in total performance though. The 9900K beats all the 8C Ryzens (including 3800X) and beats the 3900X in per-thread performance but loses in total performance (of course).


(I didn't downvote you)

In some games the 9700K seems to have better performance, but in pure CPU benchmarks the 3700X is generally faster.

See these results: https://www.techquila.co.in/amd-ryzen-7-3700x-vs-intel-core-...

In Cinebench the Ryzen has the advantage in both single and multi core.

See these other results for real-time audio performance, where the Ryzen is 10-15% better in all of them: https://www.scanproaudio.info/2019/07/12/amd-ryzen-3600-3700...

Also, in your second link the price-performance-ratio of the 3700X is 27% better...


It's 2019.

If your workloads are limited by single thread performance you need better software. It's why Vulkan and DX12 are a thing (the single thread limitation of committing a frame to the GPU has been reduced by an order of magnitude). It's why C++ has the parallel algorithms library baked into the language.

I get it, threading is hard. But it's honestly not that hard. It's only hard when you're maintaining some super old program with single threadedness engineered into its core architecture. (note: this is my day job) Greenfield applications since 2009 should have had threading built in as a core assumption.

AMD is doing the right thing by optimizing for multithread performance over single thread performance. Moore's Law is dead for single cores. It has been for a decade and a half.


Depends on the program. If it requires, say, synchronizing hundreds of thousands of entities every 16 ms, it's probably better to go single-threaded instead...


in virtually all cases, "single-threaded performance" should be read as "serial performance" - the performance of a specific, single thread, within a larger program. While confusing to a layman, nobody uses the term to refer to performance when there is only a single thread running on the entire processor - we indeed left that time behind in the 90s, so that is not a sensible metric.

However, Amdahl's law means that serial/single-threaded performance remains an important determinant of total performance even in multi-threaded applications. Locking is a great example, even in a heavily multithreaded program you can be limited by lock contention, that specific part of the program is serial.

Gustafson's Law is of course a thing, eventually we will come up with additional work to fill those cores, but single-threaded performance remains a critical factor for CPU performance.
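
For reference, a tiny sketch of Amdahl's law with illustrative numbers (the serial fraction here is made up):

    # Amdahl's law: the serial fraction caps total speedup no matter how many cores you add.
    def amdahl_speedup(serial_fraction, n_cores):
        return 1 / (serial_fraction + (1 - serial_fraction) / n_cores)

    print(amdahl_speedup(0.10, 16))   # ~6.4x with 16 cores when 10% of the work is serial
    print(amdahl_speedup(0.10, 1e9))  # ~10x, the asymptotic limit for that serial fraction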


>If your workloads are limited by single thread performance you need better software.

Most of the time we don't get to choose which software we run. Hell, most core components of Windows get stuck in a single core still. I'll take single-core performance over multi-core performance any day. Maybe that will change in 10 years, but I live in the present.


Unless you count the 30%+ speed reduction due to vulnerability mitigations... in which case AMD has much higher performance all around.


Any source for that? Like, recent and properly designed test?


From July 2019:

If looking at the geometric mean for these various mitigation-sensitive benchmarks, the default mitigations on the Core i9 9900K amounted to a 28% hit while the Ryzen 7 2700X saw a 5% hit with its default Spectre mitigations and the new Ryzen 7 3700X came in at 6% and the Ryzen 9 3900X at just over 5%.

https://www.phoronix.com/scan.php?page=article&item=amd-zen2...

Should note:

Keep in mind these benchmarks ran for this article were a good portion of synthetic tests and focused on workloads affected by Spectre/Meltdown/L1TF/Zombieload. Many of these particular tests aren't multi-threaded and that's why you don't see as much of a difference between these HEDT and desktop CPUs as in our more normal benchmarks.


Absolute performance with mitigations enabled for the 9900K and 3700X are still comparable though.


Checking the price on Newegg now (and I don't know if that's representative) gives me $489 vs $329, so, in that tier, Intel looks almost 50% more expensive.


>AMD is only superior when it comes to performance for parallelized tasks, single thread performance is still owned by Intel.

That isn't always true, for instance on some CS:Go benchmarks, the new Ryzens were quicker, and that is dominated by single threaded CPU perf.

Also, gcc compile time benchmarks are faster on the new Ryzens, largely due to large L3 cache sizes on the Ryzens (which is facilitated by their chiplet design)

It would be more accurate to say that Intel has a slight single thread performance lead, most of the time, rather than to say it is "owned".

It's all close enough that it depends on lots of details, like cache size and RAM speed and timings (Ryzen benefits more from RAM speed due to the interaction with Infinity Fabric). You also have to consider whether mitigations for side channel attacks are on for each chip.

There are dozens of little details like this that can be taken into account when deciding on a benchmark, and there is not always a clear answer as to "what is fair". You just have to be sure you say how those parameters are set, so people know what the comparison is, exactly. You can't be so quick to complain of bias, just because the parameters are not set the way you would have wanted them set.


Is RAM still expensive? I put out $150 for 16GB in early 2017 and I considered that to be an incredible deal at the time.

I thought the RAM cartel backed off the ridiculous prices when the Justice Department started murmuring about antitrust.


DDR4 is as cheap or cheaper than it was at the previous bottom in 2016.

What really got them was when China stepped up investments in a number of projects to jumpstart some domestic fabs focused on RAM production. The RAM cartel came to the table and signed some agreements regarding RAM production quotas with the Chinese real quick after that.

https://semiengineering.com/will-china-succeed-in-memory/

https://seekingalpha.com/news/3327772-samsung-headed-china-a...

A huge percentage of China's economy depends on smartphone, laptop, desktop, and server manufacturing, and they had no interest whatsoever in seeing that drastically slowed down so that Samsung, Hynix, and Micron could enjoy a fatter margin.

Not like a majority or anything, but losing 10-20% off 10% of your economy is a big deal.


I just bought 8 GB DDR3 for 21€


For the price point, AMD has been superior to Intel for at least a decade. You could buy an FX Black Edition for a couple hundred dollars that absolutely demolished anything Intel makes.

I don't know why AMD was ever seen as the underdog. They never released something like the P4 which used that crappy RAM. They also basically owned the console market. Intel certainly didn't. AMD stock price has always been the whipping boy that was easy to predict.


Did you miss Bulldozer? It wasn't Rambus-bad, but it wasn't good. For most of Bulldozer's span, Intel was in tick/tock mode with decent incremental increases every year, while AMD performance was pretty similar each year --- kind of like the last couple of years for Intel.


Intel made small improvements while the FX from years ago smoked pretty much anything for a tenth of the cost. Intel is overrated. The Atom was a piece of junk. The Celeron ("celery") was marginally better. And they ditched the Pentium class; I can't remember when they last used its name after the P4 debacle. AMD has consistently been competing at a much better price point. Now they are crushing Intel.

And you admitted Bulldozer wasn't as bad as Rambus.


That era's Athlons had a lot of stability issues, and whatever you saved on CPU cost you had to spend on a PSU to run the thing.


A 200 dollar chip and a 100 dollar PSU don't add up to the equivalent 500-1000 dollar Intel.


The FX and X2 were _never_ far enough ahead in performance to be comparable to an Intel chip twice as expensive.


Maybe I got a great deal but it measured 9000+ on cpubenchmark for 300 dollars. Intel equivalent was nearly 1000 for the same performance. Yes power requirements were higher but that's pennies on the dollar.


> When some people say "5 nm" is the early stage of nanotechnology, remember that the gate length has actually stayed around 25 nm since 2009

The single exposure limit is 40nm for ArF and 30 for EUV. The cost of those 10nm was ~$18B spent over 2 decades, and it will go up much further. The 3400 is rumored to cost $360M a pop in early production.

157nm fluorine laser can actually get to 25nm with a single exposure, and pure argon even further.

25nm is better than EUV at the moment, but it was an industry-wide decision that it was better to stop beating a dead horse and work on a first in-vacuum process, as it has the biggest potential for further improvement and opens a road to a less painful transition to soft X-ray and particle beam litho.


FYI, GlobalFoundries canceled their 7nm process over a year ago. The leading-edge semiconductor fabrication race is converging on just Intel vs. TSMC, with Samsung still relevant in certain markets.


"they do have a lot of production issues" means they still sell every single processor they make.

New computers are about 80% Intel and 20% AMD. Until this is reversed Intel will still be the kings.

I remember fondly the age of Athlon 64 processors. AMD was a winner. On paper. Intel still beat them in the long run.

And I say this as an AMD fanboy.


In the consumer space AMD appear to now be outselling Intel; in some markets this is reportedly by a considerable amount.

This may be down to the Zen 2 release going well, and it's going to be interesting to see if it's sustained. I say this as a fanboy of neither really, and owner of both!

(I'm really a fanboy of the re-emergence of serious competition in the space. 8C/16T for ~$300? Yes please...)


>(I'm really a fanboy of the re-emergence of serious competition in the space. 8C/16T for ~$300? Yes please...)

100% this. The lack of any credible competition in the consumer space for Intel really let them get complacent. I jumped to a 2500K Sandy Bridge as soon as it was apparent it was a game changer. I didn't keep it in the end as I wasn't really using it, so let my father-in-law have it for his video editing rig. Only now can I answer the question "Is it worth upgrading yet?" with a qualified YES!

I personally have no AMD/Intel bias (just happenstance that when I've needed to build a PC, Intel had the best priced/performing/overclockable part at the time), but I welcome the competition to not have a single supplier stagnate the market with 4C/4T consumer parts for nearly a decade... (currently have an R5 1600 and looking forward to a large range of upgrade options within the AM4 socket - though I am aware the forward compatibility may not last much longer - but at least I'm paying attention to the market again!)


> In the consumer space AMD appear to now be outselling Intel, in some markets this is reportedly by a considerable amount.

Anecdotal, but I'm considering an AMD cpu in my next desktop refresh for the first time in over 15 years.


> In the consumer space AMD appear to now be outselling Intel

Nonsense, AMD is only outselling Intel in desktop sales, which represent only 20% of x86 CPU sales. Zen 2 laptops aren't released yet.


In the consumer space in which people buy CPUs directly, which I agree is a minority market. Let's wait and see on the rest.



Mindfactory is a retailer and a very small part of the market.

Revenue 2018:

    Intel  $70.8 billion
      AMD   $6.5 billion


Density is no longer meaningful.

I have stopped looking at the process node or stated specs.

There are so many architectural differences that the only sane way is to run YOUR workload and compare performance/TCO.


And on top of all of that, transistor density no longer works as a rough approximation of performance either. Clock rate may get worse with a denser node, heat issues may not let them actually put more transistors in the same area despite the smaller features, etc etc etc.

as usual, the only way to actually judge a cpu's tech is how many frames of doom it can render per second ;)

if any pedant responds to this about how modern doom is mostly a function of GPU I will kill a kitten.


Incidentally, there's a great write-up by Digital Foundry on why Crysis still melts modern CPUs, and is still CPU-limited.

https://www.eurogamer.net/articles/digitalfoundry-2018-why-c...

"Not a trick, not an illusion. Yes, this 2007 game is running at under 40fps on the fastest CPUs money can buy."


>""x nanometer" has became a technically meaningless trademark solely exists for indicating new generations of process for marketing since ~2009's introduction of FinFET."

What is the connection between the introduction of the FinFET and the nm process number becoming a meaningless distinction? Might you or anyone else elaborate? Thanks.


Is there still a relationship in regards to reduction of power consumption with smaller feature sizes?


Yes, smaller transistors are more power efficient.


The number of transistors grows faster than the power efficiency though, so the percentage of unused silicon (aka dark silicon) increases as you go smaller but keep the die size the same.


Could we rate them on number of transistors instead? Is that number going up?


Yes, at the same rate it always has, to a T.

https://fuse.wikichip.org/news/2207/tsmc-starts-5-nanometer-...


So something is shrinking? It’s just not at the nm’s they claim?


There is the fin pitch, the gate pitch, the metal pitch, and (related) the track height (which is a design density number). Fin pitch shrinks a little, gate pitch a little, track height a lot (Intel used to use 9-12 tracks, moving to 6). There is also space between device boundaries, which can also be shrunk.

The "5nm" is the gate half pitch, which is no longer a sufficient description. All else being equal, a simple shrink would shrink everything, and that gate half pitch would be useful. That hasn't been true for 15 years.

Density comes at the cost of performance and yield, so you balance different aspects to get the right combination. Or _a_ right combination for your design.
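
A very rough sketch of how those pitches and the track height turn into a density figure. All numbers below are illustrative assumptions for a 10nm-class node, not official figures, and real published MTr/mm² metrics use a weighted NAND2/flip-flop mix rather than NAND2 alone:

    # Back-of-envelope standard-cell density from pitch numbers (all values assumed).
    gate_pitch_nm  = 54    # contacted gate (poly) pitch
    metal_pitch_nm = 36    # minimum metal pitch
    tracks         = 6.2   # track height of the cell library

    cell_height_nm = tracks * metal_pitch_nm   # ~223 nm
    nand2_width_nm = 3 * gate_pitch_nm         # a NAND2 cell is roughly 3 gate pitches wide
    nand2_area_mm2 = cell_height_nm * nand2_width_nm * 1e-12
    print(4 / nand2_area_mm2 / 1e6)            # 4 transistors per NAND2 -> ~110 MTr/mm²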


I wonder if the scale could be expressed as the area required to implement a given amount of logic and SRAM. This would make it easier to compare things.


SRAM cell area is a very common benchmark for processes.


I guess the best metric plebs like us can use is TDP / core frequency / number of cores?

The TDP for me was always a sign of progress in density for the same core count and frequency.


Nope. Some manufacturers exclude “turbo” power usage from the TDP, others don’t.


Intel excludes turbo power usage, while AMD doesn't. Intel's TDP is really its base load.


As of the 2000 series, AMD uses the same model as Intel. Their "105W" processors will draw up to 141.75W while boosting, their 65W processors will draw up to 88W while boosting, etc etc.

It is, of course, always more of a generalization than a precise measurement, but AMD and Intel are now generalizing roughly the same thing. Both brands will exceed specified TDP while boosting.
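
The numbers above follow from AMD's package power limit (PPT), commonly reported as 1.35x the rated TDP at stock settings; a quick check of that relationship (treat the 1.35 factor as an assumption here, it's the commonly cited figure rather than something stated in this thread):

    # AMD stock boost power limit (PPT) is commonly reported as 1.35x the rated TDP.
    for tdp in (105, 65):
        print(tdp, "W TDP ->", tdp * 1.35, "W PPT")  # 141.75 W and 87.75 W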


Not so for Epyc.


GlobalFoundries' 7nm process was similar to Intel's 10nm; however, Intel heavily revised their 10nm process to be approximately 1.3x less dense.


You seem to know the recent news better than me.

I heard that Intel's 10 nm delay was mainly the result of its overambitious engineering and beliefs that they could do 10 nm via existing DUV without the troubles of moving to EUV, but the yield proved to be extremely low and basically a disaster.

Is it true? What is their reengineered solution, in addition to making it 1.3x less dense?


I assume both the parents are referring to TSMC; GlobalFoundries stepped away from 7nm to concentrate on existing processes.


Would transistor density be a better metric? Or that wouldn't be significant either?


If feature size doesn't matter why has EUV been so critical?


My (incomplete) understanding is that current lithography relies heavily on multipatterning, which takes a long time. With EUV, you can have very small features with fewer lithography steps which improves factory throughput.

That's more of an economic benefit for chip manufacturers than a tangible benefit for a user of the end product, but it means being able to buy cheaper chips with more gates (in addition to any power or performance improvements that can still be had these days).


> GlobalFoundries' 7 nm process is similar to Intel's 10 nm process. TSMC and Samsung's 10 nm processes are only slightly denser than Intel's 14 nm in transistor density. They are actually much closer to Intel's 14 nm process than they are to Intel's 10 nm process.

Where can I get the source? I often see someone saying "X's process is better than Y's" but they never leave any links. I would like to know any credible source on the semiconductor manufacturing.


The main source is WikiChip. It appears that the wiki is edited by people who work in the industry and has a lot of information that seems to be fairly reliable. Unfortunately, it rarely has in-line citations to its primary sources, but it's still a good source.

Read,

* 14 nm lithography process

https://en.wikichip.org/wiki/14_nm_lithography_process

* 10 nm lithography process

https://en.wikichip.org/wiki/10_nm_lithography_process

Other sources include:

* Life at 10nm. (Or is it 7nm?) And 3nm Views on Advanced Silicon Platforms

https://www.eejournal.com/article/life-at-10nm-or-is-it-7nm-...

* A Brief History of Process Node Evolution

https://www.design-reuse.com/articles/43316/a-brief-history-...

* 14nm, 7nm, 5nm: How low can CMOS go? It depends if you ask the engineers or the economists…

https://www.extremetech.com/computing/184946-14nm-7nm-5nm-ho...

* Is Intel Really Starting To Lose Its Process Lead? 7nm Node Slated For Release in 2022

https://wccftech.com/intel-losing-process-lead-analysis-7nm-...

You can dig deeper using these links.

And don't ask me why I'm just copy & pasting Wikipedia citations. I added those citations...


WikiChip has some numbers[1] sourced from industry presentations, but unfortunately foundries usually prohibit publishing direct comparisons with reference designs. As a result there's a lot of "folk knowledge" that is commonly known by insiders but doesn't have a public source.

[1] E.g. their 7nm page https://en.wikichip.org/wiki/7_nm_lithography_process


Slight reminder:

- they have been saying this for a long time

- they have also been saying that they won't have a 7nm until 2021: https://newsroom.intel.com/news/2019-intel-investor-meeting/...

It seemed that they had internal problems which caused the delays (experienced people leaving).

I think they got spooked by TSMC mentioning 3nm for 2023 with 19,x billion in investments.

Samsung was also mentioning 5nm.

I'll believe it when I see it. Currently, I'm not convinced.

Ps. Great timing for the financial results of AMD fyi. Guess when that is :p

Also: yes, I believe in AMD ( = stocks), because they had great execution in a short timeframe the last years. It's amazing.

For the rest, I'm curious to see what arguments will come here because of financial investments or because of being "true?" believers :p


> (experienced people leaving)

This has me curious about how much of Andy Grove's management style is still alive inside Intel. Did they change the company culture and _want_ these people to leave, or was it mismanagement?


I saw multiple people here in HN mentioning this, so I have no idea.


They still admit they won't have 10nm desktop chips till 2021 (at least), with 7nm predicted to ship in 2022. https://www.techpowerup.com/260141/intel-clarifies-on-10nm-d... Which of course means that 10nm will never ship on the desktop. And you can bet most mobile chips they ship next year will be "Comet Lake" on 14nm and not "Ice Lake" or "Tiger Lake" 10nm. The 10nm process is completely botched; they just can't admit it.


As best I can tell Intel kinda painted themselves into a corner with their 14nm+++ process. It produces really high quality parts (no surprise) but that also means that there is now an expectation of super high clocks on the intel side. That's not realistic however and I highly doubt their 10nm can currently match their 14nm+++ process on clocks (yet). It looks like Intel is thus focusing 10nm on areas where clocks more representative of a new node won't be an issue: server and mobile.

I'll be honest and say I'm very curious to see how intel tries to escape this trap they've set for themselves. Unless we see rapid gains in the 10nm process quality up into the 4.5+ GHz range I wouldn't expect to see 10nm desktop parts anytime soon.


Maybe that's perfectly okay? I don't care about power efficiency in my large stationary gaming PC, and I don't see any reason I should care.


Haha, obviously the largest CPU market is data centers, which pretty much only care about performance/watt. The gaming market is really only useful for PR and launching new architectures.


A fine point. I guess I'm not clear which audience we're discussing here?

Data centers presumably don't care at all about those super high clocks, so I don't think any downgrade in that area would be a problem for them, as long as overall metrics are good.


Both server and mobile have plenty 14nm parts going forward. No, the truth is much simpler: the 10nm process is botched and produces very little but they can't just straight up admit it's not working. They need to limp with it until the 7nm which is developed independently and actually has promise to be working.


This gets very dark when you line up the timeline for TSMC’s next processes and the idea that Apple would like to use their own (TSMC-based) chips in their computers. Intel is currently in a danger zone. If QC were to swap processes with it then the Arm v Intel Cold War could get hot fast.


Million transistors per square millimeter (MTr/mm²) is a better comparison metric than the commercial name for the process. Here is a handy chart I copied from somewhere:

    Tech Node name  (MTr/mm²)

    Intel 7nm       (2??) 
    TSMC 5nm EUV    171.3
    TSMC 7nm+ EUV   115.8
    Intel 10nm      100.8
    TSMC 7nm Mobile 96.5
    Samsung 7nm EUV 95.3
    TSMC 7nm HPC    66.7
    Samsung 8nm     61.2
    TSMC 10nm       60.3
    Samsung 10nm    51.8
    Intel 14nm      43.5
    GF 12nm         36.7
    TSMC 12nm       33.8
    Samsung/GF 14nm 32.5
    TSMC 16nm       28.2
https://en.wikichip.org/wiki/mtr-mm%C2%B2


Source of table: https://www.techcenturion.com/7nm-10nm-14nm-fabrication

Link referenced earlier by "lettergram".


I think that this article is really telling about the truth behind the Intel marketing: https://semiaccurate.com/2019/10/29/intels-actions-on-10nm-a...

Intel has been saying every 3 months since 2016 that 10nm is around the corner... (and even had a token 10nm CPU). Here we are in 2019, and we are still waiting.


I wonder if they'll consider the "chiplet" model AMD went with. Yields are higher with smaller pieces.


Signs currently point towards Intel skipping the chiplet-on-interposer model AMD is currently using and going directly to 3D stacking i.e. dies directly on top of other dies.

They did a paper launch of their "Foveros" 3D stacking technology and "Lakefield" Architecture this year, which is clearly still in very early stages, and also tellingly announced at their investor meetup. It will probably be some years before we see any real chips with this.

AMD is speculated to go with a combined model (chiplets, but stacked cache on the IO Die) with Zen 3/4 and then go full 3D for Zen 5 or so.


This conjecture sounds wrong to me. 3D stacking and chiplets are complementary; going only 3D has thermal and cost issues, so it makes sense to combine both. One shouldn't replace the other any time soon.


Well, the Lakefield design is fully 3D, and Intel has not announced any chiplet based designs, so either they made it all up, or they're convinced they can make it work in the next few years.


Lakefield is a 1+4 core part, it's tiny and won't need chiplets. Chiplets are necessary only once you get to core counts that won't fit on a single square of silicon.

Intel have announced co-EMIB, which seems to be their solution to 2D integration of 3D-stacked parts.

https://fuse.wikichip.org/news/2503/intel-introduces-co-emib...


It'll definitely be interesting to see if they do. If I remember correctly, there were some articles with recent Xeon chips about how their monolithic design was hurting them pretty badly on yields.


There are benefits to it if you can get the yields up right, and Intel's 14 nm +++++++++++ is pretty darn awesome at this point.


AMD seems to be doing very well with their non-monolithic CCX setup for Zen, especially now that Zen 2 has fixed up some of the earlier issues with inter-core and inter-CCX communication.

Intel has definitely had plenty of time to refine their 14nm monolithic architectures over the years since Skylake, though!


Yes, they have said as much in their investor slides.


At this rate, an Intel motherboard will have no room for any chipset other than the CPU. Are we going back to slot form factor for processors?



Intel is feeling insecure: AMD is announcing earnings tomorrow, so this fluff piece was absolutely timed to blunt the impact of that.


> so this fluff piece was absolutely timed to blunt the impact of that.

The timing has nothing to do with AMD. Intel announced their own earnings on Thursday, and this piece didn't get written and edited in time to run on Friday so it got delayed to Monday.


If that's the case they should have released it a few days later, to avoid looking insecure. Because that's what it looks like, and I'm not the only one on the thread who sees it that way.


While AMD is already selling CPUs at 7nm.


To be fair they measure them differently.

For reference, Intel 10nm has slightly higher transistor density than 7nm TSMC or Samsung

https://www.techcenturion.com/7nm-10nm-14nm-fabrication


Table after the chart says TSMC's 7nm+, currently in mass production, has higher density than Intel's upcoming 10nm.

TSMC’s 7nm+ 115.8 MTr/mm²

Intel’s 10nm 100.8 MTr/mm² (2018 estimate)


It is completely insane (and awesome) that "Million transistors per square millimeter" is a useful unit.


After realizing TSMC and AMD surpassed them and are aiming for their lunch, it looks like they got their ducks in a row and are coming out guns blazing.

This will probably satisfy big clients and partners. Long term it will depend on whether AMD can keep up the pressure. Good times for CPU buyers.


Intel has been saying 10nm is ready and in production for almost 3 years now, not holding my breath.


They’re entering high volume production according to the article. 7nm is basically taped out and they’re working on 5nm.


Back in 2016-17, around Skylake/Kaby Lake, I remember getting all excited with some friends about the prospect of Intel 10nm, which they were saying would be used for their immediately upcoming series of chips.

Now that it's 2019 and they're finally claiming to enter high-volume production of 10nm, I have a really hard time taking any claims they make about future lithography nodes at face value.


"High-volume production" of just Ice Lake-U, no Ice Lake-H, no Ice Lake-S, no Ice Lake-SP, etc. Intel can say whatever they want but they're shipping a pretty small number of chips.


Current roadmap has Ice Lake-SP in 2020.

I know people like to meme that Intel is never going to advance nodes ever again, but they are finally launching 10nm products. Ice Lake-U is launched and server comes next.

Desktop is going to hold back for a while simply because Coffee Lake/Comet Lake are such high clockers that Ice Lake won't be able to outperform them, and it relieves the strain on 10nm fabs.

Core for core Coffee Lake still outperforms Zen2 through raw clocks, although IPC is starting to fall behind.


This 10nm Ice Lake CPU officially launched in June at Computex... Three months later they are nowhere to be seen, and nobody is showing a product including this CPU. So what is going on?


> entering high volume production according to the article

Obviously, we need to know the definition of "high volume production"...


The Oregon and Israel fabs now, with Chandler, AZ coming next quarter. It's a guess as to actual volume though. However, the fact that they are expanding production across fabs means they have confidence in their process now (i.e. high yield).

But as others point out, these are their server parts and not desktop parts yet. We’ll have to see when those start shipping.


ark.intel.com indicates that transactional extensions are not present in 10th gen processors. Did Intel decide to abandon TSX-NI?


Intel is a bit annoying at picking and choosing which instruction sets get included in which processors. Maybe they just removed it for the 10th gen mobile? I wonder if it will be included in the 10th gen desktop, if that is ever going to be released.

I've only looked at a few of the mobile and desktop 9th gen parts; it looks like it's only included in some of the higher end models.

Mobile:

i5-9300H - no

i5-9400H - yes

i7-9750H - no

i7-9850H - yes

Desktop:

i3-9300 - no

i5-9600 - yes

i7-9700 - yes


Maybe they're limiting it to Xeon products, since Intel seemingly has a quota of asinine market segmentation decisions per generation.



