I'm kind of bullish on Intel right now. They've moved up so many process nodes so quickly and have made some earnest headway in being an actual fab. Let's ignore the elephant in the room, which is Taiwan and its sovereignty, and only focus on the core R&D.
Intel flopped so hard on process nodes for 4 years up until Gelsinger took the reins... it was honestly unprecedented levels of R&D failure. What happened over the 8 years prior was hedge funds and banks had saddled up on Intel stock which was paying healthy dividends due to cost cutting and "coasting". This sudden shock of "we're going to invest everything in R&D and catch back up" was news that a lot of intel shareholders didn't want to hear. They dumped the stock and the price adjusted in kind.
Intel's 18A is roughly 6 months ahead of schedule, set to begin manufacturing in the latter half of 2024. Most accounts put this ahead of TSMC's equivalent N2 node...
Fab investments have a 3 year lag on delivering value. We're only starting to see the effect of putting serious capital and focus on this, as of this year. I also think we'll see more companies getting smart about having all of their fabrication eggs in one of two baskets (Samsung or TSMC), both within a 500 mile radius circle in the South China Sea.
Intel has had 4 years of technical debt on its fabrication side, negative stock pressure from the vacuum created by AMD and Nvidia, and is still managing to be profitable.
I think the market (and analysts like this) are all throwing the towel in on the one company that has quite a lot to gain at this point after losing a disproportionate amount of share value and market.
I just hope they keep Pat at the helm for another 2 years to fully deliver on his strategy or Intel will continue where it was headed 4 years ago.
There is a good chance for Intel to recover, but that remains to be proven.
Of the long pipeline of future CMOS manufacturing processes with which Intel hopes to close the performance gap with TSMC, only a single commercial product exists for now: Meteor Lake, which consists mostly of dies made by TSMC, with a single Intel 4 die, the CPU tile.
The Meteor Lake CPU seems to have finally reached the energy efficiency of the TSMC 5-nm process of almost 4 years ago, but it also has obvious difficulties in reaching high clock frequencies, exactly like Ice Lake in the past. So once more Intel has been forced to accompany Meteor Lake with a Raptor Lake Refresh made in the old technology, to cover the high-performance segment.
Nevertheless, Meteor Lake demonstrates that the first step, Intel 4, has been reached.
If they succeed in launching their Intel 3-based server products later this year, on time and with good performance, that will be a much stronger demonstration of real progress than this Meteor Lake preview, which also retains the old microarchitecture for the big cores, so it shows nothing new there.
Only by the end of 2024, after the Arrow Lake microarchitecture and the Intel 20A manufacturing process have been seen, will it become known whether Intel has really become competitive again.
N5 is interesting because it's the first process fully designed around EUV and because it was pretty much exclusive to Apple for almost two years. It launched in Apple products in late 2020, then crickets until about late 2022 (Zen 4, RTX 4000, Radeon 7000). Launches of the other vendors were still on N7 or older processes in 2020 - RTX 3000 for example used some 10nm Samsung process in late 2020. All of those were DUV (including Intel 7 / 10ESF). That's the step change we are looking at.
Exactly. N5 is sort of an outlier, it's a process where a bunch of technology bets and manufacturing investment all came together to produce a big leap in competitive positioning. It's the same kind of thing we saw with Intel 22nm[1], where Ivy Bridge was just wiping the floor with the rest of the industry.
Improvements since have been modest, to the extent that N3 is only barely any better (cf. the Apple M3 is... still a really great CPU, but not actually that much of an upgrade over the M2).
There's a hole for Intel to aim at now. We'll see.
[1] Also 32nm and 45nm, really. It's easy to forget now, but Intel strung together a just shocking number of dominant processes in the 00's.
The reason N5 came together for TSMC is because they run more experiments per unit time than Intel does. They're doing this 24 hours a day across multiple shifts, which makes it possible for them to improve a given process faster. It remains to be seen if Intel can actually pull ahead or not without a major culture change, or if "this time" they can succeed at becoming a trusted foundry partner that can drive enough volume to support the ongoing investment needed in leading edge fabs.
> The Meteor Lake CPU [...] has obvious difficulties in reaching high clock frequencies,
Not sure where that's coming from? The released parts are mobile chips, and the fastest is a 45W TDP unit that boosts at 5.1GHz. AMD's fastest part in that power range (8945HS) reaches 5.2GHz. Apple seems to do just fine at 4GHz with the M3.
I'm guessing you're looking at some numbers for socketed chips with liquid cooling?
The 5.1 GHz Intel Core Ultra 9 processor 185H is the replacement for the 5.4 GHz Intel Core i9-13900H Processor of previous year. Both are 45-W CPUs with big integrated GPUs and almost identical features in the SoC.
No liquid cooling needed for either of them, just standard 14" or 15" laptops without special cooling, or NUC-like small cases, because they do not need discrete GPUs.
Both CPUs have the same microarchitecture of the big cores.
If Intel had been able to match the clock frequencies of their previous generation, they would have done so, because it is embarrassing that Meteor Lake wins only the multi-threaded benchmarks, due to the improved energy efficiency, but loses the single-threaded benchmarks, due to the lower turbo clock frequency, when compared to last year's products.
Moreover, Intel could easily have launched a Raptor Lake Refresh variant of the i9-13900H with a clock frequency increased to 5.6 GHz. The only reason they have not done this is to avoid internal competition for Meteor Lake, so they have launched only HX models of Raptor Lake Refresh, which do not compete directly with Meteor Lake (because those need a discrete GPU).
During the last decade, the products made at TSMC with successive generations of their processes had a continuous increase of their clock frequencies.
On the other hand, Intel has had a drop in clock frequency at every switch to a new manufacturing process: at 14 nm with the first Broadwell models, then at 10 nm with Cannon Lake and Ice Lake (and even Tiger Lake could not reach clock frequencies high enough for desktops), and now with Meteor Lake on the new Intel 4 process.
With 14 nm and 10 nm (now rebranded as Intel 7), Intel succeeded in greatly increasing the maximum clock frequencies after many years of tuning and tweaking. With Meteor Lake this will not happen, because they will move immediately to different, better manufacturing processes.
According to rumors, the desktop variant of Arrow Lake, i.e. Arrow Lake S, will be manufactured at TSMC in order to ensure high-enough clock frequencies, and not with Intel 20A, which will be used only for the laptop products.
Intel 18A is supposed to be the process that Intel will be able to use for several years, like their previous processes. It remains to be seen how long it will take until Intel is again able to reach 6.0 GHz, this time on the Intel 18A process.
That's getting a little convoluted. I still don't see how this substantiates that Intel 4 "has obvious difficulties in reaching high clock frequencies".
Intel is shipping competitive clock frequencies on Intel 4 vs. everyone in the industry except the most recent generation of their own RPL parts, which have the advantage of being up-bins of an evolved and mature process.
That sounds pretty normal to me? New processes launch with conservative binning and as yields improve you can start selling the outliers in volume. And... it seems like you agree, by pointing out that this happened with Intel 7 and 14nm too.
Basically: this sounds like you're trying to spin routine manufacturing practices as a technical problem. Intel bins differently than AMD (and especially Apple, who barely distinguish parts at all), and they always have.
I have also pointed out that while for Intel this repeats their previous two process launches, which is not a good sign, TSMC has never had such problems recently.
One reason TSMC did not have such problems is that they have made more incremental changes from one process variant to another, avoiding big risks. The other reason is that Intel has repeatedly acted as if they were unable to estimate from simulations the performance characteristics of their future processes, and they have always been caught by surprise by experimental results inferior to the predictions. So during the last decade they have always had to switch the product lines from plan A to plan B, unlike the previous decade, when everything appeared to go as planned.
A normal product replacement strategy is for the new product to match most of the characteristics of the old product that is replaced, but improve on a few of them.
Much too frequently in recent years, new Intel products have improved some characteristics only at the price of making other characteristics worse: for example, raising the clock frequency at the price of increased power consumption, increasing the number of cores but removing AVX-512, or, as in Meteor Lake, raising the all-cores-active clock frequency at the price of lowering the few-cores-active clock frequency.
While during the last decade Intel has frequently progressed, in the best case, by taking two steps forward and one step backward, all of its competitors have marched steadily forward.
> I have also pointed out that while for Intel this repeats their previous two process launches, which is not a good sign, TSMC has never had such problems recently.
I'll be blunt: you're interpreting a "problem" where none exists. I went back and checked: when Ivy Bridge parts launched the 22nm process (UNDENIABLY the best process in the world at that moment, and by quite a bit) the highest-clocked part from Intel was actually a 4.0 GHz Sandy Bridge SKU, and would be for a full 18 months until the 4960X matched it.
This is just the way Intel ships CPUs. They bin like crazy and ship dozens and dozens of variants. The parts at the highest end need to wait for yields to improve to the point where there's enough volume to sell. That's not a "problem", it's just a manufacturing decision.
You can't compare optimized clock frequencies on a two-year-old mature process with the first run on a new process... AMD and Nvidia both improve stable frequencies with process improvements at TSMC, even on the same nodes over time (RTX 4060 Ti, TSMC N4, 2.31 GHz base, 2.54 GHz boost vs. RTX 4060, TSMC N4, 1.83 GHz base, 2.46 GHz boost).
Most chipmakers saw gains moving from N5 to N5P at TSMC, which wasn't even a process jump, simply maturity and optimization on the existing node.
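Just to make the parenthetical concrete, here are the quoted figures worked out as percentage deltas (a quick sketch; the clock numbers are the ones given above, not independently verified):

```python
# Clock figures quoted above for two TSMC N4 parts, in GHz.
rtx_4060    = {"base": 1.83, "boost": 2.46}
rtx_4060_ti = {"base": 2.31, "boost": 2.54}

for kind in ("base", "boost"):
    delta = (rtx_4060_ti[kind] - rtx_4060[kind]) / rtx_4060[kind] * 100
    print(f"{kind} clock delta: {delta:+.1f}%")

# Prints roughly:
#   base clock delta: +26.2%
#   boost clock delta: +3.3%
```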
What I worry about with Intel is that they have gotten too much into politics; relying on the CHIPS Act and other subsidies, encouraging sanctions on Chinese competitors while relying on full access to the Chinese market for sales.
It is not a good long term strategy: The winds of politics may change, politicians may set more terms (labour and environment), foreign market access may become politicized too (US politicians will have to sell chips like they sell airplanes on foreign trips).
So Intel will end up like the old US car makers or Boeing - no longer driven by technological innovation but instead by its relationship to Washington.
"This investment, at a time when … wages war against utter wickedness, a war in which good must defeat evil, is an investment in the right and righteous values that spell progress for humanity"
That is not a partner for creating logical systems. Very clear their current decisions are political.
They are taking sides. That is easily seen during an interview with the CEO, who almost cried talking about the events of October 7th. Intel will give a $5,000 war grant to its Israeli employees. One of Intel's largest fabs is a 20-minute drive from where the massacres occurred.
Do you know if AMD has any presence in Israel? Intel has already sold me multiple garbage dump products in the past, and so if I can minimize my Israel-related purchases I'd prefer to do that.
The Mac is annoying since I think some pieces of their silicon designs come from Israel (storage controller). Can someone correct me if I am wrong on that?
AMD is big in Israel as well. Most of the tech stuff is developed in Israel, side effects of future-oriented democracy I imagine.
Boycotting things is useless virtue signalling of the woke disease. I would suggest going to pro-Palestinian protests and try to explain to them that raping, kidnapping and mutilating children is not going to bring peace and a country to Palestinians.
It's fine to be future-oriented and to share development; the problem is any religious-destiny/racist element.
I'm not sure what you mean by "woke disease," but consumerism involves evaluation.
Oct 7 was horrible, but it didn't come out of nowhere. Sabra and Shatila, for example (Waltz with Bashir being a very good Israeli film on the topic), and the many thousands of people killed, mutilated, or displaced in their usual unhelpfully disproportionate response.
> Boycotting things is useless virtue signalling of the woke disease.
I didn't realize HN served multiple alternate realities. Over here where I'm looking from, by far the largest boycotts of the last 8 years have been from conservatives who were upset by events like trans people being featured in ads and the existence of gay people in movies/tv shows.
Apparently they have become strongly associated since that quote is part of the release for their new plant. It is sickening to me this kind of hard-right religious zealotry is part of decisions of tech companies. I am avoiding Intel as much as possible now, I hope others will consider this too.
It is directly associated with the deal they are making. Would you let that quote be used with a deal your company is making? Intel knows full well how this is being spun.
Yes, but it's all relative. A giant new development is different than a branch office. Though in all cases there is no doubt overlap with their military industrial complex. But I don't think we will normally see such religious extremism tied to projects, and that should be called out, loudly and clearly, as not ok.
Intel has used political incentives often throughout its history to great effect. I think it's a much smaller issue than you think; it's been part of their standard game plan for over 30 years. The issue with Boeing is that it became a contract company that contracts out all of its work, which is self-defeating and leads to brain drain. E.g., the door lacking bolts: Boeing doesn't even build its own fuselages anymore and has let its standards fall, wholly depending on contractors with little oversight.
Yeah. Too much the Cold War angle. I think he overstates the role of government/military and underestimates how much the consumer market has driven the process innovations that have made computing cheap and ubiquitous.
If you think the concern over China and Taiwan is understated I think you'd do well to look at how both the US and China are putting insane amounts of resources behind this.
>have made some earnest headway in being an actual fab
In terms of the end product - not really. The last 3-4 gens are indistinguishable to the end user. It's a combined effect of marketing failure and really underwhelming gains - when marketing screams "breakthrough gen" but what you get is +2% ST perf for another *Lake, you can't sell it.
They might've built a foundation, and that might be a deliberate tactic to get back into the race; we'll see. But I'm not convinced for now.
Depends who your user is. From the desktop side you're probably not going to notice, because desktop CPU requirements have been stagnant for years; desktop is all about the GPU. On the server side, Sapphire Rapids and Emerald Rapids are Intel getting back in the game, and the game is power and market share.
See, there are only 2 or 3 more obvious generations of die shrinks available. Beyond those we'll have to innovate some other way, so whoever grabs the fab market for these nodes now gets a longer period to enjoy the fruits of their innovation.
Meanwhile server CPU TDPs are hitting the 400W+ mark and DC owners are looking dubiously at big copper busbars, die shrinks tend to reduce the Watts per Calculation so they're appealing. In the current power market, more efficient computing translates into actual savings on your power bill. There's still demand for better processors, even if we are sweating those assets for 5-7 years now.
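To put rough numbers on the power-bill point (a back-of-the-envelope sketch; the wattages, utilization, and electricity price are assumptions I picked for illustration, not vendor figures):

```python
# Hypothetical: a 400 W server vs. a die-shrunk 300 W part doing the
# same work, both run flat out for five years.
old_watts, new_watts = 400, 300       # assumed draw under load
hours = 24 * 365 * 5                  # five years of continuous operation
price_per_kwh = 0.12                  # assumed $/kWh; varies a lot by DC

def energy_cost(watts: float) -> float:
    """Electricity cost in dollars over the whole period."""
    return watts / 1000 * hours * price_per_kwh

saving = energy_cost(old_watts) - energy_cost(new_watts)
print(f"~${saving:,.0f} saved per server over 5 years")
# Roughly $526 per server before cooling overhead (PUE), which scales
# the saving up further across a whole DC.
```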
Intel is still behind TSMC at this point in terms of raw process efficiency, but that rate of change is moving quickly, and I posit that the products released later this year will have a process efficiency advantage over AMD's offerings for the first time since AMD abandoned GlobalFoundries.
> They've moved up so many process nodes so quickly and have made some earnest headway in being an actual fab.
I'd buy this if they'd actually built a fab, but right now this seems too little, too late for a producer's economy.
The rest frankly doesn't matter much. Intel processors are only notable in small sections of the market.
And frankly - as counter-intuitive as this may seem to such an investor-bullish forum - the death knell was the government chip subsidy. I simply can't imagine the American government and private enterprise collaborating to produce anything useful in 2024, especially when the federal government has shown such a deep disinterest in holding the private economy accountable to any kind of commitment. Why would Intel bother?
Licking County (next-gen, post-18A) has already broken ground and is being assembled; Magdeburg and Ireland (18A) are also well underway and in production. Arizona's 20A facilities (Fab 52 and Fab 62) have been done for half a year and are already in tape-out. Not sure what is up for debate here; you can't really hide a $5BN infrastructure project from the public.
I think it's safe to say that 80BN+ in subsidies are already well in the process of being deployed. Intel, along with Samsung and TSMC, are heavily subsidized and have been so for a very long time. Any government with modest intelligence understands the gravity of having microchip manufacturing secured.
There are a few areas where they are under pressure:
- The Wintel monopoly is losing its relevance now that ARM chips are creeping into the Windows laptop market and now that Apple has proven that ARM is fantastic for low power & high performance solutions. Nobody cares about x86 that much any more. It's lost its shine as the "fastest" thing available.
- The AI & GPU market is where the action is, and Intel is a no-show for that so far. It's not about adding AI/GPU features to cheap laptop chips but about high end workstations and dedicated solutions for large scale compute. Intel's GPUs lack credibility for this so far. Apple's laptops seem popular with AI researchers lately, and the go-to high performance solutions seem to be provided by Nvidia.
- Apple has been leading the way with ARM based, high performance integrated chips powering phones, laptops, and recently AR/VR. Neither AMD nor Intel have a good answer to that so far. Though AMD at least has a foot in the door with e.g. the Xbox and the Steam Deck depending on their integrated chips, and them still having credible solutions for gaming. Nvidia also has lots of credibility in this space.
- Cloud computing is increasingly shifting to cheap ARM powered hardware. Mostly the transition is pretty seamless. Cost and energy usage are the main drivers here.
> Apple has proven that ARM is fantastic for low power & high performance solutions
Apple has proven that Apple Silicon on TSMC's best process is great. There are no other ARM vendors competing well in that space yet. SOCs that need to compete with Intel and AMD on the same nodes are still stuck at the low margin end of the market.
Has that been announced? Or is it more a matter of Intel producing some unannounced product on an unannounced timeline with a feature set that has yet to be announced, on an architecture that may or may not involve ARM? Intel walking away from x86 would be a big step for them. First, they don't own ARM, and second, all their high-end stuff is x86.
Correct me if I'm wrong on the timeline I think you are talking about, but Intel stock shed value like the rest of tech in 2021/22. IMO, your theory had a much smaller impact than you think in terms of the dump. They both dropped roughly the same amount from their frothy highs in 2021 to their 2022 lows, INTC and AMD at roughly 60ish%.
For the rebound your theory is probably more true. It has been better for AMD obviously, but INTC has almost doubled in value since its $25 low, which is not slouching by any means.
I can agree on being bullish long term (I had short puts exercised back in the $20s). Like a lot of tech, INTC has more money than God and they'll get it right eventually.
Intel's biggest problem is that a lot of good people left over the previous years of shitty management. Pouring money into R&D certainly helps, but with the wrong people in key positions the efficiency of the investments will be low.
Gelsinger put a $4BN compensation package in effect for securing and retaining talent within his first 6 months of taking the role, one of the first things noted was brain drain to competitors.
>Intel Poaches Head Apple Silicon Architect Who Led Transition To Arm And M1 Chips. Intel has reacquired the services of Jeff Wilcox, who spearheaded the transition to Arm and M1 chips for Apple. Wilcox will oversee architecture for all Intel system-on-a-chip (SoC) designs
note re-acquired.
Raja Koduri also came back to Intel (from AMD Radeon) and only recently left to dabble in VFX, as opposed to working for a competitor to Intel.
Anton Kaplanyan (father of RTX at nvidia) is at Intel now.
I think people are not checking LinkedIn when they make the claim that Intel's talent has been drained and there is nobody left at home. Where there is remuneration and opportunity you will find talent. I think it's safe to say no industry experts have written Intel off.
> This sudden shock of "we're going to invest everything in R&D and catch back up" was news that a lot of intel shareholders didn't want to hear. They dumped the stock and the price adjusted in kind.
Why the fuck are shareholders often so short-sighted?
Or do they just genuinely think the R&D investment won't pay off?
They bought the stock on the principle that it was going to pay a consistent 5% dividend every year and weren't looking for moonshots at the cost of that consistent revenue.
Yeah, just look at any investment thread on HN to see how shareholders think; nobody recommends investing in unconventional things. Shareholders are your everyday guy who decides where to put his pension, and that guy picks the safe bet with good returns.
has told the story for more than a decade that Intel has been getting high on its own supply and that the media has been uncritical of the stories it tells.
In particular I think when it comes to the data center they’ve forgotten their roots. They took over the data center in the 1990s because they were producing desktop PCs in such numbers they could afford to get way ahead of the likes of Sun Microsystems, HP, and SGI. Itanium failed out of ignorance and hubris but if they were true evil geniuses they couldn’t have made a better master plan to wipe out most of the competition for the x86 architecture.
Today they take the desktop for granted and make the false claim that their data center business is more significant (not what the financial numbers show). It's highly self-destructive because when they pander to Amazon, Amazon takes the money they save and spends it on developing Graviton. There is some prestige in making big machines for the national labs, but it is an intellectual black hole because the last thing they want to do is educate anyone else on how to simulate hydrogen bombs in VR.
So we get the puzzle that most of the performance boost customers could be getting comes from SIMD instructions and other "accelerators", but Intel doesn't make a real effort to get this technology working for anyone other than Facebook and the national labs and, in particular, they drag their feet in getting it available on enough chips that it is worth it for mainstream developers to use this technology.
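As a rough illustration of how much is sitting on the table, here's a minimal sketch (assuming NumPy is available; its compiled kernels use the CPU's SIMD units where present). Most of the measured gap is interpreter overhead rather than SIMD alone, but it shows the kind of path mainstream code needs in order to reach those units at all:

```python
# The same reduction done as a plain Python loop vs. through NumPy,
# whose compiled kernels dispatch to SIMD instructions when available.
import time
import numpy as np

data = np.random.rand(10_000_000)

t0 = time.perf_counter()
slow = sum(float(x) for x in data)   # scalar, interpreted loop
t1 = time.perf_counter()
fast = data.sum()                    # vectorized, SIMD-backed kernel
t2 = time.perf_counter()

print(f"python loop: {t1 - t0:.3f} s, numpy sum: {t2 - t1:.3f} s")
```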
A while back, IBM had this thing where they might ship you a mainframe with 50 cores and license you to use 30, and if you had a load surge you could call them up and they could turn on another 10 cores at a high price.
I was fooled when I heard this the first time and thought it was smart business but after years of thinking about how to deliver value to customers I realized it’s nothing more than “vice signaling”. It makes them look rapacious and avaricious but really somebody is paying for those 20 cores and if it is not the customer it is the shareholders. It’s not impossible that IBM and/or the customer winds up ahead in the situation but the fact is they paid to make those 20 cores and if those cores are sitting there doing nothing they’re making no value for anyone. If everything was tuned up perfectly they might make a profit by locking them down, but it’s not a given at all that it is going to work out that way.
Similarly, Intel has been hell-bent on fusing away features on their chips, so often you get a desktop part that has a huge die area allocated to AVX features that you're not allowed to use. Either the customer or the shareholders are paying to fabricate a lot of transistors the customer doesn't get to use. It's madness, but except for Charlie Demerjian the whole computer press pretends it is normal.
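You can see which of those features a given part actually exposes from userspace; a minimal Linux-only sketch (it just parses the flags line of /proc/cpuinfo):

```python
# Report whether the running CPU advertises the common AVX variants.
def cpu_flags() -> set:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("avx", "avx2", "avx512f"):
    print(f"{feature:8s}: {'yes' if feature in flags else 'no'}")
```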
Apple bailed out on Intel because Intel failed to stick to its roadmap to improve their chips (they’re number one why try harder?) and they are lucky to have customers that accept that a new version of MacOS can drop older chips which means MacOS benefits from features that were introduced more than ten years ago. Maybe Intel and Microsoft are locked in a deadly embrace but their saving grace is that every ARM vendor other than Apple has failed to move the needle on ARM performance since 2017, which itself has to be an interesting story that I haven’t seen told.
> every ARM vendor other than Apple has failed to move the needle on ARM performance since 2017
You must mean, performance relative to Intel, not absolute performance. Clearly Qualcomm has improved Snapdragon over time as have a number of other Android SOC vendors.
But I wonder if it's even true, have ARM vendors other than Apple failed to move the needle on performance (let's call performance single thread geekbench) relative to Intel? If someone is up for tracking down all the numbers I'd read that blog post. :)
> and they are lucky to have customers that accept that a new version of MacOS can drop older chips
Indeed, Apple has shown not just once but multiple times that they'll happily blow up their entire development ecosystem, whether it's software (Mac Finder vs. MacOS X) or hardware (68k, PPC, Intel, and now ARM). I think Intel didn't expect Apple to switch architectures so quickly and thoroughly and got caught flat-footed.
I honestly don't see what you are seeing in terms of Taiwan's future sovereignty. Of course, China would like to do something about Taiwan, especially now with their economy kind of in the dumps and a collapsing real estate bubble. But when you look at the facts of it all, there's absolutely ZERO chance China can muster up what it takes to hold their own in such a conflict. Their military isn't up to snuff and they are one broken dam away from a huge mass casualty event.
> there's absolutely ZERO chance China can muster up what it takes to hold their own in such a conflict.
However, China is now a full-fledged dictatorship. I'm not sure you can count on them being a rational actor on the world stage.
They can do a lot of damage, but would also get absolutely devastated in return. They are food, energy insecure and entirely dependent on exports after all.
True, but the elite class that's currently profiting from and in control of said country would devastate themselves if they dared. Skepticism about the West's self-inflicted dependency on China is at an all-time high. Terms like "on-" or "friend-shoring" are already coming up now.
You're not wrong; maybe all the scaremongering in the West about China overtaking us got them delusional enough, in a Japanese-nationalist type of way, to behave this irrationally, but I highly doubt it. But that can also change pretty quickly if they feel like their back is against the wall; you're not wrong in that regard.
How much is that elite independent of Xi? A relatively independent elite is probably a more stable system. But an elite completely subservient to the fearless leader is, however, much more dangerous.
I don’t think Xi is as independent as you believe, but that’s a matter of personal opinion.
I just don't think it's very likely for just about any leader to put themselves into the position you are describing. This is a recurring narrative in Western media, and I'm not here to defend dictators, but I feel like reality is less black and white than that.
Many of the "crazed leaders" we are told are acting irrationally often are not. It's just a very, very different perspective, often a bad one, but regardless.
Let me try to explain what I mean: during the Iraq war, Saddam Hussein was painted as this sort of crazed leader, irrationally deciding to invade Kuwait. But that's not the entire truth. Hussein may have been an evil man, but the way the borders of Iraq were re-drawn, Iraq was completely cut off from any sources of fresh water. As expected, their neighbors cut off their already wonky water supplies and famine followed. One can still think it's not justified to invade Kuwait over this, but there's a clear gain to be had from this "irrational" act. Again, not a statement of personal opinion, just that there IS something to be had. I'm not trying to say that I am certain that Hussein had the prosperity of his people at heart, but I do think that it isn't entirely irrational to acknowledge that every country in human history is 3 missed meals away from revolution. That's not good, even if you are their benevolent god and dictator for lifetime(tm).
Russia "irrationally" invading the Ukraine may seem that way to us, but let's see. Russia's economy is just about entirely dependent on their petrochem industry. Without it, they are broke. The reason why they still can compete in this market is their asset of Soviet infrastructure and industry. A good majority of USSR pipelines run through the Ukraine. I'm not saying it's okay for them to invade, but I can see what they seek to gain and why exactly they fear NATO expansion all that much.
I personally don't see a similar gain to be had from China invading Taiwan, at least right now. They have lots to lose and little to gain. Taiwan's semiconductor industry is useless without western IP, lithography equipment and customers. There are even emergency plans to destroy Taiwan's fabs in case of invasion. And that's beside the damage done to mainland China itself.
But as I stated, this may very well change when they get more desperate. Hussein fully knew the consequences of screwing with the West's oil supply, but the desperation was too acute.
I just don’t buy irrationality, there’s always something to be had or something to lose. It may be entirely different from our view, but there’s gotta be something.
Russia doesn't fear NATO - see their reaction to Finland joining it. Also, the pipelines were not the reason for the invasion. They were the opposite - a deterrent. As soon as Russia built pipelines that were circumventing Ukraine, they decided to invade, thinking that the gas transmission wouldn't be in danger now.
Yup. There are more examples than I can muster up to write, each more gut-wrenching than the last. The US calling anyone irrational is pretty rich anyway. After all, invoking the use of brainwashing in war after war, instead of accepting the existence of differing beliefs, isn't the pinnacle of rationality either. Neither is kidnapping your own people in an attempt to build your own brand of LSD-based brainwashing. Neither is infiltrating civil rights movements, going so far as attempting to bully MLK into suicide. Neither is spending your people's tax money on 638 foiled assassinations of Castro. Neither is committing false-flag genocides in Vietnam, or PSYOPing civilians into believing they are haunted by the souls of their relatives.
None of those claims are anything but proven, historical facts, by the way.
Wanna lose your appetite? The leadership in charge of the described operations in Vietnam gleefully talked about their management genius. They implemented kill quotas.
Problem is, "rational" is not objective. "Rational" is more like "consistent with one's goals (subjective) under one's perception of reality (subjective)".
When you're saying "Putin invaded Ukraine irrationally" you're implicitly projecting your own value system and worldview onto him.
Let's take goals. What do you think Putin's goals are? I don't think it's too fanciful to imagine that welfare of ordinary Russians is less important to him than going down in history as someone who reunited the lost Russian Empire, or even just keeping in power and adored. It's just a fact that the occupation of Crimea was extremely popular and raised his ratings, so why not try the same thing again?
What about the worldview? It is well established that Putin didn't think much of Ukraine's ability to defend itself, having been fed overly positive reports by his servile underlings. Hell, even the Pentagon thought Ukraine would fold, shipping weapons that would work well for guerrilla warfare (Javelins) and dragging their feet on stuff regular armies need (howitzers and shells). The Russians did think it'd be a walk in the park; they even had a truck of crowd control gear in that column attacking Kyiv, thinking they'd need police shields.
So when you put yourself into Putin's shoes, attacking Ukraine Just Makes Sense: a cheap & easy way to boost ratings and raise his profile in history books, what's not to like? It is completely rational - for his goals and his perceived reality.
Sadly, people often fall into the trap of overextending their own worldview/goals onto others, finding a mismatch, and trying to explain that mismatch away with semi-conspiratorial thinking (Nato expansion! Pipelines! Russian speakers!) instead of reevaluating the premise.
I don't accept the subjectivity w.r.t. "perceived reality". Russia's military unreadiness was one of the big reasons I consider the invasion irrational, and I put the blame squarely on Putin because he could have gotten accurate reports if he wasn't such a bad leader. You are responsible for your perceived reality, and part of rationality is acting in a way that it matches real reality.
(But yeah, clearly his actual goal was to increase his personal prestige. Is that not common knowledge yet?)
I'm skeptical of your claims about Hussein, but I will admit less familiarity with that. Your claims about Russia's motives are bunk.
> Russia "irrationally" invading the Ukraine may seem that way to us, but let’s see.
Invading one of their largest neighbors and ruining their relationship with a nation they had significant cultural exchange and trade with (including many of their weapons factories) is irrational.
But Russia's leaders didn't want a positive neighborly relationship they wanted to conquer Ukraine and restore the empire. Putin has given speeches on this comparing himself to the old conquering czars.
> Russia's economy is just about entirely dependent on their petrochem industry. Without it, they are broke.
True enough
> The reason why they still can compete in this market is their asset of Soviet infrastructure and industry.
Much of the equipment is western and installed in the post Soviet period.
> A good majority of USSR pipelines run through the Ukraine.
Then they probably shouldn't have invaded in 2014? Almost seems like they made a bad, irrational choice. They had other pipelines that bypassed Ukraine, like NS1, and NS2, which didn't enter service due to the war.
> I'm not saying it's okay for them to invade, but I can see what they seek to gain
Please explain what they tried to gain. Ukraine wouldn't have objected to exports of gas through Ukraine if not for the Russian invasion and they already had pipelines that bypassed Ukraine.
> and why exactly they fear NATO expansion all that much.
They don't fear NATO expansion; they disliked it because it prevented them from conquering or bullying countries with threats of invasion. They've taken troops off the NATO border with Finland (and didn't even invade Finland when Finland joined NATO). Russia acknowledged the right of eastern European nations to join NATO and promised to respect Ukraine's sovereignty and borders.
> I personally don't see a similar gain to be had from China invading Taiwan, at least right now. They have lots to lose and little to gain. Taiwan's semiconductor industry is useless without western IP, lithography equipment and customers. There are even emergency plans to destroy Taiwan's fabs in case of invasion. And that's beside the damage done to mainland China itself.
The fabs are a red herring, they're largely irrelevant. If China invades (which I hope doesn't happen) it will not be because of any economic gains. There are no possible economic gains that would justify the costs of a war. If they invade it will be for the same reason that Russia did, because of extreme nationalism/revanchism and trying to use that extreme nationalism to maintain popularity among the population.
I think "economy in the dumps" is a bit too harsh.
China is facing a deflating real estate bubble, but they still managed to grow the last year (official sources are disputed but independent estimates are still positive).
I would refer you to these to take the counterpoint to your position [1][2] [3].
China is in a world of hurt, but the government is trying desperately to hide how bad it actually is. If this continues for a few more months, it will be an existential situation for their economy.
It's where the growth is coming from. China's growth (or even just sustenance) isn't coming from a healthy job market and consumer spending. It's mostly fueled by SOEs and prefectures going into debt to keep on investing; many local administrations have found out they can trick debt limits by forming state-owned special purpose vehicles that aren't bound to those limits. That's not good at all. There's a reason we are seeing tons of novel Chinese car brands being pushed here in Europe: they massively overproduced and cannot sell them in their own market anymore. It's really not looking great atm.
edit: one should also keep in mind that the Chinese real estate market is entirely different in its importance to the population's wealth. "Buying" real estate is pretty much the only sanctioned market in which to invest your earnings. They still pretend to be communist, after all.
None, or VERY few, are even remotely close to the impact a potential breach of the Three Gorges Dam would have. [1] Seriously, it's worth reading up on; it's genuinely hard to overstate.
"In this case, the Three Gorges Dam may become a military target. But if this happens, it would be devastating to China as 400 million people live downstream, as well as the majority of the PLA's reserve forces that are located midstream and downstream of the Yangtze River."
"This article first appeared in The Times of Israel on September 11, 2020."
Also what does "400 million people live downstream" even mean? There's ten million people living downstream of this dam https://en.wikipedia.org/wiki/Federal_Dam_(Troy), and ten million more living downstream of the various Mississippi dams and so on.
It's grossly overstated because TW doesn't have the type or numbers of ordnance to structurally damage a gravity dam the size of Three Gorges. And realistically they won't, because the amount of conventional munitions needed is staggering, more than TW can muster in a retaliatory strike, unless it's a coordinated preemptive strike, which TW won't attempt since it's suicide by war crime.
The entire Three Gorges meme originated from Falun Gong/Epoch Times propaganda, including the linked article (an interview with Simone Gao) and all the dumb Google Maps photos of a deformed dam due to lens distortion. PRC planners there aren't concerned about a dam breach, but about general infra terrorism.
The one piece of infra PRC planners are concerned about is coastal nuclear plants under construction, which is a much better ordnance trade for TW anyway, and just as much of a war crime.
I seem to have been thoroughly wrong about the Three Gorges Dam. But I think you also have misunderstood the scenario I was imagining. I was actually entirely unaware of there being a meme about the thing collapsing on its own. I was strictly referring to its viability as a strategic target for infrastructure terrorism, if that's the term to use here. I was imagining a scenario where the US is going to town in support of TW, as has been theorized by just about every media pundit in existence right now. I may be wrong about the state's willingness to commit war crimes, but I just watched the IDF, dressed up as civilians, sneaking into a hospital to shoot unarmed patients alleged to be Hamas members. Or the lack of care over Gaza being white-phosphorused.
But, as it seems, I vastly underestimated the effort needed to cause my theorized catastrophe. I'm entirely open to admitting I was wrong about that; always good to learn.
Also, correct me if I'm wrong, but afaik the viability of nuclear plants as strategic targets has been vastly overblown. I'll go read up on it, but I don't think it's that big of a risk.
IMO, the US hitting Three Gorges (potentially killing tens of millions) is basically instantly escalating to proportional countervalue (i.e. targeting civilians, as opposed to counterforce, targeting the military) nuclear retaliation, regardless of PRC no-first-use. This isn't on the perfidy spectrum of war crime.
I think you're talking about the US being willing to escalate to mainland attacks, specifically on strategic targets that support the war economy. Nuclear plants are sensationally overblown as targets since they're basically just another piece of hard-power infra. Which, BTW, very few US strategic planners have actually indicated willingness to do, but also inevitably must, since the PRC can prosecute a TW (and SKR/JP) war completely from the mainland.
To which, IMO, most also vastly underestimate the effort needed. The reality right now is that the amount of firepower the US can surge in the region (naval strikes, regional runway access for aviation, CONUS long-range bombers) is very limited relative to the number of PRC strategic targets, and in a contested space theatre. To be blunt, the PRC mainland is significantly larger (more targets) and more capable (less ability to hit targets) than any previous US adversary. By 1-2 orders of magnitude. Most don't grasp this.
For reference, the US+co air campaign in the Gulf War, where the US+co surged 6 carriers and had extremely geographically favourable regional basing to supplement land-based aviation, conducted ~100,000 sorties in 40 days against Iraq, a country 20x smaller (realistically 10x, since PRC targets are mostly in the eastern half of the country), with 80x fewer people (and even less aggregate productive/manufacturing ability). And that campaign was essentially UNCONTESTED, since IIRC the French, who designed the Iraqi anti-air network, sold out the entire system to the West. And it was efficient since the regional base (CENTAF Saudi) was close enough that US fighters could sortie with minimal refueling.
None of that is true in a PRC campaign. The distances involved and the limited basing the US has access to (at least relative to PRC access to their entire military infra) mean the US is unlikely to forward-deploy as much aviation, and sorties need midair tanking (possibly multiple times) to deliver weapons, assuming those fighters aren't shot down/destroyed on the ground in the first place. Same with the navy - the US can throw it all in, but the effects won't scale proportionally, since the US can't actually sustain/replenish a surge for more than a few weeks, assuming support assets don't get destroyed themselves when they restock in port. So to summarize: the PRC is 10x-20x bigger than Iraq, with 80x+ more targets, in a contested region where the PRC has home-team advantage and the US has visiting-team disadvantage (with regional partners factored in), in a manner where the US might not even be able to sustain a forward posture for more than a few weeks (vs. 5 weeks of the initial Gulf War campaign). If you just naively scale the Iraq air campaign to the PRC, it would take the US 5+ years to degrade the PRC the same way it did Iraq.
That's the scale of the problem. Granted, it's very hand-wavy and napkin-mathy, but it illustrates how gargantuan the PRC actually is and how big the challenge has become relative to a US military capability that is calibrated to stomp small/medium-sized countries. IMO that's why planners over the last 10 years have focused on a SLOC/energy blockade, because a land war in Asia is stupid. But even blockade talk is going to quiet down (and, IMO, US willingness to support TW militarily) in a few years when the PRC rolls out CONUS conventional strike with ICBMs, creating mutual conventional homeland vulnerability. But that's another matter entirely; the TLDR is that US game theory on TW is going to be very different when they realize 200-300 oil refineries and LNG plants and a few F-35 assembly plants can significantly degrade CONUS and NATO. The other part of hitting hundreds of smaller targets vs. one large target that triggers nuclear retaliation is that there are more rungs/opportunities to de-escalate, which is probably the top priority in an actual US/PRC war.
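Spelling out that napkin math with the figures already quoted above (the size multiplier and the tempo fractions are the only knobs, and the tempo fractions are my own illustrative assumptions):

```python
# Naive scaling of the Gulf War air campaign to a PRC scenario,
# using the rough figures quoted in the comment above.
gulf_sorties = 100_000                    # ~sorties in the initial campaign
gulf_days = 40                            # flown over roughly 40 days
gulf_rate = gulf_sorties / gulf_days      # ~2,500 sorties/day

size_multiplier = 20                      # PRC ~10-20x Iraq (upper bound here)
sorties_needed = gulf_sorties * size_multiplier

# Sortie tempo relative to the Gulf War; anything below 100% reflects
# contested airspace, long transits, and limited basing (assumed values).
for tempo in (1.0, 0.5, 0.25):
    days = sorties_needed / (gulf_rate * tempo)
    print(f"at {tempo:.0%} of Gulf War tempo: {days / 365:.1f} years")

# -> roughly 2.2 / 4.4 / 8.8 years, which is the neighbourhood of the
#    "5+ years" figure above.
```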
Intel is recipient #1 of CHIPS and similar EU initiatives - and the government may pressure Nvidia and other US companies (i.e. Apple) to move their procurement domestically, Intel being the only player outside of Taiwan and South Korea with the capital and capacity to supply that.
> I think the market (and analysts like this) are all throwing the towel
1) Intel is up 100% from ten years ago when it was at $23. All that despite revenue being flat/negative, inflation and costs rising, and margins collapsing.
2) Intel is up 60% in the last 12 months alone.
Doesn't look to me like they're throwing in the towel at all.
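For a quick sanity check, here are those two returns annualized (a sketch using only the percentages quoted above):

```python
# Annualize the two returns quoted above to put them on the same footing.
def cagr(total_return: float, years: float) -> float:
    return (1 + total_return) ** (1 / years) - 1

print(f"+100% over 10 years ~ {cagr(1.00, 10):.1%} per year")
print(f"+60% over 1 year     = {cagr(0.60, 1):.1%} per year")
# ~7.2%/yr over the decade vs. 60% in the last twelve months:
# most of the repricing is recent.
```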
I appreciate the deep cut. I definitely do not follow companies internally closely enough to see this coming.
> (Samsung or TSMC), both within a 500 mile radius circle in the South China Sea.
Within a 500 mile radius of a great power competitor, perhaps. The closest points on mainland Taiwan and Korea are 700 miles apart. Fabs about 1000 miles, by my loose reckoning.
Ha, silly of me, quite right. Not exactly what comes to mind when drawing circles to include a city 2/3 south down Taiwan, and 2/3 north up RoK, but fair point.
>What happened over the 8 years prior was hedge funds and banks had saddled up on Intel stock which was paying healthy dividends due to cost cutting and "coasting"
Not clear about what the role of activist hedge funds is here but Intel's top shareholders are mutual funds like Vanguard which are part of many people's retirement investments. If an activist hedge fund got to run the show, it means that they could get these passive shareholders on their side or to abstain. It would have meant those funds along with pension funds, who should have been in a place to push back against short term thinking, didn't push back. These funds should really be run much more competently given their outsized influence, but the incentives are not there.
There's probably no need to imagine these conspiracy-like machinations of shareholders. Intel fucked up badly, and process development is a certified crazy train to la-la land.
(dropping molten tin 1000 times a second and then shooting it with a laser just to get a lamp that can bless you with the hard light you need for your fancy fine few nanometers thin shadows? sure, why not, but don't forget to shoot the plasma ball with a weaker pulse to nudge it into the shape of a lens, cheerio.
and you know that all other parts are similarly scifi sounding.
and their middle management got greedy and they were bleeding talent for a decade.)
Everyone acts as though Intel should have seen everything coming. Where was AMD? Was AMD really competitive before Ryzen? Nope. The Core 2 series blew them out of the water. Was ARM really competitive until recently? Nope. Intel crushed them. The problem for Intel is the inertia of laziness due to a lack of competition. I wouldn't count them out just yet, however. The company's first true swing at a modern GPU was actually good for a first attempt. Their recent CPUs, while not quite as good as Ryzen, aren't exactly uncompetitive. Their foundry business faltered because they were trying a few things never done before, not because they were incompetent. Also, 20A and 18A are coming along. I am not an Intel fan at all. I run AMD and ARM. My dislike isn't technological though; it's just that I hate their underhanded business practices.
The curse of having weak enemies is that you become complacent.
You're right: AMD wasn't competitive for an incredibly long time and ARM wasn't really meaningful for a long time. That's the perfect situation for some MBAs to come into. You start thinking that you're wasting money on R&D. Why create something 30% better this year when 10% better will cost a lot less and your competitors are so far behind that it doesn't matter?
It's not that Intel should have seen AMD coming or should have seen ARM coming. It's that Intel should have understood that just because you have weak enemies today doesn't mean that you have an unassailable castle. Intel should have been smart enough to understand that backing off of R&D would mean giving up the moat they'd created. Even if it looked like no one was coming for their crown at the moment, you need to understand that disinvestment doesn't get rewarded over the long-run.
Intel should have understood that trying to be cheap about R&D and extract as much money from customers wasn't a long-term strategy. It wasn't the strategy that built them into the dominant Intel we knew. It wouldn't keep them as that dominant Intel.
> It's that Intel should have understood that just because you have weak enemies today doesn't mean that you have an unassailable castle.
Their third employee, who later went on to become their third CEO and guided Intel through the memory-to-processor transition, literally coined the term and wrote a book called "Only the Paranoid Survive" [1]. It's inexcusable that management degraded that much.
Yes, I agree. However, I don’t necessarily see this book title as an imperative to innovate. Patent trolling can also be a way to deal with competitors.
After all, Apple and ARM came from the idea of building better end-user products around softer factors than sheer CPU power. Since Intel's products are neither highly integrated phones nor assembled computers, Intel had no direct stake.
Apple came from the recreational "there is now a 10 times cheaper CPU than anything else and I can afford to build my video terminal into a real computer in my bedroom" and "maybe we can actually sell it?". [1]
ARM literally came from “we need a much better and faster processor” and “how hard can this be?” [2]
To be fair, they should have seen Ryzen coming; any long-term AMD user knew years before Ryzen landed that it was going to be a good core, because AMD were very vocal about how badly wrong they had bet with Bulldozer (the previous core family).
AMD bet BIG on the software industry leaning heavily on massive thread counts over high-throughput, single-threaded usage... But it never happened, so the cores tanked.
It was never a secret WHY that generation of core sucked, and it was relatively clear what AMD needed to do to fix the problem, and they were VERY vocal about "doing the thing" once it became clear their bet wasn't paying off.
From a consumer perspective, Bulldozer and revisions as compared to Skylake and revisions were:
+ comparable on highly multi-threaded loads
+ cheaper
- significantly behind on less multi-threaded loads
- had 1 set of FPUs per 2 cores, so workloads with lots of floating point calculations were also weaker
- Most intensive consumer software was still focused on a single thread or a very small number of threads (this was also a problem for Intel in trying to get people to buy more expensive i7s/i9s over i5s in those days)
Bulldozer was contemporary with Sandy Bridge, not Skylake. Piledriver competed with Ivy Bridge and Haswell. The next Construction cores (Steamroller and Excavator) were only found in APUs, and not in desktop FX parts. Around the time of Skylake, AMD didn't have a meaningful presence in the desktop space. All they were selling was quad-core APUs based on minor revisions of Bulldozer, and the highly outdated FX-x3xx Piledrivers.
1. Bulldozer had a very long pipeline, akin to a Pentium 4. This allows for high clocks but comparatively little work being done per cycle vs. the competition. Since clocks have a ceiling around 5 GHz, they could never push the clocks high enough to compete with Intel.
2. They used an odd core design with 1 FPU for every 2 integer units, instead of the normal 1:1 that we have seen on every x86 since the i486. This led to very weak FPU performance, which is needed for many professional applications. Conversely, it allowed for very competitive performance on highly threaded integer applications like rendering. This decision was probably made under the assumption that APUs would integrate their GPUs better and software would be written with that in mind, since a GPU easily outdoes a CPU's FPU but requires more programming. This didn't come to be.
3. They were stuck using GlobalFoundries due to contracts from when they spun it off, which required AMD to use GloFo. This became an anchor as GloFo fell behind market competitors like TSMC, leaving AMD stuck on 32nm for a long while, until GloFo got 14nm and AMD eventually got out of the contract between Zen 1 and 2.
Bonus: many IC designers have bemoaned how much of Bulldozer's design was automated with little hand modification, which tends to lead to a less optimized design.
3.
There's been lots written about this but this is my opinion.
Bulldozer seemed to be designed under the assumption heavy floating point work would be done on the GPU (APU) which all early Construction cores had built in. But no one is going to rewrite all of their software to take advantage of the iGPU that isn't present in existing CPUs and isn't present in the majority of CPUs (Intel), so it sort of smelled like Intel's Itanic moment, only worse.
I think they were desperate to see some near term return on the money they spent on buying ATI. ATI wasn't a bad idea for a purchase but they seemed to heavily overpay for it which probably really clouded management's judgement.
This sounds like Google. Some bean counter is firing people left and right, and somehow they think that's going to save them from the fact that AI answers destroy their business model. They need more people finding solutions, not fewer.
I got curious about how ARM is doing in the data center and found this:
>Arm now claims to hold a 10.1% share of the cloud computing market, although that's primarily due to Amazon and its increasing use of homegrown Arm chips. According to TrendForce, Amazon Web Services (AWS) was using its custom Graviton chips in 15% of all server deployments in 2021.
ARM would be even more popular in the datacenter if getting access to Ampere CPUs were possible.
I can get a top-of-the-line Xeon Gold basically next day, with incredibly high quality out-of-band management, from a reputable server provider (HP, Dell).
Ampere? Give it 6 months, €5,000 and maybe you can get one, from Gigabyte. Not known for server quality.
(yes, I'm salty, I have 4 of these CPUs and it took a really long time to get them while they cost just as much as AMD EPYC Milans).
I'm using Ampere powered servers on Oracle cloud and boy, they're snappy, even with the virtualization layer on top.
Amazon has its own ARM CPUs on AWS, and you can get them on demand, too.
Xeons and EPYCs are great for "big loads", however some supercomputer centers also started to install "experimental" ARM partitions.
The future is bright not because Intel is floundering, but because there'll be at least three big CPU producers (ARM, AMD and Intel).
Also, don't have prejudices about "brands". Most motherboard brands can design server-class hardware if they wish. They're just making different trade-offs because of the market they're in.
I've used servers which randomly fried parts of their motherboard when they saw some "real" load. Coming in one morning and having no connectivity because a top-of-the-line 2-port gigabit onboard Ethernet fried itself on a top-of-the-line, flagship server is funny in its own way.
Since roughly the first year of COVID, supply generally has been quite bad. Yes, I can get _some_ Xeon or EPYC from HPE quickly, but if I care about specific specs it's also a several-month wait. For midsized servers (up to about 100 total threads) AMD still doesn't really have competition if you look at price, performance and power - I'm currently waiting for such a machine, and the Intel option would've been 30% more expensive at worse specs.
> Amazon Web Services (AWS) was using its custom Graviton chips in 15% of all server deployments in 2021
I'm guessing this has increased since 2021. I've moved the majority of our AWS workloads to ARM because of the price savings (it mostly 'just works'). If companies are starting to tighten their belts, this could accelerate ARM adoption even more.
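For what it's worth, the mechanics of the switch are pretty mundane. Here's a minimal, illustrative boto3 sketch of the EC2 side - the region, AMI name filter and instance type are placeholder assumptions, not a recipe - showing that "going ARM" mostly means picking an arm64 image and a Graviton instance family:

    # Sketch: switching to ARM on EC2 is mostly picking an arm64 AMI and a
    # Graviton instance type. Assumes boto3 is installed and AWS credentials
    # are configured; the filters and names below are illustrative only.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Find a recent Amazon Linux 2023 AMI built for arm64 (Graviton).
    images = ec2.describe_images(
        Owners=["amazon"],
        Filters=[
            {"Name": "architecture", "Values": ["arm64"]},
            {"Name": "name", "Values": ["al2023-ami-*"]},
            {"Name": "state", "Values": ["available"]},
        ],
    )
    ami = max(images["Images"], key=lambda i: i["CreationDate"])["ImageId"]

    # Launch on a Graviton instance type (m7g = Graviton3, m6g also works).
    ec2.run_instances(
        ImageId=ami,
        InstanceType="m7g.large",
        MinCount=1,
        MaxCount=1,
    )

The harder part is usually rebuilding container images and native dependencies for arm64; interpreted and JVM workloads tend to move over with no changes.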
It'll probably get there, but it'll probably be a slow migration. After all, there's no point in tossing out all the Xeons that still have a few years left in them. But I believe Google is now also talking about, or is already working on, their own custom chip similar to Graviton. [1]
ARM has been really competitive since, well, 2007, when the first iPhone hit the market, and when Android followed in 2008. That is, last 15 years or so. Not noticing a hugely growing segment that was bringing insane reams of cash to Apple, and Qualcomm, Samsung and others involved is not something I could call astute.
Pretty certainly, Intel is improving, and of course should not be written off. But they did get themselves into a hole to dig out from, and not just because the 10nm process was really hard to get working.
> Not noticing a hugely growing segment that was bringing insane reams of cash to Apple, and Qualcomm, Samsung and others involved is not something I could call astute.
And it's not like they didn't notice either. Apple literally asked Intel to supply the chips for the first iPhone, but the Intel CEO at the time "didn't see it".
I agree mobile was a miss, but the linked article actually quotes Intel's former CEO making a pretty good argument for why they missed:
> "The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do... At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn't see it. It wasn't one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought."
In that circumstance, I think most people would have made the same decision.
Kind of speaks to how Intel was not competitive in the space at all. If it was truly that the marginal cost per part was higher than the requested price, either Apple was asking for the impossible and settled for a worse deal with an ARM chip, or Intel did not have similar capabilities.
I’m not so sure; he made a choice purely on “will it make money now”, not “well, let’s take a chance and see if this pays off big, and if not we’ll lose a little money”.
It’s not like they couldn’t afford it, and taking chances is important.
Ok, but you have to view this through the lens of what was on the market at the time and what kind of expectations Intel likely would have had. I can't imagine that Apple told Intel what they were planning. Therefore, it would have been reasonable to look around at the state of what existed at the time (basically, iPods, flip phones, and the various struggling efforts that were trying to become smartphones at the time) and conclude that none of that was going to amount to anything big.
I'm pretty sure most people here panned the iPhone after it came out, so it's not as if anyone would have predicted it prior to even being told it existed.
Intel also had a later chance when Apple tried to get off the Qualcomm percent per handset model. This was far after the original iPhone. Apple also got sued for allegedly sharing proprietary Qualcomm trade secrets with Intel. And Intel still couldn’t pull it off despite all these tailwinds.
And that statement is hilarious in light of the many failed efforts (eg subsidies for Netbooks and their embedded x86 chip) where they lit billions on fire attempting to sway the market.
FWIW I don't buy his explanation anyway. Intel at the time had zero desire to be a fab. Their heart was not in it. They wanted to own all the IP for fat margins. They have yet to prove anything about that has changed despite the noise they repeatedly make about taking on fab customers.
That was very lucky for Apple though. Nokia made deals with Intel to provide the CPU for upcoming phone models, and had to scramble to redesign them when it became clear Intel was unable to deliver.
Not quite true - the Intel projects were at a pretty early stage when Elop took over and the whole Microsoft thing happened - and the projects got canned as part of the cleanup and the move to Windows for the phones.
The CPUs were indeed horrible, and would've caused a lot of pain if the projects had actually continued. (Source: I was working on the software side for the early Nokia Intel prototypes.)
Thanks for the insights. The N9 was originally rumored to use Intel, and it was still being speculated [1] half a year before the release. Was that then also switched by Elop as part of the whole lineup change, or were those rumors unfounded in the first place?
Pretty much all rumors at that time were very entertainingly wrong.
I think at the time that article got published we didn't even have the Intel devboards distributed (that is, a screen and macroboards, way before it starts looking like a phone). We did have some Intel handsets from a 3rd party for MeeGo work, but that was pretty much just proof of concept - nobody ever really bothered looking into trying to get the modem working, for example.
What became the N9 was planned all along as an ARM-based device - the exact name and specs changed a few times, but it was still pretty much developed as a Maemo device, just using the MeeGo name for branding, plus having some of the APIs (mainly Qt Mobility and QML) compatible with what was scheduled to become MeeGo. The QML stuff was a late addition there - originally it was supposed to launch with MTF, and the device was a wild mix of both when it launched, with QML having noticeable issues in many areas.
Development on what was supposed to be proper MeeGo (the cooperation with Intel) happened with only a very small team (which I was part of) at that time, and was starting to slowly ramp up - but the massive developer effort from Nokia to actually make a "true" MeeGo phone would've started somewhere around mid-2011.
And a few years prior to that Intel made the most competitive ARM chips (StrongARM). Chances are that an Intel chip would have powered the iPhone had they not scrapped their ARM division due to “reasons”
DEC had started developing ARM chips after they concluded it was a bad idea to try and scale down their Alpha chips to be more energy efficient.
Then, after the success of these ARM chips in the BlackBerry and most of the Palm PDAs, as well as MP3 players and HTC smartphones, Intel sold the business off so it could focus on trying to make its big chips more energy efficient - making the mistake DEC avoided.
The iPhone was a defining moment, but at the time it was completely obvious that smartphones would be a thing; it's just that people thought the breakthrough product would come from Nokia or Sony-Ericsson (who were using ARM SoCs from TI and Qualcomm respectively). Selling off the ARM division would not have been my priority.
So it's a string of unforced errors. Nevertheless, Intel remains an ARM licensee - they didn't give that up when selling StrongARM - so it seems some people still saw the future...
Sounds like the classic Innovator's Dilemma. There wasn't a lot of margin in the ARM chips, so Intel doubled down on their high-margin server and desktop chips. ARM took over the low end in portable devices and is now challenging in the datacenter.
Baum: Apple owned a big chunk of it, but when Apple was really having hard times, they sold off their chunk at quite a profit, but they sold off the chunk. And then-- oh, while Newton was going on, some people from DEC came to visit, and they said, “Hey, we were looking at doing a low power Alpha and decided that just couldn’t be done, and then looked at the ARM. We think we can make an ARM which is really low power, really high performance, really tiny, and cheap, and we can do it in a year. Would you use that in your Newton?” Cause, you know, we were using ARMs in the Newton, and we all kind of went, “Phhht, yeah. You can’t do it, but, yeah, if you could we’d use it.”
That was the basis of StrongARM, which became a very successful business for DEC. And then DEC sued Intel. Well, I worked on the StrongARM 1500, which was a very interesting product. It was an ARM and a DSP kind of highly combined. It was supposed to be like video processing using set top boxes, and things like that. And then we finished that project and our group in Palo Alto, we were just gonna start an Alpha project.
And just then it was announced that DEC was-- no. No. Intel, at that time, Intel, DEC sued Intel for patent violations, didn’t go to them and say, “Hey, pay up or stop using it.” They just sued them. Intel was completely taken by surprise. There was a settlement. The settlement was you have to buy our Microelectronics Division and pay us a whole pile of money, and everything will go away.
So they sold the Microelectronics Division, which we were part of, except for the Alpha Design Group, 'cause they didn’t think that they could sell that to Intel and have the SEC approve, 'cause the two can conflict. So I went away on vacation not knowing whether I would be coming back and working for Intel, or coming back working for DEC. And it turned out they decided to keep the Alpha Design Group, so I was still working for DEC. Except the reason for the lawsuit was Compaq wanted to buy DEC, but didn’t want to buy ‘em with this Fab and Microelectronics Division. So by doing this, they got rid of the Microelectronics Division, and now they could sell themselves to Compaq.
I think they're doing better (disclaimer: writing this from a Ryzen laptop) and their latest chip has better thermals and consumption, with a decent reputation compared to 10 years ago for instance. But yes, it's a long road ahead.
They didn't understand that the future, from smartphones to servers, was about power efficiency and scale.
Eventually their lack of power efficiency made them lose ground in all their core businesses. I hope they get this back, and not just by competing on manufacturing but on architecture too.
The writing was on the wall 6 years ago; Intel was not doing well in mobile and it was only a matter of time until that tech improved. Same as Intel unseating the datacenter chips before it. Ryzen, I will give you, was a surprise, but in a healthy competitive market, "the competition outengineered us this time" _should_ be a potential outcome.
IMO the interesting question is basically whether Intel could have done anything differently. Clayton Christensen's sustaining vs disruptive innovation model is well known in industry, and ARM slowly moving up the value chain is obvious in that framework. Stratechery says they should have opened up their fabs to competitors, but how does that work?
Previously, every time a competitor managed to out-engineer Intel, they crushed them either by having a spare process advantage they could use to brute-force performance... or by locking the competition out of markets, blocking large swathes of them through illegal deals.
The problem is that Intel has had a defensive strategy for a long time. Yes, they crushed many attempts to breach the x86 moat but failed completely and then gave up attempts to reach beyond that moat. Mobile, foundry, GPUs etc have all seen half-hearted or doomed attempts (plus some bizarre attempts to diversify - McAfee!).
I think that, as Ben essentially says, they put too much faith in never-ending process leadership and the ongoing supremacy of x86. And when that came to an end the moat was dry.
On the contrary, they tried to pivot so many times and enter different markets; they bought countless small companies, some big, and nothing seemed to stick except the core CPU and datacenter businesses. IIRC Mobileye is one somewhat successful venture.
Except they weren't real pivots. Mobile and GPU were x86-centric, and foundry was half-hearted, without buying into what needed to be done. Buying a company is the easy bit.
Part of the problem is Intel is addicted to huge margins. Many of the areas they have tried to enter are almost commodity products in comparison, so it would take some strong leadership to convince everyone to back off those margins for the sake of diversification.
They should have been worried about their process leadership for a long time. IIRC even the vaunted 14nm that they ended up living on for so long was pretty late. That would have had me making backup plans for 10nm, but it looked more like leadership just went back to the denial well for years instead. It seemed to me like they didn't start backporting designs until after Zen 1 launched.
Look at Japan generally and Toyota specifically. In Japan the best award you can get for having an outstanding company in terms of profit, topline, quality, free-cash, people, and all the good measures is the Deming Award. Deming was our guy (an American) but we Americans in management didn't take him seriously enough.
The Japanese, to their credit, did... they ran with it and made it into their own thing in a good way. The Japanese took 30% of the US auto market in our own backyard. Customers knew Hondas and Toyotas cost more but were worth every dollar. They resold better too. (Yes, some noise about direct government investment in Japanese companies was a factor too, but not the chief factor in the long run.)
We Americans got it "explained to us." We thought we were handling it. Nah, it was BS. But we eventually got our act together. Our Deming award is the Malcolm Baldrige award.
Today, unfortunately, the Japanese economy isn't rocking like it was in the 80s and early 90s. And Toyota isn't the towering example of quality it once was. I think -- if my facts are correct -- they went too McDonald's and got caught up in cutting costs in their materials and supply chain, with bad net effects overall.
So things ebb and flow.
The key thing: is management through action or inaction allowing stupid inbred company culture to make crappy products? Do they know their customers etc etc. Hell, mistakes even screw-ups are not life ending for companies the size of Intel. But recurring stupidity is. A lot of the times the good guys allow themselves to rot from the inside out. So when is enough enough already?
Intel’s problem was the cultural and structural issues in their organization, plus their decision to bet on strong OEM partner relationships to beat competition. This weakness would prevent them from being ready for any serious threat and is what they should’ve seen coming.
Intel's flaw was trying to push DUV to 10nm (otherwise known as Intel 7).
Had Intel adopted the molten tin of EUV, the cycle of failure would have been curtailed.
Hats off to SMIC for the DUV 7nm which they produced so quickly. They likely saw quite a bit of failed effort.
And before we discount ARM, we should remember that Acorn produced a 32-bit CPU with a 25k transistor count. The 80386 was years later, with 275k transistors.
Intel should have bought Acorn, not Olivetti.
That's a lot of mistakes, not even counting Itanium.
Acorn’s original ARM chip was impressive but it didn’t really capture much market share. The first ARM CPU competed against the 286, and did win. The 386 was a big deal though. First, software was very expensive at the time, and the 386 allowed people to keep their investments. Second, it really was a powerful chip. It delivered 11 MIPS vs the ARM3’s 13, but the 486 achieved 54 MIPS while the ARM6 only hit 28 MIPS. It’s worth noting that the 386 also used 32-bit memory addressing and a 32-bit bus while ARM was 26-bit addressing with a 16-bit bus.
Intel had StrongARM though. IIRC they made the best ARM CPUs in the early 2000s and were designing their own cores. Then Intel decided to get rid of it because obviously they were just wasting money and could design a better x86 mobile chip…
> Acorn produced a 32-bit CPU with a 25k transistor count. The 80386 was years later, with 275k transistors
Coincidentally, ARM1 and 80386 were both introduced in 1985. I'm a big fan of the ARM1 but I should point out that the 386 is at a different level, designed for multitasking operating systems and including a memory management unit for paging.
The problem is this... the cash cow is datacenters, and especially top-of-the-line products where there is no competition.
The fastest single-core and multi-core x86 CPUs that money can buy will go to databases and similar vertically scaled systems.
That's where you can put up the most extreme margins. It's "winner takes all the margins". Being somewhat competitive, but mostly a bit worse, is the worst business position. Also...
I put money on AMD when they were taking over the crown.
Thank you for this wakeup call. I'll watch closely to see if Intel can deliver on this and take it back, and I'll have to adjust accordingly.
And Sandy Bridge all but assured AMD wouldn't be relevant for the better part of a decade.
It's easy to forget just how fast Sandy Bridge was when it came out; over 12 years later and it can still hold its own as far as raw performance is concerned.
They should have known it was coming because of how many people they were losing to AMD, but there is a blindness in big corps when management decide they are the domain experts and the workers are replaceable.
>>> Notice what is happening here: TSMC, unlike its historical pattern, is not keeping (all of its) 5nm capacity to make low-cost high-margin chips in fully-depreciated fabs; rather, it is going to repurpose some amount of equipment — probably as much as it can manage — to 3nm, which will allow it to expand its capacity without a commensurate increase in capital costs. This will both increase the profitability of 3nm and also recognizes the reality that is afflicting TSMC’s 7nm node: there is an increasingly large gap between the leading edge and “good enough” nodes for the vast majority of use cases.
My understanding is that 5nm has been and continues to be "problematic" in terms of yield. The move to 3nm seems to not be afflicted by as many issues. There is also a massive drive to get more volume (and a die shrink will do that), due to the demands of all things ML.
I suspect that TSMC's move here is a bit more nuanced than the (valid) point that the article is making on this step...
5nm yields well at the moment. It's even fully automotive-qualified, which is a testament to its yield. But the performance advantage vs cost doesn't justify moving from 7nm for a lot of designs. So tech adoption is getting stickier.
For high-performance designs like Apple CPUs, going to the cutting edge is a given. So since 3nm is available, 5nm lost its appeal. This is new territory for TSMC but I think they handled it well. Just last year they were gearing up a lot of 5nm capacity in anticipation of people moving from 12 and 7nm to 5nm. It quickly became clear that this isn't happening. So they moved some of this to 3nm and some is going back to 7nm and 6nm (shrunk 7nm) I think. They are also cautious about buying the newest equipment from ASML, unlike Intel and Samsung. This seems to play well for TSMC.
I think TSMC learned from Intel's downfall more than Intel did. I don't see any industry traction from IFS. They can research any new technology they want; without wafer orders it's a recipe for a quick cash burn.
> some is going back to 7nm and 6nm (shrunk 7nm) I think
You can also see the lasting popularity of 7nm-class nodes in consumer products. For example, RDNA3 uses 5nm for the core parts (GCD), but the peripheral parts (the memory chiplets/MCDs) are built on 6nm, and the monolithic low-end parts (RX 7600) are even fully built on 6nm.
I think it's time to stop talking about node sizes. This reads as meaningless to me, not because I don't know what nanometers are, but because the nodes are all defined by a set of technological changes that have little to do with the measurements, and they are all very incremental and interchangeable at this point, so each node doesn't require an entirely new fab. Actually this has always been mostly true, but the capital investment of new equipment used to be less than the risk of variance from process modification. I don't think this is the case anymore. I expect to see gradually-evolving processes that are sold to customers as performance targets rather than fixed fabrication methods. Even the designs may need some kind of meta-representation that gets 'compiled' to a process, which may have traveller variances like metallization layers.
Of course we don't need to know everything, but in terms of news, maybe we could reference key technologies in use that have the largest effect on performance targets and downstream process modification.
Intel's 2010 $7.6B purchase of McAfee was a sign that Intel doesn't know what it's doing. In the CEO's words: the future of chips is security on the chip. I was like: no, no it's not! I wanted them to get into mobile and GPUs at the time. Nvidia's market cap was about $9B at the time. I know it would have been a larger pill to swallow, and they likely would have had to bid a bit more than $9B, but I thought it was possible for Intel at the time.
> The future of chips is security on the chip. I was like: no, no it's not!
Putting aside whether the statement is considered true or not, buying McAfee under the guise of the kind of security meant when talking about silicon is... weird, to say the least.
McAfee makes their money from people being required to run it for certification. Imagine government/healthcare/banking/etc. customers being obliged to use only Intel chips because they'll fail their audits (which mandate on-chip antivirus) otherwise. I hate it, but I can see the business sense in trying.
I’m not sure McAfee is the go-to for this requirement any longer. Maybe. Definitely across the 4 enterprises I’ve worked at, they all migrated away from McAfee.
Intel sold their XScale family of processors to Marvell in 2006.
I remember very well as back then I was working in University porting Linux to an Intel XScale development platform we had gotten recently.
After I completed the effort, Android was released as a public beta and I dared to port it too to that development board as a side project. I thought back then that Intel was making a big mistake by missing that opportunity. But Intel were firm believers in the x86 architecture, especially in their Atom cores.
Those little Intel PXA chips were actually very capable. Back then I had my own Sharp Zaurus PDA running a full Linux system on an Intel ARM chip and I loved it. Great performance and great battery life.
It's really sort of been downhill since they decided to play the speed-number game over all else with the Pentium 4. Even the Core i7/i9 lines that were good for a long time have gone absolutely crazy lately with heat and power consumption.
That's overly reductionist. Conroe topped out at around 3 GHz, compared to its predecessor Presler achieving 3.6 GHz.
I think Netburst mostly came from a misguided place where Intel thought that clock frequency was in fact the holy grail (and would scale far beyond what actually ended up happening), and that all the IPC issues such as costly mispredicts could be solved by e.g. improving branch prediction.
Intel's market reality is that (perceived) speed sells chips.
It's embarrassing when they go to market and there's no way to say it's faster than the other guy. Currently, they need to pump 400W through the chip to get the clock high enough.
But perf at 200W or even 100W isn't that far below perf at 400W. If you limit power to something like 50W, the compute efficiency is good.
Contrast that to Apple, they don't have to compete in the same way, and they don't let their chips run hot. There's no way to get the extra 1% of perf if you need it.
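If anyone wants to play with that power-limiting point on Linux, here's a rough sketch using the kernel's powercap/RAPL sysfs interface. Assumptions: an Intel CPU with the intel_rapl driver loaded, root access for writes, and a sysfs layout that can vary between kernels - an experiment aid, not a tuning guide:

    # Sketch: read and cap the package power limit via Linux powercap/RAPL.
    # Assumes /sys/class/powercap/intel-rapl:0 exists (intel_rapl driver);
    # writing the limit requires root.
    from pathlib import Path

    PKG = Path("/sys/class/powercap/intel-rapl:0")

    def read_uw(name: str) -> int:
        # Values in these files are in microwatts.
        return int((PKG / name).read_text())

    print("domain:", (PKG / "name").read_text().strip())  # usually "package-0"
    print("long-term limit:", read_uw("constraint_0_power_limit_uw") / 1e6, "W")

    def set_package_limit(watts: float) -> None:
        # Set the long-term package power limit, in watts (needs root).
        (PKG / "constraint_0_power_limit_uw").write_text(str(int(watts * 1_000_000)))

    # e.g. set_package_limit(50)  # cap the package at ~50 W and re-run your benchmark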
Oh, I'm quite well aware. I traded a spaceheater of an i9/3090 tower for an M1 Studio.
The difference in performance for 95% of what I do is zero. I even run some (non-AAA) Windows games via Crossover, and that's driving a 1440p 165Hz display. All while it sits there consuming no more than about 35W (well, plus a bit for all my USB SSDs, etc), and I've never seen the thermals much past 60C, even running native-accelerated LLMs or highly multithreaded chess engines and the like. It usually sits at about 40C at idle.
It's exactly what almost-40-year-old me wants out of a computer. It's quiet, cool, and reliable - but at the same time I'm very picky about input devices, so a bring-your-own-peripherals desktop machine with a ton of USB ports is non-negotiable.
I remember when they did random stuff like the whole IoT push (frankly, their offerings made no sense to humble me... Microsoft had a better IoT story than Intel). They did drone crap... gave a kick-ass keynote at CES, I recall... that also made little sense. Finally, the whole FPGA thing... makes little sense. So much value being destroyed :(
There were some technical issues with the follow-through that they didn't foresee. CPUs need to closely manage their power usage to extract maximum performance, and leaving a big chunk of the power budget on the table in case the FPGA needs it works against that. That mostly killed the idea of putting an FPGA on the die.
Regarding other plans, QPI and UPI for cache-coherent FPGAs were pretty infeasible to do at the sluggish pace that they need in the logic fabric. CXL doesn't need a close connection between the two chips (or the companies), and just uses the PCIe lanes.
FPGA programming has always been very hard, too, so the dream of them everywhere is just not happening.
That was not the point of the Altera acquisition. The point was to fill Intel's fabs, but the fab fiasco left Altera/Intel-FPGA without a product to sell (Stratix 10 -- 10nm -- got years of delay because of that). Meanwhile Xilinx was racing ahead on TSMC's ever-shrinking process.
I remember when they bought a smart glasses company and then refunded every buyer the full retail price. There hasn't been an Intel acquisition that has worked out in some 20 years now, it seems. Just utterly unserious people.
Google built its core advertising ecosystem on acquisitions (Applied Semantics, DoubleClick, AdMob, etc) and extended it into the mobile space by buying Android.
Apple does really well on its rare acquisitions, but they aren't very public as they get successfully absorbed. PA Semi, Intrinsity, more I can't remember.
ATi and Xilinx have by all accounts worked out really well for AMD.
I was a process engineer there in the early 2000s; they did crazy random shit then too! They had an 'internet TV' PC that was designed to play MP4s in 2001.
Then they would have worse-performing chips and the market wouldn't care about the security benefits. Cloud providers may grumble, but they aren't the most important market anyway.
Intel pivoting to GPUs was a smart move but they just lacked the tribal knowledge needed to successfully ship a competitive GPU offering. We got Arc instead.
They mostly work now and they are decent options at the low-end (what used to be the mid-range: $200) where there is shockingly little competition nowadays.
However, they underperform greatly compared to competitors' cards with similar die areas and memory bus widths. For example the Arc A770 is 406mm^2 on TSMC N6 and a 256-bit bus and performs similarly to the RX 6650XT which is 237mm^2 on TSMC N7 with a 128-bit bus. They're probably losing a lot of money on these cards.
It's getting better and drivers are improving all the time. I personally liked the Arc for the hardware AV1 encoding. Quicksync (I use qsvencc) is actually pretty decent for a hardware encoder. It won't ever beat software encoding, but the speed is hard to ignore. I don't have any experience using it for streaming, but it seems pretty popular there too. Nvidia has nvenc, and reviews say it's good as well but I've never used it.
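For anyone curious what the hardware AV1 path looks like in practice, here's a rough sketch driving it through ffmpeg's av1_qsv encoder instead of qsvencc (the tool I actually use). It assumes an ffmpeg build with Quick Sync support and a GPU that can encode AV1 (e.g. Arc); the bitrate and filenames are placeholders:

    # Sketch: Quick Sync AV1 encode via ffmpeg's av1_qsv encoder.
    # Assumes a Quick Sync-enabled ffmpeg build and AV1-capable hardware.
    import subprocess

    def av1_qsv_encode(src: str, dst: str, bitrate: str = "6M") -> None:
        cmd = [
            "ffmpeg",
            "-hwaccel", "qsv",   # hardware decode where possible
            "-i", src,
            "-c:v", "av1_qsv",   # Quick Sync AV1 encoder
            "-b:v", bitrate,
            "-c:a", "copy",      # pass audio through untouched
            dst,
        ]
        subprocess.run(cmd, check=True)

    # av1_qsv_encode("capture.mkv", "capture_av1.mkv")

Quality-based modes exist too, but bitrate mode is the simplest way to sanity-check that the hardware path is actually being used.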
This. If you follow GamersNexus, there are stories every month about just how much the Arc drivers have improved. If this rate continues and the next-gen hardware (Battlemage) actually ships, then Intel might be a serious contender for the midrange. I really hope Intel sticks with it this time as we all know it takes monumental effort to enter the discrete GPU market.
Arc seems more like where the GPU market will 'be' in another 2-6 years, where Arc's second or third iteration might be more competitive: Vulkan/future focused, and fast enough that translation layers for old (<= DX11 / OpenGL) titles are worth it.
If you're hoping for an Nvidia competitor, the units in that market may sell for more per unit, but there's already a 1-ton gorilla there and AMD can't seem to compete either. Rather, Arc makes sense as an in-house GPU unit to pair with existing silicon (CPUs), and as low/mid-range dGPUs to compete where Nvidia's left that market and where AMD has a lot of lunch to undercut.
One unfortunate note on Nvidia data center GPUs: to fully utilize features such as vGPU and Multi-Instance GPU, there is an ongoing licensing fee for the drivers.
I applaud Intel for providing fully capable drivers at no additional cost. Combined with better availability for purchase they are competing in the VDI space.
Intel has an amazing track record with acquisitions -- almost none of them work out. Even for the tiny fraction of actually good companies they acquired, the Intel culture is one of really toxic politics and it's very hard for acquired people to succeed.
I wish Pat well, and I think he might be the only one who could save the company, if it's not already too late.
Source: worked with many ex-Intel people.
POSTSCRIPT: I have seen from the inside (not at Intel) how a politically motivated acquisition failed utterly spectacularly due to that same internal power struggle. I think there are some deeply flawed incentives in corporate America.
Not gonna lie, I had a professor who retired from Intel as a director or something like that. Worst professor I had the entire time. We couldn't have class for a month because he 'hurt his back,' then half of us saw him playing a round of golf two days later.
I've heard the reason AMD bought ATI instead of Nvidia is that Jensen wanted to be CEO of the combined company for it to go through. I actually think AMD would have been better off if they had taken that deal.
Prior to the ATI acquisition, Nvidia had actually been the motherboard chipset manufacturer of choice for AMD CPUs for a number of years.
ISTR there was sort of a love-hate relationship with a lot of nVidia chipsets.
nVidia always had the trump card of saying "if you want SLI, you have to buy our chipset." But conversely, a lot of the options weren't great. VIA tended to alternate between decent and incompetent chipsets, SIS was mostly limited to the bottom of the market, and ATI's chipsets were very rare.
AMD is doing fantastic and its CEO is great. It would be a big letdown if they had bought Nvidia, as we'd have a single well-run company instead of two.
Unlikely with AMD owning ATI. The reason NVidia was blocked from buying ARM was because of the many, many third parties that were building chips off ARM IP. Nvidia would have become their direct competitor overnight with little indication they would treat third parties fairly. Regulators were rightly concerned it would kill off third party chips. Not to mention the collective lobbying might of all the vendors building ARM chips.
There were and are exactly zero third parties licensing nvidia IP to build competing GPU products.
Like, it’s of course dependent on what “build competing products” means, but assuming you mean semicustom (like AMD sells to Sony or Samsung), then Nvidia isn’t as intractably opposed as you’re implying.
Regulators can be dumb fanboys/lack vision too, and Nvidia very obviously was not spending $40B just to turn around and burn down the ecosystem. Being kingmaker on valuable IP is far more valuable than selling some more Tegras for a couple of years. People get silly when Nvidia is involved and make silly assertions, and most of the stories have become overwrought and passed into mythology. Bumpgate is… something that happened to AMD on that generation of GPUs too, for instance. People baked their 7850s to reflow the solder back then too - did AMD write you a check for their defective GPU?
Maybe, however, the GPU market was not considered so incredibly valuable at the time (particularly by eg politicians in the US, Europe or China). Today it's a critical national security matter, and Nvidia is sitting on the most lucrative semiconductor business in history. Back then it was overwhelmingly a segment for gaming.
I worked at Intel between '97 and '07. MFG was absolute king. Keeping that production line stable and active was the priority that eclipsed all. I was a process engineer, and to change a gas flow rate on some part of the process by a little bit, I'd have to design an experiment, collect data for months, work with various upstream/downstream teams, and write a change control proposal that would exceed a hundred pages of documentation. AFAIK, that production line was the most complex human process that had happened to date. It was mostly run by 25-30 year old engineers. That in itself was a miracle.
It has implications for his decision making process though. Christianity requires comfortably handling internally inconsistent information and taking a superficial approach to evidence. Whether that is an advantage in a CEO is unclear. It probably helps him remain confident in Intel's outlook.
With the US re-industrialising, semiconductors are a strategic priority, so Intel will be at the heart of that effort. They're going to be a major beneficiary of it, here and in Europe. They'll be a significant player.
They got lazy and rested on their laurels when AMD was struggling, and they didn't view ARM as a threat. TSMC was probably a joke to them... until everyone brought out killer products and Intel had no response to them. They could have been way ahead of the pack by now, but they decided to harvest the market instead of innovating aggressively. Right now they're worth less than $200bn, which is less than half of Broadcom or TSMC, 30% less than AMD and 10% of Nvidia. Is it intrinsically worth that little? Probably not; I think it's a buy at this price.
My own humble opinion is that Intel has always suffered from market cannibalization. They are a brand I look for, but many times the product iteration will force me to go a generation or two older because I can't argue with the price and features. By the time I was sold on a NUC, they were discontinued. I wanted a discrete GPU when they announced Xe, but it has become Xe, Arc Alchemist, Battlemage, Celestial, and Druid. By the time I'm ready to spend some money it will usually have become something else. Also, they should have snapped up Nuvia. I'm still rooting for them, but really, if they could streamline their products and be willing to take a leap of faith on others in the same space it would help out a lot.
>I wanted a discrete GPU when they announced Xe, but it has become Xe, Arc Alchemist, Battlemage, Celestial, and Druid.
They've made this situation fairly clear, in my eyes.
Alchemist is the product line for their first attempt at true dedicated GPUs like those Nvidia and AMD produce. It's based on Intel Xe GPU architecture.
It's done decently well, and they've been very diligent about driver updates.
Battlemage is the next architecture that will replace it when it's ready, which I believe was targeted for this year - similar to how the Nvidia 4000 series replaced the 3000 series before it. Celestial comes a couple of years later, then Druid a couple of years after that, etc. They don't exist simultaneously; they're just the names they use for generations of their GPUs.
Brilliant article. He is totally right that what Pat Gelsinger is doing is as brave as what Andy Grove did and just as essential. In hindsight Andy Grove was 100% right and I hope Pat Gelsinger is proved right.
The fact that Intel stock went up during the Brian Krzanich tenure as CEO is simply a reflection of that being the free money era that lifted all boats/stocks. Without that we would be writing Intel’s epitaph now.
You cannot play offense in tech when there is a big market shift.
> I thought that Krzanich should do something similar: Intel should stop focusing its efforts on being an integrated device manufacturer (IDM) — a company that both designed and manufactured its own chips exclusively — and shift to becoming a foundry that also served external customers.
That would only work if Intel has a competitive foundry. Intel produces very high margin chips. Can it be competitive with TSMC in low margin chips where costs must be controlled?
The rumors I've heard (not sure about their credibility) are that Intel is simply not competitive in terms of costs and yields.
And that's even before considering it doesn't really have an effective process competitive with TSMC.
It's easy to say it should become a foundry, it's much harder to actually do that.
I worked at (American drink company in Japan) previously and saw what the poster may be referring to.
Management consulting sells ideas, many of them silly or unimportant, that are marketed and packaged as era-defining. A manager who implements #FASHIONABLE_IDEA can look forward to upward mobility, while boring, realistic, business-focused ideas from people in the trenches usually get ignored (unless you want a lateral transfer to a similar job). A hashtag collection of ideas is much easier to explain when the time comes for the next step up.
This explains why you get insane things like Metaverse fashion shows that barely manage to look better than a liveleak beheading video. These sorts of things might seem like minor distractions, but getting these sorts of boondoggles up and running creates stress and drowns out other concerns. Once the idea is deployed, the success or failure of the idea is of minimal importance, it must be /made/ successful so that $important_person can get their next job.
These projects starve companies of morale, focus and resources. I recall the struggle for a ~USD $20k budget on a project to automate internal corporate systems, while some consultants received (much) more than 10 times that amount for a report that basically wound up in the bin.
Oddly, this sort of corporate supplication to management consultants worked out for me (personally). I was a dev who wound up as a manager and was able to deliver projects internally, while other decent project managers could not get budget and wound up looking worse for something that wasn't their fault.
I don't think any of the projects brought by management consultants really moved the needle in any meaningful way while I worked for any BigCos.
It’s hard to believe that a manager can be effective if they literally cannot do the job of those they manage.
I can imagine such folk turning to fads as an appeal to pseudo-expertise.
People come in on a short term basis. They don't know the company, the business, or the employees. They make long term decisions by applying generic and simplified metrics without deep understanding.
Intel has truly been on a remarkable spree of extremely poor strategic decisions for the last 20 years or so. Missed the boat on mobile, missed the boat on GPUs and AI, focused too much on desktop, and now AMD and ARM-based chips are eating their lunch in the data centre area.
You're missing the big one: they missed the boat on 64-bit. It was only because they had a licensing agreement in place with AMD that they were able to wholesale adopt AMD's extensions to deliver 64-bit x86 processors.
That's not at all what happened. Intel's 64-bit story was EPIC/IA-64/Itanium, and it was an attempt to gain a monopoly and keep x86 for the low end. AMD64 and the Itanic derailed that idea so completely that Intel was forced by Microsoft to adopt the AMD64 ISA. Microsoft refused to port to yet another incompatible ISA.
Had Itanium been a success then Intel would have crushed the competition (however it did succeed in killing Alpha, SPARC, and workstation MIPS).
I don’t think it was Itanium that killed SPARC. On workstations it was the improved reliability of Windows and, to some extent, Linux. Sun tried to combat this with lower-cost systems like the Ultra 5, Ultra 10, and Blade 100. Sun fanatics dismissed these systems because they were too PC-like. PC fanatics saw them as overpriced and unfamiliar. With academic pricing, a $3500 Ultra 10 with 512 MB of RAM and an awful IDE drive ran circles around a $10000 HP C180 with 128 MB of RAM and an OK SCSI drive, because the Sun never had to hit swap. I think Dell and HP x86 PC workstations with similar specs to the Ultra 10 were a bit cheaper.
On servers, 32 bit x86 was doing wonders for small workloads. AMD64 quickly chipped away at the places where 1-4 processor SPARC would have previously been used.
I think that Itanium had zero impact on anything (besides draining Intel of money) due to high cost and low performance.
It could not run x86 apps faster than x86 CPUs, so it didn't compete in the MS Windows world. Itanium was a headache for compiler writers as it was very difficult to optimize for, so it was difficult to get good performance out of Itanium and difficult to emulate x86.
Itanium was introduced after the dot-com crash, so the market was flooded with cheap, slightly used SPARC systems, putting even more pressure on price.
This is unlike when Apple introduced Macs with PowerPC CPUs: they had much higher performance than the 68040 CPU they replaced. PowerPC was price-competitive and easy to write optimizing compilers for.
Itanium itself did nothing. But the Itanium announcement basically killed MIPS, Alpha and PA-RISC. Why invest money into MIPS and Alpha when Itanium is going to come out and destroy everything with its superior performance?
So ironically announcing Itanium was genius, but then they should have just canceled it.
Microsoft did port to Itanium. Customers just didn't buy the chips. They were expensive, the supposed speed gains from "smarter compilers" never materialized, and their support for x86 emulation was dog slow (both hardware and later software).
No one wanted Itanium. It was another political project designed to take the wind out of HP and Sun's sails, with the bonus that it would cut off access to AMD.
Meanwhile AMD released AMD64 (aka x86-64) and customers started buying it in droves. Eventually Intel was forced to admit defeat and adopt AMD64. That was possible because of the long-standing cross-licensing agreement between the two companies that gave AMD rights to x86 way back when nearly all CPUs had to have "second source" vendors. FWIW Intel felt butt-hurt at the time, thinking the chips AMD had to offer (like I/O controllers) weren't nearly as valuable as Intel's CPUs. But a few decades later the agreement ended up doing well for Intel. At one point Intel made some noise about suing AMD for something or another (AVX?) but someone in their legal department quickly got rid of whoever proposed nonsense like that because all current Intel 64-bit CPUs rely on the AMD license.
Maybe I wasn’t clear; I meant that after Itanium failed, Microsoft refused to support yet another 64-bit extension of x86, as they already had AMD64/x64 (and IA-64, obviously).
No, they were on the boat, they just mismanaged it. They used to make ARM chips, but sold that business off just before the first iPhone was released, as they saw no future in mobile CPUs. Same with network processors for routers around the same time.
They have been trying half-heartedly with GPUs on and off since the late 1990s i740 series.
The root cause is probably the management mantra "focus on core competencies". They had an effective monopoly on fast CPUs from 2007 until 2018. This monopoly meant very little improvement in CPU speed.
Is it just the fate of large successful companies? The parallels with Boeing always come to mind. We’ve seen this play out so many times through history, it’s why investing in the top companies of your era is a terrible idea.
As of Nov 2023, BRK has ~915.5M AAPL shares, with a market cap of $172B. BRK’s market cap is $840B.
Per the CNBC link, the market cap of BRK’s AAPL holding is 46% of the market cap of BRK’s publicly listed holdings, but BRK’s market cap being much higher than $172B/46% means there is $840B - $374B = $466B worth of market cap in BRK’s non publicly listed assets.
I would say $172/$840 = 20% is more representative of BRK’s AAPL proportion.
Meta was beaten down because investors were worried Mark had gone rogue. Now that he's laid a bunch of people off to show that the investors are in control, they're cool with the metaverse.
[...] when Intel’s manufacturing prowess hit a wall Intel’s designs were exposed. Gelsinger told me: 'So all of a sudden, as Warren Buffet says, “You don’t know who’s swimming naked until the tide goes out.” [...]'
This is so common that it happens all the time with successful companies. They don't have to make good decisions. They have more than enough cash to keep making bad decision after bad decision, whereas a smaller company would collapse.
Apple and Microsoft have both managed to avoid this, and been the exception.
> Datacenters were and still are full of monstrous Xeons, and for a good reason.
To ask the foolish question, why? My guess would be power efficiency. I've only ever seen them in workstations, and in that use-case it was the number of cores that was the main advantage.
It's a compact package with many cores, lots of cache memory, lots of memory channels, and lots of PCIe lanes. All within a large but manageably hot die.
Space and power in datacenters are at a premium; packing so many pretty decent cores into one CPU lets you run a ton of cloud VMs on a physically compact server.
AMD EPYC, by the way, follows the same datacenter-oriented pattern.
Looking at Statistica [0] I see Intel at 62% and AMD at 35% on desktop for 23Q2.
That's a significant gap, and having more than half of the market is nothing to sneeze at, but I think they've moved from being dominant to just being a major player.
IF (big IF) the trend continues we might see Intel and AMD get pretty close, and a lot more competition in the market again (I hope).
On the server side, I don't have the numbers, but that's probably a much harder turf for Intel to protect going forward, if they're even still ahead?
Cloud providers are very careful to make sure of that - they have deliberately eschewed performance increases that are possible (and have occurred in the consumer market) in favor of keeping the “1 vcpu = 1 sandy bridge core” equivalence. The latest trend is those “dense cores” - what a shocking coincidence that it’s once again sandy bridge performance, and you just get more of them.
They don’t want to be selling faster x86 CPUs in the cloud; they want you to buy more vCPU units instead, and they want those units to be ARM. And that’s how they’ve structured their offerings. It’s not the limit of what’s possible, just what is currently the most profitable approach for Amazon and Google.
The trend ain't their friend in those markets though. Many folks are running server workloads on ARM, and their customer base is drastically more concentrated and powerful than it once was. Apple has shown the way forward on PC chips.
Time will tell. It's not meant to be hyperbolic - I'm short them and lots of others are and expect it will be a disaster going forward. There are obviously people on the other side of that trade with different expectations, so we will see.
This reminds me of the situation at Boeing, albeit w/ less fatal consequences. For a long time, it's been a company that has focused on maximizing profits through "innovative" business practices, first and foremost, rather than innovative R&D. It's completely unsurprising that Intel has been struggling lately against its genuinely inventive competition.
As someone that doesn’t really understand the space and stuff that the article is talking about, I was surprised that “Apple Silicon” didn’t appear here since from my perspective having used both Intel and Apple Silicon Macs, it’s a huge “wow” change. Was Apple leaving and Apple Silicon Macs being so incredibly better than Intel ones not actually a big deal for Intel?
Apple had been sandbagging the Intel chips for several generations of MacBook with anemic cooling solutions, poor thermal throttling curves, and fans that wait until the last moment to turn on - to the extent that the fastest CPUs available in MacBook Pros did not outperform lower models, because both became thermally saturated. When properly cooled, the Intel silicon tends to perform a lot better.
Additionally, both Intel and AMD manufacture CPUs for HEDT and servers which are far beyond anything Apple is fabricating at the moment. Apple has no response to Epyc, Threadripper, or higher-end Xeons. Similarly, Apple has no equivalent to Intel, AMD, and Nvidia discrete GPUs.
Apple made a quality cell phone chip, and managed to exploit chiplets and chip-to-chip interconnects to double and quadruple it into a decent APU. But it's still an APU, just one segment addressed by current x86 designs.
Absolutely ridiculous. No, Apple did not juice the M1 by giving it better cooling than x86. Quite the opposite, they took a big chunk of their design+process wins to the bank and exchanged them for even cooler and quieter operation because that's what they wanted all along. Cool and quiet was a deliverable.
It's absurd to point out that apple could have gotten higher perf from x86 at higher power as if it's some kind of win for intel. Yes, obviously they could have, that's true of every chip ever. They could take it even further and cool every macbook with liquid nitrogen and require lugging around a huge tank of the stuff just to read your email! They don't, because that's dumb. Apple goes even further than most laptop manufacturers and thinks that hissing fans and low battery life are dumb too. This leads them to low power as a design constraint and what matters is what performance a chip can deliver within that constraint. That's the perspective from which the M1 was designed and that's the perspective from which it delivered in spades.
Maybe it's not clear to everyone... hot transistors waste more power as heat. It's a feedback loop. And it doesn't require liquid nitrogen to nip in the bud. Running the chip hot benefits no one. Kicking on a fan, and racing to idle without throttling would use less battery power.
I'm not sure what's so upsetting about the assertion that chips composed of a similar number of transistors, on a similar process node, cooled similarly, might function similarly. Because when all variables are controlled for, that's what I see.
Even taking the latest Intel has to offer, the Core Ultra 9 185H in a system with active cooling, and putting it up against the fanless M1 MacBook Air: it comes out equal in performance at more than triple the power usage - 3 years after the fact. It's got nothing to do with a feedback loop; there is less cooling capacity and not even a fan on the Apple model, so a fan wasn't somehow needed for that performance level. Not to mention that a conspiracy by Apple to run all their MacBooks like shit for many years prior to switching, all to make Intel look bad, is a bit ludicrous - if the problem was ever that the fans weren't on enough, they would have just turned them up. Turns out, they could remove them instead!
The one place I agree with you is in the >~16-core space (server style) with TBs of RAM, where total performance density is more important than power; they don't bother to really compete there. Where I differ slightly is that I don't think there is anything about the technical design that prevents this - just look at how Epyc trounced Intel in that space by using a bunch of 8-core chiplets instead of building a monolith - rather, they just don't have any interest in serving that space. If Apple was able to turn a phone chip into something with the multicore performance of a 24-core 13900K, it doesn't exactly scream "architectural limitation" to me.
Parent didn't deserve any charity--Apple didn't sandbag anyone. They made the laptop they felt was the future based off a processor roadmap Intel failed to deliver on.
The fact that the exact same laptop designs absolutely soared when an M1 was put in them with no changes tells you everything you need to know about how Intel dropped the ball.
> The fact that the exact same laptop designs absolutely soared when an M1 was put in them with no changes tells you everything you need to know about how Intel dropped the ball.
Intel did screw up and get stuck on 14nm for far too long. But then does Apple deserve credit for TSMC's process advantage? AMD was never stuck in the way Intel was, and they have had the performance crown since for most kinds of workloads. I suppose Apple figured if they were going to switch chip vendors again it might as well be to themselves.
True, it's not just a story of Apple besting Intel. AMD has been beating them too.
Rough recent history for Intel.
I agree that Apple figured if they were going to switch, they should just go ahead and switch to themselves. But the choice was really to switch to either themselves or AMD. Sticking with Intel at the time was untenable. 14nm is certainly a big part of that story, and I'm glad you at least finally recognize there was a serious problem.
If Intel had been able to deliver on their node shrink roadmap, perhaps Apple never would have felt the need to switch--or may have at least delayed those plans. Who knows, that's alternate history speculation at this point.
The article in question is about Intel potentially getting back to some level of process parity, perhaps even leadership. I'm looking forward to that because I think a competitive market is important.
But pretending Intel's laptop processors weren't garbage for most of the last 8 or so years is kind of living in an alternate reality.
I think a lot has happened in Intel land since Apple folk stopped paying attention, as well. Intel still has a lot of work to do to catch up to AMD, but they have been fairly consistently posting gains in all areas. Apple really doesn't have a power advantage other than that granted by their process node at this point, against either AMD or Intel. AMD has seemingly delayed the Strix Halo launch because it wasn't necessary to compete at the moment. And Qualcomm is taking the same path Apple has, but is willing to sell to anyone, and as a result has chips in all standalone VR headsets other than Apple's.
It remains to be seen if Apple is willing or able to scale their architecture to something workstation class (the last Intel Mac Pro supported 1.5TB of ram, it's easy to build a 4TB Epyc workstation these days).
> When properly cooled, the Intel silicon tends to perform a lot better.
Of course, but the average Joe does not want to wear ear protection when running their laptop. Nor do they want the battery to last 40 minutes, or have it be a huge brick, or have to pour liquid nitrogen on it to keep it from thermal throttling.
Apple innovated by making chips that fit the form and function most people need in their personal devices. They don't need to be the absolute fastest; innovation isn't solely tied to the computing power of a processor. It makes sense that Intel excels in the market segment where people do need to wear ear protection to go near their products. If they need to crank in an extra 30 watts to achieve better compute, then so be it.
We don't know the specifics of the conversations between Apple and Intel. Hopefully for Intel it was just the fact that they didn't want to innovate for personal computing processors and not that they couldn't.
It seems like you think I'm trying to dunk on Apple. I am not. Apple Silicon is a great first showing for them. Performance simply isn't better than Ryzen APUs running in the same power envelope. And power usage is what you'd expect of silicon running on the latest node. Further, some of Apple's choices - bringing memory on package, only two display outputs - caused regressions for their users compared to the previous Intel offerings.
I wouldn't call what Apple did innovation - they followed predictable development trajectories - more integration. They licensed ARM's instruction set, Imagination's PowerVR GPU, and most of the major system buses (PCIe, Thunderbolt, USB 3, DisplayPort, etc.), they bonded chiplets together with TSMC's packaging and chip-to-chip communication technologies, and they made extensions (like optional x86 memory ordering for all ARM instructions, which removes a lot of the work of emulation). Incidentally, Apple kicked off its chip design efforts by purchasing PA Semi. Those folks already had the power-management chip design expertise.
But again, it's been a good first showing for Apple. I think they were smart to ship on-package DRAM in a consumer device. Now is about the right time for the CPU to be absorbing DRAM, as can be seen by AMD's 3D VCache in another form. And it's cool for Apple folks to have their own cool thing. Yay y'all. But I've run Linux for 20 years, I've run Linux on every computer I can get my hands on in that time, and through that lens, Apple silicon performs like any x86 APU in a midrange laptop or desktop. And as regards noise, I never hear the fans on my 7800x3D / 3090Ti, and it is very very noticeably faster than my M1 Mac. Apple Silicon's great, it's just for laptops and midrange desktops right now.
Somehow you are comparing Apple's first-gen laptop/iPad chip to a desktop setup requiring 10x the power consumption and 10x the physical size (for the chips and all the cooling required). The power envelopes for these chips are very different, and they prioritize different things.
> to be fair there is an awful lot of copying and extending in place
That's how technology proliferates. The point is if the M1 wasn't innovative, that rules out pretty much everything AMD, Intel and potentially even NVIDIA have done in the last three decades.
Did they do anything in those three decades that hadn't been dreamt of and projected out sometime in the 60s? Architecturally, sure doesn't seem like it.
I'd say a lot more innovation happens on the process side. Manufacturing.
All the architecture changes look like things mainframes did decades ago.
>Apple Silicon is a great first showing for them. Performance simply isn't better than Ryzen APUs running in the same power envelope. And power usage is what you'd expect of silicon running on the latest node.
Do you have a source for this other than Cinebench R23, which is hand-optimized for x86 AVX instructions through Intel's Embree engine?
From all sources, Apple Silicon has 2-3x more perf/watt than AMD's APUs in multithread and a bigger gap in single thread.
Intel stopped making good chips for the things Apple cared about: efficient, cool designs that can run well with great battery life in attractive consumer products.
Apple "sandbagging" was them desperately trying to get a poor Intel product to work in laptops they wanted to be thin and light.
Even today though their designs haven't really changed all that much--in fact the MacBook Air is actually just as thin and now even completely fan-less. It just has a chip in it that doesn't suck.
It was Apple's decision to put an 8-core i9 in an MBP chassis that was utterly incapable of dealing with the heat. Yes, the M1 is far more efficient and Apple deserves a lot of credit for it, but their last Intel laptops were much worse than they needed to be.
You can definitely point out some processor decisions that seemed poor in hindsight, but the fact is nothing in Intel's lineup at the time was any good for a laptop.
Maybe they were worse than they needed to be, but the best they could have been with Intel would have still left so much to be desired.
> nothing in Intel's lineup at the time was any good for a laptop.
You leave no room for nuance here, friend. And somehow I doubt you're familiar with every single one of Intel's tens of thousands of SKUs over decades. Intel has made a lot of CPUs. They're in lots of laptops. You might consider your wording if you're not trying to be corrected.
Please correct me then. I'm willing to learn about the Intel laptop chip that Apple could have used to achieve M1 performance and efficiency at the time they switched. You're right that I am not familiar with every one of Intel's SKUs they produced at the time.
M1 was released in 2020, which puts it in line with Tiger Lake and Comet Lake, for which I'm seeing 7 different SKUs at 7W TDP. The fastest of those seems to be the Core i7-1180G7, which seems to perform almost as well as the M1 while using about half as much power: https://nanoreview.net/en/cpu-compare/intel-core-i7-1180g7-v...
Just the first example I grabbed. Intel made quite a few more chips in the 15W and higher envelopes, still under M1's ~20W TDP.
I didn't check their embedded SKUs yet. Nor enterprise.
It's hard for me to square your statements with actual real world products. The Core i7 1180G7 was the CPU in the ThinkPad X1 Nano Gen 2 (IIRC), and that laptop got half the battery life of an M2 Pro and a third of the battery life of the M1 air.
TDP doesn't seem to be the whole story - the datasheet is one thing, but I've yet to see a reviewer get 30 hours from an Intel laptop.
For sure there's interesting stuff to talk about here. Intel allows OEMs a lot of configuration with regard to TDP and various ways to measure chip temp accounting for thermal mass of the cooling solution and skin temperature of the device. Many OEMs screw it up. Even the big ones. Lenovo and Apple included.
Screen backlight usually has as much to say about battery life as the CPU does. And it's hard to deny Apple's advantage of being fully vertically integrated. Their OS only needs to support a finite number of hardware configurations, and they are free to implement fixes at any layer. Most of my PC laptops also make choices which use more power in exchange for modularity and upgradeability, like using SODIMMs for RAM, and M.2 or SATA removable storage, all of which consume more power than Apple's soldered-on components.
Lots of confounders there. MacOS is probably a lot more power optimized than Windows, given the shared iOS codebase, and OS quality can impact battery life hugely (useless wakeups, etc).
Ah, taking manufacturer-provided TDP numbers at face value. So much for all that pretending to know anything about CPUs.
Your very own source gives the M1 a 75 vs 59 score on power efficiency. They don't seem to provide a benchmark for it, but it should have been a clue regardless.
"In single-threaded workloads, Apple’s showcases massive performance and power advantages against Intel’s best CPU. In CineBench, it’s one of the rare workloads where Apple’s cores lose out in performance for some reason, but this further widens the gap in terms of power usage, whereas the M1 Max only uses 8.7W, while a comparable figure on the 11980HK is 43.5W.
In other ST workloads, the M1 Max is more ahead in performance, or at least in a similar range. The performance/W difference here is around 2.5x to 3x in favour of Apple’s silicon.
In multi-threaded tests, the 11980HK is clearly allowed to go to much higher power levels than the M1 Max, reaching package power levels of 80W, for 105-110W active wall power, significantly more than what the MacBook Pro here is drawing. The performance levels of the M1 Max are significantly higher than the Intel chip here, due to the much better scalability of the cores. The perf/W differences here are 4-6x in favour of the M1 Max, all whilst posting significantly better performance, meaning the perf/W at ISO-perf would be even higher than this."
I guess we're reading different graphs. The one I'm seeing shows Intel producing 70% of the performance with 35% of the power of an M1. That's... checks math... better performance per watt, and lower overall power consumption. If you want more performance, you are free to step up to a 15W version which outperforms the M1 at 70% of its TDP.
I think I have a better sense of some of the misunderstanding here. I think you're taking Intel's TDP figures as honest and straightforward, when they really truly aren't.
I wouldn't call them shenanigans. At least Intel publishes numbers (Apple does not). There are simply a lot of power states Intel CPUs are capable of engaging, and many configuration options about when and under what circumstances to do so. Much of this is configurable by the OEM and is not hard-set by Intel, or even under Intel's control after the processor leaves the fab. The Anandtech article seems to indicate that it's perfectly possible to run this Intel CPU within the advertised 7W TDP, and that it can also be allowed to turbo up to 50W, which I'm sure most OEMs enable. My favorite OEMs provide the knobs to adjust these features.
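If you want to see those knobs for yourself, here's a minimal sketch of reading the package power limits through Linux's powercap sysfs interface. This assumes the intel_rapl driver is loaded and uses the standard sysfs layout; the limits you'll see are whatever the OEM (or you) programmed, not anything Intel guarantees per laptop.

    from pathlib import Path

    # CPU package power domain; may require root on some locked-down systems
    pkg = Path("/sys/class/powercap/intel-rapl:0")
    for name_file in sorted(pkg.glob("constraint_*_name")):
        label = name_file.read_text().strip()  # typically "long_term" / "short_term" (PL1 / PL2)
        limit_file = name_file.with_name(name_file.name.replace("_name", "_power_limit_uw"))
        print(f"{label}: {int(limit_file.read_text()) / 1e6:.1f} W")

The "short_term" limit is usually well above the TDP printed on the box, which is exactly the gap this thread is arguing about.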
So you’re aware of this Intel TDP chicanery but trying to say with a straight face that these chips achieve similar performance and efficiency to the M1. I think that’s pretty disingenuous.
Again, there's no chicanery going on. You asked for examples of chips Intel produced which were suitable for laptops, as you claimed they produced none. I provided one.
I'm not sure what you had back in the day, but even Intel's E cores these days are sprightly. Gone are the days of slow Atom cores. Pretty sure an equivalently clocked Intel E core of today beats the fastest Intel core ever shipped in an Apple product.
It sounds like Apple just wanted to vertically integrate, to me. Which is a fine reason to do something. But doesn't require misrepresenting competitors or constructing a past which never happened. You do you though.
Intel laptops can't even sleep anymore. I have an 11th Gen Intel space heater that invalidates every claim you've made in this thread. People aren't stupid; we've owned these shitty Intel products for years. You're fooling nobody.
Yeah, this is ridiculous. This guy is trying to claim both that Intel has good power consumption numbers and that Apple should have added more fans to their Intel laptops. Has Intel discovered the secret of creating heat without using energy? Because according to the laws of physics I was taught, the need to festoon your computer with numerous fans means that... you're dissipating tons of energy.
2019 15" Macbook Pro with i9-9980HK has 1383/6280 Geekbench score. [1] generic user submitted Intel Core i9-9980HK has 1421/6269 Geekbench score. [2] a <3% difference doesn't seem like Apple sandbagging with anemic cooling.
It's not possible to cool those Intel CPUs better and still have good battery life and quiet operation in a compact laptop form factor. The last Intel MacBooks would get very hot and very loud when running hard, and kill the battery in minutes... my M1 never makes a sound or feels even slightly warm to the touch when running hard, yet is a heck of a lot more powerful, and can run hard all day long on a single battery charge.
I had never even considered it was deliberate. In hindsight, doing it deliberately seems pretty smart, at least from a “let’s juice the intro” perspective, but that would have been a really big bet.
It's an interesting question right? Do you think Apple never tested multiple configurations of their leading computer (specifically the higher end ones) over multiple generations, or do you think they knew what they were doing?
Microsoft Surface Laptops were trying for similar form factors and having the same thermal problems at the same time, this isn't a grand conspiracy, all Intel laptops were suffering.
They still do it. Apple Silicon Macs run at 100+ °C under load. Apple would rather run their chips hot than turn on the fans, which may or may not be justified. We don't really see chips dying from high temps, especially for the typical workloads a casual user would run.
Just wait for them to get a little older and you'll probably see a wave of motherboards dying, like what happened with the MBPs from around 2011-2013, with Apple denying it for years until the threat of class-action lawsuits, before offering replacements with other motherboards that also failed.
They didn’t sabotage them. The CPUs just didn’t align with Apple’s goals. It’s not like they gave the M1 CPUs the cooling that Intel didn’t get. My Mac Mini M1 never spins up its fans and everything is fine. I love it. If it did, I’d consider it a downgrade.
I don't think Apple "sabotaged" them but it is true that the M1 series came with very different thermal design parameters than the terminal Intel models. Apple's Intel laptops would ramp chip power to the max as soon as anything happened. Apple's M1 ramps very, very slowly, dwelling for hundreds of milliseconds at each voltage before stepping up. These are decisions that are in the OEM's hands and not dictated by Intel.
Not only do they manufacture Apple Silicon, but Apple usually buys out the first runs of their new processes so they have a head-start against even other TSMC customers. I believe every 3nm wafer that TSMC makes is still going to Apple, hence the just-released Qualcomm SD8G3 flagship chip still being made on TSMC 4nm.
Nvidia's AI $trillions may influence this arrangement in the near future. In the recent past Nvidia has bid Samsung against TSMC in attempts to save costs, but Apple's strategy works well for as long as one foundry has a process advantage.
I think Apple's advantages are that it has a lot of cash and a very predictable business. I'm not arguing that Nvidia doesn't have a good business, but it seems to be a bit less predictable and a bit less regular than Apple's business. Apple really knows how many chips it's going to want years in advance.
Even if Nvidia also wants TSMC's latest process, that could work to Apple's advantage. Right now it's looking like Apple might end up with TSMC's 3nm process for 18 months. If Apple and Nvidia split the initial 2nm capacity, it could be 3+ years before AMD and Qualcomm can get to 2nm.
If Nvidia launches the RTX 50 series in late 2024 or early 2025 on TSMC's 3nm (which seems to be the rumor), what does that do for availability for AMD and Qualcomm? Maybe what we'll see going forward is Apple taking the capacity for the first year and Nvidia taking the capacity for the second year leaving AMD and Qualcomm waiting longer.
That would certainly benefit Apple. Apple isn't competing against Nvidia. If Nvidia uses up TSMC capacity leaving Apple's actual competitors at a bigger disadvantage, that's pretty great for Apple.
The chips Nvidia requires are a lot bigger (>800 mm2 sometimes) and they are much more expensive to make on a cutting edge process with relatively low yields compared to the 100-150 mm2 chips Apple wants for its Axx iPhone chips.
There are many different reasons. First, Apple built experience over a decade designing and building chips, taking over more and more of the design and IP. Then they made these chips at an Intel competitor, TSMC, which has a different business model than Intel. Apple was also willing to compromise: the first years had weird experiences for customers, with Rosetta, broken apps, and Macs that could only drive a single external monitor and connect to few devices. Yet we clearly saw the power efficiency from the better foundry tech at TSMC, coupled with a decade of saving watts for mobile phone batteries.
Isn't the current GPU-based stack which drives the progress of AI an early-stage architecture that will become sub-optimal and obsolete in the longer term?
Having separate processors with separate memory and a separate software stack to do matrix operations works, but it would be much more convenient and productive to have a system with one pool of RAM and CPUs that can efficiently do matrix operations, without requiring the programmer to delegate those operations to a separate stack. Even the name 'Graphics Processing Unit' suggests that the current approach to AI is rather hacky.
Because of this, in the long run there could be an opportunity for Intel and other CPU manufacturers to regain the lucrative market from NVIDIA.
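To make the "separate stack" concrete, here's a minimal sketch of what that delegation looks like today, assuming PyTorch and an NVIDIA GPU purely as an example (the comment above names neither):

    import torch

    a = torch.randn(4096, 4096)   # matrices start out in ordinary CPU RAM
    b = torch.randn(4096, 4096)

    if torch.cuda.is_available():
        a, b = a.to("cuda"), b.to("cuda")   # explicit copy into the GPU's own memory
    c = a @ b                               # the matrix multiply runs wherever the data sits
    c = c.to("cpu")                         # ...and the result has to be copied back

Those copies in and out of device memory are exactly the bookkeeping being called hacky here; a CPU with fast matrix units and one pool of RAM would make them disappear.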
The software stack co-evolves with the hardware, if the hardware can't do something fast then the software guys can't necessarily even try it out.
People have been trying to break NVIDIA's moat by creating non-GPU accelerators. The core maths operations aren't that varied so it's an obvious thing to try. Nothing worked, not even NVIDIA's own attempts. AI researchers have a software stack oriented around devices with separate memory and it's abstracted, so unifying CPU and GPU ram doesn't seem to make a big difference for them.
Don't be fooled by the name GPU. It's purely historical. "GPUs" used for AI don't have any video output capability; I think you can't even render with them. They are just specialised computers that come with their own OS ("driver"), compilers, APIs and dev stack, which happen to need a CPU to bring them online and which communicate via PCIe instead of Ethernet.
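And on the abstraction point: the frameworks already hide where the memory lives, so the same code runs on a discrete GPU, Apple's unified-memory backend, or plain CPU. A minimal sketch, again assuming PyTorch (1.12+ for the "mps" check) as the example framework:

    import torch

    device = (
        "cuda" if torch.cuda.is_available()              # discrete GPU with its own memory
        else "mps" if torch.backends.mps.is_available()  # Apple unified memory
        else "cpu"
    )
    x = torch.randn(1024, 1024, device=device)
    y = torch.randn(1024, 1024, device=device)
    print((x @ y).sum().item(), "computed on", device)

Which is why unifying CPU and GPU RAM, on its own, doesn't change much for the people writing models; the stack already papers over the split.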
How do the new “TPUs” and “NPUs” slot into your perspective of GPUs? Just nomenclature catching up to describe matrix math, ML GPUs without video output?
TPU is a Google thing. It shows how far NVIDIA has gone from classical GPUs that Google made ML specific silicon and it's not a clear winner over NVIDIA chips (or in some cases is a clear loser, from what I understand).
I don't know about NPUs. Do you mean Apple? Apple silicon is interesting because of the unified memory architecture, but beyond letting you save RAM by using it for both graphics and app code simultaneously I'm not sure it has much benefit. Certainly, datacenters will remain on NVIDIA for the time being unless AMD manage to catch up on their software stack. Intel is the dark horse here. I hear rumblings that their first offering is actually quite good.
Intel has made a lot of mistakes, but I'm reluctant to bash a company that's contributed such a vast amount to human civilization. The insights and innovations pushed forward by Intel in its heyday have delivered incalculable benefits to billions of humans and will benefit billions more for the rest of human history. Their recent history is checkered but the overall grade for Intel is A+ : staggering success.
"Intel’s argument is that backside power will make chips much easier to design, since the power is separated from the communications layer, eliminating interference"
That makes a lot of sense to me: that's how and why PCBs are usually designed as they are. How true is it that there is an actual advantage vs TSMC?
It all depends on timing. TSMC is also working on backside power delivery.
Intel's roadmap looks great. However, I'm skeptical of whether they're actually meeting that roadmap. Meteor Lake was launched last month using Intel 4, but it looks like Intel 4 has lower transistor density than TSMC's 5nm. Intel 3 is listed on their roadmap as second-half 2023, but we've yet to see any Intel 3 parts.
Realistically, there won't be too much of an advantage for Intel. It's pretty clear that even when Intel ships things, they aren't shipping these new nodes in volume. Intel 4 is only being used for some laptop processors and they're even using TSMC's 5nm and 6nm processes for the graphics and IO on those chips. They canceled the desktop version of Meteor Lake so desktops are staying on Intel 7 for now. Intel's latest server processors launched last month are also Intel 7.
If Intel were able to get a year or two ahead of TSMC, then I could see a nice advantage. However, it looks like Intel's a year behind its roadmap and even being a year behind they're only shipping laptop parts (and not even able to fab the whole chip themselves).
But past success/failure doesn't predict the future. Maybe Intel will be shipping 20A with backside power in 2024 and it'll be 2025 before TSMC gives Apple a 2nm chip with backside power.
But given that we haven't seen them shipping Intel 3 and they haven't taken Intel 3 off their roadmap, I'm going to be a bit skeptical. Intel is doing way better than they had been doing. However, I've yet to see something convincing that they're doing better than TSMC. That's not to say they aren't going to do better than TSMC, but at this point Intel is saying "we're going to jump from slightly worse than 5nm (Intel 4) to 2nm in a year or less!" Maybe Intel is doing that well, but it's a tall ask. TSMC launched 5nm in 2020 and 3 years later got to 3nm. It doesn't take as long to catch up because technology becomes easier over time, but Intel is kinda claiming it can compress 5-6 years worth of work into a single year. Again, maybe Intel has been pouring its resources into 20A and 18A and maybe some of it is more on ASML and maybe Intel has been neglecting Intel 4 and Intel 3 because it knows it's worth putting its energy toward something actually better. But it also seems reasonable to have a certain amount of doubt about Intel's claims.
I'd love for Intel to crush its roadmap. Better processors and better competition benefit us all. But I'm still wondering how well that will go. TSMC seems to be having a bit of trouble with their 3nm process. 2024's flagship Android processors will remain on 4nm and it looks like the 2024 Zen 5 processors will be 4nm as well (with 3nm parts coming in 2025). So it looks like 3nm will basically just be Apple until 2025, which kinda indicates that TSMC isn't doing wonderfully with its 3nm process. Android processors moved to 5nm around 6 months after Apple did, but it looks like they'll move to 3nm around 18 months after Apple. But just because TSMC isn't doing great at 3nm doesn't automatically mean Intel will face similar struggles. It just seems likely that if TSMC (a company that has been crushing it over the past decade) is having this much trouble at 3nm, it's a bit of wishful thinking to believe Intel won't face similar struggles at 3nm and below.
Andy Grove's philosophy was "Only the paranoid survive". Intel seemed to rest on its laurels, divesting ARM, casually developing GPUs, and being complacent about their biggest direct competitor.
Wasn't the region around 12-7 nm about where flash- and RAM-style memory were more reliable? If the new process works well for those but is cheaper, that could be very good for bulk (value-focused) non-volatile and volatile memory.
Intel is just like Boeing: a company with legendary engineering roots taken over by myopic financial engineers who can't think further than next quarter's stock price, which is their only measure, their only target, and to which all their bonuses are attached.
This is oversimplified and actually underplays how serious the issue is.
These companies didn’t fail because of myopic financial engineers. The ones focused on quarter to quarter tend to bomb the company relatively quickly and do get flushed out.
These companies failed because of long term financial visionaries. These are the worst because they are thinking about how the company can capture markets at all costs, diversify into all adjacent markets, etc. They are a hard cancer to purge because on the surface they don’t sacrifice stuff for the current quarter. They sacrifice laser focus for broad but shallow approaches to all kinds of things.
“Sure, we’ll build a fab here. And we’ll also do enterprise NICs… And also storage… and also virtualization… and also AI… and also acquire some startups in these fields that might have something interesting…”
The company becomes a big, listless monster coasting on way too many bets spread out all over the place. There is no core engineering mission or clear boundary between what is the business and what isn't.
Intel is far from “myopic”. If it was something as obvious as a next quarter driven CEO, the board would have made a change immediately.
> These companies failed because of long term financial visionaries. These are the worst because they are thinking about how the company can capture markets at all costs, diversify into all adjacent markets, etc.
Long term financial visionaries? No they’re simply plundering the business to reward shareholders and executives through buybacks and dividends. They rationalize this as “maximizing shareholder value” but it is destroying long term value, growth and innovation.
Don’t be naive. You missed the entire point of what I highlighted and looking for “dividends and buybacks” as your signal will lead you nowhere.
CEOs not focused on product vision, who focus on "company growth" as a vague vision, are what led to Intel, GE, etc. There is no greedy raiding of the coffers like your caricature implies in these scenarios.
Cool, keep on pretending the only people that destroy companies are focused on financials.
My point is that visionaries can drive everything into the ground, and people like you will ride the ship down cheering them on, not realizing the negligence at play.
I’m here saying, “fires are not the only thing that destroy homes, floods wreck way more” and you’re digging in with evidence of fires.
That article is looking at things focused on financials, but Intel did not fuck up by underinvesting. They invested in cutting-edge stuff like EUV lithography that others are now reaping the benefits of.
They all look like that, don't they? It's like cancer, late stage. It's everywhere. I'm trying to be optimistic here, but I don't see much light at the end of the tunnel. Best case, a second depression wipes away everyone's wealth and we start from a clean slate.
I mean, clearly the competitors eating Intel's lunch do not look like that. Nvidia does not look like it. Apple doesn't. AMD doesn't. Just seems like ordinary competitive churn in the marketplace to me.
And furthermore, they lost most of their best engineers and scientists during the years the company was run by the useless MBAs. Now each new technology process is a huge struggle. Intel has secured some of the first new-gen EUV machines from ASML, but whether they'll have the talent to quickly start using them at scale is not yet clear.
Somebody from Intel who had a PhD from my school and manages a bunch of process engineers was on sabbatical and teaching a class at his old department, some of whose undergrads I taught. I visited an info session he was presenting and pointed out that I had been a green badge in JF5 for Validation and that they had a reputation of not matching ASML and Nvidia offers. He went ballistic on me and told me to go work for them if I got an offer from them and that he wouldn’t want me on his team with that attitude. While I am sure all the other people in the room who were F1s and undergrads still wanted jobs and he answered their questions honestly as far as I could tell (saying Intel would be bankrupt without CHIPS act money etc) that can’t have been a good look for him. I did leave the room only after telling him I wanted his company to succeed.
Why would people pay attention even if long-term thinking is taught? Only chumps think about the long-term interest of the company. Your long-term interest as an individual is in extracting as much money from short-term bonuses as possible, because once that money is in your bank account it doesn't matter if the company craters.
Funnily enough, this is EXACTLY what has happened to the video game and movie/series industry.
They jump onto the latest trend, e.g. ESG to get in good with the banks and funds without thinking about what long term damage it is doing to their brands and products.
To be fair, they jump on that particular trend because the banks will punish them if they don't. They have tens of trillions in assets under their management and they make sure that capital won't flow to companies that refuse their agenda. In effect the banks dictate the direction society moves towards. And people say it's a conspiracy theory.
> ESG investment funds in the United States saw capital inflows of $3.1 billion in 2022 while non-ESG investment funds saw capital outflows of $370 billion during the stock market decline that year
Honestly, I think the real deathwatch started with Spectre/Meltdown. Intel had a real choice: they could have owned the issue, patched it, and used it as a chance to retool a largely broken architecture while corporate money was still at 0% or negative interest.
They didn't. Every press conference downplayed, deflected, and denied the performance issues; every patch to the Linux kernel was "disabled by default." They lied through their teeth, and real players like Amazon, Vultr, and other hosting providers in turn left for AMD.
I think you’re vastly overestimating how much Wall Street cares about technical issues like this. Spectre/meltdown barely registers on the radar of issues that Intel has. People are still mostly buying Intel CPUs in laptops, desktops, and servers, and the N100 seems to be gaining ground in the Raspberry Pi space. Maybe it gave an opening for AMD in some markets, but Intel really hasn’t seen some huge tarnished reputation because of it.
The far greater threats are that they missed the phone market, and are missing the GPU/AI market. Those are entirely new markets where growth would happen, instead of the mostly fixed size of the current CPU market where the same players are just trading small percentage points.
I take that to mean you haven't tried any recent (Zen 3/4 U-series) AMD laptops then, or for that matter the M1/2/3s. I've heard this from a number of people who are comparing the x13s or new Macs with their 14nm Intel machine from 2017/18 or so that has a discrete Nvidia GPU (which is eating another 10W+).
I have a pile of laptops including the x13s, and that part burns a good ~30W for a couple minutes, while running ~30% (or worse) slower, then thermally throttles and drops another ~30% in performance.
For light browsing it manages to stay under 5W and gets 10h from its 50Wh battery. This is also doable on just about any AMD or Mac laptop. The AMD machine I'm typing this on tells me I have more than 12 hours remaining and I'm sitting at 81% battery (67Wh). And this machine is a good 3x faster compiling code or running simple benchmarks like Speedometer when it's plugged in or allowed to burn 35W. Except it also has a fan that turns on when it's under heavy load to keep it from thermally throttling.
Yes, the x13s is cute; it is a thin-and-light laptop and is probably the first Windows-on-ARM machine that isn't completely unusable. But it's going to take another leap or two to catch up with current top-of-the-line machines.
Everyone loves Geekbench, so here is an obviously unthrottled x13s vs a similarly rated 28W AMD 7840U.