But by the sentiment of all media reports INTC is in sharp decline & AMD is killing it, yet even in their most recent results Intel revenue/earnings still dwarfs AMD's.
1. It is possible to be bigger and still be in decline, that just indicates you had an even bigger lead before.
2. Everybody loves a good "underdog takes over from the big bad empire in decline" story, both the press and the commenting public. The stories about big companies absolutely crushing their competitors in quality are just not as interesting.
3. AMD can't scale up as rapidly as a SaaS company because fabs take a while to build. I'd wager Intel has (at least) five to ten years to come up with a good processor design before they get overtaken by AMD in sheer volume.
4. There are no doubt Chinese competitors with even lower sales and even higher P/E and growth rate than AMD. It'll be interesting to see how that plays out.
Considering fabs take so long to build and to get processes running well, it's likely Intel has another 5-10 years before their lead is blown entirely. Rome didn't fall in a day.
And Nuvia, Qualcomm, Huawei and whoever else decides to get TSMC to make some killer Arm-based SoCs. Seems like the playing field is about to get a lot bigger.
The stock price is in part determined by discounted future cash flow: AMD's is pointing upwards, while Intel's seems neutral or declining. Intel has had a long string of failures and bad acquisitions recently, which points to bad management. AMD has all the potential to attack Nvidia's deep-learning moat with in-house talent. Maybe this is all wishful thinking, but my AMD calls jumped 46% within a day.
This might be wild speculation, but I just now realized that AMD, Nvidia, and, of course, TSMC are all Taiwanese-led. Did Taiwan have some kind of incubator or some special emphasis on semiconductors, or is this pure coincidence? I wouldn't be so surprised if Taiwan weren't such a small country (relative to its bigger neighbors). Just wondering.
The government of Taiwan realized that their semiconductor industry might be the one thing the US would be willing to go to war with China to protect. So the government has invested heavily in maintaining critical mass in semiconductor expertise, chiefly through university education and research in related engineering fields. Computer/electrical/materials engineering has been pretty much the default major everyone gets funneled into, without capacity limits, since at least the 70s.
For example, my parents and all their friends weren't well off enough to afford private university, but majors at public university were limited by the government based on projected need with slots offered to only the highest test scores. They couldn't test high enough to secure a public university slot for accounting, art, architecture, education, medicine, and trades such as automotive repair or plumbing. So they all ended up with computer science and engineering degrees. Fully paid for by the government.
They basically treat the semiconductor industry like how the US treats its defense industry.
There's a fairly commonly held view that improving education access gives people more ways out of poverty, and therefore fewer people choose the armed forces as their career.
It seems intuitive to me that as war technology progresses, the number of humans becomes less of a factor in military strength. But I know so little about this area that I couldn't guess how much alternative educational opportunities affect military capability, nor at what point in the past or future the scales might tip between wanting policies that push more people into the armed forces and that no longer mattering so much (and I assume it would differ by country, too).
But I do believe good, free education should be a key part of any country, regardless of whether it helps national defense or not. If that drives up the cost of recruiting people into the armed forces, then fine. I'm no fan of them in general, but if people are going to potentially risk their lives in wars, it shouldn't be because their choices were limited to that vs. a life of poverty.
Interesting how they invert the qualifications for the disciplines versus US universities. It makes a lot of sense for a centrally-planned education system to set higher admission requirements for disciplines with less rigor and lower job demand. I suspect that under such requirements the US college system would implode for lack of willing customers; instead, it greatly reduces rigor to pablum. It's basically a babysitting service here. It also makes clear that pursuing certain measures of prestige reduces the effectiveness of a system.
I vaguely recall a John C. Dvorak column (yes, I am too lazy to look it up) from the 1990s (maybe?) concerning a trip to Taiwan where he wondered if they were doing an experiment to see what would happen if everyone got a degree in electrical engineering.
I started going to the Supercomputing/HPC confs here in the USA over the last few years and it was shocking how few American companies are there with impressive hardware engineering. Maybe the Japanese/South Korean/Taiwanese companies showing up are the Dells/IBMs of those countries and I'm just not familiar with the names, but walking around the conference always gives me the feeling that the USA is lagging behind or sitting this one out. The majority of companies I see are the usual hardware conf goers: IBM/HP, and then a bunch of Gov/NSA/DOD groups on the USA end with a few universities. On the APAC end, tons of what feel like small hardware companies doing cool bleeding-edge NVMe-oF/ARM/FPGA stuff, TONS of universities with really cool-looking projects, etc.
I think it's highly unlikely that any traditional chip company dethrones Nvidia in DL, at least in a reasonably soonish time horizon. As others have said, CUDA is just too far ahead in terms of development and adoption.
However, I think NVIDIA is still vulnerable—but against AWS/GCP/Azure, not Intel/AMD.
My opinion is that deep learning is moving to the cloud. That's a bigger conversation with a lot of nuances, but if you take that basic assumption, then the development of ASICs like the TPU and Inferentia becomes a big threat to Nvidia.
If the biggest buyers of chips in deep learning are the clouds, and the clouds are increasingly developing their own chips for deep learning, Nvidia is in a tough spot. They'll always have a place among labs that run their own machines, and of course Nvidia's business is bigger than machine learning, but in general I think the clouds are a real threat.
It is relatively trivial to hook any new accelerator you develop into the popular deep learning frameworks. In the case of AMD there already exists a mature compiler framework for their GPUs, and their cards are mostly on par with Nvidia's. Most deep-learning researchers don't write custom CUDA kernels; they simply stitch high-level operations together in Python. So as soon as AMD delivers a performance/power advantage there will be almost no friction to deploying an AMD-only cluster.
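To make that concrete, here is a minimal sketch of a framework-level training step in PyTorch (the model, sizes, and data are made up for illustration). The only hardware-aware line is device selection; on a ROCm build of PyTorch the same torch.cuda API is backed by AMD GPUs, so nothing else would need to change between vendors:

```python
import torch
import torch.nn as nn

# Device selection is the only hardware-aware line; a ROCm build of PyTorch
# exposes AMD GPUs through the same torch.cuda API.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 784, device=device)          # stand-in input batch
y = torch.randint(0, 10, (64,), device=device)   # stand-in labels

opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()   # autograd dispatches to vendor kernels (cuBLAS/cuDNN or rocBLAS/MIOpen)
opt.step()
print(loss.item())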
One of Nvidia's actual moats is their system-building competency, which AMD lacks. They can sell you a box or a whole server-room configuration and, since their acquisition of Mellanox, the networking equipment to go with it.
Yes, but there is one assumption in this hypothesis which I think is not accurate.
The cost of Hardware Development is the main cost contribution.
That, I think, is not true for either GPU or GPGPU computing. The major cost for GPUs is drivers, and for GPGPU it's CUDA; i.e., it is software.
Unlike ARM, where AWS/GCP/Azure can make their own chips and benefit from the software ecosystem already in place, there is no such ecosystem for GPUs. Drivers and CUDA are the biggest moat around Nvidia's GPUs. And unless developers figure out a way to drive down the cost of DL software and drivers, there is no incentive to switch away from Nvidia's ecosystem.
That is why I am interested to see how Intel tackles this area, and whether history will repeat itself as in the Voodoo, Rage 3D, and S3 ViRGE era.
Not happening. Nvidia has a stranglehold on deep learning because of CUDA and cuDNN. I don't see any AMD alternatives taking over either of these. So I wouldn't bet too much on AMD taking over the deep-learning chip market.
The Apple ecosystem, with its AMD graphics cards and a future Apple GPU, seems set to put up a fight, or at least to keep some software from being CUDA all the way down. And AMD also dominates gaming, powering both major console platforms.
We really do not want just one player; hopefully the big players face more competition.
I'm still interested in the Taiwan part, purely from an economic point of view. How secure are we on that front if all the eggs are in one basket? Hong Kong has fallen; Taiwan and the South China Sea are in play. That will affect the supply chain.
Intel is launching a GPU/deep-learning accelerator, and Huawei is thinking about launching a GPU. PyTorch and TensorFlow work well enough on AMD GPUs. There are also custom deep-learning ASICs from Google. There is simply too much competition at this point for CUDA to continue to be the standard.
Is there any chance that some of the upcoming open-source cross-platform standards like WebGPU could have an effect on this, if tooling around them was built to support writing more GPGPU-focused code?
Last quarter, Nvidia's datacenter segment exceeded $1B in revenue for the first time, and it's close to overtaking gaming as the largest business segment.
Marketing fad or not, it’s not a bad business to be in.
Isn't HIP+ROCm a serious "competitor" to CUDA? You can convert your code automatically from CUDA to HIP. At least that's what the advertising says; I haven't used it myself. Plus, PyTorch and TensorFlow have AMD support, I thought.
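For what it's worth, a quick way to probe which backend a given PyTorch build uses (a sketch, assuming a reasonably recent build; ROCm builds reuse the torch.cuda API, so the version attributes are how you tell them apart):

```python
import torch

# On a ROCm (AMD) build, torch.version.hip is a version string and
# torch.version.cuda is None; on a CUDA (Nvidia) build it is the reverse.
print("GPU available:", torch.cuda.is_available())
print("HIP version:", torch.version.hip)
print("CUDA version:", torch.version.cuda)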
Which CUDA libraries are you referring to? NVIDIA libraries like cuBLAS? There are ROCm libraries for a subset of those, but it's definitely a work-in-progress.
They might not have to, as others have pointed out. Intel, on the other hand, is fighting the multi-front battle against a lot of competitors. It’s great for consumers, but Intel might have to decide to focus too.
AMD can't quickly ramp up the manufacturing volume it gets from TSMC. TSMC builds capacity to match roughly what it is contracted for; I can only assume that any extra capacity they planned sells for a very good price.
Neither AMD nor TSMC can fully exploit Intel's troubles, because they can't foresee what will happen with Intel's manufacturing. Meanwhile, Intel is selling chips like hotcakes.
TL;DR: Intel has high manufacturing volume and the ability to make money with a less competitive product.
The problem for Intel is that AMD is putting out a clearly better value proposition in the high-margin server space, with single-socket systems beating Intel dual-socket systems, plus other advantages like more PCIe lanes and lower power consumption.
That will drive Intel prices, and margins, down. That means a lower Intel stock price as probable future earnings shrink.
In another submitted story regarding the rumor that Intel might rely on TSMC for some of its future products, one comment indicated that [1] "Intel’s fab capacity is several times TSMC’s".
I haven't fact-checked that, but given Intel's market share, it sounds plausible.
Intel may be behind on process and microarchitecture, but as long as they can ship that volume, and with a far better gross margin than AMD (Intel 53%, AMD 44%), I wouldn't count them out just yet.
Paying TSMC to fab chips is going to cost more than producing chips in their own fabs, reducing Intel's profit margin the more they do so.
TSMC also knows full well that Intel will switch back to their own fabs the instant it's practical for them to do so, making them a less reliable long term customer than Apple, AMD, etc. and so won't be inclined to give them priority or much of a break on pricing.
AMD and TSMC have a really excellent relationship. Nvidia, on the other hand, does not: it's rumoured most of their next-gen GPUs are going to be made on Samsung 8nm because TSMC bid high against Nvidia and wouldn't budge. Apple might be the only company with a better relationship with TSMC than AMD. Apple and AMD together should be able to keep the pressure from Intel low.
Given that AMD is fabless these days, you would have to compare Intel's gross margin to AMD and TSMC's combined to have an oranges to oranges comparison. In that light AMD is doing pretty well.
The other factor is going to be that TSMC has 5nm in risk production already. If they bring 5nm fabs online before Intel has a real answer to their 7nm process, AMD could be buying capacity from the 7nm and 5nm fabs at the same time.
Where does the meme come from that AMD is ahead on microarchitecture? Head-to-head single-threaded benchmarks show mixed results for Rome vs. Skylake and its descendants, which seems to indicate that Intel achieved a microarchitecture comparable to AMD's current one, but years ago.
I wonder if this is merely an advantage of Intel's larger process size. It seems a threshold has been crossed at 14nm or 22nm where the smaller sizes can't be driven at higher clocks because of voltage and heat dissipation issues. So AMD is forced to trade clocks for cores, whereas Intel's larger (and very mature) process sizes can sustain higher frequencies.
For the most part it's a worthwhile trade for AMD because you get much greater overall compute power with only marginally diminished frequencies, but it's still something for Intel to hang its hat on in benchmarks.
The easiest way to interpret this is that the relative market caps reflect what investors expect market share to be in about 2-3 years; that is, Intel would have about 70% share, compared to something like 90% today.
P/E or net income is not really relevant for AMD, as they are in a very high-growth phase (and because their gross margins are fine). The most important thing here is that Intel is failing to execute on its upcoming fab process, which will unequivocally make it less competitive from 2021-2023 (at the very least). The uncertainty over whether Intel can execute from here on out is also weighing on their share price, since they have a long history of failing to execute, first on 10nm and now on 7nm.
I think the problem with this is that they will just start using TSMC's process until they become competitive again? Let's face it, they have the money to pay TSMC more than AMD does, I'd guess.
This is a very interesting question. With everyone fabbing at TSMC (AMD, Nvidia, Apple, Google?, Amazon?, aspiring startups) I could see a bidding war for volume guarantees if there is a volume problem going forward. Some of these players have deeper pockets and margins than others.
[This is my take on it (largely informed by stratechery) and probably overconfident]
Intel runs their own fab and for whatever reason they've messed this up repeatedly (unclear what the reason is, but it's at least partly a management/strategic failure). Their focus on old designs that are currently profitable instead of the future was a short term benefit and a long term mistake.
AMD uses TSMC (Taiwan Semiconductor Manufacturing Company) to fab their chips. Apple does the same.
Intel's profits from older technology and server sales have made them slow to recognize the severity of their situation. First they missed mobile, now they've had years of delays with their own manufacturing process, and now they're going to feel pressure from ARM on the desktop and probably on the server.
Having no US-based fab for modern chips is a concern for national security (particularly given Chinese interest in eventually taking over Taiwan).
AMD spun off their fabs a while ago (Global Foundries) and uses TSMC for modern chip manufacturing. This gives them lower overhead now, but I'm not sure it's a much stronger position overall. There's a benefit to owning your own fab if you can pull it off.
I think Intel is at serious risk long term. They need someone who recognizes the existential crisis they're in and can save them. Their current results are a lagging indicator.
"yet even in their most recent results Intel revenue/earnings still dwarfs AMD's."
Paradoxically this is likely WHY they are getting punished so much by the market. Their current revenue/margins are too juicy to give up, despite the fact that it's causing them massive pain on the technological front.
"in their most recent results Intel revenue/earnings still dwarfs AMD's."
Very simple explanation: stocks reflect future (expected) performance, not present performance. So the stock market is really just telling you they expect Intel to continue failing.
> But by the sentiment of all media reports INTC is in sharp decline & AMD is killing it, yet even in their most recent results Intel revenue/earnings still dwarfs AMD's.
That is because AMD only has GPUs and CPUs, while Intel has a lot more side businesses (VDSL modem chips, Thunderbolt, FPGAs, ...).
Additionally, the CPU side of Intel doesn't look very promising for the future (architectural issues like the whole side-channel-attack saga, technical issues in their lithography process); they will have to invest a lot of money to get this under control. AMD, meanwhile, has a wildly positive outlook and is only limited by the capacity of TSMC's fabs - whatever they produce gets ripped out of their hands by customers.
From a financial point of view, Intel also has the problem that AMD was drastically undercutting prices for competitive products... part of the Intel stock price was the ridiculous amount of money they could squeeze out of customers for their top-notch processors for years. AMD all but flattened that as Intel was forced to cut their prices by a bunch.
Also, that was a record Q2 in revenue for INTC. As much as I love AMD and that they are now competitive (plus or minus) in most CPU metrics, Intel held its own with a lot of 14nm++ inventory that is only now shifting to 10nm.
The reality is, there is no way TSMC would be able to manufacture all of Intel's CPUs even if both wanted it. What I can see happening is that Intel licenses some IP from TSMC.
AMD has a ton of room to grow. Intel does too, but not in the processor business where it's had a near-monopoly in some spaces. Intel also has challenges in other areas, and ARM is looking strong too.
It also doesn't help Intel that the long-running data-leakage issues hurt it a lot more than they hurt AMD.
AMD is in a better position to seriously increase the E by utilizing its P.
Intel has failed to use its P to seriously increase its E for a few years, and just experienced a string of setbacks with long-term consequences. (This is why buying AMD a couple weeks ago was an obvious good move.)
Don't consider P/E the sole index of a company's valuation.
Stock markets assign a high P/E to anything they consider "Growth". TSLA and CMG are both considered "Growth".
For some reason the present market is geared more towards "Growth" than "Value".
To me, the difference in market cap and revenue you illustrate demonstrates that AMD has a lot of market to gain while INTC stands to lose ground.
As an investor, it would seem that AMD has more potential upside. Add to that, they are producing better technology at the moment...
>But by the sentiment of all media reports INTC is in sharp decline & AMD is killing it, yet even in their most recent results Intel revenue/earnings still dwarfs AMD's.
Decline means decreasing. Which is true. But they are decreasing from a big number.
What is funny: Intel revenue growth is 20%, AMD's is 26%.
And Intel has its NSG division, which is growing much faster than AMD, has almost the same total revenue, and earns more profit.
Their P/E ratio has been bouncing off a floor of ~9-10 ever since the financial crisis, so there's nothing unusual about that. To see it this low before the FC, you have to go back to 1994.
The issue I see is that the prices imply Intel is going to fall to 70% of the market, and AMD is going to take 90%. Obviously both things can't be true.
Also, what if the cash sink of Intel's fabs continues to be a drag on cash on hand, unable to produce any returns on those huge new process investments?
3-5 years from now, will they still be 14nm-only, without any volume production on 10nm or 7nm? How much could those 300-watt 14nm CPUs be sold for, and who would buy them at that point?
Actually, from Intel's process development history over the past 3-5 years, it is not hard to see what is likely to happen.
If you think Intel isn't a building on fire you either don't work in tech or haven't spent more than 10 seconds looking at what has happened to them over the last 5 years.
In the last 5 years Intel revenue has grown from $55B to $72B. Sure AMD's TSMC-manufactured chips have a price/performance lead today, but 50 years of history show us that's only part of the story between AMD and Intel.
Missed process shrinks and had multiple major security issues. Went from having all of the major OEMs and cloud providers on lockdown to them universally seeking other options: whether AMD, in-house ARM, or both.
Past revenue has never and will never be an indicator of future success.
I'm a huge AMD fan but I fear that its current valuation already has an immense amount of success priced in.
They are doing absolutely spectacular work, but there's still much to do, and there are significant risks.
They have been making progress on the GPU side, but as long as they don't provide a CUDA-like ecosystem and experience, I don't see them challenging NVIDIA soon in the accelerator market.
I'm pretty confident that they will continue to outpace Intel on the CPU side, but with Amazon's Graviton2 and the recent TOP500 success of Fugaku (pure ARM, no accelerators), there is still a tremendous amount of competition ahead.
Looking at earnings growth as a steady process can be misleading when profit margins vary. Intel has a 26% net margin, and AMD is just over 8%, so if AMD gets Intel's pricing power their earnings could be 3X as high before any change in revenue. For a look at the pathological case, realize that a company that breaks even one year and makes a trivially small profit the next has experienced infinite earnings growth.
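A back-of-envelope check of that 3X figure, using only the margins quoted above (the revenue number is arbitrary and cancels out):

```python
revenue = 100.0                      # arbitrary units
intel_margin, amd_margin = 0.26, 0.08

# Earnings at each margin, holding revenue fixed
print(revenue * intel_margin / (revenue * amd_margin))  # ~3.25x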
That being said, I agree it looks like AMD stock is priced for something spectacular to happen, which makes me more excited about their chips than their stock.
One of the most interesting aspects of investing is that the success of a company and its stock are only loosely correlated. It's entirely possible for an investor to do very poorly after investing in a company that grows a lot if the price they paid was too high (as you mention, AMD might be in such a situation right now). Conversely it's entirely possible to make excellent returns on the stock of a company that is not growing (or even shrinking) but pays out large dividends.
A market can stay irrational longer than you can stay solvent.
If you want to sit on the sidelines and watch valuations soar to unreasonable levels without trying to claim a piece of that, fine, but don't cry when you see how much you missed out on. AMD could be the next NVDA.
Congratulations to those who have enjoyed AMD's stock price rising 30x over the past few years. If you think it's going to rise by another 30x, prepare to be disappointed. Another decade or more of massive growth is already priced in.
ARM is an architecture and their competitors there are pretty much all fabless just like AMD. Even if there was a market shift to that architecture, they could just do this again:
I agree that ARM is a threat to AMD as well, but AMD has two things going for it:
1. Stronger x86 design: AMD's recent CPU releases have shown they are inching ahead of Intel on x86 design, and they achieve significantly better performance per dollar. At the same time, AMD is already well into shifting a big chunk of its lineup onto TSMC's 7nm node; Intel has only just started down that path.
2. A strong GPU business: Yes, they are second to Nvidia, but given the design skills they are showing on the CPU side, I expect that gap will narrow very quickly. Both Sony and Microsoft have chosen AMD for CPU and GPU in the PS5 and Xbox Series X, with support for full 4K ray-tracing. Given how long this generation of consoles will be on the market (likely 5-10 years at least), it is a strong forward indicator of roadmap strength.
tl;dr: I expect AMD will weather* the ARM storm better than Intel.
* Originally a typo as "whether". Thanks for the correction!
I wouldn't interpret Sony's and Microsoft's decision for AMD graphics as anything other than Nvidia being dickheads.
Keep in mind that Apple is also exclusively building Macs with AMD graphics cards. They don't even support Nvidia cards as eGPU anymore. The rumour is that Nvidia is not willing to do any customised designs and someone at Apple is very upset with Nvidia.
> Keep in mind that Apple is also exclusively building Macs with AMD graphics cards.
Any reason to believe that will continue to be true when Apple moves to their own ARM chips? No technical reason they couldn't keep using AMD GPUs, but Apple seems to be leaning pretty hard into getting as vertically integrated as possible.
> I wouldn't interpret Sony's and Microsoft's decision for AMD graphics as anything other than Nvidia being dickheads.
I think that's a large component, but I'd like to add that, on top of pricing and the like, Nvidia is being a dickhead about openness towards their hardware/software stack, documentation of which matters for AAA game optimization over the lifetime of a console.
Additionally, there are important aspects of AMD's GPU architecture that are advantageous for teams squeezing the most performance out of a fixed platform. Specifically, as far as I am aware, AMD's compute is much more flexible at context switching while the graphics pipeline is active, which at least used to be a problem for Nvidia's architecture.
Don't forget that almost every desktop and laptop CPU that Intel ships has a GPU. That probably gives Intel a larger installed base of GPUs than AMD and Nvidia. Intel is also entering the discrete GPU market for the first time in over 20 years.
I think that's the point. AMD sells compute power at a lower price point than Intel. If AMD can continue to lower the retail price of CPUs, they will chip away further at Intel's market share.
Margin means nothing if people don't buy your product.
Exactly this. AMD is willing to forgo margin and make it up in volume. Being ahead of Intel in outsourcing manufacturing will help them keep up with demand.
> Being ahead of Intel in outsourcing manufacturing will help them keep up with demand.
I don't think that follows. You realize the world is capacity constrained on leading-node fab capacity? And that by going fabless, AMD now has no guaranteed capacity?
I think this has more to do with the irrationality of the customers than any legitimate technical reason. From a technical perspective, AMD is crushing Intel and will be for the foreseeable future.
Maybe there are enough suckers to keep Intel afloat. I couldn't say.
While most of the focus here is understandably on the CPU side, there seem to be some interesting shifts taking place on the GPU side.
AMD currently has a process lead over Nvidia (and this is rumoured to be set to continue for a little while longer - apparently the first consumer Ampere chips are being fabbed on Samsung's inferior 8nm process due to lack of capacity at TSMC for the next few months).
Nvidia has clearly had an architecture advantage, although RDNA2 may close this gap, depending on how Ampere performs.
While Nvidia has had a much stronger showing in the GPGPU space, with CUDA helping it be the clear current winner, this also appears to have driven architecture decisions at Nvidia with the focus on tensor cores.
In gaming, Nvidia has put a lot of work into utilising these tensor cores for Deep Learning Super Sampling (DLSS): you render at a lower resolution and then use deep learning to upscale to higher resolutions in real time. DLSS 2.0 made some leaps in quality and DLSS 3.0 is on the horizon (a toy sketch of the idea follows the list below). It will be interesting to see:
a) How well they can get this working
b) Is AMD working on its own version of this?
c) If so, how well will the RDNA architecture be suited to this approach?
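For intuition only, here is a toy PyTorch sketch of the render-low/upscale-high idea. This is not Nvidia's actual DLSS network (which, among other things, also consumes motion vectors and temporal history); everything here is made up to show the shape of the approach:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyUpscaler(nn.Module):
    """Cheap bilinear upsample plus a small learned residual correction."""
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.refine = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, low_res):
        up = F.interpolate(low_res, scale_factor=self.scale,
                           mode="bilinear", align_corners=False)
        return up + self.refine(up)  # learned detail on top of naive upsample

frame = torch.rand(1, 3, 540, 960)   # frame "rendered" at 960x540
out = ToyUpscaler(scale=2)(frame)    # upscaled toward 1920x1080
print(out.shape)                     # torch.Size([1, 3, 1080, 1920])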
> apparently the first consumer Ampere chips are being fabbed on Samsung's inferior 8nm process due to lack of capacity at TSMC for the next few months)
I just wanted to clarify to anyone else that was initially confused, that the parent is referring to Nvidia's next-generation GPU architecture, not the ARM CPU developer.
What's your source? Ampere is shipping. AMD has no fab advantage, since they have no 7nm enterprise cards. Their entire enterprise line has been somewhat of a joke to date.
You are correct, the enterprise Ampere A100 is on the TSMC 7nm process. I should have been clearer that I was referring to the consumer Ampere GeForce cards due later this year.
The 8nm rumors have been widely reported[1] but at this point are just that, rumours.
Random comment about AMD, but damn, their CPU line naming is really confusing: between Zen, Zen+, Zen 2, Threadripper, and Ryzen 3/5/7/9, Ryzen actually spans three different architectures? Then there's Ryzen 7 2000, 3000, and now 4000. But for the laptop CPUs the architectures are actually different; Zen 2 isn't used in the Ryzen 3000 mobile CPUs. Then you can look at Best Buy and see a laptop listed as using a "3rd gen Ryzen", and I'm not sure what that is actually referring to. I'm not sure how this compares to their EPYC line either. I still need to read up on that...
How is this any different from Intel Core i7? Core i7 has been a line of architectures since 2008. The i7-950 is a quad-core Nehalem. The i7-2600K was a quad-core Sandy Bridge.
Then Ivy Bridge, then Haswell. Crystalwell (the laptop-only L4-cache version). Broadwell. Skylake. Ice Lake. Skylake-X. Sapphire Rapids. Etc., etc.
All under the "Core i7" name, despite being a ton of different chips.
---------
The "innovation" was realizing that customers want a long-running name based on price. The Intel i7 is the $300 processor, be it from 2008 or from 2020. Customers otherwise don't really care about the specific hardware details (AVX, BMI instructions, 256-bit or 128-bit Load/store mechanisms. AVX512, etc. etc.)
For the technical people who DO care about those details, Intel (and AMD) release manuals on the details. We know its more important to read the number that comes after the name. "Ryzen 9 3950k", the "3950" is way more important from an architectural perspective than the "Ryzen 9" part.
The "Ryzen 9" or "Core i7" part is just simplified marketing, for the people who are more concerned with price points than technical details.
Zen was the first core design, Zen+ was an enhancement of it, and Zen 2 is the newest generation. This is analogous to Intel chip generations.
Ryzen 3, 5, 7 and 9 are like your Core i3, i5, i7 and i9 - market differentiators.
I agree that when you start looking at the actual model numbers, they're all over the place. Zen 2 laptop products are 4000 series, but Zen 2 desktop products are 3000. I think this was a mistake, personally.
Yeah. Other "favorites", which also leave it unclear where the product lines are heading, include Microsoft's Xbox naming: Xbox, Xbox 360, Xbox One, Xbox One S, Xbox Series X (compare against PlayStation 1-5), and Google's chat/video-call product lines: Duo, Hangouts, Meet, etc.
To be honest, Nintendo is just as bad at naming their consoles but for some reason gets a lot less hate (outside of the Wii U):
Nintendo Entertainment System, Super Nintendo Entertainment System, Nintendo 64, GameCube, Wii, Wii U, Nintendo Switch, Nintendo Switch Lite
or the handheld ones
Game Boy, Game Boy Pocket, Game Boy Light, Game Boy Color, Game Boy Advance, Game Boy Advance SP, Game Boy Micro, Nintendo DS, Nintendo DS Lite, Nintendo DSi, Nintendo DSi XL
Zen, Zen+, and Zen 2 are the architectures; Threadripper and Ryzen are product lines. I find that quite straightforward.
The really confusing part is that the Ryzen 4000 APUs and mobile parts are Zen 2 architecture, but the Ryzen 4000 desktop CPUs without integrated graphics are expected to be Zen 3.
While unfortunate, this has been the case since Ryzen launched:
- Ryzen 2000G(E)/U series were Zen-based, like the Ryzen 1000(X) desktop parts.
- Ryzen 3000G(E)/H/U series were Zen+-based, like Ryzen 2000(X).
- Ryzen 4000G(E)/H(S)/U is Zen 2-based, like Ryzen 3000(X(T)).
I suspect they do this because the APUs typically launch half a year after the GPU-less variants.
What's really confusing and unfortunate is that there are some Ryzen 1000 series variants (Ryzen 3 1200, Ryzen 5 1600) that were re-released well over a year after their initial launch and which are actually Zen+ based.
Ryzen 4000 mobile parts were released early this spring with Zen 2 cores in a monolithic design. The model numbers end in U or H.
Ryzen 4000 APUs were just announced and also use Zen 2 cores in a monolithic design. These have model numbers that end in G.
Zen 3 based desktop parts are expected late this year. If they follow past naming, they will also be Ryzen 4000 with model numbers sporting an optional X at the end, or no letter suffix.
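Putting the scheme above into a toy decoder (consumer Ryzen only; the Zen 3 entry for the desktop 4000 series follows the naming prediction in the previous paragraph, so treat it as an assumption, and the re-released Zen+ 1000-series exceptions aren't handled):

```python
# Toy decoder for the mapping described above. G/H/U-suffixed parts (including
# GE/HS variants) are the APU/mobile series, which lag the plain desktop
# series by one architecture.
DESKTOP = {1000: "Zen", 2000: "Zen+", 3000: "Zen 2", 4000: "Zen 3 (expected)"}
APU     = {2000: "Zen", 3000: "Zen+", 4000: "Zen 2"}

def architecture(model: str) -> str:
    """e.g. 'Ryzen 9 3950X' -> 'Zen 2'; 'Ryzen 7 4700G' -> 'Zen 2'."""
    number = model.split()[-1]                  # '3950X', '4700G', '4900HS', ...
    series = int(number[0]) * 1000              # leading digit gives the series
    is_apu = number.rstrip("SE")[-1] in "GHU"   # strip GE/HS tails, test suffix
    return (APU if is_apu else DESKTOP)[series]

print(architecture("Ryzen 9 3950X"))  # Zen 2
print(architecture("Ryzen 7 4700G"))  # Zen 2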
Good for them. I daily drive an AMD Hackintosh (3900X) and the price to performance ratio of this chip is excellent.
More broadly, consumers are real winners with this zen-powered competition of the last few years. Intel first dropped prices aggressively and now with them shaking up the tech org it seems likely the two companies will have to fight one another for consumer dollars for years to come.
They are "solid" in the context of hackintosh community (not to mention non-Intel hackintoshes are considered to be less solid and more risky even by the community). I would dare to say they are not solid in the understanding and expectations of anyone else, and I'm saying that from a perspective of hackintosh user, one of the most recommended "golden builds" with components carefully selected to be as close to real Mac as possible.
And it is still full of issues, intermittent, persistent, every OS update is a stress, every Clover/drivers update is a stress and risk and so on. Yet, for a hackintosh, it is solid.
I wouldn't recommend it to anyone and I regret spending money on it ;)
Is your workflow OSX-specific? I switched to an MBP and OSX 2 years ago for iOS development, but OSX has been getting slower and slower with each update. And I'm running a 15" i9 with 32GB RAM and a Vega 20; I see noticeable UI lag on my 5K monitor with native lightweight apps (e.g. resizing Telegram/WhatsApp), Chrome is getting slower, and Firefox is no champ either.
I recently booted into Win 10 via Boot Camp for some game and was shocked at how much smoother the experience was. I need to do some benchmarking, but just running VS Code and Docker felt noticeably faster on Win 10, on the same machine, and Macs have terrible Windows drivers.
I've spent the last 3 months slowly trying to move to WSL/WSL2 on the weekend and the experience has been really bad IMO.
Right now I'm in some state where I somehow deleted my Ubuntu WSL vm and nothing I do will get it to reinstall so that I can use WSL again. I'm so sick of dealing with this OS. It actually reminds me of trying to get my hackintosh to work and wasting an entire weekend testing different .kexts before I could even get to doing the actual work I wanted to do (code).
With that said, Catalina/Mojave have been insanely buggy and I'm dying for a middle ground between OSX and Windows that isn't Linux. I wish Cocoa were open-sourced.
But at least on my 16" MBP I can open it, maybe have sound not work, or have Docker/WindowServer/kernel_task consume all of my memory for no reason and have to restart every few days, but I can usually just open it and code without worrying about breaking ancillary stuff that takes me a day or three to fix.
I think I could say my workflow isn't Mac OS specific, but for my own use, I think it is. I require 1st class Unix userland tools (which WSL isn't), fast native terminal (which nothing on Windows and most on Linux aren't) and due to me not being 20 anymore, an OS that "Just Works" (which Linux isn't for sure and Windows most often isn't either) that doesn't actively spy on me (which Windows does) and runs on a well made hardware (which Mac OS does only on Apple machines). I've talked about that at lengths, feel free to check my comments, for me Mac OS is the only viable OS right now.
You should give OpenCore a try. Everyone seems to have a better experience with that now. I was surprised at how easy it was compared to Clover. Don't be the first to update to a new 10.x.y release, but even the latest 10.15.x releases became compatible pretty quickly.
My system is very stable ("solid"). My use case is web development and occasional Xcode, so YMMV.
I wouldn't compromise my day-to-day work experience even if every other aspect of the G14 were perfect. It is an incredible machine (especially for the money), but if it doesn't get the job done it's ultimately worthless.
USB webcams or cellphones are a pain to deal with, especially if you just want to grab 1 device and run to a meeting room. "Oops i forgot my webcam brb". Cellphones are problematic because this means you now have to run some sort of hybrid of meeting software between PC and phone. This can increase cognitive load and distract from the actual purpose of the meeting.
IMO it doesn't really matter anymore: considering most of us have a phone with a front-facing camera of much better quality than most (if not all) laptop cameras, it can easily substitute for a laptop's camera.
Is there some easy-to-use software for using your phone as a webcam for your laptop (Android -> Linux/Windows/Mac)? And do you use a stand for the smartphone for that?
Otherwise you need to connect to each conference with multiple devices, choose which microphone to use, share a presentation on one device, but the camera on the phone, ... . Doable, but annoying.
Why do you need a webcam? I WFH and all my meetings are audio-only; optionally someone shares their screen to show a demo or a presentation. We don't show our faces. The majority of my colleagues have duct tape over the laptop's webcam. So for me, the missing cam on the G14 is a feature.
Yeah, you can tell they designed that machine well before the pandemic made WFH the norm, back when they had switched to a no-webcam-on-gaming-laptops mantra.
ASUS engineer: "laptop webcams have shitty quality and gamers don't use them anyway, let's just not include one and save ourselves the BOM cost; applause from bean-counters"
Covid-19 WFH: "I'm gonna end this man's whole career"
MacBook Pros got the right balance between usability, features, and power a long time ago; other companies should just copy and modernize it. Not having a webcam on a laptop is unacceptable (though having a physical switch on it is a great privacy feature).
I don't like the post-Steve Jobs direction the MacBook Pro took, so it's a no-go for me. I had a company MacBook Pro in 2008 and I loved it (except the OS and the keyboard layout). It had great sound (I've had more expensive laptops with worse speakers since then). The display was also perfect for me (especially outside in sunshine...). Maybe it's just my memory, but I don't feel that the current laptop offerings are that much better.
Hard pass. People who do not buy a MacBook Pro do so because they don't like what's on offer; there's no point copying it. For example, the latest laptops from Dell/ASUS/etc have 120Hz screens, sometimes even touch screens. If they were to just copy the Mac, we would never get these amazing features.
A couple of years ago I remember seeing a video where a Google engineer said they were working on CUDA-to-AMD compilers and a push to standardize CUDA. What happened to that? Or am I misremembering something?
I think it ultimately boils down to who has the best CPU architecture if you are talking exclusively about performance-per-watt. Right now, it feels like AMD/ARM are going to be very competitive with each other in the mobile segment, but only on paper. They mostly stay in their own separate market arenas. Apple may disrupt this soon.
The bigger picture is that x86 is a platform that most of the business world runs on top of right now. ARM is certainly pushing into that arena, but AMD is keeping the x86 offering very attractive.
I am of the camp that there is nothing intrinsically wrong with x86, and especially not its recent implementations. It is an old & dirty ISA, but it gets the job done. Every scenario on earth has been thrown at it and it has adapted to suit. Decades of iteration and testing with billions of participants.
All AMD needs to do is continue cranking out 100W+ TDP parts that tear through workloads. The current style of ARM devices cannot keep up with power budgets like that. I believe they would have to completely redesign their architecture if they wanted to move from 5-15W up to something like the toasty 225W TDP of the 7742.
If Intel has another delay, they're dead. They'll be like Boeing (but their mistake won't directly kill people). It'll mean that the engineering culture is dead and the best minds have already left the company. Everyone at Intel who predicted the wrong schedule rather than a realistic one should be fired. They have destroyed a national champion in their short-term greed.
Their best minds have definitely not left the company. Probably, just like with Boeing, they're just watching the clock, waiting for their retirement and for the quarterly bonus to come in.
Let's not pretend Intel is dead, they just had a record quarter and my friends working there still got sizable bonuses.
What’s the best case scenario for them though? The mobile market is already permanently blown for them. Best case AMD chips are heavily defective next gen and Intel price/performance blows AMD out of the water with 7nm...which AMD has now already shipped.
My pet theory is that the Trump-era funds to keep American microchip manufacturing afloat have made Intel complacent. Maybe they're just dunces though.
Best case for them is TSMC continues to be heavily capacity constrained for the next few years, handicapping AMD's ability to pick up significant market share while Intel resolves their fab issues. Intel may also have to heavily lean into backporting new designs that have been sitting on the shelf waiting for new nodes. 14nm is finally moving past Skylake-based architectures with Rocket Lake, and there should be opportunities to backport 7nm designs to 10nm as well. Not ideal, but if they can remain roughly competitive with AMD and just out-ship them, that could get them through the next few years without losing much market share.
A US government injection of cash into Intel's fab business seems like it could get bipartisan support if Taiwan/China continue to lead the market, but Intel's problems don't appear to be cashflow-related.
AMD is the exception, not the norm. There are very few companies that can pull off that kind of turnaround. And AMD was lucky that Intel wasn't able to keep up its progress; had Intel still been ahead of AMD, AMD's revenue wouldn't be growing.
Intel has about 10x the revenue and 10x the employees of AMD. AMD is doing well lately, but if times get tough Intel can survive for a very long time just on inertia, just like IBM and HP are surviving. AMD probably can't.
Intel also has plenty of time to get their mojo back if they still have the drive to succeed. A lot of very smart people work there. They just need leadership that can execute. In a lot of ways Intel was a victim of its own success, having a virtual monopoly on good CPUs until Ryzen came out. Leadership got lazy. Leadership needs to fix that. It's not fair to say that the engineering culture there is dead.
Without access to AMD's AMD64 patents, Intel wouldn't have anything better than a 32-bit Pentium 4.
Intel and AMD have each other in a MAD (mutually assured destruction) patent hold. If either pulled its patent portfolio from the other, they would both die dramatic deaths.
Intel owns 32-bit x86 patents... while AMD owns the 64-bit patents. Modern x64 chips cannot function unless both parts are together.
Intel Revenue: 19.7B, Net Income: 5.1B, Market Cap: 209.42B, P/E: 9.06
AMD Revenue: 1.93B, Net Income: 157M, Market Cap: 79.2B, P/E: 133.82