SoftBank set to sell UK’s Arm Holdings to Nvidia for $40B (ft.com)
332 points by JumpCrisscross on Sept 12, 2020 | 260 comments



I think this is a very bad and short-sighted deal.

ARM is in such a great position currently. There's no reason to sell except that SoftBank is in desperate need of capital. On top of that, Nvidia is likely to be a terrible steward of the IP. Nvidia has a terrible track record of working with other companies, partners, and open source developers. ARM has become a de-facto standard in the mobile space, and Nvidia will likely use that position to strong-arm competition. This will push vendors out of ARM and into some alternative ISA. While long-term this might end up being great for RISC-V, it's going to cause a huge fracture in software stacks at the exact WORST time. We're finally starting to see huge convergence on ARM in the mobile/desktop/server space. One ISA to rule them all! Nope, now Nvidia is going to destroy that progress and set everything back another 5+ years.

Please, somebody tell me I'm wrong. I really don't want to be so pessimistic about this.


Unfortunately, you are correct. This is what Nvidia was looking for, since out of the big 3 US semiconductor manufacturers, it doesn't have its own IP on CPUs. Now it will soon be able to control ARM to its advantage against the competition. This will indeed make RISC-V urgent enough to be worth hardware vendors' attention in the next few years. But it will be a long wait.

As for the UK now realizing the cost of SoftBank's expensive 'pass the parcel' game with ARM since 2016, I'd say you only appreciate something once it is gone. [0][1] In terms of technology, the UK simply didn't know what it had.

[0] https://www.bbc.com/news/technology-54122692

[1] https://news.ycombinator.com/item?id=24362846


> This is what Nvidia was looking for since out of the big 3 US semiconductor manufacturers, it doesn't have its own IP on CPUs. Now it will soon be able to control ARM to its advantage against the competition.

In all fairness, consider the market from Nvidia's perspective: they've spent a few decades being a stone's throw from Intel eating them. They started as one of many add-on card manufacturers and have turned into a dominant player by innovating, iterating, and being ruthless.

Some here might not remember how cutthroat the early-GPU days were. Nvidia survived.

Now, an opportunity to become a technical peer to Intel at the CPU level? To control their own destiny?

They'd be insane not to take this opportunity.


Very well put

The biggest thing, if you have been in prolonged vicious competition, is to Control Your Own Destiny and leave the basket of crabs behind

For Nvidia this is a MASSIVE win

Softbank are @#$@#$ idiots for selling ARM

Right when ARM is finally taking over the world

For what?

So they can fund the next bunch of charlatans - add more disasters like WeWork and Uber?


Softbank probably has a bunch of NVIDIA options that it needs to be worth money


I don't remember it being all that cutthroat tbh. 3dfx was arguably the only game in town and ATi was its whipping boy. Matrox was over in the corner saying "look ma! two monitors at once!"

3dfx stumbled right as nVidia dropped the GeForce and blew everything away.

ATi became nVidia's whipping boy and Matrox just added the ability to drive more monitors.

I mean aside from not knowing if Matrox is still around, not much has changed from what I can see.


Yeah, there are a lot more fights NVidia has had.

They spent a lot of time and money first building motherboard chipsets[1] with on-board GPUs for AMD CPUs only to have AMD buy ATI.

Then they started a program to build their own x86 CPU[2], which Intel sued them over[3]. There was a countersuit, and the EU settlement around the same time, and Intel ended up paying NVidia $1.5B[3][4], which was often read as a win for NVidia. Of course, in retrospect it gave Intel another 10 years of CPU domination.

So then NVidia announced they were building a desktop ARM processor[5]. That went pretty much nowhere. Then there was their mobile play, the Tegra[6]. That was supposed to let them dominate mobile phones, and it went just about as well as their launch partner phone (the Microsoft Kin). It has found use in robotics (the NVidia Jetson series) and cars (the Tesla Model 3), though.

So they've lost a bunch of fights, and outside GPUs it really has been cutthroat.

[1] https://www.anandtech.com/show/828

[2] https://www.tweaktown.com/news/11570/nvidia_announces_x86_cp...

[3] https://www.reuters.com/article/us-intel-nvidia/intel-pays-n...

[4] https://www.anandtech.com/show/4122/intel-settles-with-nvidi...

[5] https://www.engadget.com/2011-01-05-nvidia-announces-project... arm-cpu-for-the-desktop.html

[6] https://en.wikipedia.org/wiki/Tegra


Note that Tesla does not use NVIDIA anymore.

https://www.zdnet.com/article/nvidia-takes-aim-at-teslas-cus...


FYI, the Nintendo Switch is Tegra-based as well. 61 million units isn't bad, and there are significantly more Tegras in use in Switch consoles than in Jetson boards or even Teslas (~1 million).


Yeah and a bunch of NVIDIA products too (the Shield, etc).

But it hasn't really seen the broad based success in mobiles they were hoping for, yet.

(By comparison, a midsize mobile company like Oppo sold 115M units in 2019)


You just described a space with 4 competitors fighting for the same market. How could it be anything other than cutthroat? GPUs are not the kind of space where it’s easy for new companies to enter. It very much was a winner takes all market.


> It very much was a winner takes all market.

Of course. The multi-billion-dollar market is just too small for more than one company. Nvidia is just biding its time on ~20% market share until it can put the finishing moves on the other 9 major vendors.


Cutthroat usually means something like unprincipled or ruthless competition. When 3dfx, and now Nvidia, were on top, there was no assailing them. You exist only in a position they allow you to exist in.

Perhaps the competition between 3dfx and ATi, or Nvidia and AMD, could be described as cutthroat, but the other "competitors" in the space couldn't even compete, and eventually they all just pivoted into other things without anyone noticing or caring.

It's not unlike the x86 processor market. Companies like Via and Transmeta existed but they never were any serious competition to Intel or AMD.


ATI/AMD remains nearly the only competitor nvidia has in the discrete GPU space. But we started out with 3dfx, S3, nvidia, ati.


PowerVR, Matrox, Trident, Bitboys Oy, 3Dlabs, Rendition, Intel740...


SiS also had 3D graphics chips. Tseng Labs was probably one of the earliest victims of the 3D GPU wars, having developed but never completed the ET6300.

PowerVR still exists and seems successful in the mobile market (having provided Apple's GPUs until the A10).


Imgtech/PowerVR was successful in the mobile market until Apple stopped buying their GPUs. Now they have practically no mobile customers though they're still reasonably successful in the automotive market.


PowerVR died when Mali became a viable GPU; it was also much less Apple than Qualcomm switching to Mali and then to reference ARM cores in general.


> Qualcomm switching to Mali

When did that ever happen?

Qualcomm used to license AMD's handheld GPUs (IP), then acquired that whole business unit from AMD. They've never used either Mali or PowerVR in any products that I know of.

Source: worked at PowerVR 2006-2007, then worked at AMD's handheld division including during the acquisition by Qualcomm, and stayed there until 2017.


> Source: worked at PowerVR 2006-2007, then worked at AMD's handheld division including during the acquisition by Qualcomm, and stayed there until 2017.

David, you're entirely too qualified to be speaking about these things.


That was the gist of my comment. Nvidia could have easily been PowerVR, if Intel had shown earlier interest (~00s) in developing a serious graphics product.

As it was, it seems they made the Microsoft/Internet and Microsoft/Mobile mistake and saw market evolution only as a threat to their existing portfolio, rather than as an opportunity.


I'm not sure that's correct; one of the main reasons why NVIDIA succeeded is that they actually pushed technology, and they always seem to have a much longer-term vision than just the next 1-2 iterations.

I'm also not sure if you are suggesting NVIDIA made the same mistake as Microsoft regarding mobile, but them buying ARM is anything but that.

Them buying ARM is a way for them to offer a fully vertically integrated solution for the enterprise: pushing their existing initiatives such as NVDLA (https://github.com/nvdla/), being able to exert more control over the future of ARM architectures and designs for specific fields (especially automotive), as well as potentially getting their graphics and compute IP into billions of devices.

Anyone who thinks NVIDIA is buying ARM simply to trash it or to mess with their competition is wrong. I'm not saying NVIDIA would necessarily be successful, but most of the so-called competitors that people are pointing at aren't their competitors at all.

NVIDIA is also buying a lot of talent with this acquisition, specifically the likes of ARM Austin which were responsible for the A76-78 cores.

NVIDIA + ARM + Mellanox has the potential to be a player on an unprecedented level for the hyperscaler and HPC markets, especially if the likes of Apple and Amazon (and to a lesser extent Cloudflare etc.) do a lot of the heavy lifting for them. Apple going with ARM is probably the best thing NVIDIA could've asked for.


Intel was the intended antecedent for "they". I should have been clearer.


Oh wow I had a Tseng Labs T4000(?) on my second Linux computer! I forgot about them!


> Now, an opportunity to become a technical peer to Intel at the CPU level? To control their own destiny?

> They'd be insane not to take this opportunity.

It made financial sense for ARM's board to sell to SoftBank; it makes financial sense for SoftBank to sell to NVIDIA; it makes financial and strategic sense for NVIDIA to buy ARM; it makes financial and strategic sense for NVIDIA to radically change the way the ARM ecosystem works to benefit themselves.

And yet, the world will lose out massively by having the ARM ecosystem radically changed to benefit NVIDIA.

Capitalism certainly does a lot to create value for society as a whole, but this is just one of many examples of where capitalism also destroys value for society as a whole.


This is a premature judgement.

If NVIDIA treats ARM poorly, that benefits alternative ISAs and may ultimately undermine NVIDIA's investment. If NVIDIA treats ARM well, then that benefits the platform as a whole. The reality is going to be somewhere in the middle.

Market equilibrium does not imply maximum benefit for any one interest group. "Consumers" or "end-users" are often conflated with "society".


Looking through the doomsayers in this thread, this is the result I believe to be most likely. If ARM goes belly up through gross mismanagement, then the mantle will be taken up by someone else. I think another aspect to consider is the perpetual license agreements ARM holds with several other businesses. I think this may muzzle Nvidia to some extent.


ARM also has well-funded competition in many spaces in the form of Intel and AMD. And to a lesser extent in Apple, Qualcomm, and Samsung.

And while people may grouse about many things, chiefly paying a perceived Nvidia tax, I don't think anyone would accuse Nvidia of being strategically incompetent.

The more likely negative outcome seems to be that Nvidia would steer future ARM development so that their CPU+GPU solutions are more performant than anything anyone else can afford to produce.

Which seems like a negative... but far from the worst negative.

TBH, I think ARM's non-GPU business isn't interesting enough to Nvidia to screw with.


> the world will lose out massively by having the ARM ecosystem radically changed to benefit NVIDIA.

if that's the case, then the people who stand to lose from this _should_ be pushing the gov't to standardize and prevent fracturing of ecosystems from happening.


Why stop there? It'll make financial sense for previous ARM licensees to start looking for alternatives (such as RISC-V) and create more competition in the field. Which ultimately benefits the people as a whole.

At least that's the theory of capitalism..


The techies here, including myself, did; it was a deal the government should have looked into.

It was a crown jewel of a company; hard to imagine the French government letting it slide or the US government letting an Intel be sold to a foreign company.


Even though I've long followed Arm (I went to a presentation in Cambridge from Steve Furber about the Arm 1 in 1985ish, even before the first commercial chips shipped), I'm a bit sceptical about how much value would ever be generated for the UK economy by Arm. The business model is about low margins and selling to everyone around the world, so it's hard to see how being next door to Arm (geographically) would be a big advantage.


The theory is that clusters form around “anchor tenants”; Cambridge has the University (and its excellent CS departments), ARM and some others, and is close to the largest capital marketplaces in the world, grandiosely bundled into a so-called “Silicon Fen”. So school leavers can get jobs, network, start companies, develop other tech in a virtuous circle.

Well that’s the theory; nobody has actually been able to reproduce Silicon Valley except arguably in SF (it’s more like the valley grew a tentacle up the peninsula, though the distance is far enough that the culture is slightly different).

As far as Cambridge goes, I don’t see that this purchase makes any difference for those factors, except bragging rights in Whitehall.


This is a bit of an aside, but it never ceases to amaze me how many would-be tech hubs try variations of Silicon Valley in their (maybe informal?) branding. Silicon Hills, Silicon Fen... (I knew a couple more but I’m forgetting atm). It’s not like it’s the name which made it what it is.


You can find all the places on Wikipedia with the "Silicon something" name here[1]. Look under "S".

I live in Silicon Peach.

[1] https://en.wikipedia.org/wiki/Category:Information_technolog...


An area in London was also amusingly called Silicon Roundabout.


This cracks me up; I had to google and check it wasn't a made-up joke. [1]

[1] https://en.wikipedia.org/wiki/East_London_Tech_City


It's not even a roundabout any more.


My personal favourite is Silicon Glen:

https://en.wikipedia.org/wiki/Silicon_Glen


I've heard people at the wafer manufacturing facilities in South Portland, Maine refer to that little cluster as "Silicon Tundra" but I don't think anybody from outside there calls them that.


Silicon Beach. I live in one that claims the title. I suspect there are many.


Me too. But I moved here from the Silicon Prairie!


I love that name because of course all beaches are by definition made of...silicon!


Silicon slopes as well I've heard for Boulder and Salt Lake City.


"Silicon Valley North" checking in.


> The theory is that clusters form around “anchor tenants”

Yes. Largest technology hub in Europe sits in Eindhoven (Veldhoven), Netherlands where you have a technical university, but the real anchor was Philips who were headquartered there. Eindhoven is home to ASML and Signify (formerly Philips Lighting) among others.


ASML is also a Philips spin-off


It was a joint-venture between Philips and ASMI.


> Well that’s the theory; nobody has actually been able to reproduce Silicon Valley

Shenzhen and Israel have been pretty successful.


I think the value is a soft-power and positioning play.

Being able to say "we control the architecture in seventy trillion and counting devices" gets you a seat at the tech industry grown-ups' table (next to the US, China, Japan, Korea).


>hard to imagine the French government letting it slide or the US government letting an Intel be sold to a foreign company

You mean something like GE buying Alstom Énergie using corruption and government pressure? Or the Chinese getting a piece of all our ports and nuclear tech?


> it doesn't have its own IP on CPUs

NVIDIA does design their own CPU cores.


Using the ARM ISA for application processors.


and RISC-V for some smaller ones too


> Nope, now Nvidia is going to destroy that progress and set everything back another 5+ years.

You're blaming Nvidia. You should be blaming SoftBank and Masayoshi Son: bunch of fucking hucksters misleading people and making bad investments.

Now they need to sell ARM to fill some of the gaping void that has resulted. They couldn't care less who they sell to and what the outcome of that sale will be. I have nothing but contempt for them.

An absolutely miserable turn of events.


> You're blaming Nvidia. You should be blaming SoftBank and Masayoshi Son: bunch of fucking hucksters misleading people and making bad investments.

You shouldn't be blaming bad investors. They pay for their decisions with their own, or more often somebody else's, money.

Blame the collective ownership of the means of production, which in the form of public companies has dominated the Western world for the last 30+ years.

When people think of selling their business, which is their life's work, as a raison d'être, something is really messed up with the business culture in that person's country.


> You shouldn't be blaming bad investors. They pay for their decisions with their own, or more often somebody else's, money.

> Blame the collective ownership of the means of production, which in the form of public companies has dominated the Western world for the last 30+ years.

Can you unpack these two points? I don’t understand. It sounds like you’re saying “don’t blame the owners (because investors ARE owners); blame the owners”.


I think GP is arguing that public companies, which are often blamed for "shortcomings of capitalism" are in fact not capitalistic at all. "Collective ownership of means of production" was postulated by Karl Marx as one of the pillars of communism.

Publicly traded companies allow shareholders to limit their risk - when things start to go wrong you sell your shares to a bigger fool, and it's not your problem anymore. Also, as shareholders are not personally responsible for wrongdoings of their companies it also limits the risk to the buyer.

In short: publicly traded companies are enablers for moral hazard that we constantly observe in financial markets.


Have you, by chance, looked into any of Carol Sandford’s work?


No


> When people think of selling their business, which is their life's work

That's a bit of a grandiloquent take on reasons that could lead one to start a business.


...What?

Vision Fund isn't "public", and most of the companies they invest in aren't "public".


Well, Brexit caused the pound to plummet and supposedly made ARM tempting. I find it ironic that the current UK Brexiteer government talks constantly about sovereignty and control, yet was quite happy to allow this sale.


Would anybody have bought ARM if SoftBank didn't back in 2016? I wonder if Nvidia would have bought it at some point anyway.


Did they have to be bought? Could have been a regular public company


In that case you should be blaming WeWork.


Why on earth would we want to converge on ARM? The first ARM architecture that was somewhat palatable was ARMv7; everything before that was an unusable mess of different chips with vastly different capabilities. Their extensions are bad (read: subtly incompatible) late copies of what Intel and AMD are driving. Most of the innovation happens not at ARM, but at their respective licensees. It took others inventing EFI to get some form of BIOS-equivalent for ARM, but even today the company gives the impression that they couldn't care less.


I agree with what you said, but there is a reason for wanting to converge on ARM.

There exists no good alternative.

ARMv8.2 or newer is a very well designed ISA, while RISC-V is a very bad ISA and I would hate to be forced to use it.

OpenPOWER is a far better ISA than RISC-V, but unfortunately most developers do not have any experience with POWER, and they hold the mistaken belief that POWER is some antique ISA while RISC-V must be some modern, fashionable ISA. Therefore, even if OpenPOWER is much better, it is less likely than RISC-V to be used as a replacement for ARM.

I and probably thousands of other engineers could design a much better ISA than RISC-V in a week of work, but none of the creators of those thousands of new ISA variants would be able to convince all the other people to choose his/her variant over the others and start the significant amount of work needed for porting all the required software tools, e.g. LLVM and gcc.

So, if ARM would no longer be an acceptable choice, I do not see any hope that its replacement would not be greatly inferior.


Why is RISC-V bad? You spend paragraphs ripping on it without actually explaining why it's bad.


This has been discussed many times on many forums on the Internet.

The summary is that RISC-V is inefficient because it requires more instructions to do the same work as other ISAs and it does not have any advantage to compensate for this flaw.

Those extra instructions appear especially in almost all loops, and the most important reason is that RISC-V has a worse set of addressing modes than the vacuum-tube computers from more than 60 years ago, which were built with only a few thousand tubes, compared to the millions or billions of transistors available now for a CPU.

Because of this defect of the RISC-V ISA, the Alibaba team who designed the RISC-V implementation with the highest current performance (Xuantie910, which was presented last month at Hot Chips) had to add a custom ISA extension with additional addressing modes, in order to be able to reach an acceptable speed.

Whenever the designers of the RISC-V ISA are criticized, they reply that the larger number of instructions is not important, because any high-performance implementation should do instruction fusion, to be able to reach the IPC of other ISAs.

Nevertheless, that is wrong for 2 reasons: instruction fusion cannot reduce the larger code size due to the inefficient instruction encoding, and the hardware required for decoding more instructions in parallel and for doing instruction fusion is much more complex than the hardware required for decoding fewer instructions with a better encoding, as in other ISAs.


> Nevertheless, that is wrong for 2 reasons: instruction fusion cannot reduce the larger code size due to the inefficient instruction encoding

RISC-V includes a compressed extension that makes instruction encoding competitive or better than x86(!), and with none of the drawbacks of ARM's Thumb modes.


Compression methods reduce the size of the code, but that does not matter when assessing the efficiency of the base instruction encoding.

If you would apply the same compression methods to a more compact original encoding then the compressed code would be even smaller.

Competing ISAs, such as ARM (Thumb), MIPS (nanoMIPS) and POWER also have compressed encoding variants.


Are there any benchmarks (in terms of code size, runtime performance, energy effiency, ...) available for OpenPOWER vs RISC-V?

(and if not, what would be some of the metrics to objectively compare the architectures?)


It is very difficult to make a non-biased comparison between different ISAs.

If you just compile some benchmark programs for 2 different architectures and you look at the program sizes and the execution times, the differences might happen to be determined mostly by the quality of the compilers, not by the ISAs, in which case you could reach a wrong conclusion.

Many years ago, on one occasion I spent many months working on porting a real-time operating system between Motorola 68k and 32-bit POWER. At another time I also spent a couple of months porting many device drivers between 32-bit POWER and 32-bit ARM and Thumb.

Such projects required a lot of examination of the code generated by compilers for the target architectures and also a lot of time spent with writing some optimized assembly sequences for a few parts of the code that were critical for the performance.

After spending so much time, i.e. weeks or months, porting some large program whose performance you understand well between 2 ISAs, you may be reasonably confident of having a correct comparison of them.

If you want to reach a conclusion in a few hours at most, you are unlikely to be able to find an unbiased benchmark.

RISC-V is however a special case. Even though I have never spent time implementing any program for it, after having experience with assembly programming for more than a dozen ISAs, when I see that almost any RISC-V loop may require up to double the number of instructions compared to most other ISAs, I do not need more investigation to realize that reaching the same level of performance with RISC-V will require more complex hardware than for other ISAs.

Also, when comparing ISAs, I place a large weight on how good those ISAs are at GMPbench, i.e. at large-number arithmetic. In my experience with embedded system programming, large-integer operations are useful much more frequently than traditional RISC ISA designers believe.

While x86 has always been very good at GMPbench, many traditional RISC ISAs suck badly, because they lack either good carry handling instructions or good double-word multiply/divide/shift instructions.

RISC-V also seems to have particularly bad multi-word operation support.


Thanks for the perspective and GMPbench reference. I'm sure you're correct that RISC-V has a lot of optimization work to do both at the compiler and chip implementation levels.

I'm curious whether vector operation support in RISC-V might also make up for any apparent shortcomings in raw arithmetic throughput - I guess a lot of it will depend on the types of workloads involved.


There's an old Usenet post I really like comparing different RISC and CISC architectures from way back.

https://userpages.umbc.edu/~vijay/mashey.on.risc.html

If you look at things here in 2020, it looks like it was essentially the RISCiest of the CISC architectures, x86, and the CISCiest of the RISC architectures, ARM, that succeeded in the modern day, while architectures more tied to an ideological camp, like Alpha or SPARC or VAX, have all died out in our current era. That makes me think that another very ideologically RISC architecture like RISC-V will, at the very least, have a tough row to hoe in getting widespread commercial acceptance.


I am actually pretty curious to see a demo of features from you that backs up the claim that RISC-V is bad and OpenPOWER is much better.

What convinced me was how thin the RISC-V book[1] is, and I have seen both ARM and Intel reference manuals. For example, the V extension removes the need to specify data size, greatly reducing the instructions needed.

I have no idea about Open Power btw.

[1] http://riscvbook.com/

P.S. I am hiring U.S. candidates for ARMv8, x86 roles: https://careers.vmware.com/main/jobs/R2009422?lang=en-us


While I’m not really an expert on either, I have looked at code for both, and I got the impression that RISC-V’s “simplicity” is actively harmful to making it a good ISA outside of teaching, because its instructions are just not really all that great; indexing into an integer array (for example) is a shift, add, then load, while basically every other architecture can do this in one instruction. It seems to me that the ISA is really designed more towards being pretty than being practical.


I wish I could upvote your comment a thousand times!

Thoughts on OpenPOWER vs ARM v8.2 in terms of ISA?

And for those who may be interested in OpenPOWER, https://github.com/antonblanchard/microwatt


ARMv8 was a clean design not constrained by compatibility with the past and it was created by people having a lot of experience with the implementation of ISAs in hardware and in software tools.

Therefore there is no surprise that it is an efficient ISA.

The only significant flaw in its first version was the lack of atomic instructions, but that was corrected in the subsequent versions.

The 32-bit POWER was a very nice ISA, but it was not designed to be extendable to 64-bit. It had blocks of the encoding space reserved for future extensions, but various details of the instruction word formats depended on the fact that the size of the registers was 32 bit.

When POWER was extended to 64-bit, much earlier than IBM expected, i.e. only 5 years after the introduction of the 32-bit variant, the extension was constrained because IBM chose not to have a mode switch like ARM; they chose instead to make a compatible ISA extension, i.e. one which has the original POWER ISA as an instruction subset.

This has constrained the instruction encodings, so the 64-bit POWER ISA has some parts that seem more clumsy than in ARMv8, and the result is that programs for POWER are usually slightly larger than their ARMv8 equivalents. However, the hardware implementation effort for equivalent performance levels should be very similar for POWER and ARM, and significantly less for both than for x86.

POWER also had a compressed encoding variant, but that was implemented in very few chips. Now the latest ISA variant has introduced 2 instruction word lengths, i.e. both 64-bit and 32-bit long instructions, instead of just 32-bit long instructions.

This allows the embedding of large immediate constants in the instructions, which is an important advantage of x86 vs. traditional RISC ISAs. This might help to reduce the sizes of many POWER programs.


I just wonder what implications the Arm sale to Nvidia is going to have for Apple, which is switching its macOS hardware from Intel to Arm.


Apple owns a perpetual ARMv8.2 license. And since they work closely with ARM on ARMv9 they might already have a license for that as well.

Basically doesn't concern Apple at all.


Apple is already shipping devices with ARMv8.4; do they need a specific license for that?


Hopefully MacBooks will be able to have GPUs that aren’t pitiful and outdated?


Because the ISA is simple, the perf/watt is way better, and we're reaching the point where architecture-specific native code is not explicitly sought out by most users, so we can actually change processors.


Most users however do depend quite a bit on the performance of a small section of architecture-specific code.


(I'm going to exclude Linux in the below, because it's an odd duck where users/distributors can reasonably be expected to recompile every program they use for a new architecture.)

Sure! But that code is generally core OS code written by a small number of people and deployed widely. If Apple or Microsoft or Google decided that OpenPOWER was the way to go, we'd be able to switch over at least some use cases. ALL of the above have current Arm offerings. Arm has reached "good enough" performance - the Snapdragon 855 is as fast as an i5-8630U, and uses half the power.

The primary barrier to processor migration is compatibility with user-mode programs. Since the vast majority of programs are now using primarily ILs, such as JS or C#, once the runtimes are ported (and they have been), there's not a lot of lasting incompatibility since programs bind to APIs, not ISAs. Apps for Work (CAD, Photoshop, dev tools) are traditionally the last to port, but there has been no better time to switch architectures than now, and it's only getting easier.


As someone going through a processor migration right now, the core OS is actually not that hard to move. But even once you change all the language runtimes, there's a fair bit of proprietary software (including games) that is unlikely to move quickly.


The Intel hate is so strong that ARM fans don't acknowledge what they will lose in the process.


Someone finally talking sense here.


> While long-term this might end up being great for RISC-V... set everything back another 5+ years.

Five years is no big deal. In fact there's so much in the pipeline it's likely little will be different for 3-5 years.

The bigger problem is RISC-V is immature and to use ARM as an example will need another decade to get its act together (for which I have high hopes BTW). I agree with you that this will definitely be a shot in the arm for RISC-V.

> it's going to cause a huge fracture in software stacks at the exact WORST time [due to platform convergence]

This is far from the worst time, and platform monoculture is not something to be celebrated. So from that perspective you’ve given me some encouragement from the news of this deal.


> The bigger problem is RISC-V is immature and to use ARM as an example will need another decade to get its act together (for which I have high hopes BTW).

Do you think that RISC-V can overcome the inefficiencies discussed elsewhere in this thread with time? Specifically, can they improve the instruction set with more powerful addressing tools and reduce the number of instructions needed within loops?


Isn't the only reason it couldn't be improved the threat of other companies with patents? I thought that was usually the underlying culprit for missing capability.

I wonder what the best ISA we could create would be in a universe where there were no rent-seekers looking to recoup their R&D investment by limiting what a new ISA could contain.


>ARM is in such a great position currently. There's no reason to sell except that SoftBank is in desperate need of capital.

SoftBank needs cash, sells ARM. Not a good reason?


Softbank has to prop up Uber and WeWork with something.

I dunno, would you rather have ARM technology owned by a US company or by a front for the Saudi sovereign wealth fund?


Unfortunately, after the last few years any non-US citizen would rather have ARM technology owned by a Saudi fund or by any other non-US entity.

When you choose to include ARM technology in some of your products, you will spend a lot of time and money on the implementation, and you will need long-term availability of that technology in order to recover your costs and to give you time to switch to another technology, if desired.

Now any long-term commitment for providing any US-owned technology can no longer be believed, because precedents show that such commitments can be unilaterally canceled at any time, without an early enough warning.


> long-term commitment for providing any US-owned technology can no longer be believed

only if you are not already under the thumb of the US gov't. Otherwise, it's a non-issue.


> Softbank has to prop up Uber and WeWork with something.

Softbank has a chainsaw prince as a main creditor. This is a very compelling reason to get cash in any way possible.


To be pedantic, I believe it was a hacksaw.


I agree with you, but it looks like SoftBank wants money more than ARM. It doesn't matter what you and I think; it's whatever SoftBank's CEO thinks.


You give 'em an arm, they take a leg. After ARM, what else is in their portfolio that's not a flop like WeWork, Theranos, etc.? It wouldn't be surprising to find SoftBank needing to sell other appendages too.


Given the Saudi connection, this turn of phrase is... alarming.


Or to turn it again, it is dis-arming.


It seems a little dramatic to think Nvidia would spend $40 billion just to run ARM into the ground.


Large M&A deals have always been a mixed bag of success. Large ones have failed spectacularly in the past, so it's not an unreasonable expectation.

In this case the ROI does not depend on Nvidia being able to leverage ARM in their portfolio; even if they stopped developing ARM now, the licensing alone is worth a lot. While they hope to do more than that, they don't have to.

IP can be extremely valuable in a merger like this: Google bought Motorola and Microsoft bought Nokia largely for IP reasons, at $15B and $8B.


People here have an axe to grind against Nvidia it seems. Somehow seen as Intel but for GPUs despite being highly innovative and continuously so.


Their GPUs are great. But their business practices are bad, and if they apply them to ARM, that will hurt us all.


I don’t think the issue is that they aren’t doing well, it’s that they have historically not done good.


Agree. I really think they want to eat Intel's lunch.


So you think ARM should be spun-off rather than sold? It seems like Softbank is having some issues unrelated to ARM, and needs the money. I'm also not sure that Softbank ownership has contributed anything to ARM's success.


The issue is with the well-being of the tech ecosystem and less with SoftBank or ARM's well-being as a business. SoftBank may not have contributed to ARM's success, but their neutrality allows ARM to license their IP without the kind of self-dealing you would get if Nvidia owns it. Further, Nvidia has a poor track record in global citizenship.

The net result is that the overall ecosystem moves towards a degenerate state, rather than fostering innovation and diversity in tech.


I’d think SoftBank’s ownership has probably resulted in more cash burn for ARM.


> While long-term this might end up being great for RISC-V, it's going to cause a huge fracture in software stacks at the exact WORST time. Finally we're starting seeing huge convergence on ARM in Mobile/Desktop/Server space. One ISA to rule them all! Nope, now Nvidia is going to destroy that progress and set everything back another 5+ years.

As someone who is not that well versed in hardware, can someone explain to me why it would be good to converge towards ARM? Why there is disdain for x86?


There is plenty of disdain for both. However, ARM is (was?) gaining mindshare and starting to take even more marketshare and mindshare from Intel. Intel is a bit on the ropes right now because they can't keep up with TSMC on miniaturization and other chip technologies. I think it's popular to be against the 800lb gorilla. This deal is likely really bad for the platform, as Nvidia doesn't have the greatest history of being a "nice" player in the tech world either. I think the primary fear is that they will stop licensing the ARM IP to other companies and claim it solely for themselves, in hopes of going even more head to head with Intel across the entire spectrum of chip technology.


I’m sure there are people who have legitimate issues with x86 and its extensions. I’m not one of them.

What I believe most people are concerned with are low power high performance devices. ARM owns that space. It would be at the very least disruptive to have ARM platforms die.

On the high end Intel CPUs in servers are very expensive. Many people on HN have expressed hope that cheap ARM cores become good enough and drive down the cost of both on-prem and cloud servers.


Given how SoftBank has mismanaged things across their portfolio companies, wouldn't literally anyone else be a net positive to get ARM out of SoftBank?

For all the PR ARM's old founder has been doing, I don't see that he's been able to scare up a buyout partnership.


Nvidia isn't likely to invest $40B and then do anything to damage ARM's licensing efforts. They are likely to look for synergies, so ARM can incorporate Nvidia technologies and Nvidia can better utilize ARM technologies.


I hope that NVIDIA does not have any plans to damage the development and licensing of Cortex-M and Cortex-R microcontrollers.

This is a business that has no overlap with what NVIDIA does, and maybe NVIDIA could spin off this part of ARM as a separate company, because I can hardly imagine the NVIDIA management focusing on a business where they do not have any experience.

On the other hand, ARM Cortex-A CPUs are better than those designed by NVIDIA and they are used in products (e.g. by Qualcomm) that directly compete with NVIDIA products.

I cannot imagine that NVIDIA will continue to design better Cortex-A cores and then give them immediately to their competitors.


That sounds like a great strategy to announce and do for a year or two... before changing course to “maximise value” by merging operations, jacking up licensing costs, making nvidia gpus part of the arm ISA, etc...


> making nvidia gpus part of the arm ISA

That would be amazing. Nvidia has arguably the best GPU designs in the market, this would be a boost for ARM IP business.


> making nvidia gpus part of the arm ISA

what would this mean? are there any CPU ISAs with a GPU integrated into the ISA instead of just a peripheral?


It doesn't have to be 'technically' integrated; it could be something business-y like "licences now cost 2x, but you also get a licence for some Nvidia stuff you didn't really need".


Does Nvidia have a history of doing this?


Maybe this will accelerate RISC-V; we'll see.


It will (I don't know about its success), and the Chinese will benefit from it because they are already doubling down on RISC-V, having tried but failed to become relevant with x86.


My first thought at this news was "but nvidia's not even that into it anymore!"

Sure, they make SoCs for the Switch and some cars and a media dongle, but they used to be so much more interested. They basically owned the Android gaming market with Tegra 2 and 3, made a really cool handheld and licensed an incredibly economical tablet design to EVGA with Tegra 4, and released their own gaming tablet with the Shield.

Then they stopped. Why spend the last five years whittling down their ARM product line and then try to buy ARM?


What will happen with Apple Silicon?


Why would you expect anything to happen? Apple has obviously already negotiated and signed the licensing deal.


Inability to re-sign a v2 deal might lead to "Apple ARM" diverging from "Everyone Else's Arm".


That may happen but ultimately it's not like that matters. Apple tend not to worry much about compatibility outside of their own ecosystem and certainly haven't worried about making breaking changes in the past (think 68k, PowerPC, x86, ARM) when it suits them - it's not like they need to. They have managed migrations like this in the past with fat binaries and, more recently, they have even accommodated differences in architecture on iOS devices by using intermediate representations ("bitcode").

Any other compatibility with Apple devices tends to be a matter of coincidence. When they moved to x86 hardware, running Windows on Macs became so easy that building Boot Camp — effectively an install wizard and a few driver packs — was easy, therefore Apple giveth. However Apple Silicon Macs will drop without Boot Camp for the obvious reasons and Apple won’t apologise as they taketh away.


RISC-V is pretty awesome as an ISA. Give it a few years and it can solve all of these problems.


You are not wrong

"Nvidia has been the single worst company we have ever dealt with so Nvidia F_ck You!" -- Linus Torvalds [1]

[1] https://www.youtube.com/watch?v=IVpOyKCNZYw&t=86


I'll never forget when I was applying for internships back in 2010, Nvidia was the ONLY company I came across with a GPA requirement (3.5). I happened to be below that, so I didn't even bother applying.

None of the FAANGs cared in the slightest.

I bring this up just because I feel it probably represents their arrogance well, in addition to everything else they do.


On reflection I'm a bit surprised that a consortium hasn't emerged to offer more for Arm given the concerns expressed here:

- $40bn isn't a lot in aggregate for the companies that are heavily invested in the Arm ecosystem (Apple, Qualcomm, Amazon etc) - maybe even Intel would take a small stake!

- The $40bn is partly in Nvidia's (arguably) inflated stock. Would cash be more attractive?

- Could probably partly fund through a public offering in due course.


Apple almost certainly has some kind of long term or perpetual contract that assures access to ARM instruction set and perhaps some other IP, given they co-created ARM Holdings back in 1990. While A-series chips use ARM instruction set, they've moved on from using actual ARM cores to their own custom cores with the A6 (2012), so they're probably not that exposed on IP if ARM changes hands.

Additionally, since Apple controls the whole stack including dev tools on their platforms, they have a lot more freedom to change the underlying chip architecture than many others, who must rely entirely on third-party software ecosystems to move.

Apple probably wouldn't get their money's worth at current price, given they wouldn't want to take on ARM's tech licensing business, since it's well out of scope for Apple's core business. A consortium would only be another messy strategic entanglement to deal with.


Agree on all the individual points but owning a stake in Arm would be a lot less messy than potentially moving to whole new ISA (and possibly losing access to lots of Arm patents for example).

I suspect that the calculation is that Nvidia would be a 'good enough' steward of the Arm ISA.


> The $40bn is partly in Nvidia's (arguably) inflated stock.

This is SoftBank we're talking about. Do we really want to bring up inflated stock as one of their concerns? ;)


There is bound to be some sort of financial engineering going on too.

Doesn't Son have a stake in Nvidia? So maybe this is to help support the Nvidia share price?


Giving $40B of Nvidia stock to a cash strapped seller hurts the stock price.


I was wrong on Son having a holding in Nvidia too - he seems to have sold it.


Yeah, this is a great idea. Apple and Google and Microsoft all have a vested interest in making this happen.


> The $40bn is partly in Nvidia's (arguably) inflated stock

I'm not seeing this in the article. How much of it is in Nvidia's arguably inflated stock?


I think that concerns about this being the end of Arm are overdone - I expect that the UK government will get some assurances about maintaining the HQ in the UK etc and that Arm's business is sufficiently different to Nvidia's to make wholesale merger with Nvidia's existing operations counterproductive.

However, I can't see how they will avoid enormous conflicts of interest between Nvidia and other competing Arm customers and that this will be to the detriment of everyone who makes and uses Arm based products (except Nvidia).


> However, I can't see how they will avoid enormous conflicts of interest between Nvidia and other competing Arm customers and that this will be to the detriment of everyone who makes and uses Arm based products (except Nvidia).

Yes, this. I see people talk about this but not enough imo. How on earth would Nvidia ever be allowed to purchase Arm? That's a massive conflict of interest. I know the rules don't really matter when we're talking about companies this large but this is so blatant, to me.


If this goes through, I expect Nvidia to become the new Intel - a humongous, anticompetitive chip monopoly. Today, people refer to Intel as Chipzilla; tomorrow Nvidia will carry that moniker.


> Today, people refer to Intel as Chipzilla; tomorrow Nvidia will carry that moniker.

Intel will always be "Chipzilla". nVidia won't replace them but will join them as "Chipkong". So we'll have two problems instead of one, each a different one.


Duopolies everywhere.


It won't be a duopoly because nVidia doesn't make x86 hardware. Even if they did, there would be three contenders (AMD, Intel, nVidia).

nVidia is becoming an AI/HPC behemoth. GPUs for Compute, ARM for feeding the GPUs, Infiniband for interconnect. All in a tightly integrated, closed package. This is a clear monopoly.

They're light years ahead of AMD in terms of GPU development and debugging tools. CUDA has cornered AI/GPU computing, it seems. Intel's interconnect foray has fizzled, like their Xeon Phi / Larrabee efforts. So nVidia has the interconnect (Infiniband) and the compute part for now.

CPUs can be challenged and disrupted; it's a mature technology. AMD can catch nVidia in the enterprise in the medium term (hopefully), but Infiniband has no competitors for what it does. And no, 100G Ethernet is no match for 100G Infiniband (we've used it a lot since DDR; it's an insane tech).

We're living in interesting times.


Looking 10 years into the future, do we really need x86 though? Is it not possible that Intel will lose and CISC will become basically obsolete? (Yeah, I'm ignoring AMD here for no good reason, but we were talking about nVidia vs Intel anyway.)


Looks somewhat unlikely. We may get other architectures as mainstream, and they would be more energy efficient for the same performance as x86 in mainstream use, but pure SIMD computation is underrated IMHO.

Yes, AVX has a clock penalty, but if your code is math-heavy (scientific, simulation, etc.) it's still extremely convenient for some scenarios.

GPUs are not perfect for "streaming data processing" or intermittent processing because their setup and startup time is still in seconds. You also need to transfer data to the GPU first if you want full speed. In CPU computing this overhead is nonexistent.

I develop a scientific application, and we've seen that with the improvements in the FPU and SIMD pipelines across generations, a 2GHz core can match a 3.7GHz one in per-core performance in some cases. This is insane. This is a simple compilation with -O3 only; -march and -mtune were intentionally left out.

Unless the GPU becomes as transparent as the CPU, we need to catch or surpass x86 at the SIMD / pure-math level to replace it completely.
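A loose illustration of the SIMD point above, using NumPy's vectorized kernels (which run compiled loops that can use the CPU's SIMD units) as a stand-in for hand-written AVX code; the exact speedup is machine-dependent:

```python
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Vectorized: one call into compiled, SIMD-capable loops.
t0 = time.perf_counter()
c_vec = a * b + a
t_vec = time.perf_counter() - t0

# Scalar: the same arithmetic, one element at a time.
t0 = time.perf_counter()
c_scalar = np.empty(n)
for i in range(n):
    c_scalar[i] = a[i] * b[i] + a[i]
t_scalar = time.perf_counter() - t0

assert np.allclose(c_vec, c_scalar)   # same result either way
print(f"vectorized path is ~{t_scalar / t_vec:.0f}x faster here")
```

The ratio printed will vary wildly by machine and interpreter overhead, but the vectorized path wins by orders of magnitude on typical hardware.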


As an outsider: What is so much better about Infiniband compared to Ethernet?


Infiniband (IB) is not a network per se; it's an HBA. It can run TCP/IP, but that's not its main application.

Every Infiniband adapter is self-aware and topology-aware. They know where the other nodes are, so they can talk with each other directly, regardless of the topology (the network is mapped, managed and maintained by a daemon called the subnet manager, which can run either on switches or on a dedicated server).

This hardware and software combo results in three things:

1. Memory-to-memory transfers: IB can transfer from the RAM of one host to the RAM of another directly with RDMA. This means that when you run MPI and send a message to other processes, it's magically beamed over there, direct to the RAM of the target(s). IB is transparent to MPI via its libraries, so everything is automagic and 100x faster.

2. Latency: A-to-B latency is around 2-5 ns (nanoseconds). This means that when running stuff like MPI, the machines become one as much as they can be. Before Ethernet has even assembled a single packet, you're there; possibly having finished your transfer and gone back to churning your code.

3. Speed: 40Gbps IB means 38+ Gbps real throughput, for every p2p connection, if you're running through a cube-topology core switch. 80Gbps means around 78 or so. So the theoretical maximum and the sustained rate are not far apart. In most cases, 100 means 100 sustained, 80 means 80 sustained and so on (you can attach storage devices to an IB network and enjoy that speed and latency on your HPC compute nodes for files).

Moreover, with more modern cards and switching hardware, it hardware-accelerates MPI operations (broadcast, atomics, summation, etc.) and has multi-context support for serving multiple MPI processes without them blocking each other as much as possible.

For HPC, it's a different universe of speed, latency and processing acceleration. Moreover, you can run TCP/IP over it but, we generally run another gigabit network for server management.


I'm just picturing something like Superman's crystal cave https://i.pinimg.com/564x/c9/d5/a4/c9d5a448c3c0eb98014e8be0d... with a bunch of computer modules plugged into an Infiniband connection system. It could be a collection of manycore ARM processor modules as well as Nvidia GPU and AI modules. You just keep plugging more of them in to build up your home supercomputer.

Then you use your Neuralink brain-computer interface (communicating with the home supercomputer cluster with an ultra-compact WiGig module) to "program" it by talking to an AI avatar that pops up in the middle of your living room (or whatever simulation you are replacing it with currently). The cluster runs the AI and the simulation.


FWIW, Ethernet supports RDMA also, via RoCE.

> A to B latency is around 2-5 ns (nanoseconds).

What are A and B? And where did you get these numbers? HCA latency is more like ~500 ns.


What would Nvidia have a monopoly on?

For that matter, what does Intel have a monopoly on today?

There is more competition in the CPU market this year than there has been for a long time. Things are getting better, not worse.


Regardless of the consequences of selling to Nvidia…that’s not a great exit, is it? They bought ARM for $32 billion, and they are getting $40 billion for it? With all the money sloshing around in the debt markets, I figured they’d get a bit more.


Depends on how much leverage they used. If it was all cash on hand then that's a poor ROI; if they only put up, say, 8B, then that's a nice return over 4 years.


8B at zero percent interest would only increase the return to about 6% compounded.


SoftBank bought Arm in 2016 for $32 billion. A 32B investment turned into 40B over 4 years is 6% ROI. (32 * 1.06^4) = 40.3

8B + 24B loan turned into 40B - 24B loan = 16B, or 8B in profits minus interest. That could be up to a 19% annual ROI: 1.19^4 ≈ 2x. I don't know what their loan interest rate looks like, but I suspect it's shockingly low.
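The arithmetic above can be sanity-checked with a quick sketch (ignoring loan interest, which would lower the leveraged figure):

```python
def cagr(initial, final, years):
    """Compound annual growth rate: the fixed yearly return that turns
    `initial` into `final` over `years` years."""
    return (final / initial) ** (1 / years) - 1

# All-cash case: $32B in 2016 -> $40B in 2020
all_cash = cagr(32, 40, 4)      # ~5.7% per year

# Leveraged case: $8B equity (plus a $24B loan) -> $16B equity after repayment
leveraged = cagr(8, 16, 4)      # ~18.9% per year, before interest

print(f"all-cash: {all_cash:.1%}, leveraged: {leveraged:.1%}")
```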


Sorry I misread your post, thanks for the clarification.


Nvidia already has their own ARM SoCs; they can just merge the Arm developers into the team.

I suspect that Nvidia will change Arm's business model a little and start to sell high-performance Nvidia Arm CPUs directly to server, laptop and mobile manufacturers.

Then we will have Intel, AMD and Nvidia in both CPU and GPU markets.



Oh the joys of stock markets. Nvidia makes around 3B a year (profit), and they have 10B cash. IIRC Arm isn't really a cash cow. How can they afford such a deal? Their market capitalization is 330B - for whatever reason people are willing to pay that much for their shares. Money is cheap nowadays - also thanks to Corona. So they sell some warrants, sell a few new shares, and it's done.



The disaster that was WeWork could ruin ARM.


or: The disaster that was WeWork could accelerate the adoption of an open-sourced, royalty-free ISA.


An open source ISA isn't going to be any better than ARM if it's being sold to you by AWS, GCP, and Azure.


It simply wouldn't make sense for NVidia to maliciously impose upon ARM after acquisition, as many of the commenters are concerned. I'm not saying they won't, but the present situation would limit any such efforts long enough that they would be futile.

Consider the perpetual license agreements ARM holds with companies like Apple. These companies are best positioned to resist meddling with ARM. As an example, Apple will forever have access to the ARM ISA, so NVidia can't simply stop them from using existing designs. The processors Apple uses are all custom designs anyway. If future designs were purposely kneecapped, they could just improve their currently licensed designs until a suitable alternative is produced. Hindering future processor designs won't hurt the biggest players in the short term, and in the long term it will only drive them to the competition.

NVidia could take the approach of slowly drifting new designs toward greater integration with their own GPUs in such a way that alternatives would be displaced - either by favoring NVidia GPUs or by the difficulty of using alternatives. This would be obvious though, and would again drive their users away.

NVidia hasn't had success in any other market like they have had with GPUs. I see this as an opportunity to diversify and secure their future, and they want to take it.


It's not just about Apple. Suppose you're an SoC designer without an architecture license who makes SoCs that successfully compete with, say, Tegra. Maybe you have a license to use the Cortex-A77. Let's assume that the A78 is the successor to the A77.

- Will Nvidia sell you a license to the A78 to enable you to continue to compete with Tegra, or are you stuck on the A77?

- If you can't get a license to the A78, where do you turn? RISC-V? Possibly, but will you still have a business by the time a competitive RISC-V design emerges from somewhere?

The point is that Nvidia might play fair but the temptation to hinder those who compete directly with its own SoCs - where it will make a lot more money - will be great, and who will stop them if they do?


Do Apple actually work with Arm currently or is all their Apple Silicon stuff completely on their own? I don’t actually understand the issue they have with Nvidia but they seem to have one.


As I understand it, Apple has a special perpetual license for the architecture thanks to it being one of the companies that founded ARM. This deal shouldn't have too much impact on Apple.


I think you’re thinking of PowerPC?

To my knowledge, Apple was not involved in any founding of ARM (the company or the ISA).

ARM history, according to Wikipedia: https://en.wikipedia.org/wiki/ARM_architecture#History

Edit: I guess you could quibble about ‘founding’, but really I’m wrong, and my own link proves it!

————

Advanced RISC Machines Ltd. – Arm6

In the late 1980s, Apple Computer and VLSI Technology started working with Acorn on newer versions of the Arm core. In 1990, Acorn spun off the design team into a new company named Advanced RISC Machines Ltd.,[30][31][32] which became Arm Ltd when its parent company, Arm Holdings plc, floated on the London Stock Exchange and NASDAQ in 1998.[33] The new Apple-Arm work would eventually evolve into the Arm6, first released in early 1992. Apple used the Arm6-based Arm610 as the basis for their Apple Newton PDA


> Apple used the Arm6-based Arm610 as the basis for their Apple Newton PDA

The ironies of history: one of Apple's most infamous failures ended up being the foundation of their later success.


Including how much JavaScript kind of resembles NewtonScript.


From that page:

https://en.wikipedia.org/wiki/ARM_architecture#Advanced_RISC...

Advanced RISC Machines Ltd. – Arm6

In the late 1980s, Apple Computer and VLSI Technology started working with Acorn on newer versions of the Arm core. In 1990, Acorn spun off the design team into a new company named Advanced RISC Machines Ltd.,[30][31][32] which became Arm Ltd when its parent company, Arm Holdings plc, floated on the London Stock Exchange and NASDAQ in 1998.[33] The new Apple-Arm work would eventually evolve into the Arm6, first released in early 1992. Apple used the Arm6-based Arm610 as the basis for their Apple Newton PDA.


Plus Apple was a large shareholder (about 45% I think) in Advanced RISC Machines Ltd. The sale of that stake in the late 1990s raised a lot of cash (over $1bn) for Apple when it really needed it.


From this article https://www.imore.com/mac-moving-apple-silicon-not-arm

"First, ARM has two different kinds of licenses. One is for chipset designs. You pay your fee, you take your Cortex cores or whatever, you get them fabbed, and you've got your CPUs.

The other is an ISA license. With that, you get no chip design. None. All you get is the instruction set architecture. You have to roll the actual design yourself.

And that's what Apple's been doing. Making their own custom designs that use the ARM instruction set. For years."


Isn't that what everybody is doing? Qualcomm, Samsung, Huawei...?


No. They have architectural licenses but they use ARM designs.


Just to point out: the perpetual license for the architecture is not special to Apple, and has nothing to do with ARM being founded by Apple.

It is standard business offering from ARM.


Saw a presentation a little while ago with Simon Segars of Arm and Jeff Williams of Apple (don't have the link but it was in honour of Morris Chang of TSMC and I think Jensen Huang hosted!) which seemed to imply that there was quite a close relationship. If nothing else I'd expect feedback from Apple to Arm about the evolution of the architecture.

If Apple was really unhappy about this and didn't want full ownership they could probably pull together a consortium to put together a rival bid. Doesn't look like it's happening though.


> Saw a presentation a little while ago with Simon Segars of Arm and Jeff Williams of Apple (don't have the link but it was in honour of Morris Chang of TSMC and I think Jensen Huang hosted!)

Perhaps this: https://www.youtube.com/watch?v=lGT3zSGDN3k


It was - thank you. Not sure which part led me to think that Apple and Arm worked together though and it's a long presentation!


Maybe because it's a co-founder of ARM?


There was a specific exchange between Segars and Williams in this presentation which seemed to imply a close relationship.


Apple has a perpetual license to ARM tech. There's nothing Nvidia can do to them.


Apple is clearly tied historically to ARM, and might have gotten something like that, but I've yet to see a definite source about this so called perpetual license.

For all we know they just pay for that license like everybody else.


Along with CPUs, ARM also makes mobile GPU IP (Mali), as Nvidia does in their Tegra. I would be surprised if this deal went through.


Mali was an ARM acquisition. It makes sense for NVidia to smash Mali and RTX together to allow Android manufacturers to compete against Apple.

Apple built their GPU studio from ex-Imagination staff and will introduce 3 GPUs over the next year: Sicilian, Tonga, Lifuka to support their mobile and desktop plans.

The question is whether ARM will sub license this combined GPU tech, or if it will be NVidia silicon only.


I think Android manufacturers need SoCs with 16MB L3 shared across the CPU+GPU+DSP+NN like the A13 more than they need RTX in a GPU.


Apple is famous for compiling all its OSes to different architectures. We saw it with the quick switch to Intel and this upcoming switch to ARM. I’m convinced someone somewhere at Apple is firing off an email asking: "hey, are we compiling our stack on RISC-V?"


Hopefully European and American governments will step in and stop this. This is terrible for the future of ARM as a somewhat neutral platform and the Industry as a whole.


What effect does this have on Apple and them basing their own chips on Arm?

Are they now so far down their own road of development that it doesn't really matter?


None whatsoever. They have a license from the Newton days which allows them to do whatever they want perpetually.


Is there a citation for the perpetual licence? I would have just assumed they would have purchased a long-term licence to build cores for a particular ISA version


You just have to implement the ARMv8 architecture in your SoC; nothing prevents you from extending it further with ML or GPU bits.

I doubt that an NVidia-controlled ARM will ever be inclined to sell another architecture license... they would rather sell you their own ARM chip designs.


Do Apple just build in their own direction now then? I thought ARM would every so often create a new design and then everyone would use that as the basis for their chips. Do Apple not do that? Or am I misunderstanding what's going on?


Related read (a reminder on the history of the original acquisition): https://www.wired.co.uk/article/softbank-vision-fund.


UK government should save ARM !!!


Well, looks like I’ve been hugely wrong on this all along. Seems like one of the biggest overpays in acquisition history, I can’t fathom how such a tiny company with such mediocre growth deserves such a massive premium.



Well... on the bright side I like what Tegras have been able to achieve. The Shields and the Switch are neat machines. Maybe we'll see some more ARM designs that can compete with Apple.


No, everything Nvidia is evil. We're supposed to be hating on them, right? Can I get my upvotes for perpetuating the hive mind now?

Seriously. I'm excited to see ARM owned by a hardware-centric company. That said, I really don't expect this to have much impact in the near term. Licenses are already in place. China will probably spin competing chips based on their own ISA before too long (5-10 years).

I'm frankly interested to hear what folks in HW have to say. Hearing the repetitive, uninformed opinions of users and SW folks isn't really telling me anything informative about this. I'm an embedded SWE and I'm not seeing much to worry about. Would it be better if Apple bought ARM? Huawei?

This notion that a hippie commune is going to buy ARM and lead us all into open source nirvana where free, cutting edge IP rains down from the sky is frankly goofy.


Why would you be excited?

Nvidia can (and does) build its own ARM ISA CPUs and offer them in the market, so we already have access to Nvidia's take on the architecture. Does it have established expertise in ISA design or microcontrollers?

Maybe I'm missing something?


Like I said, maybe there's room for GPU improvements in the ARM designs. Seems like nVidia could do something like pack in hardware DLSS. They license out their GPU designs, so it's not far-fetched that their GPU features make their way into mobile ARM chip designs. The Tegra was a bit power-hungry, but maybe having control of the full stack can alleviate that.


I am really interested to see how China is going to react. Can the Trump administration put some kind of sanction on ARM licenses?


ARM should be converted into a non-profit foundation or something. It should be able to finance itself from the licensing deals. But SoftBank couldn't make $40 billion with such an idea.


I wish they went public. I’d love to invest in them long term.


Well now you can. Through NVDA.


gigglesnort


I’m sure many people would like the idea of investing in it. ARM Holdings was public and listed before. Then SoftBank bought it and delisted the company from the stock exchanges.


This is so dumb. Just IPO and take the windfall.


SoftBank really really needs that money


The important players like Samsung/Apple/Huawei are already ARM architecture license holders so they can do their own thing if they don't like the direction that NVidia is going in.

A 128-bit version might be an issue in the future.


I could imagine we might see an architecture with 128-bit word sizes, but I doubt we'd see 128-bit pointers (aka a 128-bit address space) any time soon. It'd just be more annoying than anything. I personally have even written software where we do pointer math to get 32-bit indexes rather than storing full 64-bit pointers, simply because of space constraints (like trying to fit a hot struct into a single cache line).

Having a native uint128_t would make dealing with IPv6 addresses a lot nicer though :)
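A minimal C sketch of that trick (struct and field names invented for illustration): swapping three 64-bit pointers for 32-bit indexes into a shared arena is exactly what gets a hypothetical hot node back under a 64-byte cache line on a typical 64-bit target.

```c
#include <stdint.h>

/* Hypothetical hot node: three 64-bit links push it past a 64-byte
   cache line on a 64-bit target (8 + 24 + 40 = 72 bytes). */
struct node_ptr {
    uint64_t key;
    struct node_ptr *left, *right, *parent;
    uint64_t payload[5];
};

/* Same data, but with 32-bit indexes into a shared node arena:
   8 + 16 + 40 = 64 bytes, i.e. exactly one cache line. */
struct node_idx {
    uint64_t key;
    uint32_t left, right, parent, _pad;
    uint64_t payload[5];
};
```

The indexes cost an extra base-plus-offset addition on every dereference, but the smaller footprint usually wins once the structure is cache-bound.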


> but I doubt we'd see 128 bit pointers (aka a 128 bit address space) any time soon.

I can see 128-bit pointers being a thing: not because of 128 bits of address space, but for the ability to embed type information directly in the pointer, which could improve performance in dynamic-dispatch scenarios or provide runtime type-safety built into the hardware itself.

> Having a native uint128_t would make dealing with IPv6 addresses a lot nicer though :)

We're already there: https://stackoverflow.com/questions/34234407/is-there-hardwa...
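Runtimes already do a small-scale version of this with spare alignment bits; here's a hedged C sketch (names invented, not any real runtime's API) of the idea that wider pointers would generalize:

```c
#include <assert.h>
#include <stdint.h>

/* With 16-byte-aligned allocations, the low 4 bits of a pointer are
   always zero, so a small type tag can live there and be stripped
   before the pointer is dereferenced. */
typedef uintptr_t tagged_ptr;

#define TAG_MASK ((uintptr_t)0xF)

static tagged_ptr tag(void *p, unsigned type) {
    /* Precondition: p is 16-byte aligned and the tag fits in 4 bits. */
    assert(((uintptr_t)p & TAG_MASK) == 0 && type <= TAG_MASK);
    return (uintptr_t)p | type;
}

static void *untag(tagged_ptr t)      { return (void *)(t & ~TAG_MASK); }
static unsigned type_of(tagged_ptr t) { return (unsigned)(t & TAG_MASK); }
```

A 128-bit pointer would make room for far richer metadata (full type IDs, bounds, permissions) without stealing alignment bits.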


Not type information, but CHERI capabilities: https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/cheri...


Current chips don't even actually support 64 bits of hardware address yet, seeing as few users can afford 16 exbibytes of DRAM, and shuffling around a full 64 bits of hardware address is a noticeable tax on the chip.


Large address spaces are useful for virtual memory reasons, not because you literally have petabytes of RAM physically attached to your machine.


A 128-bit ISA is super unlikely. We're barely using the 48 bits implemented in AMD64 out of 64 possible address bits.

Wider data arithmetic is already implemented as AVX/AVX2/AVX-512.
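Concretely, AMD64 requires "canonical" addresses: bits 48-63 must be copies of bit 47, so only 48 bits actually select an address. A small C sketch of that rule (assuming the usual arithmetic right shift on signed values):

```c
#include <stdint.h>

/* Canonicalize a 64-bit virtual address per the AMD64 rule:
   sign-extend bit 47 through bits 48..63. */
uint64_t canonical(uint64_t va) {
    return (uint64_t)(((int64_t)(va << 16)) >> 16);
}
```

Anything in the low half (bit 47 clear) keeps zeros on top; anything in the high half gets ones, which is why kernels live at addresses like 0xffff800000000000.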


Architectures with 128-bit pointers exist, but the extra bits are used for metadata. I don’t see much indication this’ll become mainstream, though.

https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/cheri...



25% more than they paid just 4 years ago?


Softbank bought ARM in 2016 and paid $32B ($35B in 2020 USD after 4 years of inflation).

And today they get $40B, but partly in Nvidia stock... Assuming Softbank manages to sell the Nvidia stock immediately and get their $40B, it is more like 14%. In the same timeframe Nvidia stock went from $30 (Jan. 2016) to $480 (Friday), or 16x.

Softbank would have had a way better outcome investing in Nvidia 4 years ago than investing in ARM. The potential 14% they got over the last 4 years is not great.


Softbank acquired 4.9% of Nvidia for $4B in May 2017. They sold it in February 2019 for $3.6B (according to this https://www.cnbc.com/2019/02/06/softbank-vision-fund-sells-n...). Another hilarious example of Softbank's market timing expertise.

Compared to other investments, an ARM sale of $40B would be a home run...


I wonder what the return would be from just taking the opposite position on everything Softbank does...


Sure, compared to a negative return, 14% in 4 years is nice. Still, it is a return of 3.3% per year. Nice, but not great. I was relativizing the return. Clearly, during that 2016-2020 period, pretty much any major tech stock would have done way more than 14%. Clearly Softbank did not time it correctly.


Hindsight is 20/20 in regards to the Nvidia stock gains. $5-$8 billion profit plus whatever they managed to extract in-between is a pretty astounding investment.


Perhaps how much the Nvidia stock would gain is the 20/20 part.

That Nvidia was going to grow massively was never in doubt. 3-4 years back, cryptocurrency mining made it next to impossible to actually buy a GPU. Machine learning was always going to keep growing massively. CUDA had cornered the market. AMD GPUs are far behind.

Similarly, AMD's growth was never in doubt after Ryzen launched; there is nothing on Intel's roadmap for the next 3-5 years that is going to be remotely competitive. Intel can only compete by the sheer force of their sales. AMD will keep growing for the next 5 years; by how much depends on how well they can execute their own sales.


It’s worth remembering that a significant percentage of CUDA users are unhappy with the nVidia lock-in, and also that AMD GPUs are capable enough to do the work. So should any of the efforts to break the CUDA lock-in succeed... nVidia is likely to lose a significant percentage of its machine learning business.

The only people I’ve ever met who actually love nVidia are enthusiast PC gamers, and that’s because games are optimised for nVidia due to nVidia's anticompetitive efforts in the game development space.


Nobody loves Oracle either. People still buy.

It is not just the developers either; even the bean counters hate them. There is constant chatter that they will be replaced, yet nothing actually happens.

Developer happiness is not how most enterprise sales happen. Most of the big cloud vendors today only offer Nvidia GPUs. Unless AMD GPUs become so much better that Nvidia is left behind, nobody is going to change.

The cost of change is enormous in large organizations. This is why Intel still makes money even though it is behind in tech on almost all parameters.

And that is when they actually are behind; AMD has nothing on the horizon to actually beat Nvidia.

I don't buy Nvidia GPUs because I like them; I buy them because the total cost of ownership of investing in a stack everyone uses is lower. Hiring is easier, cloud and library support is great, and there are fewer bugs that haven't already been seen and handled before.

Switching will only make financial sense if either Nvidia becomes so expensive that it is worth exploring alternatives, or AMD offers something radically different enough that going a different path can generate a lot of business value. Neither is applicable today or in the next few years.


I would have thought Apple was a better strategic fit for Arm and likely to be able to pay bigger bucks too.


People are speculating that Nvidia might end Arm core licensing, but Apple would definitely shut it down. Apple's culture is totally incompatible with Arm's business model.


That doesn't preclude Apple owning it but operating it as an independent subsidiary.

Even if it wasn't a subsidiary Apple already operates some divisions and teams differently, such as its Webkit team (one of the few teams where employees are allowed to have Github.com user accounts...)


Many employees are allowed to have GitHub accounts, since they used them prior to being hired :P The WebKit team uses GitHub because that is where web standards development happens.


What? How would Apple shut down what Nvidia is doing? If they buy ARM they can stop selling licenses for it if they like. Of course they'd have to honor any prior contracts with other customers, but when those run out, adios. Apple has its own license to use the ARM architecture/ISA as long as it likes, so what Nvidia does if it owns ARM is really no concern to the fruit company.


Do you have a reference about Apple having a perpetual license? I just found many articles that claimed they have an ARM "architecture license" but no specifics, and 8 years ago in 2012 ARM PR'd the deal as "long term" which might be relatively close to today.


I mean if Apple buys Arm, Apple will shut down Arm. No A80, no N3, no X3, etc.


Apple helped create ARM, as we know it, in 1990. They sold off their remaining stake around 2000 as they were beleaguered and needed the money. They chose not to buy in 2016, when SoftBank did.


Besides the screaming anti-trust issues they’d run into, that’s not really Apple’s business model, is it? Selling IP to other companies? I get they would want to control their own ISA, but I can almost imagine them creating their own, versus buying ARM and then being forced to maintain existing licenses.


Why should they continue the ARM model? They could shut down that revenue stream; keeping competitors behind on chip design to justify the huge premium on Apple devices is worth that kind of money.


Except they owned almost 50% of Arm in the 1990s and were very supportive of their business model.


Back then Apple's business was quite different. They licensed their OS, they collaborated on stuff like the Pippin, and they were not sure whether their future was in software or hardware.

The Apple of today is a very different beast.


Yup. That was also the Mac clone era of Apple. There is none of that DNA left.


I don't think Apple would be allowed to buy ARM, there would be serious competition implications.


And you think a company like Nvidia (another hardware company) owning the same tech doesn't have precisely the same competition implications?


There is no strategic fit for ARM at Apple. They have a perpetual license already and design their own chips. They don’t need the customer and antitrust headaches to pay over 10x revenues for the ARM licensing business.


I am not sure any antitrust concerns would remotely be applicable in the case of an Apple buy. There is no overlap of ARM's business with Apple's. Same as Google or MS buying phone manufacturers and shutting down/selling the parts they don't like.

Apple could shut down the licensing just to keep competitors behind; if they did that, it would set back all their Android competitors by 5-10 years at least and potentially fragment the chip market. Maintaining the iPhone's chip dominance is definitely worth $40B.


Their competitors already have their licenses. They’d continue to develop newer versions, and Apple couldn’t stop them. Some fragmentation would occur (though they’d probably create an industry group to coordinate).

So Apple would create minimal disruption and a huge antitrust case that they would lose. Why do you think Apple would be this dumb?


What are the implications for Apple?


Apple has an architectural licence, so they can do whatever they want.

Apple uses just the ARM instruction set and designs the microarchitecture themselves.


With the removal of the 'web' button, could we possibly get a non-paywalled link?


The anti-trust works so well these days...


Hopefully they use it to f&#@ Qualcomm.


Why don't Huawei/Xiaomi/Lenovo/ZTE buy ARM?

And then the hilarity of Trump banning doing business with ARM afterwards would be worth some popcorn...


Now Nvidia is the new Intel: it owns ARM and can do whatever it wants.

Time for other big companies to start checking out alternatives.


This makes sense. Most of the Jetson/Xavier/Nano boards already use a Carmel ARMv8 chip. I just hope this spurs more development in ARMv8. Currently the majority of ARM is ARMv7 (smartphones and Raspi).


I think it's more about having CPU IP at all than about which CPU IP they have used in Jetson/Xavier to this point (though that is a helping factor). HPC also loves ARM these days, and Nvidia already makes huge cash there.

Smartphones have largely been ARMv8 for the last 5 years. The Raspberry Pi is as well these days.


Raspberry Pi 4 uses ARMv8 according to Wikipedia: the ARM Cortex-A72.

https://en.wikipedia.org/wiki/Raspberry_Pi#Processor


Pi 3 and the V1.2 revision of the Pi 2 were ARMv8 too (Cortex-A53).

edit: also re smartphones, Android SoCs started to move to v8 CPU cores 5 years ago, in the Nexus 5X/Pixel 1 generation.



