Chinese researchers planning 1,600-core chips that use an entire wafer (tomshardware.com)
126 points by _____k on Jan 23, 2024 | hide | past | favorite | 109 comments


The article doesn't note this, but wafer scale integration is a very old idea. We discussed it at Inmos back in the day, since often the systems we built essentially consisted of many CPU die sliced out of the wafer, bonded into a package, then tiled onto a PCB[1]. But there are...issues: cooling for one. Iann Barron joked that you could make a toaster from two WSI wafers running full-tilt.

[1] https://twitter.com/tnmoc/status/429638751904878592


That is super cool. I wrote to SGS-Thomson when I was in high school and they sent me what felt like a refrigerator box of manuals for the 400 through the 9000. Huge transputer fan. The simplicity was striking, and that you could construct a system that could handle huge numbers of threads communicating across the network with almost no "system software" in the way was mind blowing.

Most famously, Gene Amdahl started https://en.wikipedia.org/wiki/Trilogy_Systems in the 80s to explore this idea.

The calculus changes when you don't have to dice the wafer, handle packaging, etc. We clock chips today at the highest speed at which we can safely remove the heat, so with these wafer-scale chips we trade frequency for area and need to (and should) clock them much slower.

At large production runs, the wafer in a Cerebras system is ~20k each for a system costing millions, and the primary engineering feat is still cooling. I'd love to see a WSI system utilizing NTV (near threshold voltage) logic.

https://semiengineering.com/near-threshold-computing-2/

Another interesting design pattern that has arisen is that Cerebras, Esperanto, Tenstorrent, and InspireSemi all use mesh networks with message passing.

What kinds of things did you work on at Inmos?


>> But there are...issues: cooling for one

> That is super cool.

...so apparently not... ;)


Can you not sandwich it with a Peltier layer?


I worked on board level designs and interfaces and memory subsystems among other things.


The traditional computer chip will have a power draw proportional to frequency and the square of voltage, which itself must be increased to control delay and raise frequency:

  P ∝ C×V²×f
So if you are ready to accept a lower speed per core, the power draw can be controlled and you won't get a toaster.

One difficulty would be that modern technological nodes have high leakage and dissipate power even when they are not switching, making it more advantageous to clock aggressively, finish the computation as fast as possible and cut the power on that entire circuit for the remainder of the timer slot ("race to idle"), as opposed to reducing the frequency and prolonging the "on" phase.

But that's a deliberate design choice, knowing the chip will be cut out and fitted with a substantial thermal solution. Wafer-level power draw is definitely something you can control at the design stage.
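A quick, purely illustrative sketch of how the P ∝ C×V²×f relation plays out (all numbers here are hypothetical, not any real chip's specs):

```python
# Dynamic power P = C * V^2 * f. Lowering frequency usually lets you also
# lower voltage, which is where the big (quadratic) savings come from.
def dynamic_power(c_farads, v_volts, f_hz):
    return c_farads * v_volts**2 * f_hz

# Hypothetical core: 1 nF switched capacitance, 1.0 V at 3 GHz.
p_full = dynamic_power(1e-9, 1.0, 3e9)    # 3.0 W
# Halve the clock and drop the voltage to 0.7 V:
p_slow = dynamic_power(1e-9, 0.7, 1.5e9)  # 0.735 W
print(p_full, p_slow, p_slow / p_full)    # roughly a 4x reduction
```

Leakage, as noted above, is the part this first-order model leaves out.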


I wonder when we'll convert to plasma computing, as in the entire instruction set and operators running as a waveform in a condensed cloud of plasma where the voltage outs at specific points equal the computational result of the inputs?

If we could do that then we could run terahertz frequencies.


How does whatever you're talking about differ from the thyratrons that were used in early vacuum tube computers and decidedly do not scale to THz clock frequencies?


Do we really need a clock for computing?



Yes, I was going to say I thought this had been looked at with the transputer.

I also recall Clive Sinclair suggesting this approach in the late 80s (can't recall if that was somehow related to the transputer or was completely separate). I believe his idea was that the faulty CPUs that naturally exist due to wafer defects would be cut off from the main group (I could have misremembered but I think it may have been via some kind of self test process).


It looks like they were close to launching a product! https://qlwiki.qlforum.co.uk/doku.php?id=qlwiki:sinclair_waf...

RAM, not CPU, but IIRC he was talking about CPUs too.

and later there was a prototype storage product: https://www.computinghistory.org.uk/det/3043/Anamartic-Wafer...


There’s been quite a few. I can’t remember the first one I found which was also heavily analog. Here’s another:

https://www.kip.uni-heidelberg.de/vision/previous-projects/f...

https://iopscience.iop.org/article/10.1088/2634-4386/acf7e4


The HICANN chip is really interesting, never seen or heard of it before!


Not exactly a wafer, but once I tried to create a very powerful LED bulb by combining the LEDs of multiple 10W LED bulbs close together, and yep, it got very hard to cool down. I wasn't expecting it to get that hot, but later, once I gave it some thought, I said silly me, of course it would be hard to cool down. When you put together multiple elements that heat up, the radiator-to-heater ratio quickly deteriorates.

And now, when I read the title, heat was the first thing that came to mind.


I built a 4096-pixel LED array (64x64 addressable pixels) that, when fully lit, used something like a kilowatt. However, it was about 4 feet by 4 feet with spacing between each pixel and could run just fine without overheating.

These days I'm working on a PCB with many LEDs on it and heat management is much more challenging; I ended up designing the board so that the LEDs sit on thermal rectangles with through-holes (vias) that connect it to a massive ground plane (in this case it's "thermal ground" not just electric ground), as described in the datasheet for my LED chip. Each LED is 3W so it can get hot really quickly, but as long as the board is adequately coupled to a larger heat sink, it can run at fairly high power. Thermal coupling is harder than I expected! I have learned you can melt solder with a hot LED(!).

Also, it helps to run LEDs at lower-than-max power; you get most of the light but less of the heat. Keeping LEDs in their happy zone prolongs their life significantly.


Thanks for the tip


If there's serious work that can't be effectively done any other way then complex and esoteric cooling setups to overcome those challenges are fine to design and use.

Not sure what needs 1600 cores in one 'chip' but it's probably fairly impressive.


You can't call it a chip anymore as it's not cut into chips. More a slice... perhaps they'll put it in a 19" pizza box.

Pizza Computing


Sure, at some point solving the cooling becomes the path of least resistance. It's just surprisingly hard if you have to account for reliability and dimensions - so it depends on the application.


I haven’t done the math - why wouldn’t something as ‘simple’ as a die diameter solid copper slug work?

Easy to drill for water cooling, and at these scales pretty cheap.

Or are we talking just getting the heat out to whatever heat management device is attached without burning something in the chip itself?


In my understanding, normally you have a tiny chip that is inside a larger package and then you can attach that packaged chip to something that takes away the heat. So the ratio of cooling element(package + heat remover) to heating element(the IC) is pretty large. Also, the speed of removing heat from the silicon IC is limited to the heat conductivity of the silicon itself and the materials used to make the package. The silicon itself is not where the heating happens but on the "etched/printed" features on top of the silicon.

Therefore, the amount of heat generated by the heating elements (the tiny wires over the wafer) grows faster than the heat dissipation capacity, which raises the temperature until the unit breaks.

Notice that when the heating elements are close together you lose the horizontal heat gradient advantage, since that grows with the perimeter while the heat-generating elements grow with the area.

Which means you have to get creative, add moving parts or use more exotic materials, which makes the thing significantly more expensive and less reliable as more things to break are added.
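The perimeter-vs-area point above can be sketched in a couple of lines (a toy model in arbitrary units, not real thermal data):

```python
# Toy model: heat generated scales with die area, while lateral (edge)
# heat spreading scales with perimeter, so the ratio worsens with size.
def heat_to_edge_ratio(side_mm):
    area = side_mm ** 2        # proportional to heat generated
    perimeter = 4 * side_mm    # proportional to lateral heat escape
    return area / perimeter    # grows linearly with side length

for side in (10, 20, 40, 80):
    print(side, heat_to_edge_ratio(side))  # 2.5, 5.0, 10.0, 20.0
```

Doubling the side length doubles the heat-per-edge burden, which is why a wafer is so much harder to cool than the same silicon diced into chips.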



I'm not well versed in this subject but could you solve the heat issue by running the cores at low voltage and clock speeds?


I wonder if the performance increase of separating into chips that can be cooled might dwarf the impact of a slower cooler single wafer.

I would also think a huge wafer like this would correlate with a data center, where cooling is less of an issue than say a laptop or desktop. or a wafer-phone :)


I was going to say ... pretty sure I saw this on TV in the 1980's.


> you could make a toaster from two WSI wafers running full-tilt.

I'd totally eat compute toast, where can I get such a toaster?


You could design it and have it manufactured fairly cheaply using JLCPCB. Design a simple PCB that is basically a long zig-zagged wire (copper trace). Make sure the trace has a fairly high resistance, like a few ohms. Include some heavy-duty connectors on the board.

Send the design to JLCPCB; it usually takes a few days and costs almost nothing to have that shipped to the US.

Design a base that holds two of the boards a fixed distance between each other, with room for toast, and build an enclosure. You also need a spring-loaded part to hold the toast in place and let you pop it out.

Here's an example, sans JLC: https://www.instructables.com/PCB-Heater-Diy-Joule-heating/

Much of the challenge would be making this food safe, and consumer-safe, while also affordable and competitive with a $5 toaster, but that doesn't stop a sufficiently motivated hacker.
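A back-of-envelope sizing for such a heater trace might look like this (every value below is an assumption for illustration, not a tested design):

```python
# Heater trace resistance: R = rho * length / (width * thickness).
RHO_CU = 1.68e-8      # resistivity of copper, ohm*m
THICKNESS = 35e-6     # 1 oz copper, ~35 um
WIDTH = 0.5e-3        # 0.5 mm trace (assumed)
LENGTH = 2.0          # 2 m of zig-zagged trace (assumed)

r = RHO_CU * LENGTH / (WIDTH * THICKNESS)  # ~1.92 ohm
v = 12.0              # low-voltage supply assumed for safety
power = v**2 / r      # ~75 W per board
print(f"R = {r:.2f} ohm, P = {power:.0f} W at {v} V")
```

Two boards at ~75 W each is in toaster-adjacent territory, though a real toaster runs closer to 1 kW, so you'd iterate on trace geometry and supply voltage.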


There was a recent Ask HN about making a company to produce Made In USA toasters. Maybe you can pitch him this idea.


That sounds great... until you get a defect on your networking block.

Then what? You've got a heterogenous network with tons of "this core to this core is not like the others" exceptions (latency, bandwidth, etc).

I know chip-to-chip/memory interconnects burn a ton of power, but fabbing discrete "biggest chip we can get with decent yield" still seems a solid tradeoff in the reality of < 100% yields.

Does anyone have a link or search phrases on how this is currently handled for high-chiplet counts? E.g. interconnection routing architectures that are still reasonable with random manufacturing-time failing links


I assume it's probably an 1800-core wafer and they just fuse off the 200 defective cores. Some redundancy is likely just built in based on the expectations of process reliability.

Probably multiple networking blocks, too, and you'd use less demanding process features on the things that can't be duplicated. In fact you could probably even have FPGA-style soft programmable fabric interconnects to work around process failures.
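The built-in-redundancy argument can be sketched with a simple Poisson yield model (the defect density and core area below are made-up numbers, not data for this chip):

```python
import math

# Poisson yield model: P(core has no defects) = exp(-D * A),
# where D is defect density and A is the core's area.
defect_density = 0.1   # defects per cm^2 (assumption)
core_area = 0.25       # cm^2 per core (assumption)
cores_fabbed = 1800

p_core_good = math.exp(-defect_density * core_area)  # ~0.975
expected_good = cores_fabbed * p_core_good           # ~1756
print(f"{p_core_good:.3f} per-core yield, ~{expected_good:.0f} good cores")
```

Under these assumptions, fabbing ~1800 cores to reliably ship 1600 leaves a comfortable margin; the real numbers depend on the process and core size.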


That's great if it's just hitting compute cores.

But when your wafer networking flows through cores (because it's cores-all-the-way-down), a defective core starts to impact network performance. Which cascades into cache, memory, locality, etc. Which starts to make a very unpredictable hardware system for software to reason about.

See also dragontamer's comment down below.


Not all transistors have to yield the same. You can use more forgiving design rules to ensure that critical network elements don't fail. There is also post fabrication repair. It all comes down to economics. There is no fundamental problem engineering wise.


also the networking is a small fraction of the area so you naturally expect it to have fewer defects


Cerebras already solved this problem. So we have that existence proof. The redundancy overhead in v1 was 1-1.5%, and reportedly half that for v2 WSE.

https://www.youtube.com/watch?v=8i1_Ru5siXc


This is on a 22nm process, which is what Intel Haswell [1] (4th gen core) was using 10 years ago. The latest gen chips are now 7 and 5nm [2], and it seems a lot of the innovation in chip manufacturing is about shrinking this size.

How much is being done to improve yields of these older process sizes, maybe using the improvements done for smaller sizes? Logically it must be possible to have 100% yield on wafers at a certain process size -- but what size is that?

[1] https://en.wikipedia.org/wiki/Haswell_(microarchitecture)

[2] https://en.wikipedia.org/wiki/Microprocessor_chronology#2020...


Core to core latency is already very heterogeneous on existing CPUs. Fusing off a few links isn't the end of the world.


Heterogeneous but predictable from distance, no?

Routing chip-to-chip in a 2x2 or 3x2 with a fused link is less complicated than routing around a 1,600-core layout with multiple fused links.

See nerpderp82's link above.


Predictable in theory, but not much software actually tries to predict.


Now that Cerebras has proven it works, I would love to have x86 / ARM / Nvidia do this. And for best results, onboard one of the memory makers as well. Cerebras seems to have underestimated the memory requirements of LLMs. So imagine 16 H200 GPUs along with single-digit TBs of HBM memory stitched together on a single substrate wafer. It seems doable with the right technology.

Go for it, China. You are on a good track here.


> 16 H200 GPU along with single digit TB HBM memory stitched together on a single substrate wafer

How on earth would you cool this?


15kW dissipated over that area isn't particularly challenging for an industrial water cooler - within reason, you can just keep cranking up the flow rate. What'd scare me is power delivery, because they've probably got >10,000 amps going to the die.

https://www.eetimes.com/powering-and-cooling-a-wafer-scale-d...
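A rough sanity check of the water-cooling claim (the allowed temperature rise is an assumption, not a figure from the linked article):

```python
# Coolant mass flow needed to carry away P watts with a temperature
# rise of delta_t: m_dot = P / (c_p * delta_t).
power_w = 15_000
cp_water = 4186        # specific heat of water, J/(kg*K)
delta_t = 10           # allowed coolant temperature rise, K (assumption)

mass_flow = power_w / (cp_water * delta_t)  # kg/s
litres_per_min = mass_flow * 60             # ~1 kg of water is ~1 L
print(f"{litres_per_min:.1f} L/min")        # ~21.5 L/min
```

Around 20 L/min is indeed modest for an industrial chiller; the hard part is spreading that heat extraction uniformly over the wafer, not the bulk flow.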


20K amps (!!!): https://vimeo.com/853557623

They use probably the fanciest piece of rubber (and metal) ever made to pass power from the front side to the die.


Doing it with all that water needing to get everywhere is definitely a feat.

But the power itself isn't too bad. A square millimeter of wire can comfortably carry 20 amps, and you can scale that up pretty straightforwardly.

It looks like each of those 84 rectangles has to deal with 240 amps and has multiple square centimeters of contact with the voltage regulator card above it.
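The parent's arithmetic checks out (using the rule-of-thumb current density quoted above, not a datasheet value):

```python
# 20,000 A at ~20 A per square millimeter of copper cross-section.
total_amps = 20_000
amps_per_mm2 = 20                  # comfortable copper current density
print(total_amps / amps_per_mm2)   # 1000 mm^2, i.e. ~a 32mm x 32mm bus bar

per_module = total_amps / 84       # amps through each of the 84 modules
print(round(per_module))           # ~238 A, close to the quoted 240 A
```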


Seymour Cray solved this problem 40 years ago.

https://en.wikipedia.org/wiki/Cray-2#/media/File:Cray2.jpg

Bring back toxic waterfalls.


The issue with Fluorinert is not toxicity but a very high global warming potential.


> Although Fluorinert was intended to be inert, the Lawrence Livermore National Laboratory discovered that the liquid cooling system of their Cray-2 supercomputers decomposed during extended service, producing some highly toxic perfluoroisobutene.[5] Catalytic scrubbers were installed to remove this contaminant.

With that much constant heat, what chemical might be best/most practical I wonder.

https://en.m.wikipedia.org/wiki/Novec_649

https://en.m.wikipedia.org/wiki/Hydrofluoroether


Fluorinerts with very low global warming potentials have been invented since then.


Direct-die vapor phase change cooler? That tends to be the fastest way to get heat out of a chip and to a big radiator. For industrial applications the need to run a big fan & a compressor are less problematic.


Submersion cooling.


> How on earth would you cool this?

With liquid nitrogen. :-)


>"The latter has only been managed by Cerebras so far, but it looks like Chinese developers are looking towards them as well."

Cerebras's wafer has 850,000 cores, which totally dwarfs the 1,600 cores on the Chinese wafer. I did read, though, that Cerebras cores are optimized for tensor ops. Does the Chinese version have more universal cores, or is it just a way smaller clone of Cerebras?


There were experiments with wafer-scale FPGAs in the 1990s. The idea was that being programmable, the final chip could be programmed to route around defects. Lasers were also used to eliminate defective cells.


The concept is interesting.

I guess it will have to be able to route around broken cores?


I've seen other projects that market themselves as "wafer scale". https://www.cerebras.net/product-chip/

> I guess it will have to be able to route around broken cores?

Yeah, but you'll also have to route around broken routes, and that starts to get a bit too chicken-and-egg for me.

I guess you could design something akin to error correction codes, meaning you're resilient to X failures. Ex: a 64-bit bus could have physically Single Error Correction, Double Error Detection, which IIRC would be 72 physical wires.

That means any wire can completely fail, but you still have a 64-bit bus (indeed, the 8x error-correction wires could all fail and you'd still have a 64-bit bus).
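As a minimal sketch of the idea, here is a toy Hamming(7,4) single-error-correcting code: the same principle, scaled down from the hypothetical 72-wire bus above, and not the interconnect scheme of any actual wafer-scale part.

```python
# Hamming(7,4): 3 parity bits protect 4 data bits, so any single
# flipped bit (one failed "wire") can be corrected at the receiver.
def encode(d):
    """d: 4 data bits -> 7-bit codeword (positions 1..7, parity at 1,2,4)."""
    c = [0] * 8
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]

def decode(code):
    """7-bit codeword -> 4 data bits, correcting any single flipped bit."""
    c = [0] + list(code)
    syndrome = ((c[1] ^ c[3] ^ c[5] ^ c[7])
                | (c[2] ^ c[3] ^ c[6] ^ c[7]) << 1
                | (c[4] ^ c[5] ^ c[6] ^ c[7]) << 2)
    if syndrome:            # the syndrome is the index of the bad bit
        c[syndrome] ^= 1
    return [c[3], c[5], c[6], c[7]]

word = encode([1, 0, 1, 1])
word[4] ^= 1                # a "wire" fails: one bit flips in transit
print(decode(word))         # recovers [1, 0, 1, 1]
```

SECDED codes like the 72/64 example add one more parity bit on top of this so double errors are at least detected, not silently miscorrected.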

------------

At some point, it makes more sense to cut the chips out and test them for reliability, then cut the routers out and test those for reliability, and finally glue them together.

On the other hand, doing it all on one wafer has cost savings / manufacturing simplicity. The math is likely difficult for optimizing over costs, production speeds, and so forth.


You can design supervisory nodes and routing elements with relaxed design rules that guarantee high yield for these critical elements.


There are techniques to disable nodes that fail testing, so it shouldn't be a problem (within reason).


The article talks about chiplets. I presume the wafer will still be cut into distinct chips? I thought there were thermal (and yield) reasons to not making chips that are too large.


My take was that they currently have a working chiplet design and are looking to move to a wafer-scale (i.e. not cut up) design. Thermal/power/yield are all issues and the design has to take all of that into account. Cerebras has done it for their NN processors, so it has been done before.


They can use these for their high speed trains!


What about trains requires this level of processing power???


Well of course the LLM that is going to drive it.

/me ducks


Our next stop is "Let's talk about something else."

https://en.wikipedia.org/wiki/Tian%27anmendong_station


What kind of wattage is expected for this? And how is heat management done?


How the hell are they gonna cool a cpu like that?


Big block of copper. Water pumped through channels. The total heat output isn't all that big compared to other water-cooled processes. A car engine cylinder pumps out more heat across a similar area. You will just need pumps and fans bigger than the toy parts used in normal PC cooling.


You just need a car radiator and engine.


Not really. A small automotive radiator, like on a motorcycle, can easily handle a few thousand watts. You just need a 200ish watt fan and a 100+ watt pump, not the sort of thing for home use but nothing extreme in the world of cooling products.


Been hearing it for 30 years.

Finally?


1.7%


What do you mean?



Does anyone know how yields are billed, e.g. by TSMC?

If I'm Nvidia, and I contract for X volume on Y process, and TSMC delivers it with Z yield... how does the +/- to Z work?

I'd assume it isn't completely Nvidia's to eat? More like there's an expected yield and then bonuses / penalties to TSMC for above/below that?


I would have said 1.3.


Those who try may get 1.7%

Those who don't may forever get 0%


Wayne Gretzky's post-retirement career in semiconductor engineering may not have been expected, but seems to be going well.


That's because he doesn't design for where the state of the art chips are, he designs for where they are going to be.


China doesn't have the money nor the economy to support such a low-yield effort long term.

China has ordered its local governments to halt public-private partnership projects identified as "problematic" and replaced a 10% budget spending allowance for these ventures with a vetting mechanism by Beijing as it tries to curb municipal debt risks. https://www.reuters.com/markets/asia/china-orders-local-gove...

China’s Economy Has Picked Up Traits Reminiscent Of The Great Depression. https://www.forbes.com/sites/miltonezrati/2024/01/22/chinas-...

Also, from an unconfirmed source: the central government has ordered high-debt-load places like Tianjin, Inner Mongolia, Heilongjiang, Chongqing, Guizhou, and a few others to stop any new construction in 2024, and only allow construction that provides water, electricity, or heat.


Every year, economists say China is collapsing, yet it keeps delivering; it seems immune to the organized FUD.


Collapse hasn't been the suggestion in fact. That's a straw premise to distract from the negative claims (which are supported by persistently bad economic data).

The US didn't collapse due to the great depression either.

What is being suggested, is that China is facing a very bad economic stretch as a result of their extraordinarily poor command economy choices.

Beijing's fabled (supposed) long-term thinking has shown itself to be a complete fraud. The Emperor, Xi, is wearing no clothes, as is the case with all dictators. An economy the scale and complexity of China's can't be run effectively with an authoritarian command approach. We have been seeing the proof of that increasingly since the great recession hit, wherein China switched to forever stimulus & debt fakery to prop up their flagging economy. In the span of a decade China became the most indebted nation in world history, while their growth sank below that of the US (which is a very mature, slower growth economy). And now the imploding demographics are setting in hard and fast, while the affluent world shuns China (leaving them with key partners like Russia, Iran, North Korea).

It's quite obvious that while China kicked the can down the road for a long time, the cost of doing so just keeps increasing in the form of negatives on their economy.


economy is weakening globally, it affects everyone, including both the US and the EU, so your argument falls flat

any positive comment about China gets downvoted to oblivion, further emphasizing the theory of an organized FUD against China


> any positive comment about China gets downvoted to oblivion, further emphasizing the theory of an organized FUD against China

Just based on this very comment thread you're wrong, so how are we to trust any of your other rebuttals?

Even well-sourced critiques of China are greytext so one may as well assume the opposite if they were taking your position.


>Beijing's fabled (supposed) long-term thinking has shown itself to be a complete fraud. ...<snip>

Been hearing all that and more for at least the past 25 years or so, and still China's doing fine, blew Japan out of the #2 GDP spot, and far as I can tell is winning the Cold War of our present era.

So, thanks for reaffirming what you tried to refute.


Are they doing fine if there will be 500,000,000 fewer Chinese by 2100?


> blew Japan out of the #2 GDP spot, and far as I can tell is winning the Cold War of our present era.

That's like the bare minimum when you have a 10x higher population. Also, at the current pace (of both population and GDP growth) China will never even come close to the US; the gap over the last 10 years is only getting wider: https://data.worldbank.org/indicator/NY.GDP.PCAP.CD?end=2022...


Not sure if you call this "delivering"

1.) The heavy market losses in 2024 come hot on the heels of a bruising run last year, when the CSI 300 index, comprising 300 major stocks listed in Shanghai and Shenzhen, fell more than 11%. By contrast, the United States’ benchmark S&P 500 index climbed 24% in 2023, while Europe’s grew almost 13%. Japan’s Nikkei 225 soared 28% last year and is still going strong, notching gains of nearly 10% so far this month. https://www.cnn.com/2024/01/22/business/china-stock-market-f...

2.) China suffers from deflation, while the rest of the world combats inflation. Not only does deflation signal a stagnating economy, it can lead to high unemployment, unaffordable debt repayment, and dismal outcomes for businesses. In the worst cases, deflation can lead an economy into a recession, or even a depression. https://www.wsj.com/world/china/deflation-worries-deepen-in-...

3.) Crushing debt. Going back further, China accounts for over half of the entire world’s total debt-to-GDP increases since 2008. https://www.geopoliticalmonitor.com/backgrounder-china-econo... https://www.bloomberg.com/news/newsletters/2024-01-06/bloomb...

4.) China’s youth unemployment rate hit consecutive record highs in recent months. From April to June, the jobless rate for 16- to 24-year-olds reached 20.4%, 20.8% and 21.3% respectively. https://www.cnn.com/2023/08/14/economy/china-economy-july-sl.... For reference, G7 countries is at 10%, US is at 8% https://data.oecd.org/unemp/youth-unemployment-rate.htm


These are just problems. Just like how the US has some really big glaring problems:

1. Incapable of building infrastructure: can't build the CA HSR, while China built an entire network across the country.

2. Can't raise the population out of poverty. You see homeless people and drug addicts everywhere.

3. Unemployment going up, Inflation is going up.

4. Wealth inequality is rising. Housing is becoming more and more unaffordable for most Americans.

I mean, this is "delivering" too. As if 4 random economic problems a country is facing are indicative of total collapse.


Problems are not a like-for-like exchange. It's a completely meaningless statement you're making. Having cancer and having a cold are both illnesses. The severity of the two are not alike.

The US' unemployment fell to 3.7 percent end of December. [1] Inflation fell to 3.4 percent. Please do your research before commenting.

[1] https://www.reuters.com/markets/us/us-job-growth-accelerates...


Please don't lie and mischaracterize the situation before commenting. Thank you.

Inflation going down and unemployment going down doesn't characterize the overall situation for the last year.

So what you're saying is that a month of the opposite trend indicates the US economy has no more problems? That China can't possibly ever have a month of upward ticks?

My statement is meaningless. That's the point of my statement. It's an example to show how meaningless the statement about China is.

I'm sick and tired of people who get emotional and patriotic. Why can't people be level-headed and just talk about facts rather than "defend" their stance or their own country.


> I'm sick and tired of people who get emotional and are patriotic. Why can't people be level headed and just talk about facts rather then "defend" their stance or their own country.

I specifically called out false claims you made out as fact. Your claims were not factual and I did not misconstrue them.

The US is at one of the lowest unemployment rates over its entire history. It is at the lowest level over the last 54 years. [1] That's fact. Your claim was false.

Inflation in America is not under any long- or short-term trend of increasing. That is a false claim you have made and continue to shelter behind. Inflation increased because of a global calamity the entire world suffered under. That was an acute crisis, not a trend.

China has an unemployment rate among its youth well over 20 percent. That is not a blip. That is institutional failure caused by decades of mismanaging a population under an authoritarian government.

[1] https://www.commerce.gov/news/blog/2023/02/news-unemployment...


Everybody said the USSR was collapsing, and it kept growing until the country collapsed. China's GDP numbers are notoriously opaque and difficult to vet. It's no different than saying Los Angeles is going to get flattened by an earthquake. It's inevitable, but good luck predicting it.


The fall of the USSR was more of a political collapse caused by elites thinking another system would be better, not economic collapse.


The USSR had obviously deep political dysfunction; it was a hollow gerontocracy in its latter decades. It had run out of ideas and was desperately looking to the West for inspiration.

There is a superpower that bears stark resemblance to the dying USSR, and it's not the PRC.


I've seen a few hundred too many "the Chinese economy is about to collapse" stories to believe any of them.


Maybe not collapse but a slowdown and significant decline in some sectors seems like a real risk. Which would be a big issue because China is still quite poor per capita and at this pace it's unlikely to ever catch up with the US:

https://data.worldbank.org/indicator/NY.GDP.PCAP.PP.CD?end=2...


GDP is a terrible metric; the usual example is France's GDP per capita being lower than that of every single US state, including Mississippi.


I'm mainly talking about the growth rates rather than the actual values so this is hardly relevant.


> China planning 1600-core chips that use an entire wafer – 'wafer-scale' designs

... and they will cool it by pouring water on it. /s


Pretty much. This is the way Cerebras does it: https://web.archive.org/web/20230812020202/https://www.youtu...


"China" is "planning" to do this...?

A better title might be: "Researchers in China studying 1600-core chip".


Ok, we've replaced China with some Chinese researchers in the title above.


Software determines whether good hardware succeeds or fails. China has yet to build successful software ecosystems on top of its hardware innovations.


> China has yet to build successful software ecosystems on top of its hardware innovations.

I'm not sure your belief is grounded in reality. I'd go as far as to assert that if China was able to research and develop these chips, both their design and production processes, they certainly are not leaving software as an afterthought.

Nevertheless, even entertaining your fantasy, once these chips are out and people like you and me are able to take these toys out to play with them, you'll soon get software that does something interesting and useful. Software is hardly the hard part, or even the costlier one.


> Software is hardly the hard part, or even costlier

Tell that to Nvidia. HW is a commodity with relatively low margins unless you can lock in your users in some other way.


> Tell that to Nvidia.

Nvidia is an excellent example. Without the hardware part, they would simply not have a product line. As they developed expertise in hardware design and production, they are now one of the most valuable companies in existence.

Some market segments even spend thousands of dollars in Nvidia's hardware without having any expectation or plan to use any of NVidia's drivers.

Nvidia proves hardware is the hard part.


Yet they basically have a monopoly in the datacenter GPU/AI market, mainly because of software.

> Some market segments even spend thousands of dollars in Nvidia's hardware

What segments are those?


I'm surprised by the cluelessness of most replies here, given that this is HN.

Hardware is only as useful as the software that can run on it. Radically new hardware requires rewrites of certain layers of that software. Ain't nobody got time for that, unless they can be assured that there will be a large number of companies and customers who need software to run on the new hardware.


Wait what, China has by significant margin the second largest software ecosystem in the world, and in VC terms is comparable to the US.


China runs on Intel chips like everyone else. Its AI researchers beg for Nvidia chips like everyone else. And its mobile companies use Android.

Chips that break away from those standards have little chance of success.



