
>> If 18a is not ready I think the best case scenario for Intel is a merger with AMD.

Aside from the x86 monopoly that would create, I don't think Intel has much of value to AMD at this point other than the fabs (which aren't delivering). IMHO if Intel is failing, let them fail and others will buy the pieces in bankruptcy. This would probably benefit several other companies that could use 22nm and up fab capacity and someone could even pick up the x86 and graphics businesses.

BTW I think at this point the graphics business is more valuable. Even though Intel is in 3rd place there are many players in the SoC world that can use a good GPU. You can build a SoC with Intel, ARM, or RISC-V but they all need a GPU.



Certainly feels like preempting news that Intel 18A is delayed.

Restoring Intel's foundry lead starting with 18A was central to Pat's vision and he essentially staked his job on it. 18A is supposed to enter production next year, but recent rumors are that it's broken.


The original "5 Nodes in 4 Years" roadmap released in mid-2021 had 18A entering production in 2H 2024, so it's already "delayed". The updated roadmap has it coming in Q3 2025, but I don't think anyone ever believed that. This after 20A was canceled, Intel 4 is only used for the compute tile in Meteor Lake, Intel 3 only ever made it into a couple of server chips, and Intel 7 was just 10nm renamed.


https://www.intel.com/content/www/us/en/newsroom/opinion/con...

I have next to zero knowledge of semiconductor fabrication, but “Continued Momentum” does sound like the kind of corporate PR-speak that means “people haven't heard from us in a while and there's not much to show”.

I also would never have realized the 20A process was canceled were it not for your comment since this press release has one of the most generous euphemisms I've ever heard for canceling a project:

“One of the benefits of our early success on Intel 18A is that it enables us to shift engineering resources from Intel 20A earlier than expected as we near completion of our five-nodes-in-four-years plan.”


The first iteration of Intel 10nm was simply broken -- you had Ice Lake mobile CPUs in 2019, yes, but desktop and server processors took another two years to be released. In 2012 Intel said they would ship 14nm in 2013 and 10nm in 2015. Not only did they fail to deliver 10nm Intel CPUs, they failed Nokia's server division too, nearly killing it off in 2018, three years after the initial target. No one in the industry forgot that; it's hardly a surprise they have such trouble getting customers now.

And despite this total failure they spent many tens of billions on stock buybacks (https://ycharts.com/companies/INTC/stock_buyback): no less than ten billion in 2014, and over forty billion across 2018-2021. That's an awful, awful lot of money to waste.


Indeed. Brian Krzanich destroyed Intel.


Most of the stock buybacks happened under Bob Swan though. Krzanich dug Intel's grave, but it was Swan who kicked the company into it by wasting forty billion. (No wonder he landed at a16z.)


Ice Lake wasn't the first iteration of 10nm - that was the disastrous Cannon Lake in 2018.


Yes, yes, yes, of course, the infamous CPU released just so Intel middle managers could get their bonuses. GPU disabled, CPU gimped, the whole thing barely worked at all. Let's call it the 0th iteration of 10nm; it was not real. There was about one laptop in China, the Lenovo IdeaPad 330-15ICN, and even that was a paper launch.


> Certainly feels like preempting news that Intel 18A is delayed.

I think at this point no one believes Intel can deliver. So news or not..


Intel GFX held back the industry 10 years. If people thought Windows Vista sucked it was because Intel "supported" it by releasing integrated GPUs which could almost handle Windows Vista but not quite.

The best they could do with the GFX business is a public execution. We've been hearing about terrible Intel GFX for 15 years, and how they are always just on the cusp of making one that is bad (not terrible). Most people who've been following hardware think "Intel GFX" is just an oxymoron. Wall Street might see some value in it, but the rest of us, no.


My understanding is that most of the complaints about Vista being unstable came from the nvidia driver being rather awful [1]. You were likely to either have a system that couldn't actually run Vista or have one that crashed all the time, unless you were lucky enough to have an ATI GPU.

[1] https://www.engadget.com/2008-03-27-nvidia-drivers-responsib...


The parent is talking about the GMA 900 from the i910 series chipsets.

It fell short of full WDDM compatibility in only a fairly minor (overall) way, but the performance was awful anyway, and not being able to run in full WDDM mode (i.e. Aero) didn't help either, partly because running in Aero was actually faster.


> If people thought Windows Vista sucked it was because Intel "supported" it by releasing integrated GPUs which could almost handle Windows Vista but not quite.

What does an OS need a GPU for?

My current laptop only has integrated Intel GPU. I'm not missing Nvidia, with its proprietary drivers, high power consumption, and corresponding extra heat and shorter battery life...


The GUI that >99% of users used to interface with the OS required a GPU to composite the different 2D buffers with fancy effects. IIRC, if you knew how to disable as much of it as possible, the performance without GPU acceleration was not great, but acceptable. It really sucked when you had an already slow system and the GPU pretended to support the required APIs, but the implementation didn't satisfy the implied performance expectations, e.g. reporting a feature as "hardware accelerated" while implementing it mostly on the CPU inside the GPU driver. Even the things the old Intel GPUs really did do in hardware were often a lot slower than on a "real" GPU of the time. Also, the CPU and iGPU constantly fought over the often very limited memory bandwidth.


Compositors are generally switching to being GPU accelerated, not to mention apps will do their own GPU-accelerated UIs just because the OS UI systems are all junk at the moment.


We are at the perfect moment to re-embrace software rasterizers because CPU manufacturers are starting to add HBM and v-cache.

An 8K 32bpp framebuffer is ... omg 126MB for a single copy. I was going to argue that a software rasterizer running on vcache would be doable, but not for 8k.

For 4k, with 32MB per display buffer, it could be possible but heavy compositing will require going out to main memory. 1440p would be even better at only 15MB per display buffer.

For 1440p at 144Hz and 2TB/s (vcache max), the best case is an overdraw of roughly 940 frames per frame.
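
Back-of-envelope, assuming 4 bytes per pixel and taking the 2TB/s figure at face value (just a sketch of the arithmetic, not a benchmark):

    // Rough framebuffer vs. bandwidth arithmetic, assuming 32bpp (4 bytes/pixel).
    #include <cstdio>

    int main() {
        constexpr double kBandwidth = 2e12;   // 2 TB/s, the claimed V-Cache peak
        constexpr double kRefresh   = 144.0;  // Hz

        struct Mode { const char* name; int w, h; };
        constexpr Mode modes[] = {
            {"8K",    7680, 4320},
            {"4K",    3840, 2160},
            {"1440p", 2560, 1440},
        };

        for (const Mode& m : modes) {
            double bytes   = 4.0 * m.w * m.h;                 // one framebuffer copy
            double redraws = kBandwidth / kRefresh / bytes;   // full-screen rewrites per frame
            std::printf("%-6s %6.1f MiB per buffer, ~%.0f redraws per 144Hz frame\n",
                        m.name, bytes / (1024.0 * 1024.0), redraws);
        }
        // Prints roughly: 8K ~127 MiB, 4K ~32 MiB, 1440p ~14 MiB, with ~940
        // full-screen redraws per frame at 1440p before main memory gets involved.
    }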


Does 4k matter?

I was doing a11y work for an application a few months back and got interested in the question of desktop screen sizes. I see all these ads for 4k and bigger monitors, but they don't show up here:

https://gs.statcounter.com/screen-resolution-stats/desktop/w...

And on the steam hardware survey I am seeing a little more than 5% with a big screen.

Myself, I am swimming in old monitors and TVs to the point where I am going to start putting Pepper's ghost machines in my windows. I think I want to buy a new TV, but then I get a free old TV. I pick up monitors that are sitting in the hallway where people are tripping over them and take them home. Hypothetically I want a state-of-the-art monitor with HDR and wide gamut and all that, but the way things are going I might never buy a TV or monitor again.


All the browsers on my machine report my resolution as 1080p despite using 4k. I assume this is because I run at 200% scaling (I believe this is relatively common among anyone using a 4k resolution)

If the above-linked website uses data reported by the browser, I wonder how this scenario might be taken into consideration (or even if such a thing is possible)


A CSS pixel is defined as 1/96th of an inch in the web world, so it is dependent on your DPI/scaling. There is window.devicePixelRatio, which JavaScript can use to get actual device pixels.


Remember the days when you would be in danger of your apartment being burglarized and thieves would take your TV or receiver or CD player etc.? Nowadays that's called junk removal and you pay for it! How times have changed...


> Does 4k matter?

The PC I'm typing this on has two 27in 4k screens. I'm sitting so that I look at them from about 75cm away (that's 2.5 feet in weird units).

I archive most of my video files in 720p, because I genuinely don't see that big of a difference between even 720p and 1080p. It is definitely visible, but usually, it does not add much to the experience, considering that most videos today are produced to be watchable on smartphones and tablets just as much as cinema screens or huge TVs. I only make an exception here for "cinematic" content that was intended for the very big screen. That does not necessarily mean movies, but also certain YouTube videos, like "Timelapse of the Future": https://youtube.com/watch?v=uD4izuDMUQA - This one hits differently for me in 4K vs. just 1080p. Having to watch this in just 720p would be tragic, because its cinematography relies on 4K's ability to resolve very fine lines.

So why would I make a point to have both my screens be 4K? Because where else do you look at fine lines a lot? You're looking at it right now: Text. For any occupation that requires reading a lot of text (like programming!), 4K absolutely makes a difference. Even if I don't decrease the font size to get more text on screen at once, just having the outlines of the glyphs be sharper reduces eye strain in my experience.


A unit based on the average size of the human foot is not "weird".


Correct. It is insane. Bonkers. Absurd. How the hell can you live with something that stupid?


No. It is in fact completely normal and has been repeated many times in human history. A unit based on an arbitrary fraction of the distance from the north pole to the equator is quite a bit more odd when you think about it.


Curious to hear more about the a11y work.


Run of the mill.

A complex desktop web form with several pages, lots of combo boxes, repeating fields, etc. I cleaned up the WCAG AA issues and even the S. The AAA requirement for click targets was out of scope, but it had me thinking that I wanted to make labels (say, on a dropdown menu bar) as big as I reasonably could. That in turn got me thinking about how much space I had to work with on different resolution screens, so I looked up those stats and tried to see what would fit within which constraints.


And by "are generally switching" you're really trying to say "generally switched a decade ago".


'cept for Linux.


GNOME 3 had a hardware accelerated compositor on release in 2011.


Ubuntu came with Compiz by default in 2007: https://arstechnica.com/information-technology/2007/11/ubunt...

“Ubuntu 7.10 is the first version of Ubuntu that ships with Compiz Fusion enabled by default on supported hardware. Compiz Fusion, which combines Compiz with certain components developed by the Beryl community, is a compositing window manager that adds a wide range of visual effects to Ubuntu's desktop environment. The default settings for Compiz enable basic effects—like window shadows and fading menus—that are subtle and unobtrusive. For more elaborate Compiz features, like wobbling windows, users can select the Extra option on the Visual Effects tab of the Appearance Preferences dialog.”


I remember Beryl, those were fun times.


Different desktop but KDE still has the wobbly windows as an option, I enabled them out of nostalgia recently.


Yeah, I worked on that but I didn't think that would count since it was a distro, not a desktop environment. In that case Novell shipped compiz in 2006 so even earlier.


macOS had one in 2000. Windows shortly thereafter.


What?

MacOS in 2000 was still old MacOS, with no compositing at all. The NeXT derived version of MacOS was still in beta, and I tried it back then, it was very rough. Even once OSX shipped in 2001, it was still software composited. Quartz Extreme implemented GPU compositing in 10.2, which shipped in 2002.

Windows finally got a composited desktop in Vista, released in 2007. It was GPU accelerated from day one.


There's a fancy terminal emulator written in Rust that uses GPU acceleration. I mean, it is emulating a needlepoint printer...


If I recall correctly, Vista had a hard dependency on DirectX 9a for Aero. The Intel GPUs embedded in mobile chipsets were almost, but not fully, DX 9a capable, but Intel convinced Microsoft to accept them as "compatible". That created lots of problems for everyone.


IIRC they also implemented some features by reporting them as available and mostly emulating them on the CPU to qualify.


Integrated Intel GPUs have come a long way since Windows Vista. They started to be usable around 2012 and became actually decent in the current decade.


> What does an OS need a GPU for?

https://en.m.wikipedia.org/wiki/Windows_Aero


The modern paradigm of "application blasts out a rectangle of pixels" and "the desktop manager composes those into another rectangle of pixels and blasts them out to the screen".

It actually separates the OS from the GPU. Before WDDM, your GFX device driver was the only software that could use GFX acceleration. After WDDM, the GPU is another "processor" in your computer that can read and write RAM; the application can use the GPU in user space any way it wants, then the compositor can do the same (in user space), and in the end all the OS does is manage communication with the GPU.

For that approach to work you need to have enough fill rate that you can redraw the screen several times per frame. Microsoft wanted to have enough that they could afford some visual bling, but Intel didn't give it to them.


> My current laptop only has integrated Intel GPU.

Which is far more powerful than the ones that caused problems almost two decades ago.


More than you think.

As people noted, most of your GUI is being rendered by it. Every video you watch is accelerated by it, and if it has some compute support, some applications are using it for faster math in the background (mostly image editors, but who knows).


For a smaller gripe: they also bought Project Offset, which looked super cool, to turn into a Larrabee tech demo. Then they killed Larrabee and Project Offset along with it.


> Intel GFX held back the industry 10 years. If people thought Windows Vista sucked it was because Intel "supported" it by releasing integrated GPUs which could almost handle Windows Vista but not quite.

Not sure about that. I had friends with discrete GPUs at the time and they told me that Vista was essentially a GPU stress program rather than an OS.

At the same time, Compiz/Beryl on Linux worked beautifully on Intel integrated GPUs, and was doing way cooler things than Vista was doing at the time (cube desktops? windows bursting into flames when closed?).

I'm a bit sad that Compiz/Beryl is not as popular anymore (with all the crazy things it could do).


I've been playing Minecraft fine with Intel GPUs on Linux for about 15 years. Works great. If Windows can't run with these GPUs, that's simply because Windows sucks.


I wonder how big a downside an x86 monopoly would actually be these days (an M4 MacBook being the best perf/watt way to run x86 Windows apps today as it is) and how that compares to the downsides of not allowing x86 to consolidate efforts against rising competition from ARM CPUs.

The problem with the "use the GPU in a SoC" proposition is that everyone who makes the rest of a SoC also already has a GPU for it, often better than what Intel can offer in terms of perf/die space or perf/watt. These SoC solutions tend to coalesce around tile-based designs, which keep memory bandwidth and power needs down compared to the traditional desktop IMR designs Intel has.


That’s actually a pretty good point, honestly.


I'd like to address the aside for completeness' sake.

An x86 monopoly in the late 80s was a thing, but not now.

Today, there are sufficient competitive chip architectures with cross-compatible operating systems and virtualization that x86 does not represent control of the computing market in a manner that should prevent such a merger: ARM licensees, including the special case of Apple Silicon, Snapdragon, NVIDIA SOCs, RISC-V...

Windows, MacOS and Linux all run competitively on multiple non-x86 architectures.


> An x86 monopoly in the late 80s was a thing, but not now.

Incorrect, we have an even greater lack of x86 vendors now than we did in the 80s. In the 80s you had Intel, and they licensed to AMD, Harris, NEC, TI, Chips & Technologies; in the 90s we had IBM, Cyrix, VIA, National Semi, NexGen, and for a hot minute Transmeta. Plus even more smaller vendors.

Today making mass market x86 chips we have: Intel, AMD, and a handful of small embedded vendors selling designs from the Pentium days.

I believe what you meant was that x86 is not a monopoly thanks to other ISAs, but x86 itself is even more of a monopoly than ever.


I believe in the 80s all those vendors were making the same Intel design in their own fabs. I don't think any of them did the design on their own. In the 90s some of them had their own designs.


Some were straight second sources, but they all had the license to do what NEC, AMD, and OKI did, which is alter the design and sell those variants. They all started doing that with the 8086. There were variants of the 8086, 8088, and 80186; I'm unaware of variants of the 80188 or 80286, although there were still multiple manufacturers (I had a Harris 286 at 20MHz myself). Then there were more custom variants of the 386 and 486. In the Pentium days Intel wouldn't license the Pentium design, but there were compatible competitors, as AMD also began 100% custom designs with the K5 and K6 lines that were only ISA compatible and pin compatible.


At what point do we call a tweak to an original design different enough to count it... K5 and K6 were clearly new designs. The others were mostly Intel with some changes. I'm going to count the rest as minor tweaks and not worth counting otherwise - but this is a case where you can argue where the line is, so others need to decide where they stand (if they care).


The NEC V20/30 series were significant advances over their Intel equivalent (basically all the 186 features plus more in an 8086/8 compatible package).

C&T had some super-386 chips that apparently barely made it to market (38605DX), and the Cyrix 5x86 (most of a 6x86) is substantially different from the AMD one (which is just a 486 clock-quadrupled)


I called the K5 and K6 new designs; I said they were only ISA and pin compatible, not the same design.


> An x86 monopoly in the late 80s was a thing, but not now.

I think you're off by 20 years on this. In the 80s and early 90s we had reasonable competition from 68k, PowerPC, and ARM on desktops, and tons of competition in the server space (MIPS, SPARC, POWER, Alpha, PA-RISC, edit: and VAX!). It wasn't till the early 2000s that both the desktop/server space coalesced around x86.


Thank you for saying this. It's clear that processors are going through something really interesting right now after an extended dwindling and choke point onto x86. This x86 dominance has lasted entire careers, but from a longer perspective we're simply seeing another cycle in ecosystem diversity, specialized functions spinning out of and back into unified packages, and a continued downward push from commoditization forces that are affecting the entire product chain from fab to ISA licensing. We're not quite at the wild-west of the late 80s and 90s, but something's in the air.

It seems almost like the forces that are pushing against these long-term trends are focused more on trying to figure out how to saturate existing compute on the high end, and on using that to justify moves away from diversity and vertically integrated cost/price reduction. But there are, long term, not as many users who need to host this technology as there are users of things like phones and computers who need the benefits the long-term trends provide.

Intel has acted somewhat as a rock in a river, and the rest of the world is finding ways around them after having been dammed up for a bit.


I remember when I was a senior in undergrad (1993), the profs were quite excited about the price/performance of 486 computers, which thoroughly trashed the SPARC-based Sun workstations we'd transitioned to because Motorola rug-pulled the 68k. Sure, we were impressed by the generation of RISC machines that came out around that time, like SPARC, PA-RISC, PowerPC and such, but in retrospect it was not that those RISC machines were fast; it was that 68k was dying while x86 was keeping up.


> It wasn't till the early 2000s that both the desktop/server space coalesced around x86.

A lot of companies killed off their in-house architectures and hopped on the Itanium bandwagon. The main two exceptions were Sun and IBM.


The bandwagon was actually an Ice Cream truck run by the old lady from the Sponge Bob movie.

Intel had just wiped the floor with x86 servers; all the old guard Unix vendors with their own chips were hurting. Then Intel made the rounds with a glorious plan for how they were going to own the server landscape for a decade or more. So in various states of defeat and grief, much of the industry followed them. Planned or not, the resulting rug pull really screwed them over. The organs that operated those lines of business were fully removed. It worked too well; I am going to say it was by accident.

Intel should have broken up its internal x86 hegemony a long time ago, which they have been trying to do since the day it was invented. Like the 6502, it was just too successful for its own good. Only x86 also built up the Vatican around itself.


X86 is more than just the ISA. What’s at stake is the relatively open PC architecture and hardware ecosystem. It was a fluke of history that made it happen, and it would be sad to lose it.


PCI-e is the culmination of that ecosystem, and like PCI before it, is available on all architectures to anyone who pays PCI-SIG.


PCIe is great, yes.

Sadly with the rise of laptops with soldered-in-everything, and the popularity of android/iphone/tablet devices, I share some of layer8's worries about the future of the relatively open PC architecture and hardware ecosystem.


On the one hand I do get the concern, on the other there's never been a better time to be a hardware hacker. Cheap microcontrollers abound, Raspberry Pis, cheap FPGAs, one can even make their own ASIC. So I just can't get that worked up over PC architectures getting closed.


Hacking on that level is very different from building and upgrading PCs, being able to mix and match components from a wide range of different manufacturers. You won’t or can’t build a serious NAS, Proxmox homelab, gaming PC, workstation, or GPU/compute farm from Raspberry Pis or FPGAs.

We are really lucky that such a diverse and interoperable hardware platform like the PC exists. We should not discount it, and instead appreciate how important it is, and how unlikely it is that such a varied and high-performance platform would emerge again, should the PC platform die.


Today, sure. If you want to custom build a "serious" system then x86 is likely your best bet. But this isn't about today; you can have that system right now if you want, it's still there. This is about the future.

All the use cases, except gaming PC, have "less serious" solutions in Linux/ARM and Linux/RISCV today, where I would argue there is more interoperability and diversity. Those solutions get better and closer to "serious" x86 solutions every day.

Will they be roughly equivalent in price/performance in 5 years... only time will tell, but I suspect the x86 PC is the old way and it's on its way out.


You can't really build a PC from parts with anything other than x86. The only other platform you can really build from parts is Arm, with the high-end Ampere server chips. Most other platforms are pretty highly integrated; you can't just swap parts or work on them.


What about the POWER9-based Talos II systems? Extraordinary niche, I know, but aren't they PC-ish?


Why not? Ram is ram, storage is storage.


You can't just buy an ARM or POWER motherboard from one place, a CPU from another place, some RAM sticks from another place, a power supply, a heatsink/fan, some kind of hard drive (probably NVMe these days), a bunch of cables, and put them all together in your basement/living room and have a working system. With x86, this is pretty normal still. With other architectures, you're going to get a complete, all-in-one system that either 1) has no expandability whatsoever, at least by normal users, or 2) costs more than a house in NYC and requires technicians from the vendor to fly to your location and stay in a hotel for a day or two to do service work on your system for you, because you're not allowed to touch it.


But what prevents it from working? I've been building PCs from parts since I was a child.


I was only just today looking for low-power x86 machines to run FreePBX, which does not yet have an ARM64 port. Whilst the consumer computing space is now perfectly served by ARM and will soon be joined by RISC-V, if a widely-used piece of free and open source server software is still x86-only, you can bet that there are thousands of bespoke business solutions that are locked to the ISA. A monopoly would hasten migration away from these programs, but would nonetheless be a lucrative situation for Intel-AMD in the meantime.


The fact that C++ development has been effectively hijacked by the "no ABI breakage, ever"/backwards compatibility at all costs crowd certainly speaks to this.

https://herecomesthemoon.net/2024/11/two-factions-of-cpp/

There are a lot of pre-compiled binaries floating about that are depended on by lots of enterprise software whose source code is long gone, and these are effectively locked to x86_64 chips until the cost of interoperability becomes greater than reverse engineering their non-trivial functionality.


The C++ language spec doesn't specify and doesn't care about ABI (infamously so; it's kept the language from being used in many places, and where people ignored ABI compat initially but absolutely needed it in the future, as with BeOS's Application Kit and Mac kexts, it's much harder to maintain than it should be).

"two factions" is only discussing source compatibility.


They had ABI breakage when C++11 support was implemented in GCC 5 and that was extremely painful. Honestly, I still wish that they had avoided it.


You can still use the old ABI with -D_GLIBCXX_USE_CXX11_ABI=0
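
The macro is defined by libstdc++ itself, so if you're unsure which ABI a translation unit ends up on, you can check at compile time (a small sketch; GCC/libstdc++ only):

    // Build once normally and once with -D_GLIBCXX_USE_CXX11_ABI=0, then compare.
    #include <string>   // any libstdc++ header pulls in the ABI config macro
    #include <cstdio>

    int main() {
    #if _GLIBCXX_USE_CXX11_ABI
        std::puts("new (C++11) std::string / std::list ABI");
    #else
        std::puts("old (pre-GCC-5, copy-on-write string) ABI");
    #endif
        // Mixing the two across a library boundary typically shows up as
        // unresolved symbols with __cxx11 in the mangled names.
        std::printf("_GLIBCXX_USE_CXX11_ABI = %d\n", _GLIBCXX_USE_CXX11_ABI);
    }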


Microsoft (and other parties) already demonstrated quite effective x86 emulation, opening the migration path away from this anachronistic ISA.


Surely there must be an emulator you could use?


Indeed, I could use QEMU to run FreePBX on ARM64. However, the performance penalty would be pretty severe, and there isn't anything inherent to FreePBX that should prevent it from running natively on ARM64. It simply appears that nobody has yet spent the time to iron out any problems and make an official build for the architecture, but unfortunately I think there are still loads of other pieces of software in a similar situation.


I believe the "x86 monopoly" was meant to refere to how only Intel and AMD are legally allowed to make x86 chips due to patents. X86 is currently a duopoly, and if Intel and AMD were to merge, that would become a monopoly.


This is how I interpreted it as well. The others seem to be arguing due to a misunderstanding of what was said/meant.


Didn't AMD start making x86 chips in 1982?


That seems correct from some quick Wikipedia reading, but I don't understand what it has to do with anything?


The existence of an imaginary x86 monopoly in the 80s?


Oh, but xbar's interpretation of the phrase "x86 monopoly" is clearly the x86 architecture having a monopoly in the instruction set market. Under that interpretation, I don't really think it's relevant how many companies made x86 chips. I don't think xbar is necessarily wrong, I just think they're interpreting the words to mean something they weren't intended to, so they're not making an effective argument.


Did x86 have a monopoly in the 80s to begin with? If there is any period when that was true it would be the 2000s or early 2010s.

> intended to, so they're not making an effective argument

To be fair I'm really struggling to somehow connect the "x86 monopoly in the late 80s" with the remainder of their comment (which certainly makes sense).


x86 didn't have a monopoly, but IBM PC clones were clearly what everyone was talking about, and there the monopoly existed. There were lots of different also-ran processors, some with good market share in some niche, but overall x86 was clearly on the volume winners track by 1985.


> but overall x86 was clearly on the volume winners track by 1985.

By that standard, if we exclude mobile, x86 has a much stronger monopoly these days than in 1985, unless we also exclude low-end PCs like the Apple II and Commodore 64 from the 1985 numbers.

In 1990 x86 had ~80%, Apple ~7%, Amiga ~4% (with the remainder going to low-end or niche PCs), so again not that different from today.


This is all very true and why I think a merger between AMD and Intel is even possible. Nvidia and Intel is also a possible merger, but I actually think there is more regulatory concern with NVIDIA and how big and dominant they are becoming.


Intel and Samsung could be interesting, especially if it would get Samsung to open up more. Samsung would get better GPUs and x86, Intel gets access to the phone market and then you end up with things like x86 Samsung tablets that can run both Windows or Android.

Could also be Intel and Micron. Then you end up with full stack devices with Intel CPUs and Micron RAM and storage, and the companies have partnered in the past.


Samsung has its own leading edge fabrication plants. Merging the two would drop the number of leading edge foundries from 3 to 2.


Isn't Intel's main problem that they've ceased to be a leading edge foundry?

Maybe they should follow AMD's lead and spin off the foundry business.


What part of a Samsung merger do you think would help them enter the phone market? My layman's understanding of history is that Intel tried and failed several times to build x86 chips for phones and they failed for power consumption reasons, not for lack of access to a phone maker willing to try their chips or anything like that.


They failed primarily for pricing reasons. They could make a low power CPU competitive with ARM (especially back then when Intel had the state of the art process), but then they wanted to charge a premium for it being x86 and the OEMs turned up their nose at that.

Samsung still has a fairly competitive process and could make x86 CPUs to put in their own tablets and laptops without having the OEM and Intel get into a fight about margins if they're the same company. And with the largest maker of Android devices putting x86 CPUs into them, you get an ecosystem built around it that you wouldn't when nobody is using them to begin with because Intel refuses to price competitively with ARM.


> An x86 monopoly in the late 80s was a thing, but not now

And then in the 2000s AMD64 pretty much destroyed all competing architectures, and in the 2010s Intel itself was effectively almost a monopoly (outside of mobile), with AMD on the verge of bankruptcy.


Itanium’s hype killed the competing architectures. AMD64 then took over since it was cost effective and fast.


> x86 monopoly

Wintel was a duopoly which had some power: Intel x86 has less dominance now partly because Windows has less dominance.

There are some wonderful papers on how game theory and monopoly play out between Windows and Intel; and there's a great paper with an analysis of why AMD struggled against the economic forces and why Microsoft preferred to team up with a dominant CPU manufacturer.


Ooh got links?


I could see Broadcom picking up x86.


This is great "write a horror story in 7 words" content.


Ok, I'd like to pitch a Treehouse of Horror episode.

Part 1: combine the branch predictor with the instruction trace cache to be able to detect workloads, and have specific licenses for, say, RenderMan, Oracle or CFD software.

Part 2: add a mesh network directly to the CPU, and require time-based signing keys to operate. Maybe every chip just has Starlink included.

Part 3: in a BMW rent-your-seats move, the base CPU is just barely able to boot the OS, and specific features can be unlocked with signed payloads, using Shamir secrets so that Broadcom AND the cloud provider are both required to sign the feature request. One can rent AVX512, more last level cache, ECC, overclocking, underclocking.

The nice part about including radios directly in the CPUs is that updates can be applied without network connectivity and you can geofence your feature keys.

This last part we can petition the government to require as a condition of being able to produce EAR-regulated CPUs globally.

I think I'll just sell these patents to Myhrvold.


I want to emphasize this is not a defense of this behavior.

But didn't IBM do things like this for years with mainframes?


sir there are children reading this site.


I'm not sure I've ever laughed this much at a HN comment chain


Yeah, what both companies would need to be competitive in the GPU sector is a CUDA killer. That's perhaps the one benefit of merging: Antel can more easily standardize something.


You don't get a CUDA killer without the software infrastructure.

Intel finally seem to have got their act together a bit with oneAPI, but they've languished for years in this area.


They weren’t interested in creating an open solution. Both Intel and AMD have been somewhat short-sighted and looked to recreate their own CUDA, and their mutual mistrust has prevented them from arriving at a solution that works for both of them.


Disclaimer: I work on this stuff for Intel

At least for Intel, that is just not true. Intel's DPC++ is as open as it gets. It implements a Khronos standard (SYCL), most of the development is happening in public on GitHub, it's permissively licensed, it has a viable backend infrastructure (with implementations for both CUDA and HIP). There's also now a UXL foundation with the goal of creating an "open standard accelerator software ecosystem".
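
For anyone curious what that looks like in practice, here's roughly a minimal SYCL 2020 vector add (a sketch from memory; assumes a working DPC++ or other SYCL compiler, so treat the details as approximate):

    #include <sycl/sycl.hpp>
    #include <vector>
    #include <iostream>

    int main() {
        constexpr size_t n = 1024;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

        sycl::queue q;  // picks a default device (a GPU if one is available)
        {
            sycl::buffer<float> A(a.data(), sycl::range<1>(n));
            sycl::buffer<float> B(b.data(), sycl::range<1>(n));
            sycl::buffer<float> C(c.data(), sycl::range<1>(n));

            q.submit([&](sycl::handler& h) {
                sycl::accessor ra(A, h, sycl::read_only);
                sycl::accessor rb(B, h, sycl::read_only);
                sycl::accessor wc(C, h, sycl::write_only, sycl::no_init);
                h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                    wc[i] = ra[i] + rb[i];
                });
            });
        }   // buffers are destroyed here, which writes the results back to the host vectors

        std::cout << "c[0] = " << c[0] << "\n";  // expect 3
    }

The same source can be compiled for Intel GPUs, CUDA, or HIP, depending on which backends the compiler was built with.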


This is all great, but how can we trust this will be supported next year? After Xeon Phi, Omnipath, and a host of other killed projects, Intel is approaching Google levels of mean time to deprecation.


Neat. Now release 48gb GPUs to the hobbyist devs and we’ll use intel for LLMs!


Apple is your savior if you are looking at it as a CPU/GPU/NPU package for consumer/hobbyists.

I decided that I have to start looking at Apple's AI docs


The Intel A770 is currently $230 and 48GB of GDDR6 is only like a hundred bucks, so what people really want is to combine these things and pay $350 for that GPU with 48GB of memory. Heck, even double that price would have people lining up.

Apple will sell you a machine with 48GB of memory for thousands of dollars but plenty of people can't afford that, and even then the GPU is soldered so you can't just put four of them in one machine to get more performance and memory. The top end 40-core M4 GPUs only have performance comparable to a single A770, which is itself not even that fast of a discrete GPU.


Actual links to the GitHub would be much appreciated, as well as a half-page tutorial on how to get this up and running on a simple Linux+Intel setup.



What’s happening with Intel OpenVINO? That seemed like their CUDA-ish effort.


OpenCL was born as a CUDA-alike that could be applied to GPUs from AMD and NVIDIA, and to general purpose CPUs. NVIDIA briefly embraced it (in order to woo Apple?) and then just about abandoned it to focus more on CUDA. NVIDIA abandoning OpenCL meant that it just didn't thrive. Intel and AMD both embraced OpenCL. Though admittedly I don't know the more recent history of OpenCL.


This meme comes up from time to time but I'm not sure what the real evidence for it is or whether the people repeating it have that much experience actually trying to make compute work on AMD cards. Every time I've seen anyone try the problem isn't that the card lacks a library, but rather that calling the function that does what is needed causes a kernel panic. Very different issues - if CUDA allegedly "ran" on AMD cards that still wouldn't save them because the bugs would be too problematic.


> Every time I've seen anyone try the problem isn't that the card lacks a library, but rather that calling the function that does what is needed causes a kernel panic.

Do you have experience with SYCL? My experience with OpenCL was that it's really a PITA to work with. The thing CUDA makes nice is the direct and minimal exercise needed to start running GPGPU kernels: write the code, compile with nvcc, CUDA'd.

OpenCL had just a weird dance to perform to get a kernel running. Find the OpenCL device using a magic filesystem token. Ask the device politely if it wants to OpenCL. Send over the kernel string blob to compile. Run the kernel. A ton of ceremony, and then you couldn't be guaranteed it'd work because the likes of AMD, Intel, or Nvidia were all spotty on how well they'd support it.
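
For anyone who hasn't seen the dance, the host side looks roughly like this (a from-memory sketch of the C API with all error checking omitted, so the details may be off):

    #include <CL/cl.h>
    #include <stdio.h>

    static const char *src =
        "__kernel void add(__global const float *a, __global const float *b,\n"
        "                  __global float *c) {\n"
        "    size_t i = get_global_id(0);\n"
        "    c[i] = a[i] + b[i];\n"
        "}\n";

    int main(void) {
        // Step 1: find a platform, then ask it politely for a GPU device.
        cl_platform_id plat; cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

        // Step 2: context and command queue.
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);

        // Step 3: ship the kernel over as a source string and compile it at runtime.
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "add", NULL);

        // Step 4: buffers, kernel arguments, launch, read back.
        float a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, c[4];
        cl_mem A = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof a, a, NULL);
        cl_mem B = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof b, b, NULL);
        cl_mem C = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);
        clSetKernelArg(k, 0, sizeof A, &A);
        clSetKernelArg(k, 1, sizeof B, &B);
        clSetKernelArg(k, 2, sizeof C, &C);
        size_t global = 4;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, C, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);
        printf("c[0] = %f\n", c[0]);  // 6.0
        return 0;
    }

Compare that with CUDA, where the kernel lives in the same file and nvcc does the compilation up front.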

SYCL seems promising but the ecosystem is a little intimidating. It does not seem (and I could be wrong here) that there is a de facto SYCL compiler. The goals of SYCL compilers are also fairly diverse.


> Do you have experience with SYCL?

No, I bought a Nvidia card and just use CUDA.

> OpenCL had just a weird dance to perform to get a kernel running...

Yeah but that entire list, if you step back and think big picture, probably isn't the problem. Programmers have a predictable response to that sort of silliness. Build a library over it & abstract it away. The sheer number of frameworks out there is awe-inspiring.

I gave up on OpenCL on AMD cards. It wasn't the long complex process that got me, it was the unavoidable crashes along the way. I suspect that is a more significant issue than I realised at the time (when I assumed it was just me) because it goes a long way to explain AMD's pariah-like status in the machine learning world. The situation is more one-sided than can be explained by just a well-optimised library. I've personally seen more success implementing machine learning frameworks on AMD CPUs than on AMD's GPUs, and that is a remarkable thing. Although I assume in 2024 the state of the game has changed a lot from when I was investigating the situation actively.

I don't think CUDA is the problem here; math libraries are commodity software that give a relatively marginal edge. The lack of CUDA is probably a symptom of deeper hardware problems once people stray off an explicitly graphical workflow. If the hardware worked to spec, I expect someone would just build a non-optimised CUDA clone and we'd all move on. But AMD did build a CUDA clone and it didn't work, for me at least - and the buzz suggests something is still going wrong for AMD's GPGPU efforts.


> Programmers have a predictable response to that sort of silliness. Build a library over it & abstract it away

Impossible. GPGPU runtimes are too close to hardware, and the hardware is proprietary with many trade secrets. You need support from GPU vendors.

BTW, if you want reliable cross-vendor GPU compute, just use Direct3D 11 compute shaders. Modern videogames use a lot of compute, to the point that UE5 even renders triangle meshes with compute shaders. AMD hardware is totally fine; it's the software ecosystem that's the problem.


There are already packages that let people run CUDA programs unmodified on other GPUs: see https://news.ycombinator.com/item?id=40970560

For whatever reason, people just delete these tools from their minds, then claim Nvidia still has a monopoly on CUDA.


And which of these have the level of support that would let a company put a multi-million dollar project on top of them?


We have trillions of dollars riding on one-person open-source projects. This is not the barrier for "serious businesses" that it used to be.


Resilience is not as valued as it should be... Average bus factor is how small these days? :/


What are you talking about?


Those packages only really perform with low-precision work. For scientific computing, using anything but CUDA is a painful workflow. DOE has been deploying AMD and Intel alternatives in their leadership class machines and it's been a pretty bad speedbump.


('DOE' = US Department of Energy)


There's already a panoply of CUDA alternatives, and even several CUDA-to-non-Nvidia-GPU alternatives (which aren't supported by the hardware vendors and are in some sense riskier). To my knowledge (this isn't really my space), many of the higher-level frameworks already support these CUDA alternatives.

And yet still the popcorn gallery says "there's no [realistic] alternative to CUDA." Methinks the real issue is that CUDA is the best software solution for Nvidia GPUs, and the alternative hardware vendors aren't seen as viable competitors for hardware reasons, and people attribute the failure to software failures.


> There's already a panoply of CUDA alternatives

Is there?

10 years ago, I burned about 6 months of project time slogging through AMD / OpenCL bugs before realizing that I was being an absolute idiot and that the green tax was far cheaper than the time I was wasting. If you asked AMD, they would tell you that OpenCL was ready for new applications and support was right around the corner for old applications. This was incorrect on both counts. Disastrously so, if you trusted them. I learned not to trust them. Over the years, they kept making the same false promises and failing to deliver, year after year, generation after generation of grad students and HPC experts, filling the industry with once-burned-twice-shy received wisdom.

When NVDA pumped and AMD didn't, presumably AMD could no longer deny the inadequacy of their offerings and launched an effort to fix their shit. Eventually I am sure it will bear fruit. But is their shit actually fixed? Keeping in mind that they have proven time and time and time and time again that they cannot be trusted to answer this question themselves?

80% margins won't last forever, but the trust deficit that needs to be crossed first shouldn't be understated.


This is absolutely it. You pay the premium not to have to deal with the BS.


> alternative hardware vendors aren't seen as viable competitors for hardware reasons, and people attribute the failure to software failures.

It certainly seems like there's a "nobody ever got fired for buying nvidia" dynamic going on. We've seen this mentality repeatedly in other areas of the industry: that's why the phrase is a snowclone.

Eventually, someone is going to use non-nvidia GPU accelerators and get a big enough cost or performance win that industry attitudes will change.


> There's already a panoply of CUDA alternatives, and even several CUDA-to-non-Nvidia-GPU alternatives (which aren't supported by the hardware vendors and are in some sense riskier). To my knowledge (this isn't really my space), many of the higher-level frameworks already support these CUDA alternatives.

On paper, yes. But how many of them actually work? Every couple of years AMD puts out a press release saying they're getting serious this time and will fully support their thing, and then a couple of people try it and it doesn't work (or maybe the basic hello world test works, but anything else is too buggy), and they give up.


Why doesn’t NVIDIA buy Intel? They have the cash and they have the pairing (M chips being NVIDIA's and Intel's biggest competitors now). It would be an AMD/ATI move, and maybe NVIDIA could do its own M CPU competitor with…whatever Intel can help with.


They don’t need it; they have Grace.


Why would you want this kind of increased monopolization? That is, CPU companies also owning the GPU market?


Is it a lot more competitive for Nvidia to just keep winning? I feel like you want two roughly good choices for GPU compute, and AMD needs a shot in the arm for that somewhere.


It is absolutely more competitive when Nvidia is a separate company from Intel, so they can't pull shit like "our GPUs only work with our CPUs" the way Intel is now doing with their WiFi chips.


WGSL seems like a nice standard everyone could get behind




