
The article is no more about transistors or literal frame-pointer hardware than a discussion about the QWERTY keyboard layout is about the metallurgy and mechanics of typewriter arms.

It's about how early design choices, once reinforced by tooling and habit, shape the whole ecosystem's assumptions about what’s "normal" or "efficient."

The only logical refutation of the article would be to demonstrate that some other computational paradigm (dataflow, message-passing, continuation-based, logic, actor, whatever) can execute on commodity CPUs with the same efficiency as imperative C-style code.
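To make that concrete, here is a toy sketch of my own (not anything from the article), assuming Kotlin with the kotlinx-coroutines library: the same reduction written imperatively and as message passing over a channel. Both run on commodity hardware; the open question is whether the second style can ever be made as cheap as the first there.

    // Toy illustration (not from the article): one reduction, two paradigms.
    // Requires the kotlinx-coroutines-core library.
    import kotlinx.coroutines.channels.Channel
    import kotlinx.coroutines.launch
    import kotlinx.coroutines.runBlocking

    // Imperative, C-style: a tight loop over a mutable accumulator,
    // which is exactly what commodity CPUs are tuned to run fast.
    fun imperativeSum(xs: IntArray): Long {
        var acc = 0L
        for (x in xs) acc += x
        return acc
    }

    // Message-passing style: a producer coroutine sends values to a
    // reducer over a channel; conceptually, two communicating processes.
    fun messagePassingSum(xs: IntArray): Long = runBlocking {
        val inbox = Channel<Int>(capacity = 1024)
        launch {
            for (x in xs) inbox.send(x)
            inbox.close()
        }
        var acc = 0L
        for (x in inbox) acc += x
        acc
    }

    fun main() {
        val xs = IntArray(1_000_000) { it % 7 }
        println(imperativeSum(xs))      // same result...
        println(messagePassingSum(xs))  // ...typically at a very different cost on today's CPUs
    }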

Saying "modern CPUs don't have stack hardware" is a bit like saying "modern keyboards don't jam, so QWERTY isn't a problem."

True, but beside the point. The argument isn't that QWERTY (or C) is technically flawed, but that decades of co-evolution have made their conventions invisible, and that invisibility limits how we imagine alternatives.

The author's Stockholm Syndrome metaphor isn't claiming we can’t build other kinds of CPUs. Of course we can. It's pointing out how our collective sense of what computing should look like has been quietly standardized, much like how QWERTY standardized how we type.

Saying that "modern CPUs are mostly caches and vector units" is like saying modern keyboards don't have typebars that jam. Technically true, but it misses that we're still typing on layouts designed for those constraints.

Dismissing the critique as fighting 1980s battles is like saying nobody uses typewriters anymore, so QWERTY doesn’t matter.

Pointing out that C works fine on architectures without a frame pointer is like noting that Dvorak and Colemak exist. Yes, but it ignores how systemic inertia keeps alternatives niche.

The argument that radical CPU designs fail because hardware and software co-evolve fits the analogy: people have tried new keyboard layouts too, but they rarely succeed because everything from muscle memory to software assumes QWERTY.

The claim that CPU internals are now nothing like their ISA is just like saying keyboards use digital scanning instead of levers. True, but irrelevant to the surface conventions that still shape how we interact with them.

This dismissive pile-on validates the article's main metaphor of Stockholm Syndrome surprisingly directly!



The article is railing against how things are without offering even a glimpse of what could be improved in an alternate design (other than some nebulous talk about message passing, which I assume is meant to be something akin to FPGAs?).

The alternatives are niche because they're not compelling replacements. A Colemak keyboard isn't going to improve my productivity enough to matter.

A DSP improves performance enough to matter. A hardware video decoding circuit improves performance enough to matter. A GPU improves performance enough to matter. Thus, they exist and are mainstream.

When we've found better abstractions that actually make a compelling difference, we've implemented them. Modern programming languages like Kotlin have advanced enough abstractions that they could actually be applied to exotically architected CPUs. And yet such things are not used.
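As a rough example of the kind of abstraction I mean (my own sketch, nothing from the article): a declarative Kotlin pipeline whose stages could in principle map onto a dataflow-style machine, but which the compiler and JIT simply lower to ordinary sequential loops on a conventional CPU.

    // Sketch: a declarative pipeline in plain Kotlin. Each stage could, in
    // principle, become a node in a dataflow machine; in practice the
    // toolchain turns it into ordinary sequential, imperative code.
    fun main() {
        val result = (1..1_000_000).asSequence()
            .map { it.toLong() * it }        // stage 1: square
            .filter { it % 3 == 0L }         // stage 2: keep multiples of 3
            .fold(0L) { acc, x -> acc + x }  // stage 3: reduce
        println(result)
    }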

Big players with their own silicon, like Apple and Google, aren't sticking to the current general CPU architecture out of stubbornness. They look at the bottom line.

And the bottom line is that modern CPUs are efficient enough at their tasks that no alternative has disrupted them yet.


The article argues that our very sense of what counts as "better" or "compelling" is shaped by the assumptions baked into C-style hardware. Saying no alternative has disrupted them because modern CPUs are efficient enough just restates that bias. It assumes the current definition of efficiency is neutral, when that's exactly what’s being questioned.

The examples of GPUs, DSPs, and hardware video decoders don’t really contradict the article's point. Those are domain-specific accelerators that still live comfortably inside the same sequential, imperative model the author critiques. They expand the ecosystem, but don't escape its paradigm.
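As a rough sketch of that shape (my illustration, using plain JVM parallel streams as a stand-in for a GPU or DSP): the host program stays sequential and imperative, hands off one data-parallel region, and then carries on exactly as before.

    // Sketch: sequential, imperative host code with one offloaded
    // data-parallel region (parallel streams standing in for a GPU/DSP).
    import java.util.stream.IntStream

    fun main() {
        val xs = IntArray(1_000_000) { it }        // sequential setup

        val sumOfSquares = IntStream.range(0, xs.size)
            .parallel()                            // the "accelerated" region
            .mapToLong { i -> xs[i].toLong() * xs[i] }
            .sum()

        println(sumOfSquares)                      // sequential continuation
    }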

Your Colemak analogy cuts the other way: alternatives remain niche because the surrounding ecosystem of software, conventions, and training makes switching costly whether or not they are actually "better." That is the path dependence the article calls out.

As to the article's not proposing an alternative, it reads like a diagnosis of conceptual lock-in, not a design proposal. Its point is to highlight how tightly our notion of "good design" is bound to one lineage of thought. It is explicitly labeled as part two, so it may be laying groundwork for later design discussion. In any case, I think calling attention to invisible constraints is valuable in itself.


> The article argues that our very sense of what counts as "better" or "compelling" is shaped by the assumptions baked into C-style hardware.

And this is where I disagree. History is rife with disruptive technologies that blew the existing systems out of the water.

When we do find compelling efficiencies in new designs, we adopt them, as with DSPs and GPUs, which are NOT sequential or imperative: they are functional and internally parallel, and they offer massive real-world gains, hence their success in the marketplace.

We also experiment with new ways of computing, such as quantum computers.

There's no shortage of attempts to disrupt the CPU, but none caught on, because none were able to show compelling efficiencies over the status quo. Same as for Colemak and Dvorak: they are technically more efficient, but there's not enough of a real-world difference to justify the switching cost.

And that's fine. I don't want to be disruptively changing things at a fundamental level just for a few percent improvement in real-world efficiencies. And neither do the big boys, who are actively developing not only their own silicon, but also their own software to go with it.

The article itself reads a lot like post hoc ergo propter hoc, in that it only allows the path of technological progress to exist within the bounds of the C programming language (while also misattributing a number of things to C in the process), but completely discounts the possibility that the way CPUs are designed is in fact a very efficient way to do general-purpose computing.


Market success does not necessarily prove conceptual neutrality; it just shows which designs fit best within the existing ecosystem of compilers, toolchains, and developer expectations. That is the lock-in the article describes.

Calling the argument post hoc ergo propter hoc also misses the mark. The author is not saying CPUs look this way because of C in a simple cause-effect sense, but that C-style abstractions and hardware co-evolved in a feedback loop that reinforced each other’s assumptions about efficiency.

And I do not think anyone is advocating "disruption for disruption's sake." The point is that our definition of what counts as a worthwhile improvement is already conditioned by that same co-evolution, which makes truly different paradigms seem uneconomical long before they're fully explored.


We'll have to agree to disagree, then. Technologies such as the GPU provided such massive improvements that you either had to get on-board or be left behind. It was the same with assembly line vehicle production, and then robot vehicle production. Some technological enhancements are so significant that disruption is inevitable, despite current "lock-ins".

And that's fine. We're never going to reach 100% efficiency in anything, ever (or 90% for that matter). We're always going to go with what works now, and what requires the least amount of retooling - UNLESS it's such a radical efficiency change that we simply must go along. THOSE are the innovations people actually care about. The 10-20% efficiency improvements, not so much (and rightly so).


You restate that disruptive innovation happens when gains are large enough to overcome inertia, and that smaller conceptual shifts aren't worth pursuing. Your premise is pragmatic: if it mattered, the market would already have adopted it.

This still sidesteps the article's point that what we measure as "efficiency" is itself historically contingent. GPUs succeeded precisely because they exploited massive parallelism within an already compatible model, not because the ecosystem suddenly became open to new paradigms. Your example actually supports the article's argument about selective reinforcement.

There's nothing to agree to disagree about. You're arguing a point the article does not make.


I get the feeling that we're talking past each other now...



