Being an expert in adjacent domains is sometimes worse than being clueless. The ratio of actual to assumed expertise seems to get worse: navy captains vs. shipping boats, geologists vs. climate scientists, programmers vs. CPU design, and so on. You can very easily not understand the subtleties, comment on a thing, and then people listen to you.
You went a bit too far. I'd presume a lot of programmers do know CPU architecture well. It's not common, since most of them work on boring web platforms, but some still do. Also, most CPU architects would be decent programmers to begin with.
Programming has not changed all that much, and it was not so long ago that programmers routinely knew assembly and how many cycles (and bytes) each opcode took... Nowadays it might be regarded as an arcane art by most, of course.
> Programming has not changed all that much, and it was not so long ago that programmers routinely knew assembly and how many cycles (and bytes) each opcode took...
Most programmers on Apple platforms don't actually think about execution order -- because they don't have to -- but also because Apple is actively using Clang to discourage assembly and writing for specific CPU architectures. It makes Apple's job of releasing new silicon that much easier if they don't have to worry about breaking existing software custom written for a previous architecture.
And this still assumes a one-to-one relationship between the code you're writing and the computer it's running on or designed for. When you get to the cloud, or cloud functions, that breaks down even further. If using Heroku, for example, you don't even have to consider how to deploy your code and you can make it pretty far running a production service.
It's possible for closely related fields to still have very large differences. Consider drivers and cars: the more automation is introduced, the less we might need to know about what the automation is doing for us under the hood. Anti-lock Braking (ABS) might be a simple example where folks know about it because there's a light on the dash and instructions in driver's ed. But if we didn't have those indicators, how often would anyone know about it and other such features? Some technologies remain undocumented until discovered later by experimentation; the VW diesels come to mind. Specific chip designers likely know more than your average programmer, just as specific car manufacturers likely know more about their products than drivers do.
This is quite a blatant assumption in its own right (and very far from the truth). Programming itself has not changed, but of course modern hardware is not a von Neumann machine. Writing lock-free data structures is not that different from ordinary programming; it requires a lot more attention and (possibly) experience, but the basic premise is still the same.
Understanding memory topology/hierarchy and latency, concurrency, branch (mis)prediction, and cache coherency should be a minimum for anyone who comments on CPU architecture. I did mention assembly, and without some knowledge of the target architecture it's rather pointless to comment on that either.
I encourage most developers to at least understand that memory is not actually 'random access', which makes a dereference far from cheap - but accessing data placed together is next to free, as it is likely to hit L1.
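A quick way to see this for yourself is to sum the same values once through a contiguous array and once by chasing pointers through a shuffled linked list. This is a rough sketch, not a proper benchmark; the size and the timing method are only illustrative:

    /* Sketch: contiguous access vs. pointer chasing over the same values. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 22)

    struct node { struct node *next; long value; };

    int main(void) {
        long *arr = malloc(N * sizeof *arr);
        struct node *nodes = malloc(N * sizeof *nodes);
        long *order = malloc(N * sizeof *order);
        for (long i = 0; i < N; i++) { arr[i] = i; nodes[i].value = i; order[i] = i; }

        /* Sequential pass: the prefetcher and L1 make this nearly free per element. */
        clock_t t0 = clock();
        long sum1 = 0;
        for (long i = 0; i < N; i++) sum1 += arr[i];
        double seq = (double)(clock() - t0) / CLOCKS_PER_SEC;

        /* Shuffle the visit order, then link the nodes in that order, so each
         * 'next' dereference lands somewhere cold and likely misses the cache. */
        for (long i = N - 1; i > 0; i--) {
            long j = rand() % (i + 1);
            long tmp = order[i]; order[i] = order[j]; order[j] = tmp;
        }
        for (long i = 0; i + 1 < N; i++) nodes[order[i]].next = &nodes[order[i + 1]];
        nodes[order[N - 1]].next = NULL;

        t0 = clock();
        long sum2 = 0;
        for (struct node *p = &nodes[order[0]]; p; p = p->next) sum2 += p->value;
        double chase = (double)(clock() - t0) / CLOCKS_PER_SEC;

        printf("sequential %.3fs, pointer-chasing %.3fs (sums %ld / %ld)\n",
               seq, chase, sum1, sum2);
        return 0;
    }

The work done is identical in both loops; on typical hardware the pointer-chasing version is several times slower purely because most dereferences miss the cache.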
> discourage assembly and writing for specific CPU architecture
I found out that I could not reliably beat a standard compiler writing everyday assembly around the K6-2 years. Yet some inner loops can still be carefully hand-optimized. The point is that there are plenty of programmers who would be able to understand modern architecture, and to me a basic understanding is needed unless the job is just gluing code together.
In all of those examples, it's possible the person DOES have a good understanding of the adjacent domain. And in all the examples, it is possible they will miss some subtleties, but people will give their opinions a lot of weight.
Just as an example I see a lot: branch prediction. Some programmers don't know about it at all. Many do know about it, but think that it still works in some form like "assume the branch will go the same way it did last time". Which is how it worked in the 1990s. Then it evolved, and then it evolved two more times. Today there is something like a neural network that learns how the branches will go. (And careful, I'm a programmer, so I may be communicating some subtleties wrong there!)
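The classic way to feel the effect, whatever the predictor's internals, is to run the same branchy loop over random data before and after sorting it. A sketch only; the numbers depend on your CPU and compiler, and an optimizer may turn the branch into a conditional move or vectorize it, so inspect the generated code if the results look flat:

    /* Sketch: the same data-dependent branch over unsorted vs. sorted data. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24)

    static int cmp_int(const void *a, const void *b) {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    static double timed_sum(const int *data, long *out) {
        clock_t t0 = clock();
        long sum = 0;
        for (long i = 0; i < N; i++)
            if (data[i] >= 128)          /* roughly 50/50 and unpredictable on random bytes */
                sum += data[i];
        *out = sum;
        return (double)(clock() - t0) / CLOCKS_PER_SEC;
    }

    int main(void) {
        int *data = malloc(N * sizeof *data);
        for (long i = 0; i < N; i++) data[i] = rand() % 256;

        long s1, s2;
        double before = timed_sum(data, &s1);
        qsort(data, N, sizeof *data, cmp_int);   /* same data, now the branch is predictable */
        double after = timed_sum(data, &s2);

        printf("unsorted %.3fs, sorted %.3fs (sums %ld / %ld)\n", before, after, s1, s2);
        return 0;
    }

Sorting doesn't change the work done at all; it only makes the branch predictable.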
>Today there is something like a neural network that learns how the branches will go.
More like branch history and where the call comes from.
Oddly enough, the price of branch misprediction has become lower, as not the entire pipeline needs to be thrown away, but also due to hyper-threading taking up the slack.
On a flip note: with the 'recent' developments around Spectre, one would think branch prediction got into the limelight. Truth be told, though, not many would be able to write a constant-time 'fizz buzz' (you can try it on your own; bonus points for having constant-time int->string conversion).
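For the flavor of it, here is what a branchless version of the easy half might look like. Only a sketch: the divisibility flags feed a table index instead of an if/else chain, the int->string bonus is punted to printf (which is of course not constant time), and a compiler is free to reintroduce branches:

    /* Sketch: branchless selection for fizz buzz. */
    #include <stdio.h>

    int main(void) {
        /* Index 0 prints the number; printf ignores the extra argument for the others. */
        const char *fmt[4] = { "%d\n", "Fizz\n", "Buzz\n", "FizzBuzz\n" };
        for (int i = 1; i <= 15; i++) {
            int by3 = (i % 3 == 0);       /* 0 or 1, typically compiled as a flag set, not a jump */
            int by5 = (i % 5 == 0);
            int idx = by3 + 2 * by5;      /* 0 number, 1 Fizz, 2 Buzz, 3 FizzBuzz */
            printf(fmt[idx], i);
        }
        return 0;
    }

The selection itself carries no data-dependent branch; making the output path constant time is the genuinely hard part of the exercise.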
And the aging of their experience matters. I mentioned upthread that I was trained as a merchant marine officer. However that was three decades ago and while a lot of my training will still apply, industry practices move on and a lot of the stuff I learned is long since outdated. A lot of times I start to type a reply to something relevant and have to smack myself into remembering that things are probably done differently in 2021 :-)