I was always taught that ARM (and the M1), being a RISC architecture, isn't as "capable" as x86 in some way, whatever that means.
I am no longer sure that's still the case, since they seem to work just as well, if not better (energy efficiency). Of course, it's not exactly an apples-to-apples comparison, since Apple upgraded so many other things, but I just didn't see any mention of the limitations of being RISC in these articles.
Could someone enlighten an average Joe who knows nothing about hardware on this?
You are right that CISC processors (like x86) have more capabilities, i.e. more instructions. You as the programmer get to take advantage of the "extra", very specific instructions, so overall you write fewer instructions.
Fewer instructions sounds great, but with CISC you do not know how long those instructions will take to execute. RISC has only a handful of instructions, which all take one clock cycle (with pipelining). This makes the hardware simple and easy to optimize. The instructions you lose can simply be implemented in software. The chip area saved can be used for more registers and cache, for a huge speedup. Plus, nowadays shared libraries and compilers do a lot of work for us behind the scenes as well. Having tons of instructions on the chip is only a benefit for a narrow group of users today.
We've found that for hardware it's better to reduce clock cycles per instruction rather than the total number of instructions.
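To make that concrete, here's a minimal sketch in plain C (the function name is just for illustration) of "implementing the lost instruction in software": a byte copy, which x86 can express as the single string instruction rep movsb, and which a RISC compiler turns into a short load/store loop instead.

    #include <stddef.h>

    /* A byte copy written as ordinary software. On x86 this can also be
     * expressed as the single string instruction "rep movsb"; on a RISC ISA
     * the compiler just emits a short load/store loop. Either way, modern
     * hardware ends up executing simple micro-operations, so the "missing"
     * complex instruction isn't a capability gap. */
    void copy_bytes(unsigned char *dst, const unsigned char *src, size_t n) {
        for (size_t i = 0; i < n; i++) {
            dst[i] = src[i];   /* one load and one store per byte */
        }
    }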
> RISC has only a handful of instructions, that all take 1 clock cycle (with pipelining).
This isn't really the case with modern ARM. Fundamentally, modern ARM and x86 CPUs are designed very similarly once you get past the instruction decoder. Both 'compile' instructions down to micro-ops that are then executed, rather than executing the instruction set directly, so the distinctions between the instruction sets themselves don't matter all that much past that point.
The main advantages for ARM come from the decode stage and from larger architectural differences such as relaxed memory ordering requirements.
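To illustrate the memory-ordering point, here's a small sketch using standard C11 atomics (the producer/consumer names are mine, not from any article): the C source is the same on both architectures, but what the compiler must emit, and what the core is allowed to reorder, differs.

    #include <stdatomic.h>

    /* Message passing with acquire/release ordering. On x86 (a strongly
     * ordered, TSO-style model) these compile to plain loads and stores;
     * on AArch64 the compiler emits ordering instructions such as stlr/ldar,
     * and wherever ordering is NOT requested the core is free to reorder
     * memory accesses, which gives the out-of-order machinery more slack. */
    static int payload;              /* ordinary data                */
    static atomic_int ready;         /* flag that publishes the data */

    void producer(void) {
        payload = 42;                /* plain store                  */
        atomic_store_explicit(&ready, 1, memory_order_release);
    }

    int consumer(void) {
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;                        /* spin until published         */
        return payload;              /* guaranteed to observe 42     */
    }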
For the most part I think so. The main advantages x86 has are based on code size. Many common instructions are 1 or 2 bytes, so the executable size on x86 can often be smaller (and more instructions can fit in the instruction cache). I'm sure there are tons of other small differences that weigh in, but I'm not well versed enough to know of them.
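To put rough numbers on that, here's a tiny hand-checked comparison for a "return x + 1" routine; the bytes shown are typical -O2 output and of course compiler-dependent, and the array names are just for this sketch.

    #include <stdio.h>

    /* x86-64:  lea eax, [rdi+1] -> 8D 47 01     ret -> C3           (4 bytes)
     * AArch64: add w0, w0, #1   -> 00 04 00 11  ret -> C0 03 5F D6  (8 bytes)
     * Many common x86 instructions are shorter still, e.g. push rbp (55) and
     * pop rbp (5D) are one byte each; every AArch64 instruction is 4 bytes. */
    static const unsigned char x86_inc[]     = {0x8D, 0x47, 0x01, 0xC3};
    static const unsigned char aarch64_inc[] = {0x00, 0x04, 0x00, 0x11,
                                                0xC0, 0x03, 0x5F, 0xD6};

    int main(void) {
        printf("x86-64:  %zu bytes\n", sizeof x86_inc);
        printf("AArch64: %zu bytes\n", sizeof aarch64_inc);
        return 0;
    }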
A paper I read a few months ago compared instruction density across a few different ISAs. Thumb was 15% denser than x86, and AArch64 was around 15% less dense. Unfortunately, mode switching in Thumb impacts performance, which is why they dropped it.
RISC-V compressed instructions are interesting in that they offer the same compact code as Thumb, but without the switching penalty (internally, they are expanded in place to normal 32-bit instructions before execution).
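Here's a rough sketch of what that in-place expansion can look like for one compressed instruction, c.addi; the field layout follows the C-extension spec, but hint/reserved encodings are ignored, so treat it as illustrative only.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Expand the 16-bit C.ADDI rd, imm into its 32-bit ADDI rd, rd, imm
     * equivalent, the way a decoder can do in place with no mode switch.
     * CI format: [15:13] funct3, [12] imm[5], [11:7] rd, [6:2] imm[4:0], [1:0] op */
    uint32_t expand_c_addi(uint16_t c) {
        uint32_t rd  = (c >> 7) & 0x1F;
        int32_t  imm = ((c >> 2) & 0x1F) | ((c >> 7) & 0x20);
        if (imm & 0x20) imm -= 64;   /* sign-extend the 6-bit immediate */

        /* ADDI rd, rd, imm: imm[11:0] | rs1 | funct3=000 | rd | opcode=0010011 */
        return ((uint32_t)(imm & 0xFFF) << 20) | (rd << 15) | (rd << 7) | 0x13u;
    }

    int main(void) {
        /* c.addi x10, -3  ==  000 1 01010 11101 01 */
        uint16_t c = (1u << 12) | (10u << 7) | (0x1Du << 2) | 0x1u;
        /* expect 0xffd50513, i.e. addi x10, x10, -3 */
        printf("expanded: 0x%08" PRIx32 "\n", expand_c_addi(c));
        return 0;
    }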
If they added some dedicated instructions in place of some fused ones, that density could probably increase even more (I say probably because, in a lot of those cases, two 16-bit instructions take the same space as one dedicated 32-bit instruction).
It’ll be interesting to see what happens when they start designing high performance chips in the near future.
You have been listening to propaganda. ARM has always been better. Consider the fact that it still exists, despite Intel's predatory nature. It exists because Intel could not make a performant processor for the low-power market. They tried. But anything they built either took too much power or used the right amount of power but was too slow. So ARM survived in this niche. But it couldn't grow out of this niche because Intel dominates ruthlessly.
But the low-power market shows that, for a given power consumption, ARM is faster. Does that not apply everywhere? Yes, it does. And so Apple, which controls its own destiny, developed an ARM chip for laptops and desktops. It's faster and cooler and cheaper than Intel chips, because ARM has always been faster and cooler and cheaper than Intel chips.
AWS, which also controls its own destiny, has launched Graviton2. These are servers that are faster and cooler and cheaper than Intel servers, and the savings and performance are passed on to customers.
As long as Intel ruled by network effects (buy an Intel because everyone has one; build for Intel because it has the most software), their lack of value didn't matter.
There are now significant players who can ignore the network effects. The results are so stunning that many people simply refuse to believe the evidence.