I remember RISCs back in the late '80s/early '90s. CISCs bullied them away and we've been stuck in Intel's quagmire ever since. Anytime there's an attack on the status quo, the established players feign concern, beat back the attack, then return to the way things were (remember Negroponte's $100 laptop and the netbook response?).
It wasn't that CISC won or that RISC lost; it's that the architectures got so blurry you couldn't tell one from the other. There's so much microcode in a CPU now that the instruction set is just the icing on the cake. Internally there's a surprising amount of commonality between PowerPC, ARM, and x86-type chips.
Plus PowerPC started to adopt CISC-like instructions, x86-64 started to adopt RISC-like features such as a large set of general-purpose registers, and here we are, where nobody cares about the distinction.
Don't forget that while Intel won in certain markets, like notebooks, desktops and servers, it's absolutely, utterly irrelevant in other places that ship far, far more CPUs. A typical car has at least fifty CPUs of various types, sometimes as many as a hundred, many of them PowerPC for power and legacy reasons. Your phone is probably ARM. Remote controls. Routers. Switches. Refrigerators. Thermostats. Televisions and displays. Hard drives. Keyboards and mice. Basically anything that needs some kind of compute capability probably has a non-Intel processor.
If there's a quagmire we're stuck in, it's that we're surrounded by thousands of devices that are likely full of vulnerabilities that can never, and will never, be fixed.
Actually, most real RISC CPUs have no microcode, and when they do it's really just the same instruction set running out of an exception handler, not hardwired stuff on some other lower-level private ISA.
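To make that concrete, here's a toy sketch of the pattern (mine, not from any real chip; the "hardwired" ops, the register names, and the mul handler are all invented for illustration): a core that implements only a handful of base instructions and services anything else by trapping to a handler written in those same base instructions, rather than dropping to a hidden lower-level ISA.

    # Toy model: the "hardware" implements add/sub only; 'mul' traps to a
    # software handler that is itself composed of base-ISA operations.
    HARDWIRED = {"add", "sub"}

    def emulate_mul(regs, rd, rs1, rs2):
        # Trap handler: same instruction set, just running out of an
        # exception handler. 'mul' becomes a loop of ordinary adds.
        acc = 0
        for _ in range(regs[rs2]):
            acc = acc + regs[rs1]  # each step is a plain 'add'
        regs[rd] = acc

    TRAP_HANDLERS = {"mul": emulate_mul}

    def step(regs, op, rd, rs1, rs2):
        if op == "add":
            regs[rd] = regs[rs1] + regs[rs2]
        elif op == "sub":
            regs[rd] = regs[rs1] - regs[rs2]
        elif op in TRAP_HANDLERS:
            TRAP_HANDLERS[op](regs, rd, rs1, rs2)  # software, not microcode
        else:
            raise RuntimeError("illegal instruction: " + op)

    regs = {"x1": 6, "x2": 7, "x3": 0}
    step(regs, "mul", "x3", "x1", "x2")
    print(regs["x3"])  # 42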
Once everything is microcoded, this is all a bit silly. What matters is how efficiently you can encode and communicate the μ-ops to the ROB. RISC-V with the C extension (and using only today's nascent compiler backends!) has more-or-less the same μ-op density as x86-64, with a good order of magnitude or two less complexity in the decoder, and considerably better density than AArch64, which completely lacks reduced-width instructions.
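To give a feel for the decoder-complexity gap: in RISC-V the length of an instruction falls out of a couple of bits in its first halfword, whereas an x86 decoder has to walk prefixes, opcode maps, ModRM/SIB bytes, and displacement/immediate fields before it even knows where the next instruction starts. A minimal Python sketch of the RISC-V length rule (the >=48-bit formats defined in the spec are omitted here):

    def rv_insn_length(first_half):
        # first_half: the first 16 bits of the instruction, as an int.
        # Per the RISC-V spec: lowest two bits != 11 means a 16-bit
        # compressed (C-extension) encoding; bits [1:0] == 11 with
        # bits [4:2] != 111 means a standard 32-bit encoding.
        if first_half & 0b11 != 0b11:
            return 2
        if (first_half >> 2) & 0b111 != 0b111:
            return 4
        raise ValueError(">=48-bit encoding; not handled in this sketch")

    print(rv_insn_length(0x4501))  # c.li a0, 0     -> 2
    print(rv_insn_length(0x0513))  # addi a0, a0, 0 -> 4 (first halfword)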
It's not that CISC won, it's that CISC (eventually) didn't lose to any great degree.
x86s were about the most RISC-y of the CISC processors: 99.9% of instructions that access memory perform an access to a single address. No double-indirect accesses, no memory-to-memory moves, no 21 TLB misses on a single instruction (meaning a program might need enough memory for all 21 page-table pages plus the underlying data pages, 42 pages in all, just to make progress), that sort of thing.
The CISC->RISC thing largely happened because the ratio of CPU speeds to memory speeds changed: low-end CPUs got caches, the caches moved on-chip, and instruction decoding started to be an issue. The x86s were RISC-y enough that they survived that change.
RISC-V is pretty far from attacking Intel anywhere. ARM is the one that should be worried about RISC-V while simultaneously being a cause of worry for Intel.
I don't think this matters, as long as the internals are completely inaccessible to a programmer. In other words, what happens inside is not what is usually called the "architecture", and the architecture is what the definition of RISC is about.
No idea how this will pan out.