Are you assuming that an x86 core is more powerful than an ARM core? That may have been true historically, but only because ARM implementations were optimized for watts and x86 for compute. I think AMD is talking about optimizing ARM for compute as aggressively as x86, and that should (theoretically) make it slightly faster than x86, because the ARM design is more modern, having learned the lessons of CISC and RISC and all that.
I wouldn't bet on anyone beating Intel. They are a freaking behemoth. They have pretty much all the best engineers, and if they don't, they have the funds to hire them. If anyone gets close to actually threatening them, they have the means to ramp up their game.
The CISC vs. RISC debate is mostly dead and irrelevant now: internally, all modern CPUs are RISC-like. x86 CPUs merely have a micro-op translation layer in the front end. That imposes some overhead, but in modern CPUs with billions of transistors that overhead isn't that great, and it's usually not the bottleneck.
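To make "micro-op translation" concrete, here's a toy sketch in C. It's purely conceptual, not how any real decoder is built; the `uop` type and the `crack_add_mem_reg` function are invented for illustration. The idea is that one CISC-style read-modify-write instruction gets cracked into the RISC-like load/op/store micro-ops the back end actually executes.

```c
/* Toy illustration only -- not any real CPU's decoder. */
#include <stdio.h>

typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;

typedef struct {
    uop_kind kind;
    int dst;   /* destination register number */
    int src;   /* source register number */
} uop;

/* Crack a hypothetical "add [mem], reg" into three micro-ops:
 * a load into a temporary, an add, and a store back to memory. */
static int crack_add_mem_reg(int mem_reg, int src_reg, uop out[3])
{
    out[0] = (uop){ UOP_LOAD,  100,     mem_reg }; /* tmp   <- [mem]   */
    out[1] = (uop){ UOP_ADD,   100,     src_reg }; /* tmp   += reg     */
    out[2] = (uop){ UOP_STORE, mem_reg, 100     }; /* [mem] <- tmp     */
    return 3;
}

int main(void)
{
    uop buf[3];
    int n = crack_add_mem_reg(1, 2, buf);
    for (int i = 0; i < n; i++)
        printf("uop %d: kind=%d dst=%d src=%d\n",
               i, buf[i].kind, buf[i].dst, buf[i].src);
    return 0;
}
```

The point of the sketch is just that the translation step is a fixed, well-understood piece of front-end logic, not something that touches the execution core itself.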
The lesson of the last 20-30 years is that the ISA is almost entirely tangential to performance. There are some cases where that's not entirely true (like VLIW), but for the most part it still holds. What matters is the core design at a hardware level. And there it would be the height of foolishness to bet against Intel. They've been challenged multiple times by multiple world-class competitors, and they've demolished them.
Why is it so hard to imagine a world where multiple competitors each have their niche? Look at iPhone vs. Android, for example: they're in the exact same market, and there's still room for diversity and competition. The only way Intel could "die" is if they stop paying attention and stop caring about competing, and I just don't see that happening. At the end of the day they still have a ton of talent, the best fab capability and capacity in the world, and the ability to execute on projects successfully.
"They've been challenged multiple times by multiple world-class competitors, and they've demolished them."
Yes, but not always by technical might. They were once the scrappy underdog themselves, and it's been shown in court that they aren't too worried about being unsportsmanlike.
I think x86 processors since the Pentium Pro (mid-'90s) have been converting instructions into internal micro-ops. If it was feasible and efficient back then, it will be quite a small block of logic now. While some time is spent decoding, time is also saved by having denser code (more code fits in cache, less memory bandwidth is used). It is far from clear that the net effect is slower.
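A rough way to see the density argument for yourself (just an illustrative sketch; the file name, the function, and the cross-compiler name `aarch64-linux-gnu-gcc` are assumptions about a typical Linux setup): compile the same C function for x86-64 and for AArch64 and compare the size of the resulting `.text` sections.

```c
/* density.c -- a small, arbitrary function used only to compare
 * generated code size across ISAs. The logic itself doesn't matter;
 * it just needs to produce a reasonable amount of machine code. */
#include <stddef.h>

long checksum(const long *data, size_t n)
{
    long acc = 0;
    for (size_t i = 0; i < n; i++) {
        /* mix of arithmetic, memory access, and branching */
        long v = data[i];
        acc += (v ^ (acc << 3)) - (v >> 2);
        if (acc < 0)
            acc = -acc;
    }
    return acc;
}

/* Compare code density (assumes gcc and an AArch64 cross-compiler
 * are installed; package names vary by distro):
 *
 *   gcc -O2 -c density.c -o density_x86.o
 *   aarch64-linux-gnu-gcc -O2 -c density.c -o density_arm.o
 *   size density_x86.o density_arm.o   # compare the "text" column
 *
 * x86's variable-length encoding often comes out somewhat smaller,
 * which is the "denser code, better cache use" side of the trade-off
 * set against the cost of a more complex decoder.
 */
```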
I would turn it around and ask you to cite something showing that the translation actually causes a significant performance loss.