It's really hard to compare them meaningfully. They're designed for different things.
Most desktop and server CPUs prioritize massive parallelism, which is useful for OSes and multithreaded apps; in contrast, most MCUs have a single single-threaded core. CPUs are expected to run with sophisticated and bulky active cooling, so they reach 3-5 GHz; MCUs are almost always used without any added cooling and need to be power-efficient, so they seldom venture above 500 MHz. To reach those speeds, CPUs also require external power controllers for dynamic voltage scaling, while MCUs are often expected to run off a single supply, often something like "anywhere between 1.8 and 3.3 V". Finally, CPUs have a variety of hardware accelerators for chewing through gobs of data (e.g., AVX vector instructions); the most an MCU is typically expected to handle is a low-resolution camera, and only some higher-end models even have a hardware floating-point unit.
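To make that last point concrete, here's a minimal C sketch of why the FPU matters. The GCC flags are the standard ones for a Cortex-M4F; on an FPU-less part, the same source compiles into calls to the ARM EABI softfloat runtime instead:

```c
#include <stddef.h>

/* Build hard-float for a Cortex-M4F:
 *   arm-none-eabi-gcc -mcpu=cortex-m4 -mfloat-abi=hard -mfpu=fpv4-sp-d16 -O2 -c scale.c
 * On an FPU-less part (e.g. a Cortex-M0), the compiler instead emits calls
 * into the softfloat runtime (__aeabi_fmul etc.), which cost dozens of
 * cycles per operation rather than one. */
void scale(const float *in, float *out, size_t n, float k)
{
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * k;  /* one VMUL.F32 with an FPU; a library call without */
}
```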
Most benchmarks are optimized for desktop tasks too, so you can expect MCU scores to be two orders of magnitude lower. But that doesn't mean they perform that much worse at the tasks they're intended for.
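If you want numbers that are actually meaningful on an MCU, counting cycles directly tells you more than a desktop-oriented score. Here's a hedged sketch using the CMSIS-defined DWT cycle counter (present on Cortex-M3/M4/M7, but not on M0/M0+); the device header name is a placeholder for whatever part you're targeting:

```c
#include <stdint.h>
#include "stm32f4xx.h"  /* placeholder CMSIS device header; substitute your part's */

/* Time a function in CPU cycles using the DWT cycle counter.
 * DWT->CYCCNT exists on Cortex-M3/M4/M7; Cortex-M0/M0+ lack it,
 * so there you'd fall back to a hardware timer instead. */
static uint32_t cycles(void (*fn)(void))
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  /* enable the trace unit */
    DWT->CYCCNT = 0;                                 /* reset the counter */
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;            /* start counting */
    fn();
    return DWT->CYCCNT;                              /* cycles elapsed */
}
```

Divide by the core clock and you can compare parts directly: 1,000 cycles at 100 MHz is 10 µs, regardless of what a desktop benchmark suite would claim.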
In situations where you need to do some real-time ML-based image classification, drive a 4K display, and stream H.265 videos, you generally don't reach for a traditional MCU, but for a CPU, with all the extra power supply complexity and thermal management issues this entails.
SoCs blur the line somewhat, because they often combine CPU cores, a GPU, lots of memory, and a bunch of other things in a single package - making them essentially a fully-fledged computer that can run Linux, but is about as easy to integrate as an MCU. These are still slower than top-of-the-line desktops, but they're in "one order of magnitude" territory.
I wonder how fast a Cortex-M0, an M1, an M3, an M4, and a Cortex-M7 each are.