Horsepower can make it go faster, but the major limitation is graphics memory. Graphics cards with under 12 GB of memory can't handle the model (although I believe there are lower-memory optimizations out there), which means you need a pretty high-end dedicated graphics card on a PC. But because Apple Silicon chips have reasonably fast on-chip graphics with unified memory, the model can run pretty efficiently as long as your Mac has 16 GB or more of RAM.
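A rough way to see why memory is the gate: the weights alone take parameter count times bytes per parameter, before you even count activations or framework overhead. A minimal sketch (the parameter count and dtypes here are placeholder examples, not figures for any specific model):

```python
# Back-of-the-envelope estimate of GPU memory needed just to hold a
# model's weights. Ignores activations and framework overhead, which is
# why real-world requirements land well above the raw weight size.
BYTES_PER_DTYPE = {"fp32": 4, "fp16": 2, "int8": 1}

def weights_gb(param_count: float, dtype: str = "fp16") -> float:
    """Size of the weights in GB for a given parameter count and precision."""
    return param_count * BYTES_PER_DTYPE[dtype] / 1e9

# e.g. a hypothetical 1-billion-parameter model:
print(weights_gb(1e9, "fp16"))  # 2.0 GB of weights alone
print(weights_gb(1e9, "fp32"))  # 4.0 GB at full precision
```

Halving precision (fp32 → fp16 → int8) is exactly the kind of lower-memory optimization mentioned above: same parameter count, half or a quarter of the footprint.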
They have an integrated GPU that I believe Apple claimed was comparable to the RTX 3090 (perhaps since debunked, or at the least a misleading claim).
Apple compared the M1 Max to the RTX 3080 (mobile), which was a stretch.
The M1 Ultra was compared to the RTX 3090, which was a larger stretch.
The M1 Max delivers about 10.5 TFLOPS.
The M1 Ultra about 21 TFLOPS.
The desktop RTX 3080 delivers about 30 TFLOPS and the RTX 3090 about 40.
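Those TFLOPS figures fall out of simple arithmetic: shader ALU count × 2 (an FMA counts as two FLOPs per cycle) × clock speed. A sketch — the core counts are public, but the clocks (especially the ~1.3 GHz Apple GPU clock) are estimates rather than published specs:

```python
# Theoretical peak FP32 throughput: ALUs * 2 FLOPs (one FMA) * clock.
def tflops(shader_alus: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPS assuming one FMA (2 FLOPs) per ALU per cycle."""
    return shader_alus * 2 * clock_ghz / 1000

# M1 Max: 32 GPU cores x 128 ALUs = 4096 ALUs (clock is an estimate)
print(round(tflops(4096, 1.296), 1))   # ~10.6
# M1 Ultra: two M1 Max dies -> 8192 ALUs
print(round(tflops(8192, 1.296), 1))   # ~21.2
# Desktop RTX 3080: 8704 CUDA cores at ~1.71 GHz boost
print(round(tflops(8704, 1.71), 1))    # ~29.8
# Desktop RTX 3090: 10496 CUDA cores at ~1.70 GHz boost
print(round(tflops(10496, 1.70), 1))   # ~35.7
```

That last number is why "about 40" for the 3090 is generous: on boost-clock math it comes out closer to 35–36 TFLOPS.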
Apple’s comparison graph showed the speed of the M1 chips vs. the RTX cards at increasing power levels, with the M1s being more efficient at the same wattage (which is probably true). However, the graph cut off before the RTX GPUs reached their full power draw, so it was somewhat misleading.
The M1 Max and M1 Ultra have extra video-processing modules that make them faster than the RTX GPUs at some video tasks, though.
I believe that’s cherry-picked data. More specifically, Apple says it’s comparable to the RTX 3090 at a given power budget of 100 watts. They don’t mention that the 3090 goes up to 360 watts.
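Putting the thread's numbers together shows how both claims can be true at once — better perf-per-watt for Apple, higher absolute performance for NVIDIA. Using the TFLOPS figures quoted above and the wattages from this comment (stated, not measured):

```python
# Efficiency vs. absolute throughput, using figures quoted in the thread.
def tflops_per_watt(tflops: float, watts: float) -> float:
    return tflops / watts

m1_ultra = tflops_per_watt(21, 100)   # ~0.21 TFLOPS/W at its power budget
rtx_3090 = tflops_per_watt(40, 360)   # ~0.11 TFLOPS/W at full power draw
print(round(m1_ultra, 2), round(rtx_3090, 2))
```

So at 100 W the M1 Ultra is roughly twice as efficient, but the 3090 can simply burn 3.6x the power to reach nearly double the total throughput — which is exactly the region Apple's graph cut off.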
The relative feebleness of x86 iGPUs is partly about the bad software situation (fragmentation, etc., and the whole story of how WebGL is now the only portable way) and partly a lack of demand for better iGPUs. AMD tried the strategy of beefier iGPUs for a while with a "build it and they [the software] will come" leap of faith, but pulled back after trying for many years.