How does a virtualized ARM build of Ubuntu, for example, run in Parallels compared to the same workload in an x86 virtual machine on hardware in the same price range?
If my day-to-day development workflow lives in Linux virtual machines 90% of the time, is it worth getting an M1 for virtualization performance? I realize I'm hijacking, but I haven't found any good resources for this kind of information...
This is very dependent on setup. If your I/O mostly goes through SR-IOV devices, performance will be very close to native anyway; the difference would come down to the IOMMU (I have no idea whether there's a significant difference between the two platforms there). If devices are being emulated, performance probably has more to do with the implementation of those devices than with the platform itself.
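One way to get a feel for which case you're in is to look, from inside the guest, at which drivers bind your PCI devices. Below is a rough sketch in Python that walks sysfs and flags anything virtio-backed as paravirtualized; the vendor-ID heuristic (0x1af4 = virtio) and the assumption that everything else is emulated or passed through are mine for illustration, not a statement about how Parallels actually exposes devices.

    #!/usr/bin/env python3
    """Sketch: inside a Linux guest, list PCI devices and the driver bound
    to each, to distinguish paravirtualized (virtio) devices from emulated
    or passed-through hardware. Heuristic only."""

    import os

    PCI_ROOT = "/sys/bus/pci/devices"
    VIRTIO_VENDOR = "0x1af4"  # Red Hat, Inc. vendor ID used by virtio devices (assumption: treated as the paravirtual marker here)

    def read(path):
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return "?"

    for dev in sorted(os.listdir(PCI_ROOT)):
        base = os.path.join(PCI_ROOT, dev)
        vendor = read(os.path.join(base, "vendor"))
        device = read(os.path.join(base, "device"))
        driver_link = os.path.join(base, "driver")
        # The driver symlink is absent if no driver has claimed the device.
        driver = os.path.basename(os.readlink(driver_link)) if os.path.islink(driver_link) else "(none)"
        kind = "paravirtual (virtio)" if vendor == VIRTIO_VENDOR or "virtio" in driver else "emulated or passthrough"
        print(f"{dev}  vendor={vendor} device={device} driver={driver:12s} -> {kind}")

If most of your disk and network I/O shows up as virtio (or a vendor paravirtual driver), the emulated-device overhead the parent comment describes matters much less.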