
VirGL is definitely an interesting project, but all one has to do to get GPU passthrough working (from a Linux QEMU host to any guest OS) is: 1.) research a cheap, secondary GPU that is natively supported by the guest OS; 2.) plug that secondary GPU into a PCIe slot on the host and hook it up to the primary monitor with a second cable (D-Sub vs. DVI, etc.); 3.) set up Linux to ignore the secondary GPU at boot and configure a QEMU VM for GPU passthrough. The whole process takes perhaps a few hours and works flawlessly, with no stability issues (at least with Asus motherboards). (Switching between the two GPU cables can be done in software with Display Data Channel /DDC/ utilities, and switching the keyboard/mouse can be done with evdev /event device/ passthrough.) More information: https://github.com/kholia/OSX-KVM/blob/master/notes.md#gpu-p...
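To make step 3 concrete, here's a rough sketch of the usual vfio-pci route on an Intel host. The PCI address (01:00.0) and vendor:device IDs below are made up for illustration; substitute whatever lspci reports for your secondary GPU:

```shell
# 1. Find the secondary GPU's PCI address and vendor:device IDs.
lspci -nn | grep -i vga

# 2. Tell the host kernel to leave that GPU (and its audio function)
#    to vfio-pci at boot, via the kernel command line, e.g.:
#      intel_iommu=on iommu=pt vfio-pci.ids=1002:67df,1002:aaf0

# 3. Hand the device to a QEMU VM (GPU + its HDMI audio function):
qemu-system-x86_64 \
  -enable-kvm -machine q35 -cpu host -m 8G \
  -device vfio-pci,host=01:00.0,multifunction=on \
  -device vfio-pci,host=01:00.1
```

For the DDC part, `ddcutil setvcp 60 <value>` switches the monitor's input source (VCP feature 0x60); the value for each input is monitor-specific, so check `ddcutil getvcp 60` first.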



> plug such a secondary GPU into a PCIe slot on the host

Where can I find an unused PCIe slot on my laptop?

Also, it is not very rational to buy two GPUs and use only one at a time.


All gaming laptops come with a secondary GPU. What I do on mine is disable the dedicated GPU, enable passthrough, and voila - shitty Windows can run virtualised with a dedicated GPU. Takes 5-10 mins to do. Also, if your laptop supports Thunderbolt or USB4, then all you need is an eGPU. But that's the pricier solution.

Having said that, VirGL is pretty darn sweet!


I have an AMD 290 or whatever GPU to run NixOS (will be upgrading to Intel soon) and a 3060 that I pass into a Windows VM for gaming; I feel very rational.

Unused PCIe slots in a laptop are hard to come by. I haven't tried it, but I imagine Thunderbolt could work for this purpose.


Except that requires IOMMU which is not always available nor is it always reliable on consumer motherboards.


Good point, but I believe that was a serious problem 10 years ago, while these days virtually any decent motherboard properly supports IOMMU, consumer-grade boards included - e.g. any Asus motherboard should work perfectly.


They might support IOMMU, but the default groups can be very annoying to work with, so you also have to use a patched kernel (the ACS override patch) that "ignores" the actual groups.
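For anyone who wants to see what they're dealing with before patching anything, the grouping the firmware handed you can be inspected straight from sysfs (requires bash and pciutils):

```shell
#!/usr/bin/env bash
# List every IOMMU group and the PCI devices inside it.
# If the GPU shares a group with unrelated devices, passing through
# just the GPU won't work without splitting the group somehow.
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    echo "    $(lspci -nns "${d##*/}")"
  done
done
```

If the directory is empty, the IOMMU is disabled in firmware or missing the `intel_iommu=on` / `amd_iommu=on` kernel parameter.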


IOMMU is an integrated CPU feature, so consumer motherboards do not affect reliability - they just make the setting available.

IOMMU is also required for various modern security features, and M$ requires it for certification nowadays to protect against DMA vulnerabilities (heard of Thunderbolt?).


No.

Just because your CPU supports IOMMU, that does not mean GPU passthrough is going to work properly, or that the groups are set up properly, or...


It doesn't, but if you want to do this you might wanna consider not buying the cheapest motherboard either way.


And we've come full circle :)


See https://looking-glass.io/ to get around directly connecting monitors

But forwarding real GPUs limits the number of VMs and can cause stability issues if you're unlucky - PCIe device bugs, especially reset bugs, are not unusual. I had problems with e.g. a forwarded RX 580 that would require a hard reboot of the host to fix...
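A quick way to guess whether a card is prone to this: check whether it advertises Function Level Reset. A sketch (the PCI address 01:00.0 is hypothetical, and the reset_method file only exists on newer kernels, roughly 5.15+):

```shell
# Devices lacking FLR often cannot be cleanly reset after a VM
# shutdown - the classic AMD "reset bug" scenario.
dev=0000:01:00.0
if lspci -vv -s "${dev#0000:}" | grep -q 'FLReset+'; then
  echo "device advertises Function Level Reset"
else
  echo "no FLR - reset quirks may be needed"
fi
# Newer kernels also expose which reset method will be used:
cat "/sys/bus/pci/devices/$dev/reset_method" 2>/dev/null
```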

Things like Intel GVT-g and VirGL are better solutions, when they can be used.


> See https://looking-glass.io/ to get around directly connecting monitors

Interesting, thanks for the link!

> Had problems with e.g. a forwarded rx580

I've been forwarding a Sapphire Radeon RX 580 Pulse to both Windows and macOS for literally years and, except for the specific problem of host sleep/wake, have had no problems whatsoever. Perhaps try an Asus motherboard?

> Things like Intel GVT-g and VirGL are better solutions, when they can be used.

Sure, when software solutions are available and stable (and there's no need for near-native GPU performance), they are definitely easier to work with. However, as of today, GPU passthrough is probably the only solution available for a daily driver.


I'm not sure what you mean by forwarding, but if you mean regular GPU passthrough and the reset bugs with AMD GPUs, then for the RX 580, vendor-reset should typically work: https://github.com/gnif/vendor-reset
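In case it saves someone a search, installing it looks roughly like this (a sketch, not the canonical instructions - it's a DKMS module, package prerequisites vary by distro, and the sysfs step only applies on kernels that expose reset_method; replace the PCI address with your card's):

```shell
# Build and load the out-of-tree vendor-reset module via DKMS.
git clone https://github.com/gnif/vendor-reset
cd vendor-reset
sudo dkms install .
sudo modprobe vendor-reset

# On kernels that support selectable reset methods, prefer the
# vendor-specific quirk for the GPU (address here is hypothetical):
echo device_specific | sudo tee /sys/bus/pci/devices/0000:01:00.0/reset_method
```

You'll also want the module loaded before vfio-pci binds the card, e.g. via a modules-load.d entry.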

It does not work with the lower-end 6000 series (below the 6800) or the 7000 series, which can also have reset issues.


> The whole process takes perhaps one or two hours and works flawlessly, with no stability issues.

Good joke, that really made me laugh :)

I tried this on my ASRock X370 Taichi a while back. Turns out that there is a bug in older bios versions and the whole thing just freezes when starting the QEMU VM. Then there is an intermediate bios version with which I actually managed to get it working. Unfortunately, I later upgraded my CPU and had to install a new bios, which again completely breaks the IOMMU groups. I probably spent a few days getting everything running, including downgrading from a non-downgradeable bios version.

And even when it was working, it was a pain to use. Want to use the passthrough GPU in Linux? Now I have to dual-boot QEMU VMs, or disable the passthrough, reboot, then enable it again, reboot once more...

I really want proper GPU virtualisation...


> Good joke, that really made me laugh :)

I've been forwarding an AMD GPU to both Windows and macOS for literally years across multiple Asus motherboards and, except for the specific problem of host sleep/wake, have had no problems whatsoever, even considering I work in GPU-passthrough VMs all day, every day. Perhaps try a recent Asus motherboard?

> have to dual-boot QEMU VMs or disable the passthrough

Yes, you would have to buy as many cheap, secondary GPUs as the number of virtual machines that you want to run in parallel.

> I really want proper GPU virtualisation...

Sure, I don't blame you - my point was that the only truly usable GPU virtualization solution available today is GPU passthrough, and that GPU passthrough is much easier to set up than it is commonly perceived.


> Perhaps try a recent Asus motherboard

And if I don't want to or can't afford to buy new hardware?

> Sure, I don't blame you - my point was that the only truly usable GPU virtualization solution available today is GPU passthrough and that GPU passthrough is much easier to setup than it is commonly perceived.

Okay, but for the poster you're replying to it is not available on their hardware.


> Perhaps try a recent Asus motherboard?

I also have an AM4 ASUS board (is that recent enough?) and earlier this year, ASUS decided to completely remove any mention of this board from their site, as if it never existed. So no bios updates for me, I guess? I have no idea if it is up to date, or whether my CPU is even supported...

> Yes, you would have to buy as many cheap, secondary GPUs as the number of virtual machines that you want to run in parallel.

Except that cheap GPUs are... cheap and not very powerful, so depending on what I want to do, I would have to buy a bunch of expensive and powerful GPUs (or go back to rebooting VMs to switch). And there are only so many PCIe slots on my board (two x8 and one x4, the latter of which is already in use by a non-GPU card). Running them in an x1 slot also doesn't sound like a great idea.

I did try to run it, but gave up in the end after the bios updates broke things, and I didn't want to spend even more money on another fast GPU.


> earlier this year, ASUS decided to completely remove any mention of this board from their site, as if it never existed.

Maybe your board only disappeared from some regional sites? See if you can find it on asus.cn or one of their other regional sites (translation service required, but you can probably muddle through).

I haven't seen that in a long time; AMD-640 Super7 chipset boards got disappeared like that, but back then you could still get the bios updates via FTP - the boards just dropped off the website.


> I (...) have an AM4 ASUS board

No, I never tried GPU passthrough with an AMD CPU, Intel only.



