
Windows VMs under Linux with Native Performance


gnif:
Hi All,

This is a shameless plug for the free, open-source project I wrote and use on a daily basis, along with thousands of others. It allows you to run a Windows VM under Linux with a VFIO/SR-IOV or GPU pass-through setup.

2 Minute Demo (aka the TL;DR version  ;D )
https://youtu.be/7XbQOjfnxbU

Before I begin I should explain what this is, as it's not a commonly known technology and many people do not know it's even possible. With hardware IOMMU support (Intel VT-d / AMD-Vi) and the Linux VFIO framework, we are able to isolate PCIe devices and give them to a virtual machine for its sole use. This technology has been used for years in the hosting industry, where network interface cards support advanced features like SR-IOV, allowing them to be split into multiple "virtual devices", more accurately known as "virtual functions". This gives the VM direct access to the hardware so that it can obtain bare-metal performance.
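
If you want a quick look at how your own system is grouped, the IOMMU groups are exposed through sysfs and can be listed with a few lines of script. This is only a rough sketch that reads /sys/kernel/iommu_groups directly; keep in mind that every device sharing a group has to be passed through together:

--- Code: ---
#!/usr/bin/env python3
# Rough sketch: list IOMMU groups and the PCI devices inside each one.
# Assumes the IOMMU is enabled in firmware and on the kernel command line
# (intel_iommu=on or amd_iommu=on); the sysfs layout itself is standard.
from pathlib import Path

GROUPS = Path("/sys/kernel/iommu_groups")

def list_iommu_groups() -> None:
    if not GROUPS.is_dir():
        print("No IOMMU groups found - is the IOMMU enabled?")
        return
    for group in sorted(GROUPS.iterdir(), key=lambda p: int(p.name)):
        for dev in sorted((group / "devices").iterdir()):
            # Each entry is a PCI address such as 0000:01:00.0
            print(f"group {group.name}: {dev.name}")

if __name__ == "__main__":
    list_iommu_groups()
--- End code ---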

For a while now there has been a growing community of users using this to instead pass a complete GPU to the virtual machine, allowing the VM to have bare-metal 3D rendering performance. This does require two GPUs in your system, either by installing another one, or, if you are using a laptop, you may already have two (iGPU + AMD/NVidia something). This is commonly known as a VFIO pass-through or VGA pass-through configuration. There is one shortcoming of this type of setup: the GPU has no idea that it's inside a VM and as such still wants to output to a physical monitor connected to its output.
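
To hand the secondary GPU to the VM, it first has to be bound to the vfio-pci driver instead of its normal graphics driver. Most people do this via the kernel command line or a modprobe config, but the underlying sysfs mechanism looks roughly like this (a sketch only; the PCI address is an example, it must run as root with vfio-pci loaded, and everything else in the same IOMMU group needs the same treatment):

--- Code: ---
#!/usr/bin/env python3
# Sketch: rebind a PCI device (e.g. the guest GPU) to vfio-pci using the
# kernel's standard driver_override mechanism. Run as root with the
# vfio-pci module already loaded. The address below is only an example.
from pathlib import Path

PCI_ADDR = "0000:01:00.0"          # example - check `lspci -D` for yours
DEV = Path("/sys/bus/pci/devices") / PCI_ADDR

def bind_to_vfio(dev: Path) -> None:
    # Tell the PCI core that only vfio-pci may claim this device
    (dev / "driver_override").write_text("vfio-pci")
    # Unbind it from whatever driver currently owns it (amdgpu, nouveau, ...)
    if (dev / "driver").exists():
        (dev / "driver" / "unbind").write_text(PCI_ADDR)
    # Re-probe; vfio-pci will now claim the device
    Path("/sys/bus/pci/drivers_probe").write_text(PCI_ADDR)

if __name__ == "__main__":
    bind_to_vfio(DEV)
--- End code ---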

This is where my software comes in (Looking Glass - https://looking-glass.io). It is a two-part application that makes use of a special virtual device called IVSHMEM, or "Inter-VM Shared Memory", to map a block of shared RAM into the VM that can be used to move large amounts of data in and out of the VM with extremely low latency. The "host" application (not the host system, yes, the naming is confusing) runs inside the guest, captures the frame-buffer output of the GPU and feeds it into the shared memory segment.

The client-side application, which runs either on the host or even in another VM (yes, we can run VM to VM), then takes the feed from shared RAM. It is essentially a high-performance RDP-style client, but instead of using a slow network protocol with compression, it's lossless and the latency is outstanding. In fact, in some cases we can get the frames to screen faster than the GPU outputs them to a physical monitor, as the Linux graphics pipeline is far shorter than the Windows one.
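
There is no magic on the Linux side of this: the IVSHMEM device is just a chunk of RAM that both sides map. With the common libvirt setup it is backed by a plain file in /dev/shm (the path below is the usual default; the actual frame format, KVMFR, is defined and parsed by Looking Glass itself). A minimal illustration of the mapping:

--- Code: ---
#!/usr/bin/env python3
# Illustration only: map the IVSHMEM-backed shared memory region to show the
# mechanism. /dev/shm/looking-glass is the usual libvirt default; the real
# frame layout (KVMFR) is defined by Looking Glass and parsed by its client.
import mmap
import os

SHM_PATH = "/dev/shm/looking-glass"

def map_shared_region(path: str) -> mmap.mmap:
    fd = os.open(path, os.O_RDWR)
    size = os.fstat(fd).st_size
    print(f"mapping {size // (1024 * 1024)} MiB of shared RAM from {path}")
    # The guest's "host" application and the client both map this same region,
    # so frames written on one side are immediately visible on the other.
    region = mmap.mmap(fd, size)
    os.close(fd)  # the mapping remains valid after the fd is closed
    return region

if __name__ == "__main__":
    mem = map_shared_region(SHM_PATH)
    print(f"first 16 bytes: {mem[:16].hex()}")
    mem.close()
--- End code ---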

So you might now be asking what kind of crazy exotic hardware you need for this kind of setup? That's just it: none. You can do this on any CPU with a decent core count (6+) and use any secondary GPU you have (provided it plays nice with VFIO; some AMD GPUs have hardware bugs that prevent their use). We have people running this on 5-year-old laptops; it's in use at universities by students and professors for comp-sci projects, and by artists who need access to Windows-only applications such as the Adobe suite, AutoCAD, etc.

Personally I have this working on my Intel laptop, a Ryzen 7 1700X desktop, and my current workhorse that was kindly donated for this work, an AMD EPYC Milan system. I have not needed or even wanted to dual-boot my Linux system in 5 years now, for either gaming or productive use.

One additional feature I have not mentioned yet is for those who want to capture the VM for streaming, recording, etc. I have also written a native OBS plugin for Looking Glass that takes the feed directly into OBS with no additional overhead. This allows you to offload the video processing and encoding to the host system (or another VM), even offloading to a GPU with hardware encoding capability.

Halcyon:
Brilliant work as always Geoffrey. I've plugged you on LinkedIn.  :-+

PKTKS:
Passing by to mention the brilliant work as well...

But also a reminder that GPU pass-through and virtio VMs require a properly compiled kernel.
The configuration must enable the relevant PCI and IOMMU options for that to happen.

Indeed an awesome step ahead  :-+
Paul

gnif:

--- Quote from: PKTKS on August 07, 2022, 07:07:06 pm ---But also a reminder that GPU pass-through and virtio VMs require a properly compiled kernel.
The configuration must enable the relevant PCI and IOMMU options for that to happen.

--- End quote ---

What do you mean? Every major distro's stock kernel supports this, and has done so for the past 5 years now.
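
You can check this yourself on a stock distro kernel without rebuilding anything. A rough sketch (it assumes your distro exposes the config via /proc/config.gz or /boot/config-*):

--- Code: ---
#!/usr/bin/env python3
# Sketch: confirm the running distro kernel already ships the VFIO/IOMMU
# options, no recompile needed. Reads /proc/config.gz if available,
# otherwise /boot/config-$(uname -r).
import gzip
import os
from pathlib import Path

OPTIONS = ["CONFIG_IOMMU_SUPPORT", "CONFIG_VFIO", "CONFIG_VFIO_PCI", "CONFIG_KVM"]

def load_config() -> str:
    proc_cfg = Path("/proc/config.gz")
    if proc_cfg.exists():
        return gzip.decompress(proc_cfg.read_bytes()).decode()
    return Path(f"/boot/config-{os.uname().release}").read_text()

def check(options: list) -> None:
    lines = load_config().splitlines()
    for opt in options:
        # Enabled options show up as CONFIG_FOO=y (built in) or =m (module)
        state = next((line for line in lines if line.startswith(opt + "=")),
                     f"{opt} is not set")
        print(state)

if __name__ == "__main__":
    check(OPTIONS)
--- End code ---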

bd139:
PKTKS is about 20 years behind the curve based on his other threads  :-DD

Interesting idea though. Will read up on this tomorrow when I get some time. Either way, more effort on this is appreciated, so nice job  :-+
