Hi Gnif,
very nice to meet you. Thank you very much for your invitation to your Discord. Everything is now working with passthrough. Maybe you can help us work on spoofing GPUs?
Also, Looking Glass may be of interest to you, see https://looking-glass.io, or join us on our Discord server (https://discord.com/invite/52SMupxkvt)
Before we use Looking Glass, I must first have SOLIDWORKS working in native mode with "full scene anti-aliasing" enabled.
@krutav I found something new. The identifier of the GPU comes from OpenGL. When you start SOLIDWORKS, it asks the NVIDIA driver "what is your renderer?", and the driver asks OpenGL "who am I?"
OpenGL answers that it has no idea who it is, only that it sits on PCIe and supports SSE2 ("unknown board/PCIe/SSE2"). That's why SOLIDWORKS says "I don't know who you are" and greys out full scene anti-aliasing.
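For anyone who wants to see what the driver is reporting, the renderer string can be checked with glxinfo. Here is a minimal shell sketch that pulls the renderer line out of a captured sample; the sample text below is illustrative (it mirrors the "unknown board/PCIe/SSE2" string described above), not output from a real run:

```shell
# On a real system you would run:  glxinfo | grep "OpenGL renderer string"
# Illustrative sample of the output described in this post:
sample='OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: unknown board/PCIe/SSE2'

# Extract just the renderer string that SOLIDWORKS checks:
printf '%s\n' "$sample" | sed -n 's/^OpenGL renderer string: //p'
```

If SOLIDWORKS sees a generic string like this instead of a recognized board name, the certified-driver features stay greyed out.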
Hi Gnif,
very nice to meet you. Thank you very much for your invitation to your Discord. Everything is now working with passthrough. Maybe you can help us work on spoofing GPUs?
You're welcome. Note that quite a few people in the VFIO community already use GPU ID spoofing; it's quite common. QEMU supports overriding the PCI ID presented to the guest, and with a bit of ioctl hackery one can intercept the ioctls the NVIDIA driver uses to identify the GPU device ID and spoof it completely.
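As a concrete sketch of the command-line half of this (the ioctl interception is a separate piece), the spoofed IDs can be composed into a -device string. The host address 2d:00.0 and the Quadro RTX 4000 IDs (10de:1eb1, subsystem 10de:12a0) are the ones used later in this thread:

```shell
# Compose a vfio-pci -device option with spoofed PCI IDs (sketch; IDs from this thread).
vendor=0x10de; device=0x1eb1; subven=0x10de; subdev=0x12a0
dev="vfio-pci,host=2d:00.0,x-pci-vendor-id=${vendor},x-pci-device-id=${device}"
dev="${dev},x-pci-sub-vendor-id=${subven},x-pci-sub-device-id=${subdev}"
echo "$dev"
# Then pass it to QEMU:  qemu-system-x86_64 ... -device "$dev"
```

The x-pci-* properties only change what the guest sees in PCI config space; the driver can still identify the silicon by other means, which is where the ioctl interception comes in.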
Before we use Looking Glass, I must first have SOLIDWORKS working in native mode with "full scene anti-aliasing" enabled.
No worries. Once you get it sorted, if you need help, hit me up. Looking Glass is my project.
Hi, I think there is a missing arg here ... like "name=" or some other. How can I see which options are possible in QEMU?
args: -device 'vfio-pci,host=2d:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on,romfile=RTX4000MOD.rom,x-pci-vendor-id=0x10de,x-pci-device-id=0x1eb1,x-pci-sub-vendor-id=0x10de,x-pci-sub-device-id=0x12a0'
I found some args from QEMU 2.0.x:
$ qemu-system-x86_64 -device vfio-pci,? 2>&1 | grep "x-*"
>> vfio-pci.x-pci-sub-device-id=uint32
>> vfio-pci.x-no-kvm-msi=bool
>> vfio-pci.x-pcie-lnksta-dllla=bool (on/off)
>> vfio-pci.x-igd-opregion=bool (on/off)
>> vfio-pci.x-vga=bool (on/off)
>> vfio-pci.x-pci-vendor-id=uint32
>> vfio-pci.x-req=bool (on/off)
>> vfio-pci.x-igd-gms=uint32
>> vfio-pci.x-no-kvm-intx=bool
>> vfio-pci.x-pci-device-id=uint32
>> vfio-pci.host=str (Address (bus/device/function) of the host device, example: 04:10.0)
>> vfio-pci.x-no-kvm-msix=bool
>> vfio-pci.x-intx-mmap-timeout-ms=uint32
>> vfio-pci.bootindex=int32
>> vfio-pci.x-pcie-extcap-init=bool (on/off)
>> vfio-pci.addr=int32 (Slot and optional function number, example: 06.0 or 06)
>> vfio-pci.x-pci-sub-vendor-id=uint32
>> vfio-pci.x-nv-gpudirect-clique=uint4 (NVIDIA GPUDirect Clique ID (0 - 15))
>> vfio-pci.x-no-mmap=bool
I found some args from QEMU 5.x:
$ qemu-system-x86_64 -device vfio-pci,? 2>&1 | grep "x-*"
addr=<int32> - Slot and optional function number, example: 06.0 or 06 (default: -1)
bootindex=<int32>
host=<str> - Address (bus/device/function) of the host device, example: 04:10.0
x-balloon-allowed=<bool> - (default: false)
x-igd-gms=<uint32> - (default: 0)
x-igd-opregion=<bool> - on/off (default: false)
x-intx-mmap-timeout-ms=<uint32> - (default: 1100)
x-msix-relocation=<OffAutoPCIBAR> - off/auto/bar0/bar1/bar2/bar3/bar4/bar5 (default: "off")
x-no-geforce-quirks=<bool> - (default: false)
x-no-kvm-intx=<bool> - (default: false)
x-no-kvm-ioeventfd=<bool> - (default: false)
x-no-kvm-msi=<bool> - (default: false)
x-no-kvm-msix=<bool> - (default: false)
x-no-mmap=<bool> - (default: false)
x-no-vfio-ioeventfd=<bool> - (default: false)
x-nv-gpudirect-clique=<uint4> - NVIDIA GPUDirect Clique ID (0 - 15)
x-pci-device-id=<uint32> - (default: 4294967295)
x-pci-sub-device-id=<uint32> - (default: 4294967295)
x-pci-sub-vendor-id=<uint32> - (default: 4294967295)
x-pci-vendor-id=<uint32> - (default: 4294967295)
x-pcie-extcap-init=<bool> - on/off (default: true)
x-pcie-lnksta-dllla=<bool> - on/off (default: true)
x-req=<bool> - on/off (default: true)
x-vga=<bool> - on/off (default: false)
xres=<uint32> - (default: 0)
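A side note on those 4294967295 defaults for the x-pci-*-id properties: that is just 0xFFFFFFFF, i.e. an "unset" sentinel, so the real device's IDs pass through unmodified unless you override them. Quick check:

```shell
# 4294967295 is the all-ones 32-bit value, used here as "no override":
printf '0x%X\n' 4294967295
# -> 0xFFFFFFFF
```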
I found something interesting on getting the vendor ID over OpenGL:
https://stackoverflow.com/a/42249529
@bayx Thanks for posting the list of arguments for QEMU. I have never seen many of these before and I suppose they will be quite helpful for us here. I don't know too much about OpenGL, but what I can say is that there are a few QEMU args we can try. First of all, the romfile
argument is pretty much useless on these NVIDIA cards, and I still need to try it with an AMD/ATI graphics card to further prove the lack of usability of the feature. I say it is useless because the GPU is going to read right off of its own internal ROM. The only practical use of romfile is to get the boot screen working on a passed-through GPU, but it only works if you keep the PCI ID the same. I think NVIDIA has outsmarted us in this area, and it's understandable.
You also listed a "No GeForce Quirks" argument and that is something worth trying. I don't know where the documentation is but I have a feeling it might be a useful argument for us here. The path to success here requires that we get the QEMU bootscreen working with a spoofed GPU, and the reason this is very important is because it goes to show that the device successfully acts like the GPU we wanted to spoof it as.
As for @gnif, I am interested in getting Looking Glass to work on macOS, as macOS could benefit from it. Newer versions of macOS also come with QEMU and VirtIO drivers, so it might even be possible. macOS is pretty similar to BSD, but you can also run Linux code using Homebrew. I am not the most experienced, but I could try to make something and get it working. macOS can function as a host for QEMU now with pretty good performance. I don't know about GPU passthrough with QEMU on macOS, but they have their own paravirtual graphics device. As for using macOS as the Looking Glass server to serve a Linux VM host, I think it would be possible to achieve if Linux can also act like a Windows guest in a typical Looking Glass setup.
I only have 8GB of RAM so these Virtualization projects are particularly interesting because a Windows VM with 4GB is practically useless. DDR4 prices are quite high :/
I found something interesting: there is an app called "Regal". It is a wrapper for OpenGL requests, and it runs on the host machine.
But I have no idea how to build it from source,
how to install it,
how to run it,
aaand, last but not least, how to configure it in Proxmox?!
I am 100% sure this is integrated in QEMU, but nobody talks about this.
TOP SECRET
https://github.com/p3/regal
Here is a thread about Regal:
https://pyra-handheld.com/boards/threads/regal-opengl-wrapper.82518/
And these args:
-device virtio-vga,virgl=on
No idea how, or whether, you can use "virgl=on,renderer=Quadro RTX 4000" (or something like this) in our x-line:
args: -device 'vfio-pci,host=2d:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on,romfile=RTX4000MOD.rom,x-pci-vendor-id=0x10de,x-pci-device-id=0x1eb1,x-pci-sub-vendor-id=0x10de,x-pci-sub-device-id=0x12a0
@bayx thanks for bringing up the topic of "VirtIO GPU/VirGL". Essentially, it is a driver that allows a Linux guest VM to access a KVM host GPU, but only for OpenGL (and possibly Vulkan in the future). It only works on Linux, and it is a paravirtualized adapter, so there is not much that can be done with it.
I also looked at this "Regal3D" and it is very old and no longer being developed. I am certain it has been replaced by VirtIO VirGL. If you plan on running Linux, this VirtIO GPU will be fine, but it will be useless outside of that. I don't think SOLIDWORKS runs on Linux either, so why bother. Some people are trying to get a VirGL driver on Windows, but success has been very limited, so you can't count on that.
Today I will try an ATI HD 4650 and an ATI FirePro V5700. Both are the same card, but one is certified for SOLIDWORKS 2008 and other CAD programs, and one is for consumers. I will also try the old 660 Ti, but first I must install the latest QEMU on my Proxmox.
I give up
That's unfortunate
I haven't had much success lately either, because most of the mods only work by flashing the GPU ROM which I don't have time for. I am looking into using Linux for some other graphics related projects though, maybe with VirtIO GPU to provide some virtual Linux desktops with minimal overhead and no virtualization.
Put it on ebay... Buy me a Quadro RTX 4000
Now it is working ))))
Awesome! What was the solution?
Oh, man. This driver is putting up a hell of a fight. But at last I am making some sort of progress (see attached screenshot).
I will push the code to github later. The README is going to need an update, things got ... complicated.
Nice job. But what does it mean?
Is it done but needs a few more updates, or
are many milestones done but some milestones still need to be worked out?
Nice job. But what does it mean?
Is it done but needs a few more updates, or
are many milestones done but some milestones still need to be worked out?
https://github.com/DualCoder/vgpu_unlock
It's a pretty incredible project that DualCoder is working on. It allows vGPU to work on consumer graphics cards, creating virtual Quadros for virtual machines. The only thing is that you would still need a license for vGPU, which costs far too much to actually be worth it. The more practical solution is obviously to buy a second GPU, but the appeal of vGPU is to save space and use one graphics card. It also works quite well compared to other solutions like AMD MxGPU and Intel GVT-g. It's definitely a cool project worth contributing to, and I will load it on my Proxmox server later today to test functionality. If it works, it could make for something like a dual 1050 Ti gaming server. (I already have a gaming PC, which is cheaper than a vGPU license, so it's really only for fun and not practical use.)
Massive thanks to @DualCoder for making the vGPU Unlock program. It works fairly well!
I tested it out with PROXMOX hypervisor (KVM) and a Windows 10 Virtual Machine. I was even able to game at 60 FPS! Only problem is that after 10 minutes it capped me at 3 FPS because I can't afford a license
In all seriousness though, this is awesome work and it is actually very well done. I suggest all of you try it out and contribute to the project!
Next step: trying to figure out how to bypass licensing...
There is a post in this thread with a working hacked vGPU GRID driver. Why not ask him for a working solution?
He only said that he will not release it. I know the solution to hacking the licensing, but it's obviously against the NVIDIA terms of service.
There are two options: make an add-on to DualCoder's script that removes the licensing requirement, OR hack the licensing server, because it runs a FlexNet license server which is apparently very easy to crack.
I probably can't do any of these because my vGPU Evaluation license expired 3 days ago
There is a post in this thread with a working hacked vGPU GRID driver. Why not ask him for a working solution?
There are two options: make an add-on to DualCoder's script that removes the licensing requirement, OR hack the licensing server, because it runs a FlexNet license server which is apparently very easy to crack.
Still waiting for a working noob version. 😊
Still waiting for a working noob version. 😊
This is about as noob friendly as it gets... I suppose I could write an automation script. It's working on Proxmox just fine, albeit a little fiddly. The performance with vGPU is shockingly good apparently. I used parsec and VMware horizon to do remote desktop.
I am looking into finding a driver hack to temporarily bypass licensing.
I haven't tried specviewperf but that's next on my list, just waiting for Nvidia to send me a new evaluation license
Edit: This is very funny... if you never open Nvidia control panel after installing the driver it won't check for license. (Only for 10 minutes, though. My VM is slow so it took a while, but yours might be faster.)
Time for the SpecViewPerf test! For anyone wondering, this also unlocks the anti-aliasing in Solidworks.
Edit 2: Specviewperf is 17GB so this is going to take a really long time...
Update: I ran SPECviewperf and the performance was pretty abysmal, to say the least. Anything with OpenGL crashed for some reason and gave me the "A TDR has been detected" error. The DirectX tests like Autodesk ran fine and I saw anywhere from 10-60 FPS, so I'm fine with that.
The setup I ran is a GTX 1080 spoofed to a Tesla P4 thanks to @DualCoder's script, and then I created a P4-4Q, which is half of the graphics card's full resources.
My expectations were definitely exceeded, but I won't ever use this because all the apps I run work on a GTX 1060.
Edit: The vGPU host driver has a timer that automatically drops an unlicensed vGPU to 15 FPS after exactly 20 minutes, unfortunately. I am not so sure how to hack the driver to get around licensing at this point. Maybe a host driver hack could solve that...
I hope somebody hacks the driver...
Hello everyone!
I have recently obtained a Tesla K10 (converted to a K2) from eBay. Unfortunately, K2 drivers (non-vGPU) are not supported on modern Linux, so I decided to convert it back to a K10 for now. I installed resistors and got it displayed as a K10; however, I cannot find a correct BIOS dump for those GPUs.
I used nvflash with override to flash one of the chips using this one:
https://www.techpowerup.com/vgabios/213266/213266
Does anyone have a full BIOS dump from an original K10 for both vBIOS chips? From what I understand, they are not the same.
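If you do get dumps of both chips, a quick way to confirm whether they really differ (chip0.rom and chip1.rom are hypothetical filenames for the two chip dumps):

```shell
# Report whether two vBIOS dumps are byte-identical (filenames are hypothetical).
same_rom() { cmp -s "$1" "$2" && echo "identical" || echo "different"; }
# Usage:  same_rom chip0.rom chip1.rom
```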
P.S. I plan to work on an interesting project with the final goal of converting a dual-GPU Tesla K10 into a dual K5000. The K10/K2 and K5000 share similar GPUs (GK104) with different part numbers; however, the number of CUDA cores, TMUs, and ROPs is the same (same story as the GK104 on the GTX 690 vs. the GK104 on the K5000, or the GK104 on the GTX 680).
Tesla K10: 10de:118F
Grid K2: 10de:11BF
Quadro K5000: 10de:11BA
So far, all the resistor values mentioned in this topic are valid for a real K10 as well. The picture below shows the resistors used for the K10 BIOS chip (the rear one, near the power connectors). R2 and R3 are responsible for the digit change from 8 to B, while R4 and R5 are responsible for the GDDR5 manufacturer (Samsung vs. Hynix). R1 and some supporting resistors around it are part of the ROM circuit, which is identical to the GTX 7xx lineup. I compared values and reverse engineered the schematic of the ROM; the schematic is attached in the second picture. The vBIOS model is the following:
http://ww1.microchip.com/downloads/en/devicedoc/doc0606.pdf
From my understanding, the 4th-digit difference comes not from the vBIOS circuit (because it affects the 3rd digit only) but from strap #2 on the GPU die itself. A GTX 780 Ti schematic is attached for your reference; I found it on a Russian electronics repair forum. I have seen it said that the 4th digit can be changed using BIOS straps, but I did not understand how to do it. It would be great if someone could elaborate on this. Thank you.
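As a sanity check on the resistor/strap story, XOR-ing the device IDs listed above (118F / 11BF / 11BA) shows exactly which bits have to flip in each conversion:

```shell
# Bits that differ between Tesla K10 (118F) and Grid K2 (11BF):
printf 'K10 -> K2:   0x%04X\n' $(( 0x118F ^ 0x11BF ))   # -> 0x0030
# Bits that differ between Grid K2 (11BF) and Quadro K5000 (11BA):
printf 'K2 -> K5000: 0x%04X\n' $(( 0x11BF ^ 0x11BA ))   # -> 0x0005
```

0x0030 sits entirely in the third hex digit (the one R2/R3 change), while 0x0005 sits in the fourth digit, consistent with that difference coming from a separate strap rather than the vBIOS resistor circuit.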