     | HEX      | Binary
AND0 | 7FFC3FFF | 0111 1111 1111 1100 0011 1111 1111 1111
OR0  | 80005000 | 1000 0000 0000 0000 0101 0000 0000 0000
AND1 | 73FFFFFF | 0111 0011 1111 1111 1111 1111 1111 1111
OR1  | 8C000000 | 1000 1100 0000 0000 0000 0000 0000 0000
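If I read these right, they look like the usual clear/set mask pairs: the existing 32-bit word is ANDed with the AND mask and then ORed with the OR mask. A minimal Python sketch of that combination (the input values below are purely illustrative, not real register contents):

    AND0, OR0 = 0x7FFC3FFF, 0x80005000
    AND1, OR1 = 0x73FFFFFF, 0x8C000000

    def apply_masks(word, and_mask, or_mask):
        # clear the bits selected by the AND mask, then force the OR bits on
        return (word & and_mask) | or_mask

    print("%08X" % apply_masks(0xFFFFFFFF, AND0, OR0))  # FFFC7FFF: AND0 bits cleared, OR0 bits set
    print("%08X" % apply_masks(0x00000000, AND1, OR1))  # 8C000000: only the OR1 bits remain set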
Perhaps the failure to reset, e.g. on earlier generations of ATi/AMD cards, is due to the auxiliary power keeping the card "alive" even when power to the PCIe slot is cut. If that is the case, then a switch or relay that turns off the auxiliary input (upon detection of a Vcc cut) might help, though such a relay would have to handle quite a few amps: 240W @ 12V => 20A.
The special thing about the vGPU feature is that one GPU can be shared among up to 8 virtual guests, i.e. it is not dedicated to one VM as with VGA passthrough (or vDGA in ESXi). That requires a more sophisticated solution than a dedicated setup, which made me suspect that the drivers are not only paravirtualized but also hardware-assisted through certain extensions (mind you, the AMD-V/Intel VT-x extensions do not require special paravirtualized drivers on the guest side). The downside of this technology is that it currently gives each VM only up to 512MB of video RAM, and that only DirectX up to version 9.0c is supported, at least in ESXi. Other conditions may apply in Hyper-V and other hypervisors that support the vGPU technology. So maybe there are no hardware extensions involved with the vGPU technology after all.
Well, here's an update. In trying to find the resistor(s) that control the third nibble, ijsf (the guy who did the original GTX480-to-Tesla hack) and I screwed around with the BIOS, and sure enough, the card was no longer recognized.
I disconnected the power to the EEPROM, but that didn't help either. In the end I hooked the EEPROM up to my Raspberry Pi, wrote a Python script that can read from and write to the EEPROM, and finally managed to write the original BIOS back. Luckily the card works again, and now I can always reflash it because I have a breakout board that I can hook up to my RPi.
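For anyone wanting to build something similar, a rough sketch of the kind of dump script involved is below, assuming a typical 25-series SPI flash wired to the Pi's SPI0/CE0 and driven with the spidev module; the chip size, wiring and commands are assumptions, so adapt them to whatever part is actually on the card:

    import spidev

    FLASH_SIZE = 256 * 1024      # assumed 2 Mbit part; check the chip marking
    CHUNK = 1024                 # stay well under spidev's default buffer limit

    spi = spidev.SpiDev()
    spi.open(0, 0)               # SPI bus 0, chip-select 0
    spi.max_speed_hz = 1000000
    spi.mode = 0

    # 0x9F = JEDEC "read ID": a quick sanity check that the chip answers
    ident = spi.xfer2([0x9F, 0x00, 0x00, 0x00])
    print("JEDEC ID:", ["%02X" % b for b in ident[1:]])

    # 0x03 = standard read: command byte, 24-bit address, then data clocks out
    with open("bios_dump.rom", "wb") as out:
        for addr in range(0, FLASH_SIZE, CHUNK):
            cmd = [0x03, (addr >> 16) & 0xFF, (addr >> 8) & 0xFF, addr & 0xFF]
            resp = spi.xfer2(cmd + [0x00] * CHUNK)
            out.write(bytes(resp[4:]))   # first 4 bytes are the command echo

    spi.close()
    # Writing back works along the same lines, using write-enable (0x06),
    # sector erase (0x20/0xD8) and page program (0x02) in 256-byte pages.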
Maybe there is not that much "magic" in sharing a GPU between VMs, but it is quite tricky to do without overhead while staying nearly as feature-rich as on bare metal.
Before AMD-V and Intel VT-x, CPU sharing took a rather substantial penalty from virtualization.
Now that penalty is rather small thanks to the hardware-assisted virtualization offered through VT-x and AMD-V. From the papers on vGPU there seems to be only a small penalty for sharing the GPU; either they have really managed to build smart drivers, or there is something hardware-assisted backing it up. Or maybe there is a rather substantial overhead that is simply "offset" by the capabilities of the GPU.
Even with those, the virtualization performance penalty is substantial:
http://www.altechnative.net/2012/08/04/virtual-performance-part-1-vmware/
There were also other solutions before VT-x that provided only marginally worse performance (e.g. kqemu).
Any chance you could post a detailed explanation of what you did to make an unbricking rig? I have a suspicion that the root cause of the death of my first GTX690 might have been a misflash that corrupted the PLX chip (PCIe bridge) EEPROM. It'd be nice to have a go at resurrecting it.
What's not clear to me at this point is what will and won't work for those of us who want to make daily use of the end result. I personally would like to give guests in ESXi a decent 3D performance bump, but I'm not sure how to approach that (which card is seen as the best starting point, what work needs doing to it, etc.). I realise this thread isn't about making card X work with technology Y, but most of us are here for the virtualisation benefits.