Author Topic: [MOVED] Hacking NVidia Cards into their Professional Counterparts  (Read 1204575 times)


Offline karakarga

  • Newbie
  • Posts: 3
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #525 on: September 08, 2013, 06:52:54 am »
Hi to all, I have just become a member, and wish to ask!

The September issue of Turkey's Chip computer magazine quoted a "GTX670 to Quadro K5000" mod from the German Chip magazine crew, with a PDF link at the http://chip.tk/13W2MpS address!

They used a Zotac 2GB GTX670 Amp Edition. But for resistor 0, they used 20k instead of 40k! :o

Which value is correct?
 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #526 on: September 08, 2013, 10:39:38 am »
1) Removing both resistors (so all 0-4 resistor locations are empty) results in device ID of 0x11BF, i.e. Grid K2 - which is what I was aiming for anyway. From there on I can soft-mod to K5000 or GTX680MX if required (or anything else with IDs between 0x11A0 and 0x11BF).

Did you add two 40k resistors in the correct locations? If you did not, this could be the cause.

verybigbadboy notes he has some stability problems when they are not on.
I do not have these stability problems (but I did add them on for good measure a month back).
You may be hitting a problem here that neither of us experienced. Try adding the resistors.

I'm not convinced. The stability issues I have seen mentioned are related to the PCI device ID changing randomly. I am not seeing that: it is always 0x11BF. I am only seeing the 1280x800 limitation on my VM host. On bare metal in another machine it works fine.

In K2 mode, the card works for VGA passthrough on Xen. Sort of. Almost. It works fine at up to 1280x800. If I select any res higher than that, it fails. As far as I can tell, the monitor is told to go into sleep mode. Tested with 320.49 and 320.78 drivers.

I am running Xen 4.2.2 with no patches (save for a SLIC table I added in to activate Windows). The unofficial nVidia patches do not have to be used, but they did work for me when I wanted to do GPU passthrough without the cirrus card. My current graphics driver is 320.00. Both the GeForce and Quadro/Grid drivers give me the same performance. I have not upgraded to test the new ones. Try that revision and see if it helps.

I am using Xen 4.3.0 and the same setup works fine with faux-Quadro 2000, 5000 and 6000 cards. I am using XP64. This is probably the big difference between my setup and everyone else's, but I have tried it on bare metal on XP64 and it works fine there.

 

Offline gamezr2ez

  • Contributor
  • Posts: 30
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #527 on: September 08, 2013, 06:50:13 pm »
Your "monitor is told to go into sleep mode" sounds like the card is locking up. I don't know what your setup is; what info can you get from the hypervisor when that happens? When my card would lock up just like you are describing, the ID would still read 0x11BF. It wasn't until I popped a 100k resistor onto R2 that it no longer locked up. You should try it instead of dismissing it.

As for using XP64, I haven't tested this card in that environment. A previous card that I had working with that setup a few years back required the `stdvga=1` option to get rid of the CIRRUS card before it would work. I would get screwy results otherwise: it would either not work at all, or would crash when I changed resolution or launched a full-screen game. Try a Win7 VM.
 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #528 on: September 08, 2013, 07:47:39 pm »
The card doesn't crash/lock up - if I don't click the button to keep the new mode, it reverts to the previous mode after 15 seconds, at which point it works again. And it works fine on a different machine (bare metal XP64, different motherboard).

I'll put some 40K resistors on it instead of leaving them off and see if it helps - stranger things have happened, so I'm not prepared to dismiss anything at this point.
 

Offline opoeta

  • Newbie
  • Posts: 4
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #529 on: September 09, 2013, 11:51:14 am »
Small update guys.

I finally got around to playing a little more with my GTX680. Soldering 0402 components manually is an absolute bitch even with solder paste, a decent magnifying lamp, good eyes and steady hands.

Findings:

1) Removing both resistors (so all 0-4 resistor locations are empty) results in device ID of 0x11BF, i.e. Grid K2 - which is what I was aiming for anyway. From there on I can soft-mod to K5000 or GTX680MX if required (or anything else with IDs between 0x11A0 and 0x11BF).

2) In K2 mode, the card works for VGA passthrough on Xen. Sort of. Almost. It works fine at up to 1280x800. If I select any res higher than that, it fails. As far as I can tell, the monitor is told to go into sleep mode. Tested with 320.49 and 320.78 drivers. I haven't done any BIOS modding yet, but did anyone else see a similar issue? Is this something Nvidia did in recent drivers to cripple modified cards when running in a VM? I tested the K2-ified card in another bare metal machine with the same monitors, and in all cases it works fine there. But on my VM host, when passed through to a VM, it works great up to and including 1280x800, and the screen just remains blank at higher resolutions. Talk about bizarre.

This is an interesting finding - my soft-Quadrified GTS450 (Q2000), GTX470 (Q5000), and GTX480 (Q6000) cards work just fine under the exact same conditions. I wonder if this is some kind of an obscure compatibility issue between Grid and Qx000 cards in the same machine since they have different size memory apertures - something could be getting confused.

Until I can get this resolved, modifying of my GTX690 is on hold.
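As an aside, that soft-mod window is easy to sanity-check. A minimal sketch, assuming the 0x11A0-0x11BF range quoted above; the two named IDs are the ones mentioned in this thread, and anything else should be checked against a current PCI ID list:

```python
# Hedged sketch: is a target device ID reachable by soft-strapping,
# given the 0x11A0-0x11BF window described above for this card?
KNOWN_IDS = {
    0x11BA: "Quadro K5000",  # mentioned in this thread
    0x11BF: "Grid K2",       # mentioned in this thread
}

def soft_moddable(device_id, lo=0x11A0, hi=0x11BF):
    """True if the ID falls inside the soft-moddable window."""
    return lo <= device_id <= hi

for did in (0x1180, 0x11BA, 0x11BF):
    name = KNOWN_IDS.get(did, "unknown")
    print(f"0x{did:04X} ({name}): soft-moddable = {soft_moddable(did)}")
```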

Gordan, could you explain here how you transformed your GTX4xx into a Quadro? I also saw that you turned your GTX580 into a Quadro 7000; could you share the method for that too?
 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #530 on: September 10, 2013, 03:23:21 pm »
opoeta, I'm writing up the process at the moment. I need to do a bit more testing and re-testing - I modified my cards a few months ago and I need to get them out of my production machine before I can re-test them to make sure that the writeup is correct - I wouldn't want to cause any inadvertent bricking. Once I've re-tested and written it up, I'll post a link here. Unfortunately, I have to get the GTX680 working first - that can then replace one of my 4xx series cards that I can then use to re-test the procedure.

I've been meaning to do this for the past month, but something more important always comes up just when I think I have a few hours put aside for GPU hacking. Apologies for the delay. :(
 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #531 on: September 10, 2013, 07:09:56 pm »
It looks like the K6000 drivers and device ID are now available:
http://us.download.nvidia.com/XFree86/Linux-x86/319.49/README/supportedchips.html

Only a matter of time before somebody finds the correct resistors on a Titan to modify. gnif and verybigbadboy, I'm looking at you ;)
Anyone interested in doing this if there's a donation round to cover the cost of a sacrificial Titan?
 

Offline Soulnight

  • Contributor
  • Posts: 28
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #532 on: September 10, 2013, 07:19:15 pm »
Yeah... but what for, if there are no additional functionalities available? No support for Nvidia MOSAIC, for example. Or did I miss something?
 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #533 on: September 10, 2013, 08:22:52 pm »
Virtualization. It may or may not work - but it's worth a shot. For me, only the cards that are listed as MultiOS have worked (Quadro [256]000 and Grid K2), but not Tesla K10 or Quadro 7000. Other people have reported that the K5000 works for them for virtualization (I have just confirmed this myself), so there is a reasonable chance that the K6000 will work too. You know - for when half of a K2 just isn't quite enough. :)
« Last Edit: September 11, 2013, 10:06:37 pm by gordan »
 

Offline opoeta

  • Newbie
  • Posts: 4
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #534 on: September 11, 2013, 04:14:55 pm »
Waiting for the Q6000 tutorial.
 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #535 on: September 11, 2013, 08:30:54 pm »
For the record - putting the 40K resistors in positions 0 and 2 did NOT solve my problem of the card seemingly no longer being able to handle modes above 1280x800 when virtualizing. It still works absolutely fine on a different bare metal machine at all resolutions and in 3D applications. When I try to set the res to anything above 1280x800, the monitor goes to sleep as if the input signal disappears. And since I don't click the button to keep the new mode, the OS reverts to the previous resolution, and the screen output comes back. Very strange.

Edit: And it gets weirder. If I plug in my old 17" VGA monitor that can do 1280x1024, the card happily outputs 1280x1024 to that over VGA. Which makes me wonder if something bizarre is happening with the second DVI link in VM mode. I can test for that - my backup T221 is running on SL-DVI connections. Lo and behold, that comes up at 3840x2400@13Hz. So for some reason when running virtualized, my faux Grid K2 refuses to run in DL-DVI mode on both of its ports. But when running on a bare metal machine, it works fine. W-T-F. This makes me wonder whether this is a Grid K2 "feature". Has anyone got DL-DVI working in a VM with a gridified GTX680?

Edit 2: I just soft-modded the card to a K5000. No change - for some reason whenever the second DVI channel gets enabled, it all goes wrong.

Has anybody seen this issue before?
« Last Edit: September 11, 2013, 10:05:22 pm by gordan »
 

Offline gamezr2ez

  • Contributor
  • Posts: 30
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #536 on: September 12, 2013, 01:40:39 am »
The card doesn't crash/lock up - if I don't click the button to keep the new mode, it reverts back to the previos mode after 15 seconds, at which point it works again. And it works fine on a different machine (bare metal XP64, different motherboard).

Then the issue is most likely with XP64, Xen and gpu passthrough. XP as a whole was never meant for virtualization and support for it is just hacked together. Try Windows 7 instead of the operating system that is older than Xen itself.
 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #537 on: September 12, 2013, 08:58:22 am »
I'll try Windows 7 for the sake of completeness, but unfortunately, using it is not an option for me, due to a complete lack of desktop spanning functionality (T221s show up as 2 or 4 discrete monitors). In XP this works fine, in Vista and later the functionality has been removed. Windows 8 allegedly adds it back, but suffering Metro on top of spending more on two Windows 8 licences than I spent on my GTX680 is something I am not prepared to do.

Another thing I might try is XP64 on bare metal on my VM machine - just to eliminate the possibility of some utterly bizarre motherboard-influenced issue.

My current workaround is to swap my primary and secondary T221s around, so my gaming VM is running 2xSL-DVI. That works around the DL-DVI not working, albeit by limiting the refresh to 25Hz (33Hz with a custom mode).

For my other VM connected to a standard 30" monitor, I might just have to jump ship back to ATI. :(

One of these days a monitor manufacturer will make something that actually beats a T221 on pixel count (>= 3840x2400, i.e. more than the current 4K screens) in a comparable size (<= 24"). But for now, the 12-year-old technology is still unbeaten.
 

Offline mrkrad

  • Contributor
  • Posts: 37
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #538 on: September 12, 2013, 02:33:11 pm »
I've got a dual DL-DVI Quadro 6000 BIOS ;)

It seems the original discussion on strapping the 4xx series to Quadro is all based on the post that shows how to strap bits 0,1,2,3,4, but not on actually flashing the Quadro BIOS.

Anyone actually use this stuff with ESXi 5.5? Supposedly they support Intel/AMD GPUs in the new version.

It seems like the problems are:

SR-IOV support (some have it?) VT-d
FLR (Function Level Reset) - without it your VMs will crash the card upon reset (happens a lot in Windows Vista/7/8)

Is VGX just SR-IOV/MR-IOV with FLR?

API intercept seems cool, but to actually pull off FLR + SR-IOV on Nvidia would be the real trick!

It seems the Xen guys are light years ahead of everyone else as far as getting things to work - and AMD GPUs seem to have far superior support.

But if ESXi 5.5 can handle Intel HD and AMD GPUs, perhaps we need to just take a look at the new version?
 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #539 on: September 12, 2013, 03:45:51 pm »
NONE of the modifications are based on flashing a Quadro BIOS onto the card. I did limited testing on this (Q2000 BIOS on a GTS450), and all that achieves is lowering the clock speeds on the GTS450 down to what they are on the Q2000. There is still some stuff in my TODO queue to investigate, though, such as ECC support.

To do it properly, you should really change at least a few more things in addition to the straps, e.g. the record in the BIOS containing the PCI ID (at around offset 0x18E on 4xx cards), as well as the board/card identification strings.
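For illustration only, here is a hedged sketch of that kind of BIOS edit: patching a little-endian 16-bit device ID at a given offset and fixing up the image checksum. The 0x18E offset above is card-specific, and I'm assuming the standard PCI option-ROM rule that all bytes of the image sum to zero mod 256, with the checksum byte placed at the end of the image - verify both against your actual dump before flashing anything.

```python
def patch_device_id(rom: bytes, offset: int, new_id: int) -> bytearray:
    """Patch a little-endian 16-bit PCI device ID at `offset`, then fix the
    checksum byte (assumed here to be the LAST byte of the image) so that
    all bytes of the image sum to 0 mod 256 - the standard option-ROM rule."""
    rom = bytearray(rom)
    rom[offset:offset + 2] = new_id.to_bytes(2, "little")
    rom[-1] = 0                      # clear the old checksum
    rom[-1] = (-sum(rom)) % 256      # make the total sum 0 (mod 256)
    return rom

# Toy demonstration on a fake 16-byte image - NOT a real VBIOS:
fake = bytes(16)
patched = patch_device_id(fake, 4, 0x11BF)
assert sum(patched) % 256 == 0
```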

FLR is nowhere near as essential as people indicate. If you pass through the whole device (i.e. VGA + HDMI audio) it works fine, and with a Quadro (real or a modified GeForce) rebooting VMs works just fine, provided your motherboard BIOS and PCIe bridges aren't buggy (some careful research is needed there if you're looking to buy a suitable motherboard; otherwise you will spend weeks troubleshooting and writing hypervisor patches, and you're pretty much screwed if you use a closed-source hypervisor).

For the record - I have Quadro 2000, 5000, and 6000 cards (a real and a faux 2000, and modified-GeForce 5000 and 6000), and rebooting VMs works fine (most of the time - I have problems due to various hardware/firmware bugs that plague the EVGA SR-2). It is ATI cards that suffer from the rebooting crashes and performance degradation after reboots. No GPUs available today have FLR, including real Quadros or real FirePros. It is not needed if the BIOS and drivers are doing their job.

VGX, and the new Xen project that implements something similar, exposes a guest VM driver API that offloads GPU tasks onto a real GPU shared between multiple VMs. Nothing to do with FLR whatsoever. It is essentially a virtualized GPU driver API designed to let you share GPU processing between multiple guests.

FWIW, I have had much better luck with Quadrified Nvidia cards for Xen virtualization than with ATI cards. ATI cards have far too many limitations (only a single DL-DVI port from the HD5xxx series onward), don't work properly with multi-monitor spanning in my experience (at least not on IBM T221s), and suffer from the VM reboot bugs in the BIOS and drivers (e.g. if the VM doesn't crash on a reboot, the performance is degraded). Then again, I seem to have just hit an XP Quadro driver bug that breaks DL-DVI in a VM (but not on bare metal). So neither is perfect, but Nvidia just seems to suck a lot less than ATI (even though I may have to resort to using an ATI card for one of my VMs if I can't work around the DL-DVI problem on the GTX680-based K5000/K2).

On a separate note, device reset can actually be implemented in multiple ways, not only via FLR. For example, Xen also supports a method using PCIe power management: putting the card into the powered-down state and bringing it back results in the device being reset (if it implements the PCIe power-saving functionality properly).
 

Offline mrkrad

  • Contributor
  • Posts: 37
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #540 on: September 12, 2013, 09:24:35 pm »
http://gfxspeak.com/wp-content/uploads/2013/09/VGX-GPU-virtualization-Nvidia.png

better example: http://on-demand.gputechconf.com/gtc/2013/presentations/S3501-NVIDIA-GRID-Virtualization.pdf

more:
http://www.nvidia.com/object/cloud-gaming-gpu-boards.html



I thought what they are saying here is not API intercept?

API intercept doesn't give you cuda,opencl,directx right?

I'm still trying to figure out how they can manage fair share loading of a video card since a vm could potentially tear ass on a video card with true hardware virtualization.

« Last Edit: September 12, 2013, 09:33:55 pm by mrkrad »
 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #541 on: September 12, 2013, 10:54:43 pm »
I was under the impression that the VMware way involves passing a GPU to a VM. Grid K2 is a GTX690, i.e. two GTX680s. So you can "share" a card between two VMs, but in reality you are not sharing GPUs between VMs. Grid K1 is the same sort of thing, only it comes with 4 lower-spec GPUs (for passing to up to 4 VMs) rather than two high-spec ones. In other words, you cannot split a Grid K2 more than 2 ways, and you cannot split a Grid K1 more than 4 ways.

The new Xen way is quite radically different (and far more ambitious), from what I gather, but I haven't really looked into it in great depth since it was only made public yesterday.

My original plan was to mod both halves of a GTX690 into Grid K2 as per the hard-mod in this thread (you only have to hard-mod the first byte's resistor; the second byte you can soft-mod), and pass one to each of my VMs (for me and my wife). Unfortunately, the DL-DVI issue with the GTX680 has put that plan on hold - I don't want to waste my time modifying a GTX690 if it's going to prove equally unusable (I need at least one working DL-DVI for my wife's 2560x1600 monitor). My plan B is to split the 690 into a GPU for the host and a GPU for my VM (I can live with a T221 running off 2xSL-DVI at 3840x2400@32Hz for gaming), and get something like an ATI 7970 for her VM (most, but not all, of the problems I've had with ATI cards are T221-related, and even though they do suffer issues with VM reboots, we hardly ever need to reboot our VMs - XP64 is extremely stable).
« Last Edit: September 12, 2013, 10:57:12 pm by gordan »
 

Offline mrkrad

  • Contributor
  • Posts: 37
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #542 on: September 13, 2013, 12:12:50 am »
Right now you have API intercept - and coming soon will be virtualization.

Do you remember how the video card virtualization worked? The laptop was on Intel (or a virtual GPU), then it would switch to AMD/Nvidia but output video from the Intel HD port! It is all based around SLI, and I'm guessing if you really think about it you could SLI a software GPU and then add hardware assist (or a real GPU, DVGA) or API intercept (sVGA), but you are still rocking the software driver (think of that as the route of the video stream).

So the software Nvidia driver drops to 1% of the load, with 99% going to the hardware GPU (if available) - this way you can vMotion to another server without a graphics card and still continue gaming (just 30fps down to 1fps! lol).

Right now everyone does this method. Then Hyper-V/Xen use hardware transcoding to offload (like Intel's Quick Sync), but it happens at the same time. I think this is Kepler technology.

The last step, which I have not seen in production anywhere, would be true SR-IOV + FLR + QoS to separate the tasks (if you think API intercept to a consumer video card is safe - not! It's easy to glitch and crash, or to access system RAM/other VMs' VRAM).

That's why I wish for true SR-IOV/FLR or MR-IOV (bladesystem) - security. Stability as well, since if you crash a virtual function of the GPU you don't take out all the other VMs' video functions or crash the host.

Pretty much if my VMware host crashes or loses power, it is completely rebuilt. Period. Full reinstall and format of all components. I seriously doubt that VMware ESXi using X mosaic and API intercept is going to provide stability. Otherwise what is the point of using Intel VT technology? We could do API intercept (binary translation) from the first days of virtualization, or "double-DOS" lol. Definitely not fast nor secure.


 

Offline mrkrad

  • Contributor
  • Posts: 37
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #543 on: September 13, 2013, 12:29:54 am »
Well, there is one thing we haven't explored: Tesla cards can send to a Mellanox ConnectX-3 in 40GbE or 56Gb FDR IB mode directly, so the traffic does not touch the CPU (GPUDirect). It had many flaws though, since mapping memory directly under virtualization is difficult (bypassing the CPU's virtualization breaks memory isolation!).

GPUDirect could then output the video stream without compression to 10GbE (40GbE/56Gb IB) and you don't have all that lag. Given that you can get a 24-port 10GbE switch for $1200-1500 now and a cheap dual-port 10GbE NIC for $75, to me this would be the best way! I don't care about "WAN/PCoIP".

I am tempted to see if you could bridge a virtual USB GPU over 10GbE - that would be awesome! I think everyone is so focused on limited-bandwidth WAN and gigabit that they ignore the most direct path, which is to use a ton of raw bandwidth and run the GPU remotely with a PCIe tunnel effect. It has to be possible!
 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #544 on: September 13, 2013, 12:34:03 am »
I can easily believe that you can do this over InfiniBand. InfiniBand works using DMA, and in fact does RDMA. So it is ideally suited for doing precisely what you are describing, in the sense that you can map the GPU BARs via InfiniBand and use remote GPU number-crunching as if the GPU were local. And if you are already running InfiniBand for this, why bother with Ethernet at all? If you can live with the 15-meter cable length limit, it's the way forward. And it's dirt cheap compared to 10Gb Ethernet (not to mention 2-4x faster).
 

Offline baconsteak

  • Contributor
  • Posts: 12
  • Country: au
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #545 on: September 13, 2013, 02:24:35 am »
Has anyone ever heard of this working on a G92 9800GT? I want to use OpenGL flip 3D.

I have tried changing the straps in the BIOS to 061A with nvflash --straps 0x7FFE23C3 0x1000A804 0x7FFEFFFF 0x00010000
It's recognised as an FX3700 now and the drivers install, but the 3D options aren't there, so I don't think it's working properly.

Is this because the hard straps are different? How can I check the hard straps? Nvflash only shows the soft straps. Or is it something unrelated to the straps?
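If I've understood the --straps semantics correctly (each pair being an AND mask and an OR mask applied to a 32-bit strap word - that's my reading of how people use it in this thread, not official documentation), you can preview the resulting values like this:

```python
def apply_strap(current: int, and_mask: int, or_mask: int) -> int:
    """New strap word = (current & AND) | OR - my reading of how the
    nvflash --straps mask pairs combine, not an official definition."""
    return (current & and_mask) | or_mask

# The mask pairs from the command above, applied to an all-ones word
# just to see which bits end up forced:
w0 = apply_strap(0xFFFFFFFF, 0x7FFE23C3, 0x1000A804)
w1 = apply_strap(0xFFFFFFFF, 0x7FFEFFFF, 0x00010000)
print(f"strap0 -> 0x{w0:08X}")
print(f"strap1 -> 0x{w1:08X}")
```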
 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #546 on: September 13, 2013, 09:24:44 am »
Are you sure this works on a genuine FX3700? I have a laptop with a genuine FX3700M in it, but I don't recall seeing any extra options in the settings compared to a GeForce 260M it replaced. If you tell me what exact options you expect to see, I can look for them and report back.
 

Offline mrkrad

  • Contributor
  • Posts: 37
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #547 on: September 13, 2013, 01:17:54 pm »
Anyone care to make a javascript calculator for straps?

Perhaps you can upload the file, it will check the values, give you options (only valid ones) and then tell you which nvflash --straps to use.

What's going to happen is endian typos killing cards, but if you can make a webpage to help folks, it would reduce the number of failures.
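Until someone builds that page, a rough command-line stand-in is easy. A sketch in Python rather than JavaScript, and the checks here are just my guesses at what would catch the common typos (a malformed hex value, or an accidentally byte-swapped one):

```python
def check_strap_args(args):
    """Parse four hex strap arguments (AND0 OR0 AND1 OR1). Each must be a
    valid 32-bit value; the byte-swapped form is returned alongside so an
    endianness mix-up stands out when compared against a BIOS dump."""
    if len(args) != 4:
        raise ValueError("expected 4 values: AND0 OR0 AND1 OR1")
    out = []
    for a in args:
        v = int(a, 16)
        if not 0 <= v <= 0xFFFFFFFF:
            raise ValueError(f"{a} is not a 32-bit value")
        swapped = int.from_bytes(v.to_bytes(4, "big"), "little")
        out.append((v, swapped))
    return out

for v, s in check_strap_args(["0x7FFE23C3", "0x1000A804",
                              "0x7FFEFFFF", "0x00010000"]):
    print(f"value 0x{v:08X}  byte-swapped 0x{s:08X}")
```

Printing the byte-swapped form next to each value lets you eyeball it against your dump before committing anything with nvflash.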

 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #548 on: September 13, 2013, 07:01:47 pm »
Not an unreasonable idea. Pity the NiBiTor authors disappeared (or were disappeared, depending on how conspiracy-theoretic you want to be) - shame it wasn't an open-source project, or it could easily have been extended to cover more recent developments.

Having said that - there is an argument that anyone attempting this really should know what they are doing and know their way around a soldering iron, a calculator and an editor. Darwinian selection is a good thing.
 

Offline mrkrad

  • Contributor
  • Posts: 37
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #549 on: September 13, 2013, 07:58:26 pm »
Do you know what's up with those Mac folks selling "Quadro" fakes on eBay? It seems they turn anything GeForce into a Quadro somehow?

I wonder if they are doing some other trick to fool the server? Almost every card they are selling is identified as a Quadro 4000.

Maybe they know how to reload the VBIOS from RAM or something?

I'm not trying to get into the business at all - if I wanted to make money I'd just hack Intel NICs for Mac (Thunderbolt case, or Mac Pro), since you could buy a $99 Intel 10GbE NIC and modify the SmallTree/ATTO drivers to accept the standard Intel NIC. That would make one rich, since the Mac people love paying $999 for a $99 pre-tested NIC lolololol.
----------

Does anyone have TRUE VGX working at all? Hardware, not API intercept? I would be glad to donate to the cause for one video card with a decent amount of RAM that would actually do Grid K1 duties.

But it would have to be SR-IOV+FLRESET and work with XEN.

I suspect if someone can do true VGX, I would buy 2 or 4! Seriously. I can't afford a Grid K1 - they cost $1400, and a K2 costs $1800!! Way too rich for my blood.

 

