My problem is that I have a Gigabyte GTX 680 4GB, which is not laid out the same as the EVGA card, and I can't find the corresponding resistors on the board...
Could someone help please?
It should not make that much difference. What is the brand and model # of your card?
Gigabyte GTX 680 4G
I couldn't find the Y1 component on the front of the board, but I did find it on the back.
-e, --ecc-config= Toggle ECC support: 0/DISABLED, 1/ENABLED
-p, --reset-ecc-errors= Reset ECC error counts: 0/VOLATILE, 1/AGGREGATE
-c, --compute-mode= Set MODE for compute applications:
0/DEFAULT, 1/EXCLUSIVE_THREAD,
2/PROHIBITED, 3/EXCLUSIVE_PROCESS
-dm, --driver-model= Enable or disable TCC mode: 0/WDDM, 1/TCC
-fdm, --force-driver-model= Enable or disable TCC mode: 0/WDDM, 1/TCC
Ignores the error that display is connected.
--gom= Set GPU Operation Mode:
0/ALL_ON, 1/COMPUTE, 2/LOW_DP
-ac, --application-clocks= Specifies <memory,graphics> clocks as a
pair (e.g. 2000,800) that defines GPU's
speed in MHz while running applications on a GPU.
-rac, --reset-application-clocks
Resets the application clocks to the default value.
-pl, --power-limit= Specifies maximum power management limit in watts.
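For concreteness, here is a minimal sketch of how the setter switches above might be combined on one GPU. It is a dry run: the wrapper prints each nvidia-smi command instead of executing it, since most of these require a supported Tesla/Quadro-class card and root; the GPU index and values are placeholders.

```shell
#!/bin/sh
# Dry-run sketch: print the nvidia-smi calls that would apply the settings
# above to GPU index 0. Replace "echo" with nothing (or "sudo") to execute.
GPU=0
run() { echo nvidia-smi -i "$GPU" "$@"; }

run -e 1          # enable ECC (needs ECC-capable memory; takes effect on reboot)
run -c 3          # compute mode 3/EXCLUSIVE_PROCESS
run -ac 2000,800  # application clocks: memory,graphics pair in MHz
run -pl 250       # power management limit in watts
```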
Hi all,
I decided to have a go at finding the straps for GPU 1 on my card, with both success and failure as the result. I was able to locate them and modify the GTX 690 into a dual-core Quadro K5000, but I made the stupid mistake of running it without a heatsink on the bridge chip between the two GPUs while testing. The chip quickly died from overheating when I got excited and let Linux boot into the graphical environment. There goes my $1000 video card for the greater good, so donations are now more important than ever to replace it.
I am now running on a semi-faulty GT 220 (random lockups) and an AMD Radeon X300 to keep my triple-head setup working, but as you can imagine this is a very buggy configuration.
Thank you very much for your work, Gnif. I am working on a desktop with a GTX 690, using Blender for architectural rendering. After finding these posts I advised him, and he purchased a Zotac GTX 680, which I was able (thank God) to mod into a Quadro K5000. I'm testing it now and will take the GTX 690 for modding too. Will I need an extra heatsink to prevent what happened to your card? In your opinion, would it be better for my use to mod it into a dual Quadro K5000 or into a K10? Thanks very much in advance, Gnif.
Also, I believe the SOIC that sits near the straps is the EEPROM.
Might I also ask if anyone knows what size these resistors are: 0603 or 0402?
Cheers.
I remember people used to do this so they could run 3D animation and video editing packages that wouldn't run on the desktop cards... I'm surprised they are skimping on the Linux drivers, though. Kind of sad really, as they're all I would recommend for Linux systems, since the ATI drivers have been an absolute hellhole for the past 10 years. What can you do, I suppose...
One area where AMD/ATI shines is virtualization. I can pass a 7970 through to a Xen guest with relative ease and get native performance within the VM, useful for gaming no doubt. I believe AMD even worked to help build the code that Xen uses for the gpu passthrough.
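For what it's worth, the guest side of that setup mostly comes down to assigning the PCI device in the domain config. A minimal sketch of a Xen HVM guest config follows; the VM name, memory size, and PCI address here are hypothetical placeholders, and the card must first be hidden from dom0 (e.g. bound to pciback).

```
# Hypothetical Xen guest config fragment for VGA passthrough
name     = "gaming-vm"      # placeholder VM name
builder  = "hvm"
memory   = 4096
vcpus    = 4
# BDF of the graphics card as reported by lspci (placeholder address)
pci      = [ '01:00.0' ]
```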
In any case, I do not recommend Nvidia on Linux anymore. I did buy a GTX 680 just to do this, though... mmmm, FLReset.
Regarding Xen virtualization, I haven't tried Nvidia yet (my Quadro 2000 for testing is in the post), but I sincerely hope the experience is less appalling than with ATI. Granted, ATI cards almost work, whereas desktop Nvidia cards don't work at all with VGA passthrough without a whole raft of extra Xen patches, but the experience is poor at best. All in all, good enough for a demo, but absolutely not good enough for anything meaningful.
My plan is to test whether using a Quadro 2000 (whose drivers officially support VGA passthrough) makes for a workable experience before I spend 4x as much on a GTX 680 to modify into a Quadro K5000 or a GRID K2. Ideally I'd like to get a Titan and see what happens if I mod its device ID to read as a K5000, but as far as I can tell nobody has ever reported trying it, and I'd hate to end up with a Titan that I cannot use for its intended purpose.
The patches are only 5 files, about 100 lines of code in total. They are just to read the bios from an extracted bios rather than from the card at runtime as well as a few other things that Xen can't pull dynamically, unlike AMDs. It is fairly basic code, nothing fancy.
As far as "good enough for a demo" goes, I will have to disagree. You may simply have had a poor experience, and been unfortunate enough to have an uncooperative motherboard and graphics card. I can attest to the fact that the passthrough is fairly stable once it is set up properly (that's the hard part). I had it running for two weeks as a gaming VM with an older 5670 of mine, and it never had a hiccup. It was impressive! That said, I wouldn't put this into a production environment without a lot more testing.
Keep in mind that the Quadro series does not support FLReset, so it is probably not a good idea to use one for passthrough if you plan to start and stop the VM. It will work just fine, but Xen/Linux won't be able to reset the card on VM reboot; if you have to reboot the VM, you'll still need to reboot the entire machine, and you may see crashes or performance degradation otherwise.
The above also applies to the K5000 if you plan to modify a Titan, there will be no FLReset.
I know that they, too, lack FLReset, but I am not convinced that FLReset is all that necessary. Sure, it makes it a little easier for the driver to do its job, but think about this at a low level, like an embedded engineer, for a moment. At the lowest level it comes down to setting registers on the device. Unless the card is poorly engineered and buggy (e.g. it drops off the bus in a questionable, un-re-attachable and uncontactable state), the driver should always be able to set the registers to whatever they need to be to bring the card to a known, initialized state, without any help from the card's BIOS. FLReset is a nicety that means your driver doesn't have to handle hardware initialization itself, but it doesn't strike me as a necessity for getting something like this working properly.
I'll know one way or the other soon enough.
Edit: Having tried a Quadro 2000, my experience so far is that it is even more unstable than an ATI card. Most disappointing. I guess I won't be wasting my time modifying a GTX into a Quadro.
I posted this a while back but no one addressed it:
https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/msg223546/#msg223546
The TL;DR version is: does modifying any particular card into a Tesla K10, Quadro 4000 or Quadro 5000, or for that matter either GRID K1 or K2 variant, enable nvidia-smi support for changing the settings listed in the link above, e.g. ECC/TCC support, application clocks, power limit?
Note that not all of the settings may work with a particular (transformed) card. If anyone could try modifying each of the settings on their modified card, I would very much appreciate it!
After reading this entire thread, can I conclude the following?
A GTX 680 can be fairly easily modded into a K5000 / K10 / GRID K2 by changing some ID resistors;
this results in additional features (like GPU passthrough for VMs and Mosaic support),
but no performance gain for pro apps (SPECviewperf 11).
Or has anyone (Gnif, VeryBigBadBoy, ReefJunkie, etc.) discovered
how to actually boost the OpenGL performance of a GTX 680?
For many self-employed pro users like me, that would be truly awesome!
I have a GTX 680 modified into a GRID K2, passed through to a Windows 7 x64 Xen VM. I am running the Nvidia Quadro/Tesla/GRID drivers, version 320.00.
Here is a pastebin of my nvidia-smi output:
ECC would have to be supported by the RAM, which they wouldn't install on a consumer-grade card. The power features and other things, I would _assume_, are also hardware bits that physically don't exist on the card.
I am no expert on this subject or with nvidia-smi, though. If you would like me to try something else, I will. I may have just not used the correct commands.
nvidia-smi -e 1
nvidia-smi -dm 1
nvidia-smi -fdm 1
nvidia-smi --gom=0
nvidia-smi -ac 2000,800
nvidia-smi -pl 250
Also, if anyone else can try these with a converted Quadro or Tesla to confirm those cards behave the same way, that'd be awesome too. (nvidia-smi doesn't explicitly state full support for GRID cards, just Tesla/Quadro.)
Supported products:
- Full Support
- NVIDIA Tesla Line:
S2050, C2050, C2070, C2075,
M2050, M2070, M2075, M2090,
X2070, X2090,
K10, K20, K20X, K20Xm, K20c, K20m, K20s
- NVIDIA Quadro Line:
410, 600, 2000, 4000, 5000, 6000, 7000, M2070-Q
K2000, K2000D, K4000, K5000, K6000
- NVIDIA GRID Line:
K1, K2, K340, K520
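Rather than trying each setter blind, one way to see what a converted card actually exposes is to query everything and count the fields the driver reports as N/A. A small sketch follows; the sample text is hypothetical but shaped like `nvidia-smi -q` output, and on a real card you would replace the sample with the live query output.

```shell
#!/bin/sh
# Count fields a (hypothetical) card reports as unsupported, using text
# shaped like `nvidia-smi -q` output. On real hardware, the equivalent is:
#   nvidia-smi -q | grep -c ': N/A'
sample='    Ecc Mode
        Current                 : N/A
        Pending                 : N/A
    Power Limit                 : 225.00 W
    Applications Clocks
        Graphics                : 800 MHz
        Memory                  : 2000 MHz'

unsupported=$(printf '%s\n' "$sample" | grep -c ': N/A')
echo "fields reported N/A: $unsupported"
```

On this made-up sample the ECC fields come back N/A while the power limit and application clocks look populated, which is roughly the pattern the posts above describe for converted consumer cards.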
It does say it fully supports the GRID K2. I can mod the card over to a Tesla and check the difference (if any). I am not at home at the moment, but I will run your other commands to test it out.
The extent of my use beyond gaming is some oclHashcat and some experimental x264 GPU work, plus very occasional 3D modeling, not enough to care about performance. But if I am reading your post correctly, this could give added performance in those areas, yes?
Supported products:
- Full Support
- NVIDIA Tesla Line:
S2050, C2050, C2070, C2075,
M2050, M2070, M2075, M2090,
X2070, X2090,
K10, K20, K20X
- NVIDIA Quadro Line:
4000, 5000, 6000, 7000, M2070-Q, 600, 2000, 3000M and 410
- NVIDIA GeForce Line: None