Great work there gnif. Have they sorted out the "Your video card has stopped responding." |O errors? I thought for ages it was my system, until 6 months ago when I started to work for a new company who have all Lenovo computers. These machines too gave this error message. This was all under Windows 7. Sorry, I digress; great work there nonetheless. :-+
Do you mean the "front" one, are the strapping resistors really closer to the other GPU, or is your computer back to front (http://www.electricstuff.co.uk/backwardspc.html)?
Carefully comparing the PCBs (or suitable images thereof) of the 690 with the K5000 might yield useful results.
gnif,
I saw your post on the CUDA developer forums, which got deleted, then I tracked your username to overclock.net and finally made my way here. I am interested in exploring this with some of the single-GPU cards. I am curious as to how you found this mod; were you able to get the schematics for the NVIDIA reference design?
Ole Linus has had a bit to say in regards to Nvidia. And he's dead right. A while back Nvidia was the shit for Linux with its tolerable binary blob drivers. But once they got a rep for being Linux friendly, they seemed to do everything possible to back away from that position. Maybe M$ scared them?
You've got balls for taking a soldering iron to a GTX-690!
I didn't even realise nVidia was doing this. Years ago it used to just be broken parts that were disabled, and now they're just purposely disabling parts. For shame.
Hum.. what happens if I get the device id wrong? If the driver doesn't work with the chip that's on the card? Does it simply fall back to "software rendering" or can something worse happen?
I have a 9600GT I could mess around with, however the chip is G94b. It looks like the counterpart is Quadro FX 1800 which has a G94. The missing 'b' bothers me.
hmm very interesting...
I wonder if a similar approach can enable disabled cores on x70 cards to make them x80 (660/670 >> 680 or 560/570 >> 580).
Two Hack-A-Days from GNIF in almost as many days! :-+
What will he hack next? :-//
Dave.
0.0
I think you've just made the server impervious to the notorious internet_kiss_of_death !!
:-+
Just look at the numbers for "Most online today" and "Most online ever" at the bottom of this forum front page, they are equal. ;)
gnif: congrats on modifying the card, but do you think you can make both GPU cores K5000 too?
I am interested in this on Windows; it would be great if you or somebody else could successfully modify the GTX 680 card :)
In the past I was always changing GeForce to Quadro - up until the last generation it was possible with the "softquadro" technique using RivaTuner (forum.guru3d.com) - because of the performance in AutoCAD/3DS Max. Nowadays the performance in these applications is almost equal for both the GeForce and Quadro models, but the additional features available in the driver for Quadro models are always very interesting...
Looking forward to seeing some development in this topic
Cheers!
Unfortunately at the moment I am not able to donate the card :)
But I can donate up to $50, maybe you could collect the money; we just need another 19 people willing to donate $50...
Or for starters - a GTX 680 for about half the price - only 9 people... :)
OK, let me know :) Paypal is no problem for me...
Neo_Moucha: Donate: done! ;)
This promises to be very interesting. I'm gonna get me some popcorn.
Gnif, anything you do will help move us toward better open source Linux drivers and farther away from the closed source binary blobs.
I think what you discovered is the exact reason why those bastards won't release anything in the open.
Possibly. I have been thinking on how they could prevent this in the future and there are some methods they could use, but I will not mention them here simply because I do not want to feed them information :). Needless to say though, if they use them, then the next generation of cards will be impossible to mod without resorting to hacking the binary drivers, which without a doubt encroaches on legality issues.
Same as jailbreaking a phone, I guess...
Anyway, they could have played nicely with Linux from the beginning, but no....
Have you looked at the front of the PCB? http://i.imgur.com/WJZqGyl.jpg (http://i.imgur.com/WJZqGyl.jpg)
The same 8-pin SOIC that is under the resistors you changed is on the lower-right of the other GPU, surrounded by a few resistors and unpopulated spots. It looks pretty promising to me.
Do you see the straps here, anywhere?
http://www.ixbt.com/video3/images/gf110/gtx580-scan-back.jpg (http://www.ixbt.com/video3/images/gf110/gtx580-scan-back.jpg)
I wonder what a GTX580 can become :) (if anything useful)
Nothing stands out, they may be on the front of the card.
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/images/front_full.jpg (http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/images/front_full.jpg)
Oh, by the way, hats off for your findings so far!
Yeah, I hate big companies that think all Linux users are no-money burnouts that like to hack things. One day they are going to realise that Linux is attracting all the professional programmers and engineers, and is very quickly pulling general users away from Windows and Mac, especially with the recent move by Steam to release for Linux due to the crappy APIs in Windows 8.
Is it possible to do this with a kernel hack? By just changing the ID, so the system and also NVIDIA's driver see the Quadro ID and behave that way.
This was possible in the past using "softquadro" with the RivaTuner software, see forum.guru3d.com,
but NVIDIA blocked it in newer drivers...
Agreed, but as stated, this could be completely unrelated to it; it could be controlling voltage, and modding it could potentially kill the GPU.
Gnif, would it be possible to follow the tracks from the second processor's pins, the same way they are routed to the resistors, to find them anyway?
I have a GTS 450 too and after searching the web a lot, I couldn't find any hardmod information, I think for obvious reasons. Do you have any information on how I could hack my 2GB GTS 450 into a Quadro one? I use Linux and need to hardmod instead of softmod, for which there is abundant Windows information. Thank you in advance.
Does this apply to lesser cards such as the 670?
Superb work.
This should be theoretically possible - change it into a Quadro 2000, but only for the older models from before 2011 (GF106 based)?
I have one real Quadro 2000 in my possession, so I can scan it and we can look for differences...
I'd agree that the placement of the links is approximately correct. Given the coding (resistor values) this would be another hint that you've found the right ones.
Given that you've confirmed the resistors which control GPU1, you could X-ray the board to find out the ball that these go into and then trace from that ball on the other GPU - just looking at surface layers might give you enough.
If anyone has a dead card you could even 'heat gun' the GPUs off to buzz out tracking.
Anyway, congratulations on your find. Look forward to using the 670 as a Quadro for some future 3D work under Linux ;-).
Simon
Read back through the thread to find the answer.
Are you interested in (borrowing) a 670 to test the suspicions you mentioned? I am local enough (WA though), and my 670 struggles to run my screens because they're incapable of NVIDIA Mosaic, so it won't be missed too dearly if it goes wrong.
I would be willing to have a go at the card, but you must understand there is a risk of damaging the card in order to find the correct straps. And if it is possible, what would you want it to become? A 680, Quadro K5000 or Tesla K10?
I don't have a preference between Quadro and Tesla, because I assume both can perform Mosaic. Essentially the problem is that I use T221s, which always identify as two screens, meaning that true multi-monitor support is impossible with NVIDIA consumer cards.
I'll send a PM.
I was thinking of this too. Someone here has X-rayed some boards (https://www.eevblog.com/forum/chat/apollo-saturn-v-computer-logic-reverse-engineered-with-working-model/), maybe you could ask her for advice?
Hi there, thanks for this :)
I just wrote up a pretty detailed post about how this thread made me look at my 3GB GTX 660 Ti's BIOS and find it was similar to a GTX 670's (and different to the 2GB GTX 660 Ti's), but because I couldn't read the CAPTCHA I pressed "request another", which refreshed the page and lost the post >:(
Basically my 660 Ti's BIOS is almost the same as a 670's: it uses the same board and SKU numbers (20040005) but has extra code inserted which uses the normal 660 Ti board number (20040001, the 670 has similar code at a different address, but uses the 20040005 board number instead), maybe to emulate/downgrade it to a 660?
After seeing this I pressed further and compared the 670 2GB vs the 670 4GB, and then mapped the values that were different onto my 660 Ti's BIOS (the addresses were a bit different but it wasn't hard to locate them); they matched the 670 4GB :o
This started to make me think they might have just crippled 4GB 670s into 3GB 660 Ti's, until I opened up my card and found 6x2Gb chips (H5GQ2H24AFR, which is 1.5GB? Maybe I read the datasheet wrong, or the rest were on the back...) There were 2 unfilled spaces though, so I'm guessing it hasn't got the full 4GB :(
The datasheet of those chips mentions that they're 256-bit, but the 660 Ti is reported as only being 192-bit... Maybe flashing the 670 BIOS over would enable the full bandwidth? I'd be willing to try it but I'm worried that the BIOS might do a check against the hardware device ID... I'd guess not since you can change the HW and it still works, but that could be down to a combined Quadro/Tesla/Geforce BIOS... Any info about this would be great!
Also any info about bad flashes would be great too, the only things I can find about them are from BIOS modders, not crossflashing :-\ I'm scared that the wrong RAM config/HW device ID/other stupid check might throw off the card from even being detected in nvflash...
If anyone wants to look further:
660 Ti 3GB BIOS: http://www.techpowerup.com/vgabios/127140/EVGA.GTX660Ti.3072.120806.html (http://www.techpowerup.com/vgabios/127140/EVGA.GTX660Ti.3072.120806.html)
660 Ti 2GB BIOS: http://www.techpowerup.com/vgabios/127242/EVGA.GTX660Ti.2048.120910.html (http://www.techpowerup.com/vgabios/127242/EVGA.GTX660Ti.2048.120910.html)
670 2GB BIOS: http://www.techpowerup.com/vgabios/125688/EVGA.GTX670.2048.120807.html (http://www.techpowerup.com/vgabios/125688/EVGA.GTX670.2048.120807.html)
670 4GB BIOS: http://www.techpowerup.com/vgabios/126722/EVGA.GTX670.4096.120712.html (http://www.techpowerup.com/vgabios/126722/EVGA.GTX670.4096.120712.html)
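If anyone wants to repeat that comparison without eyeballing a hex editor, here's a minimal Python sketch of the byte-level diff described above; the file names are just placeholders for whichever two dumps you grab from the links:

```python
# Minimal sketch: print every offset where two VBIOS dumps differ, roughly
# the manual comparison described above. File names are placeholders.
with open("gtx660ti_3gb.rom", "rb") as f1, open("gtx670_2gb.rom", "rb") as f2:
    a, b = f1.read(), f2.read()

if len(a) != len(b):
    print(f"size differs: {len(a)} vs {len(b)} bytes")

for off, (x, y) in enumerate(zip(a, b)):
    if x != y:
        print(f"0x{off:05X}: {x:02X} -> {y:02X}")
```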
No worries :). I do not think that the RAM configuration is stored in the BIOS at all, as it needs to be known before the GPU even reads the BIOS from the EEPROM. In earlier versions it was based on the hard straps; I do not see any reason why they would have changed this.
As for the RAM size, from what I can see that module is 2Gb, which is 0.25GB per chip * 6 = 1.5GB total. This is very odd, unless I am also reading it wrong, if you say the card should have 3GB of RAM. Can you have a really close look at the card to be doubly sure that the part number you provided is correct? Also, did you count the chips on both sides of the PCB?
As for seeing cards with less RAM accessible than is physically installed, I highly doubt this would ever occur; the cost saving to the mfg is too large to just disable/hide/waste the additional RAM.
Ah yeah that's true, probably should have thought of that before I took it apart ;D
Hi there,
Nice job gnif 8)
Could you show me your modified card's benchmark results with 3DS Max, 3DMark... or some game benchmarks?! :-DMM I really want to see how it performs.
Thanks a lot and good luck in your further tweaking! :-+
Edit4: Remembered you saying that when it's pulled in a different direction it's a different set of values...
Either by coincidence, or just reuse/modification of an existing PCB, the ID resistors seem to always be next to one of the heatsink mounting holes.
http://imgur.com/8SKKD1w (http://imgur.com/8SKKD1w)
90% sure this is the spot now, hope you can tell us more :D
gnif, you're awesome :)
I have a few questions though.
Does this allow the modded card to get the full performance of a Quadro in workstation applications? If yes, have you tried it on SPECviewperf 11 and SPECapc? (http://www.spec.org/benchmarks.html (http://www.spec.org/benchmarks.html))
A GK104 GTX 680 4GB is the same as a K5000; a GK107 GTX 650 2GB is the same as the K2000. The K4000 is iffy; the GK106 GTX 650 Ti/Boost has the same CUDA cores though it doesn't have 3GB VRAM. I believe the Boost version is the direct correlation since it has a 192-bit memory bus, but it's not out yet.
GeForce GTX 650 Ti Boost 0x11C3 http://www.techpowerup.com/gpudb/2059/NVIDIA_GeForce_GTX_650_Ti_Boost.html (http://www.techpowerup.com/gpudb/2059/NVIDIA_GeForce_GTX_650_Ti_Boost.html)
GeForce GTX 650 Ti 0x11C6 http://www.techpowerup.com/gpudb/1188/NVIDIA_GeForce_GTX_650_Ti.html (http://www.techpowerup.com/gpudb/1188/NVIDIA_GeForce_GTX_650_Ti.html)
vs
Quadro K4000 0x11FA http://www.techpowerup.com/gpudb/1841/NVIDIA_Quadro_K4000.html (http://www.techpowerup.com/gpudb/1841/NVIDIA_Quadro_K4000.html)
GeForce GTX 650 0x0FC6 http://www.techpowerup.com/gpudb/894/NVIDIA_GeForce_GTX_650.html (http://www.techpowerup.com/gpudb/894/NVIDIA_GeForce_GTX_650.html)
vs
Quadro K2000 0x0FFE http://www.techpowerup.com/gpudb/1838/NVIDIA_Quadro_K2000.html (http://www.techpowerup.com/gpudb/1838/NVIDIA_Quadro_K2000.html)
or
Quadro K2000D 0x0FF9 http://www.techpowerup.com/gpudb/2021/NVIDIA_Quadro_K2000D.html (http://www.techpowerup.com/gpudb/2021/NVIDIA_Quadro_K2000D.html)
From NVIDIA's Linux Drivers, http://www.nvidia.com/object/linux-display-amd64-310.40-driver.html (http://www.nvidia.com/object/linux-display-amd64-310.40-driver.html)
http://us.download.nvidia.com/XFree86/Linux-x86_64/310.40/README/index.html (http://us.download.nvidia.com/XFree86/Linux-x86_64/310.40/README/index.html) ; http://us.download.nvidia.com/XFree86/Linux-x86_64/310.40/README/supportedchips.html (http://us.download.nvidia.com/XFree86/Linux-x86_64/310.40/README/supportedchips.html)
I just discovered that this post is being covered on Tomshardware, haha: http://www.tomshardware.com/news/Nvidia-GTX-690-Quadro-K5000,21656.html (http://www.tomshardware.com/news/Nvidia-GTX-690-Quadro-K5000,21656.html)
Would you please post your card's brand and full model # as obviously not all cards are based on the reference design?
Does anyone know what a GTX 660 Ti would mod into? Or if it's even moddable?
Gnif,
I've been keeping up with this thread... I ran across it while researching how I might pass through my K20 or 660 Ti(s) to a virtual machine using ESXi 5.1 and Horizon View 5.2. I will donate a GTX 660 Ti to the cause. How can I get this to you?
bdx
I know it's off topic, but I am also interested in passing hardware directly to the VMs. You probably know this already but if not: in order to do it you need a VT-d capable CPU and motherboard, as well as BIOS support enabled.
Here's a thread of interest http://forums.mydigitallife.info/threads/33730-VT-d-enabled-motherboards-and-CPUs-for-Paravirtualization (http://forums.mydigitallife.info/threads/33730-VT-d-enabled-motherboards-and-CPUs-for-Paravirtualization)
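On the VT-d point, here's a quick hedged sanity check from the Linux side that the IOMMU is actually active before attempting pass-through. These are the standard sysfs/ACPI paths as far as I know, but treat it as a heuristic, not a guarantee that pass-through will work:

```python
# Heuristic check for VT-d/IOMMU availability on Linux (paths are the usual
# sysfs/ACPI locations; a missing DMAR table usually means VT-d is off in the BIOS).
import os

has_dmar = os.path.exists("/sys/firmware/acpi/tables/DMAR")  # Intel VT-d ACPI table
groups_dir = "/sys/kernel/iommu_groups"
groups = os.listdir(groups_dir) if os.path.isdir(groups_dir) else []

print("ACPI DMAR table present:", has_dmar)
print("IOMMU groups:", len(groups) or "none - check BIOS / add intel_iommu=on to the kernel cmdline")
```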
GPU Name | Resistor 0 / 3rd symbol | Resistor 1 / 3rd symbol | Resistor 2 / 4th symbol 8-f | Resistor 3 / 4th symbol 0-7 |
GTX 660 Ti | none | 25k | none | 20k |
GTX 670 | none | 25k | 10k | none |
GTX 680 | none | 25k | none | 5k |
GTX 770 | none | 25k | none | 25k |
Tesla K10 | none | 25k | 40k | none |
Quadro K5000 | 40k | none | 15k | none |
GRID K2 | 40k | none | 40k | none |
I was able to successfully modify my card to a Grid K2.
Something that was interesting was I kept getting kernel panics with the 40k resistors. After some experimenting I found a stable solution for this card.
Resistor 0: None
Resistor 1: None
Resistor 2: 100k
Resistor 3: None
My card is an Asus GTX 680. I know they build their own PCB layout and my PCB was slightly different than yours (different spacing, same location). I am guessing that may have something to do with it, but I am still a bit confused as to how the resistors directly affect the ID anyway. This was my first time working with SMD components so I may have messed something up, who knows? It works, that is what matters.
Hello,
I managed to find the resistors responsible for the PCI ID on the GTX 680 2GB graphics card, Device ID: 10DE 1180.
I wonder what happens if we take a GTX 670, modify the ID to 680 and then upload the 680 BIOS. Will it unlock the cores?
I have a GTX 660 Ti that I would be willing to submit as a guinea pig. Is this something that gnif or someone else could walk me through over skype?
*edit* nevermind, it looks a little too involved for me to handle. I would need to send the card to someone else.
If someone with the ability to attempt this wants a card, send me a PM. I'll ship a 660 Ti as long as you ship it back :-+
Same place that I marked earlier... good to know I wasn't off track :-+
You edited your message while I was preparing an image for you...
If you have a steady hand and decent tools to move the resistors, here's where I think the resistors are on the 660 Ti (of course I could be terribly wrong :) )
Left: Quadro K5000, right: GTX 660 Ti
(http://i.imgur.com/UsHpSG5.jpg)
Well it looks like some 660 Ti's share the same board as the 670, so it might be possible to convert them over and get full use of the 256-bit memory interface... But for all we know these 660-on-670 cards might be some sort of binned hardware with 660 firmware put on to "cripple" them into not using the damaged parts :/ I'm thinking it should work though, because the RAM chips themselves are 256-bit.
Perhaps the resistors on the back of the board underneath the GPU set the configuration, but we need some really good close-up photos of the boards to see the differences. It's a long shot but we've got nothing to lose. :)
660 Ti will turn into K5000, because the board designs are exactly the same, except that the K5000 has the full-feature chip (1536/128/32 shaders/TMUs/ROPs) instead of a possibly crippled/damaged version (1344/112/24 shaders/TMUs/ROPs).
Hmm, do we know where that limitation exists though? It seems that my 660 Ti's board is the same number as a 670's, which should mean that it uses the same kind of chip. Here's some info I was editing into my other post before I saw yours:
See that's the thing, unless the board numbers match, most manufacturers design their own GTX 670 boards because they are higher-end items and consumers demand better power supply design and better components.
With regards to getting the missing computing units, that's neither here nor there. The chips (chip die, silicon) used are all the same (cheaper to manufacture) but they could have factory-burnt fuses (inside the GPUs) that disable computing units; on-board limitations (like resistors); BIOS restrictions (least likely imho because it would be a huge "fail", but I could be wrong); or simply be damaged GPUs that did not pass QA for higher boards.
My 20040005 660 Ti's BIOS seems to have code to emulate/disguise/downgrade itself to a 20040001 board internally, possibly to make drivers work properly (although GPU-Z still picks up the proper number).
I would be extremely interested to learn how verybigbadboy was able to convert his 680 into a VGX/GRID K1 (i.e. what to solder, what resistors to use and where to get them, etc). That would open up a lot of things to the home-virtualization crowd.
I'd even give a little funding for some "idiots guide to turning your 680 into a VGX"
Just sayin.........
Yeah...! Just registered, only to follow this thread..
Second, modding a 680 to a K5000 would be great. I've read verybigbadboy's post, but it's a little bit confusing. I personally don't need K5000 performance; I could go with a K2000 (650) or a K4000 (650 Ti).
I'm not 100% sure about what it's doing. Comparing a 670 BIOS to my 660 Ti's showed that the part of the BIOS where the versions/board numbers/SKUs are stored was almost the same, but lower down in the code there was another reference to the board number, which in the 670 BIOS was 20040005 (as expected) but 20040001 in my 660 Ti's.
By the way, how do you know that your BIOS is downgrading your card, perhaps you have some pointers to look at?
Welcome aboard.
You can only get a K5000, if modded, because the K2000 is based on the GK107 and not the GK104 GPU.
Furthermore, you will gain access to what I believe are locked-out high-end features of the drivers (and/or applications), because your card will physically identify as something else.
If you compare raw specs, the K5000 is clocked much lower than the GTX 680, and if you choose to flash the BIOS you will probably reduce the performance you have now, but perhaps gain more stability. That is one of the selling points of the high-end visualization cards.
Where it shows the version number/board number/SKU (http://i.imgur.com/H0sKwE8.png)
Where the other reference to the board number is (http://i.imgur.com/SOPQOvZ.png)
Somehow I am not sure those numbers are actually the board number/version, looking at other boards that do not have those numbers.
EVGA doesn't like to release BIOS updates :( Maybe because so many of their cards use so many different boards :palm:
Have you also checked against an older/newer version of the BIOS if those numbers repeat or are different on the same model line?
Guess we can see where that 20040001 is used: Project: 2004-0001
NVIDIA Firmware Update Utility (Version 5.118)
Adapter: GK1xx (10DE,1183,3842,3663) H:--:NRM B:01,PCI,D:00,F:00
The display may go *BLANK* on and off for up to 10 seconds during access to the
EEPROM depending on your display adapter and output device.
Identifying EEPROM...
EEPROM ID (C8,4012) : GD GD25Q20 2.7-3.6V 2048Kx1S, page
Reading adapter firmware image...
Image Size : 182272 bytes
Version : 80.04.4B.00.60
~CRC32 : 8AB5DABA
OEM String : NVIDIA
Vendor Name : NVIDIA Corporation
Product Name : GK104 Board - 20040005
Product Revision : Chip Rev
Device Name(s) : GK1xx
Board ID : E11D
PCI ID : 10DE-1183
Subsystem ID : 3842-3663
Hierarchy ID : Normal Board
Chip SKU : 300-0
Project : 2004-0001
CDP : N/A
Build Date : 07/09/12
Modification Date : 08/06/12
Sign-On Message : GK104 P2004 SKU 0005 VGA BIOS (HWDIAG)
Would you kindly explain to me why GK107 can't be modded?
The GK104 GPU is in the GTX 660 Ti, 670, 680 and 690 models.
The GK107 GPU is in the GTX 650.
You can't turn one chip into another. But you could buy a GTX 650 and *possibly* mod it to a K2000, based on this thread's findings. :)
Here's my high rez image of the back of a GeForce GTX 660 Ti:
http://www.eriktande.com/nvidia_geforce_gtx_660_ti.jpg (http://www.eriktande.com/nvidia_geforce_gtx_660_ti.jpg)
I managed to find the resistors responsible for the PCI ID on the GTX 680 2GB GV-N680OC-2GD.
I have its 670 sibling, the GV-N670OC-2GD. Based on the pictures (attached), it's the same card (except for the resistors, I guess).
Below you can find a list of IDs that I ran successfully:
GTX 670, Device ID: 10DE 1189, with 1536 cores.
Tesla K10, Device ID: 10DE 118F
Quadro K5000, Device ID: 10DE 11BA
VGX GRID K2, Device ID: 10DE 11BF
Since the K5000 wasn't stable, I'd go with the Tesla K10.
Resistor 2 is responsible for the 4th symbol, 8-f. Tested values: 10k = 9, 15k = A.
Resistor 3 is responsible for the 4th symbol, 0-7. It is originally 5k on the GTX 680.
If you use the second resistor, the third one has to be removed or be 40k, and vice versa.
And here is my question:
Making the 4th symbol an F means a 40k resistor (in place of the 10k, a "9" symbol).
But based on the quoted text, that is the same as no resistor, aka see what the 3rd has to say.
Do I over-analyse it?
Any comments would be appreciated.
Hi, guys. I'm admiring your work. And since I'm a complete rookie - what do you think I can do with my GTX 660 2GB that has a GK106 GPU? Does that mean I can maybe mod it to a Quadro K4000, also GK106? The Device ID is 10DE-11C0 and the Quadro K4000 ID is xxxx-11FA. Thanks for your thoughts!
Yes, maybe..
But the Quadro K4000 is more like the 650 Ti Boost than the 660 (same GK106, but different specs).
You can see http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units for an NVIDIA card comparison.
It's very well done.
P.S.: I've seen a 660 PCB, but I couldn't find the "resistor pattern" I've seen on GK104 based cards, though I'm a rookie like you. Maybe it's on the front (http://images.anandtech.com/doci/6276/GTX660PCB.jpg), to the right of the red dot (lower left angle of the GPU core).
I think we have to wait and see if 660 Ti -> K5000 works well..
Would you have a Quadro 2000 image? I own a GTS 450 2GB and I want to hardmod it to a Quadro. If you can help me with it in any way, thank you so much.
Both verybigbadboy and I have posted enough information for anyone to mod a card based on the reference designs.
His images referring to locations and values, as well as the images I posted of the GTX 670, apply to the GTX 680 as well, unless you have a different board. But then again, you did not specify what brand/model you have?
Surely if you come here you must know where to get SMT components? :)
Thank you, yes, we should be patient. I can't seem to open your pic link, could you update it please? tnx
I just removed 2 and 3 to get the F symbol.
You may also remove them, or you may try to put 40k "in place of the 10k, a "9" symbol".
I think there is no difference.
Looks like "F" is the default value for the 4th symbol,
and "B" is the default value for the 3rd symbol.
Thank you, verybigbadboy.
For a 680 to K10... ID "1180" to "118F".
There is only one symbol that has to be changed, and you said just removing resistors 2 and 3 is OK... no change to resistor 1, right?
For a 680 to K5000... ID "1180" to "11BA".
To get the third symbol from "8" to "B"... I need to change the no. 1 resistor to 20k, right?
And for the symbol "0" to "A"....
In being completely truthful, modding a graphics card into a virtualized graphics card is what caught my eye. Aside from basic knowledge, all my mental stock is in IT (which is why I asked very basic questions) :) I am extremely grateful for the instructions thus far. I'm sure this thread will attract a lot of IT, VMware, and CAD enthusiasts.
I just wonder if there will be a backlash from Nvidia if people start selling modded versions of cards.
I don't think they care if a few enthusiasts mod their card or not, but when someone commercializes on it they would probably take notice.
Maybe this?
http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?lang=en&cc=us&prodTypeId=0&prodSeriesId=3718668&prodNameId=3718669&swEnvOID=4060&swLang=13&mode=2&swItem=wk-104548-1 (http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?lang=en&cc=us&prodTypeId=0&prodSeriesId=3718668&prodNameId=3718669&swEnvOID=4060&swLang=13&mode=2&swItem=wk-104548-1)
Thank you, Amigo.
Although it seems the GTS 450 is a GF106 only in the OEM version; v2 and v3 are GF116...
Of course there will. I think the logic will change.
Selling modded cards will be simply illegal...
But most likely not before the next generation of cards arrives.
I think they will be forced to modify the GPU core to completely avoid mods.. but it will be expensive.
You obviously haven't checked eBay then... :D
I meant as a business model.
Looks like "F" is default value for 4 symbol.I other words, if I remove the resistors in all positions - 1,2 and 3 - I'll get a 11BF part, aka GRID K2.
and "B" is default value for 3 symbol.
I don't think so.
Even Intel, owning all its fabs, doesn't do that.
They will make modding harder and bricking easier... That will do it.
Hi,
Now my last question: does going GRID (or Tesla) disable the outputs?
Or will that happen only after the appropriate BIOS is installed (if it can be installed)?
I've never been shy of doing hardware mods as easy as changing out resistors. Anyways, I just turned my EVGA GTX 670 FTW into a K10... I simply removed resistor 2 as per verybigbadboy. Will post back in a bit on the question of whether or not those cores got enabled. I still need a K10 BIOS, so if anyone has access to a real K10, it would be a huge help if you could share that, as I highly doubt I will find it on the internet...
Hi,
Tesla disables the outputs.
GRID does not disable the outputs.
I have not tried to change the BIOS.
Thank you, verybigbadboy.
Edit: used GPU-Z
(http://i324.photobucket.com/albums/k359/InitialDriveGTR/k10.gif)
Yeah ~~
I can only find the K5000's BIOS... is this one correct?
http://www.techpowerup.com/vgabios/129867/NVIDIA.QuadroK5000.4096.120817.html (http://www.techpowerup.com/vgabios/129867/NVIDIA.QuadroK5000.4096.120817.html)
Then use NVFlash to flash the BIOS onto the card?
I'm still working on finding a K10 one...
Please do some tests first before replacing the BIOS, because you might not get that much of a difference changing ROMs; actually it might degrade your performance due to the more conservative settings of the high-end line.
I think it's the features that have become enabled that make the difference, for example unlocking the virtualization pathway, which drivers and applications check/look for.
You are much better off getting your original BIOS and using a hex editor to update its device ID, then using the KGB voltage mod tool to fix the checksum; don't bother with the voltage mod stuff. I highly doubt that the BIOS controls the number of cores available; this will either be another hardware strap, or burnt-out fuses in the GPU.
gnif, Thanks for your inspiration!!
I understand what you are telling me about the BIOS, but a hex editor and the KGB voltage mod tool are beyond my knowledge... Seems I may need to stop here and wait for the experts |O
However, if I use a GTX 680 4GB and hard-mod it to a K5000, the hardware config is the same. Could I get the Quadro functions after I install the K5000 driver?
(http://i.imgur.com/K2cetme.jpg)
I've highlighted for you in blue the portion of the ROM file (in this case EVGA GTX 680 4GB) that contains the Device ID to be changed. Bytes are in the Little Endian order (least significant byte first, indicative of Intel platforms) so DE 10 80 11 translates to 10DE (NV Vendor ID) 1180 (GTX 680 Device ID).
The red highlight just shows the beginning of the actual VBIOS image, the sequence 55 AA is a header so you know you are in the right section, beside seeing all the text around there, too. :)
Once you've done the editing (any hex editor would do, ie. HxD), get the KGB tool (https://www.dropbox.com/s/fsxyvofr1idazhm/kgb_0.6.2.zip (https://www.dropbox.com/s/fsxyvofr1idazhm/kgb_0.6.2.zip)) to fix the new ROM image checksum (very important).
Also, always remember to backup your original ROM image first. :)
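To make that recipe concrete, here's a hedged Python sketch of the same edit done programmatically. It assumes the standard legacy PCI option-ROM layout (55 AA header, length at byte 2 in 512-byte blocks, bytes summing to zero mod 256), which is presumably what the KGB tool's checksum fix restores. File names are placeholders, and when in doubt stick with the hex editor + KGB route described above:

```python
# Hedged sketch of the manual edit above: patch the PCI Device ID in a VBIOS
# dump and restore the legacy option-ROM checksum. Assumes the standard layout:
# image starts at the 55 AA header, byte 2 = length in 512-byte blocks, and
# all bytes of the image must sum to 0 mod 256. Back up your original first!

def patch_device_id(rom: bytes, old_dev: int, new_dev: int) -> bytes:
    rom = bytearray(rom)
    # PCI IDs are little-endian, so 10DE 1180 is stored as DE 10 80 11.
    old = bytes([0xDE, 0x10, old_dev & 0xFF, old_dev >> 8])
    new = bytes([0xDE, 0x10, new_dev & 0xFF, new_dev >> 8])
    pos = rom.find(old)          # NB: verify in a hex editor that this is the
    if pos < 0:                  # spot highlighted above, not a chance match
        raise ValueError("device ID pattern not found")
    rom[pos:pos + 4] = new

    base = rom.find(b"\x55\xAA")         # start of the option-ROM image
    if base < 0:
        raise ValueError("55 AA header not found")
    size = rom[base + 2] * 512           # image length in bytes
    rom[base + size - 1] = 0             # zero the checksum byte, then set it
    rom[base + size - 1] = (-sum(rom[base:base + size])) & 0xFF
    return bytes(rom)

with open("gtx680_backup.rom", "rb") as f:
    patched = patch_device_id(f.read(), 0x1180, 0x11BA)  # GTX 680 -> K5000
with open("k5000_mod.rom", "wb") as f:
    f.write(patched)
```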
So, should I change the "DE 10 80 11" of the GTX 680 to "DE 10 BA 11", which is the K5000's ID?
What else is there to do? Sorry, I am just a newbie at this.
Summary table...
Well right now I have no idea how to actually test the K10, nor how to see what hardware is enabled/disabled, as GPU-Z has a lot of missing information. What resistor values would I need for a GTX 680?
You can't just turn a GTX 670 into a GTX 680 (I'm presuming you are talking about your GTX 670, unless you also have a 680). You could change the Device ID, but that will not bring the rest of the features out.
...It would be like sticking a Ferrari badge on your bike.
Myself and those of us who used to put fancy car badges on our bicycles resent that analogy. :)
K20X is PCI-E 2.0 x16. I've got one here. Last minute change by nVidia.
Fixed.
If I understand this right, I should be able to turn my GeForce GTX 660 Ti into a Quadro K5000. Here's a picture of my exact card:
http://www.eriktande.com/nvidia_geforce_gtx_660_ti.jpg (http://www.eriktande.com/nvidia_geforce_gtx_660_ti.jpg)
Can anyone point out what exactly needs to be changed? I'm going to try to take it to a local shop and have them give it a shot, but I need to know exactly what to tell them.
Look to the right of the top right heatsink mounting hole. Same pattern there. That would be my guess.
This is the million dollar question: nobody knows (yet).
At this point only two cards (and their PCB clones) have this question answered. Everything else is guesses at best.
See the posts by gnif and verybigbadboy.
Is the only way to know by testing it out?
Yes, but at this point the resistors to be modded (aka the ones in charge of the Device ID) on this board haven't been identified yet.
>:D
On another note, since I am quite interested in having a 680 with a short PCB, how challenging would it be to swap out the chip from a full 680 with one from a 670? And more importantly, would the end product function?
Hah, that would be the ultimate hack. You need to source the chip first, then remove the original GPU, clean the pads, reball the new chip and then mount it. All in full BGA glory. Perhaps if you have access to the chip and the BGA equipment it is theoretically doable. But then there might be other resistors to adjust as well, the ROM to flash, etc.
I was interested in the possibility of unlocking the 670 as well, but some research on OCN has led me to believe that the chips are laser cut during production, making it impossible.
The K20 is PCI-E 2.0 x16; I had one until I got a Titan instead, didn't really need the extra enterprise features for my purposes.
I would imagine the K10 and K20 are as well.
Hoping to turn some Titans into K20Xs.
Could someone clarify the difference between the resistor = "symbol" concept, and what has been described by vbbb, below where he removes two resistors to get a symbol?
I took measurements with the resistors in series on the card. I know this is frowned upon, but my measured values match those of the resistors when they are detached from the board, so I assume it to be an accurate form of measurement. Would you concur?
R1 = 40k
R2 = 20k
R3 = 5k
R4 = 5k
R5 = 45k
R6 = 33 ohms
R7 = 33 ohms
R8 = 2k
R9 = 2k
R10 = 45k
R11 = 10k
R12 = 10k
Based on those numbers, R1, R2 and R3 could be playing the roles of the 3, 2, 1 resistors in vbbb's post here.
Isn't it about time this mod gets a name? In honor of its author, I propose "GnifMod" or "ModiGnified". Any other ideas?
How about cGNIFit, aka significant...
Hi-
I'm getting ready to pull the trigger on TWO 4GB GTX 680 (Gigabyte GV-N680OC-4GD) cards and will be modding them to a K5000 & K10.
I run 1/2 the time gaming and normal programs and 1/2 the time CAD and graphic programs. From the benchmarks I have seen ( http://www.xbitlabs.com/articles/graphics/display/nvidia-quadro-k5000_8.html#sect0 (http://www.xbitlabs.com/articles/graphics/display/nvidia-quadro-k5000_8.html#sect0) ) a K5000 is 3 - 4 times as fast as a 2GB GTX 680 running CAD programs and a 2GB GTX 680 is about 1.5 times as fast as a K5000 running games.
So, what do I do...
1.) SLI the GTX 680's together. It will be great for games but so-so for CAD.
2.) Mod both cards to a K5000 & K10 (or two K5000's). This will be blazing for CAD and rendering but so-so for games.
-OR-
3.) Build "daughter cards" for each and be able to switch the resistors from 680 to K5000 & K10. Also, should I dual boot Win 7 and put the GTX 680 drivers and gaming programs on one boot. And put the Quadro drivers and CAD on the other boot. :-//
If the "daughter cards" work, could I just put both GTX 680 & Quadro drivers on the same boot partition & Win 7 will know which ones to use depending what I have the cards set to?
Thanks.
I guess a TITAN cannot be modded into a Quadro card, only a Tesla? Not until the K6000 comes out, I would think. Also, can someone post benchmarks of the K5000 mod? :)
Would be really interesting to compare SPEC benchmarks with K5000.
... does "front" in the image mean you have to remove the fanYes.
I just ordered a GV-N680OC-2GD. I want to set it to K5000; the value needed is 11BA instead of 1180. I'll have to use 15k for resistor 2 and remove resistor 3. What value will be needed for resistor 1?
GPU Name | Resistor 1 | Resistor 2 | Resistor 3 |
GTX 670 | 25k | 10k | none |
GTX 680 | 25k | none | 5k |
Tesla K10 | 25k | none | none |
Quadro K5000 | none | 15k | none |
GRID K2 | none | none | none |
Is there any problem with just making everything a GRID K2?
Not really.
I have my GTX 670 FTW set up as a K2 right now; video outputs work...
Thanks.
Thank you for taking this mod all the way, and sorry to hear your 690 died in the process. :(
The chip nearby is the EEPROM, a GigaDevice 25Q20 (http://www.gigadevice.com/WebPage/PageDetail.php?PageId=127&WebPageTypeId=98&WebPageTypeId2=151&WebPageTypeId3=134 (http://www.gigadevice.com/WebPage/PageDetail.php?PageId=127&WebPageTypeId=98&WebPageTypeId2=151&WebPageTypeId3=134))
That might be interesting to note; perhaps the Device ID straps are always located around it?
It seems this might be a new invention at NVidia for the 6xx series, looking at the GTX 570 and its EEPROM (red box), or just a coincidence that the Device ID straps are near the EEPROM in some 6xx series cards.
gnif: man, this is sad! (but great work) - patience is everything... Is it all dead, or is there a chance to replace the damaged chip with one from another damaged card?
Next month I will donate another $50 to you.
people: c'mon donate at least 10 bucks... everything counts!
Seems like someone heard you, just received a $250 donation, WOW :scared: Over 1/2 way there now. :-+
I removed resistors R2 and R3 on the GTX 660 Ti and now have a K10 (device ID 118F)!! What does this say about which resistors need to be removed to give this card the device ID of 11BF (GRID K2)?
From what I can gather it seems I might need to remove R1... but R1 is 40k, which would be the same as if it were removed.
Any words of wisdom?
Since the 660 Ti is still in the guessing game, you should remove the resistors one by one.
index | meaning | resistance | value |
1 | shift | 10k | 1 |
2 | value 0-7 | 20k | 3 |
3 | value 8-f | none | none |
index | meaning | resistance |
4 | value C | 35k |
5 | value D | none |
The GRID K2 is 15% more expensive than Tesla K10 for some reason...
I have all the stuff for BGA reworking, if you manage to find a replacement chip I can swap it in for you
A niche market that NVidia wants to squeeze some extra cash out of.
They definitely will do their best...
GRID in its native form also doesn't have outputs
http://www.nvidia.ca/object/grid-boards.html (http://www.nvidia.ca/object/grid-boards.html)
But modding a GTX670 to a GRID keeps them live, i.e. no additional card needed...
Apparently the same chip is used on some ASUS motherboards; if I can locate one I will be in contact :).
That sounds promising, even I have a "dead" ASUS motherboard (failed BIOS update, no boot, otherwise all OK), let me think what model it is... I think P5K PRO or something like that.
Hi-
Well I ended up getting two EVGA 04G-P4-3687-KR GeForce GTX 680 4GB cards, core clock 1084MHz and boost clock 1150MHz.
The boards are the same as the GV-N680OC-2GD except mine are 4GB. I modded both of them to Quadro K5000 (Thanks old Playstation 3 for the resistors :-DD)
I ran the latest NVIDIA 314.22 drivers and the Quadro 311.35 ones. It seems the 314.22 drivers are a little bit better, so I'm using those.
I did some benchmarking to compare the cards before and after the mods.
Benchmark | GTX 680 #1 | GTX 680 #2 | K5000 #1 | K5000 #2 |
3DMark 11 | 9022 | 8987 | 9077 | 9016 |
Passmark 8 (3D Graphics Mark) | 6044 | 6091 | 6025 | 5996 |
PCMark Vantage (Gaming) | 19336 | 18956 | 18880 | 16177 |
PhysX | 10158-166 fps | 10003-165 fps | 10176-167 fps | 10123-166 fps |
SPECviewperf 11: | | | | |
Catia-03 | 6.05 | 5.98 | 5.9 | 10.20 |
Ensight-04 | 32.20 | 32.23 | 32.20 | 32.27 |
Lightwave-01 | 13.23 | 12.84 | 13.14 | 13.22 |
Maya-03 | 12.77 | 12.73 | 12.86 | 12.85 |
Proe-05 | 0.96 | 1.00 | 1.00 | 0.99 |
Sw-02 | 11.09 | 11.37 | 11.36 | 12.78 |
Tcvis-02 | 1.01 | 1.17 | 1.02 | 1.02 |
Snx-01 | 3.42 | 3.37 | 3.40 | 3.42 |
As you can see, all the scores between the stock and modded cards are about the same. The problem is with the SPECviewperf 11 scores. This is the benchmark for graphics and CAD programs; this is what the Quadro cards were made for. The scores for the modded K5000 should be MUCH higher. Take a look here.
http://www.xbitlabs.com/articles/graphics/display/nvidia-quadro-k5000_4.html (http://www.xbitlabs.com/articles/graphics/display/nvidia-quadro-k5000_4.html)
It looks to me that just because the computer thinks it's a Quadro K5000 does not mean that it will act like a K5000.
I even tried this benchmark with the Quadro drivers and got the same results. Hopefully it's just a driver issue and not a hardware issue.
Thank you for the tests.
index | meaning | resistance |
1 | 3 byte value D | none |
2 | 3 byte value C | 35k |
3 | 4 byte values 8-f | none |
4 | 4 byte values 0-7 | 25k |
device name | R1 | R2 | R3 | R4 |
gts 450 | none | 35k | none | 25k |
Quadro 2000 | 35k | none | 5k | none |
The chip the card uses is a PEX8747 (see: http://www.plxtech.com/products/expresslane/pex8747 (http://www.plxtech.com/products/expresslane/pex8747)). Some motherboards, I read somewhere, use it to expand the number of PCIe slots.
Edit: The ASUS P8Z77-V Premium uses it, and it seems it doesn't use a heatsink! Perhaps I have misdiagnosed the fault; I will have another go tomorrow and check things on it to see if I missed anything obvious.
It should be noted that this mod was originally performed not to get a high-performance Quadro or Tesla card; it was done to unlock additional features such as Mosaic support, which does indeed work.
Hey I know this is off topic so please don't flame.
I was able to, by the grace of god and not my soldering skills, change my 680 into a GRID K2 (mini). If anyone is doing this for virtualization reasons check this VMware thread out for help. I can now report that I am sharing my 680 among multiple Virtual Machines.
http://communities.vmware.com/thread/415887?start=30&tstart=0 (http://communities.vmware.com/thread/415887?start=30&tstart=0)
Please everyone else donate something if this has helped you out!
Been using mine with Windows Server 2012's Hyper-V RemoteFX. It's pretty cool being able to remote desktop into a virtual machine and then play GTA IV on a 2007 MacBook Pro lol
I've been researching soft straps for the past few days and there's a way to change the Device ID without soldering.
Problem is that I could not find full information about the strap bits and so what I could piece together so far is that you can change the last two digits to a certain extent.
If you have a 0x1180 (GTX 680) you can go up to 0x119F (range: 1180-119F), basically you can change bits 0-4.
I do not know if/where the bit 5 is to take it above 9F into As and Bs for the 3rd character. I am not sure if that bit is even present in the soft straps but seeing there's a resistor for it, I'm hoping it must be somewhere in there...
Anyone with some insight into soft straps, bit 5 and beyond please post. :)
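To make the bit arithmetic above concrete, here is a minimal sketch in Python. The base ID and the mask come from the post above; the assumption that exactly bits 0-4 are exposed via the soft straps is the poster's, not verified:
Code: [Select]
# Sketch of the claimed strap arithmetic: bits 0-4 of the device ID come
# from the straps, so a GTX 680 base of 0x1180 can reach 0x1180..0x119F.
BASE_ID = 0x1180      # GTX 680
STRAP_MASK = 0x1F     # bits 0-4, per the post above

def strapped_id(strap_bits: int) -> int:
    assert 0 <= strap_bits <= STRAP_MASK
    return (BASE_ID & ~STRAP_MASK) | strap_bits

print(hex(strapped_id(0x0F)))  # 0x118f - the Tesla K10 ID mentioned earlier
print(hex(strapped_id(0x1F)))  # 0x119f - top of the range without bit 5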
This is pretty sweet! I will have to have a go at it as I hate rebooting into windows for the odd game. I had never heard of the GRID K2 nor what it could do until members mentioned it in this thread.
The GRID K2 cards are essentially aimed at letting a person on modest hardware, say an ultrabook, connect to a server and get much higher graphics performance than the local hardware is capable of by itself. You can connect to a virtual machine hosted on a server with a GRID K2 and use the discrete graphics card in applications such as SolidWorks. Where I work, our engineering department's servers have GRID K2 cards. We used to use Dell desktops/laptops with high-end Quadro cards, but now, instead of dropping $3-4K on a laptop that might fail after a year or two, everyone gets a cheap laptop configured to use a virtual machine. This works very well on our gigabit Ethernet network too.
So does that mean that multiple users can share the same video card across multiple VMs, or just a single VM? And how well does it work with games, etc.? It would be nice to be able to share my high-end card out for my daughter to use instead of having to buy her a high-end card also.
Hi everyone, new to this forum so go easy on me !!
Great work on the mods done on the GK104 chip - I want to have a look at my GK110 chip now :-/O .. maybe I should wait for the K6000 card for the device ID + drivers, but I might settle for trying a K20X.
I have attached some pictures of the EVGA Titan card - thinking of looking at the resistors near what I think is the EEPROM - am I on the right track?
Did you actually test this or is it based on the soft-strap information documented here:
https://github.com/pathscale/envytools/blob/master/hwdocs/pstraps.txt
I tried this first and had no success; Linux would ignore them, and we know that in previous generations the NVidia driver would compare the soft straps to the hard straps and, if they differed, enable 'unstable code' that was designed to cause random hardware faults.
I did test on a smaller card (8600GT) as a proof of concept. It is a very delicate operation because setting up wrong straps will hose the card (I know - I did it).
To recover from the bad flash you need to short the CE# and Vss pins on the card's EEPROM during boot, then unshort them before running nvflash.
I am going to conduct a few more tests just to be sure.
The references I used to collect the needed information were the link you posted above; this thread: https://devtalk.nvidia.com/default/topic/489965/cuda-programming-and-performance/gtx480-to-c2050-hack-or-unlocking-tcc-mode-on-geforce/1 and a couple of other places for random other details. I've also looked at dozens of ROMs, comparing their soft strap configurations and whatnot.
I can confirm that changing the software straps does not change the device ID in the 6 series.
The fellow from that NVidia forum post modded a 480 and 580 with this process. I wonder if NVidia caught up to this in 6xx...
That is what I have said; they did. It does not work in the 6 series - I spent many hours testing this method before resorting to hardware hacking.
Humbug....well, Fedex is coming today with my 0402 resistors and some larger EEPROMs. :D
Why the larger EEPROM? Are you going to try to install the quadro/tesla bios?
I had hoped for the soft straps because I wanted to find a resistor configuration that, combined with the soft straps, could take a card from GTX to Quadro and back just by changing the soft straps.
You could always hotglue some dip-switches to it and wire them up with wire-wrap.
index | meaning | resistance |
1 | 3 byte C D | 25k |
2 | 3 byte F | none |
3 | 4 byte values 0-7 | 10k |
4 | 4 byte values 8-f | none |
device name | R1 | R2 | R3 | R4 |
GT 640 | 25k | none | 10k | none |
GTX 650 | 25k | none | 35k | none |
Quadro K600 | none | 40k | none | 15k |
GRID K1 | none | 40k | 15k | none |
K2000 | none | 40k | none | 35K |
Hello all again ;) I have good news.
I successfully modified a
Zotac PCI-E NV ZT-60206-10L GT640 Synergy 2G 128bit DDR3 900/1600 DVI*2+mHDMI RTL
to an NVIDIA GRID K1. It is working fine, and passthrough works too. BUT the device ID modification is only possible after a BIOS modification. The BIOS modification is needed only for specific vendors.
You should use the unlocked BIOS gt640om.rom; I attached it to this post.
The original BIOS is gt640ori.rom. I changed the masks and updated the checksum.
upd: removing resistor 1 may cause random ID changes after reboot :) I will update this post after I solve it.
Any news about this? :)
Does this mean that I would need to remove the cooling unit to reach the necessary resistors?
Looking at the new GTX 650 Ti Boost, it looks to be almost identical to the Quadro K4000 short of the memory (same processor and CUDA core count). Would it be a hardware-only mod, or would it require a BIOS or soft-strap mod as well to be done properly?
It is impossible to say until someone gets the BIOS from the card.
For those who have changed the "model" of their cards, does this enable use of the nvidia-smi options, and does it enable use of higher versions of CUDA functions?
E.g. going from compute capability 3.0 (GTX 670) to 3.5 (K20) would enable funnel shift, as described in http://stackoverflow.com/questions/12767113/funnel-shift-what-is-it (http://stackoverflow.com/questions/12767113/funnel-shift-what-is-it)
Thank you.
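For readers unfamiliar with the operation, here is a minimal emulation of what a funnel shift computes, in plain Python purely for illustration; on compute capability 3.5 hardware this maps to a single instruction, which is the point of the question above:
Code: [Select]
# Right funnel shift: shift the 64-bit concatenation (hi:lo) right and
# return the low 32 bits of the result.
def funnel_shift_r(hi: int, lo: int, shift: int) -> int:
    combined = ((hi & 0xFFFFFFFF) << 32) | (lo & 0xFFFFFFFF)
    return (combined >> (shift & 31)) & 0xFFFFFFFF

print(hex(funnel_shift_r(0xDEADBEEF, 0x12345678, 8)))  # 0xef123456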
Dear All,
I have an Asus GT640-1GD3-L card and I would like to make it recognizable as a K2000 in order to have working VGA passthrough in Xen (currently the guest Windows recognizes the card as a GT640, but shows error 43).
Please find below photos of the Asus GT640-1GD3-L:
http://www.overclockers.ru/images/lab/2012/12/24/1/15_ASUS_back_big.jpg (http://www.overclockers.ru/images/lab/2012/12/24/1/15_ASUS_back_big.jpg)
http://www.overclockers.ru/images/lab/2012/12/24/1/18_ASUS_PCB_big.jpg (http://www.overclockers.ru/images/lab/2012/12/24/1/18_ASUS_PCB_big.jpg)
Could you please specify which resistors in the photos should be replaced? Is there a software method to make the Asus GT640-1GD3-L recognizable as a K2000?
The Quadro K2000 does not support GPU passthrough. It is hard to say without an ohmmeter where the resistors are located. Your BIOS is fine - you need to change the resistors only.
I already have checked the BIOS:
Code: [Select]
010: 08 e2 00 00 00 04 00 00 02 10 10 82 ff ff ff 7f
020: 00 00 00 80 0e 10 10 82 ff ff ff 7f 00 00 00 80
Looks like the BIOS is correct.
Quadro K2000 does not support gpu passthrough.
I am confused: the information provided via the links below confirms that the K2000 works with Xen passthrough. Please correct me if I understood the information incorrectly.
http://wiki.xen.org/wiki/Xen_VGA_Passthrough_Tested_Adapters (http://wiki.xen.org/wiki/Xen_VGA_Passthrough_Tested_Adapters)
http://hcl.xensource.com/GPUPass-throughDeviceList.aspx (http://hcl.xensource.com/GPUPass-throughDeviceList.aspx)
You need to modify it to a GRID K1.
It is hard to say without ohmmeter where resistors are located.
Yes, I have an ohmmeter - I will try to find them, but I need a starting region (the place where the needed resistors could be located, from your point of view). Please find below a detailed photo of the Pm25LD020 and the area around it, plus the back side (it would be nice if you could highlight the resistors that I should check first):
I'm not sure, but I think they are near the big capacitors, and I think the top SOP-8 IC is the EEPROM. The resistors are located on the front and back, near empty resistor places. If you have an ohmmeter, you can try to find them yourself.
I think you are mixing up the Quadro 2000 and the Quadro K2000 - they are different cards. You need to modify it to a GRID K1.
Here are the detailed photos of the Pm25LD020 and the area around it, plus the back side:
https://dl.dropbox.com/u/52618061/IMG_0249.JPG (https://dl.dropbox.com/u/52618061/IMG_0249.JPG)
https://dl.dropbox.com/u/52618061/IMG_0250.JPG (https://dl.dropbox.com/u/52618061/IMG_0250.JPG)
https://dl.dropbox.com/u/52618061/IMG_0251.JPG (https://dl.dropbox.com/u/52618061/IMG_0251.JPG)
https://dl.dropbox.com/u/52618061/IMG_0253.JPG (https://dl.dropbox.com/u/52618061/IMG_0253.JPG)
https://dl.dropbox.com/u/52618061/IMG_0254.JPG (https://dl.dropbox.com/u/52618061/IMG_0254.JPG)
I think you are mixing up the Quadro 2000 and Quadro K2000, they are different cards.
Yes, you are right :) Please read the links again ;)
So I traced pin 6 from the EEPROM photos and I think:
R532 is R1 and should be 25k
R558 is R2
from https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/msg213332/#msg213332 (https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/msg213332/#msg213332)
You are right again: R532 is 25k and it's R1; R558 is R2.
So you need to find the R3 and R4 places - look for a 10k resistor with an empty resistor place nearby.
Just unsolder every 10k resistor step by step, test the PCI device ID, and solder the resistor back.
It may help to look at the picture in my post; there were 5k and 10k resistors near R3 and R4 in my case.
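If it helps, a quick way to test the PCI device ID after each step is `lspci -nn | grep -i nvidia` on Linux (or a tool like GPU-Z on Windows); the driver does not need to be loaded for the ID to show up.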
I'm afraid to unsolder the 10k resistors which are connected to the FETs.
You are right again: R532 is 25K and it's R1, R558 is R2
I attached your photo with marks on the resistors - can you check the marks? I would like to add this photo to the GT640 post.
R3 and R4 are resistors near the mounting hole.
The guest Windows now also recognizes the card (as "Nvidia G1"), but shows the same error: "Windows has stopped this device because it has reported problems. (Code 43)" :(. I use Ubuntu 13.04 (Beta), Xen 4.2.1, an ASRock Z77 Pro4 and a Core i5-3470. Could you please help me solve the issue?
I think it is an nvidia drivers issue.
verybigbadboy, could you please specify the software versions you used to get working VGA passthrough on the GT640 (modified to a GRID K1)?
I attached your photo with marks on resistors, can you check marks? I would like to add this photo to gt640 post.
Yes, sure, I will check this evening at home.
I think it is nvidia drivers issue.
Could you please specify which version of the nvidia driver you used when checking VGA passthrough?
Can you try removing the nvidia GeForce drivers and installing the Quadro drivers afterwards?
Also, can you check whether the card works properly without Xen?
pc: debian 6, xen 4.2
home pc: gentoo, kernel 3.7.10, qemu 1.4.0 + libvirt and virt-manager for config.
Did you compile Xen from source and apply the patches for Nvidia passthrough support, or did you use xen 4.2 from the repository?
I tested the DVI outputs; they work fine.
I had installed the nvidia GeForce drivers on the VM before I made the resistor modifications. When the modified video card was installed, Windows said the device driver was not found, so I downloaded and installed the Quadro drivers. Installing the Quadro drivers appears to uninstall the GeForce drivers (I do not see the GeForce driver in Add/Remove Programs).
Ok, I will check whether the card works on Ubuntu without Xen. Also, could you please clarify how the GT640 (GRID K1 mod) should work with the DVI and HDMI outputs?
Yes, but I think it should work without the patches.
The photo is correct.
Hi, everybody! Thanks for an interesting topic.
The most interesting one to me is the "mod GTX 680 (https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/msg207550/#msg207550)" to a Quadro K5000. I currently have no graphics card based on the GK104 chip, so I am free in my choice. The most promising option I see is a GTX 680 with 4GB of memory; for a K5000 it is important to have the largest possible amount of onboard memory.
Only one such card has a reported successful modding experience: the EVGA 04G-P4-3687-KR GeForce 4GB GTX 680 mod to K5000 (https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/msg210155/#msg210155). Unfortunately, reefjunkie did not describe the details of this mod. It is possible the PCB was the same as on the GV-N680OC-2GD.
I am considering the following devices:
Gigabyte GV-N680OC-4GD
photo#1reverse side (http://www.nix.ru/autocatalog/gigabyte/video/147201_2258_draft.jpg)
photo#2 reverse side (http://www.hardwareluxx.de/images/stories/galleries/reviews/2012/gigabyte-680oc/gigabyte-680-2.jpg)
photo#3 reverse side (http://www.hardwareluxx.de/images/stories/galleries/reviews/2012/gigabyte-680oc/gigabyte-680-5.jpg)
photo#4 reverse side (http://www.ixbt.com/video3/images/gk104-10/gigabyte-gtx680-scan-back.jpg)
photo front side (http://www.ixbt.com/video3/images/gk104-10/gigabyte-gtx680-scan-front.jpg)
Unlike the GV-N680OC-2GD, Y1 has been repositioned to the rear side, and the connectors and resistors differ. I could not find their locations on my own.
ZOTAC GTX 680 4GB [ZT-60103-10P]
back side (http://www.nix.ru/autocatalog/zotac/134525_2258_draft.jpg)
front side (http://www.easycom.com.ua/data/video/1304062052/img/07_ZOTAC_GeForce_GTX_680_AMP_Edition_Dual_Silencer.JPG)
This video card is similar to the GV-N680OC-2GD, so the mod would probably be similar. The board keeps the reference design. It is strange that nobody has chosen it for the experiment.
EVGA GeForce GTX 680 FTW+ 4GB [04G-P4-3687-KR]
Pictures of the PCB I have not found. The mod executed by reefjunkie is described here (https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/msg210155/#msg210155), however with no description or photos of the device before and after the mod. If possible, reefjunkie, please give more details.
EVGA GeForce GTX 680 FTW+ 4GB [04G-P4-3687-KR] - Pictures of the PCB I have not found.
Here is one (taken from http://www.evga.com/forums/tm.aspx?m=1664376&mpage=1 (http://www.evga.com/forums/tm.aspx?m=1664376&mpage=1)):
http://i18.photobucket.com/albums/b147/ArcticSilver/EVGA%20GTX%20680%20FTW%204Gb%20with%20Accelero%20Twin%20Turbo%20II/IMAG0212.jpg (http://i18.photobucket.com/albums/b147/ArcticSilver/EVGA%20GTX%20680%20FTW%204Gb%20with%20Accelero%20Twin%20Turbo%20II/IMAG0212.jpg)
It's identical to the 2GB one and I think this is the reason reefjunkie didn't elaborate on what was done.
The Gigabyte board is not a reference design; they have redesigned it (note it has 6+8 pin power connectors).
Zotac has a board with 6+6, as do EVGA and PNY (which I believe are identical).
So... it's not possible to mod a Gigabyte GTX 680 4G to a K5000?
I have not said that. For me it is simply a difficult task.
...and check what power state it is in when running the benchmark.
P0.
Have you noticed improvements when running SPECviewperf (where it is supposed to be most visible)?
I can confirm the mod done by blanka.
https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/msg210798/#msg210798 (https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/msg210798/#msg210798)
But I pimped it a little bit.
670GTX to K5000 works!
R4 on the front side.
R1, R2, R3 on the bottom side.
K5000 works absolutely stably for me, but has no performance increase in SPECviewperf. I tested with a few different Quadro drivers.
Summary:
GPU name | R1 / 0-7 4th byte | R2 / 8-f 4th byte | R3 / 3rd (high) | R4 / 3rd (low)
GTX 660 Ti | 20k | none | none | 25k
GTX 670 | none | 10k | none | 25k
Tesla K10 | none | 40k | none | 25k
Quadro K5000 | none | 15k | 40k | none
GRID K2 | none | 40k | 40k | none
I flashed it (EVGA 670GTX 2GB 915MHz) with the K5000 bios from techpowerup.
"nvflash.exe -4 -5 -6 K5000.rom" had to be used because of different subsystem and board id.
It started with minor pixel errors but booted into win7.
After driver installation and reboot win7 didn't start anymore.
Flashing it back worked without problems.
Was wondering if there was anyone who has successfully got the EVGA GT 640 cards working?
Hi WillV,
Does this help? I'll go ahead and post my pic of the back as well.
Sorry, but the trace is located under U10; it is impossible to find it without an ohmmeter. You need to find the resistor, and the empty resistor place, connected to pin 6 of U10.
I will be getting a Zotac ZT-60106-10P GTX 680 with 4GB RAM soon-ish and will proceed to start testing. The goal is a K5000.
Why don't you get a card that is known to be moddable?
Hi, shlomo:
Thanks for your update. I am just too lazy to fine-tune my workaround for those resistors.
As for the reason your Windows can't start up: it is because you used the original "K5000" firmware.
DO NOT USE ANY OTHER FIRMWARE UNLESS IT IS FOR THE SAME BOARD LAYOUT.
The original K5000 firmware is for a 4096MB board with the GTX 680 layout.
Since we are using a GTX 670/GTX 660 Ti, using the original firmware and just modifying the PCI device ID is enough.
Please be aware that EVGA's firmware has 2 places that contain its device ID.
Use a hex editor and search for 89 11 (the GTX 670's device ID 0x1189, stored byte-swapped) and change it to BA 11 (0x11BA, the K5000).
Then use Kepler BIOS Tweaker, or any utility you like, to fix the checksum.
This will make the board run as a K5000 smoothly, without any problems, since you have barely changed the firmware at all!!!
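As a minimal sketch of that hex edit (the file names are hypothetical, and the checksum fix is still left to Kepler BIOS Tweaker as described above):
Code: [Select]
# Patch the little-endian device ID in a dumped vbios, per the post above.
# Verify by hand that exactly the expected occurrences are hit - a blind
# replace could in principle match unrelated bytes.
old_id = bytes.fromhex("8911")  # 0x1189 (GTX 670), stored little-endian
new_id = bytes.fromhex("ba11")  # 0x11BA (Quadro K5000), little-endian

with open("gtx670.rom", "rb") as f:       # hypothetical input dump
    rom = f.read()

print("occurrences:", rom.count(old_id))  # EVGA ROMs reportedly contain 2

with open("gtx670_k5000.rom", "wb") as f: # checksum still needs fixing afterwards
    f.write(rom.replace(old_id, new_id))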
My problem is that I got a 4GB Gigabyte GTX 680, which is not the same as the EVGA, and I can't find the correct resistors on the board...
Could someone help please?
It should not make that much difference. What is the brand and model # of your card?
Gigabyte GTX 680 4G.
I couldn't find the Y1 element on the front, only on the back.
-e, --ecc-config= Toggle ECC support: 0/DISABLED, 1/ENABLED
-p, --reset-ecc-errors= Reset ECC error counts: 0/VOLATILE, 1/AGGREGATE
-c, --compute-mode= Set MODE for compute applications:
0/DEFAULT, 1/EXCLUSIVE_THREAD,
2/PROHIBITED, 3/EXCLUSIVE_PROCESS
-dm, --driver-model= Enable or disable TCC mode: 0/WDDM, 1/TCC
-fdm, --force-driver-model= Enable or disable TCC mode: 0/WDDM, 1/TCC
Ignores the error that display is connected.
--gom= Set GPU Operation Mode:
0/ALL_ON, 1/COMPUTE, 2/LOW_DP
-ac --application-clocks= Specifies <memory,graphics> clocks as a
pair (e.g. 2000,800) that defines GPU's
speed in MHz while running applications on a GPU.
-rac --reset-application-clocks
Resets the application clocks to the default value.
-pl --power-limit= Specifies maximum power management limit in watts.
Hi all,
I decided to have a go at finding the straps for GPU 1 on my card, with both success and failure as the result. I was able to locate them and modify the GTX 690 to be a dual-core Quadro K5000, but I made the stupid mistake of running it without a heatsink on the bridge chip between the two GPUs while testing. The chip quickly died from overheating when I got excited and let Linux boot into the graphical environment, so there goes my $1000 video card for the greater good; as such, donations are now more important than ever to replace this card.
I am now running on a semi-faulty GT220 (random lockups) and an AMD Radeon X300 to get my triple head working, but as you can imagine this is a very buggy configuration.
Thank you very much for your work, gnif. I am working on a desktop with a GTX 690, using Blender for architectural rendering. After finding these posts I advised my client, and he purchased a Zotac GTX 680, which I could (thank God) mod to a Quadro K5000. I'm testing it and will take the GTX 690 for modding too. Will I need any extra heatsink to prevent what happened to your hero card? In your opinion, would it be better for my use to mod it to a dual Quadro K5000 or to a K10? Thanks very much in advance, gnif.
(https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/?action=dlattach;attach=42485)
Also that SOIC that sits near the straps I believe is the EEPROM.
Might I also ask if anyone knows what size these resistors are.... 0603 or 0402 ?
Cheers.
I remember people used to do this so they could run 3D animation and video editing software that wouldn't run on the desktop cards... I'm surprised they are skimping on the Linux drivers though... Kind of sad really, as they're all I would recommend for Linux systems, since the ATI drivers were an absolute hellhole for the past 10 years. What can you do, I suppose... :-//
One area where AMD/ATI shines is virtualization. I can pass a 7970 through to a Xen guest with relative ease and get native performance within the VM, useful for gaming no doubt. I believe AMD even worked to help build the code that Xen uses for the gpu passthrough.
In any case, I do not recommend nVidia on linux anymore. I did buy a gtx680 just to do this though.... mmmm FLReset.
Regarding Xen virtualization, I haven't tried Nvidia yet (my Quadro 2000 for testing is in the post), but I sincerely hope the experience is less appalling than with the ATI. Granted, ATI cards almost work whereas desktop Nvidia cards don't work at all with VGA passthrough without a whole raft of extra Xen patches, but the experience is poor at best. All in all, good enough for a demo, but absolutely not good enough for anything meaningful.
My plan is to test whether using a Quadro 2000 (whose drivers officially support VGA passthrough) makes for a workable experience before I spend 4x as much on a GTX680 to modify into a Quadro K5000 or a Grid K2. Ideally I'd rather like to get a Titan and see what happens if I mod its device ID to read as a K5000, but as far as I can tell nobody has ever reported trying it, and I'd hate to end up with a Titan that I cannot use for its intended purpose.
The patches are only 5 files, about 100 lines of code in total. They are just to read the bios from an extracted file rather than from the card at runtime, as well as a few other things that Xen can't pull dynamically, unlike AMD's. It is fairly basic code, nothing fancy.
As far as only "good enough for a demo", I will have to disagree. You may have just had a poor experience and been unfortunate enough to have an uncooperative motherboard and graphics card. I can attest to the fact that the passthrough is fairly stable once it is set up properly (that's the hard part). I had it running for 2 weeks as a gaming VM and it never had a hiccup with an older 5670 of mine. It was impressive! That said, I wouldn't put this into a production environment without a lot more testing.
Keep in mind that the Quadro series does not support FLReset. It is probably not a good idea to use that for passthrough if you plan to start and stop the VM. It will work just fine, but Xen/linux won't be able to reset the card upon VM reboot. If you have to reboot the VM you'll still need to reboot the entire machine. You may have crashes or performance degradation otherwise.
The above also applies to the K5000 if you plan to modify a Titan, there will be no FLReset.
I know that they, too, lack FLreset, but I am not all that convinced that FLreset is all that necessary. Sure, it makes it a little easier for the driver to do its job, but think about this at a low level, like an embedded engineer, for a moment. At the lowest level it comes down to setting registers on the device. Unless the card is poorly engineered and buggy (e.g. it drops off the bus in a questionable, un-re-attachable and uncontactable state), the driver should always be able to set the registers to whatever they need to be to get the card to a known, initialized state, without even any help from the card's BIOS. FLreset is a nicety that means your driver doesn't have to handle the initialization of the hardware itself, but it doesn't strike me at all as a necessity to get something like this working properly.
I'll know one way or the other soon enough.
Edit: Having tried a Quadro 2000, the experience thus far is that it is even more unstable than using an ATI card. Most disappointing. I guess I won't be wasting my time modifying a GTX into a Quadro.
I posted this a while back but no one addressed it:
https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/msg223546/#msg223546 (https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/msg223546/#msg223546)
The TL;DR version is: does modifying any particular card into a Tesla K10, Quadro 4000 or Quadro 5000, or for that matter either of the GRID K1 or K2 variants, enable nvidia-smi support for changing the settings listed in the link above, e.g. ECC/TCC support, application clocks, power limit?
Note that not all the settings may work with a particular (transformed) card. If anyone could try to modify each of the settings for their modified card, I would very much appreciate it!
After reading this entire thread, can I conclude the following?
- a GTX680 can be fairly easily modded into a K5000 / K10 / Grid K2 by fixing some ID resistors
- this results in additional features (like GPU passthrough for VMs and Mosaic support)
- but no performance gain for pro apps (specviewperf 11)
Or has anyone (gnif, verybigbadboy, reefjunkie, etc.) discovered how to actually boost the OpenGL performance of a GTX680?
For many self-employed pro users like me that would be truly awesome!
I have a GTX 680 modified to a Grid K2, passed through to a Windows 7 x64 Xen VM. I am running the nVidia quadro/tesla/grid drivers, version 320.00.
Here is a pastebin of my nvidia-smi output (http://pastebin.com/8tj3M6wi)
Thanks for the output. I'll give a bit of background. Under Linux, there are some card settings that cannot be read, and to my knowledge there isn't any application that provides the equivalent control/monitoring of the Windows NVIDIA Inspector and EVGA Precision X/MSI Afterburner applications. At least with nvidia-smi support, monitoring under Linux would be possible if the settings were reported; however, at least for your converted Grid K2, it doesn't seem like anything else is reported, boo. (i.e. no data for GPU utilization)
ECC would have to be supported by the RAM, which they wouldn't install in a consumer grade card. The power features and other things I would _assume_ are also added hardware bits that physically don't exist on the card.
I am no expert on this subject or with nvidia-smi, though. If you would like me to try something else, I will. I may have just not used the correct commands.
nvidia-smi -e 1
nvidia-smi -dm 1
nvidia-smi -fdm 1
nvidia-smi --gom=0
nvidia-smi -ac 2000,800
nvidia-smi -pl 250
Also, if anyone else can try these with a converted Quadro or Tesla to confirm those cards behave the same way, that'd be awesome too. (nvidia-smi doesn't explicitly state full support for GRID cards, just Tesla/Quadro)
Supported products:
- Full Support
- NVIDIA Tesla Line:
S2050, C2050, C2070, C2075,
M2050, M2070, M2075, M2090,
X2070, X2090,
K10, K20, K20X, K20Xm, K20c, K20m, K20s
- NVIDIA Quadro Line:
410, 600, 2000, 4000, 5000, 6000, 7000, M2070-Q
K2000, K2000D, K4000, K5000, K6000
- NVIDIA GRID Line:
K1, K2, K340, K520
It does say it fully supports GRID K2. I can mod it over to a Tesla and check the difference (if any). I am not at home at the moment, but I will run your other commands to test it out.
The extent of my use of features beyond gaming is just some oclHashcat and some experimental x264 GPU stuff, plus some very rare 3D modeling - not enough to care about performance. But if I am reading your post correctly, this could give added performance in those areas, yes?
Supported products:
- Full Support
- NVIDIA Tesla Line:
S2050, C2050, C2070, C2075,
M2050, M2070, M2075, M2090,
X2070, X2090,
K10, K20, K20X
- NVIDIA Quadro Line:
4000, 5000, 6000, 7000, M2070-Q, 600, 2000, 3000M and 410
- NVIDIA GeForce Line: None
Hello,
after a Google search I found this forum, and I appreciated this article.
Could anyone help me find the right card for the mod?
Thank you
Grab the 680, that card has been modded the most so far :).
Will it be a real Quadro K5000? Performance, quality...?
It will not be a real K5000 in performance and quality, but it should allow certain options not available to the gaming (GTX) series of cards, useful for virtualization etc.
As far as I am aware, the only feature you won't get is the ECC memory. Everything else will be the same, except maybe the clock speeds. Quadro cards tend to be clocked a little more conservatively; GeForce cards tend to be pre-overclocked right to their thermal and stability limits, and occasionally beyond.
Many people fail to realize that Nvidia is also reading the web and following what is going on with all these mods. They did not sit idle through the 4xx to 5xx transition, where the soft-switch mod stopped working, and with the 6xx series they've most likely added more roadblocks to prevent any entrepreneurial people from causing them any further revenue loss.
Doing this is very easy for them; after all, they are the ones who engineered the chips, so they have all the information needed. We are for the most part tapping in the dark, finding things out through trial and error.
Nobody (so far) has demonstrated that a modded 670/680 card can score the same as a Quadro/Tesla/Grid in specviewperf11. And that is the single most important reason why the pro cards are 3-4 times the $$$ of the GTXs...
You are absolutely right, and nobody will demonstrate it, because the cards will not score the same. I'd love to be proven wrong...
Oka-a-ay...??? So, what is it that you think - can it be done or not? You can't have it both ways...
In principle, anything can be done, and I do have some ideas to explore; I'm just waiting for a GTX card to arrive in the mail.
I opted to get a GTX 5xx because interwebs say 6xx series appears to be optimized towards gaming and not business applications, and is sub-par in performance to 5xx.
How exactly do you figure the use case in which 1536 shaders is not at least as good as 512 shaders? I'm pretty sure a GTX680 will outperform a GTX580 in every way possible.
Interwebs is your friend...why don't you do a search and find out. Bigger is not always better. ;)
And I'm asking you to cite a well-informed source with scrutinizable empirical evidence. Surely you aren't about to claim that "it must be true because I read it on the internet".
http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/17 (http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/17)
or if you dont believe that, then just google some more benches and tests.
Anyway, please don't get off topic :)
What's mind-boggling is that it appears both the 6xx and Quadro Kxxx chips come from the same factory line, and the former are just crippled versions of the latter, or chips that didn't pass QA testing for the pro line. But we've already discussed that here, in the earlier posts...
Two points:
1) What software do you use that is particularly well approximated by specviewperf11?
2) How much of a difference are we talking about? A 300% difference, roughly equivalent to the difference in the price tag? I doubt it.
See this post.
Small update:
I know most of you are trying to get professional cards out of consumer ones; I just want a card for passthrough so I can game "on Linux". In every test I have run, this GTX680 modded to a Grid K2 runs exactly the same as the GTX680, except with PhysX. PhysX doesn't work with the "professional" drivers, so I forced the consumer drivers to install. They work and run fine, and now PhysX says it is enabled, but the card will not do any of the work. It offloads everything to the CPU while still reporting to the program that it is working. It's a bit strange and annoying. Going to mod it back to confirm my results, and I'll try a Tesla mod just in case that supports it (probably not).
Nvidia, if you are reading this, I just want virtualization FOR PLAYING GAMES. Seriously, AMD actually worked with the community on this one, why can't you just enable it? It clearly works fine.
Does the card actually work as NVIDIA GRID VGX after modification? e.g. vmware vsphere vgsa?
How did you force the driver to install?
I actually had more success with PhysX. My Win7 64-bit VM with a Quadro 2000 worked fine with PhysX once I installed the PhysX package in addition to the Quadro drivers. I haven't checked whether it was offloading onto the CPU or not, though, but games and GPU-Z all reported PhysX capability.
In the end, I gave up on Nvidia and decided to save myself money, modding effort and time, and just got an ATI card instead, because I, too, only wanted a decent GPU to game with in a VM without having to dual-boot the machine. At least until Steam has a better selection of Linux-capable games and gets their client software working without requiring bleeding-edge glibc.
I just did some inf edits to add the new strings for the card. The driver will not be a signed driver for that PCI ID, though. It doesn't affect performance, but it will throw warnings.
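For anyone wondering what such an inf edit looks like, here is a hedged sketch; the section names vary between driver releases, so these are illustrative placeholders to be matched against the INF that ships with your driver, not copied verbatim:
Code: [Select]
; Hypothetical excerpt - match the real section names in your driver's INF.
; The idea is simply to map the modded PCI ID to an install section and
; give it a display string.
[NVIDIA_Devices.NTamd64.6.1]
%NVIDIA_DEV.11BA% = Section001, PCI\VEN_10DE&DEV_11BA

[Strings]
NVIDIA_DEV.11BA = "NVIDIA Quadro K5000"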
Does the card actually work as NVIDIA GRID VGX after modification? e.g. vmware vsphere vgsa?
I tested VGA passthrough with Xen only. It works fine.
I use a chroot for Steam on Linux so I don't break the rest of my system or have it all bleeding edge. Space isn't an issue nowadays and there is no performance loss. Annoying, but workable.
anyone have luck with the actual VGX? It would be pretty damn nice if the gpu could be spread out to 10+ VM's at the same time
I know that they, too, lack FLreset, but I am not all that convinced that FLreset is all that necessary. [...] FLreset is a nicety that means your driver doesn't have to handle the initialization of the hardware itself, but it doesn't strike me at all as a necessity to get something like this working properly.
And I believe that is exactly the problem, GPU vendors do not want to reveal the initialization and setup of their hardware lest they end up revealing their IP. So the video bios does a lot of heavy lifting for bootstrapping the GPU. intel opregion is at least nice in that it allows for complete OS (in contrast to part firmware) based initialization.
I think you got that backwards. If they implemented FLR, they would need to reveal _less_, because the reset would be a single, standards-defined call to reset the card without having to reveal _anything_ about the hardware. What using proprietary initialization does do, however, is make it more difficult for open-source drivers to be written. This enables companies like Nvidia to charge you 5x the amount for the same hardware, just for changing 2 resistors and half a byte of firmware, to get access to the "pro" feature set of the driver.
It's not about protecting the IP - it's about protecting the high-margin revenue streams.
Hallo! I need to convert my MSI GTX 680 Lightning into a GTX 770 Lightning, as I need a card to SLI (680s are unavailable!)...
Flashing the BIOS doesn't change the device ID, so I suppose I need this hack... Any help, please?
Hello, you may try to change the device ID using the gtx 680 guide (https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/msg207550/#msg207550)
I updated the resistor values for the gtx 770.
Great! Thank you very much! I just hope that the Lightning PCB doesn't differ at that point... Thank you ;)
Bah. I just modified my GTX580 into a Quadro 7000 only to find out that my main/only reason for modifying it (gaming in a VM) is not applicable on this card - Quadro 7000 is not supported for VGA passthrough! :palm:
You may try to change device id to quadro 6000. :)
I think it is strange to mod a 1-GPU GeForce card into a 2-GPU Quadro. ;)
Also, can you provide a picture of the resistor positions please :)?
You found one of the resistors - very good :) Maybe I'll try to find the second one? ;)
That was what I looked into first, but the device ID between the 580 and the 7000 can be adjusted by twiddling only the bottom 5 bits of the ID. To go all the way to the 6000 requires modifying the bottom 13 bits, which is more difficult.
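A quick sanity check of that bit arithmetic, using the commonly listed device IDs (GTX 580 0x1080, Quadro 7000 0x109A, Quadro 6000 0x06D8 - treat these as assumptions):
Code: [Select]
# XOR shows which bits differ between the device IDs.
GTX580 = 0x1080
Q7000  = 0x109A
Q6000  = 0x06D8

print(bin(GTX580 ^ Q7000))  # 0b11010 - only bits 1, 3, 4: inside the low 5 bits
print(bin(GTX580 ^ Q6000))  # 0b1011001011000 - differences all the way up to bit 12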
What is the special ingredient that makes the drivers decide to use a non-crippled OpenGL renderer? They clearly identify the card as a quadro, otherwise it wouldn't work in the VM. Most perplexing...
I suppose I should ask. You are running the Quadro drivers, yes?
Also, try these performance tests under native Windows, not through Xen. You have to do some weird stuff to get full options under Xen.
Neither is the K5000. (http://www.dwhynes.com/nvidia-quadro-k5000/) It still works. I would suggest using the GeForce drivers and checking that. When modding my card to the Quadro, I had different results when using the GeForce vs the Quadro drivers.
Fuse bits in the core, disabled. You think NVidia is so stupid that it would simply be a few resistors on the outside?
That's an interesting hypothesis, but do you have anything to support it?
Here's how it works:
The resistors are used to downgrade boards depending on what is installed. You can only go down, not up. The max capability of the GPU is set in the GPU itself; the resistors determine the downmix (memory, speed, cores, voltage, core voltage, RAM voltage).
You can set the resistors to a combination above what the core is specced at, but it will not enable. It is a logical OR of what the resistors say and what the fuse bits say, and the fuse bits have priority. Sometimes there are no fuse bits, as the chip simply does not have the extra features. One board layout can be used for multiple different GPUs in a family.
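A toy model of the scheme free_electron describes (this is the poster's claim, not a documented NVidia mechanism; note that "fuse bits have priority, you can only go down" behaves like an AND mask over capability bits):
Code: [Select]
# Toy capability-bit model: straps can request features, but fused-off
# features stay off, so the effective set can only shrink, never grow.
FUSES  = 0b1011  # capabilities the die left the fab with
STRAPS = 0b1111  # capabilities the board resistors request

effective = FUSES & STRAPS  # fuses win: requesting more changes nothing
print(bin(effective))       # 0b1011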
Trying to do a proof of concept before investing in the real thing. I have an ASUS GTX660TI-DC2T-2GD5 card and would like to mod it to a Quadro K5000 or some other GPU-virtualization-compatible card (Hyper-V 2012?). The ASUS card seems different from the other images in this thread. Here are some pictures of both sides, if someone would be so kind as to help me out. Thank you,
Hello, looks like they rotated the GPU :)
@free_electron - not always the case.
Like I said: the same PCB layout can hold multiple chips. Some are actually different, some are fuse bits.
More info on the 660Ti ASUS
Hope that helps...
Thanks
... will I lose PhysX support or anything similar by reporting these cards to the computer as a higher model?
Maybe I am missing something? ;)
I have a Quadro 4000 and a Tesla C2075 if either of those would help in this or anything in the future.
So the question is: will a 39kΩ or a 46kΩ one work?
The dom0 will be used by another person for work with the iGPU. Is this possible?
Got it. What about nvidia GTX 670 passthrough? I've been reading some stuff that says it is not possible - only the workstation counterparts are able to do it. I am sorry, I'm just scared of bricking my card, as I'm currently a broke student in a 3rd-world country studying beginner's virtualization in a subpar laboratory :-+
Possible, yes. The i915 (Intel HD4000) will only work if you are using it for a text-only console. X has artifacting with the Xen kernel and Intel driver when using the iGPU on dom0 (presumably Xen is at fault here; no solution in the works AFAIK).
I wouldn't recommend using dom0 as a workstation anyway. I passed the iGPU through to an hvm and used it that way. More secure, bit harder to manage dom0 when it is headless, though.
As for the resistors and such, I suggest reading the thread before attempting anything like this. The questions you posed are all answered.
I think I might be getting somewhere with this.
Attached is the glxinfo output from a genuine Q2000 and a GTS450 modified to Q2000.
They are quite substantially different.
I wonder what differences lurk in the BAR0 registers and if they might be changeable...
I'm not sure what version of NVidia drivers you are using, but as a hint try something older, perhaps from the 270.XX line.
With the original softmods back in the day, I believe NVidia caught up with that and disabled the advanced (i.e. Quadro) features in drivers.
I'm still trying to determine when it occurred, which requires testing many old versions of drivers.
Quote: "What would that prove?"
It would prove that things have changed in the meantime.
Quote: "How do you figure that? How does the driver know it's not a real Quadro? I'm using the latest, 319.23 on Linux. There are no separate Quadro and GeForce drivers on Linux, it's the same driver for both."
I might as well say the same thing back at you: how do *you* figure that? Obviously you have not done anything past installing the latest drivers and drawing a conclusion based on a quick look at it.
Quote: "I suspect the functionality is laser cut out."
Prove it.
Then again, it could be there is extra strapping on the PCB (e.g. a cap across chip pins) that disables certain functionality, but this is hard to eyeball. All 3 GTS450 cards have markedly different cap arrangements under the GPU, which is different again from the real Q2000.
People have soft-modded their GTX cards to Quadros in the past, remember RivaTuner? It had a soft-strap editor that would change PCI Id to match whatever other card.
Also, remember that fellow who posted on NVidia CUDA forums about modding vbios soft-straps on his GTX 480 to a Tesla.
Everyone assumes some kind of manufacturing process or whatnot is involved to cripple GPUs. I suppose that's inherent because this is an electronics forum, but if you have not looked into other aspects, I would not draw conclusions so fast.
Actually I am somewhat positive that crippling occurs on the software side because, unlike you, I tried an older driver which gave me a different result than the new one, AND I took a peek under the hood (inside the driver).
Soft-modding still works to a large extent. The only limitation is the strap bits that are exposed in software.
In the past three weeks I have soft-modded:
GTS450->Q2000
GTX470->Q5000
GTX480->Q6000
GTX580->Q7000
GTX680->Tesla K10
Q7000 and Tesla K10 aren't "MultiOS" capable, so the driver doesn't make them work with VGA passthrough, which makes the mod of limited usefulness over and above enabling the TCC feature in Windows.
Everything required to do this is pretty well documented by the nouveau project. There are also other things that are useful (and easy) to mod, for example the BAR1 size to be >= the size of the RAM on the card. This lets you take, say, a 4GB card and, when you're not gaming, map it as a really fast block device for some fast swap or whatever you might need.
Kepler BIOSes are quite different, but having spent an hour looking at the dump out of my GTX680, I've worked out most of the bits relevant to this thread. The strapping on them is... odd. It's done in two places, but what is odd is that it is done the same way in two places.
The other odd thing is that the PCI device ID is set in two places, and it has a profound effect on the way the card is handled. For example, just changing the strapping for the device ID from GTX680 -> Tesla K10 works fine. But if you also change the PCI ID, at least in the primary location, the card no longer works properly - it gets detected as a standard unaccelerated VGA adapter, even though the Quadro driver is still running it, and the RAM on it shows up as 2990MB instead of 4096MB. Put the device IDs back to GTX680 (but keep the K10 strapping), and it works fine. I'm guessing there may be a checksum on the blocks containing the PCI IDs, but I didn't have a genuine Tesla K10 BIOS handy to cross-check against last night.
It seems to me you are mistakenly comparing the ability to be detected as something else (spoofing) with a real conversion of one card into another. Perhaps in your application of this modification (Xen in Linux) it is sufficient to just spoof the wanted card, but what I'm looking for is actually getting the Quadro performance instead.
I'd like to know how you modded the GTX 580 to Q7000 with soft straps and had it work.
Because I've spent some time on a similar project using a GTX 570, and the only real way to do this was through changing hard straps (the driver ignores vbios soft-straps and the system will not boot, will boot with a black screen, or will not initialize the GUI). Some, if not all, Quadro vbioses actually check whether the ROM and board PCI ID match and also lock out.
I am able to get a Q7000 or C2075 through hard straps, and the system actually boots and initializes. Then I am able to install the latest Quadro/Tesla driver; alas, the performance is not there due to whatever limitations are imposed (I'd say software).
Why don't you share that information here? As it is not documented anywhere, it will help everyone else.
You are probably finding the PCI ID in both the VBIOS and the EFI sections; that's why there are two matches.
On newer Windows systems and Macs, EFI is used to init/boot the card and so the PCI ID in that section also has to be changed - needs to match hard straps. EFI code itself also checks for the Vendor ID, PCI ID and Board ID, so that has to be changed accordingly.
As far as I know the checksums exist for the entire ROM and for the HWINFO section (soft-straps), although it is true that the GK ROM format has changed and they've introduced wrappers for the old VBIOS. Obviously one needs to make sure the HWINFO checksum is correct and then adjust the entire ROM checksum.
You've also probably noticed a cryptographic signature at the end of the ROM which UEFI/Winblows uses to verify authenticity of hardware. In Linux that is irrelevant but thanks to M$, they've managed to create a deadlock and control all OEMs who produce motherboards with this whole signature issue (remember Linux booting problems until the whole UEFI issue was figured out).
The usual way. Change the PCI ID, re-calc the checksum, flash the card, change the straps, in that order. It "just works". Doesn't really gain you much, though. The Mosaic app might work, but since the Q7000 isn't MultiOS, it won't run in VGA passthrough mode.
Quote: "I got to that point without heating up my soldering iron."
For me, the Windows Quadro driver would not install if I did not change the hard-straps. Matter of fact, in some cases I could not even get to the Windows GUI to install the driver.
Quote: "Hmm... I wonder if this could be stripped out to reduce the BIOS to the old, EFI-less state that is more open to modifying. It'd also make it a lot more similar to the previous BIOSes in terms of understanding what the various bits do."
That should be doable, but you have to make sure that you include all ROM parts (minus the EFI). In Fermi cards, the ROM had two parts (one was the vbios and the other another device, I believe HDMI audio).
Quote: "I don't own any EFI motherboards, but I would have thought the whole EFI wrapping should be strippable out from the VBIOS. Once you skip past the 0x400 bytes of header, the rest of the BIOS is similar in terms of offsets of known areas to the Fermi BIOSes."
IIRC, according to the PCI firmware spec the wrapping stuff ends up being ignored on older computers with BIOS, until the 55 AA signature is detected. On the new UEFI, they will probably read the wrapper and then read the rest (55 AA, on).
Quote: "Ah, so you change the straps from the command line with nvflash."
I actually edit the straps while I'm editing the ROM. I find it easier because all I have to do afterwards is flash the ROM, although I do make sure that nvflash verifies my ROM first.
Quote: "I may be wrong but IIRC NiBiTor doesn't update the strap checksum, only the full checksum."
Who said anything about NiBiTor? :)
I edit everything by hand in a hex editor - I've done so many things now that I know where to look and what to do with it blindfolded.
Quote: "Sounds strange, I've never seen an issue like that on any of the cards I modified, it always "just works"."
I deliberately chose the GTX 570 because the GTX 4xx run much hotter than the 5xx, and if they even have any performance improvements over the 5xx I think they're negligible. Although even the GTX 5xx run hot - heck, they are all crap due to the terrible NVidia design which makes the GPU run an additional 10-15C hotter if you plug two monitors into the card.
Then again, modifying a GTX580 is a complete waste of time anyway. GTX480 is just as fast (in some cases faster due to dual DMA channels when modded to Quadro/Tesla), and is trivial to modify into a Q6000. I only modified my 580 because I already had it and I was seeing odd driver clashing when using a GeForce and a Quadro in the same system under Windows. One driver would end up driving both cards, usually the later one (the GeForce one has a higher version number).
Quote: "Handy, so you could effectively s/.*55AA// and strip out the EFI capability and defeat crypto. Nice. Presumably the trailing garbage would just get ignored then."
You can't defeat the signature, but I think Linux will ignore it altogether so it does not matter if it's there. The only reason it would matter is to Windows.
The only question is whether there is an extra ID bit in Kepler soft straps. These are not yet fully documented. It could be one of the unknown bits (but it's not the unknowns next to ID bit 4, I tried those). The reason I say that is because until Kepler, all cards were modifiable using only soft-straps into any other card sporting the same GPU and memory type. And I only mention memory type because the GDDR3 GTS450 differed in more than just the last 5 bits of device ID, I had not seen such a case before.
Also, because of all the new crap added to the ROM, the EEPROM is bigger too - instead of 256KB it's 512KB now, which thus requires soldering when modding some cards.
Quote: "Can you please share the strap checksum algorithm? How do you compute it after modifying the strap manually?"
The algorithm is the same as for the ROM itself: 0x100 - (S & 0xFF), where S is the sum of bytes from offset 0x58 to 0x6a (0x6a is always 0xA5, it's the HWINFO signature) in a standard vbios image. Offset 0x6b is then the checksum itself, which is not part of S.
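In code form, assuming the standard Fermi-era vbios layout quoted above (the offsets come from this thread, not from any official documentation):

def hwinfo_checksum(rom):
    # Sum bytes 0x58..0x6a inclusive; 0x6a should be the 0xA5 HWINFO signature.
    s = sum(rom[0x58:0x6b])
    return (0x100 - (s & 0xFF)) & 0xFF  # masked so a zero sum yields 0x00, not 0x100

def fix_hwinfo_checksum(rom):
    assert rom[0x6a] == 0xA5, "HWINFO signature not where expected"
    rom[0x6b] = hwinfo_checksum(rom)  # checksum byte sits right after the signature

Usage would be something like: rom = bytearray(open("backup.rom", "rb").read()), edit the strap bytes, call fix_hwinfo_checksum(rom), then fix the whole-ROM checksum before flashing.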
Quote: "I presume you speak of the 7xx series - My GTX680 ROM is 180KB."
Actually GTX 680 ROM is over 200KB due to the EFI portion and all the other crap like secure signature and new wrappers, etc. That is on a GTX 680 2 or 4GB I've seen.
Quote: "As for Windows - the only reason for using that is disappearing with Steam acquiring Linux support."
Hear, hear!
Quote: "Actually GTX 680 ROM is over 200KB due to the EFI portion and all the other crap like secure signature and new wrappers, etc. That is on a GTX 680 2 or 4GB I've seen."
As far as I am aware there is no UEFI stuff in the ASUS BIOS by default. I believe they have released an unofficial-official hybrid BIOS. By default, though, that is all left out.
Side thought: why would you care about stripping anything out of the BIOS? It won't result in added performance.
Has anybody found where the second set of device ID strap resistors is on the GTX690 yet?
Hello everybody,
I'm new on this forum and I would like to thank everybody (and especially gnif) for the mod of geforce GTX 680 to Quadro K5000.
I'm a gamer. I don't want more performance, but just one (Quadro) functionality more.
I play games in 3D with the help of nvidia 3D vision on my monitor (input signal 1080P 3D 120Hz).
I would like to do the same but with passive 3D dual projection. I therefore need the Quadro K5000's option to activate passive stereo with 2 projectors connected to the graphics card. Here is the link to the procedure to do passive stereo with a Quadro:
http://nvidia.custhelp.com/app/answers/detail/a_id/3012/~/how-to-configure-passive-or-dual-pipe-stereo-with-quadro-cards-in-windows-7
Could someone please confirm that the option is available with a modded GTX 680 to quadro K5000?
Thank you,
Soulnight ;)
I'm in the same boat. I'm just trying to figure out the best way to get a dual projection setup with omega filters to work. There are a couple of methods to get frame-synced dual projection to work. One is using TriDef SBS with AMD Eyefinity at 3840x1080, and the second method is using a Quadro. I want to use 3D Vision and I'm also ready to try to mod my 670 to a K5000 to get the Quadro features. I think 3D Vision dual projection will work with a modded card. I'm a bit unsure, however, whether a Quadro gives frame-synced dual output from one card...
I am happy that I am not alone! :clap:
I also want to use the omega filters for the dual projector setup. And I think I will buy two ACER H9500 projectors (2D ONLY, and costs just 850€ with lens shift and 1300 lumens calibrated!). I'm just worried about the 1:1 HDMI mapping problem, but that's another story...
I know the TriDef solution, but it's not ideal and doesn't work with all the games... and you MUST use TriDef.
I would like to have a choice and to be able to play every game: therefore the Quadro solution.
The real Quadro K5000 is not good enough for 3D games (and is expensive) and I really think that a modded GTX 680 (or GTX 670) is the perfect solution to get the functionality of the Quadro AND the 3D gaming performance of the GTX 680!
I am pretty sure that a Quadro card gives frame-synced dual output from one card...
See the link: http://nvidia.custhelp.com/app/answers/detail/a_id/3012/~/how-to-configure-passive-or-dual-pipe-stereo-with-quadro-cards-in-windows-7
The real question is: does it work with a modded gtx 680 as well?
Plus, I would like to go SLI with 2 GTX 680s modded into Quadro K5000s. But I've read that Quadro only supports SLI for chosen "complete workstations" from Dell etc... However, it may still be possible to do it since they won't be true K5000s.
Has someone succeeded in activating SLI with 2 GTX 680s modded into Quadro K5000s?
@ Jager: how far are you from testing the dual projector setup with the modded GTX 670? When? ::)
Thank you!
Soulnight ;)
Ordered resistors from eBay; those are quite tiny (1mm x 0.5mm). I have done some research and I think up to 2-GPU SLI is no problem to get synced frames; above that the K5000 needs the Sync card, and it should work with modded ones too. After all, surround setups for desktops are synced. When I get those resistors (39K is the closest to 40K I found, hope it works) I'll do this mod immediately.
I hope it will work with a 39K resistor... How do you know the size of the resistors you must use? What are the part references?
The problem for SLI with Quadro cards is that Nvidia doesn't enable the SLI function for Quadro cards UNLESS they are used in a specific workstation that they have certified!
Here is the link to the nvidia sh**t:
http://www.nvidia.com/object/quadro_sli_compatible_systems.html
But I do hope that, because they are "just" modded GTX 680s, the false K5000s can still do SLI WITHOUT a specific nvidia-certified workstation...
Damn, missed that one! This is bad news indeed :(. I hope it is fixable with some softmodding, or will just work with a modded one. The 690 should be affected as well. As for the size of the resistor, if you mean physical size, I just measured those; the other info comes from this thread.
Yeah, I mean the physical size of the resistors to use. Could someone post a link to the "right" resistors on the internet? Thank you...
Quote: "The real Quadro K5000 is not good enough for 3D games (and is expensive) and I really think that a modded GTX 680 (or GTX 670) is the perfect solution to get the functionality of the Quadro AND the 3D gaming performance of the GTX 680!"
How exactly do you figure a K5000 isn't good enough when a GTX680 is? The spec between them is near identical.
Quote: "I am pretty sure that a Quadro card gives frame-synced dual output from one card..."
I'm pretty sure all Nvidia cards do, going back at least to 8xxx series (I am running an IBM T221 DG1 off an 8800GT with two DVI outputs, and that monitor supposedly requires genlocked inputs).
Vsync is a subtly different issue, and again, I'm pretty sure all Nvidia cards do run vsynced across separate outputs if they are told to run multiple screens off the same frame buffer. Again, on a T221 DG1, ATI cards (4850; cannot use 5xxx+ since they only come with single DL-DVI outputs) produce tearing along the middle of the screen (genlocked but not vsynced), but Nvidia cards (tried with 8800GT, various 4xx, 580 and 680 cards, quadrified and vanilla, and a Quadro 2000) do not produce the tearing artifact - which implies they all run multiple screens vsynced. So provided you configure your setup correctly, it should work just fine.
Quote: "Plus, I would like to go SLI with 2 GTX 680s modded into Quadro K5000s. But I've read that Quadro only supports SLI for chosen "complete workstations" from Dell etc... However, it may still be possible to do it since they won't be true K5000s."
I didn't think Quadro cards came with SLI bridge ports. My Quadro 2000 certainly doesn't.
One thing is for sure, though, two separate cards won't be providing genlocked output, which means they won't be vsynced either.
How? Pretty simple, with comparisons:
Have you done some gaming with that monitor?
It's sure that dual projection stereo 3D is out of sync with the GeForce line when using TriDef (up to one frame out of sync), and GeForce does not even support dual output S3D via 3D Vision, but Quadros support it and it's synced. There is no 2-screen Surround support for nvidia that provides a synced signal so that TriDef could be used in SBS mode. Maybe the GeForce drivers enable sync when an IBM T221 DG1 is connected, like those drivers are going to do with those Sharp-based 4K monitors (I think the Asus one is already supported).
The K5000 is capable of synced dual projection with up to 2-way SLI without the need of the Quadro Sync card; it can output synced Mosaic to up to 8 displays. >> http://www.nvidia.com/object/quadro-scalable-visualization-solutions.html
AMD Eyefinity puts out a synced signal for expanded screens for fullscreen apps (3840x1080 for example).
Quote: "How? Pretty simple, with comparisons:"
I wouldn't be so quick to disregard the K5000's gaming performance. The only thing the 680 has on its side is higher clock speeds, but they are not _that_ much higher. And the GPU core is the same.
Higher clock speed and a better power supply... and 350€ for the GTX 680 against 1500€ for the K5000.
For gaming purposes, once the modded GTX 680 has the Quadro functionality, I do not see any advantages for the real K5000... ^-^
I suspect this is a different use case. The difference is that in my setup I am using a single large frame buffer to produce one large desktop across two separate "screens" (which just happen to form a single TFT panel).
In the case of 3D stuff, the frame buffer is probably not the same. Can you configure it as one large "desktop" with one set of 3D images being displayed on columns 1-1920 and the other set on columns 1921-3840?
The only way I can see you getting synced and genlocked frames is by all outputs to screens coming off a single card, i.e. traditional SLI. That allows you to do processing on two cards but the monitor signal all comes out of a single frame buffer and a single vsync. Any more than that requires an external genlock device (QuadroSync mentioned on that page is a separate hardware device).
Depends on what you want to use it for. Evidence thus far shows that at least some of the GL functionality seems to be disabled at the hardware level (certain GL primitives are missing; see the dump of glxinfo for a Quadrified GTS450 vs. a real Quadro 2000 a few pages back).
Looks like the 670 PCB and the K5000's are exactly the same. The components are the same, etc. However, the 3D Vision Pro connector is missing. I wonder if soldering this on would give that option too... The other difference is that the K5000 has the power cords soldered on, while the 670 has connectors soldered on the board.
Yes, but what for? 3D Vision Pro is for use with professional applications, and those do need the performance of a true K5000. ;)
That's true, I don't need it, but someone might like to use RF glasses. It should work with any 3D Vision compatible device as well and can be used for gaming like normal 3D Vision. IR was not good for my setup (flashes etc. and needed a looong USB cable), but I'm not using this anymore because the W1070 is DLP-Link, and of course the goal is passive 3D...
I'm sorry to tell you, but the BenQ W1070 (great projector) is not so good for the omega filters. It's a DLP: great! But the throw ratio is not big enough (max 1.5:1 with full zoom) and you will have colour uniformity problems with it. Motormann advises a throw ratio of minimum 1.5:1 and ideally 2:1.
Source: http://www.avsforum.com/t/1407101/official-omega-3d-passive-projection-system-thread/150#post_22927789
I think that if you cannot max out the zoom on your W1070, you should buy 2 cheap H9500s (2D), which have vertical and horizontal lens shift and are altogether better than the W1070.
Quote: "In the case of 3D stuff, the frame buffer is probably not the same. Can you configure it as one large "desktop" with one set of 3D images being displayed on columns 1-1920 and the other set on columns 1921-3840?"
There is no way to do a single wide "desktop" for two monitors and use it for gaming. This is where nVidia Surround is needed, and it does not support 2 monitors. I can of course do a wide desktop or clone one to another.
The Sharp-based 4K monitors use the same large-frame trick (1920x2160+1920x2160) as your monitor to achieve 4K at 60Hz, so maybe there is driver-enabled "mosaic/surround" for your setup too. The Sharp 4K is not supported yet and therefore does not work, but the ASUS (rebranded Sharp) works with the latest drivers - the driver looks up whether the ASUS is connected and enables "mosaic/surround" support for it.
Quote: "The only way I can see you getting synced and genlocked frames is by all outputs to screens coming off a single card, i.e. traditional SLI. That allows you to do processing on two cards but the monitor signal all comes out of a single frame buffer and a single vsync. Any more than that requires an external genlock device (QuadroSync mentioned on that page is a separate hardware device)."
The link says otherwise: with the K5000 you can do synced Mosaic up to 8 screens with SLI OR with Quadro Sync. You can use the (limited) outputs from 2 cards when doing Surround with GeForces (600 series and up) too, and Surround is always synced.
Maybe Windows 7 is deficient in this way - I wouldn't know. If it is, why would you want to use it?
All I can say is that XP64 and Linux both work wonderfully with this setup. :)
My situation is that I have an IBM T221 and a GTX 590; however, it didn't support Mosaic mode so it cannot give me 3840x2400 decently, thus I had to buy another ATI card which has an analogous feature.
K6000 Quadro. http://nvidianews.nvidia.com/Releases/NVIDIA-Unveils-New-Flagship-GPU-for-Visual-Computing-9e3.aspx. 780/TITAN hack needed :)
YES! A hack of the GTX 780 would be needed indeed... A lot more horsepower for playing with 3D Vision in 1080p! Someone?
How are you doing, Jager? I'm waiting for you to mod your GTX 670 and tell me that it works for 3D passive dual projection with 3D Vision, you know :)
Have you bought your second BenQ W1070 yet?
K6000 looks pretty sweet, but the press release says 2880 cores. Titan only has 2688. Modding it might not work. It has been reported before that modifying a 144-shader GTS450 into a Quadro 2000 doesn't work, while I have successfully modified 192 shader versions into Quadro 2000s, which implies you have to have at least as many shaders and memory controllers for the modification to work.
Also the K series seems to not be supported for MultiOS VGA passthrough virtualization, and there is nothing listed in the press release about this being available on the K6000. Since one of the main benefits of modifying a GeForce into a Quadro is precisely for enabling VGA passthrough, the benefits from a Titan->K6000 mod seem much slimmer even if it does work.
GTX 670 to "K5000" seems to work so maybe this is doable.
What exactly is your GTX 670? Has someone already modded exactly your model, or do you hope to find which resistors you need to modify?
I have a reference design 670, the same PCB as the K5000 has. My resistors arrived too, and I will do some soldering tomorrow or the day after.
You mean today!!! :clap: And then you buy a gtx 780, you mod it, and you tell us how to do it. And then I follow your steps :) ::)
I will try 670 to "K5000" without failing first. "K5000" SLI is the main target at the moment, but I think it will never work without a certificated workstation. Maybe someone will do a HyperSLI for Quadro some day, but I think it's not going to happen. I wish I had the skills to find out if 780 to K6000 is even possible :)
And where are all the skilled people of this forum? All on vacation? Or did nvidia have them assassinated? :wtf:
Well, I managed to order capacitors instead of resistors :palm: |O Need to find a 40K somewhere; 15K I already have...
PS. The link I posted earlier was wrong; I've edited it.
I'm pretty sure 15K is all you actually need. Modify the 3rd nibble to change the ID from 0x1189 to 0x11A9. From there on you can use the 5 bits of the soft strap to switch between K5000 or GTX680M (which has the same spec as GTX670). It's what I'll be doing to my GTX680 this weekend. Make the hard-strap ID 0x11A0 (GTX680M) instead of 0x1180 (GTX680) by changing only the 3rd nibble resistor. Then from that I can soft-mod to GTX680MX (0x11A3, same spec as GTX680) for non-Quadro uses or a Grid K2 (0x11BF) for virtualization.
You may find it preferable to just strip all the UEFI crap out while you're BIOS hacking. I keep meaning to write up a BIOS modding guide for hacking most things from the 4xx series onward, but I've not had the time to do it in the past month. :(
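For what it's worth, the nibble arithmetic described above works out like this (a hypothetical little helper that just restates the IDs quoted in the post):

def with_third_nibble(dev_id, nibble):
    # Replace the 3rd hex digit (counting from the left) of a 16-bit device ID.
    return (dev_id & 0xFF0F) | ((nibble & 0xF) << 4)

print(hex(with_third_nibble(0x1189, 0xA)))  # 0x11a9
print(hex(with_third_nibble(0x1180, 0xA)))  # 0x11a0 - GTX680M
# The 5 soft-strappable low bits then move the ID anywhere in 0x11a0..0x11bf,
# which covers GTX680MX (0x11a3) and Grid K2 (0x11bf) as described above.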
I can confirm the mod done by blanka.
https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/msg210798/#msg210798 (https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/msg210798/#msg210798)
But I pimped it a little bit.
670GTX to K5000 works!
R4 on the front side.
R1, R2, R3 on the bottom side.
The K5000 is absolutely stable for me, but has no performance increase in SPECviewperf. I tested with a few different Quadro drivers.
Summary

GPU Name       R1 (0-7, 4th digit)   R2 (8-f, 4th digit)   R3 (3rd digit, high)   R4 (3rd digit, low)
GTX 660Ti      20K                   None                  None                   25K
GTX 670        None                  10K                   None                   25K
Tesla K10      None                  40K                   None                   25K
Quadro K5000   None                  15K                   40K                    None
Grid K2        None                  40K                   40K                    None
I flashed it (EVGA 670GTX 2GB 915MHz) with the K5000 bios from techpowerup.
"nvflash.exe -4 -5 -6 K5000.rom" had to be used because of different subsystem and board id.
It started with minor pixel errors but booted into win7.
After driver installation and reboot win7 didn't start anymore.
Flashing it back worked without problems.
Is it me, or does the mod made by blanka seem a lot more complicated? Do you need a cable like blanka did? Blanka also said you needed to add R3 manually since there was no place for it, but shlomo.m didn't seem to have this problem.
Here is the link to shlomo.m's picture:
https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg217534/#msg217534
Here is the link to blanka's mod with pictures as well:
https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/msg210798/#msg210798
What do you think?
ps: I would like to buy a GTX 670 instead of a GTX 680 because the performance is about the same even with 3D Vision, and the GTX 670 costs 100€ less! In Germany, the GTX 670 is selling for 260€! What model should I take?
What exactly was shlomo.m's model: EVGA 670GTX 2GB 915MHz?
Thank you!
It should be noted that this mod was originally performed not to get a high performance Quadro or Tesla card; it was done to unlock additional features such as Mosaic support, which does indeed work.
Interesting, good work, Jager. I'm surprised that Mosaic doesn't work, it did on older cards. It looks like you only managed to achieve modifying it to Grid K2, though. The main advantage of modifying to a Grid card is VGA passthrough. It is quite plausible that Mosaic and 3D are not supported on the Grid series because they are specialist cards for virtualization. You might find it works when you change it to a K5000.
Note that if you have the card in Grid K2 mode (make sure it is reliably detected as such; I think you are supposed to have a 40K resistor rather than disconnected, or things can become problematic), you can modify to K5000 using a BIOS-only mod. Strip out the UEFI header (the first 1024 bytes, everything up to 0x400; there you'll find the 55 AA header marking the beginning of the real BIOS). Then strip out the tail (UEFI crypto certs and a bunch of whitespace; trim it out, the end BIOS should be a little under 64KB). Then you can use most of the normal tools to edit it like before. Edit the device ID in the BIOS, re-calculate the checksum (using nibitor, or write a program to do it for you), nvflash it to the card and then use nvflash to change the straps to match the device ID, and you should be good to go.
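A rough sketch of that strip step (hypothetical helper: the 0x400 wrapper offset and 55 AA header are as described in this thread, and the length byte at offset 2 being the image size in 512-byte units is the standard option-ROM convention):

def strip_uefi(rom):
    body = rom[0x400:]  # drop the UEFI wrapper in front of the legacy image
    assert body[0] == 0x55 and body[1] == 0xAA, "no legacy ROM signature found"
    size = body[2] * 512  # legacy image length field, in 512-byte blocks
    return body[:size]    # everything past this is certs/padding - the tail

with open("gtx680_dump.rom", "rb") as f:  # hypothetical file name
    legacy = strip_uefi(f.read())
print(len(legacy))  # should come out a little under 64KB, per the post above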
Hi, I got it working as a K5000 by using a pot @ 40K.
Using the Mosaic utility did not allow me to set 3840x1080. I tried several drivers. There are no additional setup options in the nVidia control panel, so no "workstation" tree. I did try the Mosaic utility, but "query lgpu" shows supportmosaic=0 plus some other information. Whenever I try to enable Mosaic with "set rows=1 cols=2 out=0,0 out=0,1 res=3840,1080,60" it returns "error flag not supported". Will do more testing later.
@ Jager: here: http://nvidia.custhelp.com/app/answers/detail/a_id/3012/~/how-to-configure-passive-or-dual-pipe-stereo-with-quadro-cards-in-windows-7
They do not talk about mosaic, do they? ???
I know those are questions for dummies (sorry) but:
1) have you erased any trace of the previous geforce drivers?
2) Do you have 2 displays connected at the time you try to activate mosaic?
Did some more testing. I flashed my GTX670 with the K5000 BIOS. The GTX670 shader count, memory size and board ID are different, so it was quite clear that this would not work. The interesting thing was that when I flashed back to the GTX670 BIOS, something was left behind from the K5000 BIOS. The size is now 129KB instead of the 96KB a GTX670 normally has; the K5000 BIOS is 221KB. When I first opened GPU-Z it recognized the GPU as GK104GT and the BIOS as "modified", but after some time without touching the BIOS it showed up correctly. The funny thing is that now I have <workstation> options in the nvidia control panel, but only "change ECC state", and after looking at the BIOS with a hex editor it clearly shows that those extra 33KB included the ECC option (this code is after the normal 670 BIOS code). So it is now quite clear that a BIOS for the modded card is needed to get this thing actually behaving like a K5000; the soldering is just a name change without some hardcore BIOS editing.
That is indeed interesting - does the ECC toggle actually work? Does the memory amount shrink by 1/9? If so, any chance you could PM me a link where I could download the before+after BIOS for analysis?
TriDef doesn't support Nvidia SLI... You would instead need a beast: GTX 780 :) Is it nice to play TriDef games in passive 3D 1080p 60Hz? :P How do you know it is synchronized for both projectors?
To be honest, 1080p 60Hz is not that great because of the framerate. TriDef also supports dual projection only for DX9. Many of my favorite games do not work as well with 3D Vision + Helix mod. It is quite easy to tell when 3D is out of sync; maybe I'll do a side-by-side video to confirm this...
Too bad 670 to K5000 was a "fail", but I will keep looking. Maybe nvidia inspector's stereo settings could help, and there are 3D Vision hacks etc...
Quote: "That is indeed interesting - does the ECC toggle actually work? Does the memory amount shrink by 1/9? If so, any chance you could PM me a link where I could download the before+after BIOS for analysis?"
The option does not work. I can enable it, but after a restart there is still a small star next to the checked checkbox that says it will be enabled after restart. It definitely tries to enable on boot, because the screen blanks in a way that it won't normally do.
A before+after BIOS check would still be handy if it is different.
The other possibility, if you used a 3rd party VBIOS flash package, is that it also brought an FPGA blob with it and somehow flashed that onto the card via some less documented means, and when you downgraded the BIOS that wasn't reverted.
If all your video outputs work fine, I see no reason to not use a Quadro BIOS - but you will still need to modify the straps in the BIOS to match the hard-straps you need to override (if any) and you may also want to change the clock speeds and timings to the GeForce spec.
I also have an 8800GT lying around, so I flashed it to an "FX 3700" - same thing. With the modded 8800GT BIOS no Quadro features are actually enabled, just the name change with this one too. Has any Quadro mod ever been made that actually enables Quadro features? How does nvidia actually look for these features? When doing 3D it is possible to change 3D modes that are not listed in the control panel by alt-tabbing and editing a registry key. Would it be possible to do an injector that enables features? The ECC feature I have now was somehow left behind in the BIOS, so maybe nvidia is just looking at some flags in the BIOS and enables the feature even if it is not actually possible to do with the card. Many Quadro features are possible with a GeForce, as in Linux you can do Mosaic etc. with a GeForce too.
Really thanks for the efforts of gnif and verybigbadboy
However, since I have an EVGA GTX670 with the same PCB layout as the GTX660Ti,
I needed to find the modification myself, and here is the result.
For the 4th digit, as everyone already knows, it is right at the position of resistors 1 and 2.
Depending on which card you have, you can remove resistor 1 and change it to Tesla (40K), Grid K2 (40K) or Quadro (15K) on resistor 2.
For the 3rd digit, it is the tricky part.
The low half of the 3rd digit is set on the top side of the PCB with resistor 4.
You don't need to do anything for the Tesla K10.
However, if you need to change it to a Quadro K5000 or Grid K2,
you need to remove resistor 4 and install resistor 3 "MANUALLY", since there is no place for resistor 3 anymore on the PCB of the GTX670 and GTX660Ti.
As you can see in my attached bottom-side photo of the "rework",
you need to connect EEPROM pin 6 through a 20K resistor pulled up to VCC.
My rework is quite ugly but it works fine!
Please be careful - modify your card at your own risk!!
Summary

GPU Name       Resistor 1 (0-7, 4th digit)   Resistor 2 (8-f, 4th digit)   Resistor 3 (3rd digit, high)   Resistor 4 (3rd digit, low)
GTX 660Ti      20K                           None                          None                           25K
GTX 670        None                          10K                           None                           25K
Tesla K10      None                          40K                           None                           25K
Quadro K5000   None                          15K                           20K                            None
Grid K2        None                          40K                           20K                            None
Yes, running Crysis in a VM on a Quadrified GTX480 (Quadro 6000) on my 2nd T221 and there is no tearing whatsoever. Nor was there any tearing on my other T221 with the 8800GT card, but there is massive tearing visible with the Radeon 4850.
Quote: "1) Removing both resistors (so all 0-4 resistor locations are empty) results in device ID of 0x11BF, i.e. Grid K2 - which is what I was aiming for anyway. From there on I can soft-mod to K5000 or GTX680MX if required (or anything else with IDs between 0x11A0 and 0x11BF)."
Did you add two 40k resistors in the correct locations? If you did not, this could be the cause.
verybigbadboy notes he has some stability problems when they are not on.
I do not have these stability problems (but I did add them on for good measure a month back).
You may be having problems because of this that neither one of us experienced. Try adding the resistors.
Quote: "In K2 mode, the card works for VGA passthrough on Xen. Sort of. Almost. It works fine at up to 1280x800. If I select any res higher than that, it fails. As far as I can tell, the monitor is told to go into sleep mode. Tested with 320.49 and 320.78 drivers."
I am running Xen 4.2.2 with no patches (save a SLIC table I added in to activate Windows). The unofficial nVidia patches do not have to be used, but they did work for me if you wanted to do GPU passthrough without the Cirrus card. My current graphics driver is 320.00 (http://www.nvidia.com/object/quadro-tesla-grid-win8-win7-winvista-64bit-320.00-whql-driver.html). Both the Geforce and Quadro/Grid drivers give me the same performance. I have not upgraded to test the new ones. Try that revision and see if it helps.
Small update guys.
I finally got around to playing a little more with my GTX680. Soldering 0402 components manually is an absolute bitch even with solder paste, a decent magnifying lamp, good eyes and steady hands.
Findings:
1) Removing both resistors (so all 0-4 resistor locations are empty) results in device ID of 0x11BF, i.e. Grid K2 - which is what I was aiming for anyway. From there on I can soft-mod to K5000 or GTX680MX if required (or anything else with IDs between 0x11A0 and 0x11BF).
2) In K2 mode, the card works for VGA passthrough on Xen. Sort of. Almost. It works fine at up to 1280x800. If I select any res higher than that, it fails. As far as I can tell, the monitor is told to go into sleep mode. Tested with 320.49 and 320.78 drivers. Has anyone else found this? I haven't done any BIOS modding yet, but did anyone else see a similar issue? Is this something Nvidia did in recent drivers to cripple modified cards when running in a VM? I tested the K2-ified card in another bare metal machine with the same monitors, and in all cases there it works fine. But on my VM host, when passed through to a VM, it works great up to and including 1280x800, and the screen just remains blank at higher resolutions. Talk about bizarre.
This is an interesting finding - my soft-Quadrified GTS450 (Q2000), GTX470 (Q5000), and GTX480 (Q6000) cards work just fine under the exact same conditions. I wonder if this is some kind of an obscure compatibility issue between Grid and Qx000 cards in the same machine since they have different size memory apertures - something could be getting confused.
Until I can get this resolved, modifying of my GTX690 is on hold.
The card doesn't crash/lock up - if I don't click the button to keep the new mode, it reverts back to the previous mode after 15 seconds, at which point it works again. And it works fine on a different machine (bare metal XP64, different motherboard).
Good morning. I successfully tested the mod of my GTX770 into a K5000. As I could not find the resistors, I used 50K trimmers, like these: (http://www.huinfinito.com.br/383-494-large/trimpot-multivoltas-vertical-100k-carenagem-curta.jpg). My congratulations!
I configured one for 15K and one for 40K; the board used was a Zotac GTX770 AMP!
Photos of the test board and the references follow. I will do more stability testing later. If anyone needs the BIOS, I'll upload it.
(http://i.imgur.com/3X8pbpO.jpg)
$ diff <(xxd MSI.GTX580.1536.110715.rom) <(xxd MSI.GTX580.3072.110504.rom)
4c4
< 0000030: 0100 0000 c000 8d4e 3037 2f31 352f 3131 .......N07/15/11
---
> 0000030: 0100 0000 c000 8d4e 3035 2f30 342f 3131 .......N05/04/11
6c6
< 0000050: e986 2a00 6214 6025 ffff ffff 0000 0000 ..*.b.`%........
---
> 0000050: e986 2a00 6214 6225 ffff ffff 0000 0000 ..*.b.b%........
12c12
< 00000b0: 3136 3100 0000 0000 0000 0000 0000 0000 161.............
---
> 00000b0: 3133 3000 0000 0000 0000 0000 0000 0000 130.............
1189c1189
< 0004a40: 0000 1f01 0000 0023 6220 0300 3313 2003 .......#b ..3. .
---
> 0004a40: 0000 1f01 0000 0023 6230 0300 3313 2003 .......#b0..3. .
1191,1192c1191,1192
< 0004a60: 7f07 0000 008f 0000 0000 9f01 0000 00af ................
< 0004a70: 0200 0000 bf03 0000 00cf 0400 0000 df05 ................
---
> 0004a60: 7f07 0000 008f 0000 0000 9f01 0000 00a3 ................
> 0004a70: 6230 0300 bf03 0000 00cf 0400 0000 df05 b0..............
1622,1623c1622,1623
< 0006550: 0c19 0000 0c06 0e26 003e 001b 000c 0c0a .......&.>......
< 0006560: 0a0a 0100 0000 0200 160a 0500 0405 0407 ................
---
> 0006550: 0c19 0000 1006 0e30 0078 0020 0010 100e .......0.x. ....
> 0006560: 070b 0100 0000 0200 170b 0500 0405 0407 ................
1646,1647c1646,1647
< 00066d0: 0000 0000 0014 730f 0064 3610 0020 8169 ......s..d6.. .i
< 00066e0: 0050 2200 00ac 53ff ff14 730f 0064 3610 .P"...S...s..d6.
---
> 00066d0: 0000 0000 0014 730f 0038 6710 0020 8169 ......s..8g.. .i
> 00066e0: 0050 2200 00ac 53ff ff14 730f 0038 6710 .P"...S...s..8g.
1660c1660
< 00067b0: 1001 0111 750d 714c 0000 c409 0010 0000 ....u.qL........
---
> 00067b0: 1001 0111 840d 824c 0000 c409 0010 0000 .......L........
2008c2008
< 0007d70: 0090 4402 0090 4401 0090 4402 0090 4402 ..D...D...D...D.
---
> 0007d70: 0090 4402 0090 4401 0090 5502 0090 4402 ..D...D...U...D.
3648c3648
< 000e3f0: ffff ffff ffff ffff ffff ffff ffff ff35 ...............5
---
> 000e3f0: ffff ffff ffff ffff ffff ffff ffff ff10 ................
Are you sure this works on a genuine FX3700? I have a laptop with a genuine FX3700M in it, but I don't recall seeing any extra options in the settings compared to a GeForce 260M it replaced. If you tell me what exact options you expect to see, I can look for them and report back.
How exactly is the checksum calculated in there?
Hi, I've got an EVGA 670 2GB card that I'm interested in modding into a K5000.
It seems that everyone is modding their cards to be used in virtual machines, but I'm wondering if modding it opens up the hardware OpenGL features, or if it will give me any performance gain for that matter.
Hi guys, this is my first post, but I have been reading this thread for quite some time. Would you mind telling me what the benefits would be of modding the GTX 580 into a Tesla?
- Did anyone actually test it - does it make any difference in double precision (since that is the supposed use case)?
- Has anyone so far managed to run multiple VM instances on the same quadrified/gridified GPU? If not, what's the best option: going for a real K5000/K6000, or doing it the "as it was meant to be" way with a Grid card?
- Gordan, would you mind sharing which 4GB GTX 680 you managed to quadrify to a K5000?
- Did anyone manage to get any card (including AMD's consumer cards) to work with multiple instances?
Quote: "You cannot just plug monitors into different outputs and have each be a separate VM sharing a GPU. Grid GPUs have no video outputs at all. I suggest you go and read through all the VMware and Xen documentation on the subject before you ask questions like this here."
I don't see where you got that from, since all along I was talking about instances (maybe that term means something different to each of us). In any case, shared GPU (particularly Nvidia Grid) is supported by all 3 major players in the virtualization field: MS with RemoteFX, Citrix, and VMware.
Anyway, I wonder why you insist so much on Nvidia cards for dedicated GPU (GPU passthrough) virtualization, even by quadrifying them, while AMD consumer cards support it by default?
The nice thing about using Nvidia GPUs is the hardware support for H.264 encoding in Kepler GPUs, which allows the rendered streams to be encoded fast and CPU-free. Did anyone manage to use this hardware acceleration feature for virtualization?
First of all, a big thanks to anyone who contributed: gnif, verybigbadboy, and all the others I forgot to mention ^-^
I'm trying to convert a GTX780 to a Tesla K20, which have the following device IDs:
GTX780 | 0x1004
K20    | 0x1022
According to the resistor values discovered so far, this suggests that I have to find the 5K and 25K resistors and change them both to 15K, since both digits are in the 0-7 range. I found the EEPROM and measured the values of the resistors around it. You can find the results below and in the attached photo:
As you can see, I found a 5K resistor, which I removed and replaced with a multi-turn 50K pot set to 15K. Unfortunately, this did nothing: the device ID still remains 0x1004, whereas I expected it to become 0x1024. There are two 4.7K resistors on the back of the board, and other than that there are no 5K resistors on the board. Either NVIDIA changed the way the device ID is determined, or they changed the values, or there is a resistor divider in play.
Before I go and change the 25K resistor, I want to make sure that I can change the 3rd digit from 0 to 2. I did try to flash a K20 ROM onto the EEPROM, but the card is still recognized as a GTX780. Strangely enough, with the K20 ROM the nvidia-smi tool reports that the board supposedly has 6GB of RAM instead of the actual 3GB. Any ideas or suggestions?
Hello verybigbadboy (thanks for the many mods :D).
I have a Palit GTX780, which I believe is the standard NVIDIA reference design (http://www.palit.biz/palit/vgapro.php?id=2132).
OK, so what I understand from the post you linked is that I need to modify the GTX780 BIOS on the card to unlock it.
My BIOS has the following values:
00000010: 08 E2 00 00 00 06 00 00 02 10 10 82 FF 3F FC 7F
00000020: 00 50 00 80 0E 10 10 82 FF FF FF 73 00 00 00 8C
So that means my BIOS is locked and I need to change the FF 3F FC 7F to FF FF FF 7F. Is that correct?
I'm going to try it right now :D
Yes, and the next line too:
00000020: 00 50 00 80 to 00 00 00 80
Also please update the checksum - without it the card won't start at all :)
Quote: "4. Change the values to be equal to the values from 4.
5. Update the checksum. I do it with the NiBiTor tool: just open the BIOS ROM and save it. It produces a lot of warnings, but that is OK.
6. Upload the BIOS back to the card."
With the pot set at 5K (which is the original value), nvidia-smi now reports that it cannot determine the device handle and gives an unknown error :( Did I miss something?
EDIT: I do see that FILE_A and FILE_B are different: one byte is different at 0x8DFF so I guess the checksum has been updated correctly.
Yes, the checksum looks like it was corrected properly.
Can you check lspci for the video card ID?
Or boot via a DOS flash drive and run:
nvflash --list
Are you trying to flash it with the GTX780 or the K20 BIOS? Please make your changes to the original BIOS first.
Quote: "A few points:"
OK, that was my misunderstanding then. I need to go through the topic and see if there is any logic to be found concerning the 3rd nibble resistor.
Quote: "1) The values for the resistors for the 4th nibble are the ones that were documented. Resistor values for the 3rd nibble are NOT the same. For example, on my GTX690, the 3rd nibble resistor is 25K (24.8K) and the value is 8."
Quote: "2) You cannot measure the value of the resistor while it is attached to the board. What you will end up measuring is the resistance of the resistor in parallel with the resistance of the rest of the circuit (if it is connected - which in most cases it will be)."
You are right. I just got lucky in that the 5K resistor I desoldered is in fact a 5K resistor. Unfortunately I cannot 'randomly' desolder parts, as they are very fragile, so I want to be as sure as possible that I get the right resistor.
Quote: "3) The 3rd nibble isn't fully adjustable. On 6xx series cards it tops out at 0xB. It doesn't matter what you set it to past 40K, I suspect you'll find it will not go past that value. This may be different on 7xx series cards."
Good to know. I soldered on 50K multi-turn pots, so I can test with different values and see if it makes any difference. For the third nibble I need to go from 0 to 2, so I hope that is possible.
Quote: "4) Be careful when blanking out the strap values at 0x0C - the card could plausibly be partially soft-strapped, which means that editing the strap value can brick the card - hard. Normally, unbricking relies on the card being fully hard-strapped. You can then ground the EEPROM power pin, and the card will boot EEPROM-less and show up again for nvflash (I have a GTS450 modified this way for easy unbricking when BIOS-modding). If the card relies on partial soft-strapping and you break the soft-strap, the only way of unbricking it may well be to find out how the other important bits are hard-strapped and modify them for the correct hard-strap - much harder, considering that nobody has yet reverse engineered anything other than the device ID resistor locations."
I see. I thought that if I adjusted the 5K resistor back to its original value and possibly grounded the power pin of the EEPROM, I could simply reflash it with the original BIOS. I'll take that into consideration next time I adjust values in the BIOS.
Quote: "5) Cross-flashing a ROM from a similar card with a different amount of RAM will not work. At best you will end up with garbled/corrupted screen output, even if text mode boot-up works (and/or the card shows up as a secondary card). The only way you will get a Quadro/Tesla/Grid ROM working on a GeForce card is if you use a card with the same GPU and the same amount of VRAM. The only cross-flashes I have managed to get working are Q2000 1GB -> GTS450 1GB, and QK5000 4GB -> GTX680 4GB. And if you are doing that, you will also want to edit the BIOS to adjust the clocks and fan speeds back to where they were on the GeForce card."
Yes, apparently the size of the RAM is also stored in the BIOS, so the chances of a cross-flash working are very slim if the hardware differs.
Quote: "4) Be careful when blanking out the strap values at 0x0C - the card could plausibly be partially soft-strapped, which means that editing the strap value can brick the card - hard. [...]"
Is everything stored in the EEPROM, or is there some configuration stored in non-volatile memory in the GPU itself? Otherwise it seems that a failsafe way to unbrick would be to rewrite the EEPROM out-of-system using something like a Bus Pirate.
     | HEX      | Binary
AND0 | 7FFC3FFF | 0111 1111 1111 1100 0011 1111 1111 1111
OR0  | 80005000 | 1000 0000 0000 0000 0101 0000 0000 0000
AND1 | 73FFFFFF | 0111 0011 1111 1111 1111 1111 1111 1111
OR1  | 8C000000 | 1000 1100 0000 0000 0000 0000 0000 0000
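Those masks line up with the dwords visible in the BIOS dump quoted above: little-endian at offsets 0x1C/0x20 (AND0/OR0) and 0x28/0x2C (AND1/OR1), so FF 3F FC 7F in the dump is just 0x7FFC3FFF read little-endian. A small Python sketch, assuming that layout, which pulls the masks out of a ROM dump and shows how a soft-strap overrides the hardware straps:

# strap_masks.py - sketch; assumes the soft-strap dwords sit at
# 0x1C/0x20 (AND0/OR0) and 0x28/0x2C (AND1/OR1), little-endian,
# matching the hex dumps quoted earlier in the thread.
import struct, sys

rom = open(sys.argv[1], "rb").read()
and0, or0 = struct.unpack_from("<II", rom, 0x1C)
and1, or1 = struct.unpack_from("<II", rom, 0x28)
print("AND0=%08X OR0=%08X" % (and0, or0))
print("AND1=%08X OR1=%08X" % (and1, or1))

def effective(hw_strap, and_mask, or_mask):
    # Bits cleared in AND are forced off, bits set in OR are forced on;
    # everything else comes from the hardware (resistor) straps.
    return (hw_strap & and_mask) | or_mask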
Perhaps a failure to reset e.g. the earlier generations of ATi/AMD cards might be due to the auxiliary power keeping the card "alive" even when the power to the PCIe slot is cut. If that is the case, then maybe a switch or relay that turns off the auxiliary input (upon detection of a Vcc cut) might help (such a relay ought to be able to take quite a few amps though: 240W @ 12V => 20A).
The special thing about the vGPU feature is that one GPU can be shared among up to 8 virtual guests, i.e. it is not dedicated to one VM as with VGA passthrough (or vDGA in ESXi). That requires a more sophisticated solution than when it is dedicated, which made me suspect that the drivers are not only paravirtualized but also hardware-assisted through certain extensions (mind you, the AMD-V/Intel VT-x extensions do not require special paravirtualized drivers on the guest side). The downside of this technology is that it currently gives only up to 512MB of video RAM to each VM, and that only DirectX up to version 9.0c is supported, at least in ESXi. Other conditions may apply in Hyper-V and other hypervisors that support the vGPU technology. So maybe there are no hardware extensions involved with the vGPU technology after all.
Well, here's an update. In trying to find the resistor(s) that control the third nibble, ijsf (the guy who did the original GTX480 to Tesla hack) and I screwed around with the BIOS, and sure enough, the card was not recognized anymore.
I disconnected the power to the EEPROM but that didn't help either. In the end I hooked the EEPROM up to my Raspberry Pi, wrote a Python script that can read from and write to the EEPROM, and finally managed to write the original BIOS back. Luckily the card works again, and now I can always reflash the card because I have a breakout board that I can hook up to my RPi :D
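For anyone wanting to build a similar unbricking rig, here is a minimal dump-only sketch, assuming a 25-series SPI flash wired to the Pi's SPI0 pins and the py-spidev module (writing additionally needs the WREN/erase/page-program commands; keep the card unpowered and feed the flash from 3.3V):

# dump_eeprom.py - sketch; assumes a 25-series SPI flash on spidev0.0.
import spidev

READ = 0x03              # standard 25-series READ opcode (24-bit address)
CHUNK = 256              # keep transfers under the default spidev buffer
SIZE = 256 * 1024        # adjust to your part, e.g. 2 Mbit = 256 KB

spi = spidev.SpiDev()
spi.open(0, 0)           # SPI bus 0, chip select 0
spi.max_speed_hz = 1000000

with open("vbios_dump.rom", "wb") as out:
    for addr in range(0, SIZE, CHUNK):
        cmd = [READ, (addr >> 16) & 0xFF, (addr >> 8) & 0xFF, addr & 0xFF]
        resp = spi.xfer2(cmd + [0] * CHUNK)
        out.write(bytes(resp[4:]))  # first 4 bytes cover the command phase
spi.close()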
Maybe there is not that much "magic" in sharing a GPU between VMs, but it is quite tricky to do it without overhead and yet be as feature-rich as on bare metal.
Before AMD-V and Intel VT-x, CPU sharing took a rather substantial penalty from virtualization.
Now this penalty is rather small thanks to the hardware-assisted virtualization offered through VT-x and AMD-V. From the papers on vGPU there seems to be a rather small penalty for sharing the GPU: either they have really managed to come up with smart drivers, or there is something hardware-assisted backing it up. Maybe there is a rather substantial overhead that is "offset" by the capabilities of the GPU.
Quote: "Before AMD-v and Intel VT-x the CPU sharing took a rather substantial penalty from the virtualization."
I guess it depends on what type of load you expose the virtual CPU to. I have seen tests on the Phoronix.com website where the difference between VM and bare-metal performance is considerably smaller. Look for example at this article:
Even with those, the virtualization performance penalty is substantial:
http://www.altechnative.net/2012/08/04/virtual-performance-part-1-vmware/
There were also other solutions before VT-x that provided only marginally worse performance (e.g. kqemu)
Any chance you could post a detailed explanation of what you did to make an unbricking rig? I have a suspicion that the root cause of the death of my first GTX690 might have been a misflash that corrupted the PLX chip (PCIe bridge) EEPROM. It'd be nice to have a go at resurrecting it.
What's not clear to me at this point is what will and won't work for those of us who want to make daily use of the end result. I personally would like to give guests in ESXi a decent 3D performance bump, but I'm not sure how to approach that (what card is seen as the best starting point, what work needs doing to it, etc.). I realise this thread isn't about making card X work with technology Y, but most of us are here for the virtualisation benefits.
Budget isn't an issue (within reason), I'm looking to set up a virtualised gaming rig much like yourself. There'll be a Windows 7 (64-bit ent) VM that'll need as much 3D gaming grunt as possible and a couple of other VMs that need some acceleration to be responsive and usable (possibly 3D too). Naturally I want to go for as much power as possible, I plan to use the machine for current and next gen gaming.
I thought about converting a GTX 480 into a Q6000, as they're fairly cheap to pick up used on eBay; however, I'm not sure which brands follow the reference design, which would make the modification less of a headache.
What wasn't made clear earlier is that a Q6000 clone (GTX 470 / 480) can be used for vSGA with 6 guests - that's a great piece of information for those looking to accelerate 3D on multiple VMs on a budget :-+
What would also be handy to know - and again, I assume this is probably beyond the scope of this thread - is whether multiple Q6000 clones can be added to a system: one passed through to a VM directly for as much acceleration as possible, and a second Q6000 clone distributed via vSGA between the remaining VMs? GTX 480s can be picked up second-hand for around £100 each on eBay, so they'd make a great price-vs-benefit starting point.
As for the GTX 680, I was secretly hoping you would have found a solution by now, but as with all things of this nature, it wouldn't be too easy or everyone would be doing this to their cards :)
Depends on your intended resolution. If all you want is 1080 resolution capability, the GTX480/Q6000 will deliver. Granted, my eyes don't seem to see things the way most people's do (more pixels, not as many frames per second, or so it seems), but I happily completed Crysis+Warhead at 3840x2400 on a T221 in a VM on my quadrified GTX480. On the same physical host, my wife was finding Borderlands 2 unplayably bad at 2560x1600, so I temporarily put a HD7970 in her VM (and yes, it crashed the host when you try to reboot the VM - I'm hoping the ATI pollution is going to be temporary) and kept the 480 for my VM.
If you need more than 2 VMs with 3D acceleration on that motherboard, you are going to have to use something like a GTX690, given it only has 2 PCIe x16 slots.
I should perhaps also point out (hint: nudge) that there is currently a quadrified GTX470 on eBay. ;)
I see no reason why you couldn't use one card for vSGA and one for vDGA. Just bear in mind that you won't be getting video directly out of your vSGA cards - those VMs will feed you a compressed video stream of the desktop that you will have to decode on another machine. The problem with this being that you need another machine as a terminal (unless you use your vDGA machine as a terminal for it, which would work I suppose, but it gets a bit recursive).
But as I said before - I am not an ESXi user, and while I would expect their solution to be a little more polished than Xen, I suspect you will also have a lot less community support to fall back on if it doesn't work out of the box. Also, last time I checked vDGA was treated as an experimental feature.
It could be that I just have a weird GTX680, or there is some OS/environmental issue that is manifesting as the problem I mentioned. One of the guys on the Xen list modified a completely standard GTX680, and his works fine in all modes, so my DVI issues are most likely just a bizarre quirk of my system configuration.
True but someone needs to take the plunge to see if this'll work right? I'm in the same boat with the X9SRA, I haven't found any solid documentation this will work but progress isn't made on repetition ;)
If you want, I'm happy to test the card in my new system for you, confirm if its working, then post it back in the same condition it was received in. I'm only up the road (in relative terms given a global community, still 120 miles away :-DD) so RM Special Delivery would be fast.
I changed the marked resistor in the pic from 25K to 40K. Now the PCI ID is 1025 instead of 1005 for a Titan. The aim is to get 1020 (K20X). I have already changed the nearby resistors, but that didn't change anything. Does anybody have any ideas?
Haha, I was JUST going to post that I managed to convert my GTX780 to a Tesla K20 and I saw that johndoe beat me to it :-DD I know I should have posted here before I wrote an article on my site about how I figured it out.
The first 5 bits of the device ID are soft-strapped, so just change the appropriate bits and reflash your card.
But I can also tell you that modding the GTX TITAN BIOS with the right straps will not turn it into a working Tesla K20. For example, you cannot disable the TCC mode, and you cannot run any CUDA code. :( What you CAN do however is go to TechPowerup and download the only K20 BIOS they have, and change the soft straps on that BIOS ;)
Btw, it would be awesome if you could run some CUDA samples to see if everything runs fine (and report back, of course ;) ).
OK, I updated the BIOS, but it is still not possible to install the Tesla driver, even manually (it says the driver is not for this Windows version). Strange...
Did you guys notice the GTX780 is a GK110 chip?!
Does anyone have a BIOS link for the K10 or Grid K2 (or K1), please?
Interesting. So GTX780Ti is overdue to hit the shelves. Full shader count (like the K6000), but half the VRAM of the Titan at 3GB, and 1/3 cheaper than the Titan. I wonder if DP will be crippled.
Seems we need to figure out where in the BIOS the VRAM size is stored and in what format. I'm going to be quite displeased if the Quadrified Titan doesn't work for VGA passthrough without a K6000 BIOS flashed onto it, and that will only work with the VRAM size adjustment.
Has anybody got a copy of a K20X BIOS handy? That should "just work" on a Titan with the strap mod.
Quote: "Does anyone have a BIOS link for the K10 or Grid K2 (or K1), please?"
Why do you need it? The GTX680 works just fine as a Grid K2 with its original BIOS. I'd be surprised if the GT635 didn't also work as a Grid K1 with its original BIOS.
The BIOS requirements we are talking about here seem to be a new thing on GK110-based cards.
What is a K20XM? I cannot find any official reference to it. Do you have a download link for that BIOS?
The K20Xm should be the same as the K20X in terms of performance; I didn't find any differences. I looked at the .inf file of the NVIDIA Tesla driver and found that no 1020 ID is listed, which means no K20X. I changed the predefined 1021 (K20Xm) entry to 1020 and was able to install the driver. But GPU-Z, for example, reports only 512MB of VRAM, and CUDA sim programs still do not recognize the card.
I will test. How do I change the straps to 1021?
@johndoe: I just read that it was GPU-Z that reported 512MB |O GPU-Z cannot be trusted. It says that my GTX780 has 48 cores (my primary card is a GT610 for testing purposes, and THAT card has 48 cores).
Use the nvidia-smi tool to read out the real values. And of course you should also run the deviceQuery CUDA sample and report its output.
After some more research, I figured out some things. The amount of VRAM is determined by a couple of things: the bus width, the number of RAM chips, and the size of each RAM chip. The GTX780 has 12 chips under the cooler and zero on the back. A Titan has 12 chips under the cooler and 12 on the back (please confirm this), and the Tesla BIOS is configured to select 24 chips, whereas I have 12. That's why it doesn't work for me, and that's why it should probably work for you.
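A quick sanity check of that arithmetic, assuming the usual 2Gbit (256MB) GDDR5 parts:

# vram_math.py - chips x chip density = total VRAM
chip_mb = 2 * 1024 // 8                 # 2 Gbit GDDR5 chip = 256 MB
print("GTX780:", 12 * chip_mb, "MB")    # 12 chips -> 3072 MB (3 GB)
print("Titan: ", 24 * chip_mb, "MB")    # 24 chips -> 6144 MB (6 GB)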
GeForce driver running a Tesla? Really? That's not supposed to work because the GeForce driver .inf doesn't contain any Quadro/Tesla/Grid device IDs. Unless this is some kind of a unified pre-release driver.
I thought we already established there is no need to replace the nibble 3 resistor - you can just leave it off, and fix the potential slight flapping issue by setting the 5th bit in the soft-strap appropriately.
It is normal that with a Tesla BIOS the machine won't POST on that card - Teslas don't work as standard VGA adapters; they normally show up as 3D adapters. Having said that, on the 690 the primary GPU shows up as VGA and the secondary as 3D, and it POSTs on ports attached to either, so the chances are that the Tesla BIOS just disables all the video outputs (since Tesla cards don't have any). I imagine flashing a modified Grid BIOS to a GTX680/770/690 would produce similar results.
Having said that, I seem to recall I found that the device type (i.e. VGA or 3D) is set by a bit in the secondary strap - but I don't know for sure where the secondary strap is in the UEFI setup (several possible candidates IIRC) or whether the one in the main BIOS payload is the effective one. But if you can find it, you could potentially make the Tesla BIOS work with normal VGA enabled, though that would be a bit of a bizarre use-case (going headless after booting).
How do you want to change the device ID from 0x1004/0x1005 to 0x1020 when you can only go as high as 0x101F with just the soft straps?
Yes you can use the GeForce driver with Tesla cards. At work we have Teslas and we never installed Tesla drivers, so I don't know why it wouldn't work. We use the Tesla C2050 which has two DVI outputs, so when you use the Tesla with the standard WDDM driver, it is basically a slower GeForce card.
Perhaps I wasn't clear enough, apologies. I wasn't suggesting a purely soft-mod. I was saying that if you remove the resistor and don't replace it, the value of the nibble goes to the top of its settable range, but there is often an instability in the 5th bit. For example, on a GTX680, if you remove it, it will go to A or B, and often flip between the two when you reboot. You can compensate for this by soft-strapping the 5th bit high (to make it B) or low (to make it A). Since removing an 0402 is slightly easier than replacing it, it makes for a smaller, easier to apply hard-mod, and the rest can be soft-modded.
That surprises me - I didn't think there was that overlap in the Windows drivers. I hadn't expected the device IDs of any Tesla/Quadro/Grid cards to be listed in the .inf. Certainly, when you go to the Nvidia site and select that you want a driver for a Tesla, it points you at the Tesla/Quadro/Grid download rather than the GeForce one. But hey, if it works for you... :)
Ah, that's what you meant. Now I understand. Still, I don't know if that would work in this case. It's true that removing a 0402 resistor is easier than soldering it (good luck with that :P), but my solution is non-destructive. I can always remove the 33K resistor (1206 btw) and the card will be a GTX780 again, whereas adding the 25K resistor back on the board would be harder :)
There is also another difference. The 25K resistor is a pull-down resistor, and I added a pull-up resistor. The lines that go to the GPU most likely have an ADC that measures the voltage and based on that it sets certain straps. I had also removed the 28.5K resistor and let me tell you, that one had to be precise! It will not boot with a value of 28K or 29K :(
As I was saying: the K10 Tesla is the Quadro Grid K2 - features like software ECC and VGX support on both. Just strap them and go.
...
I'm hunting VGX since it is better than API intercept. I think I agree with Gordan: VGX is more like "VT-d" for video cards - alone it won't do squat, but it will work with Nvidia's vGPU API to accelerate much faster than pure binary translation mode (API intercept).
Can you post a photo of your mod showing exactly where you put the 33K resistor?
And why do you think removing the 3rd nibble resistor wouldn't work in this case? It works on the GTX680 and GTX690 I have. You mean it might upset the voltage somewhere else and cause unrelated straps to end up with wrong values? Surely adding a pull-down resistor in addition to the pull-up would run the same risk of doing that.
Note: I'm not disputing that dealing with a 1206 is far preferable than dealing with an 0402, especially without specialist tools. :)
Quote: "I had also removed the 28.5K resistor and let me tell you, that one had to be precise! It will not boot with a value of 28K or 29K :("
Which 28.5K resistor? What was it for?
It is a pull-down resistor on the SO pin of the EEPROM. I measured around 2.3M of resistance when I removed the 28.5K resistor, which is actually a 30K resistor when measured outside the circuit. Because it needs to be exactly that value, I used a high-precision pot, as I couldn't put back the 0402 resistor. It is on the schematic on my site and in the photo I posted a couple of pages back.
Quote: "the K10 Tesla is the Quadro Grid K2 - features like software ECC and VGX support on both. Just strap them and go. [...] I'm hunting VGX since it is better than API intercept."
I'm not sure I follow what this would be for.
1) Why do you need ECC for graphics rendering and video stream encoding?
2) I'm pretty sure VGX requires no special features at all. vDGA is just straight PCI passthrough à la Xen. vSGA just makes the GPU act as a co-processor. I'm going to try putting together an ESXi test-bed machine with a spare motherboard I have, which I _think_ has a non-broken IOMMU with the features required for ESXi PCI passthrough, and try to get my gridified 690 working on it. If that works, it would prove that you need no special features on the GPU itself to make it work.
It will also be good to hear back from foxdie when he has had a chance to test the Quadrified GTX470 with vSGA. If that works, the chances are that other modified cards will, too.
I have the card in my computer now with the cooler on it. When and if I take it apart again, I'll take a photo. But it's really just two very thin wires soldered on the Vcc and the SCLK pins of the EEPROM going to a 1206 33K resistor.
The card has a pull-down resistor by default, and I added a pull-up resistor. If the other end of the SCLK line has an ADC, then having no resistor on it would leave a floating pin, which is not something you want with analog electronics. A pull-down, a pull-up, or both will make everything more stable electrically (even though you override that value with the soft-straps).
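For intuition, the level the GPU would see on that pin is just a resistor divider. A toy Python sketch, assuming a 3.3V rail (speculative - it simply illustrates the ADC theory above):

# strap_divider.py - toy model: strap pin level as a resistor divider.
VCC = 3.3  # assumed rail voltage

def pin_voltage(r_pullup_k, r_pulldown_k):
    return VCC * r_pulldown_k / (r_pullup_k + r_pulldown_k)

# The added 33K pull-up against the stock 25K pull-down:
print("%.2f V" % pin_voltage(33, 25))  # ~1.42 V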
Not to be a jerk, but wouldn't it be wiser if the conversation about virtualized GPUs took place in a separate topic? I'm interested in it as well, but the last pages of this topic have been nothing but questions about whether card X can be modified, or about virtualization. You know, just to make it less cluttered. :)
Oh, I see. So just short the EEPROM Vcc to SCLK with a 33KΩ resistor?
Won't this potentially upset other things?
Yes, just a resistor between Vcc and SCLK. Seeing as I'm the only one who has a working Tesla (except for memory size issues), I'm guessing that my mod does not change anything else. I would like to try it with a Titan, but that thing is expensive :( That's why I'm waiting for johndoe to apply my mod and see if it works then. Btw, did you get a Titan?
Nice! If it works out I'm getting a Titan too :) Then I can concentrate on 'Teslafying' the GTX780 completely and running CUDA on it without getting unknown errors when calling cudaMemcpy.
I must say I am really curious whether you will be able to figure out where and how the memory configuration is stored. If you look back on this thread at page 38, you will find this post:
https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg292021/#msg292021
containing the hex diff between 1.5GB and 3GB variants of a GTX580 BIOS of the same version number. Unless I made a huge mistake somewhere (or the BIOSes are mislabeled on TPU, I no longer have a GTX580 I could flash with those BIOSes to test), the memory difference should be encoded somewhere in those 10 lines.
Yeah, I already checked the diffs of many BIOSes, but the actual size of the memory is not stored literally in the BIOS. The type of memory, the configuration, the clocks etc. are stored as tables in the BIOS, and from these variables you can calculate what the memory size is.
Back in the GeForce 2 days, you could turn certain models into a Quadro 2, though in those cases it wasn't just a straight performance unlock. It was a tradeoff. Something like far better CAD and wireframe performance, but games weren't so well optimized anymore. It wasn't something a gamer would do to get a few extra FPS.
We faced the exact problem that you mentioned in the forum and changed the resistors accordingly to get a Quadro graphics card, but it did not work for us. By the way, I see that there are small differences between our board and the image you shared in the forum.
1) In the upper column that you showed, there is a 25K resistor that should be removed and a 20K resistor that should be mounted below it. OK, we did that. But on our board the second column on the right side is different. There is a resistor at the top of this row
which is not on your board, and conversely there is a resistor below it on your board which is not present on ours.
2) We plugged in the board and there was one long beep and three short beeps on Windows startup. And it did not work.
Quote: "The type of memory, the configuration, the clocks etc. are stored as a table in the BIOS and according to these variables you can calculate what the memory size is."
Can you elaborate on this? Which byte offset locations in the GeForce BIOS contain the number of chips and their size?
I don't know exactly where the bits are, but I'm in the process of going through the nouveau source, which hints that the memory size can be determined by reading a GPU hardware register. There are references to tables in the ROM that contain timings and memory type, but I haven't figured out the location yet.
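If anyone wants to poke at registers directly, BAR0 can be mmap()ed from sysfs under Linux. A sketch (requires root); as an example it reads offset 0x101000, which envytools/nouveau document as the PEXTDEV.BOOT_0 strap register - the memory-size register itself hasn't been pinned down in this thread:

# peek_bar0.py - sketch: read a 32-bit GPU register through BAR0.
import mmap, os, struct, sys

pci_dev = sys.argv[1] if len(sys.argv) > 1 else "0000:01:00.0"  # adjust
offset = 0x101000   # PEXTDEV.BOOT_0 straps, per envytools/nouveau

fd = os.open("/sys/bus/pci/devices/%s/resource0" % pci_dev, os.O_RDONLY)
page = offset & ~0xFFF                  # mmap offset must be page-aligned
m = mmap.mmap(fd, 0x1000, prot=mmap.PROT_READ, offset=page)
val = struct.unpack_from("<I", m, offset - page)[0]
print("reg 0x%06X = 0x%08X" % (offset, val))
m.close()
os.close(fd)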
The GTX 780 Ti has been released: http://www.tomshardware.com/reviews/geforce-gtx-780-ti-review-benchmarks,3663.html
It's interesting to note that the double precision GFLOPS has been artificially limited to 1/24 that of single precision. I wonder if this is something that can be "adjusted."
My Quadro 6000 with dual DVI was fine with GeForce drivers too, but it was missing something the GTX470 had.
The funny thing is how the GeForce overclockers are running the same silicon at twice the voltage. If you let the FP units run at full speed, that would literally cook the cards! (The Quadro 6000 runs at half the voltage, across the board, of what many overclockers are pushing through the GTX470/480.) Insane!
Note: the BAR restriction for VGX mode is imperative - only a few server mobos have the option to keep the IOMMU mapping <4M.
Note: ECC mode will disable VGX mode!!
Note: avoid changing MSI-X or VGX will fail.
Note: RDP will disable VGX mode (Citrix).
An insane number of requirements to get VGX to actually work, instead of "looks like it is working" or "was working, now is not!". Buggy as heck!
Not really the case any more. On GTS450 -> Quadro 2000 (GF106) there is a marginal improvement in some SPEC components (e.g. Maya gets a 40% boost, the rest remains the same), and I hadn't noticed any gaming degradation. On GF100 (GTX470/GTX480) and later there is no performance difference in the SPEC benchmarks, but there is a memory I/O boost (potentially up to double) from enabling the bidirectional async DMA. From the GTX580 (Q7000) onward there is no difference in any aspect of performance that I have been able to observe. I have a GTX680 running a K5000 BIOS and there is no obvious performance difference in either SPEC or gaming benchmarks.
How do you enable the bidirectional async DMA? All reports I have seen from hacked cards do not enable the 2nd async DMA... ranging from the GTX 480 to every single target I have seen.
Straight strap device ID mod. Second DMA engine is driver controlled. See:
http://www.altechnative.net/2013/09/17/virtualized-gaming-nvidia-cards-part-2-geforce-quadro-and-geforce-modified-into-a-quadro-higher-end-fermi-models/
It works for me on both GTX470 -> Q5000 and GTX480 -> Q6000.
Gordan, do you have a way to transmit a copy of your environment?
I can set up a nearly identical setup and we can test the differences between your 690 and my K2.
I bet if we can pinpoint the behaviour difference we could figure this out.
I've got some tidbits that might allow deeper inspection of data that NiBiTor doesn't have access to.
Umm, that is really interesting... so your Kepler mods have enabled the dual async engine? (Does a GTX 680 turned into a K5000 report 2 engines?)
No, I don't think GPUs after the GF100 have dual async DMA engines in them - unless you have a real K5000 and can provide a CUDA-Z screenshot that shows otherwise?
Dual async DMA is a Fermi-only thing, AFAIK.
Actually yes, all Tesla cards have dual async DMA engines... It would be really interesting to turn a GF Titan or 780 into a K20 with the dual async engines enabled...
IDK about Quadro, but Teslas have dual async DMA engines.
EDIT:
From Nvidia: the K5000 has a dual copy engine.
http://www.nvidia.com/object/quadro-k5000.html#pdpContent=1
I would also like to get 12 7GHz GDDR5 chips and add them to the back of the board, if the chip spots are available, so that I can get 6GB. That might work with or without modding. I don't have the 780 Ti yet - I'm waiting on the EVGA ACX OC version so that I can get binned parts, hopefully within the next month or so. (Some people might overclock to 1100 while others manage over 1300.) I don't really care about the cooler; that will eventually be replaced with water cooling.
Thing is, I would be willing to pay for both of the features. But nobody is even talking about releasing either a 6GB card or unlocking the DP. I don't game, but I can use the DP for computation. I also want the 780 Ti because I will be upgrading to the ASUS 39" UHD monitor once they release it. Both computation and the UHD resolution make 3GB a little iffy.
I was going to say the same thing, Gordan, but I didn't want to miff your efforts. A K2 costs 1.5x the GTX690...
But I'm having fun - no money, no selling from me. I just want to find out the true nature of the secret sauce of Quadro/Grid/Tesla.
It's fun!
It was said you can "hack"/alter your "gaming" GPU into a "workstation" card.
May I ask: what is the effectiveness of doing it? Will it run CAD-related programs much faster? Or is it just to allow Linux to see it as a workstation card and use multiple monitors?
Will it really operate as a workstation card?
I need it for college, but I don't have the funds to buy a €600+ video card.
I'm here to report that we (ijsf and I) correctly modified the memory size configuration, and that the card now runs just fine. Here are the obligatory screenshots:
(http://vps1931.directvps.nl/Tesla-K20c-3GB.png)(http://vps1931.directvps.nl/nvidia-smi.png)
For the amount that would cost you, you might as well get a Titan to begin with. In fact, for the number of cards you'll destroy soldering on the BGA chips manually, you might as well just get a K6000 outright.
Binning these days does next to nothing. Silicon manufacturing has gotten to the point where all chips will do the same speeds to within a few %, and those last few % are down to luck and generally not worth bothering with. Haven't you noticed that in the past decade, if the clock range of a particular Intel CPU series was, say, 2.4GHz for the slowest model and 3.33GHz for the fastest model, they will all do about 3.4GHz regardless of what they were sold as? Granted, Intel silicon is better than most, but it's not THAT much better.
Quote: "Thing is, I would be willing to pay for both of the features. But nobody is even talking about releasing either a 6GB card or unlocking the DP. I don't game, but I can use the DP for computation. I also want the 780 Ti because I will be upgrading to the ASUS 39" UHD monitor once they release it. Both computation and the UHD resolution make 3GB a little iffy."
Sounds like what you really should be getting is a K6000. Full shader count of the 780Ti, full DP performance of the Titan, and 12GB of RAM. Yours for a mere £4K on eBay. Given that I've not been able to get either my 690 or the Titan to work virtualized, I'm tempted to just trade them in for a pair of genuine K5000 cards, seeing as they are now going for around £600 on eBay.
You may be right about the cost of the chips. That will figure into my decision. I was assuming that the cost would be only slightly unreasonable - perhaps 1/4 the cost of the 780 Ti itself. These are not cutting-edge chips (anymore).
As far as BGA soldering is concerned, I haven't done that. However, wasn't there someone here who has BGA equipment? Perhaps I could make a deal with him (or someone else) to actually do the soldering. The amount of memory is usually detected from what is physically present, and of course the BIOS can be tweaked to be consistent, so I assume a memory mod would work.
You're right about binning - until you overclock. Then the differences between the chips show up. I am currently running a 3770K on stock air at 4.4GHz. It was not stable at 4.6, but almost. I could possibly do 4.5 but haven't pushed it. Some people have gotten as high as 4.7 and been stable. My understanding is that EVGA bins the ACX OC parts so that at each voltage level the chip does just a little bit better in speed and power consumption. If you want to OC, that can make a difference. Even if you don't, you can run a bit cooler.
I'm willing to pay for the cost of the card - but not a factor of six! You would be looking at no more than a 1/14 increase over the Titan, and it probably would NOT clock anywhere near as high. So if the Titan is 1000, then 1200 or so is reasonable. Perhaps 1500 for a high-speed board.
It might be a completely moot point. Changing the ID might not allow the DP to be unlocked. We don't know if the switch to turn it on is done by fuses, or simply made unavailable unless it is a Titan. If, as seems likely from previous posts, the ID change can be done via soft-strapping (since probably only nibble four is involved, and the only board changes appear to be in the power distribution area), then it will be a cheap test.
All of the other attempts to turn on DP have not involved changing to a Titan, which has a software switch, but rather something else where it is always turned on.
Can't do anything, though, until people start getting cards.
If you are doing this for fun as the primary motivation, that's great, but otherwise you need to factor in the cost of the man-hours that are going to go into all of this. Combine that with the outcome being far from certain, and the economics of it start to look very questionable.
My view of OC is that there are a lot of "fishermen's tales" and that what most people consider "stable" isn't actually all that stable. If you can run each of the OCCT tests and the multi-threaded tmpfs file hash tests on linux for 24 hours each without any errors, then I may be a little more convinced. Most of the time when people have claimed stability I have been able to shake the machine loose in under 10 minutes.
The whole notion of DP boost by changing the 780's ID to Titan stems from the part in Tom's Hardware review that suggests that the DP shader clock speed is driver controlled.
Nothing about my current build is cost effective. If include the cost of man hours, then the total cost is astronomical. But, it has also been an excellent project for learning things. Extreme - yes. Just the disk storage system is nearly 5K (2K in SSDs, another 1.5K in raid controller / expanders and the rest in hard drives and optical drives). The water cooling is already over 2k and going up. And none of that includes the tools that I have purchased to do the build. So, while it will be a system that I actually use, and is intended for long term use and easy changes, it is also intended to be the system I want without really counting the cost (doesn't mean that I don't consider tradeoffs or have budget limitations - funding is via overtime).
At 4.6GHz I was able to run Prime95 for several hours before it failed; I was going for a full day.
I understand where the DP boost idea comes from. But on a Titan it can clearly be changed in software using the control panel. No one (that I know of) has tracked that all the way down to the hardware / BIOS, but clearly on the Titan the block is not via fuses. It is far from certain, but if the boost option is simply turned off when it is not a Titan, it is possible that no fuses are involved on the 780 Ti parts either. They appear to be using the same chips. It is certainly worth a try, especially if it has zero cost to try.
So just save up for a K6000 and save yourself the disappointment.
Prime95 is a completely useless test of stability. I have seen Prime95 and various Pi calculators run for days without error, only for OCCT or hash calculations to return an error in under 30 seconds. If anything, what you said confirms my view of fishermen's tales.
As long as you are happy with the assumed outcome of no DP improvement, that's fine. There are several things to try, including changing the device ID and flashing the Titan BIOS onto the card. But that doesn't mean it'll succeed. I have a 4GB GTX680 running a K5000 BIOS, and there is no obvious performance benefit in any test over a standard GTX680 - the only advantage is in persuading the drivers to allow VGA passthrough operation - and even that doesn't seem to work on models more recent than the GTX680.
I am interested in alternate ways of determining stability; Prime95 appears to be what most overclockers use to validate a clock.
I want to increase the clock rate, but not at the expense of stability or correctness. I do not run Linux; I am currently using Windows 7 64. I would appreciate any suggestions you have for validating a clock as stable.
I am not a dedicated overclocker, but once I am on water cooling I would like to see 5GHz on my 3770K on a 24/7 basis.
Moderate overclocking is not really overclocking at all, because Intel drastically underspecifies the performance of its CPUs. It isn't until you push past about 4.5GHz that you are really in overclocking range.
Why do you appear to be so against the idea? I would think that it would be of interest to at least some of the participants of this forum, and it is certainly harmless. It would also resolve an open question about the Titan vs the 780 / 780 Ti.
Does the K5000 modified GTX680 allow 30bit color on the displayport output like a real K5000?
Nice! If it works out I'm getting a Titan too :) Then I can concentrate on 'Teslafying' the GTX780 completely and running CUDA on it without getting unknown errors when calling cudaMemcpy.
I must say I am really curious whether you will be able to figure out where and how the memory configuration is stored. If you look back on this thread at page 38, you will find this post:
https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg292021/#msg292021 (https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg292021/#msg292021)
containing the hex diff between 1.5GB and 3GB variants of a GTX580 BIOS of the same version number. Unless I made a huge mistake somewhere (or the BIOSes are mislabeled on TPU, I no longer have a GTX580 I could flash with those BIOSes to test), the memory difference should be encoded somewhere in those 10 lines.
Yeah, I already checked the diffs of many BIOSes, but the actual size of the memory is not stored literally in the BIOS. The type of memory, the configuration, the clocks, etc. are stored as a table in the BIOS, and from these variables you can calculate what the memory size is.
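To illustrate the point, here is a minimal sketch of how a size can be derived rather than stored. The field names and the idea of multiplying chip density by chip count are my own illustrative assumptions; the real VBIOS table layout is not documented in this thread.

def vram_size_mb(chip_density_mbit, chip_count):
    # Total memory is derived: per-chip density (in Mbit) times the number
    # of chips on the board, converted from Mbit to MB.
    return chip_density_mbit * chip_count // 8

# A GTX580-style 384-bit board carries 12 chips (32 bits each):
print(vram_size_mb(1024, 12))  # 1Gbit chips -> 1536 MB (the 1.5GB variant)
print(vram_size_mb(2048, 12))  # 2Gbit chips -> 3072 MB (the 3GB variant)

So a single density-related field changing in the table is enough to double the reported memory size, which is consistent with a hex diff as tiny as the one reproduced further down.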
Can you elaborate on this? Which byte offset locations in the GeForce BIOS contain the number of chips and their size?

Back in the GeForce 2 days, you could turn certain models into a Quadro 2, though in those cases it wasn't just a straight performance unlock. It was a tradeoff: something like far better CAD and wireframe performance, but games weren't so well optimized anymore. It wasn't something a gamer would do to get a few extra FPS.
Not really the case any more. On the GTS450 -> Quadro 2000 (GF106) there is a marginal improvement in some SPEC components (e.g. Maya gets a 40% boost, the rest remains the same), and I haven't noticed any gaming degradation. On GF100 (GTX470/GTX480) and later there is no performance difference in the SPEC benchmarks, but there is a memory I/O boost (potentially up to double) from enabling the bidirectional async DMA. From the GTX580 (Q7000) onward there is no difference in any aspect of performance that I have been able to observe. I have a GTX680 running a K5000 BIOS and there is no obvious performance difference in either SPEC or gaming benchmarks.

We faced the exact problem that you mentioned in the forum and changed the resistors as described to get a Quadro graphics card, but it did not work for us. By the way, I see that there are small differences between our board and the image that you shared in the forum.
1) In the upper column that you showed, there is a 25K resistor that should be removed and a 20K resistor that should be mounted below it. OK, we did that. But on our board the second right-side column is different: there is a resistor at the top of this row which is not on your board, and conversely there is a resistor below it on your board which is not present on ours.
2) We plugged in the board and there was one long beep and three short beeps on Windows startup. It did not work.
Did you change just the 3rd nibble resistor pair, or did you change the 4th one as well? I suggest you put the 4th nibble (the lower pair in the photo) back as it was and soft-mod that part instead. For the 3rd nibble resistor, you can either leave it off and stabilize with the soft-mod on the lowest bit of the nibble, or put in a resistor. With 25K or more, the 3rd nibble will go to 0xB.
1) Sorry, I don't know exactly which resistors determine which nibbles, but I'm sure that I did the same thing that was proposed in the image, as highlighted by the red rectangles: removed the 25K at the top right, added a 20K below the removed one, and changed the top resistor of the fourth column from 5K to 15K. Could you please show in the image what your suggestion is?
2) Also, could you please tell me more about the soft-mod? Is that some sort of firmware update for the graphics card? What do you mean by stabilizing with the soft-mod?
So I read almost halfway through the thread and then searched the rest, and I still have some questions. I currently have two GTX 660s (non-Ti) that I was hoping to convert to K4000s. I simply would like to do this for the virtualization benefits. I currently have 32GB of RAM in my desktop, and it's killing me not being able to use all of it. I was thinking of implementing a Xen setup, but I think I would need the GTX 660's professional counterpart, which I believe is the K4000. I know that early on in this thread - https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg203239/#msg203239 (https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg203239/#msg203239) - a potential position for the resistors was posted, but I didn't see anything come of it. So my question is: has anyone successfully modified a GTX 660 into a K4000? Or could you point me towards how I would figure this out myself?
What is WGL_NV_gpu_affinity supposed to do?
Did you mod just the device ID, or did you flash the Grid BIOS onto the card?
It's needed for multi-GPU rendering, to select the right render GPU.
I modded the device ID only.
I'm not able to test it because I have only one video card in each PC :(
2529,2532c2529,2532
< 0009e00: 0000 90cf 0000 90cf 0000 90cf 0000 96cf ................
< 0009e10: 0200 96cf 0200 9055 0200 9055 0200 9044 .......U...U...D
< 0009e20: 0200 9044 0200 a055 0200 a055 0200 a055 ...D...U...U...U
< 0009e30: 0200 a055 026e 0402 1100 ffff ffff 0000 ...U.n..........
---
> 0009e00: 0000 90cf 0000 90cf 0000 90cf 0000 968f ................
> 0009e10: 0200 908f 0200 9055 0200 9055 0200 9044 .......U...U...D
> 0009e20: 0200 9044 0200 a055 0200 a055 0200 9055 ...D...U...U...U
> 0009e30: 0200 9055 026e 0402 1100 ffff ffff 0000 ...U.n..........
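For anyone wanting to reproduce this kind of comparison, here is a small sketch that prints only the differing 16-byte rows of two ROM dumps, in roughly the same xxd-style format as above. The file names are placeholders; any two BIOS images of equal size will do.

def xxd_rows(path):
    # Yield xxd-style rows: offset, 2-byte hex groups, printable-ASCII column.
    with open(path, "rb") as f:
        data = f.read()
    for off in range(0, len(data), 16):
        chunk = data[off:off + 16]
        groups = " ".join(chunk[i:i + 2].hex() for i in range(0, len(chunk), 2))
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        yield "%07x: %-39s %s" % (off, groups, text)

# Print only the rows that differ between the two images.
for a, b in zip(xxd_rows("bios_1536mb.rom"), xxd_rows("bios_3072mb.rom")):
    if a != b:
        print("<", a)
        print(">", b)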
Has anybody tried the NVENC functionality on their modded 680 -> K2? This is a very big part of the Grid/Quadro cards' feature set.
oguz286, any chance of a how-to write-up on the subject of memory size modding?
I got my hands on a K6000 BIOS from here:
http://forum.techinferno.com/nvidia-video-cards/5022-%5Breq%5D-raise-maximum-power-target-quadro-k6000-bios-attached.html (http://forum.techinferno.com/nvidia-video-cards/5022-%5Breq%5D-raise-maximum-power-target-quadro-k6000-bios-attached.html)
and I want to try it on my Titan to see if VGA passthrough starts working, but I need to halve the memory size first.
Also, do you have a download link for a CUDA binary you want me to test on my Titan with a K20c BIOS? Or is that no longer of interest?
Also, regarding your article on the website about PCIe 2.0 vs. 3.0: are you saying that flashing a K20c BIOS onto the 780 makes it go into PCIe 2.0 mode? How can you tell? If you are judging this by what GPU-Z says, I wouldn't trust it too much - my Titan shows up as PCIe 1.1 on a 2.0 motherboard, while my GTX690 shows up as PCIe 3.0 even though the motherboard most definitely cannot do more than PCIe 2.0.
Bhahaha, I posted that Quadro K6000 BIOS there a while ago... which reminds me, I'm going to send a message to svl7 to see if he can do more tweaks to remove the power target throttling past 225W that I've experienced. I'm also curious about the PCIe 2.0 vs 3.0 speeds on a GPU modded to a K20c BIOS. If PCIe 3.0 is possible, the only thing missing would be the dual DMA engines.
Can someone help me? I am trying to make a k5000 from my MSI gtx 660 ti, but I don't know where I can find the resistors and what I need to change them with.
Thanks!
Hello - I was taking a look at the photos that you posted; however, they aren't all that clear. Try to get some better lighting and get the tiny details, lettering, etc. all in focus - this will really help in identifying what needs to be identified :)
Interesting, so Titan remains the only one with uncrippled DP FP.
It makes sense, I suppose - they had to sacrifice something to keep the GPU with the extra few shaders enabled from cooking itself at the gaming-grade clocks and voltages required.
Having said that - what about modding the 780Ti into a Titan? If Tom's Hardware is correct and it is due to the driver lowering the DP FP clock speed, then modding it into a Titan would work around this and give you the best of both worlds - extra shaders and full DP FP performance.
--Update--
I noticed one interesting thing: nvidia-smi reports the clock running at 732MHz (Titan is 835MHz or something). If I modify the BIOS with Kepler BIOS Tweaker it doesn't change at all. CUDA-Z reports, for example, 900MHz, but the performance doesn't change. Maybe nvidia-smi is correct and I have to change something else.
Actually, the 4th nibble is hard strapped with a resistor of 33K between VCC and SCK, as posted by oguz286. I think the DP performance is OK; only single precision is poor.
Ah, OK, you mean I also have to adjust the 4th nibble.
It is not a Titan or K20c BIOS. It is an extracted K20Xm BIOS from IBM.
Can you be more explicit about the 4th nibble and DP? Are you saying that DP performance isn't crippled and is 1/3 of the SP performance in your conversion?
oguz286, you are a legend! Any chance of a before/after BIOS hex diff? I'd rather like to try to flash a GTX480 with a Q6000 BIOS with RAM size adjusted appropriately and see what effect it has.
Could you tell me if you changed anything in the K5000 BIOS before flashing it onto the GTX680? Or did you just flash it after you changed the hard straps?
Where did you obtain this BIOS?
gordan, did you get any further with flashing a GTX480 with a RAM-size-adjusted Q6000 BIOS?
I think I have a different motivation than most users here. It seems Nvidia have severely crippled 3ds Max viewport performance in their recent cards. Since 3ds Max does not use any of the advanced OpenGL features of the Quadros, but Direct3D (9.0), I don't see any reason why GeForce cards should perform as badly as they do - especially the new Kepler cards. A GTX780 performs a lot worse than a GTX480. We also have a few GTX680s, and the newest one performs as poorly as the GTX780, although it has the same specs as the older ones.
A Quadro K2000M performs a lot better than any of the GeForce cards.
I first tested with a GTX480, changing the soft straps and flashing it with the hardware ID of a Quadro 6000, but did not see any performance gains in 3ds Max.
As far as I can tell from my limited testing, the 3DSMax viewport is software rendered. I tried a genuine Quadro 2000, GTS450, GTS450 modified into a Q2000, GTX470, GTX470->Q5000, GTX480, GTX480->Q6000, GTX580, GTX580->Q7000, GTX680, and GTX680->K5000 and I got about 7fps for the viewport out of all of them, no difference between them at all. Which implies software rendering.
So simply... when the viewport is CPU-bound, better performance comes from a better driver and GPU architecture, not raw horsepower.
hey gordan, thanks for answering my questions.
There are different viewport rendering modes available, actually. Since 3ds Max 2012 the default is the Nitrous renderer, which is stated to be DirectX 9 based (DirectX 11 for Max 2014). On top of that, the fps changed for me when changing to different cards; surprisingly, the best results came from a Quadro K2000M (admittedly in a different system) and a Radeon 5870.
For the comparison, a large scene with a high polycount and complex materials was used.
But even considering sun tzu's post (which I didn't fully understand given my findings), I still can't find an explanation for why the newer GTX680 in my testing only returned about half the fps of the older one.
Hi,
I changed the marked resistor in the pic from 25K to 40K. Now the PCI ID is 1025 instead of 1005 for a Titan. The aim is to get 1020 (K20X). I already changed the resistors nearby, but it didn't change anything. Anybody have any ideas?
If it works the way I think it does (and that's a big if), the 4th nibble is controlled by the resistor pair directly to the left of the one you changed to 40K to boost the 3rd nibble to 0x2 (0x1005 -> 0x1025). I think that in order to get to 0x1020 you would need to change the existing resistor in that pair to 5K. That one is pulling to ground on the right side and seems to be connected to SO on the left side.
If I'm right, you should also be able to apply a trick similar to bridging VCC and SCLK with a resistor, by connecting SO and GND. If the current ID is 5, that would make the resistor to be modded 30K; to reduce the overall resistance to 5K you would need a 6K resistor between SO and GND.
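A quick sanity check of that arithmetic: a resistor added between SO and GND sits in parallel with the existing pull, and the sketch below (values taken from the post above; the strap map itself is still an educated guess) confirms the 6K figure.

def parallel(r1, r2):
    # Equivalent resistance of two resistors in parallel.
    return (r1 * r2) / (r1 + r2)

def resistor_to_add(r_existing, r_target):
    # Resistance to place in parallel with r_existing to reach r_target.
    return 1 / (1 / r_target - 1 / r_existing)

print(parallel(30e3, 6e3))         # 5000.0 -> 30K || 6K gives 5K overall
print(resistor_to_add(30e3, 5e3))  # 6000.0 -> the 6K resistor mentioned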
It's hard not to get excited over modding a GTX to a Quadro or K series pro card. However, is there any particular reference or non-reference GTX 6 series that can be modded 100%? I am willing to take the risk if it's not too difficult.
Actually, I have the non-reference GTX670-DC2-4GD5. Has anyone attempted it on this card and had any luck? Please let us know.
You'll find that even most non-reference GTX670/GTX680 cards only differ minimally from the reference design, and the strap resistors are in the same locations. I have a Gainward Phantom GTX680 card which is technically non-reference, and I successfully modified it. You could actually try a part-mod. If you want to use it for virtualization, I read somewhere that Tesla K10 is supported for PCI passthrough, which means you wouldn't even have to remove the resistor controlling the 3rd nibble - only remove the one controlling the 4th. That should give you ID 0x118F for Tesla K10 and you might find it works just fine. Best of all, the resistor that controls the 4th nibble is on the back of the card, which means you wouldn't even have to take off the heatsink. Please report back if/when you do it.
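In device-ID terms, the part-mod described above is just letting the lowest nibble float high. The arithmetic below assumes, as stated in the posts here, that a stock GTX680 reads 0x1180 and that an unpopulated 4th-nibble strap reads as 0xF; neither is from official documentation.

gtx680 = 0x1180
k10 = (gtx680 & ~0xF) | 0xF  # 4th (lowest) nibble floats to 0xF with no pull resistor
print(hex(k10))              # 0x118f -> reported here as Tesla K10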
This is awesome advice, thanks. Can you help with the identification of the resistor(s)? I will take a pic of the card when I get home. Also, do you know if SLI will work if I pass through both cards to a VM?
Have you checked the 680 photos posted here on the forum that show the location of the resistors? Have you had a look at your card to identify whether the resistors are in the same place(s)?
Not yet, I will do this. I am keen to know if SLI will work or not. Or is it that K series or Quadro don't allow for SLI??
I have never tried SLI - I'm pretty sure Grids/Teslas don't support it so the driver doesn't expose it. Then again there are a lot of extra limitations in a VM, I noticed that several options that appear on bare metal don't appear on the VM with the same card (e.g. on XP64 domU the PhysX options don't appear, but on bare metal they do). So I wouldn't pre-emptively expect SLI to work, but if you discover otherwise, please, do report back.
If I manage to bork both of my 670s I will post it here. If only I had money to burn on the Palit 780Ti, I would love to know if it could be converted to its pro equivalent.
Due to popular demand, I have finally found time to write this up. I had to rush it a little, but hopefully you get the gist.
Nvidia GeForce 4xx Fermi Soft Modding Guide (http://www.altechnative.net/2013/11/25/virtualized-gaming-nvidia-cards-part-3-how-to-modify-a-fermi-based-geforce-into-a-quadro-geforce-gts450gtx470gtx480-to-quadro-200050006000/)
Any questions, please ask away, and I'll update the article to expand on those points where appropriate.
Fantastic! Please let us all know when you have finished your Kepler guide, awesome work.
Thank you for the work in this thread. I am interested in modding a Titan into a TCC-enabled device. It would dramatically improve performance in CUDA. The background is that the Titan uses the WDDM display mode driver, which incurs a massive delay every time a kernel is executed. The TCC mode does not suffer this delay because it bypasses the Windows WDDM driver.
From what I read, I need to solder a resistor onto the board, flash a K20c BIOS, and do a hex soft-mod? I have very little experience in any of these steps. I will re-read the thread to try and absorb it all, but if there were a guide through these steps for beginners, I am sure it would be very popular and greatly appreciated. Thanks again for the hard work here.
I just wanted to post my experience of average eBay/Amazon used prices for top-notch cards!
Average sales price:
Grid K2: $1650 [a GeForce 690 costs $1000 new!!]
Tesla K10: $1600 [mostly the same thing as a Grid K2]
Tesla K20: $1600 [the K20 is almost a K6000; the K40 is exactly a K6000!]
Quadro 6000: $690 [older Fermi model that works great with ESXi but doesn't have VGX]
Quadro 4000: $200 [older Fermi model, no ECC]
What I'd like to do is locate possible Kepler cards that would be a good (i.e. sensible) conversion!
Hacking a $1000 GeForce 690 to become a $1650 Grid K2 seems a little illogical (risk versus cost), but hacking a $120 Quadro K600 or NVS into one fourth of a Grid K1 might be good value!
Do you think one of these hacked Quadros could drive IBM's T221 3840x2400 monitors?
Things are about to change. Let's just say: big monitors, and fruit. Soon. That resolution will be everyday ordinary very soon.
I'm not going to hold my breath. People have been saying that since 2003 and it has failed to materialize. There is still no other monitor that will do 3840x2400. The closest is the new Asus that does 3840x2160, and that's 31", which is too big IMO - too much looking side to side. 24" is about the limit of what I'd consider nowadays (having had a 30" Dell before the T221s).
Have you noticed that the Grid K2's and its USMs' PCI IDs are very close? I think you might be onto something when you said software. Perhaps the card is dispatching nearby PCI functions to enable the Grid VGX licensing?
I'm going to get some dinner and take a close look at the bitmask pattern of K2 -> K2 USM and K1 -> K1 USM; perhaps there is a pattern to this madness.
For example, for the Grid K2:
- Physical Function PCI ID: 10de:11bf - GK104GL [GRID K2]
- Virtual Function selectable from:
PCI ID: 10de:118b - GK104 [GeForce K2 USM]
PCI ID: 10de:118c - GK104 [NVS K2 USM]
PCI ID: 10de:11b0 - GK104GL [Quadro K2 USM]
PCI ID: 10de:11b1 - GK104GL [Tesla K2 USM]
All this just to change the name in Windows, without any performance benefit in professional apps - look at these charts!
Update:
As some of you that have been following this thread are aware, I've been having a rather bizarre problem with only DL-DVI working on my modified GK104-based cards (680 and 690). It just occurred to me that the two cards I have happen to have one thing in common - they are both Gainward cards. Has anyone else successfully managed to get DL-DVI to work in a VM with a Gainward GTX680 Phantom 4GB or Gainward GTX690? Gainward 690s have a few strapping resistors in different places to the EVGA and other 690s, so it is plausible they made some tweaks that cause the problem. Does anyone have either of those cards working with DL-DVI outputs successfully?
On a separate note, I just learned that the GTX780Ti has device ID 0x100A. The K6000 is 0x103A. We only need to change the 3rd nibble (via oguz286's awesome mod). His mod is to use a 33K resistor to jack up the 3rd nibble from 0 to 2. On my Titan, using an 18K resistor instead boosts the ID to 3. So to make a 780Ti into a K6000, simply apply an 18K resistor between VCC and SCLK and voila, job done. For extra points, solder a couple of wires between those pins, solder an 18K resistor on one of them, and a switch to connect them. Break the switch out somewhere accessible (extra extra points for making the switch easily and neatly accessible from the back of the card without obstructing the airflow too badly). Now you can switch between a 780Ti and a K6000 at the flip of a single switch.
Needless to say, the neatness of this means that I'll be acquiring a 780Ti as soon as I've ebayed my Titan.
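For clarity, the nibble arithmetic being described looks like this. The resistor-to-increment mapping (33K -> +2, 18K -> +3) is purely empirical, as reported in the posts above, not from any official source.

def bump_3rd_nibble(dev_id, increment):
    # Add `increment` to the 3rd nibble (bits 4-7) of a PCI device ID.
    nib = (dev_id >> 4) & 0xF
    return (dev_id & ~0xF0) | (((nib + increment) & 0xF) << 4)

print(hex(bump_3rd_nibble(0x100A, 3)))  # 0x103a: 780Ti -> K6000 (18K resistor)
print(hex(bump_3rd_nibble(0x1005, 2)))  # 0x1025: as observed on the Titan earlier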
I work in 3D graphics and have been looking at moving into GPU rendering, but the cost of the Quadros is just so damn much! I am currently using a K4000 at work, but for the home rig that I am speccing I am seriously considering this kind of mod. Whilst I'm sure this has been covered in this thread, it's already 52 pages long(!), so I guess I'll have to ask again:
1) What are the potential pitfalls of this?
2) I am no electrical technician/engineer, so how much will I need to learn to do this? I am willing to put a bit of time into it.
1) Your application may not benefit. Modified cards do not have the full feature set of the real Quadros. For example, the SPECviewperf scores are no different from a normal GeForce card, and stereo 3D remains unavailable.
2) For Tesla and Fermi series cards (GeForce 2xx/3xx/4xx) no hardware modification is required - you just need to modify the BIOS by about half a byte. See the link I posted above (or website link under my profile) for more details on that. What cards does your application support for rendering? If it supports GeForce cards, you probably don't need to modify them. If it only supports Quadros, see if it supports Quadro 5000 or 6000. If so, I would suggest you get yourself a relatively cheap 4xx series card (GTX470 or GTX480), and modify them into a corresponding Quadro (470 -> 5000, 480 -> 6000), and see how that fares.
Almost all of us here are mainly interested in the modding to get virtualization features and TCC working (those are the big wins), so if you decide to go down this route, please report your findings/results/before+after results - it would be nice to get some feedback from someone using this for 3D rendering purposes.
Well, basically I run 3ds Max, in which the viewport performance gains between Quadro and GTX are pretty huge; but for 3D rendering the GTX cards are on a par with the high-end Quadros, to be honest, so that's not really where I'd be looking to get any benefit. All CUDA-enabled cards are supported in the rendering package (V-Ray) afaik.
I'd purely be doing it so that I can get higher FPS whilst modelling. I work in architectural visualisation, so (usually due to high-res foliage!) I often end up with 10-20 million polygon models, which would grind to a halt on a GTX. I am looking to set up a 3D PC at home, but don't have the kind of budget for a Quadro - hence being so interested in these mods.
Really? Because everybody that I've spoken to that has had the chance to compare (non hacked) GTX & quadro cards has said the opposite.
What version of Max were you using?
The nitrous viewport in later versions has improved version on version when it comes to performance, though whether this is taking more advantage of the GPU or not is up for debate. I don't think that it is CPU bound, because the performance in the viewport literally grinds to a halt when I start rendering a scene on the GPU and trying to work in the viewport alongside it, despite low CPU load.
Update:
1) GTX680 -> Tesla K10 mod by simply deleting the 4th nibble resistor works fine for VGA passthrough. It seems to yield all the advantages of modding to Tesla/Quadro/Grid while keeping things trivially simple. And since the resistor to be removed is on the back of the PCB, you don't even have to remove the heatsink or break any "warranty void" stickers, if you are concerned about such things. This really does reduce the mod complexity to the level of completely trivial.
2) The DL-DVI issue I mentioned a few times before is a Gainward specific issue on both the 680 and 690. I just got a MSI GTX680, and the problem does not manifest on that card. So if you are planning to do this, avoid Gainward cards.
So, according to verybigbadboy's summary table (https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg207550/#msg207550 (https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg207550/#msg207550)), the second resistor should be a 40K one. Actually, he says he put in a 100K one because he ran into some stability issues... what am I missing?
OK, so modding an EVGA 690 is not that difficult then. So, by modding it, can you still take advantage of the dual GPUs, which is akin to SLI? I might buy one and do the mod; it will tide me over until the 780Ti is 100% modded.
The 690 doesn't have XP64 drivers (the only Windows I use), and I don't see an SLI option in the Nvidia control panel when it's running as a Grid K2 / Tesla K10. I never tried passing more than one GPU to a VM.
I'm more than a little annoyed that my 690 is a Gainward with the bizarre DL-DVI issue, because I really wanted to run with a 690 split between two VMs, due to a slot shortage, and an uncrippled GK104 is good enough to run anything (including Crysis at max settings) but the worst-programmed games (*cough*Metro Last Light*cough*). But now that I've got a Titan on a workbench and a 780Ti in the post, it's hard not to justify using those instead - if the mod goes as planned.
I'll have the 780Ti mod tested within a few days (however long it takes for mine to arrive in the post, which is going to be days rather than weeks), so I'd advise you don't bother buying a 690 for those few days (by the time a 690 you buy today arrives, the 780Ti mod will be tested). Unless you want to trade a tested Teslified 690 for a tested Quadrified 780Ti afterwards (what part of the world are you in?)? :) Just make sure that if you get a 690 it isn't a Gainward, or you'll be limited to 1280x800 when virtualized.
wow, I like your desire to risk bricking those very expensive cards :-+ See PM.
The risk of permanent damage is pretty close to 0.
I've got questions about the "easy, remove the 4th nibble resistor" GTX680 trick that results in a Tesla K10.
Would the DVI/HDMI outputs actually work if used on host OS, not VM?
Well, that was my problem. I'd be glad to help with my K2, but I need to get a "demo" or "trial" of the XenServer preview to do vGPU. I only have ESXi 5.5 (API intercept) and Server 2012R2 (API intercept).
I know "trial" and "demo use" keys exist; I'm not even considering a "pir8" copy.
Anyone have any help here? I think that by shadowing each step and comparing output there would be a better chance of getting it going with a 680/690, but there are a myriad of bugs with vGPU right now. The readmes show tons of issues (screens going black over 1920x1080, the IOMMU mapping to the wrong area due to 64-bit and 4GB VRAM issues, not doing the right command in the right order = fail, RDP disabling hardware acceleration entirely).
If you check the Xen forums on getting vGPU working, it's literally a minefield of this GPU BIOS and that server BIOS and being on the right 2nd Tuesday of the month to get full hardware vGPU going.
API intercept does not use an Nvidia BIOS or driver in the VM.
Shared vGPU is what I'd love to try out; alas, no software.
I'm pretty sure you need XenServer 6.2 plus the XenDesktop 7.x preview, and the desktop portion is the big-money remote graphics vGPU portion and definitely not free.
If you think about it, the hypervisor only needs to run the VGX software; the desktop "tools" - VMware tools/drivers - are where the Nvidia USM BIOS/driver has to work its magic inside the VM.
I think ESXi consolidates all of this (free), whereas Xen can sell both parts individually.
If the OS is 64bit - why does it matter where the BAR maps?
This is not entirely correct. I have not seen anywhere that VMWare is doing VGX as of yet, instead opting to stick with their vSGA. There are some similarities but they are definitely not the same thing. Citrix is the first to release NVidia VGX integration, likely stemmed from their partnership work with NVidia to create this and providing software for NVidias demo servers. At the very least, when comparing VMWare vSGA with Citrix and VGX, you are leaving out the required VMWare Horizon View package which is definitely not free.
Hi all again ;)
I successfully modified a SPARKLE SXS4501024D5SNM GeForce GTS 450 1GB into a Quadro 2000.
The board is a reference NVIDIA GTS 450: http://www.ixbt.com/video3/images/ref/gts450-scan-back.jpg (http://www.ixbt.com/video3/images/ref/gts450-scan-back.jpg)
upd:
GPU passthrough works fine.
Initial values are:
index   meaning                resistance
1       3rd byte, value D      none
2       3rd byte, value C      35k
3       4th byte, values 8-f   none
4       4th byte, values 0-7   25k
device / resistors table
device name    R1     R2     R3     R4
GTS 450        none   35k    none   25k
Quadro 2000    35k    none   5k     none
furmark: http://www.ozone3d.net/benchmarks/furmark_192_score.php?id=120616 (http://www.ozone3d.net/benchmarks/furmark_192_score.php?id=120616)
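Reading verybigbadboy's two tables together, the fitted/absent resistors select the last two digits of the device ID. A hedged sketch of that encoding, using the public PCI IDs for these two cards (GTS 450 = 0x0DC4, Quadro 2000 = 0x0DD8) as my assumed targets:

def id_nibbles(r1, r2, r3, r4):
    # R1/R2 pick the 3rd digit's value (D vs C); R3/R4 pick which range
    # the 4th digit falls in (8-f vs 0-7), per the tables above.
    third = 0xD if r1 else 0xC
    fourth_range = "8-f" if r3 else "0-7"
    return hex(third), fourth_range

print(id_nibbles(r1=False, r2=True, r3=False, r4=True))  # GTS 450: ('0xc', '0-7') -> ...C4
print(id_nibbles(r1=True, r2=False, r3=True, r4=False))  # Quadro 2000: ('0xd', '8-f') -> ...D8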
Can you hack the ASUS GTX660 2Gb model? I would be interested in using a quadro.
mrkrad, I guess you missed the post where I explained that soft-modding is ineffective on Kepler class cards (GKxxx GPUs).
My 780Ti arrived, and I can confirm that it is based on the exact same PCB as the Titan. Modding the 3rd nibble works the same way as it does on the 780 and the Titan, so you can make it into a K6000 easily.
Unfortunately, a quick visual comparison of the resistors I suspected of controlling the 4th nibble strap shows that they are in the same place on both PCBs, rather than at least one being in the alternate position (for nibble values >= 0x8). So no info on the 4th nibble strap location yet. Then again, given the 780Ti is cheaper than the Titan anyway and trivial to modify into a K6000, I imagine it's going to become the second easiest / most popular mod after the GTX680 to K10 (just remove one resistor off the back of the PCB).
Happy modding.
Which 780Ti did you get, gordan? So by modding the 3rd nibble we can turn it into a K6000; what does the 4th nibble give us exactly?
I got the EVGA base model and flashed it with the SSC BIOS. So I got the performance of the most overclocked EVGA card for the price of the cheapest EVGA card. 100% stable, OCCT-ed overnight. GK110 is a demon OC-er, with a bit of tweaking I'm sure it'll go a lot faster, but Kepler BIOS Tweaker doesn't seem to be quite up to the job of power limit modding on it. :( If anyone knows of a better tool, please, do share.
The 4th nibble would let us mod the 780 and Titan into a K6000 as well. Without the 4th nibble info, only the 780Ti is moddable into a K6000.
Fantastic, lets hope the rest will follow your bravery and mod some more 780ti's. I am probably going to buy the same one you have.
It would be nice if you would offer a service of modding the cards :) Some of us have unsteady hands. Only good for nvflash strapping ..
The best advice I can give is not to skimp on the soldering iron. Get a good one with pointed tips so you can apply the tiniest amounts of solder.
Or go one better and get a hot air reflow iron. A decent one of those costs more than a Titan, though.
One other possible alternative is to get conductive glue.
But none of those preclude the requirement for very, very steady hands; a good night's sleep and no caffeine for 24 hours before attempting makes a noticeable difference.
Hi
Very interesting, my 780ti is arriving tomorrow :-)
Have you looked at the nvidia-smi output? Especially the "compute mode =" and whether it can be changed e.g with -c "All On" option :-)
I've saved the BIOS off a GTX 480, then changed the device ID with NiBiTor from 06C0 to 06D9, but it still shows up with the old device ID after flashing.
nvflash -5 does require me to type YES due to the mismatch, so I must be doing something wrong.
I also had some problems with the Tesla K10 mod: I had the wrong memory size (29xxMB), insufficient info in GPU-Z, and nonexistent sensors.
I could easily solder anyone's card if they were near me.
Those ultra-pointed tips are a pain IMO. I found that they lose heat when you touch the component, so you have to turn the heat up more; then, when the iron is just sitting there, it starts to oxidize unless you turn it back down. They just aren't a big enough heat sink, and the temp control is all out of whack.
I use a Hakko 936 with the 900M-T-B tip that came with it. I think they are around the $80 mark now. When I was working I preferred the chisel-type D tips, but they are easy enough with either. I used to do 0603s at work.
The key really is flux, if you don't get it soldered right away before the flux in the solder evaporates. Get some no-clean flux in a syringe tube and some fine tweezers.
My arms and hands shake a fair bit; I just hold them hard against the desk while I'm soldering.
Update:
As some of you that have been following this thread are aware, I've been having a rather bizzare problem with only DL-DVI working on my modifed GK104 based cards (680 and 690). It just occurred to me that the two cards I have happen to have on thing in common - they are both Gainward cards. Has anyone else successfully managed to get DL-DVI to work in a VM with a Gainward GTX680 Phantom 4GB or Gainward GTX690? Gainward 690s have a few strapping resistors in different places to the EVGA and other 690s, so it is plausible they made some tweaks that cause the problem. Does anyone have either of those cards working with DL-DVI outputs successfully?
On a separate note, I just learned that the GTX780Ti has device ID 0x100A. The K6000 is 0x103A. We only need to change the 3rd nibble (via oguz286's awesome mod). His mod uses a 33K resistor to jack the 3rd nibble up from 0 to 2; on my Titan, using an 18K resistor instead boosts it to 3. So to turn a 780Ti into a K6000, simply apply an 18K resistor between VCC and SCLK, and voila, job done. For extra points, solder a couple of wires between those pins, solder an 18K resistor onto one of them, and add a switch to connect them. Break the switch out somewhere accessible (extra extra points for making the switch easily and neatly reachable from the back of the card without obstructing the airflow too badly). Now you can switch between a 780Ti and a K6000 at the flip of a single switch.
Needless to say, the neatness of this means that I'll be acquiring a 780Ti as soon as I've ebayed my Titan.
Edit: Maybe I'll keep my Titan for a little bit longer, as if the PCB is the same (which it almost certainly is), it should be very obvious which resistors are configured differently for the 4th nibble. Watch this space.
I modded a Zotac GTX 680 into a K10, with hopes that I could get it to do GPU accelerated computation in a program called CST. CST does not support any consumer video cards for acceleration.
After the mod it shows up as a K10 in Windows, but trying to use the video out of the card results in a limited resolution of 1280x720, and using programs such as CST, or even attempting to benchmark it with other software, results in it not being recognized. Do you believe this is a hardware issue or a manufacturer issue?
So, I was able to mod the titan into the k20xm.
You know, that 1280 bug is documented by Xen as a known issue, even with real Grid cards.
I have no issues with it popping up as a K10 in GPU-Z or Device Manager. I've used Quadro/Tesla drivers 307.45 (the version recommended by CST) as well as the latest drivers available. Have you had issues with different drivers? I am trying to reread all the threads from where I've left off here and there. I decided to test GPU acceleration with another program called cgminer (yeah, I'm just dabbling with that stuff); it functions correctly with my stock 560 Ti, but thereafter it reports GPU 1 failure.
I have additional GPUs driving my screen, but as they were all nVidia based I didn't want to risk driver conflict issues, and that is when I realized the GTX680-modded-to-K10 would not go past 1280.
On page 17 verybigbadboy stated:
"I successfully modified
Zotac PCI-E NV ZT-60206-10L GT640 Synergy 2G 128bit DDR3 900/1600 DVI*2+mHDMI RTL
To NVIDIA GRID K1. It is working fine, passthrough works too. BUT device ID modification is possible only after BIOS modification. BIOS modification is needed only for specific vendors.
"
Do you think this would be an issue with this Zotac? I'm going to search the thread for other Zotac 680s done successfully. I'm not very proficient at editing BIOSes and flashing them; I'd have to find a tutorial if that is the case.
Side note: when looking through CST's guidelines for GPU acceleration, they show that the nVidia control panel in Windows has a Tesla-specific branch after the stereoscopic 3D video settings. I don't have this available within my control panel.
So, I was able to mod the titan into the k20xm.
I mostly need DP precision for EM simulation and want to do the same mod.
Am I correct that all that needs to be done is changing the resistor 25k -> 40k and flashing the modded 128KB BIOS from the K20Xm?
It may not work - you need to modify the 4th nibble value to match the Tesla card, and that hasn't been located yet. The drivers check the hard strap rather than the soft strap to enable features. Titan allegedly has full DP performance, but my Titan shows near identical figures for DPFP in CUDA-Z as my 780Ti, so either CUDA-Z isn't measuring it reliably or it is a myth that Titan has uncrippled DP performance.
The only full ID modification available at the moment is 780Ti to K6000, since those don't differ in the 4th nibble, so changing only the 3rd nibble is sufficient.
Also note that video outputs and configuration are set by the BIOS, so flashing a Tesla BIOS onto a GeForce card will disable all video outputs on the card.
I read the thread once again :) and found that the resistor mod which seemed to work is 33k between VCC and SCLK.
According to previous unlucky experiments, the resistors controlling the 4th nibble should be located on the card's backside near the flash, right?
By the way, if the softmod doesn't work, then why does the original Titan BIOS contain FF FF FF 7F 00 14 00 80 at 0x1C, which, if I'm not mistaken, changes the hardware coded 4th nibble from something (0, 1, 4 or 5) to 5?
Regarding the Titan's uncrippled DP performance, AFAIK it is activated by the driver, which changes the card's BIOS so that it turns on all the DP FPUs and at the same time drops the frequency to 732 MHz. Do you know if anyone has made a comparison of "crippled" and "uncrippled" BIOSes?
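For anyone wanting to make that comparison, here's a minimal byte-diff sketch (the filenames are placeholders for your own dumps):

# Hedged sketch: list the byte offsets where two VBIOS dumps differ.
a = open("crippled.rom", "rb").read()
b = open("uncrippled.rom", "rb").read()
for off in range(min(len(a), len(b))):
    if a[off] != b[off]:
        print(f"0x{off:06X}: {a[off]:02X} -> {b[off]:02X}")
if len(a) != len(b):
    print(f"sizes differ: {len(a)} vs {len(b)} bytes")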
So I had some problems with Tesla K10, I had the wrong memory size(29xxMB) and insufficient info in GPU-Z, and nonexistent sensors.
Sounds like you didn't reinstall the driver properly after you modified the card.
I read your assessment, but I figured it was worth a test on one card, if I can do it before the others need to be exchanged for real Quadros. My understanding was that Quadros do perform better in viewports than the mainstream cards, but I am not sure I want to make that investment to find out. We actually picked up a Quadro K4000 to test, just haven't opened it.
If the ID works out as I hope, I can always buy the right resistors... this pack contains a broad range of values so I grabbed it. By the way, I think my diagram of the traces is missing some interconnections - the image is playing light tricks, so I won't know for sure till I open my own card - but looking at Guz's site, his image seems to show a few more interconnects. It makes sense that one could connect SCLK with VCC based on the traces... and I wonder if the other nibble can be tracked down by looking for a similar connection to a power line.
http://www.guztech.nl/wordpress/index.php/2013/11/researching-nvidia-gpus-geforce-gtx780-and-gtx-titan-to-tesla-k20-and-tesla-k20x/ (http://www.guztech.nl/wordpress/index.php/2013/11/researching-nvidia-gpus-geforce-gtx780-and-gtx-titan-to-tesla-k20-and-tesla-k20x/)
Strange. Had you verified that it is coming up with the correct device ID in GPU-Z and GPU Caps Viewer?
Yes, I always do.
Maybe I'm speaking from my ass here, but the Tesla mod might be flaky/unstable in my case - I only tried one version of the drivers.
It's one thing to hack the ID, but I am sure the video RAM is ECC on Quadro while on GeForce cards it is not. If that's the case, errors that happen in RAM, or on the way to and from RAM, will not be corrected, so the data is not 100% accurate compared to a system with ECC RAM. Just like a desktop can be used as a basic server, at the end of the day it's still a desktop running server software; real servers have ECC and all the other features servers come with.
What this means for rendering: I guess you could render something and it might have errors in the image, which could spoil your work.
Note there could be other differences as well, like memory bus width, e.g. 256-bit vs 384-bit - a wider bus means more memory bandwidth (if the clock and timings are the same).
Instruction sets can differ as well, and that will have an impact depending on what you're doing.
Personally, I would only try this on a SPARE card, and research first to make sure everything else hardware-wise has the same specs.
There is also the possibility that this could work 100%.
ECC on Nvidia GPUs is done in software - you get an 8/10 storage overhead for ECC, but there is no difference in the actual RAM. You enable the ECC feature in the MC, reboot, and your usable VRAM shrinks and gets a bit slower. On modified GeForce cards this doesn't work: if you mod a 4GB 680 to a K5000 and flash the K5000 BIOS onto it, the option will appear in the control panel, but it won't actually do anything (RAM size doesn't shrink, RAM I/O doesn't slow down).
As for rendering errors: for aircraft component design or nuclear simulations, that might be an issue. For most uses it's not really an issue.
On memory bus width: Quadro/Grid/Tesla parts generally have equivalent GeForce parts, and the spec of the GeForce part is at least as good as its pro counterpart. E.g. the GTX480 beats the Quadro 6000 on everything but RAM size (1.5GB vs. 6GB) - it is clocked higher, has the same width memory bus, and has more shaders (480 vs. 448).
On instruction sets: if there is a difference, the driver glosses over it. While there is a difference in the exposed GL primitives (even after modification), it is also very evident that the software implementation of the missing GL primitives on GeForce cards is deliberately, massively crippled.
And as for only trying this on a spare card - I guess you haven't noticed (the size of this thread being a hint) that there are hundreds of people who have been modifying cards like this, for things like virtualization, with great success.
So here is a weird scenario to try out for anybody here that has a GTX card modified into a Quadro, with a virtualization setup for multiple VMs with vGPU enabled. I know the Kepler based Nvidia cards have a technology called GameStream, formerly GRID, that can take any approved games, or Steam Big Picture mode, and pipe it to an Nvidia Shield to play games. I wonder if someone can test this on their modded cards and see if multiple VMs can utilize this function.
I know not everyone can get their hands on an Nvidia Shield, but there is an Android and PC app being developed at http://forum.xda-developers.com/showthread.php?t=2505510 (http://forum.xda-developers.com/showthread.php?t=2505510) that works just like the Nvidia Shield and replicates its functions on either another PC or any current Android phone/tablet/TV console.
I pass through my 780Ti modded to a K6000 to a Windows 8.1 VM. GeForce Experience does not support Quadro cards, so it won't work.
Why build 8 workstations for iray in 3ds Max? Instead, just build one containing at least 8 graphics cards...
VMware Horizon 5.3, and use for example a GTX 580 softmodded to a Tesla M2090, or a softmodded 680...
Has anyone investigated if a 780 ti modded to a K6000 has the increased DP performance?
is anyone going to mod a titan black? it has a pci id of 10DE 100C. is this easier to mod than the titan?
I'd be interested to know as well. Will have a Titan Black in my hands sometime next week.
Can anyone with a Titan read the card's strap?
sudo apt-get install git libpci-dev libpciaccess-dev libxml2-dev cmake flex bison libx11-dev libvdpau-dev libxext-dev
git clone http://github.com/envytools/envytools/
cd envytools
cmake .
make
cd ..
# dump the strap register at 0x00101000 to a file
sudo ./envytools/nva/nvapeek 0x00101000 > nv_strap_peek.txt
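If it helps anyone eyeballing the result: assuming nvapeek prints lines of the form "address: value" in hex (the strap register's bit layout isn't documented in this thread, so this only shows the raw word and its nibbles for comparing cards):

# Hedged sketch: pretty-print the strap word dumped by nvapeek above.
# Assumes each line of nv_strap_peek.txt looks like "00101000: 12345678".
with open("nv_strap_peek.txt") as f:
    for line in f:
        if ":" not in line:
            continue
        addr, _, rest = line.partition(":")
        value = int(rest.split()[0], 16)
        nibbles = [(value >> shift) & 0xF for shift in range(28, -1, -4)]
        print(f"{addr.strip()}: 0x{value:08X}  nibbles: {nibbles}")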
I am about 90% certain the strap you speak of is set by the resistors on the PCB - what we have been hacking here. The soft strap also exists, which can be set by the vbios, but that is nowadays used by the likes of Grid cards to present a different personality to the driver in the VM than what is given by the hard strapped device ID.
I have to say I am unimpressed by just about all the non-reference coolers like the ACX. The reference design is actually really, really good - not only does it ensure good cooling of the card, it also makes all of the hot air get straight out of the case via the grille in the back plate.
In contrast, the coolers like the ACX dump all the hot air straight back into the case, which both reduces the cooling effectiveness (you are cooling with pre-heated recycled air) and makes everything else inside the case run hotter. This is a problem both with running multiple GPUs and power hungry overclocked CPUs.
Oh, and the resistor-across-the-EEPROM mod should work on non-reference 780Tis as well.
You understand right. The features that become unlocked are the ones that are purely up to the driver to allow, such as correct functioning when used virtualised (e.g. Xen PCI passthrough or Xen VDGA/VSGA). It also supposedly unlocks some multi-monitor features (this was the OP's original requirement).
What specific OpenGL requirements do you have? There are only a very small number of primitives that are disabled on the GeForce cards. Also, if you are using Linux, you may find that the open source nouveau driver performs better since it isn't deliberately crippled for non-Quadro cards - it's probably worth a try.
Can anybody explain to me how to locate R2, R3, and R4 with an ohmmeter?
A while back someone asked about modding a GTX660 (GK106 based) into a K4000. I've done this on an EVGA GTX660 (part number 02G-P4-2662KR), but it should be similar on other GTX660s. Be careful: there are some GTX660s that use the GK104 and are a different design. This mod is for GTX660 boards with an original PCI ID of 11C0 being modded to the K4000 ID of 11FA.
This mod requires removing the heatsink to access the topside of the board. See pictures for resistor locations
R3H on the topside of the board controls the 3rd nibble. On a GTX660 it is 25K ("C"). Removing it gives "F". On other boards some folks have reported that leaving it open causes issues and a 40K resistor is needed; I didn't have that problem.
R4H and R4L on the backside of the board control the 4th nibble. Remove the upper resistor R4L (5K on the GTX660 = "0") and add 15K at the lower resistor R4H for "A".
Reassemble the heatsink before testing. With the mod the board now works in Xen GPU passthrough. It's also way faster than a real K4000.
Thanks for posting your instructions. I could get an EVGA GTX 660 model 02G-P4-3061-KR. Do you think that would work? I would use it for VGA passthrough, to replace my Quadro 2000.
I haven't been able to find a clear picture of the backside of the 02G-P4-3061-KR so I can't be sure, but that looks like it might be a different board (which would likely mean different strap positions).
Thanks for your reply! Unfortunately I couldn't find any meaningful picture of the card. The spec sheet shows only minor differences between your card and the 3061:
Spec | 02G-P4-3061-KR | 02G-P4-2662-KR |
Base clock: | 980 | 1046 |
Boost clock: | 1033 | 1111 |
Texture Fill Rate: | 78.4 GT/s | 83.68 GT/s |
First of all verify the device ID with GPU-Z and GPU Caps Viewer. They should both show the device ID is 0x118F.
Second, check the soft strap in the BIOS. This is documented relatively early in the thread. See this post by verybigbadboy:
https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/msg213332/#msg213332 (https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/msg213332/#msg213332)
If that checks out as well, see if you can dump out the BIOS using nvflash. Make sure it looks correct, isn't 0 bytes in size, and check it with KBT to make sure it is complete and the checksum matches (use KBT for checksum verification/repair, NiBiTor is ancient and doesn't support newer BIOSes).
If you cannot read the BIOS, it means the signal from the BIOS is too attenuated with the resistor removed and you'll have to put a 40K resistor into the alternate location for the 4th nibble. I have not seen this happen on any of my cards when modding to Tesla K10, but other people have reported it. I have, however, seen it happen when modifying the 3rd nibble on a Gainward GTX690, so it does happen on some cards.
Another thing worth trying is an older driver, as mentioned by NEOAethyr above - it is plausible Nvidia have done something recently to prevent modified cards from working.
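On the checksum point: KBT does this for you, but for a plain (unsigned) ROM image this is all it boils down to - a minimal sketch, with a placeholder filename:

# Hedged sketch: verify the legacy PCI option-ROM checksum of a dumped VBIOS.
# A plain expansion ROM starts with 55 AA, byte 2 gives the length in
# 512-byte blocks, and all bytes of the image must sum to 0 modulo 256.
def rom_checksum_ok(path):
    data = open(path, "rb").read()
    if data[0:2] != b"\x55\xAA":
        raise ValueError("missing 55 AA option-ROM signature")
    length = data[2] * 512
    return sum(data[:length]) % 256 == 0

print(rom_checksum_ok("dump.rom"))  # placeholder filename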
Hi guys!
I'm writing just to confirm that everything worked as expected with my EVGA GTX680!!! Now it is modded to a Grid K2!!!
Thanks a lot for your great feedback!!!
Which resistors did you change? Did you have to swap out two resistors (which is what I'm thinking) or do something else?
I'm planning to do the same thing, but with a 690 (of which I only see it converted to Quadro or Tesla). I don't mind doing the extra solder work for the Grid card.
Thanks for any input.
Hi angerthosenear,
I've made these changes:
resistor 0: soldered a new 40k resistor
resistor 1: removed it
resistor 2: soldered a new 40k resistor
resistor 3: removed it
I'm experiencing some problems with passthrough mode in VMware View 5.3... I added the K2 hardware in passthrough mode and then added that hardware to a virtual machine (as external PCI). A Windows XP testing machine detected it, but once I halted the virtual machine to change the network configuration, the VMware server took sooo long to shut down that virtual machine and had many stability problems. Once it recovered, the passthrough hardware disappeared, and the nvidia K2 also disappeared from the hardware inventory on the host.
Any ideas guys?
Regards,
villa
Is resistor 1,2,3,4 from top to bottom as in the main post (in picture 2)? Or some other picture in this thread?
Thanks for the response. Not sure about your current issue however, I'm not versed in VM stuff at all so can't help there.
Nobody has yet been able to restore the missing GL primitives on GeForce cards. Cross-flashing the BIOS, in the one case where it actually works (Q2000 BIOS onto a GTS450), doesn't seem to achieve anything obviously useful in this regard. Then again, the Q2000 and GTS450 are very, very similar - much more so than other GeForces are to their equivalent Quadros, with maybe the exception of the K5000 and the 4GB GTX680 variants, in that they have the same amount of VRAM. I haven't tried flashing a full strap-adjusted K5000 BIOS onto my GTX680 yet - it is on my ever-growing TODO list. :(
In the case of the GF106 GPUs (GTS450/Q2000) I suspect the missing functionality is cut out of the GPUs before packaging, and if that is the case, the chances of restoring it are non-existent.
Note, however, that modifying a GTS450 into a Quadro 2000 does produce some performance benefits - Maya scores, although still far behind a real Quadro 2000, go up by around 40% after modifying the card.
Thus the first five bits of the device ID can be set in the firmware, meaning that the device ID could be set to values between 0x1000 and 0x101F without modifying any hardware.
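To spell out the arithmetic in that quote: five soft-strappable bits give 2^5 = 32 possible IDs, i.e. 0x1000 through 0x101F:

# Five strappable low bits on a 0x1000 base ID give 32 candidate IDs.
ids = [0x1000 | bits for bits in range(2 ** 5)]
print(hex(ids[0]), hex(ids[-1]), len(ids))  # 0x1000 0x101f 32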
Hi there!
First of all, great work!
I have decided to virtualize my windows system, and I wanted to mod my Palit Jetstream GTX 670.
I have removed the R2 resistor. It does not give any picture when booting, but it gets detected in device manager with DEV_118F (Tesla K10) when used as a secondary card. I get an error code 28 when trying to install a driver for that card though.
Do I need to do anything else beside this? I'm kind of reluctant as the PCB is not 100% based on the reference design.
Do I need to edit the firmware for the system to work again? I have an UEFI BIOS. Motherboard is a GIGABYTE GA-X79-UD5.
Cheers!
Modified cards won't work with UEFI motherboards.
A GTX680 modified into a Tesla K10 works just fine as a primary or secondary card on bare metal, and works just fine as a secondary passthrough card with monitor output in a VM. I have one.
A real K10 has no video outputs, but that has no influence on whether a modified 680 will have working outputs. Video outputs are configured by the BIOS payload.
Okay, that makes sense. But shouldn't I be able to install the drivers for it?
Will I have DirectX acceleration under Windows when run with passthrough?
Not if the BIOS didn't initialize the card. UEFI has crypto signatures, and AFAICT if it notices the device ID of the card isn't what is expected, it won't work.
As for DirectX acceleration under Windows with passthrough: yes.
Fingers crossed! I read somewhere I can mod the card's firmware for it to work on UEFI boards again, but that's for another time to figure out :D
kernel = "/usr/lib/xen-4.1/boot/hvmloader"
builder='hvm'
memory = 4096
vcpus=4
name = "win7"
vif = ['bridge=xenbr0']
disk = ['phy:/dev/vg0/win7,hda,w','file:/home/oerg866/OSs/en_windows_7_professi$
acpi = 1
device_model = 'qemu-dm'
boot = "d"
sdl = 0
serial='pty'
vnc = 1
vnclisten=""
vncpasswd=""
# the two lines that actually do the GPU passthrough:
gfx_passthru = 1
pci=['01:00.0']
Well, so far so....good?
I have managed to install xen (debian 7.5) but I still cannot get the card to generate video. It is passed through like this:
pci=['01:00.0']
and
gfx_passthru=1
Before starting the VM I do modprobe xen-pciback and modprobe pci-stub and I use the script providing "remove_device" to disable the nouveau drivers for it.
The VM starts up but it doesn't generate video on the GTX670 @ K10. I am using the HDMI output, if that helps...
If I disable the gfx_passthru thing, it shows up in device manager as Standard VGA Controller but with a code 10 error (device cannot start) ...
OK here we go: http://img.ctrlv.in/img/14/06/05/5390b3a02b99c.png (http://img.ctrlv.in/img/14/06/05/5390b3a02b99c.png)
I was able to install the drivers, but I cannot select a monitor or anything :/
Error 23 is a long fixed bug. You are either running a version of Xen that hasn't been fixed, or your hypervisor and user space are mismatched. Or the packages you are using are just broken.
You need to make sure xen-pciback loads and claims the devices you are passing through BEFORE the nvidia driver loads. Once pciback claims the devices you are passing through, you can load nvidia/nouveau and any other drivers.
I can't select the K40 in GPU Caps Viewer; I flashed the original BIOS back onto the K40.
I use this card without any modification to the BIOS. With TCC it works without problems.
Hi guys,
I have a Quadro K4000. Is it possible to mod it into its consumer counterparts? And if yes, will I gain any gaming performance?
WHY? Just sell it and get a gaming card if that's what you are looking for. You get some $500-600 on ebay.
Ok thank you again for your answers.
I'll go with evga or gigabyte 680 4gb so we will see how that goes >:D
My goal was to find out whether it will work like a real GRID K2, so I can share the GPU with multiple VMs.
Without the mod, with qemu-git on the latest Debian, it works flawlessly...
With the Grid K2 ID, I can run it with Grid K2 drivers, and even with the desktop ones, but restart, shutdown and the POST screen don't work... when the GPU works, it works the same as a GTX680 - in games, streaming hw encode, etc... with both drivers, with the GRID K2 ID...
So the card pretty much works with the GRID K2 ID... next I wanted to try to share it...
On the latest XenServer 6.2 - with any GRID K2 host driver - it detects it normally, but when I try to start a VM with the shared GPU I get 3 lines in dmesg:
it inits and binds the GPU, takes ownership, and then nothing... XenCenter crashes with an iomem error... vgpu exited unexpectedly...
Passing it through as a single card works sometimes...
On native Windows 2012 R2 Hyper-V, installing any GRID host driver crashes the host (the kernel I guess; the mouse shows now and then, sometimes with a BSOD, and numlock blinks on the keyboard)... all 3 outputs have problems like black/white screen blinking and black/green snow (lol)...
Last epic try:
qemu with a Win 2012 R2 VM with the Grid card passed through... I can make the driver and Hyper-V work, but I don't have the GPU listed in the Hyper-V options (maybe because of the passthrough, so it doesn't detect it as a physical GPU) :(
So would the next step be updating the BIOS? Would that help? Is it possible to recover the BIOS if it fails?
Did anyone manage to get it to work like a real Grid? Is it even possible?
Hold on - you can pass an Nvidia card _unmodified_ in KVM and it works fine? Seriously? When did that happen??
One stupid question :)
Would it be possible to run, for example, Debian with the latest kernel with the Xen hypervisor, then install the nvidia-vgx-grid driver for the XenServer 2.6 kernel, and then somehow slice the GPU into vGPUs like on a real XenServer? Is the Xen hypervisor in Debian the same as the one in XenServer?
Hold on - you can pass an Nvidia card _unmodified_ in KVM and it works fine? Seriously? When did that happen??
YES... I think since kernel 3.6 :) a couple of months ago.
I have been testing it for the last month with 2, 3 and 4 GPUs in the same machine... not a single problem passing nvidia cards...
I meant: is it possible to mod the card so that both GPUs are working?
I think it may be possible, but you should be successful with a 650 Ti Boost.
Hi all again ;)
I successfully modified SPARKLE SXS4501024D5SNM GeForce GTS 450 1GB to Quadro 2000
board is reference nvidia gts 450 http://www.ixbt.com/video3/images/ref/gts450-scan-back.jpg (http://www.ixbt.com/video3/images/ref/gts450-scan-back.jpg)
upd:
gpu passthrough works fine.
Initial values are:
index | meaning | resistance
1 | 3rd byte value D | none
2 | 3rd byte value C | 35k
3 | 4th byte values 8-f | none
4 | 4th byte values 0-7 | 25k
device / resistors table:
device name | R1 | R2 | R3 | R4
GTS 450 | none | 35k | none | 25k
Quadro 2000 | 35k | none | 5k | none
furmark: http://www.ozone3d.net/benchmarks/furmark_192_score.php?id=120616 (http://www.ozone3d.net/benchmarks/furmark_192_score.php?id=120616)
...
EDIT: Made it a Grid k2 and all works perfect now!!! Thank you all so much!!!
Would this be doable? Interested in TCC... From:
NVIDIA_DEV.1380 = "NVIDIA GeForce GTX 750 Ti"
NVIDIA_DEV.1381 = "NVIDIA GeForce GTX 750"
To:
NVIDIA_DEV.13BA = "NVIDIA Quadro K2200"
ngphucok : read this page correctly :palm:
dreadkopp : read this forum more :clap: + for best performance your notebooks should have 1Gbps Lan
maxpoz : NO :-BROKE
I am interested in this too... Both have the GM107 chip, so in theory it would be doable.
Take a picture of the back, to compare it with the K5200... Area close to the chip.
You need to show exactly just the central region that I showed - around chips U504, U505 and the four screws for the cooler.
It seems that your board is missing one of them...
Hi everybody!
Thanks for the great work and a lot of useful information!
Successfully hacked two cards only with Nvflash:
Palit GTS 450 to Quadro 2000 - Works with passthrough in Esxi 5.5 and in Xenserver 6.2 and in Xen 4.4
Inno3D 9800 GT to Quadro FX2800M - works with passthrough in XenServer and Xen, and does NOT work in ESXi (fails with a BSOD)
Third card - Palit GTX 660 2 gb, tried to mod it to a K4000 ID of 11FA.
Resistors were replaced in accordance with THIS (https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg421274/#msg421274) post.
On this Palit PCB the area around the flash chip is the same as on the EVGA GTX 660.
I had no 15K resistor so I put in 16K, and it looks ok. The 4th nibble is A.
But...
1. The 3rd nibble is buggy: mostly it's F, but sometimes (e.g. after a Xen host reboot) it becomes "E" - even with a 40K resistor at R3H on the topside.
2. Passthrough does not work. Tested in Xen while the ID was ok - 11FA. In Device Manager there is a yellow triangle - error code 43.
Any suggestions?
My previously installed 3-screen multiseat system on Linux Mint 17 x64 Qiana does not work with Xen because of the proprietary drivers (multiseat needs the proprietary drivers and Xen needs nouveau) :wtf:
Could you explain that statement a bit? I'm planning on doing VGA passthrough to a Linux VM and I didn't think Xen needs nouveau. Thanks!
Xen needs the nouveau driver if you want to run a GUI (http://askubuntu.com/questions/226279/ubuntu-12-04-as-xen-dom0-with-gui) (gnome, kde etc.) on Dom0.
I would be extremely interested to learn how verybigbadboy was able to convert his 680 into a VGX/GRID K1
Just remove resistors 1 and 3 shown in the picture in my first post and you will get a Grid K2.
Also, now I am trying to modify a GTS450 into a Quadro 2000, but I have a problem getting the 4th symbol. I think it is possible to modify almost all nvidia cards which have counterparts; the GTS450 has a similar way of setting up the device id.
I think the GT200 series can be modified too - I looked at a dead GT240 and I think I know where the right resistors are.
Asus GTX680 DirectCU II - need help with the resistor locations. They have their own PCB and the layout is different from the other brands.
I cannot work out where the resistors for the hard ID strap are.
I have a multimeter and a soldering iron.
Any information will be appreciated.
luyi. If you are in Suzhou, China, we can meet.
P.S.
And the backside details around the GPU area.
Top right side near the heatsink mounting hole.
QuoteTop right side near the heatsink mounting hole.
Thank you very much, dear gnif - blue or red area?
Blue. You need to learn a little about what you are doing here - the red area is clearly wrong, as those are surface mount capacitors.
I started a wiki to gather all the info in this forum in a structured way.
The wiki is still completely empty; I'll start filling it out in the next few weeks, mainly with all the info related to the GTX 660 card (which is the one I have).
PLEASE FEEL FREE TO EDIT THIS WIKI located at http://aidivn.wikia.com (http://aidivn.wikia.com)
To be honest, I still haven't checked the permission settings, but I'll change them step by step so everyone can have edit access.
Looks like the days of softmods might be over... :-//
"NVIDIA Alerts Nouveau: They're Starting To Sign/Validate GPU Firmware Images"
http://www.phoronix.com/scan.php?page=news_item&px=MTc5ODA (http://www.phoronix.com/scan.php?page=news_item&px=MTc5ODA)
I got the values like this:
top: 5K 5K 5K 45K
bottom: 45K
Dear gnif, any suggestions?
I killed my GPU. If you don't have enough knowledge about electronics, please don't end up like me - a GPU killer...
Anyone got a 980? I would be interested to know if the 780Ti resistor-on-EEPROM mod would turn this card into a K80?
I do not believe so, nobody I am aware of has this card yet and is willing to hack on it.
I seem to recall that there are now patches for QEMU that neuter the drivers' ability to detect it is running in a VM anyway. Between the no-snoop patch and passing through the CPU ID from the hardware (as opposed to reporting the CPU as something like QEMU) the driver can't detect it's running in a VM so it boots up the unmodified GeForce card just fine.
No, you no longer need to do any modding, there are patches available that prevent the driver from detecting it is running in a VM which makes everything work without any modifications to the card, software or hardware.
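For anyone wanting to try that route, here is a rough sketch of the relevant flags - "kvm=off" on the CPU line is what masks the KVM signature from the guest; the PCI address, memory size and the rest are placeholders for your own setup:

# Hedged sketch: start a QEMU/KVM guest with the hypervisor signature
# hidden ("kvm=off") and the GPU passed through via VFIO.
import subprocess

subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm",
    "-cpu", "host,kvm=off",                       # hide KVM's CPUID leaf
    "-m", "8192",
    "-device", "vfio-pci,host=01:00.0,x-vga=on",  # the passed-through GPU
    "-vga", "none",
])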
Hi hishamkali,
you made a mistake...
you used the string from the gtx480...
this works for me on a GTX460 (0E22):
nvflash --index=X --straps 0x7FFC3FC3 0x10006428 0x7FF1FFFF 0x00020000
bye
Can the titan z be modded like the 780ti?
price of the titan z is down to 999 today.
I modded my 780ti by soldering the resistor on the eprom and curious if the same can easily be done with titan z.
For those of you wanting to use VSGA type solutions, there are known bugs in the Kepler firmwares, and only the latest firmware for the K2 works properly. All the GeForce cards have firmwares where the bugs aren't fixed, so it probably won't work. You could cross-flash a K2 firmware (after you have edited the memory size initialization block) onto a GeForce, but you will lose all video outputs on the card, and I don't think anyone has tried this.
just changing the memory layout initialization block
I posted on this thread a quite a while back with some details of how to do it, and somebody else successfully used the approach on a different card. Couldn't tell you what page off the top of my head, though.
QuoteDid you modify the device id hierarchy IDs on the primary K2 BIOS to make it work on a 680? nvflash shouldn't have even allowed you to flash the BIOS onto the card if you hadn't adjusted that.
Oops! Need to look into that. But "nvflash -4 -5 -6" happily flashed the card. Of course, I agreed to all the warnings, including selling my soul.
Quotethe K5000 is supported for all of the same VGPU functionality as a K2,
Even for the XenServer/Nvidia GPU hypervisor technology? http://www.nvidia.co.uk/object/grid-virtual-gpus-uk.html (http://www.nvidia.co.uk/object/grid-virtual-gpus-uk.html)
QuoteDid you remember to rebuild the BIOS checksum using KBT after you modified the BIOS? If the checksum is invalid, the card won't work with similar symptoms to what you are describing.
Yes, I updated the checksum.
QuoteYou might want to investigate further using a cheap GTX470 or GTX480 card.
I'm considering that for my VM Host build. I was hoping to put together a XenServer (the Citrix distribution) with the GTX680 -> K2 to specifically use the XenServer-Nvidia vGPU/GPU hypervisor. But if that technology is still locked to the "proper" K1/K2 cards, then I will simply have to defer to using multiple cards in full passthrough.
QuoteOn a separate note, if you want VSGA/VGPU type functionality, there are other ways to achieve something equivalent.
Thanks! I'll take a look at your suggestions.
WRT the K5000 mod, I've removed the two resistors on my GTX680 to make a K2. Can I simply flash the K5000 BIOS (changing the memory from 4GB to 2GB) onto the GTX680? Do the soft-straps in the BIOS also need to be changed? (Please, please don't tell me I have to solder a 0402 resistor - I can't even see those buggers, let alone solder them!)
QuoteI am only using straight PCI passthrough. For any sensible degree of performance you will probably need most of a 680's worth of GPU.
My use case isn't for gaming, rather it's for virtualization. And my last stumbling block is getting AVCHD video to play sensibly on a thin client. It seems that multimedia passthrough of AVCHD video over remoting protocols is broken with win8. The Mrs. will not be happy with my tinkering if she can't watch videos of the kiddies, and AVCHD decompression without hardware offloading in VMs is inadequate. Funny, the performance issue is only with AVCHD video, not other H264 content. But that's for another thread...
QuoteYou should also be aware that, as mentioned recently on this thread, most recent QEMU includes patches that neuter the GPU driver's detection of whether it is running in a VM, so there is no longer any need to modify the card.
I'll take a look at that if all else fails. But I have a hacked GTX680, so let's see how far we can push it.
QuoteTo modify to a K5000 you will have to make sure the resistors are in place with appropriate values for a K5000. https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg207550/#msg207550 (https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg207550/#msg207550) Even if you manage to defeat other problems, this may well foil your plans.
QuoteThis is one of the big advantages of the old GTX480 cards - no soldering required, and no UEFI BIOS signatures.
Absolutely! Unfortunately, they don't really come cheap on eBay for their age. Maybe I'll get lucky. fingers-crossed.
QuoteTo modify to a K5000 you will have to make sure the resistors are in place with appropriate values for a K5000.
According to this, I need a 15K value at resistor 2, but will first try without the 40K at resistor 0. Hopefully the card will be stable. Did you replace both resistors?
GTX690 has 3 devices on the same slot, and they have a specific hierarchy. A GTX680 has no hierarchy. Hierarchy is encoded in the BIOS and (thankfully), nvflash refuses to flash a BIOS with incorrect hierarchy ID.
The question is - where is the hierarchy ID encoded in the BIOS? The two vBIOSes on the GTX690 are quite different, so it isn't obvious where one is set for hierarchy ID of switch port 8 and the other to switch port 16.
I don't suppose anyone here knows?
Hello, people! Help, please :'(
I need your help: I want to convert an MSI GTX 460 v2 Hawk into a Quadro 4000M.
(http://habrastorage.org/files/01b/e59/aa5/01be59aa552d4ebb84c25dd65b515afa.jpg)
(http://habrastorage.org/files/9b1/cad/0ee/9b1cad0ee87c4300a8170d1de2a5175b.png)
my BIOS http://pasha4ur.org.ua/GF104.rom (http://pasha4ur.org.ua/GF104.rom)
Please help me with the commands for flashing. I've been breaking my head reading about the bytes here:
http://www.altechnative.net/2013/11/25/virtualized-gaming-nvidia-cards-part-3-how-to-modify-a-fermi-based-geforce-into-a-quadro-geforce-gts450gtx470gtx480-to-quadro-200050006000/ (http://www.altechnative.net/2013/11/25/virtualized-gaming-nvidia-cards-part-3-how-to-modify-a-fermi-based-geforce-into-a-quadro-geforce-gts450gtx470gtx480-to-quadro-200050006000/)
And I found this:
QuoteHi hishamkali,
you made a mistake...
you used the string from the gtx480...
this works for me on a GTX460 (0E22):
nvflash --index=X --straps 0x7FFC3FC3 0x10006428 0x7FF1FFFF 0x00020000
bye
Will it work better in Adobe apps? Will I be able to adjust clocks and fans? Will games work too?
Thanks for the help
GT650M and GTX660M are based on GK107, which is a Kepler generation chip. You can use the soft-straps in the BIOS to change the device ID, same as on earlier Fermi and Tesla GPUs, but whether the extra features like GameStream and ShadowPlay will become available is questionable. By all means try it and report back - I don't think anyone has attempted this before.
UPDATE: Pulled the trigger for a GTX 470 from the bay. Will mod that to Quadro 5000. My VM host will then have un-gimped GTX470 -> Quadro 5000 and Frankenstein GTX680 -> K5000/K2...and an Intel HD 4000 IGP as an appendix....does anyone run XBMC on a Dom0?
QuoteGT650M and GTX660M are based on GK107, which is a Kepler generation chip. You can use the soft-straps in the BIOS to change the device ID, same as on earlier Fermi and Tesla GPUs, but whether the extra features like GameStream and ShadowPlay will become available is questionable. By all means try it and report back - I don't think anyone has attempted this before.
Hi and thanks for your reply. Could you please post a guide or link that shows how to modify the soft straps in the BIOS to change the device ID, and which programs to use for that? I'd very much appreciate that, and I'll report back with results of course. Thanks in advance.
Dec 10 12:43:14 xen fe: vgpu-6[12593]: vmiop_log: notice: vmiop-env: guest_max_gpfn:0x10efff
Dec 10 12:43:14 xen fe: vgpu-6[12593]: vmiop_log: notice: pluginconfig: /usr/share/nvidia/vgx/grid_k120q.conf,gpu-pci-id=0000:01:00.0
Dec 10 12:43:14 xen fe: vgpu-6[12593]: vmiop_log: notice: Loading Plugin0: libnvidia-vgx
Dec 10 12:43:14 xen fe: vgpu-6[12593]: vmiop_log: notice: gpu-pci-id : 0000:01:00.0
Dec 10 12:43:14 xen fe: vgpu-6[12593]: vmiop_log: notice: vgpu_type : quadro
Dec 10 12:43:14 xen fe: vgpu-6[12593]: vmiop_log: notice: Framebuffer: 0x1A000000
Dec 10 12:43:14 xen fe: vgpu-6[12593]: vmiop_log: notice: Virtual Device Id: 0x0FF7:0x109C
Dec 10 12:43:14 xen fe: vgpu-6[12593]: vmiop_log: notice: ######## vGPU Manager Information: ########
Dec 10 12:43:14 xen fe: vgpu-6[12593]: vmiop_log: notice: Driver Version: 340.57
Dec 10 12:43:14 xen fe: vgpu-6[12593]: vmiop_log: notice: VGX Version: 1.3
Dec 10 12:43:14 xen fe: vgpu-6[12593]: vmiop_log: error: vGPU is supported only on VGX capable boards
Dec 10 12:43:14 xen fe: vgpu-6[12593]: vmiop_log: error: Initialization: unknown error 1
Dec 10 12:43:14 xen fe: vgpu-6[12593]: vmiop_log: error: vmiope_process_configuration: plugin registration error
[root@xen ~]# lsmod | grep nvidia
nvidia 9522927 8
i2c_core 20294 2 nvidia,i2c_i801
[root@xen ~]# nvidia-smi
Wed Dec 10 14:53:19 2014
+------------------------------------------------------+
| NVIDIA-SMI 340.57 Driver Version: 340.57 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GRID K1 On | 0000:01:00.0 N/A | N/A |
| 30% 34C P8 N/A / N/A | 8MiB / 2047MiB | N/A Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Compute processes: GPU Memory |
| GPU PID Process name Usage |
|=============================================================================|
| 0 Not Supported |
+-----------------------------------------------------------------------------+
The going rate for Quadro 6000 on ebay is under $650.
Note: Quadro 6000 (Fermi), NOT Quadro K6000 (Kepler).
So, I modded my Titan Black to a K5200 and now I want to do some 4k gaming. Has anyone tried a 'pro' card with a G-Sync compatible monitor?
TBH, G-Sync isn't the highest priority for me; I just want a good future-proof 4k panel, but G-Sync working would be a bonus.
Anyone?
1) Wait, HOW?! They don't even have the same amount of cores.
2) Guys, is it possible to hardmod a 750 Ti to a K620? I am a 14 year old currently taking EDT in high school, desperately in need of a professional GPU for SolidWorks RealView mode. I could go with any GPU, but I also game on Friday nights, so the 750 Ti is a great choice.
Hi everyone,
What is the current state of affairs with VGA pass-through using ESXI/Xen? Is the best performing solution a Nvidia GTX 780 TI converted to a Nvidia Quadro K6000 via hard-mod?
I did just that and it performed very well. I now have an EVGA titan black 6GB with Kraken G10 modded to a 5200. Works fine in ESXi 5.5.
Thanks for letting me know. Just curious, with the GTX 780 TI being detected as a Quadro K6000, I'm assuming all of the GeForce software features (such as ShadowPlay) are disabled? How was the performance compared to a non-virtualised GTX 780 TI? Also, why did you hard-mod your GTX Titan Black to a Quadro K5200 (and not a Quadro K6000)?
I am not sure there is a difference in performance between the 5200 and the 6000, since it's a GeForce card all the same; I just wanted the Black for the 6GB of memory and UHD res.
I've bought an ASUS GT640, but unfortunately I can't find the correct resistors to mod the card into a Grid K1 for my ESXi lab. Can you please help me with the locations of R1 to R4? Thanks in advance. Photo attached.
So this may sound like a crazy question..I need to reverse hack.
Is there any way I can mod (via bios flash or otherwise) a pair of K6000 into Titan Black (or GTX 780), that I can then SLI for gaming purposes? Apparently the Quadro cards do not have SLI enabled, except in special systems. No, I can't trade, or sell these cards, which would be the obvious solution, but I would like to just use them for gaming.
Thanks!
Have you tried using EVGA Precision-X or MSI Afterburner to adjust the clock speeds at runtime? Does that work on Quadros?
QuoteI'm not sure Quadros can be SLI-ed regardless of the machine they are in - they have no SLI connectors.
Some Quadros have SLI connectors, some don't. Rule of thumb: usually low-end / entry- to mid-level Quadros don't have SLI connectors, high(er) end Quadros do...
QuoteThe other issue is that Quadro cards apparently can't be SLI'd unless it's a Dell, HP or Lenovo workstation, while the comparable non-Quadro models can be.
A list of workstations enabled for Quadro SLI can be found at the Nvidia website (http://www.nvidia.com/object/quadro_sli_compatible_systems.html) (not sure how often that list is updated, though).
QuoteAll that means is that those integrators paid Nvidia a big fat fee to certify their workstations. If you can get a full SMBIOS dump from one of those machines, you can probably fire up a KVM VM using QEMU with the relevant SMBIOS payload and it would just work.
With regard to SLI you are wrong. Nvidia might be an "evil" company (they did plenty of things to deserve this label, but you could say the same about probably any other company of sufficient size), but Quadro SLI certification has a serious background. It really is about protecting the brand reputation and thus protecting business opportunities and margins. It's not about earning money through certification fees...
Nvidia is well known for this kind of blocking in the drivers - they did similar things to block GeForce cards working in VMs, that was kind of why this thread started.
nvflash --index=0 --straps 0x7FFFE7FF 0x10000400 0x7FFFFFFF 0x00000000
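For the curious: judging by how these values are used throughout the thread, the four numbers appear to be AND-mask/OR-value pairs applied to the two 32-bit strap words - roughly this (a sketch of my reading, not nvflash's actual source):

# Hedged sketch of how "nvflash --straps <and0> <or0> <and1> <or1>" appears
# to combine with the existing strap words: clear bits via the AND mask,
# then set bits via the OR value, per 32-bit word.
def apply_straps(strap0, strap1, and0, or0, and1, or1):
    return (strap0 & and0) | or0, (strap1 & and1) | or1

new0, new1 = apply_straps(0xFFFFFFFF, 0xFFFFFFFF,  # whatever was there before
                          0x7FFFE7FF, 0x10000400,  # values from the command above
                          0x7FFFFFFF, 0x00000000)
print(hex(new0), hex(new1))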
If anyone succeeds in modding a Titan X to an M6000 please post as to how exactly you did it. I'm looking to build a CAD system around this hack.
It's worth a shot, if all you need to do is alter the 3rd nibble. It's not like it's a difficult to undo change.
Are you positive you're not thinking of the Titan Black to K5200 conversion by mistake? It would seem to fit your description better:
0x100C -> 0x103C
Titan X is Maxwell, so to convert to an M6000 the ID change would be 3rd and 4th nibbles:
0x17c2 -> 0x17f0
Btw, $10 says it was deliberately designed to not be a single nibble difference. Plus, each nibble goes in a different direction.
But it would be great to have a 12GB converted card though... (stares off into space)
Hi guys. I recently converted my 690 into a K5000; now I need to undo it, but I don't know the values of the original resistors. Does anyone know?
The GTX 690 has a device id of 0x1188, so to become a Quadro K5000 this has to be changed to 0x11BA
When pulling high:
5K = 8
10K = 9
15K = A
20K = B
25K = C
30K = D
35K = E
40K = F
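Putting that table into code form (a sketch: it only covers the pull-high values listed above, and assumes the same resistance-to-nibble mapping applies to whichever nibble a given strap drives - the physical resistor positions vary per board, see the tables elsewhere in this thread):

# The pull-high resistance-to-nibble mapping quoted above, as a lookup.
PULL_HIGH = {5_000: 0x8, 10_000: 0x9, 15_000: 0xA, 20_000: 0xB,
             25_000: 0xC, 30_000: 0xD, 35_000: 0xE, 40_000: 0xF}

def strap_nibble(dev_id, pos_from_left, resistor_ohms):
    """Replace one nibble of a 16-bit device ID with the strapped value."""
    shift = (4 - pos_from_left) * 4
    return (dev_id & ~(0xF << shift) & 0xFFFF) | (PULL_HIGH[resistor_ohms] << shift)

# The nibble values needed for GTX 690 (0x1188) -> K5000 (0x11BA) are B and A,
# which the table above reads back from 20K and 15K pull-ups respectively.
print(hex(strap_nibble(strap_nibble(0x1188, 3, 20_000), 4, 15_000)))  # 0x11ba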
Anybody looking to sell a modified card that can do 4K video?
Hello,
I tried to turn my GTX 770 (MSI N770-TF-2G/OC) into a Quadro K5000; I changed the resistors as described on page 6 (40k, None, 15k, None).
My GTX is now a K2, so I think I have to swap the 15k for a lower value (which one?). The ID is 10DE-11BF (K2) and not 10DE-11BA (K5000).
Anyway, I just need Grid compatibility, so I could stay like this...
Thanks,
The ID for the Quadro M6000 is 17F0
What would be the necessary configuration in the board for the exchange of ID?
I think both Quadro and GeForce suffer from FP64 performance capping. Only Tesla has full FP64 capability.
...
TTX will NOT work, since full-size Maxwell does not have FP64 at all (possibly limited by power dissipation); that's why there is no Tesla M80.
I can confirm the mod done by blanka.
https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/msg210798/#msg210798 (https://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/msg210798/#msg210798)
But I pimped it a little bit.
670GTX to K5000 works!
R4 on the front side.
R1, R2, R3 on the bottom side.
The K5000 works absolutely stably for me, but has no performance increase in SPECviewperf. I tested with a few different Quadro drivers.
Summary
GPU Name | R1 (4th byte 0-7) | R2 (4th byte 8-f) | R3 (3rd, high) | R4 (3rd, low)
GTX 660Ti | 20K | none | none | 25k
GTX 670 | none | 10K | none | 25k
Tesla K10 | none | 40K | none | 25k
Quadro K5000 | none | 15k | 40K | none
Grid K2 | none | 40K | 40K | none
I flashed it (EVGA 670GTX 2GB 915MHz) with the K5000 bios from techpowerup.
"nvflash.exe -4 -5 -6 K5000.rom" had to be used because of different subsystem and board id.
It started with minor pixel errors but booted into win7.
After driver installation and reboot win7 didn't start anymore.
Flashing it back worked without problems.
QuoteGT650M and GTX660M are based on GK107 which is a Kepler generation chip. You can use the soft-straps in the BIOS to change the device ID, same as on earlier Fermi and Tesla GPUs...
The strap area is to be found @ 0x58-0x67. But when looking at this offset, I only find the generic-looking
FF FF FF 7F 00 00 00 00 FF FF FF 7F 00 00 00 80
which doesn't look like any bootstrap I've seen in Fermi BIOSes.
Not sure if anyone is still following this topic, but I would like to request some help identifying R1-R4 on this card. The card is an Asus GT640-DCSL-2GD3. I have posted pictures below showing U10 and the surrounding resistors. Please let me know if you need any more information!
Thanks!
Has anybody checked whether modifying the cards removes the limit on the number of simultaneous NVENC encoding streams? GeForce cards are limited to 2 simultaneous H.264 encoding streams, but Quadro/Grid cards are supposed to be able to handle more.
(Note: NVENC only exists on Kepler and Maxwell GPUs, there is no support for it on Fermi and earlier GPUs.)
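One way to actually test it, sketched below: it assumes an ffmpeg build with the h264_nvenc encoder available, and simply opens encode sessions until the driver refuses one (all settings and filenames are placeholders):

# Hedged sketch: open NVENC encode sessions until one is refused, to find
# the driver's concurrent-session limit. Assumes ffmpeg with h264_nvenc.
import subprocess, time

procs = []
for n in range(1, 9):
    p = subprocess.Popen(
        ["ffmpeg", "-f", "lavfi", "-i", "testsrc=size=1280x720:rate=30",
         "-c:v", "h264_nvenc", "-t", "60", "-y", f"nvenc_test{n}.mp4"],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    procs.append(p)
    time.sleep(3)  # give the new session time to fail
    if p.poll() is not None and p.returncode != 0:
        print(f"session {n} was refused - the limit appears to be {n - 1}")
        break
else:
    print("all sessions opened")
for p in procs:
    if p.poll() is None:
        p.terminate()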
Hi all,
I've just bought an Asus GT640-2GD3 thinking that the PCB and resistor placement would be the same as on the 1GD3, but apparently it's not. I've tried to locate R1-R4 (e.g. the R543 resistor) but cannot find them on the board. Could you help me out here to identify the resistors to change this to a Grid K1? Thanks a million in advance. I've attached the images as well :) Do let me know if they're not sufficient :)
wow, coming back to this thread after 3 years makes me feel so nostalgic! O0
I'm going to do a hardmod on my GTX 750Ti (Quadro K2200 counterpart). Hopefully all the electrical engineering lessons and stuff gained over the last 3 years will help me understand "this world" a bit more! :-/O But before touching it, there are a couple of "million" questions I wish to be enlightened on :box: Thanks for your help! ;D
- Can anyone give me a full instruction (or short hints) on how to track down these strap registers?
- Did anyone here successfully mod the Maxwell generation to their quadro counterpart?
- Can anyone provide me the high res picture of the quadro K2200 pcb?
I would also like to post a BIG THANKS to everyone that has donated so far, we are at $300 with $700 remaining at the time of this post.
I plan to video the modification of the GTX690 with before and after benchmarks which will be posted on YouTube. I will also record the process of finding the hardware straps for the other GPU through deductive reasoning and simple testing that is relatively safe to the hardware. So if you want to see this, throw a little into the pool so I can get a card I can safely do this on.
Once we have all the information figured out, I will trawl through this thread and compile everything into a single post, making it easy to reference.
Quote: "I'm going to do a hard-mod on my GTX 750Ti (Quadro K2200 counterpart). [...] Can anyone provide me with a high-res picture of the Quadro K2200 PCB?"
I'll send you the picture of the Quadro K2200 PCB in 2 days tops. I have an Asus GTX750Ti video card and I want to mod it into a Quadro K2200 so I can use GPU passthrough on my ESXi 5.5 server; without the mod, my Windows 7 VM won't stop showing error 43.
My PC specs: ESXi 5.5 u2 on a Core i5 4690, 8GB RAM, ASRock Z97 Extreme6 with dual NIC support (onboard Intel and Realtek, both gigabit), with the Asus GTX750Ti https://www.asus.com/Graphics-Cards/GTX750TIOC2GD5/
Later edit: The pictures suck; I told the guy who sold this video card to take a better picture of the upper-right corner, where the screw seems to have something attached; that's where the resistors seem to be placed on the PCB. I'll post that picture when I get it, and I'll post a picture of my GTX750Ti soon as well.
Later later edit: I uploaded more pictures... the pictures are still not good. I will get to a client who has this card in his workstation and take some good-quality pictures.
Hi,
I did some research on my own into the GTX670/660Ti vendor layouts because I wanted a cheap Grid K2 from eBay.
I want to share with you the things I found out over the last two days.
Please be aware that this is only theoretical; I never tried it out myself, and it is at your own risk if you do!
If you try it, it would be kind if you replied whether it works.
Now the explanation:
1. By comparing the various GTX670/660Ti layouts I found that the Gigabyte 660Ti layout seems to be exactly the same as the original NVIDIA GTX670 reference layout.
2. I found a high-resolution picture of the Gigabyte 660Ti back and front sides and separated out the important views of resistors 1 to 4 (picture 1).
3. By chance I saw that, fortunately, the resistors have names printed beneath them (e.g. R137).
4. I separated out the resistor names, as you can see on the right side of picture 1.
5. I also compared the layouts of various vendors and found other vendors with the same layout as the Gigabyte 660Ti. I wrote the models on top of each different layout. Please reply if I made a mistake.
6. While searching through the various vendor layouts I noticed that the Asus 660Ti cards and also the MSI 660Ti cards have named resistors.
7. The Asus 660Ti has the same layout as the Asus 670. By writing up the names of the resistors again, I found what are probably the right resistors to modify (have a look at pictures 2 and 3).
Here some overview:
NVIDIA GTX670 Layout:
Gigabyte GTX660TI
Zotac GTX660TI AMP!
PointOfView GTX670
Evga GTX670
NVIDIA GTX670
ASUS GTX660TI DirectCU II Layout:
ASUS GTX670 DirectCU II
ASUS GTX660TI DirectCU II
MSI GTX660TI TI Power OC Layout:
MSI GTX660TI TI Power OC
NVIDIA GTX680 Layout:
Gigabyte GTX670
Zotac GTX670
NVIDIA GTX680
I also uploaded a gif to compare the various cards more easy:
http://i.giphy.com/26FPxZopdlv3Vnqog.gif
Hmmm, thread seems dead. Too bad.
Guess I'll have to go ebay hunting for a few sacrifices and try this on my own.
Quote: "Not sure if anyone is still following this topic, but I would like to request some help identifying R1-R4 on this card. The card is an Asus GT640-DCSL-2GD3."
Variance, the circuit for each strap is simple: it's either a pull-up or a pull-down for each nibble. Let me make this clear, as many people have missed it: there are two possible resistor positions for each nibble, one pulls up, one pulls down; from the factory only one will be populated, the other left unpopulated.
1) Use ohms on your meter to figure out which pads are ground; take a photo and note them all (I just put a colored dot on the photo).
2) Use ohms on your meter between the VCC pin of the EEPROM and the pads to figure out which are VCC, and note them all (again, just a different color dot).
3) Now you need to identify which positions are common pull-up/pull-down pairs, so again on ohms, measure between the pads that are still unknown, looking for two that are connected.

First of all, thank you for your patience and willingness to explain in detail. (Incidentally, is there a sticky somewhere I'm missing with a guide to terminology like "nibble" or "strap"?) Could you explain the methodology a bit more, or point to resources on this topic? R1, R2, R3, R4: it's frequently indicated that only two of these four are populated. So far I've just been placing my multimeter in ohms mode with the probes at either end of the resistor. I have some assumptions about the steps, but I'm not sure they're correct:
1. Identifying ground/VCC: would I be correct in connecting my black test probe to the ground lead on the PCI-E slot and the red probe to either end of the resistor until I get the known value, then assuming that end of the resistor is the VCC line and the other end is ground?
2. The VCC pin on the EEPROM... would this be pin 3? Not entirely sure what the protocol for this is either.
And on another topic: what's the Newegg or Amazon equivalent people here use to order components like SMD resistors?
I stopped following this thread quite some time ago, my instructions were provided to help others identify how to locate and modify the other cards. For those that are new members/single posters that are just asking 'how'... my advice is:
Read this thread from the beginning; we discovered some dead giveaways (the SPI flash pinout) for identifying which resistor contributes to which bits of the ID on this series of cards.
Simply removing a resistor is not a great idea: the input will "float" and the card can and will randomly change its ID each boot. You must tie the input high or low with the correct resistor value for reliable operation. Those who have it working without doing so are just lucky; for most it won't work, and for the GTX690 I can say for certain it does cause a problem.
If your card has not been documented here already and you don't understand the terminology, or do not know what a resistor is, what it means to "tie high or low" or "float", or how damaging a multimeter on diode test can be to the GPU, you should avoid the attempt, as the chances of destroying your card are very high.
On a separate note, I just learned that the GTX780Ti has device ID 0x100A. The K6000 is 0x103A. We only need to change the 3rd nibble (via oguz286's awesome mod).
His mod uses a 33K resistor to jack the 3rd nibble up from 0 to 2. On my Titan, using an 18K resistor instead boosts the nibble to 3. So to turn a 780Ti into a K6000, simply apply an 18K resistor between VCC and SCLK and voila, job done. For extra points, solder a couple of wires to those pins, solder an 18K resistor to one of them, and a switch to connect them. Break the switch out somewhere accessible (extra extra points for making the switch easily and neatly accessible from the back of the card without obstructing the airflow too badly). Now you can switch between a 780Ti and a K6000 at the flip of a single switch.
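As a worked example of the arithmetic behind that mod (using only the IDs quoted above; the strap touches the second-lowest hex digit of the device ID):

# GTX 780Ti reports 0x100A; pulling the 3rd nibble from 0 up to 3
# should yield the K6000's 0x103A.
printf '0x%04X\n' $(( (0x100A & ~(0xF << 4)) | (0x3 << 4) ))   # prints 0x103A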
I have not quite understood where I have to place the 18k resistor. Could someone explain it to me on the basis of the following graphic, please?
Quote: "I stopped following this thread quite some time ago; my instructions were provided to help others identify how to locate and modify the other cards. [...] If your card has not been documented here already and you don't understand the terminology, you should avoid the attempt, as the chances of destroying your card are very high."
Hello
I have a GeForce GTX 980 4GD5 OCV1. Is it possible to modify it for a larger number of NVENC streams? Currently the card only transcodes 2 channels for me. Unfortunately I never found instructions on which resistors to change.
Regards
Hello! Tell me which resistors to replace on this video card, an Asus GT640-2GD3. Photos of the video card are below.
Hi. I just modded a Zotac GT640 into a Grid K1...
It identifies as a Grid K1 under nvflash, but under ESXi it shows up as a Quadro 410.
Has anyone experienced this issue?
Good day to everybody! :)
As far as I can see, most people who successfully modified their GeForce cards into Quadro/Tesla used them afterwards as a video card, but! Has anybody tried to use the modded cards for computation (Ansys, for example, or another FE program)? The Ansys manual says: "The following cards are supported: NVIDIA Tesla Series (any model), NVIDIA Quadro K5000, K5200, K6000, M6000 .... For NVIDIA GPU cards, the driver version must be 346.59 or newer."
So the question: is it possible to modify a GeForce into a Tesla/Quadro with working TCC support?
I would very much appreciate it if somebody could answer me.
P.S. Sorry for my English, and yes, twice over, I am a complete newbie to this kind of modding, and I did try to read the whole thread...
P.P.S. By the way, there is a thread with the same content even on devtalk.nvidia.com! I am really surprised...
I think the key is the driver.
GeForce, Quadro and Tesla have exactly the same core; the differences are just things like frequency, RAM size and ECC, a better power supply, stability...
If the driver is OK, the rest is OK too :-+ :-+
I know this thread is pretty old, but it was interesting enough for me to play around with the idea, and it just so happens that a cousin of mine had just upgraded his GPU and agreed to give me his old 680. I followed the instructions (thank you!!) and was able to mod it into a Grid K2. My intention was to have it drive a virtual machine, so I put XenServer 7 on that machine, ran lspci | grep VGA, and sure enough it reported it as a Grid K2. So far so good, right?

But then I built the VM and found I couldn't install the driver no matter what. I downloaded an eval Windows 10 64-bit Enterprise image, then I tried Pro and even ported over my daily driver's activation code... nothing. I figured it was time to get creative, so I started to try different things, but so far nothing has worked. I've edited the *.inf files to list the 11BF hardware IDs; I tried all the sections and the descriptor at the bottom. I also tried doing it backwards and editing the registry to "re-normalize" the PCI hardware ID back to a 680, to see if the drivers would install then... I got past the NVIDIA hardware check, but it still failed. If I tried to do it manually, I'd just get a Windows dialog saying the driver had a "problem" with Windows and couldn't be installed. If I tried the K2 driver, it installed, but Windows reports a problem with the hardware and the device couldn't be initialized. And if I tried an older driver, it says it can't work with this version of Windows.

So I'm kinda racking my brain at this point, but before giving up I thought it would be a good idea to ask you guys, since you all discovered this in the first place. I know it was a while ago... but what am I doing wrong? Is it because I'm trying to run this in a pass-through VM? Or did you guys only get this to work with an older OS like Windows 7? Is it supposed to be a 32-bit OS and won't work on x64? Or is there a custom or specific driver that you used to make it work (and if so, do you have a download link)?
Thanks so much!!
OMG! I just randomly checked on this thread again to see the news and you finally came back; this makes me want to cry out of joy and excitement! :scared:
Thanks so much for the instructions and the explanations, gnif! I can grasp 99% of the process now. However, as you mentioned, only 1 of the 2 positions for each strap nibble is populated, but the process involves adding a 10k test resistor to find the target unpopulated position. Couldn't that lead to a situation where both positions are populated at once? Is this a contradiction, or am I missing something here?
On the other hand, I wonder whether randomly "closing" the circuit with a 10k test resistor does any harm to the card? :-//
About the suggestion of creating a transparent image to track down the resistors (or traces?!), I know it would help, but the thought of these PCBs all being multi-layer makes me feel (somehow) insecure! :-DD
Hi to all,
I am new here. I read this thread from beginning to end and I have 2 questions about the GTX690. The member "gnif" is a genius (thanks for your efforts) :) but after modding his card he didn't share any new pictures of the dual K5000 GPUs.
My questions:
1. Is dual K5000 possible?
2. Do I have to change the firmware or apply any other hacks?
P.S.: I attached 2 pictures; are these the correct resistors for the GTX690?
Sorry for my bad English.
Hello all again ;) I have good news.
I successfully modified a Zotac PCI-E NV ZT-60206-10L GT640 Synergy 2G 128bit DDR3 900/1600 DVI*2+mHDMI RTL to an NVIDIA GRID K1. It is working fine; passthrough works too. BUT device ID modification was possible only after a BIOS modification. The BIOS modification is needed only for specific vendors.
upd:
myweb found the resistor locations for the Asus GT640-1GD3-L; no BIOS modification is needed. Pic attached to the post.
(...)
Hi, I have a question. I have the following card: https://www.asus.com/Graphics-Cards/GT6402GD3/overview/ Its device ID is the same as the Zotac's, but the point is that none of the presented versions of this GT640 has a back that looks like this one, so I do not know which/where the responsible resistors are... Are you able to help somehow if I send a picture of the back view?
Thanks and regards
Is there any news regarding the GTX690 yet?
Thanks a lot!
My measurements are a bit strange:
(https://photos-4.dropbox.com/t/2/AAAZ8OuVoO24dO4okwensUxv28wcBIWC_u_xdBW7H3iSpA/12/239771461/png/32x32/1/_/1/2/asus_gtx670_2.png/EJXfh9QBGIdHIAIoAg/jiKNKWjhShcTpDIq81TVXm3hWG1wEq2w_QkWftkWsx4?size=1280x960&size_mode=3)
R = 20M \$\Omega\$? That one is probably damaged.
And R = 10K \$\Omega\$? I'm not sure; should it be 5K \$\Omega\$?
Please help me; otherwise I'll have to give this card to a service shop to solder the correct resistors.
Edit:
OK, I already know how it should be.
I replaced the 20M \$\Omega\$ one with a good 5k resistor, and it still shows as a Tesla K1. |O
So either the problem is not in the resistor configuration, or the card is dead :'(
Did you measure the resistors in or out of circuit?
In circuit; but now the card is dead.
You have very likely killed the GPU, it is a 1.2v device and the voltage output by your DMM for measurement is high enough to damage the GPU.
Is this card only good for the trash now?
Without testing I cannot say for certain, but it is quite likely.
I have 8x 780 6GB modified to Tesla K40st, air cooled, and 7x water-cooled 780 6GB Tesla K40st... anyone want to buy some?
Send me a mail. Air cooled 200€, water-cooled 270€.
All are working perfectly, all with 6GB.
Send me a PM.
I doubt it; it's missing more than just the RAM. An entire power phase is missing, likely the one for the extra RAM.
I had a Chinese fake GTX 960 card sent to me by a friend who bought it and then got a refund.
It's really a GTS450: GF106, 192 cores, clocked at 783 MHz, with a funky BIOS EEPROM that nvflash doesn't read or write.
GPU-Z can read the card, but the BIOS save fails.
Hi-
Well, I ended up getting two EVGA 04G-P4-3687-KR GeForce 4GB GTX 680s: core clock 1084 MHz and boost clock 1150 MHz.
The boards are the same as the GV-N680OC-2GD except mine are 4GB. I modded both of them to Quadro K5000 (thanks, old PlayStation 3, for the resistors :-DD).
I ran the latest NVIDIA 314.22 drivers and Quadro 311.35. The 314.22 drivers seem a little bit better, so I'm using those.
I did some benchmarking to compare the cards before and after the mods.
                              GTX 680 #1      GTX 680 #2      K5000 #1        K5000 #2
3DMARK 11                     9022            8987            9077            9016
Passmark 8 (3D Graphics Mark) 6044            6091            6025            5996
PCMark Vantage (Gaming)       19336           18956           18880           16177
PhysX                         10158-166 fps   10003-165 fps   10176-167 fps   10123-166 fps
SPECviewperf 11:
  Catia-03                    6.05            5.98            5.9             10.20
  Ensight-04                  32.20           32.23           32.20           32.27
  Lightwave-01                13.23           12.84           13.14           13.22
  Maya-03                     12.77           12.73           12.86           12.85
  Proe-05                     0.96            1.00            1.00            0.99
  Sw-02                       11.09           11.37           11.36           12.78
  Tcvis-02                    1.01            1.17            1.02            1.02
  Snx-01                      3.42            3.37            3.40            3.42
As you can see, all the scores between the stock and modded cards are about the same. The problem is with the SPECviewperf 11 scores. This is the benchmark for graphics and CAD programs, which is what the Quadro cards were made for. The scores for the modded K5000 should be MUCH higher. Take a look here:
http://www.xbitlabs.com/articles/graphics/display/nvidia-quadro-k5000_4.html
It looks to me that just because the computer thinks it's a Quadro K5000 does not mean it will act like a K5000.
I even tried the benchmark with the Quadro drivers and got the same results. Hopefully it's just a driver issue and not a hardware issue.
Quote: "You've got balls for taking a soldering iron to a GTX-690!"
Edit:
For those that are just spewing trash in the HaD comments without doing a little research: the parts are identical. Changing the device ID just makes the binary blob advertise the additional features to the system and enables them. It does NOT affect the clock speeds, and will not make the card faster for general day-to-day work unless you are using specialised software that takes advantage of these "professional" features. Changing the ID does not affect the clock speeds, as they are configured by the BIOS, which we are not touching.
And stock, the GTX690 is clocked FASTER than the K5000 and the Tesla K10, so you are getting a faster card in comparison, not making the GTX690 faster.
I repeat: this does NOT make your GTX 6XX card faster, nor does it make it slower.
Tesla K10 to Grid K2: successful.
I had to mod two resistors, one for GPU0 and one for GPU1. What I did was remove the 25k and install a 40k to make it a perfect match.
Does the GTX 690 also require modifying two resistors?
I also updated all the vBIOSes to K2 so it looks like a K2 now (updated the PLX as well).
I will test the vGPU functions in a few days.
Many thanks for the efforts of gnif and verybigbadboy.
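For reference, a sketch of how the two GPUs on a dual-GPU board like this can be flashed individually with nvflash (hedged: the index numbers come from your own --list output, and K2.rom is a placeholder name):

nvflash --list                      # enumerate the adapters behind the PLX bridge
nvflash --index=0 -4 -5 -6 K2.rom   # flash GPU0, overriding ID/board checks
nvflash --index=1 -4 -5 -6 K2.rom   # flash GPU1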
However, since I have an EVGA GTX670 with the same PCB layout as the GTX660Ti, I needed to find the modification by myself, and here is the result.
For the 4th digit, as everyone already knows, it is right at the positions of resistors 1 and 2. Depending on which card you want, you remove resistor 1 and fit resistor 2 as Tesla (40K), Grid K2 (40K) or Quadro (15K).
For the 3rd digit, it is the tricky part. The low half sits on the top side of the PCB as resistor 4. You don't need to do anything for a Tesla K10. However, if you want to change the card to a Quadro K5000 or Grid K2, you need to remove resistor 4 and install resistor 3 MANUALLY, since there is no longer a footprint for resistor 3 on the PCB of the GTX670 and GTX660Ti, as you can see in my attached bottom-side photo of the "rework".
You need to connect EEPROM pin 6 through a 20K Ohm resistor pulled up to VCC.
My rework is quite ugly, but it works fine!
Please be careful, and modify your card at your own risk!!
Summary:

GPU Name       Resistor 1 (4th digit, 0-7)   Resistor 2 (4th digit, 8-f)   Resistor 3 (3rd digit, high)   Resistor 4 (3rd digit, low)
GTX 660Ti      20K                           None                          None                           25K
GTX 670        None                          10K                           None                           25K
Tesla K10      None                          40K                           None                           25K
Quadro K5000   None                          15K                           20K                            None
Grid K2        None                          40K                           20K                            None
Does anyone have a BIOS dump of the K2?
The one I used from HP is missing the InfoROM, and that looks like it's causing a compatibility issue on certain systems.
Quote: "Does anyone have a BIOS dump of the K2?"
Can you give me a Grid K2 ROM? Thanks very much; I've been looking for one for a long time. Sent via PM.
One 25k resistor on each GPU, right? I found the analogous resistor locations and ordered 20 or so of different kinds, all SMD, all 40k. Anyway, the resistors are in different locations from GPU0 to GPU1: same chip, still two resistors out of four spots, but in different locations.
I found one on each GPU but not two.
Quote: "Does anyone have a BIOS dump of the K2?"
PMed you a link to it
I succeeded with the K10 => K2 mod after changing 2 resistors.
On a Dell R820 it works OK, but only with vSGA; it cannot work with vGPU.
I thought it might be due to the ROM, so I flashed the K2 ROM (with overridden board ID), but then the system could not boot up, blocked at the PCIe scan.
I have flashed back to the K10 ROM on another PC for now.
Does anyone know how to use vGPU, just like on the real K2?
Just noticed I didn't reply to your PM; sent you links to both dumps.
I really don't get it.
I flashed the ROM from namgorf and confirmed that it has an InfoROM.
Still getting this screen on the Dell R720.
I updated the R720 to the latest firmware as well...
It works fine on an HP Z420 but not on the Dell R720.
What am I missing |O
(Attachment Link)
Did you ever get past this? You could say I am following in your footsteps from the shadows. I just converted a Tesla K10 into an NVIDIA GRID K2 yesterday and I get the same result on my PowerEdge R720.
I soldered the strap resistors, reassembled the card with new thermal paste, and flashed the "2014" BIOS to each GPU. I did notice the "PLX" entry in nvflash but didn't do anything with it. Was I supposed to? The SHA256 hash of the BIOS file I flashed was BB04DF8552BF60B827E1C963B1D8527386D9448D554BC01799CE7F8605763951.
My R720 has dual E5-2650 V2s, 128GB of PC3L-12800R, and dual 750W power supplies. I saw the other post here about power delivery, but this card has a max power consumption at full load of maybe 200W, and my system power readout is only sitting at 280W with this GPU installed. I am using a quick and dirty homemade power cable; please excuse the WAGO lever nuts. I triple-checked the pinout and continuity with a multimeter, for what it's worth.
Right now I am using the riser connected to CPU2, because it was easier to install the card over there. I ran out of time for troubleshooting last night. I have not yet tried the other riser, removing all other PCIe cards, or enabling "Above 4G decoding" in the BIOS. That is on the agenda for tonight.
I am running a BIOS version from 2016 to avoid the Spectre/Meltdown microcode mitigations.
Hello,
Another solution to try, if it works: take a desktop PC with Windows 10, connect the real VGA power cables (one 8-pin and one 6-pin), and of course keep a video card in as well (so you need 2x 16X PCIe slots).
If Windows detects the Grid K2 (first good sign),
launch GPU-Z, choose card 1 or 2, and make a backup of the BIOS.
Compare that BIOS with the K2 BIOS (no differences is the second good sign).
Then use KeplerBiosTweaker 1.27:
you can optimise the power consumption of the card and put the modified BIOS onto your 2 cards.
I made an image of what I needed to change on the K10 to turn it into a Grid K2; could you confirm it's OK, please?
Hi Modders,
first off, thank you very much for sharing this awesome knowledge :clap:
... got myself a GT640 which will undergo some resistor soldering soon.
I am writing this post to ask if anybody knows whether the Tesla K40 is moddable in a similar way.
I think it's the same PCB as a Quadro 6000; is that correct?
Could it be modded into a K2, or any other card for that matter?
I would like to use it in a VMware host with the "vGPU" feature.
Looking forward to getting some insights from the experts.
Thanks!
Quote: "Did you ever get past this? [...] I just converted a Tesla K10 into an NVIDIA GRID K2 yesterday and I get the same result on my PowerEdge R720."
I am still having the same issue.
I tried multiple K2 BIOSes, but the issue is still there.
I also tried the Dell K2 BIOS updater, but it didn't fix the problem (https://www.dell.com/support/home/uk/en/ukbsdt1/drivers/driversdetails?driverid=598p8).
I found an InfoROM in a Cisco ISO, but it didn't really help.
I also suspected my riser cable (a 9H6FV), but the wiring is correct and it provides 12V/GND on the proper pins (tested with a multimeter).
I googled a lot, and no one has complained about a compatibility issue other than you and me. I also saw a few people using the K2 in an R720 without any issue.
At this point, there are only three possibilities left:
1. It requires 1100W power supplies.
2. A proper InfoROM.
3. The 40k ohm value is wrong.
Since the K2 works fine on the other system, I think it's going to be either option 1 or option 2.
I just ordered 2x 1100W power supplies for the R720; they will arrive at the end of next week.
That will eliminate one of the possibilities.
I will update the thread once I've tested it.
Hello.
I checked this thread again after many years, but nothing changes (still replacing resistors on SPI lines :D).
A few years ago I challenged myself to enable GTX/RTX/Quadro cards for vGPU paravirtualization and I was successful.
Check this: https://gridforums.nvidia.com/default/topic/8934/
The presented solution proves that there are no differences between the same chip across the GTX/Quadro/Tesla lines; it is all about "software".
The solution is useful for virtualization only! It enables the vGPU feature on compatible GTX/RTX/Quadro cards matching their Tesla/GRID counterparts such as the M10, M60, P4 and T4. It does not modify the host-installed card (no HW mod, no vBIOS mod, no SW driver modification; probably EULA compliant). It also relaxes all the NVIDIA "crippled/throttled" vGPU features for guests (like GPU memory limits, the number of emulated monitors (max 4), monitor resolution, CUDA...). It does not remove NVIDIA licensing (to stay compliant).
lol, I just saw mcerveny already mentioned KVM's capability to pass through.
@mcerveny: I am intrigued by your "magic script". Out of interest, do you think your software mod could be transferred to ESXi (for a homelab)?
Plus, I see you used Fedora with a Xen kernel, so it should be applicable to any distribution with a Xen kernel?
How long did it take to successfully use your SW mod, including the research and writing the script?
It is really f*ed up how Nvidia rips off customers by selling basically the same hardware.
Appreciate some info on this. Thank you.
Yes, I tried it with my GTX 1080 Ti. My RDP server with 1 RDP session was a PC with Win10 Pro, and the client was a laptop with Win10 Pro... I turned on the acceleration in the registry, but it doesn't work...

The magic script is the last part of the VDI puzzle.
(BTW, RDP does not use accelerated GPUs until you unlock it in the registry: https://lmgtfy.com/?q=rdp+use+accelerated+GPU+registry)
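For anyone chasing that registry unlock, the setting usually cited is the "enumerate the hardware GPU before the software renderer" policy for Remote Desktop. A sketch, to be verified on your own system (run in an elevated prompt, then reboot):

reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v bEnumerateHWBeforeSW /t REG_DWORD /d 1 /f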
And I thought that a Grid card could help me... and the easiest way to try was your solution :)
Yes, sorry, my fault for choosing a solution without reading the description...
In the end I want to get:
- an RDP host for work through a thin client
- working 3D acceleration in RDP for working with CAD applications and rendering
Right now I have 2x 1080 Ti and work with CAD directly on the PC with the cards.
But I want RDP :)
Host system requirements:
Windows Server 2016
A DirectX 11.0-compatible GPU with a WDDM 1.2-compatible driver
A CPU with Second Level Address Translation (SLAT) support
Quote: "The presented solution proves that there are no differences between the same chip across the GTX/Quadro/Tesla lines; it is all about 'software'. [...]"
Maybe we should crowdfund mcerveny to make his code open source and continue developing it on GitHub as a team-based project :D
Pretty sure this would p*ss off Nvidia a lot and could trigger legal action (?), but I'd also love to see the actual code, especially for homelab projects.
... true indeed.
I remember the sh*tstorm when Nvidia released its new licensing model.
https://webcache.googleusercontent.com/search?q=cache:QiFwYJRxW0UJ:https://forum.exetools.com/showthread.php%3Ft%3D18864
I'll do some more research later; the link contains some useful info.
@gotofbi: are you sure there is no online check against Nvidia's backend?
... because, unfortunately, I think there is.
Yes, true. But you must upload the license file (license.bin) within 96 hours of the download time from the licensing portal. The file is locked to the first Ethernet MAC address. But guest drivers now communicate with NVIDIA (one driver parameter out of over 700 :-/O, "RMNvTelemetryCollection": https://gridforums.nvidia.com/default/topic/258/nvidia-virtual-gpu-technology/documentation-for-vgpu-configs/post/14610/#14610), and you must log in to NVIDIA to use "GeForce Experience"...
Hi Guys. This is my first post here so please be gentle! :)
I'm sorry if this is not the right place to post this, but it looks like there are a load of people here who know what they're talking about when it comes to NVIDIA cards.
I have a Dell Alienware NVIDIA GeForce MXM-B GTX 770M 3GB laptop video card (HW6C9). It looks like someone tried to reprogram the BIOS and in the process knocked two components off the board. Neither is identified, and I could really do with some help identifying the missing components so I can replace them.
I have a hot-air station with a narrow nozzle, so I shouldn't have a problem soldering the offending components back on, but I really need to find out what they are, or find a good reference that can point me in the right direction.
I know it's not the latest card, but it would be nice to get it up and running.
Thanks in advance. Any advice would be really appreciated.
I am assuming I am looking at the BIOS chip here, BTW. Apologies for the bad photo; it's as close as I could get with my crappy phone.
I virtualized 8x GTX 1060 6GB with KVM and they work pretty well.
More details please: have the cards not been modified? Is it PCI passthrough or vGPU?

It's PCI passthrough; KVM can hide from the guest that it is a virtualized machine.

Maybe the reason is that the PCI-Express slot with this card is x1/x4 instead of x8/x16? Try rearranging the cards between the slots and see whether the reduction remains or not.
Can you please elaborate a little on what needs to be patched? Is it the driver: nvidia-ml, nvidia-vfio-vgpu or nvidia-vgpud in the case of KVM?

Off topic: with hints and help from mcerveny, I made it work on KVM.
I don't have the magic script; it was all accomplished with a binary patch, so I can't release it.
SMD type 0402, at least on all the cards I did.
Hi,
Have you already solved the issue between the Dell R720 and the GRID K2?

AFAIK, the GRID K2 works properly with the Dell R720, as it is the X79 chipset, which usually does not have a problem with this card. Check the BIOS for any settings that could help you get the card working, such as 4G decoding, for example.
I got an ASUS P104-100 mining-specific GPU which has the same PCB and specs as an ASUS GTX 1080 TURBO (other than maybe CUDA cores) and thus could be flashed into a GTX 1080, or a 1070 with GDDR5X in case there really are only 1920 CUDA cores on board. The reason anyone would want to turn it into a 1080/1070 is that the P104 is SERIOUSLY BIOS-limited: PCIe is dropped to 1.1 (on PCIe 3.0 hardware), VRAM is limited to 4GB (8GB installed), and video output is disabled (even though this card has video output ports). The device ID of a 1080 is 0x1B80 and the P104's is 0x1B87, so I just need to find the resistors that determine the 4th digit and then test which resistances equate to which values.
I don't know if something has changed in the meantime; someone on the LTT forum said they transplanted an entire P106 chip onto a GTX 1060 board and it didn't work unless a P106 BIOS was flashed. Basically it behaved as if the straps don't exist or are somehow the same for both cards. Maybe that's not correct and strap modding would work, but if it is correct, maybe the ID is baked into the chip or some other component. I simply don't have a way of knowing, because I couldn't find any info about this on Pascal.
Here are the docs I found: https://buildmedia.readthedocs.org/media/pdf/envytools/latest/envytools.pdf
I would like to know how the device ID is determined on Pascal, and whether it's the same as it was before. What's the easiest way to identify which resistors are used for the 4th digit of the device ID? I know there is probably an answer in this thread, but it's so massive my eyes fell off reading it...
I would also like to know about any secondary strap functions that could have been changed on the P104 (in case video output is disabled via straps, I would need to change that too).
I really hope nothing has changed and that a simple strap mod would unlock its full potential. Also, if anyone has a GTX 1070 with GDDR5X memory, please PM me your GPU BIOS and a GPU-Z screenshot, so I have everything I need in case it does only have 1920 CUDA cores on board (which it most likely does).
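A quick way to read back which IDs the card currently straps to is standard pciutils, nothing card-specific (a sketch; the IDs shown are the ones quoted above):

lspci -nn -d 10de:   # list NVIDIA devices with their [vendor:device] IDs
# a P104-100 should report [10de:1b87], a GTX 1080 [10de:1b80]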
Dear Krutav,
Thanks for your idea. Just so I understand: I can install KVM (Kernel-based Virtual Machine), install a vBIOS from another device (for example a Quadro P4000), and configure it to start a VM with Windows 10, and then I have a Quadro P4000 in the system, right?
If more people want instructions for this, I can write a small guide possibly.
"If more people want instructions for this, I can write a small guide possibly"
I spend 10 beers 😊
I have no idea what about you talk :))) but i believe in you... Still waiting for a step by step method.
lspci -nn
02:00.0 VGA compatible controller (0300): NVIDIA Corporation GP104 (GeForce GTX 1080)

nano /etc/pve/local/qemu-server/100.conf

Now that you are inside your VM configuration, you will need to add this one line that lets us spoof our graphics card:

args: -device 'vfio-pci,host=02:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on,romfile=NVP4000,x-pci-vendor-id=0x10de,x-pci-device-id=0x1BB1,x-pci-sub-vendor-id=0x10de,x-pci-sub-device-id=0x11A3'

In this line there are a few things you will need to edit for yourself:

host=02:00.0
Change this value according to what you got from running lspci -nn.

romfile=YOURROM.rom
You need to obtain the ROM for the graphics card that you choose to spoof as; I went to techpowerup.com and downloaded the Quadro P4000 BIOS. On techpowerup.com you can also obtain the PCI device ID and the subsystem ID.

x-pci-device-id=0x1BB1
Here I have chosen my new device ID to match that of the Quadro P4000.

x-pci-sub-device-id=0x11A3
Here I have chosen my new subsystem ID to match that of the Quadro P4000.
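For readers not on Proxmox: the same spoof can, I believe, be written directly on a plain QEMU/KVM command line, since the x-pci-* properties belong to QEMU's vfio-pci device rather than to Proxmox. A minimal sketch, assuming vfio-pci is already bound to the card at 02:00.0 and that the ROM/disk file names are placeholders:

qemu-system-x86_64 -enable-kvm -machine q35 -cpu host -m 8G \
  -device 'vfio-pci,host=02:00.0,multifunction=on,romfile=NVP4000.rom,x-pci-vendor-id=0x10de,x-pci-device-id=0x1BB1,x-pci-sub-vendor-id=0x10de,x-pci-sub-device-id=0x11A3' \
  -drive file=win10.qcow2,if=virtio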
Proxmox = done
VM = done (Windows 10 is running)
Change to PCIe passthrough = failed ("No IOMMU detected, please activate it. See documentation for further information.") :-X
In the BIOS there is only one place to switch VT-d on and off, that's all. I think it doesn't work with my laptop, an HP Pavilion 17-ab303ng.
I'll test it one more time; I think I'm doing something wrong...!
Quote: "Change to PCIe passthrough = failed (No IOMMU detected)... I think it doesn't work with my laptop."
Laptops don't have a very high success rate with VT-d, so I recommend using a desktop. If you still want GPU virtualization, you can try out the new GPU partitioning on Windows 10 with the Reddit guide that I linked in the earlier post.
Passthrough is working, but the NVIDIA installer "can't find any driver for your system", and when I try to install the driver manually it gives me error 43!!!
Nov 29 16:51:41 L1Proxmox kernel: nvidia-nvlink: Nvlink Core is being initialized, major device number 238
Nov 29 16:51:41 L1Proxmox kernel: NVRM: loading NVIDIA UNIX x86_64 Kernel Module 450.89 Thu Oct 22 20:49:26 UTC 2020
Nov 29 16:51:42 L1Proxmox nvidia-vgpud[702]: Verbose syslog connection opened
Nov 29 16:51:42 L1Proxmox nvidia-vgpud[702]: Started (702)
Nov 29 16:51:42 L1Proxmox nvidia-vgpud[702]: Global settings:
Nov 29 16:51:42 L1Proxmox nvidia-vgpud[702]: Size: 16
Nov 29 16:51:42 L1Proxmox nvidia-vgpud[702]: Homogeneous vGPUs: 1
Nov 29 16:51:42 L1Proxmox nvidia-vgpud[702]: vGPU types: 401
Nov 29 16:51:42 L1Proxmox nvidia-vgpud[702]:
Nov 29 16:51:42 L1Proxmox kernel: NVRM: GPU at 0000:01:00.0 has software scheduler DISABLED with policy NONE.
Nov 29 16:51:43 L1Proxmox nvidia-vgpud[702]: pciId of gpu [0]: 0:1:0:0
Nov 29 16:51:43 L1Proxmox nvidia-vgpu-mgr[715]: notice: vmiop_env_log: nvidia-vgpu-mgr daemon started
Nov 29 16:51:43 L1Proxmox nvidia-vgpud[702]:
Nov 29 16:51:43 L1Proxmox nvidia-vgpud[702]: Physical GPU:
Nov 29 16:51:43 L1Proxmox nvidia-vgpud[702]: PciID: 0x0000 / 0x0001 / 0x0000 / 0x0000
Nov 29 16:51:43 L1Proxmox nvidia-vgpud[702]: Size: 52
Nov 29 16:51:43 L1Proxmox nvidia-vgpud[702]: DevID: 0x10de / 0x1bb3 / 0x10de / 0x0000
Nov 29 16:51:43 L1Proxmox nvidia-vgpud[702]: Supported vGPUs count: 14
Nov 29 10:00:56 DellT3500-PVE kernel: nvidia-nvlink: Nvlink Core is being initialized, major device number 239
Nov 29 10:00:56 DellT3500-PVE kernel: NVRM: loading NVIDIA UNIX x86_64 Kernel Module 450.89 Thu Oct 22 20:49:26 UTC 2020
Nov 29 10:00:58 DellT3500-PVE nvidia-vgpud[753]: Verbose syslog connection opened
Nov 29 10:00:58 DellT3500-PVE nvidia-vgpud[753]: Started (753)
Nov 29 10:00:59 DellT3500-PVE kernel: NVRM: GPU at 0000:02:00.0 has software scheduler DISABLED with policy NONE.
Nov 29 10:00:59 DellT3500-PVE nvidia-vgpud[753]: Global settings:
Nov 29 10:00:59 DellT3500-PVE nvidia-vgpud[753]: Size: 16
Nov 29 10:00:59 DellT3500-PVE nvidia-vgpud[753]: Homogeneous vGPUs: 1
Nov 29 10:00:59 DellT3500-PVE nvidia-vgpud[753]: vGPU types: 401
Nov 29 10:00:59 DellT3500-PVE nvidia-vgpud[753]:
Nov 29 10:00:59 DellT3500-PVE nvidia-vgpud[753]: pciId of gpu [0]: 0:2:0:0
Nov 29 10:00:59 DellT3500-PVE nvidia-vgpu-mgr[762]: notice: vmiop_env_log: nvidia-vgpu-mgr daemon started
Nov 29 10:00:59 DellT3500-PVE nvidia-vgpud[753]: GPU not supported by vGPU at PCI Id: 0:2:0:0 DevID: 0x10de / 0x1b80 / 0x10de / 0x0000
Nov 29 10:00:59 DellT3500-PVE nvidia-vgpud[753]: error: failed to send vGPU configuration info to RM: 6
Nov 29 10:00:59 DellT3500-PVE nvidia-vgpud[753]: PID file unlocked.
Nov 29 10:00:59 DellT3500-PVE nvidia-vgpud[753]: PID file closed.
Nov 29 10:00:59 DellT3500-PVE nvidia-vgpud[753]: Shutdown (753)
Nov 29 10:00:59 DellT3500-PVE systemd[1]: nvidia-vgpud.service: Main process exited, code=exited, status=6/NOTCONFIGURED
Nov 29 10:00:59 DellT3500-PVE systemd[1]: nvidia-vgpud.service: Failed with result 'exit-code'.
Hey Krutav,

Quote: "I tested all this on Linux, which works perfectly. On Windows, you need to make sure that the CPU type is set to whatever CPU architecture you have. Because my CPU is Kaby Lake, I set that. I won't set it to host, because there is some bug with nested virtualization that causes it to break. I actually found a solution to this on the Proxmox forum somewhere, and while I don't have the link, it should be something about running Hyper-V on Proxmox with nested virtualization. I'd like to point out that the spoofing trick is intended for loading alternate drivers. It will NOT improve SPECviewperf scores or CAD performance on consumer hardware; Quadro GPUs use far higher quality chips compared to GeForce. I'll send some more info as it comes up."

Can you write me a complete solution without the "Error 43" problem? For me, the important part is how I can spoof it as a Quadro.
I missed again, and I have no idea why... :/
Can't find a working how-to in the forums or on YouTube... every time I end up at the f...* Error 43 failure.
Best regards
Quote: "It will NOT improve SPECviewperf scores or CAD performance on consumer hardware. Quadro GPUs use far higher quality chips compared to GeForce."

Get an AMD GPU like the RX480. It can be flashed with the vBIOS of FirePro and Radeon Pro/Instinct GPUs. They are also far cheaper compared to Nvidia, which I wouldn't bother with because of its ridiculous prices. I use Solidworks on my GTX 1060 and it works perfectly fine; I haven't had any issues with it. Ryzen is great for the project because of the price-to-performance value.

I know. Solidworks just has to see it as a professional card to unlock some features like anti-aliasing; I don't need more performance, the GTX1070 brings enough power.
I'm buying a second machine for experiments. This machine is a workstation; I can't experiment with it. Maybe I can test in January.
Is a Ryzen/Nvidia combo good for this project?
Any link? Which FirePro vBIOS?
Edit: I found some how-tos, but only for flashing an RX480 into an RX580, nothing about a FirePro W7100 or anything like that.
This is too tricky... I'll buy new hardware for a 2nd PC (server): Ryzen 3900X + RTX2080ti.

Yes... it is. That's why we have new technologies coming out all the time. Notably, GPU-P on Windows allows you to partition any GPU for the VM absolutely free on Hyper-V. It's far easier compared to all this hardware and software modding because it "just works."

That is totally new. No thx :)
My goal is now:
- buy a new GPU (2080ti)
- install Ubuntu 20.10
- install KVM
- configure a VM with GPU passthrough
- spoof it as an RTX 6000 or 8000 <--- this point is very important for me
I found a cool video where everything about installing it on Ubuntu is explained: https://youtu.be/ID3dlVHDl0c
The only thing is I can't find videos about spoofing GPUs.
You can use the VFIO PCI spoof arguments that I posted earlier. It works flawlessly.
Perfect, thxxxx.
Last question: are there any "error 43" problems with AMD Radeon cards?
PS status: sold my 2 GTX1070s, now looking for a 2080ti on eBay 🤪
"but they will disable some features if you are not using Radeon FirePro"
what are these?
PS: My 2080ti (MSI Trio) is on the way to me 8)
"but they will disable some features if you are not using Radeon FirePro"
what are these?
PS: My 2080ti (MSI Trio) is on the way to me 8)
What I am saying is that Nvidia disables the GeForce driver in a VM, but AMD does not. Instead, AMD disables a couple of features according to the forums, but the GPU should still work. That only happens if you are using an AMD gaming GPU without setting your KVM CPU model to "host". If you are spoofing the GPU, you should be fine either way.
"but they will disable some features if you are not using Radeon FirePro"
what are these?
PS: My 2080ti (MSI Trio) is on the way to me 8)
What I am saying is that Nvidia disables GeForce driver on VM, but AMD does not. Instead, AMD disables a couple features according to the forums, but the GPU should still work. That only happens if you are using AMD gaming GPU without setting your KVM model to "host." If you are spoofing the GPU, you should be fine either way.
Understood; so it's the same approach: spoof it as a FirePro with the vBIOS... and some VFIO configs.
"but they will disable some features if you are not using Radeon FirePro"
what are these?
PS: My 2080ti (MSI Trio) is on the way to me 8)
What I am saying is that Nvidia disables GeForce driver on VM, but AMD does not. Instead, AMD disables a couple features according to the forums, but the GPU should still work. That only happens if you are using AMD gaming GPU without setting your KVM model to "host." If you are spoofing the GPU, you should be fine either way.
Understand, same way spoof it as "FirePro" with vbios.... and some vfio configs
Actually, if all you want to do is spoof the PCI ID, you don't need to flash a vBIOS. Solidworks only checks the PCI ID, and no program or driver really cares about the vBIOS. All I had to do was spoof the PCI ID; Windows automatically loaded the Quadro driver and I got the advanced functionality in Solidworks.
I am interested in the KVM and vGPU solution. Can anyone guide us on which file should be patched?

I am working on nvidia-vgpu. It uses embedded FlexLM and a binary file (****64) format... so it should be doable.
" args: -device 'vfio-pci,host=02:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on,romfile=NVP4000,x-pci-vendor-id=0x10de,x-pci-device-id=0x1BB1,x-pci-sub-vendor-id=0x10de,x-pci-sub-device-id=0x11A3' "
This is in a different format how can i konvert it to XML like?
Hi Krutav,
Now I am ready for KVM; I bought new hardware. I now have 16 cores (Ryzen 3950X) but only 1 GPU :/ and no money to buy a second GPU.
Is it possible to run Fedora like yours with a single GPU and run a VM with GPU passthrough simultaneously?
EDIT: I found a solution to my single-GPU problem... everything is running: Ubuntu as the OS, sharing its own GPU (yes, passthrough) with its VM on QEMU. Look here: https://youtu.be/eTX10QlFJ6c :popcorn: And the next step: GPU SPOOFING.
Quote: "I am working on nvidia-vgpu. It uses embedded FlexLM and a binary file (****64) format... so it should be doable."
Your PM box is full, so I can't send you a PM.
Have you worked on FlexLM before?
I actually have all the vGPU drivers for standard Linux, ESXi, Xen, and Red Hat (the 460 driver, 1/19/2021). I am willing to share them with anyone here who is willing to find a solution to vGPU on consumer GPUs. Note that spoofing the PCI ID is not enough; you need to modify the way the NVIDIA driver determines whether the GPU is capable of vGPU by tooling around in Linux. I recommend using RHEL, which is free now.
I gave that a shot a short while ago; the plan was to intercept all ioctl calls and "fix" the returned data to indicate vGPU support, but I didn't get far. The nvidia-vgpud service seems to only care about the PCI device ID, and changing that was pretty straightforward after some reverse engineering. The nvidia-vgpu-mgr service, on the other hand, gave me trouble. It seems that simply altering the values returned by the kernel module is not enough to get it working. I am assuming that nvidia-vgpu-mgr expects some other side effect to take place in the kernel or GPU, but the kernel bails early because the GPU is unsupported. To keep digging into this I will have to set up a kernel debugger, which requires hardware that I do not have at the moment.
I have published the code here: https://github.com/DualCoder/vgpu_unlock if anyone wants to have a look.
Currently it crashes with a failed assertion:
Code: [Select]
Jan 20 19:21:51 Debian-dom0 nvidia-vgpu-mgr[1429]: notice: vmiop_env_log: (0x0): Received start call from nvidia-vgpu-vfio module: mdev uuid 38512783-4893-47f7-9179-b0594167e86b GPU PCI id 00:01:00.0 config params vgpu_type_id=50
Jan 20 19:21:51 Debian-dom0 nvidia-vgpu-mgr[1429]: notice: vmiop_env_log: (0x0): pluginconfig: vgpu_type_id=50
Jan 20 19:21:51 Debian-dom0 nvidia-vgpu-mgr[1429]: notice: vmiop_env_log: Successfully updated env symbols!
Jan 20 19:21:51 Debian-dom0 nvidia-vgpu-mgr[1429]: error: vmiop_log: NVOS status 0x56
Jan 20 19:21:51 Debian-dom0 nvidia-vgpu-mgr[1429]: error: vmiop_log: Assertion Failed at 0x42af43bf:293
Jan 20 19:21:51 Debian-dom0 nvidia-vgpu-mgr[1429]: error: vmiop_log: 11 frames returned by backtrace
Jan 20 19:21:51 Debian-dom0 nvidia-vgpu-mgr[1429]: error: vmiop_log: /lib/x86_64-linux-gnu/libnvidia-vgpu.so(_nv005021vgpu+0x18) [0x7f0c42b393c8]
Jan 20 19:21:51 Debian-dom0 nvidia-vgpu-mgr[1429]: error: vmiop_log: /lib/x86_64-linux-gnu/libnvidia-vgpu.so(+0xa3e3b) [0x7f0c42aefe3b]
...
I should be able to continue my efforts in about a week or so. The fact that a couple of other users seemed to be able to do this fairly easily with userspace patches makes me think that I am taking the most complicated approach possible, but I don't really know any other way to approach a problem like this.
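For anyone who wants to watch the ioctl traffic described above before patching anything, a small observation-only sketch (assumes the GRID host driver's nvidia-vgpud service is running):
Code: [Select]
# attach strace to the running nvidia-vgpud service and log only its ioctl
# calls; this observes the traffic, it does not modify anything
strace -f -e trace=ioctl -p "$(pidof nvidia-vgpud)"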
Welcome back Krutav,
my 2nd GPU arrives today and I can fire it up :).
I hope I am done by this evening with:
- installing Proxmox
- creating a Windows 10 64-bit VM
- getting GPU passthrough working (Nvidia RTX 2080 Ti)
- spoofing it as a Quadro RTX 4000 with RTX4000.rom (patched vBIOS)
If everything goes well I will write a point-by-point,
step-by-step tutorial for newbies. If not, I hope you are online tonight (GMT+1)
and can log in to my system ;-)
Best regards
Edit
11:15am: GPU arrived
6:40pm: Proxmox installed, using an external USB LAN adapter; my onboard LAN adapter (Realtek r8125) didn't
work with PVE 6.3.1 ("network not found" failure from Proxmox). I have no idea how to
install the driver on Proxmox: no 'make' executable is installed, so no ./autorun is possible.
8:39pm: Ubuntu VM working with passthrough.
9:40pm: Windows 10 VM (24 cores, 24GB RAM, 2080 Ti) with fully working passthrough :box: (see picture)
00:55am: GPU spoofed as an RTX 4000, done (check picture)
BUT SolidWorks doesn't open the anti-aliasing menu.
I am testing something: I installed SolidWorks before installing the Quadro drivers, maybe that is the fault. I am reinstalling it.
I don't reach the goal :( it identifies as /PCIe/SSE2 instead of "Nvidia RTX 4000", check picture
That "/PCIe/SSE2" is just the tail of the OpenGL renderer string the driver reports: it normally reads something like "Quadro RTX 4000/PCIe/SSE2", where SSE2 refers to a CPU instruction-set feature the driver relies on (present on pretty much every CPU made after 2001). Seeing only "/PCIe/SSE2" means the board-name part is missing, i.e. the application did not identify the GPU. So I don't quite see the SSE2 part itself as an issue, but the missing name may be worth looking into.
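On a Linux guest you can check exactly what the driver reports with glxinfo (on Windows, GPU-Z or SOLIDWORKS Rx shows the equivalent); a healthy string looks like "Quadro RTX 4000/PCIe/SSE2":
Code: [Select]
# print the OpenGL vendor/renderer strings the driver exposes;
# glxinfo ships in the mesa-utils package on Debian/Ubuntu
glxinfo | grep -E "OpenGL (vendor|renderer) string"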
Okay, understood... I think SolidWorks takes the info from the registry... but I can't change the ID in Windows.
Tonight I got the Nvidia Quadro software running, I don't know how :D but I now know how to do GPU passthrough on my system.
I had so many problems with IOMMU and the vfio driver, like this: https://superuser.com/questions/1510581/vga-and-audio-assigned-to-vfio-but-not-usb-and-serial-controller-of-rtx-2080
"So what I recommend doing then is trying to find the correct registry values that determine if SOLIDWORKS can use AntiAliasing or not. If you cannot find it, I will continue the search for the solution!"
thats it. Hack the registry...
It never quite crossed my mind that so many people have issues with IOMMU PCIe passthrough. Since I use used workstation/business-grade systems from a few years ago, I haven't had a single issue with anything. All the graphics cards and everything pass through without a problem.
I also had no problems until I switched the USB port to the VM. The kernel module (xhci_hcd) grabbed the USB controller of the RTX 2080 Ti:
2d:00.2 USB controller [0c03]: NVIDIA Corporation Device [10de:1ad7] (rev a1)
Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:3721]
Kernel driver in use: xhci_hcd
#!/bin/sh
# hand the RTX 2080 Ti's USB controller (bound to xhci_hcd) over to vfio-pci
PCI_HID="0000:2d:00.2"
echo -n "$PCI_HID" > /sys/bus/pci/drivers/xhci_hcd/unbind
echo -n "$PCI_HID" > /sys/bus/pci/drivers/vfio-pci/bind
# same for the card's USB Type-C (UCSI) function, bound to nvidia-gpu by default
PCI_HID="0000:2d:00.3"
echo -n "$PCI_HID" > /sys/bus/pci/drivers/nvidia-gpu/unbind
echo -n "$PCI_HID" > /sys/bus/pci/drivers/vfio-pci/bind
In which folder do I have to put it so that it starts automatically with the system?
QuoteSome of the features, like Quadro View Desktop Management, can only be used if the GPU is sure that it is a Quadro
Yes, but I found a method to start it and activate the window manager ;) (you'll get it soon as a PM, that is maybe our unique chance to hack it).
QuoteECC as well, which is missing on consumer cards but can cause applications like vGPU not to start if it is not configured, so that's another thing that needs to be worked on. I am thinking about using a dynamic EEPROM that can change the vBIOS on command, but that would likely crash the system, so that one is off the list.
No idea what you are talking about :D
QuoteI also had no problems until I switched the USB port to the VM. The kernel module (xhci_hcd) grabbed the USB controller of the RTX 2080 Ti
I totally forgot the newer cards have USB! Make sure you pass the whole controller to the VM as a PCIe device and add it to your PCI stub in GRUB. This way the host doesn't try to initialize USB and screw everything over.
QuoteIn which folder do I have to put it so that it starts automatically with the system?
Do this in GRUB. When you bind the Nvidia USB controller to the PCI stub at boot, the xhci module will never load and the host won't know it is there. I'll try to post a sample GRUB configuration later to show you how this can be done. But you likely will not need any script, since this kind of thing mostly works out of the box.
QuoteYes, but I found a method to start it and activate the window manager ;) (you'll get it soon as a PM, that is maybe our unique chance to hack it).
How were you able to get this working? I haven't found any instructions for this anywhere on the internet, so it would be really cool to see how you pulled it off!
When you activate the window manager, everything looks different in the Nvidia system manager.
QuoteNo idea what you are talking about :D
ECC memory is error-correcting code memory, a feature that professional graphics cards have. However, for most functions it has to be set to DISABLED. To do that, you need the correct Quadro vBIOS, or the Nvidia driver will say it is a GeForce and the ECC options will be gone. And for vGPU this feature needs to be disabled and stay set that way, or you will get no results. I'll post more as new details roll in.
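For reference, on boards where the driver does expose ECC, it is toggled with nvidia-smi rather than the vBIOS (root required, and a reboot is needed for the change to take effect):
Code: [Select]
# show the current/pending ECC state of GPU 0, then disable ECC on it
nvidia-smi -q -i 0 | grep -A 2 "Ecc Mode"
nvidia-smi -i 0 -e 0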
QuoteDo this in GRUB. When you bind the Nvidia USB controller to the PCI stub at boot, the xhci module will never load and the host won't know it is there. I'll try to post a sample GRUB configuration later to show you how this can be done. But you likely will not need any script, since this kind of thing mostly works out of the box.
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on vfio-pci.ids=10de:1e07,10de:10f7,10de:1ad6,10de:1ad7"
GRUB_CMDLINE_LINUX=""
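Note that the kernel parameters inside GRUB_CMDLINE_LINUX_DEFAULT are separated by spaces, not commas (only the IDs inside vfio-pci.ids are comma-separated). After editing, apply the config and verify the binding after a reboot:
Code: [Select]
update-grub             # regenerate the GRUB config on Proxmox/Debian, then reboot
lspci -nnk -s 2d:00.2   # "Kernel driver in use" should now read vfio-pci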
QuoteDo this in GRUB. When you bind the Nvidia USB controller to the PCI stub at boot, the xhci module will never load and the host won't know it is there. I'll try to post a sample GRUB configuration later to show you how this can be done. But you likely will not need any script, since this kind of thing mostly works out of the box.
Please, a short howto: where can I place the script so it runs at system startup?
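If the GRUB-only route isn't enough and the unbind script is still needed, one conventional home for it is a systemd oneshot unit (a sketch only; the unit name is made up, and whether Before=pve-guests.service is the right ordering on Proxmox is an assumption). The crontab answer further down this thread works too.
Code: [Select]
# /etc/systemd/system/vfio-unbind.service  (hypothetical name)
[Unit]
Description=Hand the RTX 2080 Ti USB/UCSI functions to vfio-pci
Before=pve-guests.service

[Service]
Type=oneshot
ExecStart=/home/unbind.sh

[Install]
WantedBy=multi-user.target
Enable it once with "systemctl enable vfio-unbind.service".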
QuoteHow were you able to get this working? I haven't found any instructions for this anywhere on the internet, so it would be really cool to see how you pulled it off!
You have a PM.
QuoteYou can activate the Quadro Desktop Manager over a Remote Desktop connection: right-click on the desktop and open RTX Desktop Manager --> activate
Time to pull out the old 660 Ti and get an nView desktop working! This will be very useful for a headless workstation, maybe.
In older Quadro drivers it's called nView...
QuoteNope, it is not working. This is a bug in the kernel; in newer kernels it is not a problem, but in this case (Proxmox >>5.4 kernel<<) it is. Many people have this problem. I have no idea how to update the kernel.
If it is fine in a newer kernel, then we can update. Here is a GitHub link to a project that makes it easy to install the newest kernel: https://github.com/fabianishere/pve-edge-kernel You can install kernel 5.10 with it.
QuoteOne more thing... very interesting: when you use an older Quadro driver, the SolidWorks renderer says I have an "unknown board/PCIe/SSE2"
I have no idea why it says this, but at the same time, I don't necessarily see it as a problem. I would make sure that you first get all the GPU components passed through, which includes USB, audio, GPU, I2C, etc. That way we can rule this out. With my 1080, I turn it into a P4000 Quadro without seeing any of this SSE2 stuff. Also, get GPU-Z and take a screenshot of what it is saying, because it can give us some more information.
Before that (with the newer Quadro driver) it said "/PCIe/SSE2".
QuoteI have no idea why it says this, but at the same time, I don't necessarily see it as a problem. I would make sure that you first get all the GPU components passed through, which includes USB, audio, GPU, I2C, etc. That way we can rule this out.
Done. It now boots very well with fully working passthrough. I put the script into the /home folder and set up a crontab job (a systemd unit, as sketched above, would work too):
crontab -e
@reboot /home/unbind.sh
CTRL+O to save the file
CTRL+X to quit the editor
nano /home/unbind.sh
#!/bin/sh
PCI_HID="0000:2d:00.2"
echo -n "$PCI_HID" > /sys/bus/pci/drivers/xhci_hcd/unbind
echo -n "$PCI_HID" > /sys/bus/pci/drivers/vfio-pci/bind
PCI_HID="0000:2d:00.3"
echo -n "$PCI_HID" > /sys/bus/pci/drivers/nvidia-gpu/unbind
echo -n "$PCI_HID" > /sys/bus/pci/drivers/vfio-pci/bind
chmod +x /home/unbind.sh
reboot
you can check it with lspci -v
Scroll up and find the devices whose drivers need to be unbound...
QuoteI turn it into a P4000 quadro without seeing any of this SSE2 stuff.
You can see it in the registry or in the RealView hack tool:
Computer\HKEY_CURRENT_USER\SOFTWARE\SolidWorks\SOLIDWORKS 2020\Performance\Graphics\Hardware\Current
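For anyone chasing the same key, it can be inspected from a command prompt in the guest (query only; which value SOLIDWORKS actually reads there is not confirmed in this thread):
Code: [Select]
:: dump the key the RealView hack tools work against
reg query "HKCU\SOFTWARE\SolidWorks\SOLIDWORKS 2020\Performance\Graphics\Hardware\Current"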
Also, Looking Glass may be of interest to you, see https://looking-glass.io, or join us on our Discord server (https://discord.com/invite/52SMupxkvt)
Hi gnif,
very nice to meet you :). Thank you very much for the invitation to your Discord. Everything is now working with passthrough. Maybe you can help us work on spoofing GPUs?
Before we use Looking Glass I must have SolidWorks working in native mode with the "full scene anti-aliasing" option open.
args: -device 'vfio-pci,host=2d:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on,romfile=RTX4000MOD.rom,x-pci-vendor-id=0x10de,x-pci-device-id=0x1eb1,x-pci-sub-vendor-id=0x10de,x-pci-sub-device-id=0x12a0'
$ qemu-system-x86_64 -device vfio-pci,? 2>&1 | grep "x-*"
>> vfio-pci.x-pci-sub-device-id=uint32
>> vfio-pci.x-no-kvm-msi=bool
>> vfio-pci.x-pcie-lnksta-dllla=bool (on/off)
>> vfio-pci.x-igd-opregion=bool (on/off)
>> vfio-pci.x-vga=bool (on/off)
>> vfio-pci.x-pci-vendor-id=uint32
>> vfio-pci.x-req=bool (on/off)
>> vfio-pci.x-igd-gms=uint32
>> vfio-pci.x-no-kvm-intx=bool
>> vfio-pci.x-pci-device-id=uint32
>> vfio-pci.host=str (Address (bus/device/function) of the host device, example: 04:10.0)
>> vfio-pci.x-no-kvm-msix=bool
>> vfio-pci.x-intx-mmap-timeout-ms=uint32
>> vfio-pci.bootindex=int32
>> vfio-pci.x-pcie-extcap-init=bool (on/off)
>> vfio-pci.addr=int32 (Slot and optional function number, example: 06.0 or 06)
>> vfio-pci.x-pci-sub-vendor-id=uint32
>> vfio-pci.x-nv-gpudirect-clique=uint4 (NVIDIA GPUDirect Clique ID (0 - 15))
>> vfio-pci.x-no-mmap=bool
qemu-system-x86_64 -device vfio-pci,? 2>&1 | grep "x-*"
addr=<int32> - Slot and optional function number, example: 06.0 or 06 (default: -1)
bootindex=<int32>
host=<str> - Address (bus/device/function) of the host device, example: 04:10.0
x-balloon-allowed=<bool> - (default: false)
x-igd-gms=<uint32> - (default: 0)
x-igd-opregion=<bool> - on/off (default: false)
x-intx-mmap-timeout-ms=<uint32> - (default: 1100)
x-msix-relocation=<OffAutoPCIBAR> - off/auto/bar0/bar1/bar2/bar3/bar4/bar5 (default: "off")
x-no-geforce-quirks=<bool> - (default: false)
x-no-kvm-intx=<bool> - (default: false)
x-no-kvm-ioeventfd=<bool> - (default: false)
x-no-kvm-msi=<bool> - (default: false)
x-no-kvm-msix=<bool> - (default: false)
x-no-mmap=<bool> - (default: false)
x-no-vfio-ioeventfd=<bool> - (default: false)
x-nv-gpudirect-clique=<uint4> - NVIDIA GPUDirect Clique ID (0 - 15)
x-pci-device-id=<uint32> - (default: 4294967295)
x-pci-sub-device-id=<uint32> - (default: 4294967295)
x-pci-sub-vendor-id=<uint32> - (default: 4294967295)
x-pci-vendor-id=<uint32> - (default: 4294967295)
x-pcie-extcap-init=<bool> - on/off (default: true)
x-pcie-lnksta-dllla=<bool> - on/off (default: true)
x-req=<bool> - on/off (default: true)
x-vga=<bool> - on/off (default: false)
xres=<uint32> - (default: 0)
The romfile argument is pretty much useless on these Nvidia cards, and I still need to try it with an AMD/ATI graphics card to further prove the lack of usability of the feature. I say it is useless because the GPU is going to read right off its own internal ROM. The only practical use of romfile is to get the boot screen working on a passed-through GPU, but it only works if you keep the PCI ID the same. I think Nvidia has outsmarted us in this area, and it's understandable.
Quote-device virtio-vga,virgl=on
No idea whether/how you can use "virgl=on,renderer=Quadro RTX 4000" (or something like this) in our x- args line: args: -device 'vfio-pci,host=2d:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on,romfile=RTX4000MOD.rom,x-pci-vendor-id=0x10de,x-pci-device-id=0x1eb1,x-pci-sub-vendor-id=0x10de,x-pci-sub-device-id=0x12a0
I give up :(
QuoteNow it is working :)))))
Awesome! What was the solution?
Nice job. But what does it mean? :)
Is it done but needs a few more updates, or
are many milestones done but some still need to be worked out?
:-DD :-DD :-DD :-DD :-DD :-DD :-DD :-DD :-DD
QuoteNice job. But what does it mean? :)
Is it done but needs a few more updates, or
are many milestones done but some still need to be worked out?
"(I already have gaming pc which is cheaper than a vGPU license so its really only gonna be for fun and not practical use.)"
There is a post in this thread with working hacked vgpu grid driver. Why you not ask him for a working solution?
QuoteThere is a post in this thread with a working hacked vGPU GRID driver. Why not ask him for a working solution?
There are 2 options: make an add-on to DualCoder's script that takes out the licensing requirement, OR hack the licensing server, because it runs a FlexNet license server, which is apparently very easy to crack.
Still waiting for a working noob version. 😊
Hello everyone!
I recently obtained a Tesla K10 (converted to a K2) from eBay. Unfortunately, the K2 (non-vGPU) drivers are not supported by modern Linux, so I decided to convert it back to a K10 for now. I installed the resistors and got it displayed as a K10; however, I cannot find a correct BIOS dump for those GPUs.
I used nvflash with the override to flash one of the chips using this one: https://www.techpowerup.com/vgabios/213266/213266
Does anyone have a full BIOS dump from an original K10 for both vBIOS chips? From what I understand, they are not the same.
:-//
P.S. I plan to work on an interesting project with the final goal of converting a dual-GPU Tesla K10 into a dual K5000. The K10/K2 and K5000 use similar GPUs (GK104) with different part numbers, but the number of CUDA cores, TMUs and ROPs is the same (same story as the GK104 on the GTX 690 vs. the GK104 on the K5000, or the GK104 on the GTX 680).
Tesla K10: 10de:118F
Grid K2: 10de:11BF
Quadro K5000: 10de:11BA
So far, all the resistor values mentioned in this topic are valid for the real K10 as well. The picture below shows the resistors used for the K10 BIOS chip (the rear one, near the power connectors). R2 and R3 are responsible for the byte change from 8 to B, while R4 and R5 select the GDDR5 manufacturer (Samsung vs. Hynix). R1 and some supporting resistors around it are part of the ROM circuit, which is identical to the GTX 7xx lineup. I compared values and reverse engineered the schematic of the ROM circuit; the schematic is attached in the second picture. The vBIOS EEPROM is this part: http://ww1.microchip.com/downloads/en/devicedoc/doc0606.pdf
From my understanding, the 4th-byte difference comes not from the vBIOS circuit (which affects the 3rd byte only) but from strap #2 on the GPU die itself. A GTX 780 Ti schematic is attached for your reference; I found it on a Russian electronics repair forum. I have seen it said that the 4th byte can be changed using BIOS straps, but I did not understand how to do it. It would be great if someone could elaborate on this. Thank you.
QuoteThat's exactly what I am looking for, the GPU1 vBIOS from a K10. If someone has it, it would help me a lot.
Here is GPU1:
Also, I wonder if the community here has figured out where the 4th-byte resistor connects to on GK104 chips. I understand it goes to the GPU die, but I cannot reverse engineer it without disassembling the GPU itself.
I cannot flash vBIOS #1 because the chip has a device ID ending with eight zeros. How do I resolve this? Thank you.
QuoteCannot flash vBIOS #1 because the chip has a device ID ending with eight zeros. How to resolve this?
Those last 8 digits are the Subsystem ID. GPU1 is supposed to be Subsystem 10DE 0970, so you will need to change that first. What you can do is force the GPU1 flash by using the force arguments for NVFLASH; there is a modded version of NVFLASH that bypasses these ID checks. Please try that and see if it will let you flash.
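Something like the sketch below (the ROM file name is illustrative; --index and -6 are the same adapter selector and override switch used elsewhere in this thread):
Code: [Select]
nvflash --index=1 --save gpu1-backup.rom   # back up the current image first
nvflash --index=1 -6 k10-gpu1.rom          # -6 overrides the subsystem ID mismatch check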
QuoteStill waiting for a working noob version. 😊
Same for me. I've tried to find out how to extend the trial on the licensing server, but it looks like I'll just have to keep signing up for trial licenses...
Based on my testing, the internal timer for the license requirement starts at 20 minutes of the guest VM's Nvidia driver running. If you can mask that, you get the features forever. But that is harder to do.
args: -device 'vfio-pci,host=06:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on,romfile=quadrok5000.rom,x-pci-vendor-id=0x10de,x-pci-device-id=0x11BA,x-pci-sub-vendor-id=0x10de,x-pci-sub-device-id=0x0965'
QuoteBased on my testing, the internal timer for the license requirement starts at 20 minutes of the guest VM's Nvidia driver running. If you can mask that, you get the features forever. But that is harder to do.
The guest's driver? On a Windows guest? Then here is a crazy idea: attach Cheat Engine to the driver, enable the speedhack, speed = 0, done. I have no idea if that could work; it will probably cause other problems though.
mcerveny: does "470.05_gameready_win10-dch_64bit_international.exe" allow passthrough of a GTX card without any virtualization quirks (and without error code 43)?
However, SolidWorks with my university license just refuses to run under KVM virtualization whatsoever.
1. A fake Quadro K5000 inside a Windows VM can be detected as a Tesla K10
Massive thanks to @DualCoder for making the vGPU Unlock program. It works fairly well!
I tested it out with the Proxmox hypervisor (KVM) and a Windows 10 virtual machine. I was even able to game at 60 FPS! The only problem is that after 10 minutes it capped me at 3 FPS because I can't afford a license :-DD
In all seriousness though, this is awesome work and it is actually very well done. I suggest all of you try it out and contribute to the project!
Next step: trying to figure out how to bypass licensing...
Quotemcerveny: does "470.05_gameready_win10-dch_64bit_international.exe" allow passthrough of a GTX card without any virtualization quirks (and without error code 43)?
QuoteHowever, SolidWorks with my university license just refuses to run under KVM virtualization whatsoever.
If you enable Hyper-V on the guest system, for whatever reason the VM presents as a regular computer and doesn't register as a VM, allowing you to play pretty much any game with VM-detecting anticheat. I'm sure SolidWorks will work in VMs, because that's the main selling point of vGPU in the first place. If it doesn't work, make sure you install Hyper-V on the guest, enable nested virtualization on the host, and set the CPU model to 'host' or 'passthrough.'
Edit 1: Also make sure to install the QEMU guest drivers or it won't work. I will also test the 470.05 drivers and see if they bypass the checks.
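For reference, the CPU-flag side of hiding the hypervisor on QEMU/Proxmox usually looks something like the sketch below (consistent with the args lines quoted later in this thread; the hv_vendor_id value is an arbitrary 12-character placeholder):
Code: [Select]
# hide the KVM signature and present a non-KVM hypervisor vendor ID
-cpu 'host,kvm=off,hv_time,hv_vendor_id=whatever12ch'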
Edit 2: You probably don't even need to enable Hyper-V in the first place. With the CPU set to 'host' and nested VT enabled, my VM just works with no error 43 at all!
Quote1. A fake Quadro K5000 inside a Windows VM can be detected as a Tesla K10
I should probably have mentioned earlier that all this PCI ID spoof does is change the PCI ID and name of the GPU; it is still recognized as whatever the GPU actually is. That's why it's not a very effective method. My recommendation is to convert each chip on the K10 to a Quadro K5000 or a GRID K2 and pass it through as either a GRID K2 or a K5000, and hopefully all the display technologies, such as DirectX, can activate. Let me know what works for you and what doesn't.
GPU die straps for GPU#2 (power side)
nvflash.exe --index=0 -6 k10-1-5000.rom
args: -cpu 'host,hv_time,kvm=off' -device 'vfio-pci,host=06:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on,romfile=quadrok5000.rom,x-pci-vendor-id=0x10de,x-pci-device-id=0x11BA,x-pci-sub-vendor-id=0x10de,x-pci-sub-device-id=0x0965'
QuoteGPU die straps for GPU#2 (power side)
Personally, I own a GTX 660 Ti on which I have done the PCI strap mod and turned it into a GRID K2. But that did not unlock vGPU, so it was useless.
There is a video by YouTuber Craft Computing where he does the Tesla K10 to GRID K2 mod, and there are also instructions for it in this thread, posted a few years ago. Were you able to convert yours into a Quadro K5000? I've been thinking about doing some sort of mod to get better workstation performance... (potato server)
In the meantime, I am going to do some experimenting on AMD cards, because that seems like a pathway nobody is really exploring right now. Let me know if any of you are interested!
====SOLUTION BELOW====
The solution is still under development and has been tested only on GPU #1 (the "video port" side) of a Tesla K10. GPU #2 (the power-connector side) and a test of the new ROM are still required; however, my GPU #2 needs repairs, so I will only test it later in April when I get my components.
Disclaimer: I am not an assembly programmer nor a GPU designer (yet? still studying to become one). Whatever is written below was just my final, most dumb approach (what if it just works?).
0. GPU differences:
GPU --------- CUDA? --- 3D enabled? --- Video out? --- QEMU KVM?
K5000 ------- yes ------ yes ------------ yes ----------- yes
Tesla K10 --- yes ------ no ------------- no ------------ yes
Grid K2 ----- no ------- yes ------------ no ------------ no (RHEL KVM works)
Wow @dgusev, thank you so much for the writeup! Can't wait to try to get it working on the 660 Ti, perhaps.
As far as vGPU is concerned on K10s and GRID K2s, I'm thinking of building the Xen GRID 350 drivers for RHEL so that all of us Linux users can enjoy the GRID K2 on KVM! The drivers across the two platforms are very similar, the main difference being that Xen does not use VFIO-MDEV for mediated devices, which is what we need for RHEL/Linux KVM support. Because the Xen driver ships as an RPM, it will be pretty easy to work with on RHEL; I'm not sure about other operating systems, though. Because I don't own a GRID K2 or Tesla K10, I have no way of testing this and may have to resort to modding DualCoder's vGPU unlock script to support the GTX 660 Ti. There is also no NVENC or CUDA support on GRID K2 vGPU :( (not that I know of)
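Unpacking the Xen host RPM for inspection on RHEL is straightforward (a sketch; the package file name here is illustrative, not the real one):
Code: [Select]
# extract the RPM payload into the current directory without installing it
rpm2cpio NVIDIA-vGPU-xenserver-GRID-350.x86_64.rpm | cpio -idmv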
Ultimately, this method lets you get two K5000s without vGPU and other struggles, and pretty much everything is enabled, compared to the GRID K2, which has CUDA and NVENC disabled. I think this is probably the only solution for Kepler-based Tesla cards.
QuoteUltimately, this method lets you get two K5000s without vGPU and other struggles, and pretty much everything is enabled, compared to the GRID K2, which has CUDA and NVENC disabled. I think this is probably the only solution for Kepler-based Tesla cards.
I think your solution is awesome and it will definitely come in handy for many others. I should mention that this requires a KVM host for the PCI ID spoof, and unfortunately my server won't enable passthrough under KVM, so I have to use ESXi. Because of this, everyone who is not using a KVM hypervisor, or who is using the card as a bare-metal graphics device, will need to do the PCI ID strap resistors, something covered in great detail in this forum.
Yes, it is official now: GTX/RTX passthrough is enabled without modding from the R465 beta drivers onward.
QuoteYes, it is official now: GTX/RTX passthrough is enabled without modding from the R465 beta drivers onward.
Awesome! Personally I never had code 43 issues; GTX passes through fine and the driver doesn't care at all about the VM. :-DD
I think we'll definitely be looking at a more open future for consumer graphics cards. Also, for those of you working remotely with GeForce cards, Nvidia released a patch you can download called "nvidiaopenglrdp.exe" that enables OpenGL over Remote Desktop for CAD users working remotely. You'll need to create an Nvidia developer account to download it.
Edit: @dgusev, my P106 is broken, so I cannot try the 3D-enable mod the way you did with the Tesla K10. It may work... but for some reason the +5V line is shorted straight to GND. :palm:
Looks like a shorted capacitor.
I would love to see these cards running as MxGPU in VMware ESXi, KVM and Xen, and if I can help contribute to making that a possibility, please feel free to contact me or reply. Thanks!!
This sounds like an amazing project you guys are working on. Has anyone here considered trying to flash or convert an AMD FirePro S9150 series card? I have a few available and would be willing to dedicate them to the effort if anyone is interested.
A quick tip for anyone who's fighting the license:
1. Forget about FlexNet. It's not that easy to crack after all.
My suggestion: get a W7100, fit a bigger BIOS chip, flash the S7150 BIOS, and override the PCI ID in the guest OS to allow the AMD GPU driver to be installed.