Author Topic: [MOVED] Hacking NVidia Cards into their Professional Counterparts  (Read 1645229 times)


Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #675 on: November 06, 2013, 02:21:26 pm »
I have the card in my computer now with the cooler on it. When and if I take it apart again, I'll take a photo. But it's really just two very thin wires soldered to the Vcc and SCLK pins of the EEPROM, going to a 1206 33K resistor.

Oh, I see. So just short the EEPROM Vcc to SCLK with a 33KΩ resistor?
Won't this potentially upset other things?

Yes, just a resistor between Vcc and SCLK. Seeing as I'm the only one who has a working Tesla (except for memory size issues), I'm guessing that my mod does not change anything else. I would like to try it with a Titan, but that thing is expensive. :( That's why I'm waiting for johndoe to apply my mod and see if it works. Btw, did you get a Titan?

Yes, the Titan arrived last night. I was going to just remove the 3rd nibble resistor, but I like your approach of adding a resistor between Vcc and SCLK better. Unfortunately, I don't have any 1206 resistors lying around, so I had to order some 33K ones as per your findings. I had some non-SMD ones, but I couldn't use them as they wouldn't fit under the heatsink. The 1206s should arrive tomorrow. Will report back when I have some news.
 

Offline oguz286

  • Contributor
  • Posts: 39
  • Country: nl
  • Make, break, hack, tweak
    • GuzTech
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #676 on: November 06, 2013, 06:01:17 pm »
Nice! If it works out I'm getting a Titan too :) Then I can concentrate on 'Teslafying' the GTX780 completely and running CUDA on it without getting unknown errors when calling cudaMemcpy.
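
For anyone else chasing those unknown errors: check the return value of every CUDA call so you see which one actually fails, instead of having the error surface later on an unrelated cudaMemcpy. A minimal harness (standard CUDA runtime API, nothing card-specific):

Code: [Select]
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Report the first failing call with file/line instead of letting it
// surface later as an "unknown error" on some unrelated cudaMemcpy.
#define CUDA_CHECK(call)                                              \
    do {                                                              \
        cudaError_t err_ = (call);                                    \
        if (err_ != cudaSuccess) {                                    \
            fprintf(stderr, "%s:%d: %s -> %s\n", __FILE__, __LINE__,  \
                    #call, cudaGetErrorString(err_));                 \
            exit(1);                                                  \
        }                                                             \
    } while (0)

int main() {
    const size_t n = 1 << 20;
    float *host = (float *)calloc(n, sizeof(float));
    float *dev = nullptr;
    CUDA_CHECK(cudaMalloc(&dev, n * sizeof(float)));
    CUDA_CHECK(cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice));
    CUDA_CHECK(cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost));
    CUDA_CHECK(cudaFree(dev));
    free(host);
    puts("copies OK");
    return 0;
}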
 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #677 on: November 07, 2013, 12:13:33 am »
Nice! If it works out I'm getting a Titan too :) Then I can concentrate on 'Teslafying' the GTX780 completely and running CUDA on it without getting unknown errors when calling cudaMemcpy.

I must say I am really curious whether you will be able to figure out where and how the memory configuration is stored. If you look back at page 38 of this thread, you will find this post:
https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg292021/#msg292021
containing the hex diff between the 1.5GB and 3GB variants of a GTX580 BIOS of the same version number. Unless I made a huge mistake somewhere (or the BIOSes are mislabeled on TPU; I no longer have a GTX580 I could flash with those BIOSes to test), the memory difference should be encoded somewhere in those 10 lines.
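
If anyone wants to reproduce that diff against their own dumps, a plain byte-wise compare is all it takes. A quick sketch (the file names are whatever you saved from TPU):

Code: [Select]
// Byte-wise diff of two ROM dumps: prints the offset and both values
// for every byte that differs.
#include <cstdio>
#include <vector>

static std::vector<unsigned char> slurp(const char *path) {
    std::vector<unsigned char> buf;
    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); return buf; }
    int c;
    while ((c = fgetc(f)) != EOF) buf.push_back((unsigned char)c);
    fclose(f);
    return buf;
}

int main(int argc, char **argv) {
    if (argc != 3) { fprintf(stderr, "usage: %s rom1 rom2\n", argv[0]); return 1; }
    std::vector<unsigned char> a = slurp(argv[1]), b = slurp(argv[2]);
    size_t n = a.size() < b.size() ? a.size() : b.size();
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            printf("0x%06zx: %02x -> %02x\n", i, a[i], b[i]);
    if (a.size() != b.size())
        printf("sizes differ: %zu vs %zu bytes\n", a.size(), b.size());
    return 0;
}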
 

Offline oguz286

  • Contributor
  • Posts: 39
  • Country: nl
  • Make, break, hack, tweak
    • GuzTech
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #678 on: November 07, 2013, 08:46:30 am »
Nice! If it works out I'm getting a Titan too :) Then I can concentrate on 'Teslafying' the GTX780 completely and running CUDA on it without getting unknown errors when calling cudaMemcpy.

I must say I am really curious whether you will be able to figure out where and how the memory configuration is stored. If you look back at page 38 of this thread, you will find this post:
https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg292021/#msg292021
containing the hex diff between the 1.5GB and 3GB variants of a GTX580 BIOS of the same version number. Unless I made a huge mistake somewhere (or the BIOSes are mislabeled on TPU; I no longer have a GTX580 I could flash with those BIOSes to test), the memory difference should be encoded somewhere in those 10 lines.

Yeah, I already checked the diffs of many BIOSes, but the actual memory size is not stored literally in the BIOS. The memory type, configuration, clocks, etc. are stored in tables, and from those variables you can calculate the memory size. Another problem is that Tesla BIOSes are not exactly the same as GeForce BIOSes, so it's pretty difficult to find the bits that are responsible.
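
To illustrate what I mean by "calculate": the total is basically chip count times chip density, and the tables only give you the pieces. Illustrative numbers for a GTX580 below; the variable names are mine, not actual table fields:

Code: [Select]
// Rough sketch of the memory-size arithmetic. None of these values are
// stored as a single "total size" field in the BIOS.
#include <cstdio>

int main() {
    int bus_width_bits  = 384;  // GTX580
    int chip_width_bits = 32;   // GDDR5 devices are x32 (x16 in clamshell)
    int chip_density_mb = 128;  // 1 Gbit parts
    int ranks           = 1;    // 2 on clamshell boards, i.e. the 3GB variant

    int chips = (bus_width_bits / chip_width_bits) * ranks;
    printf("%d chips x %d MB = %d MB total\n",
           chips, chip_density_mb, chips * chip_density_mb);
    return 0;
}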
« Last Edit: November 07, 2013, 09:29:03 am by oguz286 »
 

Offline cloudscapes

  • Regular Contributor
  • *
  • Posts: 198
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #679 on: November 07, 2013, 04:12:54 pm »
Back in the GeForce 2 days, you could turn certain models into a Quadro 2, though in those cases it wasn't just a straight performance unlock. It was a tradeoff: far better CAD and wireframe performance, but games weren't as well optimized any more. It wasn't something a gamer would do to get a few extra FPS.

I have no idea if this tradeoff is still the case with new cards. It's been quite a while.
 

Offline KamranB

  • Newbie
  • Posts: 2
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #680 on: November 07, 2013, 04:14:28 pm »
Hello,

We faced the exact problem you mentioned in this thread and changed the resistors to turn the card into a Quadro, but it did not work for us. By the way, I see there are small differences between our board and the image you shared in the thread.
1) In the upper column you showed, there is a 25K resistor that should be removed and a 20K resistor mounted below it. OK, we did that. But on our board the second column on the right side is different: there is a resistor at the top of this row which is not on your board, and conversely there is a resistor below it on your board which is not present on ours.

2) We plugged the board in and got one long beep and three short beeps at startup, and it did not work.

I would be really thankful if you could help us with this problem.
 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #681 on: November 07, 2013, 05:04:52 pm »
Nice! If it works out I'm getting a Titan too :) Then I can concentrate on 'Teslafying' the GTX780 completely and running CUDA on it without getting unknown errors when calling cudaMemcpy.

I must say I am really curious whether you will be able to figure out where and how the memory configuration is stored. If you look back at page 38 of this thread, you will find this post:
https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg292021/#msg292021
containing the hex diff between the 1.5GB and 3GB variants of a GTX580 BIOS of the same version number. Unless I made a huge mistake somewhere (or the BIOSes are mislabeled on TPU; I no longer have a GTX580 I could flash with those BIOSes to test), the memory difference should be encoded somewhere in those 10 lines.

Yeah, I already checked the diffs of many BIOSes, but the actual memory size is not stored literally in the BIOS. The memory type, configuration, clocks, etc. are stored in tables, and from those variables you can calculate the memory size.

Can you elaborate on this? Which byte offsets in the GeForce BIOS contain the number of chips and their size?

Back in the GeForce 2 days, you could turn certain models into a Quadro 2, though in those cases it wasn't just a straight performance unlock. It was a tradeoff: far better CAD and wireframe performance, but games weren't as well optimized any more. It wasn't something a gamer would do to get a few extra FPS.

Not really the case any more. On GTS450 -> Quadro 2000 (GF106) there is a marginal improvement in some SPEC components (e.g. Maya gets a 40% boost, the rest remains the same), and I haven't noticed any gaming degradation. On GF100 (GTX470/GTX480) and later there is no performance difference in the SPEC benchmarks, but there is a memory I/O boost (potentially up to double) from enabling the bidirectional async DMA. From the GTX580 (Q7000) onward there is no difference in any aspect of performance that I have been able to observe. I have a GTX680 running a K5000 BIOS and there is no obvious performance difference in either SPEC or gaming benchmarks.

We faced the exact problem you mentioned in this thread and changed the resistors to turn the card into a Quadro, but it did not work for us. By the way, I see there are small differences between our board and the image you shared in the thread.
1) In the upper column you showed, there is a 25K resistor that should be removed and a 20K resistor mounted below it. OK, we did that. But on our board the second column on the right side is different: there is a resistor at the top of this row which is not on your board, and conversely there is a resistor below it on your board which is not present on ours.

2) We plugged the board in and got one long beep and three short beeps at startup, and it did not work.

Did you change just the 3rd nibble resistor pair or did you change the 4th one as well? I suggest you put the 4th nibble (lower pair in the photo) back as it was and soft-mod that part instead. For the 3rd nibble resistor, you can either leave it off and stabilize it with the soft-mod on the lowest bit of the nibble, or put in a resistor. With 25K or more, the 3rd nibble will go to 0xB.
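
For anyone lost in the nibble talk, this is the arithmetic as I picture it: the device ID is four nibbles, each nibble set by a resistor pair, and the soft-strap can override individual bits. The values below are made-up examples to show the mechanics, not a map of any particular card:

Code: [Select]
// Illustration only: hard straps set whole nibbles of the PCI device ID,
// the soft-strap overrides single bits. Example values are invented.
#include <cstdio>

int main() {
    unsigned id = 0x1080;  // example hard-strapped device ID

    // hardware mod: force the 3rd nibble from the left (bits 4-7)
    // to 0xB, as a >=25K resistor would
    id = (id & 0xFF0F) | (0xB << 4);
    printf("after 3rd-nibble change:   0x%04x\n", id);

    // soft-mod: flip the lowest bit of the 4th nibble (bits 0-3)
    id ^= 0x1;
    printf("after soft-strap bit flip: 0x%04x\n", id);
    return 0;
}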
 

Offline oguz286

  • Contributor
  • Posts: 39
  • Country: nl
  • Make, break, hack, tweak
    • GuzTech
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #682 on: November 07, 2013, 05:28:13 pm »
Nice! If it works out I'm getting a Titan too :) Then I can concentrate on 'Teslafying' the GTX780 completely and running CUDA on it without getting unknown errors when calling cudaMemcpy.

I must say I am really curious whether you will be able to figure out where and how the memory configuration is stored. If you look back at page 38 of this thread, you will find this post:
https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg292021/#msg292021
containing the hex diff between the 1.5GB and 3GB variants of a GTX580 BIOS of the same version number. Unless I made a huge mistake somewhere (or the BIOSes are mislabeled on TPU; I no longer have a GTX580 I could flash with those BIOSes to test), the memory difference should be encoded somewhere in those 10 lines.

Yeah, I already checked the diffs of many BIOSes, but the actual memory size is not stored literally in the BIOS. The memory type, configuration, clocks, etc. are stored in tables, and from those variables you can calculate the memory size.

Can you elaborate on this? Which byte offsets in the GeForce BIOS contain the number of chips and their size?

I don't know exactly where the bits are, but I'm in the process of going through the nouveau source, which hints that the memory size can be determined by reading a GPU hardware register. There are references to tables in the ROM that contain timings and memory type, but I haven't figured out the location yet.
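
If you want to poke at the hardware directly, this is the kind of check I mean. The register offsets (0x022438 = FB partition count, 0x10f20c = per-partition size in MiB) are from my reading of the nouveau nvc0 code, so verify them against the source before trusting the output. Needs root, and adjust the PCI address for your card:

Code: [Select]
// Hedged sketch: read the VRAM-size registers that nouveau's nvc0 code
// appears to use, by mapping BAR0 of the GPU. Offsets are assumptions
// taken from the nouveau source, not from any official doc.
#include <cstdio>
#include <cstdint>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    const char *bar0 = "/sys/bus/pci/devices/0000:01:00.0/resource0";
    int fd = open(bar0, O_RDONLY);
    if (fd < 0) { perror(bar0); return 1; }

    size_t len = 2 << 20;  // both registers live in the first 2MB of BAR0
    void *p = mmap(nullptr, len, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    volatile uint32_t *mmio = (volatile uint32_t *)p;

    uint32_t parts = mmio[0x022438 / 4];  // FB partition count (assumed)
    uint32_t bsize = mmio[0x10f20c / 4];  // MiB per partition (assumed)
    printf("partitions: %u, per-partition: %u MiB, total: %u MiB\n",
           parts, bsize, parts * bsize);

    munmap(p, len);
    close(fd);
    return 0;
}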
 

Offline ixfd64

  • Frequent Contributor
  • **
  • Posts: 345
  • Country: us
    • Facebook
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #683 on: November 07, 2013, 09:36:39 pm »
The GTX 780 Ti has been released: http://www.tomshardware.com/reviews/geforce-gtx-780-ti-review-benchmarks,3663.html

It's interesting to note that the double-precision GFLOPS rate has been artificially limited to 1/24 that of single precision. I wonder if this is something that can be "adjusted."

Offline mrkrad

  • Contributor
  • Posts: 37
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #684 on: November 07, 2013, 10:59:19 pm »
I thought the EEPROM held the memory configuration, as the docs stated. It is probably a combination of the chip layout and the defined size and speed/voltage rating.

My Quadro 6000 with dual DVI was fine with GeForce drivers too, but it was missing something the GTX470 had.

Funny thing is how the GeForce overclockers are running the same silicon at twice the voltage. If you let the FP units run at full speed, that would cook the cards! (Literally, the Quadro 6000 runs at half the voltage, across the board, that many overclockers are pushing through the GTX470/480.) Insane!

Note: the BAR restriction for VGX mode is imperative; only a few server motherboards have the option to keep the IOMMU mapping <4M.
Note: ECC mode will disable VGX mode!!
Note: avoid changing MSI-X or VGX will fail.
Note: RDP will disable VGX mode (Citrix).

Insane number of requirements to get VGX to actually work, instead of "looks like it is working" or "was working, now is not!". Buggy as heck!
 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #685 on: November 07, 2013, 11:53:05 pm »
Nice! If it works out I'm getting a Titan too :) Then I can concentrate on 'Teslafying' the GTX780 completely and running CUDA on it without getting unknown errors when calling cudaMemcpy.

I must say I am really curious whether you will be able to figure out where and how the memory configuration is stored. If you look back at page 38 of this thread, you will find this post:
https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg292021/#msg292021
containing the hex diff between the 1.5GB and 3GB variants of a GTX580 BIOS of the same version number. Unless I made a huge mistake somewhere (or the BIOSes are mislabeled on TPU; I no longer have a GTX580 I could flash with those BIOSes to test), the memory difference should be encoded somewhere in those 10 lines.

Yeah, I already checked the diffs of many BIOSes, but the actual memory size is not stored literally in the BIOS. The memory type, configuration, clocks, etc. are stored in tables, and from those variables you can calculate the memory size.

Can you elaborate on this? Which byte offsets in the GeForce BIOS contain the number of chips and their size?

I don't know exactly where the bits are, but I'm in the process of going through the nouveau source, which hints that the memory size can be determined by reading a GPU hardware register. There are references to tables in the ROM that contain timings and memory type, but I haven't figured out the location yet.

Please, do report back if you figure it out before I do.

The GTX 780 Ti has been released: http://www.tomshardware.com/reviews/geforce-gtx-780-ti-review-benchmarks,3663.html

It's interesting to note that the double-precision GFLOPS rate has been artificially limited to 1/24 that of single precision. I wonder if this is something that can be "adjusted."

Interesting, so Titan remains the only one with uncrippled DP FP.
It makes sense, I suppose - they had to sacrifice something to keep the GPU, with the extra few shaders enabled, from cooking itself at the gaming-grade clocks and voltages required.

Having said that - what about modding the 780Ti into a Titan? If Tom's Hardware is correct and it is due to the driver lowering the DP FP clock speed, then modding it into a Titan would work around this and give you the best of both worlds: extra shaders and full DP FP performance.
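
Either way, the cap is easy enough to measure before and after a mod. A crude sketch of my own (untested on a 780 Ti): it times a dependent chain of FMAs in float and then in double, so on a 1/24-rate card the double run should come out far slower, relative to the float run, than on a Titan with full DP enabled:

Code: [Select]
#include <cstdio>
#include <cuda_runtime.h>

// Dependent FMA chain: one multiply-add per iteration, so kernel run
// time tracks the FP throughput of the chosen precision.
template <typename T>
__global__ void fma_loop(T *out, T seed, int iters) {
    T a = seed + (T)threadIdx.x, b = seed * (T)0.5, c = seed;
    for (int i = 0; i < iters; i++)
        c = a * c + b;
    out[blockIdx.x * blockDim.x + threadIdx.x] = c;
}

template <typename T>
static float time_ms(int iters) {
    T *out;
    cudaMalloc(&out, 256 * 256 * sizeof(T));
    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);
    fma_loop<T><<<256, 256>>>(out, (T)1.001, iters);  // warm-up launch
    cudaEventRecord(t0);
    fma_loop<T><<<256, 256>>>(out, (T)1.001, iters);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms = 0;
    cudaEventElapsedTime(&ms, t0, t1);
    cudaEventDestroy(t0);
    cudaEventDestroy(t1);
    cudaFree(out);
    return ms;
}

int main() {
    const int iters = 1 << 20;
    float sp = time_ms<float>(iters);
    float dp = time_ms<double>(iters);
    printf("SP: %.1f ms, DP: %.1f ms, DP/SP time ratio: %.1fx\n",
           sp, dp, dp / sp);
    return 0;
}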

My Quadro 6000 with dual DVI was fine with GeForce drivers too, but it was missing something the GTX470 had.

Which is?

Funny thing is how the GeForce overclockers are running the same silicon at twice the voltage. If you let the FP units run at full speed, that would cook the cards! (Literally, the Quadro 6000 runs at half the voltage, across the board, that many overclockers are pushing through the GTX470/480.) Insane!

That's largely related to the fact that:
1) Gaming rigs are generally much better ventilated than pro-grade workstations
2) On pro-grade products, long-term reliability and noise are more important than maximum performance

Note: the BAR restriction for VGX mode is imperative; only a few server motherboards have the option to keep the IOMMU mapping <4M.
Note: ECC mode will disable VGX mode!!
Note: avoid changing MSI-X or VGX will fail.
Note: RDP will disable VGX mode (Citrix).

Insane number of requirements to get VGX to actually work, instead of "looks like it is working" or "was working, now is not!". Buggy as heck!

Interestingly, I'm having no luck with my GTX690 modified into a Grid K2. Works fine on bare metal, but Xen domU fails to initialize it. ESXi fails to initialize it to the point where nvidia-smi can't talk to it at all. But my GTX480 modified into a Quadro 6000 works absolutely fine on all counts. I'm just in the process of getting vSGA working with it. I'll return to give the 690 another try once I have everything working with the 480.

Edit: And I just discovered that VMware Horizon View doesn't work on XP64 and I may have to get something running Windows 7 for the client end to use it. Most annoying.
« Last Edit: November 08, 2013, 01:17:39 am by gordan »
 

Offline dredok

  • Newbie
  • Posts: 4
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #686 on: November 08, 2013, 09:16:29 am »


Not really the case any more. On GTS450 -> Quadro 2000 (GF106) there is a marginal improvement in some SPEC components (e.g. Maya gets a 40% boost, the rest remains the same), and I haven't noticed any gaming degradation. On GF100 (GTX470/GTX480) and later there is no performance difference in the SPEC benchmarks, but there is a memory I/O boost (potentially up to double) from enabling the bidirectional async DMA. From the GTX580 (Q7000) onward there is no difference in any aspect of performance that I have been able to observe. I have a GTX680 running a K5000 BIOS and there is no obvious performance difference in either SPEC or gaming benchmarks.


How do you enable the bidirectional async DMA? None of the reports I have seen from hacked cards, from the GTX480 to every other target, show the 2nd async DMA engine enabled.
 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #687 on: November 08, 2013, 05:36:01 pm »


Not really the case any more. On GTS450 -> Quadro 2000 (GF106) there is a marginal improvement in some SPEC components (e.g. Maya gets a 40% boost, the rest remains the same), and I haven't noticed any gaming degradation. On GF100 (GTX470/GTX480) and later there is no performance difference in the SPEC benchmarks, but there is a memory I/O boost (potentially up to double) from enabling the bidirectional async DMA. From the GTX580 (Q7000) onward there is no difference in any aspect of performance that I have been able to observe. I have a GTX680 running a K5000 BIOS and there is no obvious performance difference in either SPEC or gaming benchmarks.


How do you enable the bidirectional async DMA? None of the reports I have seen from hacked cards, from the GTX480 to every other target, show the 2nd async DMA engine enabled.

Straight strap device ID mod. Second DMA engine is driver controlled. See:
Virtualized Gaming: Nvidia Cards, Part 2: GeForce, Quadro and GeForce Modified into a Quadro - Higher End Fermi Models
It works for me on both GTX470 -> Q5000 and GTX480 -> Q6000.
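
A quick way to check whether the driver actually exposes the second engine on your card is the standard CUDA runtime API:

Code: [Select]
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int i = 0; i < n; i++) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        // asyncEngineCount: 0 = no async copy, 1 = one direction at a
        // time, 2 = concurrent bidirectional copy
        printf("GPU %d: %s, asyncEngineCount = %d\n",
               i, p.name, p.asyncEngineCount);
    }
    return 0;
}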
« Last Edit: May 24, 2020, 12:02:18 pm by gordan »
 

Offline mrkrad

  • Contributor
  • Posts: 37
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #688 on: November 08, 2013, 05:46:34 pm »
gordan, do you have a way to transmit a copy of your environment?

I can set up a nearly identical setup and we can test the differences between your 690 and my K2.

I bet if we can pinpoint the behavior difference we could figure this out.

I've got some tidbits that might allow deeper inspection of data that NiBiTor doesn't have access to.
 

Offline dredok

  • Newbie
  • Posts: 4
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #689 on: November 08, 2013, 06:14:16 pm »


Not really the case any more. On GTS450 -> Quadro 2000 (GF106) there is a marginal improvement in some SPEC components (e.g. Maya gets a 40% boost, the rest remains the same), and I haven't noticed any gaming degradation. On GF100 (GTX470/GTX480) and later there is no performance difference in the SPEC benchmarks, but there is a memory I/O boost (potentially up to double) from enabling the bidirectional async DMA. From the GTX580 (Q7000) onward there is no difference in any aspect of performance that I have been able to observe. I have a GTX680 running a K5000 BIOS and there is no obvious performance difference in either SPEC or gaming benchmarks.


How do you enable the bidirectional async DMA? None of the reports I have seen from hacked cards, from the GTX480 to every other target, show the 2nd async DMA engine enabled.

Straight strap device ID mod. Second DMA engine is driver controlled. See:
http://www.altechnative.net/2013/09/17/virtualized-gaming-nvidia-cards-part-2-geforce-quadro-and-geforce-modified-into-a-quadro-higher-end-fermi-models/
It works for me on both GTX470 -> Q5000 and GTX480 -> Q6000.

Umm, that is really interesting... so your Kepler mods have enabled the dual async engine? (Does a GTX 680 turned into a K5000 report 2 engines?)
 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #690 on: November 08, 2013, 07:09:20 pm »
gordan, do you have a way to transmit a copy of your environment?

I can set up a nearly identical setup and we can test the differences between your 690 and my K2.

I bet if we can pinpoint the behavior difference we could figure this out.

I've got some tidbits that might allow deeper inspection of data that NiBiTor doesn't have access to.

When you say "environment", what exactly are you referring to?
On my primary Xen virtualization rig, I am 99% sure the problem is the extra PCIe bridging on top of the problematic NF200 bridges.
I have not yet determined exactly what the problem is on ESXi - it could just be that the primary console was running on one of the halves of the GTX690, which was causing problems. I'll look into it again when I get around to re-fitting it. I'm currently in the middle of modding my Titan into a K6000. I'm not 100% sure, but I could swear it is now getting 10% more FPS in Furmark as a K6000 than it did as a Titan. But the real test is going to be checking how it handles virtualization.
 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #691 on: November 08, 2013, 07:10:57 pm »


Not really the case any more. On GTS450 -> Quadro 2000 (GF106) there is a marginal improvement in some SPEC components (e.g. Maya gets a 40% boost, the rest remains the same), and I haven't noticed any gaming degradation. On GF100 (GTX470/GTX480) and later there is no performance difference in the SPEC benchmarks, but there is a memory I/O boost (potentially up to double) from enabling the bidirectional async DMA. From the GTX580 (Q7000) onward there is no difference in any aspect of performance that I have been able to observe. I have a GTX680 running a K5000 BIOS and there is no obvious performance difference in either SPEC or gaming benchmarks.


How do you enable the bidirectional async DMA? None of the reports I have seen from hacked cards, from the GTX480 to every other target, show the 2nd async DMA engine enabled.

Straight strap device ID mod. Second DMA engine is driver controlled. See:
Virtualized Gaming: Nvidia Cards Part 2: GeForce, Quadro and GeForce Modified Into a Quadro - Higher End Fermi Models
It works for me on both GTX470 -> Q5000 and GTX480 -> Q6000.

Umm, that is really interesting... so your Kepler mods have enabled the dual async engine? (Does a GTX 680 turned into a K5000 report 2 engines?)

No, I don't think GPUs after the GF100 have dual async DMA engines in them - unless you have a real K5000 and can provide a CUDA-Z screenshot that shows otherwise?
Dual async DMA is a Fermi-only thing, AFAIK.
« Last Edit: May 24, 2020, 12:00:09 pm by gordan »
 

Offline dredok

  • Newbie
  • Posts: 4
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #692 on: November 08, 2013, 07:21:43 pm »
No, I don't think GPUs after the GF100 have dual async DMA engines in them - unless you have a real K5000 and can provide a CUDA-Z screenshot that shows otherwise?
Dual async DMA is a Fermi-only thing, AFAIK.

Actually yes, all Tesla cards have dual async DMA engines... It would be really interesting to turn a GeForce Titan or 780 into a K20 with both async engines enabled...

IDK about Quadro, but Teslas have dual async DMA engines.

EDIT:
From Nvidia: the K5000 has dual copy engines.
http://www.nvidia.com/object/quadro-k5000.html#pdpContent=1
« Last Edit: November 08, 2013, 07:24:39 pm by dredok »
 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #693 on: November 08, 2013, 11:43:47 pm »
No, I don't think GPUs after the GF100 have dual async DMA engines in them - unless you have a real K5000 and can provide a CUDA-Z screenshot that shows otherwise?
Dual async DMA is a Fermi-only thing, AFAIK.

Actually yes, all Tesla cards have dual async DMA engines... It would be really interesting to turn a GeForce Titan or 780 into a K20 with both async engines enabled...

IDK about Quadro, but Teslas have dual async DMA engines.

EDIT:
From Nvidia: the K5000 has dual copy engines.
http://www.nvidia.com/object/quadro-k5000.html#pdpContent=1

In that case they must have laser cut it out of the GPUs after GF100 - I haven't seen it enabled on GTX580 and later models after modification, including my GTX680 running the full K5000 BIOS.

On an unrelated note, my Titan/K6000 isn't working with VGA passthrough, at least with XP64. I'm going to give it a go with a Windows 7 domU.
Edit: Doesn't work on Win7 domU, either. Unless somebody has seen evidence to the contrary, it looks very much like GPUs above GTX680 won't virtualize even after modification. My GTX680/K5000 works, but GTX690 doesn't work in either K2 or K5000 modes, and Titan isn't working as a K6000 virtualized (both work fine on bare metal with the Quadro/Grid drivers). I wonder if the GTX770 works virtualized, given it is a straight relabel of a GTX680.

Can anyone confirm whether their modified GTX690/GTX780/Titan works with VGA passthrough?
« Last Edit: November 09, 2013, 02:58:01 am by gordan »
 

Offline FireDragon

  • Regular Contributor
  • *
  • Posts: 62
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #694 on: November 09, 2013, 11:29:59 am »
I am also interested in modding a 780 Ti into a Titan if that would allow the driver to change the DP setting. Since DP is initially disabled on the Titan and the change is done by the driver, it is possible that this would work with the 780 Ti once the driver thinks it has a Titan.

I would also like to get 12 7GHz GDDR5 chips and add them to the back of the board, if the chip spots are available, so that I can get 6GB. That might work with or without modding. I don't have the 780 Ti yet -- I'm waiting on the EVGA ACX OC version so that I can get binned parts, but hoping for the next month or so. I don't really care about the cooler; it will eventually be replaced with water cooling.

Thing is, I would be willing to pay for both of these features. But nobody is even talking about releasing either a 6GB card or unlocking the DP. I don't game, but I can use the DP for computation. I also want the 780 Ti because I will be upgrading to the ASUS 39" UHD monitor once they release it. Both computation and the UHD resolution make 3GB a little iffy.

 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #695 on: November 09, 2013, 12:18:11 pm »
I would also like to get 12 7GHz GDDR5 chips and add them to the back of the board, if the chip spots are available, so that I can get 6GB. That might work with or without modding. I don't have the 780 Ti yet -- I'm waiting on the EVGA ACX OC version so that I can get binned parts, but hoping for the next month or so. I don't really care about the cooler; it will eventually be replaced with water cooling.

For the amount that would cost you, you might as well get a Titan to begin with. In fact, for the number of cards you'll destroy soldering on the BGA chips manually, you might as well just get a K6000 outright.

Binning these days does next to nothing. Silicon manufacturing has gotten to the point where all chips will do the same speeds to within a few %, and those last few % are down to luck and generally not worth bothering with. Haven't you noticed that in the past decade if the clock range of a particular Intel CPU series was, say, 2.4GHz for the slowest model and 3.33 GHz for the fastest model, they will all do about 3.4GHz regardless of what they were sold as? Granted, Intel silicon is better than most, but it's not THAT much better.

Thing is, I would be willing to pay for both of these features. But nobody is even talking about releasing either a 6GB card or unlocking the DP. I don't game, but I can use the DP for computation. I also want the 780 Ti because I will be upgrading to the ASUS 39" UHD monitor once they release it. Both computation and the UHD resolution make 3GB a little iffy.

Sounds like what you really should be getting is a K6000. Full shader count of the 780Ti, full DP performance of the Titan, and 12GB of RAM. Yours for a mere £4K on eBay. Given I've not been able to get either my 690 or the Titan to work virtualized, I'm tempted to just trade them in for a pair of genuine K5000 cards, seeing as they are now going for around £600 on eBay.
 

Offline mrkrad

  • Contributor
  • Posts: 37
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #696 on: November 09, 2013, 01:48:14 pm »
I was going to say the same thing, Gordan, but I didn't want to miff your efforts. The K2 costs 1.5x the GTX690...

But I'm having fun; no money, no selling from me. I just want to find out the true nature of the secret sauce of Quadro/Grid/Tesla.

 It's fun!

 

Offline MrAMR

  • Newbie
  • Posts: 2
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #697 on: November 09, 2013, 02:50:03 pm »
Hi,

I'm new to this forum; last night I found this thread thanks to a video from Tek Syndicate on YouTube (regarding killing the new Apple computer). It said you can "hack"/alter your gaming GPU into a workstation card.

May I ask: how effective is doing it? Will it run CAD-related programs much faster? Or is it just to allow Linux to see it as a workstation card and use multiple monitors?
Will it really operate as a workstation card?

I need it for college, but I don't have the funds to buy a €600+ video card.

Thank you!
 

Offline oguz286

  • Contributor
  • Posts: 39
  • Country: nl
  • Make, break, hack, tweak
    • GuzTech
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #698 on: November 09, 2013, 03:26:00 pm »
I'm here to report that we (ijsf and I) correctly modified the memory size configuration, and that the card now runs just fine. Here are the obligatory screenshots:

 

Offline gordan

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: 00
    • Tech Articles
Re: [MOVED] Hacking NVidia Cards into their Professional Counterparts
« Reply #699 on: November 09, 2013, 03:39:52 pm »
I was going to say the same thing, Gordan, but I didn't want to miff your efforts. The K2 costs 1.5x the GTX690...
But I'm having fun; no money, no selling from me. I just want to find out the true nature of the secret sauce of Quadro/Grid/Tesla.
It's fun!

Indeed, my motivation is largely the same, but it also comes down to functionality/features. I need 2 GPUs for 2 VMs, each with monitor outputs. The Grid K2 has no monitor outputs, so it wouldn't have fit my use case. I also wanted to avoid having to use 3 slots instead of 1 on my system (the cards being 2 slots thick).

It said you can "hack"/alter your gaming GPU into a workstation card.
May I ask: how effective is doing it? Will it run CAD-related programs much faster? Or is it just to allow Linux to see it as a workstation card and use multiple monitors?
Will it really operate as a workstation card?
I need it for college, but I don't have the funds to buy a €600+ video card.

Read the whole thread. This has been covered at some length. Short version is that there is no huge improvement in professional applications and a Quadro 2000 will utterly annihilate the Titan in SPECviewperf. The Titan will equally annihilate the Quadro 2000 (essentially a GTS450 with a few extra GL primitives implemented in silicon or FPGA somewhere) in gaming. Pick what your main application is, and pick a GPU based on that.

I'm here to report that we (ijsf and I) correctly modified the memory size configuration, and that the card now runs just fine. Here are the obligatory screenshots:



oguz286, you are a legend! Any chance of a before/after BIOS hex diff? I'd rather like to try to flash a GTX480 with a Q6000 BIOS with RAM size adjusted appropriately and see what effect it has.

ION: I've been trying to figure out what it is that makes a modified GTX680 work with VGA passthrough and a GTX690 not work, despite the GPUs being exactly the same. So I've been trying to flash a GTX680 BIOS onto a 690, but there is a problem: the GTX690 has 3 devices on the same slot, and they have a specific hierarchy. A GTX680 has no hierarchy. The hierarchy is encoded in the BIOS and, thankfully, nvflash refuses to flash a BIOS with an incorrect hierarchy ID.

The question is: where is the hierarchy ID encoded in the BIOS? The two vBIOSes on the GTX690 are quite different, so it isn't obvious where one is set to the hierarchy ID of switch port 8 and the other to switch port 16.

I don't suppose anyone here knows?
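
One reminder for anyone hex-editing around that area in the meantime: after changing bytes, the PCI option-ROM checksum has to be fixed up, or the image will no longer validate (I believe nvflash checks it). A minimal fixer for the first image in the ROM, assuming the standard rule that all bytes of the image sum to 0 mod 256; note these ROMs carry more than one image, which this sketch ignores:

Code: [Select]
// Fix the checksum of the first PCI expansion ROM image in a dump by
// adjusting the image's last byte so all bytes sum to 0 mod 256.
#include <cstdio>
#include <vector>

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s rom\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb+");
    if (!f) { perror(argv[1]); return 1; }

    std::vector<unsigned char> buf;
    int c;
    while ((c = fgetc(f)) != EOF) buf.push_back((unsigned char)c);

    if (buf.size() < 3 || buf[0] != 0x55 || buf[1] != 0xAA) {
        fprintf(stderr, "no 55AA ROM signature\n"); return 1;
    }
    size_t len = (size_t)buf[2] * 512;  // image size in 512-byte blocks
    if (len == 0 || len > buf.size()) { fprintf(stderr, "bad image length\n"); return 1; }

    unsigned sum = 0;
    for (size_t i = 0; i + 1 < len; i++) sum += buf[i];
    unsigned char fix = (unsigned char)(0x100 - (sum & 0xFF));

    fseek(f, (long)(len - 1), SEEK_SET);
    fputc(fix, f);
    fclose(f);
    printf("checksum byte at 0x%zx set to 0x%02x\n", len - 1, fix);
    return 0;
}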
 

