I have the card in my computer now with the cooler on it. When and if I take it apart again, I'll take a photo. But it's really just two very thin wires soldered on the Vcc and the SCLK pins of the EEPROM going to a 1206 33K resistor.
Oh, I see. So just bridge the EEPROM Vcc to SCLK with a 33KΩ resistor?
Won't this potentially upset other things?
Yes, just a resistor between Vcc and SCLK. Seeing as I'm the only one who has a working Tesla (except for memory size issues), I'm guessing that my mod does not change anything else. I would like to try it with a Titan, but that thing is expensive. That's why I'm waiting for johndoe to apply my mod and see if it works. Btw, did you get a Titan?
Nice! If it works out I'm getting a Titan too. Then I can concentrate on 'Teslafying' the GTX780 completely and running CUDA on it without getting unknown errors when calling cudaMemcpy.
I must say I am really curious whether you will be able to figure out where and how the memory configuration is stored. If you look back on this thread at page 38, you will find this post:
https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg292021/#msg292021
containing the hex diff between 1.5GB and 3GB variants of a GTX580 BIOS of the same version number. Unless I made a huge mistake somewhere (or the BIOSes are mislabeled on TPU, I no longer have a GTX580 I could flash with those BIOSes to test), the memory difference should be encoded somewhere in those 10 lines.
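For anyone who wants to reproduce that diff locally, here is a minimal sketch in C that prints every differing byte between two same-size ROM dumps. The filenames are placeholders, and cmp -l gives you much the same output:

[code]
/* Minimal ROM diff: prints offset, old byte, new byte for every
 * mismatch between two equally sized BIOS dumps. */
#include <stdio.h>

int main(void) {
    FILE *a = fopen("gtx580_1536.rom", "rb");   /* placeholder names */
    FILE *b = fopen("gtx580_3072.rom", "rb");
    if (!a || !b) { perror("fopen"); return 1; }

    int ca, cb;
    long off = 0;
    while ((ca = fgetc(a)) != EOF && (cb = fgetc(b)) != EOF) {
        if (ca != cb)
            printf("0x%06lx: %02x -> %02x\n", off, ca, cb);
        off++;
    }
    fclose(a);
    fclose(b);
    return 0;
}
[/code]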
Yeah, I already checked the diffs of many BIOSes, but the actual size of the memory is not stored literally in the BIOS. The memory type, configuration, clocks, etc. are stored as a table in the BIOS, and from these variables you can calculate what the memory size is.
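To illustrate the kind of calculation involved, take the GTX580 case above: a 384-bit bus means 12 chips at 32 bits each, so the 1.5GB and 3GB variants plausibly differ only in the per-chip density encoded in that table. A quick sanity check:

[code]
#include <stdio.h>

int main(void) {
    int chips = 384 / 32;   /* GTX580: 384-bit bus / 32-bit chips = 12 */
    /* A 1Gbit chip holds 128MB, a 2Gbit chip 256MB: */
    printf("12 x 1Gbit = %d MB\n", chips * 1024 / 8);   /* 1536 MB */
    printf("12 x 2Gbit = %d MB\n", chips * 2048 / 8);   /* 3072 MB */
    return 0;
}
[/code]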
Back in the GeForce 2 days, you could turn certain models into a Quadro 2, though in those cases it wasn't just a straight performance unlock. It was a tradeoff. Something like far better CAD and wireframe performance, but games weren't so well optimized anymore. It wasn't something a gamer would do to get a few extra FPS.
We faced the exact problem that you mentioned in the forum and changed the resistors to get a Quadro graphics card, but it did not work for us. By the way, I see that there are small differences between our board and the image you shared in the forum.
1) In the upper column that you showed, there is a 25K resistor that should be removed and a 20K resistor that should be mounted below it; we did that. But on our board the second right-side column is different: there is a resistor at the top of this row
which is not on your board, and conversely there is a resistor below it on your board which is not present on ours.
2) We plugged in the board and there was one long beep and three short beeps at Windows startup, and it did not work.
Can you elaborate on this? Which byte offsets in the GeForce BIOS contain the number of chips and their size?
I don't know exactly where the bits are, but I'm in the process of going through the nouveau source, which hints that the memory size can be determined by reading a GPU hardware register. There are references to tables in the ROM that contain timings and memory type, but I haven't figured out the location yet.
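For reference, here is a rough user-space sketch of the kind of register read I mean, mapping the GPU's BAR0 through sysfs. The register offsets (0x022438 = partition count, 0x022554 = partition disable mask, 0x10f20c = MiB per partition) are my reading of nouveau's Fermi memory code and should be treated as assumptions, as should the example PCI address. Don't run this while a driver is actively using the card:

[code]
/* Sketch: read the Fermi VRAM-size registers via a mapped BAR0.
 * Offsets are assumptions taken from my reading of nouveau (nvc0);
 * the PCI address is an example. Needs root. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <sys/mman.h>

int main(void) {
    int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0", O_RDONLY);
    if (fd < 0) { perror("open BAR0"); return 1; }

    volatile uint32_t *mmio =
        mmap(NULL, 16 << 20, PROT_READ, MAP_SHARED, fd, 0);
    if (mmio == MAP_FAILED) { perror("mmap"); return 1; }

    uint32_t fbps = mmio[0x022438 / 4];   /* number of FB partitions */
    uint32_t mask = mmio[0x022554 / 4];   /* disabled-partition mask */
    uint32_t mib  = mmio[0x10f20c / 4];   /* MiB per partition       */
    printf("FBPs=%u mask=0x%x MiB/FBP=%u => ~%u MiB total\n",
           fbps, mask, mib, (fbps - __builtin_popcount(mask)) * mib);
    return 0;
}
[/code]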
The GTX 780 Ti has been released: http://www.tomshardware.com/reviews/geforce-gtx-780-ti-review-benchmarks,3663.html
It's interesting to note that the double-precision rate has been artificially limited to 1/24 of single precision. I wonder if this is something that can be "adjusted."
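Rough numbers, taking the published 2880 cores at the 875MHz base clock (and 1/3 as GK110's native DP rate, going by the Titan):

[code]
#include <stdio.h>

int main(void) {
    /* 780 Ti: 2880 cores x 2 FLOPs/clock x 0.875 GHz base clock */
    double sp = 2880 * 2 * 0.875;
    printf("SP:        %4.0f GFLOPS\n", sp);       /* ~5040 */
    printf("DP (1/24): %4.0f GFLOPS\n", sp / 24);  /* ~210  */
    printf("DP (1/3):  %4.0f GFLOPS\n", sp / 3);   /* ~1680 if unlocked */
    return 0;
}
[/code]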
My Quadro 6000 with dual DVI was fine with GeForce drivers too, but it was missing something the GTX470 had.
The funny thing is how GeForce overclockers are running the same Fermi silicon at twice the voltage. If you let the FP units run at full speed, that would cook the cards! (The Quadro 6000 literally runs at half the voltage, across the board, compared to what many overclockers are pushing through the GTX470/480.) Insane!
Note: the BAR restriction for VGX mode is imperative; only a few server mobos have the option to keep the IOMMU mapping <4M.
Note: ECC mode will disable VGX mode!
Note: avoid changing MSI-X or VGX will fail.
Note: RDP will disable VGX mode (Citrix).
An insane number of requirements to get VGX to actually work, instead of "looks like it is working" or "was working, now it is not!". Buggy as heck!
Not really the case any more. On GTS450 -> Quadro 2000 (GF106) there is a marginal improvement in some SPEC components (e.g. Maya gets a 40% boost, the rest remains the same), and I hadn't noticed any gaming degradation. On GF100 (GTX470/GTX480) and later there is no performance difference in the SPEC benchmarks, but there is a memory I/O boost (potentially up to double) from enabling the bidirectional async DMA. From the GTX580 (Q7000) onward there is no difference in any aspect of performance that I have been able to observe. I have a GTX680 running a K5000 BIOS and there is no obvious performance difference in either SPEC or gaming benchmarks.
How do you enable the bidirectional async DMA? None of the reports I have seen from hacked cards show the 2nd async DMA engine enabled... ranging from the GTX 480 to every single target I have seen.
Straight strap device ID mod. Second DMA engine is driver controlled. See:
http://www.altechnative.net/2013/09/17/virtualized-gaming-nvidia-cards-part-2-geforce-quadro-and-geforce-modified-into-a-quadro-higher-end-fermi-models/
It works for me on both GTX470 -> Q5000 and GTX480 -> Q6000.
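If anyone wants to check a modded card from code rather than from a CUDA-Z screenshot, the CUDA runtime reports the copy-engine count directly (build with nvcc, or link against -lcudart):

[code]
/* asyncEngineCount is 2 when bidirectional async DMA is available,
 * 1 on a stock GeForce. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess) return 1;
    for (int i = 0; i < n; i++) {
        struct cudaDeviceProp p;
        if (cudaGetDeviceProperties(&p, i) != cudaSuccess) continue;
        printf("%s: asyncEngineCount = %d\n", p.name, p.asyncEngineCount);
    }
    return 0;
}
[/code]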
gordan, do you have a way to transmit a copy of your environment?
I can setup a nearly identical setup and we can test the differences between your 690 and my K2.
I bet if we can pinpoint the behavior difference we could figure this out.
I've got some tidbits that might allow deeper inspection of data that NiBiTor doesn't have access to.
Umm, it is really interesting... so your Kepler mods have enabled the dual async engine? (Does a GTX 680 turned into a K5000 report 2 engines?)
No, I don't think GPUs after the GF100 have dual async DMA engines in them - unless you have a real K5000 and can provide a CUDA-Z screenshot that shows otherwise?
Dual async DMA is a Fermi-only thing, AFAIK.
Actually yes, all Tesla cards have dual async DMA engines... It would be really interesting to turn a GeForce Titan or 780 into a K20 with the dual async engines enabled...
IDK about Quadro, but Tesla cards have dual async DMA engines.
EDIT:
From NVIDIA's page, the K5000 has dual copy engines:
http://www.nvidia.com/object/quadro-k5000.html#pdpContent=1
I would also like to get 12 7GHz GDDR5 chips and add them to the back of the board, if the chip spots are available, so that I can get 6GB. That might work with or without modding. I don't have the 780 Ti yet -- waiting on the EVGA ACX OC version so that I can get binned parts, but hoping for the next month or so. I don't really care about the cooler; that will eventually be replaced with water cooling.
Thing is, I would be willing to pay for both of those features, but nobody is even talking about releasing either a 6GB card or unlocking the DP. I don't game, but I can use the DP for computation. I also want the 780 Ti because I will be upgrading to the ASUS 39" UHD monitor once they release it. Both computation and the UHD resolution make 3GB a little iffy.
I was going to say the same thing, Gordan, but I didn't want to miff your efforts. The K2 costs 1.5x the GTX690..
But I'm having fun; no money, no selling from me. I just want to find out the true nature of the secret sauce of Quadro/GRID/Tesla.
It's fun!
It was said you can "hack"/alter your "gaming" GPU into a "workstation" card.
May I ask: what is the effectiveness of doing it? Will it run CAD-related programs much faster? Or is it just to allow Linux to see it as a workstation card and use multiple monitors?
Will it really operate as a workstation card?
I need it for college, but I don't have the funds to buy a €600+ video card.
I'm here to report that we (ijsf and I) correctly modified the memory size configuration, and that the card now runs just fine. Here are the obligatory screenshots: