I would also like to get 12 7GHz GDDR5 chips and add them to the back of the board, if the chip spots are available, so that I can get 6GB. That might work with or without modding. I don't have the 780 Ti yet -- I'm waiting on the EVGA ACX OC version so that I can get binned parts, but I'm hoping for the next month or so. Some people might only overclock to 1100 while others manage over 1300. I don't really care about the ACX cooler; that will eventually be replaced with water cooling.
For what that would cost you, you might as well get a Titan to begin with. In fact, for the number of cards you'll destroy soldering on the BGA chips manually, you might as well just get a K6000 outright.
Binning these days does next to nothing. Silicon manufacturing has gotten to the point where all chips will do the same speeds to within a few percent, and those last few percent are down to luck and generally not worth bothering with. Haven't you noticed that in the past decade, if the clock range of a particular Intel CPU series was, say, 2.4GHz for the slowest model and 3.33GHz for the fastest model, they will all do about 3.4GHz regardless of what they were sold as? Granted, Intel silicon is better than most, but it's not THAT much better.

Thing is, I would be willing to pay for both of those features. But nobody is even talking about releasing either a 6GB card or unlocking the DP. I don't game, but I can use the DP for computation. I also want the 780 Ti because I will be upgrading to the ASUS 39" UHD monitor once they release it. Both computation and the UHD resolution make 3GB a little iffy.
Sounds like what you really should be getting is a K6000: the full shader count of the 780 Ti, the full DP performance of the Titan, and 12GB of RAM. Yours for a mere £4K on eBay. Given that I've not been able to get either my 690 or the Titan to work virtualized, I'm tempted to just trade them in for a pair of genuine K5000 cards, seeing as they are now going for around £600 on eBay.
You may be right about the cost of the chips. That will figure into my decision. I was assuming the cost would be only slightly unreasonable -- perhaps 1/4 the cost of the 780 Ti itself. These are not cutting-edge chips (anymore).
As far as BGA soldering is concerned, I haven't done that. However, wasn't there someone here who has BGA equipment? Perhaps I could make a deal with him (or someone else) to actually do the soldering. The amount of memory is usually detected from what is physically present, and of course the BIOS can be tweaked to be consistent, so I assume that a memory mod would work.
You're right about binning -- until you overclock. Then the differences between chips show up. I am currently running a 3770K on stock air at 4.4GHz. It was not stable at 4.6, but almost; I possibly could do 4.5 but haven't pushed it. Some people have gotten as high as 4.7 and been stable. My understanding is that EVGA bins the ACX OC parts so that at each voltage level the chip does just a little bit better in speed and power consumption. If you want to OC it, that can make a difference. Even if you don't, you can run a bit cooler.
I'm willing to pay for the cost of the card -- but not a factor of six! You would be looking at no more than a 1/14 increase over the Titan, and it probably would NOT clock anywhere near as high. So, if the Titan is 1000, then 1200 or so is reasonable. Perhaps 1500 for a high-speed board.
It might be a completely moot point. Changing the ID might not allow the DP to be unlocked. We don't know whether the switch to turn it on is done by fuses, or is simply made unavailable unless the card is a Titan. If, as seems likely from previous posts, the ID change can be done via soft strapping (since probably only nibble four is involved, and the only board changes appear to be in the power distribution area), then it will be a cheap test.
All of the other attempts to turn on DP have not involved changing the ID to a Titan, which has a software switch, but rather to something else where DP is always turned on.
Can't do anything though until people start getting cards.
If fun is the primary motivation, that's great, but otherwise you need to factor in the cost of the man-hours that are going to go into all of this. Combine that with the outcome being far from certain, and the economics start to look very questionable.
My view of OC is that there are a lot of "fishermen's tales" and that what most people consider "stable" isn't actually all that stable. If you can run each of the OCCT tests and the multi-threaded tmpfs file hash tests on linux for 24 hours each without any errors, then I may be a little more convinced. Most of the time when people have claimed stability I have been able to shake the machine loose in under 10 minutes.
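For reference, the "multi-threaded tmpfs file hash test" can be approximated with a short script. This is only a minimal sketch of the idea (not the exact test referred to above, and the parameters are illustrative): hash the same RAM-resident file from several threads and flag any digest that disagrees with the reference.

```python
import hashlib
import os
import tempfile
import threading

def hash_stress(size_mb=64, threads=4, rounds=8):
    """Hash one RAM-resident file repeatedly from several threads.
    Any digest that disagrees with the reference suggests instability."""
    # Prefer tmpfs (/dev/shm) so reads exercise RAM rather than disk.
    tmpdir = "/dev/shm" if os.path.isdir("/dev/shm") else None
    with tempfile.NamedTemporaryFile(dir=tmpdir, delete=False) as f:
        f.write(os.urandom(size_mb * 1024 * 1024))
        path = f.name
    try:
        with open(path, "rb") as f:
            reference = hashlib.sha256(f.read()).hexdigest()
        mismatches = []

        def worker():
            for _ in range(rounds):
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
                if digest != reference:
                    mismatches.append(digest)

        pool = [threading.Thread(target=worker) for _ in range(threads)]
        for t in pool:
            t.start()
        for t in pool:
            t.join()
        return len(mismatches)  # 0 on a stable machine
    finally:
        os.unlink(path)
```

For a real stability run you would loop this for 24 hours with a working set larger than the CPU caches, alongside the OCCT tests.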
The whole notion of DP boost by changing the 780's ID to Titan stems from the part in Tom's Hardware review that suggests that the DP shader clock speed is driver controlled.
Nothing about my current build is cost effective. If I include the cost of man-hours, then the total cost is astronomical. But it has also been an excellent project for learning things. Extreme -- yes. The disk storage system alone is nearly 5K (2K in SSDs, another 1.5K in RAID controller / expanders, and the rest in hard drives and optical drives). The water cooling is already over 2K and going up. And none of that includes the tools I have purchased to do the build. So, while it will be a system that I actually use, intended for long-term use and easy changes, it is also intended to be the system I want without really counting the cost (which doesn't mean I don't consider tradeoffs or have budget limitations -- funding is via overtime).
For 4.6GHz I was able to run Prime95 for several hours before it failed. I was going for a full day.
I understand where the DP boost idea comes from. But on a Titan it can clearly be changed in software using the control panel. No one (that I know of) has tracked that all the way down to the hardware/BIOS, but clearly on the Titan the block is not via fuses. It is far from certain, but if the boost option is simply turned off when the card is not a Titan, it is possible that no fuses are involved on the 780 Ti parts either. They appear to be using the same chips. It is certainly worth a try, especially since it costs nothing to try.
So just save up for a K6000 and save yourself the disappointment.
Prime95 is a completely useless test of stability. I have seen Prime95 and various Pi calculators run for days without error, only for OCCT or hash calculations to return an error in under 30 seconds. If anything, what you said confirms my view of fishermen's tales.
As long as you are happy with the assumed outcome of no DP improvement, that's fine. There are several things to try, including changing the device ID and flashing the Titan BIOS onto the card. But that doesn't mean it'll succeed. I have a 4GB GTX680 running a K5000 BIOS, and there is no obvious performance benefit in any test over a standard GTX680 - the only advantage is in persuading the drivers to allow VGA passthrough operation - and even that doesn't seem to work on models more recent than the GTX680.
I am interested in alternate ways of determining stability; Prime95 appears to be what most overclockers use to validate a clock.
I want to increase the clock rate, but not at the expense of stability or correctness. I do not run Linux; I am currently using Windows 7 64-bit. I would appreciate any suggestions you have for validating a clock as stable.
I am not a dedicated overclocker, but once I am on water cooling I would like to see 5GHz on my 3770K on a 24/7 basis.
Moderate overclocking is not really overclocking at all, because Intel drastically underspecifies the performance of its CPUs. It isn't until you push past about 4.5GHz that you are really in overclocking territory.
Why do you appear to be so against the idea? I would think that it would be of interest to at least some of the participants in this forum, and it is certainly harmless. It would also resolve an open question about the Titan vs the 780 / 780 Ti.
Does the K5000-modified GTX680 allow 30-bit color on the DisplayPort output like a real K5000?
Nice! If it works out, I'm getting a Titan too. Then I can concentrate on 'Teslafying' the GTX780 completely and running CUDA on it without getting unknown errors when calling cudaMemcpy.
I must say I am really curious whether you will be able to figure out where and how the memory configuration is stored. If you look back at page 38 of this thread, you will find this post:
https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg292021/#msg292021
containing the hex diff between the 1.5GB and 3GB variants of a GTX580 BIOS of the same version number. Unless I made a huge mistake somewhere (or the BIOSes are mislabeled on TPU -- I no longer have a GTX580 I could flash with them to test), the memory difference should be encoded somewhere in those 10 lines.
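For anyone wanting to reproduce that comparison, a byte-level diff of two same-sized BIOS dumps is straightforward. A minimal sketch (the file paths are hypothetical):

```python
def diff_roms(path_a, path_b, width=16):
    """Return (offset, hex_a, hex_b) for each xxd-style row where two
    BIOS dumps differ, mirroring the kind of diff linked above."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        a, b = fa.read(), fb.read()
    rows = []
    for off in range(0, min(len(a), len(b)), width):
        ra, rb = a[off:off + width], b[off:off + width]
        if ra != rb:
            rows.append((off, ra.hex(), rb.hex()))
    return rows
```

Running it on the two GTX580 dumps should reproduce the 10 changed rows from the linked post.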
Yeah, I already checked the diffs of many BIOSes, but the actual memory size is not stored literally in the BIOS. The memory type, the configuration, the clocks, etc. are stored as a table in the BIOS, and from these variables you can calculate what the memory size is.
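The arithmetic itself is simple once the table gives you a chip count and density; the hard part is locating those fields. A sketch of the calculation only (the function and its inputs are illustrative, not actual BIOS fields):

```python
def memory_size_mb(chips, density_gbit):
    """Total memory in MiB: chips x density (Gbit) x 1024 / 8 bits per byte."""
    return chips * density_gbit * 1024 // 8

# A 384-bit bus with 32-bit GDDR5 chips means 12 chips per side:
print(memory_size_mb(12, 2))  # 3072 -> the stock 780 Ti's 3GB
print(memory_size_mb(24, 2))  # 6144 -> the hoped-for 6GB clamshell layout
```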
Can you elaborate on this? What byte offsets in the GeForce BIOS contain the number of chips and their size?

Back in the GeForce 2 days, you could turn certain models into a Quadro 2, though in those cases it wasn't just a straight performance unlock. It was a tradeoff -- something like far better CAD and wireframe performance, but games weren't as well optimized anymore. It wasn't something a gamer would do to get a few extra FPS.
Not really the case any more. On GTS450 -> Quadro 2000 (GF106) there is a marginal improvement in some SPEC components (e.g. Maya gets a 40% boost, the rest remains the same), and I haven't noticed any gaming degradation. On GF100 (GTX470/GTX480) and later there is no performance difference in the SPEC benchmarks, but there is a memory I/O boost (potentially up to double) from enabling the bidirectional async DMA. From the GTX580 (Q7000) onward there is no difference in any aspect of performance that I have been able to observe. I have a GTX680 running a K5000 BIOS and there is no obvious performance difference in either SPEC or gaming benchmarks.

We faced the exact problem that you mentioned in the forum and changed the resistors in a way that should give a Quadro graphics card, but it did not work for us. By the way, I see that there are small differences between our board and the image you shared in the forum.
1) In the upper column that you showed, there is a 25K resistor that should be removed and a 20K resistor that should be mounted below it. OK, we did that. But on our board the second column on the right side is different: there is a resistor at the top of that row which is not on your board, and conversely there is a resistor below it on your board which is not present on ours.
2) We plugged the board in and there was one long beep and three short beeps on Windows startup. It did not work.
Did you change just the 3rd-nibble resistor pair, or did you change the 4th one as well? I suggest you put the 4th nibble (the lower pair in the photo) back as it was and soft-mod that part instead. For the 3rd-nibble resistor, you can either leave it off and stabilize with the soft-mod on the lowest bit of the nibble, or put in a resistor. With 25K or more, the 3rd nibble will go to 0xB.
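To make the nibble talk concrete: each resistor pair straps one hex digit of the PCI device ID. A toy sketch of the arithmetic only, counting nibbles from the most significant end as in the posts above (the GTX 680 0x1180 -> K5000 0x11BA ID pair is the example from earlier in the thread; the resistor values themselves are board-specific):

```python
def set_nibble(device_id, n, value):
    """Set the n-th hex nibble (1 = most significant) of a 16-bit device ID."""
    shift = 4 * (4 - n)
    return (device_id & ~(0xF << shift)) | ((value & 0xF) << shift)

# GTX 680 (0x1180) -> K5000 (0x11BA): 3rd nibble to 0xB, 4th nibble to 0xA
modded = set_nibble(set_nibble(0x1180, 3, 0xB), 4, 0xA)
print(hex(modded))  # 0x11ba
```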
1) Sorry, I don't know exactly which resistors determine which nibbles, but I'm sure I did the same thing that was proposed in the image, as highlighted by the red rectangles: removed the 25K at the top right, added a 20K below the removed one, and changed the top resistor of the fourth column from 5K to 15K. Could you please show in the image what your suggestion is?
2) Also, could you please tell me more about the soft-mod? Is that some sort of firmware update for the graphics card? What do you mean by "stabilize with the soft-mod"?
So I read almost halfway through the thread, then searched the rest, and I still have some questions. I currently have two GTX 660s (non-Ti) that I was hoping to convert to K4000s. I would simply like to do this for the virtualization benefits. I currently have 32GB of RAM in my desktop, and it's killing me not being able to use all of it. I was thinking of implementing a Xen setup, but I think I would need the GTX 660's professional counterpart, which I believe is the K4000. Early on in this thread -- https://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/msg203239/#msg203239 -- someone posted potential positions of the resistors, but I didn't see anything come of it. So my question is: has anyone successfully modified a GTX 660 into a K4000? Or could you point me towards how I would figure this out myself?
What is WGL_NV_gpu_affinity supposed to do?
Did you mod just the device ID, or did you flash the Grid BIOS onto the card?
It's needed for multi-GPU rendering, to select the right render GPU.
I modded the device ID only.
I'm not able to test it because I have only one video card in each PC.
2529,2532c2529,2532
< 0009e00: 0000 90cf 0000 90cf 0000 90cf 0000 96cf ................
< 0009e10: 0200 96cf 0200 9055 0200 9055 0200 9044 .......U...U...D
< 0009e20: 0200 9044 0200 a055 0200 a055 0200 a055 ...D...U...U...U
< 0009e30: 0200 a055 026e 0402 1100 ffff ffff 0000 ...U.n..........
---
> 0009e00: 0000 90cf 0000 90cf 0000 90cf 0000 968f ................
> 0009e10: 0200 908f 0200 9055 0200 9055 0200 9044 .......U...U...D
> 0009e20: 0200 9044 0200 a055 0200 a055 0200 9055 ...D...U...U...U
> 0009e30: 0200 9055 026e 0402 1100 ffff ffff 0000 ...U.n..........
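If anyone wants to apply changes like those above to their own dump, the patching side is trivial. A sketch, using the first changed byte from the diff (offset 0x9E0F, 0xcf -> 0x8f) purely as an example:

```python
def patch_rom(data, patches):
    """Apply (offset, new_bytes) pairs to a BIOS image and return a copy."""
    buf = bytearray(data)
    for off, new in patches:
        buf[off:off + len(new)] = new
    return bytes(buf)

# Toy all-zero image just to demonstrate; real use would read the actual ROM.
rom = bytes(0xA000)
patched = patch_rom(rom, [(0x9E0F, b"\x8f")])
print(hex(patched[0x9E0F]))  # 0x8f
```

Flashing the result back is a separate (and riskier) step, of course.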
Has anybody tried NVENC functionality on a modded 680 -> K2? That is a very big part of the GRID/Quadro card feature set.
oguz286, any chance of a how-to write-up on the subject of memory size modding?
I got my hands on a K6000 BIOS from here:
http://forum.techinferno.com/nvidia-video-cards/5022-%5Breq%5D-raise-maximum-power-target-quadro-k6000-bios-attached.html
and I want to try it on my Titan to see if VGA passthrough starts working, but I need to halve the memory size first.
Also, do you have a download link for a CUDA binary you want me to test on my Titan with a K20c BIOS? Or is that no longer of interest?
Also, regarding your article on the website about PCIe 2.0 vs. 3.0: are you saying that flashing a K20c BIOS onto the 780 makes it go into PCIe 2.0 mode? How can you tell? If you are judging this by what GPU-Z says, I wouldn't trust it too much -- my Titan shows up as PCIe 1.1 on a 2.0 motherboard, while my GTX690 shows up as PCIe 3.0 even though the motherboard most definitely cannot do more than PCIe 2.0.
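Rather than trusting GPU-Z, on Linux the negotiated link can be read straight from sysfs. A sketch using the standard kernel attribute names (availability depends on kernel version, and non-PCIe devices simply get skipped):

```python
import glob
import os

def pcie_links():
    """Map PCI address -> (negotiated speed, width) from Linux sysfs.
    2.5 GT/s = PCIe 1.x, 5 GT/s = PCIe 2.0, 8 GT/s = PCIe 3.0."""
    links = {}
    for dev in glob.glob("/sys/bus/pci/devices/*"):
        try:
            with open(os.path.join(dev, "current_link_speed")) as f:
                speed = f.read().strip()
            with open(os.path.join(dev, "current_link_width")) as f:
                width = f.read().strip()
        except OSError:
            continue  # device has no PCIe link capability
        links[os.path.basename(dev)] = (speed, "x" + width)
    return links

for addr, link in pcie_links().items():
    print(addr, link)
```

Note that cards drop to a lower link speed at idle, so check while the GPU is under load.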
Bahaha, I posted that Quadro K6000 BIOS there a while ago... which reminds me, I'm going to send a message to svl7 to see if he can do more tweaks to remove the power-target throttling past 225W that I've experienced. I'm also curious about the PCIe 2.0 vs 3.0 speeds on a GPU modded to a K20c BIOS. If PCIe 3.0 is possible, the only thing missing would be the dual DMA engines.
Can someone help me? I am trying to make a K5000 from my MSI GTX 660 Ti, but I don't know where to find the resistors or what I need to change them to.
Thanks!