Processors are basically a large lump of FETs. Leaving that oversimplification aside and moving on quickly...
Modern gaming chips are typically power-limited from the factory. They have become so thermally efficient, and the coolers so good, that the bottleneck is now the cost of the voltage regulators, PCB and power supply needed to feed them.
So there exist GPU ranges, for example, where three or four models all use the exact same chip and the exact same memory. The only differences are the supporting hardware (VRMs, cooler and the PCB behind them), the factory power limit and, of course, the price.
Out of the box they run with a completely dynamic clock and voltage, following a (seemingly self-learned) voltage-to-clock-speed curve. I believe it starts as a stock template which the card can then modify. I don't know whether this characterisation is done per card or per batch; it's likely done at the test-jig stage, scanning for the final stable voltage/clock curve of that individual card.
At stock, mine sits at 200 MHz and 700 mV at idle, i.e. nearly standby. Put it under 100% load, however, and it climbs its V/Hz curve until it almost instantly slams hard into the input power limiter, settling at around 1860 MHz and 1020 mV, which works out to about 320 W.
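(To put rough numbers on that jump, using the textbook first-order model for switching power,

$$P_\text{dyn} \approx \alpha\, C\, V^2 f,$$

going from 200 MHz at 0.70 V to 1860 MHz at 1.02 V is about 9.3× the clock times (1.02/0.70)² ≈ 2.1× the voltage-squared term, i.e. roughly 20× the idle switching power, before counting leakage, memory and board losses.)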
So, almost immediately, you see there is no room at all to overclock this card. Overclocking it, by forcing the clock higher, draws more power, and the card just clocks itself back down the curve to meet its power limit.
Finally we come to why I'm asking this here and not on a PC gaming forum.
When I try to employ undervolting techniques, such as literally lifting the whole voltage/Hz curve by +200 MHz, things don't go as expected... or do they? The idea is that instead of running 1860 MHz at 1.020 V, the card hits 1860 MHz at 950 mV and "would" try to hit 2060 MHz at 1020 mV.
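To illustrate what I mean by "lifting the curve", here is a minimal sketch of how a +200 MHz offset moves each point; the table values are hypothetical, only the stock 1020 mV / 1860 MHz point is from my card:

    # Minimal sketch of a "+200 MHz curve offset". Curve points are
    # hypothetical except the stock 1020 mV -> 1860 MHz entry.
    stock_curve = [  # (voltage in mV, stock clock in MHz)
        (700, 1000),
        (850, 1400),
        (950, 1660),
        (1020, 1860),
    ]

    OFFSET_MHZ = 200
    offset_curve = [(mv, mhz + OFFSET_MHZ) for mv, mhz in stock_curve]

    for mv, mhz in offset_curve:
        print(f"{mv} mV -> {mhz} MHz")
    # After the shift, 1860 MHz lines up with ~950 mV instead of 1020 mV,
    # and 1020 mV "would" correspond to 2060 MHz if the power limit allowed it.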
It would seem that lowering the voltage does not lower the power draw. I can hear people already shouting "Ohm's law!", but I'm not sure it's that simple.
Does a higher clock speed at a lower voltage actually result in a higher current draw and equal wattage?
I'm getting well out of my comfort zone here, but FETs, be they a big lump of a power FET or a nanoscopic FET on a GPU die, have gate charge requirements, with resistance and capacitance working against you.
As you increase the clock speed, those rising and falling edges become more critical. The limiting factor on those edges is how quickly you can apply charge to the gates of the MOSFETs and how quickly you can dump that charge off them again. Normally higher voltages make these transitions faster, as they can drive more current onto the gates to "charge the gate capacitor" more quickly. All of which means more current, more voltage, more power, more heat. At least that's how it "used" to work.
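In first-order terms, as I understand it, the edge speed is set by how fast the driving stage can move the gate charge:

$$t_\text{edge} \approx \frac{Q_g}{I_\text{drive}} \approx \frac{C_g\,\Delta V}{I_\text{drive}},$$

and since the available drive current rises with supply voltage, a higher rail buys faster edges at the cost of moving more charge per transition.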
So what I can't figure out is this: if I have a higher-quality die (say, a higher-binned one) that can run 1860 MHz at a lower voltage than the stock 1.020 V, that should mean the gates are charging and discharging fast enough to keep the processor stable... but it should also mean less voltage = less current = less power = less heat.
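Putting numbers on it with the same first-order $C V^2 f$ scaling, at a fixed 1860 MHz:

$$\frac{P_{950\,\text{mV}}}{P_{1020\,\text{mV}}} \approx \left(\frac{0.95}{1.02}\right)^2 \approx 0.87,$$

so naively I'd expect roughly 13% less switching power at the same clock, plus a little more from reduced leakage.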
Oddly, in testing, while it does result in slightly less heat and it looks like it clocks higher on occasion, it still hits its power limiter and still performs the same or slightly worse.
I understand that actually getting more performance out of the card would require a shunt resistor mod, so I could blatantly lie to its power limiter.
What I can't figure out is why less voltage is not resulting in less heat per Hz.
Has there been a paradigm shift in IC transistor design that somehow breaks that relationship, or is this more likely an artefact of the various power-regulation phases, the location of the shunt resistors and the software control of the power limit?
EDIT: I ended up applying a bias to the curve that gives me up to +250 MHz at the lowest voltages and +0 MHz at the top end (for stability). As per the perplexing thing with the power limiter, it doesn't perform any better at 100% load. However, many game titles, when locked to the display's frame rate, don't use 100% of the GPU, and there I do see a significant reduction in heat and power: a 2022 game running at 1440p@60 FPS with maximum details was drawing 100 W under the card's limiter. Below 100% load the card also drops its clock, and therefore its voltage, to match the load, so it does seem to run cooler.
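(If the first-order model holds, that part at least makes sense: along the V/Hz curve the voltage falls roughly with the clock, so $P_\text{dyn} \propto f\,V^2$ drops much faster than linearly, roughly like $f^3$, once the card can back the clock off to meet a frame cap.)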