Question: Are there any math formulas for GPU overclocking?

Jan 20, 2021
I just got a GTX 1660 Super. I'm trying to use Gigabyte's manager software to overclock it. Many years ago I read an article that provided some formulae that were kind of heavy on the computer-hardware side, which is fine... I have some (not much) formal education with moving signal pulses of varying speed through gate arrays and multiplexer stages to produce a single output signal that matches a desired square-wave pattern. The overclocking article I read had a formula relating the GPU clock (MHz) to the VRAM clock (MHz), with the memory aperture acting as a coefficient to make sure there aren't any buffer over/underflows robbing frame rates. Bear in mind that the article I read (I think I read it here, TBH!) was maybe 15 years old, or more. Hell, now that I think about it, it might have been about overclocking motherboards and getting more out of an FSB with a very narrow aperture!
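
I can't find that article anymore, but here's the shape of the balance I think it was describing, sketched in Python: peak VRAM bandwidth on the supply side versus what the core can consume on the other. The bus width and GDDR6 data rate below are the 1660 Super's real specs; the bytes-per-core-clock coefficient is something I invented purely for illustration, not the article's number.

# Back-of-the-envelope bandwidth balance for a GTX 1660 Super.
# Bus width and GDDR6 rate are real specs; bytes_per_clock is invented.

BUS_WIDTH_BITS = 192    # 1660 Super memory bus width
GDDR6_TRANSFERS = 8     # GDDR6 moves 8 transfers per memory-clock cycle

def memory_bandwidth_gbs(mem_clock_mhz: float) -> float:
    """Peak VRAM bandwidth in GB/s at a given memory clock."""
    transfers_per_s = mem_clock_mhz * 1e6 * GDDR6_TRANSFERS
    return transfers_per_s * (BUS_WIDTH_BITS / 8) / 1e9

def core_demand_gbs(core_clock_mhz: float, bytes_per_clock: float = 200.0) -> float:
    """Hypothetical bandwidth the core wants; bytes_per_clock is made up."""
    return core_clock_mhz * 1e6 * bytes_per_clock / 1e9

supply = memory_bandwidth_gbs(1750)   # stock 1750 MHz -> 336 GB/s, matches the spec sheet
demand = core_demand_gbs(1785)        # stock 1785 MHz boost, invented coefficient
print(f"supply {supply:.0f} GB/s, demand {demand:.0f} GB/s, ratio {supply/demand:.2f}")

The point being: if raising one clock pushes that ratio far from wherever it sits at stock, the faster side just ends up waiting on the slower one.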

...but I digress. Back to my new GTX 1660 Super!

The videos I watched after slotting the 1660 Super say to:

{
benchmark, then turn the fan up;
increase the VRAM clock by an arbitrary amount, then benchmark again;
if nothing crashes, increase the GPU clock by an arbitrary amount and benchmark;
if nothing crashes, keep increasing each clock separately until the heat climbs over 85 °C or 3D apps become unstable;
once instability or a thermal spike occurs, roll back to the last good config and increase values by 50% of the increase that caused a hang;
}
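
For what it's worth, that recipe is basically a crude search loop. Here it is as runnable Python; apply_offsets() and run_benchmark() are placeholders I made up (in reality you drag the sliders and run the bench by hand), and the +120/+800 MHz failure points are fake numbers just so the simulation terminates:

_current = {"gpu": 0, "vram": 0}

def apply_offsets(gpu_mhz: int, vram_mhz: int) -> None:
    """Placeholder for setting the clock offsets in the tuning tool."""
    _current["gpu"], _current["vram"] = gpu_mhz, vram_mhz

def run_benchmark() -> tuple[bool, float]:
    """Placeholder bench: pretend the card falls over past +120/+800 MHz."""
    stable = _current["gpu"] <= 120 and _current["vram"] <= 800
    temp_c = 70.0 + _current["gpu"] * 0.1
    return stable, temp_c

def tune_one_clock(apply, step_mhz: int, temp_limit_c: float = 85.0) -> int:
    """Ramp one clock; after a failure, advance by half the failed step."""
    offset = 0
    while step_mhz >= 1:
        apply(offset + step_mhz)
        stable, temp_c = run_benchmark()
        if stable and temp_c <= temp_limit_c:
            offset += step_mhz          # keep the last good config
        else:
            step_mhz //= 2              # roll back, try a smaller bump
    return offset

vram = tune_one_clock(lambda mhz: apply_offsets(0, mhz), step_mhz=100)
gpu = tune_one_clock(lambda mhz: apply_offsets(mhz, vram), step_mhz=25)
print(f"settled at GPU +{gpu} MHz, VRAM +{vram} MHz")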

I tried this and my 3D apps crashed after the first increase in GPU speed, which was about a 5% bump. So I decided to take a bitwise approach instead: I've been turning up both clocks simultaneously, but in a ×8 relationship, so for every +8 MHz on the GPU clock I give the VRAM +64 MHz, and I increase the GPU voltage by +1%. I've repeated the process 8 times now without any errors, and the temp seems steady at 86 °C (187 °F), which seems a bit high to me.

My question is: does tuning the clocks in a bitwise relationship even matter on a GPU that does parallel processing? (Since I've gone past the point where things became unstable with the CRANK IT UP AND PRAY! methods I watched on YouTube, I'm inclined to say it's a good strategy.) Also, how far can I keep pushing the performance envelope with this method before the postage gets canceled? As the clock speeds increase, the electrical current draw will increase with them and require a proportional voltage increase, obviously. My problem is that the Gigabyte Aorus software doesn't show the starting voltage values... each integer step on the slider is a +1% increase, but the magnitude of the increase in volts isn't shown. As an old TTL hardware guy who doesn't know anything about VLSI specs, I'd really love to know what that initial +0% voltage value is.
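
The one first-order relation I do remember from logic-family days is that dynamic CMOS power scales roughly with frequency times voltage squared (P ~ C·V²·f). Here's a quick sanity check on my own stepping; the stock boost clock and the 1.00 V base voltage are guesses on my part, since Aorus hides the real numbers:

BASE_CLOCK_MHZ = 1785   # assumed stock boost clock
BASE_VOLTAGE = 1.00     # assumed stock core voltage, in volts (unknown!)

for steps in (0, 4, 8):
    f = BASE_CLOCK_MHZ + 8 * steps        # +8 MHz GPU per step
    v = BASE_VOLTAGE * 1.01 ** steps      # +1% voltage per step
    rel_power = (f / BASE_CLOCK_MHZ) * (v / BASE_VOLTAGE) ** 2
    print(f"{steps} steps: {f} MHz, {v:.3f} V, relative dynamic power {rel_power:.2f}x")

Eight steps in, that's only about +3.6% on the clock but roughly +21% dynamic power, which would explain why I'm parked at 86 °C.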

Know what? I just found a PCIe pinout table so I could see the values of the DC voltage rails, and since there are no negative rails opposing the positive rails across the common ground reference, the bare minimum voltage is +3.3 V... unless the GPU has a voltage-divider resistor in parallel with the take-up resistor for throttling the current. But how would hard-locking both Vmax and Imax at the power uptake provide a variable Vin to the GPU? Differential Vouts coming back from the GPU, coupling at the series/parallel junction of the regulator resistors so the GPU can self-bias via user input?
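
After digging through those tabs: the core voltage apparently isn't divided down resistively at all; the card steps the 12 V rail down on-board with a multiphase buck regulator, and the slider presumably nudges the regulator's setpoint. A toy version of the ideal-buck relation Vout ≈ D × Vin, where the 12 V input is the real slot/8-pin rail but the 1.00 V base and +1%-per-notch step are still my guesses:

V_IN = 12.0      # PCIe slot / 8-pin rail, volts
V_BASE = 1.00    # guessed core voltage at the +0% slider position

def duty_cycle(v_out: float, v_in: float = V_IN) -> float:
    """PWM duty cycle an ideal buck converter needs for v_out from v_in."""
    return v_out / v_in

for pct in (0, 4, 8):                 # slider position, +1% per notch
    v = V_BASE * (1 + pct / 100)
    print(f"+{pct}%: target {v:.3f} V, duty cycle {duty_cycle(v):.4f}")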

Ermmm... sorry, I went down a hardware hole there, and now that I've got 12 tabs from Wikipedia and Texas Instruments open, I'm gonna hit the emergency eject button and get caught up on The Expanse.
 
