[SOLVED] Really High OC on RTX 2060S

BasicallyNuclear

I’m pushing +330 MHz on my GPU core and I’m wondering if it’s too high. It’s running stable and I’m not seeing any artifacts, but other people only seem to get small offsets on the core, like below +170 MHz. Am I just really lucky?
 
Even if the chip and its cooling can stand this, the GPU VRM will probably not handle the added current, and your card will eventually look like this one:
[image: photo of a graphics card with burned VRM components]


I will support it with some numbers:
Assume the GPU consumes 200 W and the core voltage is 1.0 V.
In that case, the VRM has to supply 200 A at 1 V to the chip (which works out to an effective resistance of 0.005 Ohm across the IC, by Ohm's law).
Now raise the voltage to 1.2 V and, at that same resistance, you get 240 A.
Those MOSFETs are rated between 35-60 A each (it really depends on the card's internals) and dissipate more heat the more heavily they are loaded.
Not to mention that the power planes on the board itself are wide but thin; think of what wire gauge it takes to carry that kind of current.
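
For anyone who wants to re-run that arithmetic, here is a minimal Python sketch of the same constant-resistance estimate. The phase count is a made-up assumption, and real GPU load is not a fixed resistor; this is only the back-of-the-envelope model from the post above:

```python
# Re-running the post's arithmetic under its constant-resistance simplification.
POWER_W  = 200.0   # assumed board power at stock
V_STOCK  = 1.0     # assumed stock core voltage
N_PHASES = 6       # hypothetical number of VRM phases (varies by card)

i_stock = POWER_W / V_STOCK      # I = P / V  -> 200 A
r_eff   = V_STOCK / i_stock      # R = V / I  -> 0.005 ohm

for v in (1.0, 1.1, 1.2):
    i = v / r_eff                # Ohm's law at fixed resistance
    per_phase = i / N_PHASES     # naive even split across phases
    print(f"{v:.1f} V -> {i:.0f} A total, ~{per_phase:.0f} A per MOSFET phase")
```

At 1.2 V that naive split already lands around 40 A per phase, inside the 35-60 A ratings mentioned above, with no headroom for load imbalance or heat.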
 
I thought you couldn’t adjust the voltage in RTX cards?
 
Vov4ik is not entirely correct.

Every Turing GPU has a maximum voltage that cannot be changed and that can be reached at stock settings. So any voltage you hit is perfectly safe; not to mention, hitting max voltage requires your core temperature to be under 50C, which basically never happens on standard cards.

GPU Boost 3.0 dynamically adjusts voltage and core frequency with temperature.

When you overclock the core, you ARE NOT increasing voltage. Period. You are raising the frequency at each voltage point on the GPU Boost curve; you are not adding more voltage points (i.e. overvolting) to the card.

So your GPU overclock is entirely safe, OP, as long as you don't touch voltage.

If your card isn't factory overclocked, that would perfectly explain why you are getting such a high offset. Usually people can only add around +100 or +150 MHz, but that's because their card is already overclocked from the factory by 100-150 MHz.
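
As a back-of-the-envelope illustration of that last point, here is a tiny sketch with invented clocks (these are not specs for any real card): two cards with the same silicon limit end up at the same final clock, the factory-OC one just has less offset headroom left.

```python
# Two hypothetical cards with the same silicon limit.
silicon_limit = 2000   # MHz the chip can actually sustain (assumed)
stock_boost   = 1700   # reference-spec boost clock (assumed)
factory_oc    = 1850   # factory-overclocked boost clock (assumed)

print(f"reference card offset headroom:  +{silicon_limit - stock_boost} MHz")
print(f"factory-OC card offset headroom: +{silicon_limit - factory_oc} MHz")
```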
 
Hi Techy,

It sounds like you do have some technical background, and not just from shooting home videos and making a living off of that.

I have a question about that. I have one of the Turing GPUs handy, and they come with a great utility that can scan your specific card's silicon for points where it can undervolt, and update the frequency/voltage curve accordingly. That option indeed gives you a performance boost while still using stock currents and staying within the manufacturer's TDP budget for the whole card.

With that being said, the additional overclocking options do give you two things that have the potential to do harm (see the sketch after this list):
  1. Increasing the "power target": that way you raise the actual cap on continuous power draw while only the core temperature is monitored, not the VRM. The VRM is designed to run at stock current (and to dissipate the heat that generates, using the heatsink it has); it will easily "swallow" much higher peaks, but it is not designed to do so for prolonged periods. In that case, high airflow will still keep the core within its temperature range, but the unmonitored VRM will end up looking like the one in the picture I posted above.
  2. Running a "fixed clock": even with no load, the voltage is kept high to keep the frequency up. The components still get hot, and again, the only part that is actually monitored and triggers throttling is the core.
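
As a rough sketch of why point 1 matters, here is a small Python estimate of the heat the VRM itself must shed as the power target rises. The 90% conversion efficiency and the wattages are assumptions for illustration only; real designs vary:

```python
# Heat generated inside the VRM while converting 12 V down to core voltage.
EFFICIENCY = 0.90   # assumed VRM conversion efficiency

def vrm_loss_w(board_power_w):
    # Everything not delivered to the core is dissipated in the VRM itself.
    return board_power_w * (1.0 - EFFICIENCY)

for target in (175, 200, 230):   # e.g. a stock target vs. raised targets
    print(f"{target} W target -> ~{vrm_loss_w(target):.1f} W dissipated in the VRM")
```

The VRM heatsink was sized for the stock figure; the raised targets add continuous heat that nothing monitors.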
Now, with that in mind, please explain your first statement.
Edit: here is another example here on the forum; not a Turing, though, but I fixed a few like it back when the coin-mining boom was on...
 
Last edited:
When you overclock the core, you ARE NOT increasing voltage. Period. You are raising the frequency at each voltage point on the GPU Boost curve; you are not adding more voltage points (i.e. overvolting) to the card.

So your GPU overclock is entirely safe, OP, as long as you don't touch voltage.
I have a break now, so I can follow up on my question above.

So there is a predetermined matrix of frequencies and corresponding voltages for a Turing card.
Every frequency has a corresponding voltage set by the manufacturer. The curve rises with frequency and is not linear, and it goes above the pre-set clock frequency for the card.
In other words, if this matrix has not been altered, selecting a higher frequency in the table will automatically select a higher voltage. This is easy to see by changing the clock speed and monitoring the core voltage.
Given that, how can the first statement in the quote possibly be true?

I truly have a hard time making sense of the rest of the paragraph, but I believe it refers to the built-in "silicon scan" feature I mentioned in my previous post. English is not my first (not even my third) language, so that might have something to do with it.
I apologize if I slid off topic while trying to understand the matter.
 
Last edited:
Hey vov, your English is really good actually, so thumbs up on that, man!! 😀

So I still have a few problems with what you are saying; mainly, I think we view Turing's GPU Boost algorithm differently.

1. Yes, the voltage/frequency curve is designed by the board manufacturer and the GPU maker (so the AIB and NVidia).
Where I disagree is with you saying that the card goes above the clock pre-set by the manufacturer. It's not as clear cut as it sounds.

Those pre-set clocks are the official GPU base and boost clocks, and the GPU Boost 3.0 algorithm will boost them far beyond the official spec if cooling allows.

Where we disagree: the manufacturer actually does run through the entire GPU Boost 3.0 voltage/frequency curve. So say we have an official GPU Boost clock of 1625 MHz, but GPU Boost 3.0 runs the card at 1900 MHz because it has a ton of extra temperature and power headroom; that is within spec and is what the manufacturers know the card will do.

The only reason a base clock and a boost clock exist is that those are the only clocks the manufacturer can guarantee will work, period, in any thermal and workload environment. Tom Petersen, who used to be one of the head engineers at NVidia, says this in one of his videos with Gamers Nexus.

Now, going to your voltage/frequency curve: if you just adjust the offset slider and don't touch the actual curve, no, you will not add more voltage with your overclock. You are just increasing the core clock at each voltage point. But yes, if you mess with the actual curve manually, then you can definitely "overvolt" the card, so to speak.
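
A toy model of that distinction, with invented curve points rather than real Turing values:

```python
# A V/F curve as (voltage, MHz) points. A core-clock offset shifts the
# frequency at every existing voltage point; it never adds or raises a
# voltage point. All values here are invented for illustration.
curve = [(0.70, 1400), (0.80, 1550), (0.90, 1700), (1.00, 1850)]

def apply_offset(curve, offset_mhz):
    """Core-clock offset: same voltages, higher clock at each point."""
    return [(v, f + offset_mhz) for v, f in curve]

for (v, f0), (_, f1) in zip(curve, apply_offset(curve, 150)):
    print(f"{v:.2f} V: {f0} MHz -> {f1} MHz")
```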

I do agree with you that increasing the power limit is definitely not best for every card. However, this varies a LOT from card to card. My GTX 1080 AMP! Edition is a great example: I cannot increase the power limit on it (even though the software lets me) because the power delivery will get too hot.

But on my EVGA 2060 SUPER XC Ultra, I can crank the power limit because the card is properly built for that purpose. It just depends on the card. (This is because EVGA does an excellent job of using overbuilt/overkill VRM power-delivery components on their top-tier models to ensure overclocking will be beneficial.)
 
Solution
I do agree with you that increasing the power limit is definitely not best for every card. However, this varies a LOT from card to card. My GTX 1080 AMP! Edition is a great example: I cannot increase the power limit on it (even though the software lets me) because the power delivery will get too hot.

But on my EVGA 2060 SUPER XC Ultra, I can crank the power limit because the card is properly built for that purpose. It just depends on the card.
I'm curious about this. About how much warmer does it run if you raise the power limit?
 
Good evening, Techy, and thanks.

We are discussing the fine print, I guess; it is not very clear what was overclocked and how. I try to stay on the safe side.
So I still have a few problems with what you are saying; mainly, I think we view Turing's GPU Boost algorithm differently.

1. Yes, the voltage/frequency curve is designed by the board manufacturer and the GPU maker (so the AIB and NVidia).
Where I disagree is with you saying that the card goes above the clock pre-set by the manufacturer. It's not as clear cut as it sounds.

Those pre-set clocks are the official GPU base and boost clocks, and the GPU Boost 3.0 algorithm will boost them far beyond the official spec if cooling allows.

Where we disagree: the manufacturer actually does run through the entire GPU Boost 3.0 voltage/frequency curve. So say we have an official GPU Boost clock of 1625 MHz, but GPU Boost 3.0 runs the card at 1900 MHz because it has a ton of extra temperature and power headroom; that is within spec and is what the manufacturers know the card will do.

That's what I referred to as the silicon scan. It was not mentioned by the original poster. It is indeed a nice feature; I wish it worked for the memory too. The only drawback is that it is not dynamic and does not take the environment into account (ambient temperature will influence the result). It is an algorithm, not magic.

The only reason a base clock and a boost clock exist is that those are the only clocks the manufacturer can guarantee will work, period, in any thermal and workload environment. Tom Petersen, who used to be one of the head engineers at NVidia, says this in one of his videos with Gamers Nexus.
Another true statement, except that "any environment" should read "any environment specified in the spec". I once had a piece of carrier-class cellular network equipment come back to us as an RMA (8 years ago or so) that turned out to have been mounted on a tree with PK screws by technicians in {Censored}, with tropical rains and whatnot, and (believe it or not) it actually worked for 3 months... Specification boundaries are there for a reason.

Now, going to your voltage/frequency curve: if you just adjust the offset slider and don't touch the actual curve, no, you will not add more voltage with your overclock. You are just increasing the core clock at each voltage point. But yes, if you mess with the actual curve manually, then you can definitely "overvolt" the card, so to speak.

I do agree with you that increasing the power limit is definitely not best for every card. However, this varies a LOT from card to card. My GTX 1080 AMP! Edition is a great example: I cannot increase the power limit on it (even though the software lets me) because the power delivery will get too hot.
I do have a point here:
Moving the slider adjusts the frequency and fixes it without regard to whether you are still at the same voltages (base and boost), thus letting you set a higher frequency/voltage for your base/boost states than the ones you had. I seriously doubt that you would get 300 MHz above the initial clock while using the same voltage. It is easy to check with monitoring software; I am describing what it does with MSI Afterburner, try it.
Besides, especially if a fixed base=boost clock is set, the whole curve is flattened out:
You used to have short-lived peaks above the base clock when boosting (just like Intel CPUs), so the cooling assembly only had to swallow those hot peaks; in the modified state, the whole timeline is one long peak in terms of voltage applied, current drawn, and heat generated.
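
A sketch of the arithmetic behind that argument, with assumed numbers (the power figures and the fraction of time spent boosting are illustrative, not measurements):

```python
# Time-averaged power: short boost bursts vs. a pinned (fixed) clock.
P_BASE, P_PEAK = 120.0, 200.0   # W at base clock vs. at the boost peak (assumed)
duty = 0.2                      # fraction of time spent boosting (assumed)

avg_bursting = P_BASE * (1 - duty) + P_PEAK * duty   # peaks averaged out
avg_pinned   = P_PEAK                                # the peak, all the time
print(f"boost bursts: ~{avg_bursting:.0f} W avg; pinned clock: {avg_pinned:.0f} W avg")
```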
Overclocking capability certainly varies, mainly with the card design and the actual components used. The same goes for motherboards (they also have a similar VRM setup and a piece of silicon in the socket).
But on my EVGA 2060 SUPER XC Ultra, I can crank the power limit because the card is properly built for that purpose. It just depends on the card. (This is because EVGA does an excellent job of using overbuilt/overkill VRM power-delivery components on their top-tier models to ensure overclocking will be beneficial.)
Well, in most cases, after the initial release on the reference board, manufacturers like EVGA make their own board designs and start selling their own "editions", which have their own home-grown VRM, cooling, and board layout. Those vary greatly, and I have seen lots of variations, both good and bad. I think NVidia only sells the GPU cores and development kits.
Edit: I overclock too. But before doing it, I pop the hood and calculate the budget (I check what kind of MOSFETs are used and how many, check their spec and run the numbers, and check how the cooling is designed) so that I understand what margins I have. My 2080 could peak just over 2.0 GHz, but I know I would have to put it on a soldering pre-heater after a few months of use at that setting, so it is back where it really belongs. If I need more, I will have to pay more and get it 🙃

I'm curious about this. About how much warmer does it run if you raise the power limit?
By as many watts as the limit is raised: all of those watts are turned into heat. A GPU that draws 100 W is a 100 W heater.
 
Last edited:
That doesn't answer my question...
Phaaaze,
Are you asking for a number? A range?
In short, it really depends on ambient temperature, humidity, air pressure, the thermal conductivity of the components, the load profile, etc., and it can be anywhere between 0 degrees and a dead card. It is hard to tell how this will apply to you.
If the cooling and current budgets are high, the temperature may even stay the same. If heat dissipation does not keep up, it will keep rising until something stops it. My soldering iron is 60 W and can hit 300C in under 15 seconds. Most components have thermal-shutdown features; hitting thermal shutdown regularly will eventually lead to a dead IC.
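
To put rough numbers on "rises until something stops it", here is a crude lumped thermal model; every constant in it is an assumption for illustration, not a measurement of any real part:

```python
# Temperature rises until heat flowing out through the heatsink
# (proportional to the temperature delta) balances the power going in,
# unless a thermal shutdown trips first.
P_IN   = 60.0    # W of heat going in (the soldering-iron figure above)
R_TH   = 5.0     # C/W, thermal resistance from part to ambient (assumed)
C_TH   = 20.0    # J/C, heat capacity of the part (assumed)
T_AMB  = 25.0    # C, ambient temperature
T_TRIP = 105.0   # C, a typical-ish thermal-shutdown threshold (assumed)

t, temp, dt = 0.0, T_AMB, 0.1
while temp < T_TRIP:
    # dT/dt = (P_in - (T - T_amb) / R_th) / C_th
    temp += (P_IN - (temp - T_AMB) / R_TH) / C_TH * dt
    t += dt

steady = T_AMB + P_IN * R_TH   # equilibrium if nothing shuts it down first
print(f"would settle at ~{steady:.0f} C; trips {T_TRIP:.0f} C after ~{t:.0f} s")
```

With these made-up constants the part would settle far above its shutdown threshold, so the shutdown trips first; that is the "something stops it" case.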

If @TechyInAZ happens to have a thermal-imaging camera, he could share his actual numbers from the voltage-regulator area.
 
Last edited:
Ok, well, we have our agreements and disagreements, @vov4ik_il. I've done a crazy amount of research into GPU Boost 3.0; I've watched Nvidia's videos about it and listened to Nvidia engineers discussing what it is. So let's stop derailing this thread. (And I've seen a lot of people overclock their Nvidia GPUs; not once in the 5 years I've been overclocking have I seen someone melt a GPU from overclocking. Nvidia purposely locks out overvolting on Turing, making OCing basically dummy-proof.)

Yes, a 300 MHz offset above stock clocks is very possible on non-factory-overclocked cards. Not every card hits that, more like 250 MHz, but it is possible.

OP, your overclock is fine. Don't touch voltage, and it's fine. From my experience overclocking GTX 1060s, GTX 1080s, and RTX 2060 SUPERs, only increasing the core-clock offset, without touching the frequency curve, will not add more voltage. It won't change the slope of the curve. It just allows a higher clock for the same amount of voltage, which is stock voltage, and that is 100% safe.