I’m pushing +330 MHz on my GPU core and I’m wondering if it’s too high. It’s running stable and I’m not seeing any artifacts, but other people only add small offsets to the core, like below 170 MHz. Am I just really lucky?
> I thought you couldn’t adjust the voltage in RTX cards?

Even if the chip and its cooling can stand it, the GPU VRM probably won't handle the added current, and your card will eventually end up looking like this one:
[image: a GPU board with burned-out VRM components]
I'll back this up with some numbers:
Assume the GPU consumes 200 W and the core voltage is 1.0 V.
In that case the VRM has to supply 200 A at 1 V to the chip; by Ohm's law that makes the die look like a 0.005 Ω resistance.
Now raise the voltage to 1.2 V: across that same 0.005 Ω you get 240 A (and 288 W).
Those MOSFETs are rated at 35-60 A each (it really depends on the card's internals) and dissipate more heat the harder they are loaded.
Not to mention that the power planes on the board itself are wide but thin; think about what wire gauge it takes to carry that kind of current.
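To put those numbers together, here is a quick sketch in Python (the 4-phase layout and 50 A MOSFET rating are illustrative assumptions, not any specific card's figures):

```python
# Rough VRM load estimate, treating the die as a constant resistance.
# The 200 W / 1.0 V stock point, 4 phases and 50 A rating are assumptions.
power_w = 200.0                 # assumed stock board power
vcore_v = 1.0                   # assumed stock core voltage

current_a = power_w / vcore_v   # I = P / V  -> 200 A
die_ohm = vcore_v / current_a   # R = V / I  -> 0.005 ohm

overvolt_v = 1.2
new_current_a = overvolt_v / die_ohm       # I = V / R  -> 240 A
new_power_w = overvolt_v * new_current_a   # P = V * I  -> 288 W

phases = 4                      # hypothetical phase count
mosfet_rating_a = 50.0          # mid-range of the 35-60 A figures above

per_phase_a = new_current_a / phases       # 60 A per phase
print(f"current: {current_a:.0f} A -> {new_current_a:.0f} A")
print(f"power:   {power_w:.0f} W -> {new_power_w:.0f} W")
print(f"per phase: {per_phase_a:.0f} A vs {mosfet_rating_a:.0f} A rating")
```

And since conduction loss in each MOSFET grows roughly with the square of the current, that 20 % voltage bump heats the VRM by considerably more than 20 %.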
> It is adjusted by a frequency-voltage relation. The higher the frequency you select, the higher the voltage that is applied.

I thought you couldn’t adjust the voltage in RTX cards?
Hi Techy,
Every single Turing GPU has a maximum voltage that cannot be changed and that can be hit at stock settings. So any voltage you reach is perfectly safe; not to mention that hitting max voltage requires your core temperature to be under 50 °C, which basically never happens on standard cards.
GPU Boost 3.0 will dynamically change voltage/core frequency with temperature.
I've got a break now, so I can get back to the question above.
When you overclock the core, you are NOT increasing the voltage. Period. You are raising the frequency at each voltage point on the GPU Boost curve; you are not adding higher voltage points (i.e., overvolting) to the card.
So your GPU overclock is entirely safe, OP, as long as you don't touch the voltage.
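To make the distinction concrete, here is a toy sketch (the voltage points and clocks are made up; this shows the idea behind an Afterburner-style offset, not NVidia's actual tables):

```python
# Toy GPU Boost voltage/frequency curve: voltage (V) -> clock (MHz).
# All points are invented for illustration.
vf_curve = {
    0.70: 1400,
    0.80: 1550,
    0.90: 1700,
    1.00: 1850,
    1.05: 1900,   # last point: the locked maximum voltage
}

def apply_core_offset(curve, offset_mhz):
    """A flat offset raises the frequency at every existing voltage
    point; it adds no new voltage points, so max voltage is unchanged."""
    return {v: f + offset_mhz for v, f in curve.items()}

oc_curve = apply_core_offset(vf_curve, 130)
print(max(oc_curve))    # 1.05 V: the voltage ceiling did not move
print(oc_curve[1.05])   # 2030 MHz at that same 1.05 V
```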
> I'm curious about this. About how much warmer does it run if you raise the power limit?

I do agree with you that increasing the power limit is definitely not always best for a card. However, this varies a LOT from card to card. My GTX 1080 AMP! Edition is a great example: I can't raise the power limit on it (even though the software lets me) because the power delivery gets too hot.
But on my EVGA 2060 SUPER XC Ultra I can crank the power limit, because the card is properly built for that purpose. It just depends on the card.
So I still have a few problems with what you're saying; mainly, I think we view Turing's GPU Boost algorithm differently.
1. Yes, the voltage/frequency curve is designed by the board manufacturer and the GPU maker (so the AIB and NVidia).
Where I disagree is with the claim that the card goes above the manufacturer's pre-set clock; it's not as clear-cut as that sounds.
That pre-set clock is just the official base clock and boost clock, and the GPU Boost 3.0 algorithm will push the card far beyond the official boost figure if cooling allows.
Where we disagree is on this: the manufacturers actually do run through the whole GPU Boost 3.0 voltage/frequency curve. So say the official boost clock is 1625 MHz, but GPU Boost 3.0 runs the card at 1900 MHz because it has a ton of temperature and power headroom: that is within spec, and it is exactly what the manufacturers know the card will do.
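Roughly how that selection plays out (a toy model only; the steps, limits, and cost per step are invented, not NVidia's actual algorithm):

```python
# Toy model of GPU Boost 3.0 walking up the validated clock steps
# while thermal and power headroom remain. All numbers are invented.
boost_steps_mhz = [1625, 1700, 1775, 1850, 1900]  # official boost first

def pick_clock(temp_c, power_w, temp_limit_c=83, power_limit_w=215):
    clock = boost_steps_mhz[0]
    for step in boost_steps_mhz[1:]:
        temp_c += 2    # made-up cost of each step in degrees
        power_w += 10  # made-up cost of each step in watts
        if temp_c >= temp_limit_c or power_w >= power_limit_w:
            break      # out of headroom: stay at the last safe step
        clock = step
    return clock

print(pick_clock(temp_c=60, power_w=160))  # cool card -> 1900 MHz
print(pick_clock(temp_c=80, power_w=200))  # hot card  -> 1700 MHz
```

Either way the card never leaves the validated curve; it just parks at a lower step when headroom runs out.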
> The only reason a base clock and a boost clock exist is that they are the only clocks the manufacturer can guarantee will work, period, in any thermal and workload environment. Tom Petersen, who used to be one of the head engineers at NVidia, says this in one of his videos with Gamers Nexus.

Another true statement, except that "any environment" really means "any environment specified in the spec". I once had a piece of carrier-class cellular network equipment come back as an RMA (about 8 years ago) that turned out to have been mounted on a tree with PK screws by technicians in {Censored}, tropical rains and all, and (believe it or not) it actually worked for 3 months... Specification boundaries are there for a reason.
You do have a point here:

> Now going to your voltage/frequency curve: if you just adjust the offset slider and don't touch the actual curve, then no, you will not add more voltage with your overclock; you are just increasing the core clock at each voltage point. But yes, if you edit the actual curve manually, then you can definitely "overvolt" the card, so to speak.
> I do agree with you that increasing the power limit is definitely not always best for a card. However, this varies a LOT from card to card. My GTX 1080 AMP! Edition is a great example: I can't raise the power limit on it (even though the software lets me) because the power delivery gets too hot.
Well, in most cases after the initial release with the reference board, manufacturers like EVGA make their own board designs and start selling their own "editions", with home-grown VRM, cooling, and board layouts. Those vary greatly, and I have seen lots of variation, both good and bad. I think NVidia itself only sells GPU cores and development kits.

> But on my EVGA 2060 SUPER XC Ultra I can crank the power limit, because the card is properly built for that purpose. It just depends on the card.

(This is because EVGA does an excellent job of using overbuilt, even overkill, VRM power-delivery components on their top-tier models to ensure that overclocking will be beneficial.)
> I'm curious about this. About how much warmer does it run if you raise the power limit?

By as many watts as it is raised by: all of those watts are turned into heat. A GPU that draws 100 W is a 100 W heater.
> By as many watts as it is raised by: all of those watts are turned into heat. A GPU that draws 100 W is a 100 W heater.

That doesn't answer my question...
Phaaaze,

> That doesn't answer my question...
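For what it's worth, the missing step is converting watts into degrees: at steady state the temperature rise is roughly the extra power times the cooler's thermal resistance. A back-of-envelope sketch (the 0.15 °C/W figure is an assumption, and real cards also ramp their fans):

```python
# Back-of-envelope: extra board power -> extra GPU temperature.
# Assumes a fixed cooler thermal resistance and constant fan speed.
thermal_resistance_c_per_w = 0.15   # assumed cooler + paste figure

def temp_rise_c(extra_watts):
    # steady-state delta-T = delta-P * R_thermal
    return extra_watts * thermal_resistance_c_per_w

# e.g. raising a 200 W power limit to 250 W:
print(f"about {temp_rise_c(50):.1f} C warmer at steady state")
```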
> Ok, well, we have our agreements and disagreements @vov4ik_il

🙂 Agreed; sorry for my off-topic stuff.