Question: RTX 4080 Gigabyte Gaming OC (Overclocking) - Lower Minimum FPS and Lower 1% FPS, is there a problem? (I am a noob at Overclocking.)

viperwasp

Honorable
May 29, 2016
14
3
10,515
0
I'm happy with my 4080 without any overclocking, but I figured why not try, since the thermals are so low and I paid a lot. I wanted to get the most out of my 4080's high cost. lol And it seemed like something fun to learn. I don't overclock anything else in my build. Of course I use XMP on my RAM etc., but my 12700K is at factory defaults.
See my profile signature for full PC specs if needed. I have a 1000W PSU not listed in the signature, and my monitor is 1080p 240Hz.

Relevant Info
In this review https://www.kitguru.net/components/graphic-cards/dominic-moass/gigabyte-rtx-4080-gaming-oc-review/all/1/
they overclocked the exact same GPU I have, the 4080 Gigabyte Gaming OC, to +90MHz on the core and +1780MHz on the memory.
I used the same program they used, MSI Afterburner, and slowly increased the overclock working towards +90MHz and +1780MHz,
but I had artifacting in Final Fantasy 15 at +80MHz / +1680MHz, so I had to lower things.

I ran a series of benchmark and gameplay tests. I have no artifacting whatsoever at Core +75MHz and Memory +1400MHz; I gamed for hours with no visible issues.
Thermals are also fine. The GPU hot spot reaches 79C in a 20-loop 3DMark stress test, and the GPU itself reached 68C max, whether in 3DMark or while gaming.
With a 30-minute Furmark stress test I got up to an 82C hot spot. I even checked CPU thermals too; generally my CPU maxes out at 72C while gaming or running 3DMark.

Average and max FPS increase while overclocked, and Cyberpunk gains about 3-4 FPS. lol Meaning I don't really think I need to overclock. Maybe for
games where my FPS dips below 60 it could come in handy. And maybe for Blender etc. it could be useful, I don't know.
Am I overclocking wrong? Should I get more than 3-4 FPS in most games? 3DMark benchmark scores do increase when overclocking. And I don't know
if this is important, but the 3DMark stress test while overclocked passed with 99.4% FPS stability over 20 loops; it needs 97% to pass.
I feel like 99% or higher is very good, but I assume 3DMark is a synthetic test and can't be used to guarantee real-world results while gaming.
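As a side note on that 97% threshold: my understanding (not the official formula) is that the stress-test stability score is roughly the worst loop's FPS expressed as a percentage of the best loop's. A tiny sketch with made-up loop numbers:

```python
# Rough sketch of 3DMark's stress-test "frame rate stability" metric:
# worst loop FPS as a percentage of best loop FPS. This is my reading of
# how the score behaves, not UL's published formula. Loop data is made up.

def stability_percent(loop_fps):
    """Stability = worst loop / best loop, as a percentage."""
    return min(loop_fps) / max(loop_fps) * 100.0

loops = [120.5, 120.1, 119.9, 120.3]  # hypothetical 4-loop run
result = stability_percent(loops)     # ~99.5%, which would pass the 97% bar
```

Under this reading, a 99.4% result means the slowest loop was within about 0.6% of the fastest, which really is very consistent.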

1. I don't know what is more important to overclock on a 1080p setup: core or memory, or whether you need to find a balance between them. Remember,
I know almost nothing at all about this. I assume overclocking both can be beneficial for gaming and for applications like Stable Diffusion or Blender.

2. Is it normal to lose minimum frames as well as 1% lows while overclocking? GTA 5's and Cyberpunk's built-in benchmarks report this to me,
but I've also monitored FPS while playing, and my FPS monitor reported a slightly lower minimum and 1% low average.
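For reference, here is roughly how tools derive the numbers being compared here. This is a minimal sketch with made-up frame times, since the exact method varies from tool to tool:

```python
# Hypothetical sketch of how average and "1% low" FPS are typically derived
# from per-frame render times. The frame_times_ms data below is made up,
# not an actual benchmark capture.

def fps_stats(frame_times_ms):
    """Return (average FPS, 1% low FPS) from a list of frame times in ms."""
    fps_per_frame = [1000.0 / t for t in frame_times_ms]
    average = sum(fps_per_frame) / len(fps_per_frame)
    # 1% low: mean FPS over the slowest 1% of frames (at least one frame).
    worst = sorted(fps_per_frame)[:max(1, len(fps_per_frame) // 100)]
    one_percent_low = sum(worst) / len(worst)
    return average, one_percent_low

# 199 frames at ~11.1 ms (~90 FPS) plus one 21 ms hitch (~48 FPS):
times = [11.1] * 199 + [21.0]
avg, low = fps_stats(times)
```

The point is that a single brief hitch barely moves the average but drags the 1% low and minimum down hard, which is why those two columns are so much noisier between runs.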


Examples
Cyberpunk, Ultra settings, ray tracing on, DLSS completely off. (Ran the benchmark a few times.)
Non-OC
Average FPS 90
MAX FPS 118
Min FPS 64

Overclocked
Average FPS 94
MAX FPS 119
Min FPS 48-58 (the other numbers stayed fairly consistent; min FPS fluctuated between tests and was decently lower while overclocked.)

GTA 5 (these seem all over the place to me, almost like the benchmark is not perfect, or my game settings need adjusting too.)
Non-OC
Frames Per Second (Higher is better) Min, Max, Avg
Pass 0, 16.640070, 155.332565, 123.507942
Pass 1, 35.758343, 161.092850, 116.191498
Pass 2, 30.814648, 203.848663, 141.378784
Pass 3, 34.060989, 256.456299, 168.921249
Pass 4, 6.002087, 238.663467, 125.710945

Overclocked
Frames Per Second (Higher is better) Min, Max, Avg
Pass 0, 17.002609, 150.305862, 124.076401
Pass 1, 39.458160, 158.591705, 116.472008
Pass 2, 31.899452, 211.291412, 141.099152
Pass 3, 5.241041, 278.877777, 168.408203
Pass 4, 6.465227, 250.676819, 127.738876
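One way to see that the min FPS column is run-to-run noise rather than an OC effect is to compare the relative spread of the min and average columns across passes. A small sketch using rounded numbers from the tables above:

```python
# The GTA 5 passes above: min FPS swings wildly between passes while the
# averages barely move, which points at benchmark noise, not the overclock.
# Numbers are rounded from the pass results quoted in this post.
from statistics import mean, pstdev

stock_min = [16.64, 35.76, 30.81, 34.06, 6.00]
oc_min    = [17.00, 39.46, 31.90,  5.24,  6.47]
stock_avg = [123.51, 116.19, 141.38, 168.92, 125.71]
oc_avg    = [124.08, 116.47, 141.10, 168.41, 127.74]

def rel_spread(xs):
    """Relative spread: population std dev divided by the mean."""
    return pstdev(xs) / mean(xs)
```

Running `rel_spread` on these lists shows the min columns varying several times more (relative to their mean) than the average columns, for stock and OC alike, so a single pass's minimum says very little.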

3DMark Time Spy Extreme Score (ran the test twice, both non-OC and overclocked)
Non-OC
Score 12405 to 12416

Overclocked (+75 core / +1400 memory)
Score 12870 to 12892

Finally, while in game (GTA 5, Cyberpunk, and a few other games) I noticed average FPS increasing by about 3+, but like I've said, minimum FPS
drops, usually by 1-2 FPS while in game. Maybe more, but I'm having trouble actually testing it. You may want to ask whether this makes the game
noticeably stutter: no, not really. But at the same time I don't notice the smoothness of adding 3-4 FPS either. lol So for the games I've tested
(4+ games), overclocking does change my FPS counter, but I don't notice actual gameplay differences. And unless I'm playing a game that dips below
60 FPS I probably shouldn't expect to. But I feel like overclocking is not supposed to lower minimum frames?

3. I use HWiNFO64, so I've monitored wattage, temps, etc. during these benchmarks and stress tests.
I'm going to provide links to two images: one is HWiNFO64 during 20 loops of Time Spy Extreme, the other is a Furmark
GPU stress test. If you see anything I should be concerned about, please let me know, like
with the wattage/power draw etc.

  1. https://postimg.cc/682YLGvs 3DMark 20 Loop
  2. https://postimg.cc/p5rDWgQN Furmark Stress Test
4. I'm betting I should either give up on overclocking my 4080 or try to improve my results. It does not
seem worth it as-is. I may need to adjust the overclock values and figure out whether tweaking them
improves results.
Maybe you know of a better benchmark or test to run? This is mostly for fun.
From your greater experience with overclocking: is my GPU an example of "don't bother",
or is there maybe still reason to continue? Should I be able to get better results?
Have you seen these issues before?


Thank you, I am brand new to overclocking. Hints are welcome.
And yes, I know I can turn the OC on and off at will, so I plan to keep it off for most games and applications.
I'm already running most games at like 90-130+ FPS on Ultra etc., but maybe overclocking can help in some applications or uses.
 
When you set up your MSI Afterburner for overclocking did you enable/unlock voltage control in settings? What settings did you use for core voltage and power limit? For the next test try core voltage at 100% and power limit at 50%. Then for core clock try +200 and memory clock at +1400 then run your test. The core clock is about maxed out, you can experiment with adding or subtracting memory and adding more or less to the power limit for other tests. Let us know how you make out.
 

viperwasp

Thanks for the reply, Fix_that_glitch. No, I have not touched volts; from what I have read you don't need to? Also, in the screenshots in the review I read, where they overclocked, they also did not unlock voltage. On top of that, I read that the card I bought only allows 400 watts, and if I am reading the wattage correctly, in my Furmark test it reached 392 watts power draw at maximum. For all of these reasons combined, I don't believe that is a good idea to get into, as I believe you're not supposed to go over 400 watts? But I'm basically a noob here. Either way, I'm not going to do something I have not first seen in a guide or review.

Quote
"The first thing to note is the Gaming OC supports a power limit of up to 400W when overclocking, which is good to see – the Founders Edition is capped at 355W. We were only able to add 90MHz to the GPU however, though the GDDR6X memory overclocked like an absolute beast, as we added another 1780MHz. "

But remember I had artifacting at +80MHz and +1680MHz respectively. I know I was drawing less than 392 watts during that artifacting; even with higher overclocks I was playing FF15, which does not draw anything close to Furmark. So I don't think insufficient power was the issue? And the lower minimum frames also occurred at less than 392 watts power draw. Meaning I've already proven the GPU can draw basically up to 400W, but the issues I'm having happen at less than that. And I'm getting the sense that I should not, or cannot, add more voltage.
 

Phaaze88

Titan
Ambassador
You are trying to OC a card that is already OCed twice over:
1) The built-in boost algorithm is applying an OC and has free rein on the core clock; it will dynamically make its own adjustments based on the GPU's parameters. That means it can dial things down if it doesn't like what you did.
2) Gigabyte applied a bump on top of the algorithm's.
3) Neither overclocks the memory, but you have no control over the memory beyond frequency adjustments. Memory overclocks are also less forgiving and can 'appear' stable:
https://www.techpowerup.com/review/nvidia-geforce-rtx-3080-founders-edition/39.html
"GDDR6X memory comes with an error-detection mechanism that is active on the GeForce RTX 3080[and other models with GDDR6X]. On previous generations, the "ECC memory" feature was only available for much more expensive, professional cards. AMD has enabled it on several Radeon cards, too. While GDDR6X error-detection isn't identical to ECC, it is similar; the memory controller is able to detect most transmission errors, and Error Detection and Replay (EDR) functionality will keep retrying that memory transfer until it succeeds.

For memory overclocking, this means you can no longer rely on crashes or artifacts as indicators of the maximum memory clock. Rather, you have to observe performance, which will reach a plateau when memory is 100% stable and no transactions have to be replayed. Replaying memory errors reduces overall memory performance, which will be visible in reduced FPS scores."

Even TPU only bumped the memory clock by just under 200MHz on the FE model (compare pages 1 and 41)...
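That "watch for the performance plateau" method from the quote can be sketched as a sweep. The scores below are fake numbers standing in for a fixed benchmark run at each memory offset; the function and its names are illustrative, not from any real tool:

```python
# Sketch of the plateau-hunting approach TechPowerUp describes for GDDR6X:
# past a point, EDR replays make FPS sag even though nothing crashes.
# find_plateau() and all data here are hypothetical stand-ins; in practice
# you would run the same benchmark at each memory offset and record FPS.

def find_plateau(offsets, scores, tolerance=0.5):
    """Return the offset with the best score, stopping once a later score
    drops more than `tolerance` FPS below the best seen so far."""
    best_score, best_offset = scores[0], offsets[0]
    for off, s in zip(offsets, scores):
        if s > best_score:
            best_score, best_offset = s, off
        elif best_score - s > tolerance:
            return best_offset  # error replay is likely eating the gains
    return best_offset

# Fake data: FPS climbs up to +1000 MHz, then sags as error replay kicks in.
offsets = [0, 200, 400, 600, 800, 1000, 1200, 1400]
scores  = [120.0, 120.6, 121.1, 121.5, 121.8, 122.0, 121.2, 119.9]
```

With data shaped like this, the sweep would settle on +1000 even though +1400 "runs fine", which is exactly the trap the quote warns about: no artifacts, but measurably fewer frames.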

Unlike Ampere, Turing, and Pascal, which are power limited, Ada is voltage limited; you're not going to get very far by adjusting voltage. Also, the voltage slider in Afterburner and other programs does nothing, even when voltage control is unlocked in settings; the algorithm ignores it.
It doesn't ignore it in the Curve Editor, but Ada doesn't have any voltage headroom anyway. For the other three architectures, more voltage = more power, which means running into power limits more frequently, so that's no good really.
Increasing GPU power use also indirectly (directly?) increases memory operating thermals.

Use Furmark for testing cooler strength. That's really all it's good for.
3DMark, some in-game benches, Heaven Bench, etc., are not one-size-fits-all programs; the actual games are a better test of stability - or rather, the variety is. Granted, not everyone uses their hardware for the same things, so the synthetic benches still have some use; just don't run one and be done with it. As many game samples as possible is best.
Game presets are seldom an optimal use of PC resources, but it's a pain in the butt to tinker with settings for every title, so that one's understandable.

CPU single-core performance > GPU.
Assuming a game's engine isn't total crap, a single CPU core basically determines fps: not 12, not 8, nor 20 cores. The GPU can either match that pace (fps) or be slower, based on various settings.
That single core handles everything off the engine, even instructions to the other cores.



TL;DR: OCing has been mostly a waste since the GPU Boost 3.0 algorithm came into play with Pascal. You're free to do what you want with the card, but I think you're shooting in the wrong direction due to 1080p on what's a 1440p UW - 4K card.
 

viperwasp

Thank you Phaaze88. I understood most of that, and it seems like all of it translates into it being almost pointless to overclock current-gen GPUs. Even if I do, it can 'appear' stable but have minor or nearly undetectable issues. And yes, I know it's already an OC model, and that you can further overclock it. But even if overclocking worked perfectly, it's usually 3-5 fps for my GPU, no matter how well you overclock, it seems. So it's a little fun to get a higher bench score, but ultimately it seems not worthwhile. And on top of that, since I'm using 1080p, it's like, why bother. In the future, when I'm using 1440p, or if games come out that my PC can only run at like 30-40 FPS, it may be worthwhile to revisit overclocking to extend the PC's life. And finally, CPU overclocking may be more worthwhile, but I probably won't do that. I know single-core strength is important, which is one of the reasons I went with the 12700K.
 
To be honest, I missed that your card was an OC version, and Phaaze88 is correct in saying it is harder to overclock an already overclocked card. But there is still room for overclocking. Keep in mind that Nvidia supplied the reviewers with a special driver, so your results will differ from theirs, although I don't think it is a magical driver that will give super results compared to WHQL drivers. You can't hurt your card by using the core and power limit sliders; the card's limits are the card's limits and it will protect itself. Your card's max OC settings are listed as a 3079MHz core clock and +1650 memory, so you can try the memory at +1650, not the +1780 listed in the article. Using the +1650, you can experiment with the core clock to get it to at least 3.0GHz. You should be able to get 5% more out of your already overclocked card; that would be the most you would see.
 

viperwasp

Thanks, that is very helpful. I will eventually get back around to overclocking my 4080. Both of you have been a great help.
 

Teknoman2

Commendable
Oct 13, 2020
One thing to keep in mind is that overclocking needs more power, and if you hit a power limit, the card will take power from another component (e.g. VRAM) and give it to the core, resulting in lower fps.
 
