[SOLVED] GPU performance in two systems

Aug 13, 2018
Hi,

I tried my GPU in my friend's PC and he can't overclock the card as well as I can, so he's getting a lower graphics score in Time Spy as a result. I'm curious what would be causing the difference. The motherboard, maybe? Or the CPU?

My build is

I5 8600K
ASUS ROG STRIX Z370-F
16GB 3200MHZ G.SKILL RAM
ASUS STRIX GTX 1080
SAMSUNG 970 NVME 500GB SSD X2
CORSAIR HX750i PSU
CORSAIR H115I PRO

My friend's PC setup is

RYZEN 7 1800X
GIGABYTE B350 GAMING
8GB 3200MHZ CORSAIR LPX RAM
WITH MY ASUS STRIX GTX 1080
ADATA M.2 250GB SSD
850W PSU NOT SURE ON MODEL
COOLERMASTER ML240 AIO

I'd like to understand what would cause the difference, as I'm still learning about overclocking and how to get the best performance out of my PC.
 
You're getting better framerates mainly because of your CPU.
The i5-8600K has better IPC (instructions per clock) than the Ryzen 7 1800X, so the i5 should get 10-20% higher framerates on average in games.

About the OC though, I suspect his case is not as well ventilated as yours, hence the higher clock speeds you're getting on the 1080. That's down to NVIDIA GPU Boost 3.0, which raises the core clock depending on the thermals of the GPU (lower temperatures = higher clock speeds and vice versa).
The motherboard shouldn't make a difference, since most of the power for the GPU is fed directly by your PSU.
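
If you want to actually see GPU Boost doing that on both machines, one rough way (just a sketch: it assumes an NVIDIA driver with nvidia-smi on the PATH, and an OSD like MSI Afterburner or GPU-Z would show you the same thing) is to log clocks and temperature once a second during the Time Spy run:

# Rough sketch: poll nvidia-smi once per second and log core clock, memory
# clock, temperature, power draw and GPU utilization to a CSV, so the run on
# each system can be compared afterwards. Assumes nvidia-smi is on the PATH.
import csv
import subprocess
import time

FIELDS = "timestamp,clocks.gr,clocks.mem,temperature.gpu,power.draw,utilization.gpu"

with open("gpu_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(FIELDS.split(","))
    while True:  # stop with Ctrl+C once the benchmark run is finished
        row = subprocess.run(
            ["nvidia-smi", "--query-gpu=" + FIELDS, "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        writer.writerow([field.strip() for field in row.split(",")])
        f.flush()
        time.sleep(1)

Run it on both PCs during the graphics tests and compare the average clocks.gr column against temperature.gpu; if the card sits at higher clocks in the cooler case, that's Boost 3.0 doing exactly what's described above.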
 


The graphics score in a canned benchmark should be nearly IDENTICAL between those two systems. I'd be surprised if there was even a CPU bottleneck at any time outside the CPU-specific tests.

No doubt some games give up to a 20% nod (depending on CPU overclocking) to the i5-8600K, but the demand on the CPU in a graphics benchmark is lower, so again I'd be surprised if the R7-1800X was the bottleneck.

What's the DIFFERENCE between the reported in-game GPU frequencies (use an OSD)?
What's the difference between the scores?

I bet the two line up closely.
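
To put numbers on "line up closely", it's just two percentage gaps; the figures below are invented placeholders, not the OP's results:

# Illustrative arithmetic only: compare the relative gap in average GPU clock
# with the relative gap in Time Spy graphics score (all numbers are made up).
clock_a, clock_b = 1975.0, 1911.0   # average in-run core clock (MHz), system A vs B
score_a, score_b = 7350.0, 7105.0   # Time Spy graphics score, system A vs B

clock_gap = (clock_a - clock_b) / clock_b * 100
score_gap = (score_a - score_b) / score_b * 100

print(f"clock gap: {clock_gap:.1f}%   score gap: {score_gap:.1f}%")
# If the two gaps roughly match (here ~3.3% vs ~3.4%), the score difference is
# explained by the GPU clock alone, not by the CPU or the platform.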

***BUT***
I believe you can compare the DX11 vs DX12 scores too. For the graphics test there should be NO IMPROVEMENT from DX12 if the CPU isn't a bottleneck in DX11. Thus a DX12 improvement of, say, 10% implies the DX11 run was CPU limited and a CPU roughly 10% faster would have eliminated that bottleneck.
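
As a rough worked example of that logic (scores invented):

# Illustrative only: if the same GPU scores higher in the DX12 graphics test
# than in DX11, the gap is a crude estimate of how much CPU/API overhead was
# holding the DX11 run back. Numbers are made up.
dx11_graphics = 6800.0
dx12_graphics = 7480.0

gap = (dx12_graphics / dx11_graphics - 1) * 100
print(f"DX12 is {gap:.0f}% faster, so the DX11 run was roughly {gap:.0f}% CPU/API limited")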

CPU tests may be pointless to compare since the core/thread counts differ: the Intel CPU is faster per core, but the AMD CPU has more cores and threads, so if the physics test is really well threaded (which it should be) the AMD CPU would probably win.
 


I meant higher framerates in games... but yeah, synthetic benchmarks will be GPU dependent only.
I was thinking that his GPU was boosting higher due to thermal performance on the Intel rig (better case cooling, probably)... hence the better graphics scores.
 
Optimizing your Intel rig:
Frankly there's not much you can do that will make much difference. On my GTX1080 overclocking only made a 5% difference. That's not insignificant but it is hard to detect when gaming. I went from roughly 1900MHz to 2000MHz.

Your CPU will rarely be a bottleneck, and most of the time the R7-1800X won't be either. Even then it may just make the difference between getting, say, 80FPS and 90FPS at 1080p. Though of course some games, like GTAV at 1080p, are more CPU demanding than games like DOOM.

Higher resolutions like 2560x1440 are even less likely to be bottlenecked by the CPU.

The most important thing for gaming, aside from a GSYNC monitor, is simply to optimize the game settings, and that's actually not simple some of the time. Tweaking should always be based on which of the following sync options you use for a particular game:

1) VSYNC OFF
2) VSYNC ON
3) Adaptive VSYNC (or Adaptive Half VSync)
4) GSync

VSYNC ON should never be used if you drop below the target very often (i.e. 60FPS for a 60Hz monitor), as that causes added stuttering. VSYNC OFF produces screen tearing but has less lag... Also, the higher the refresh/FPS ratio, the less tearing you get, so 48FPS on a 144Hz panel means three screen refreshes for every one new frame.

(It may sound illogical, but a high FPS/refresh ratio can also appear to have less screen tear, like 180FPS at 60Hz. You get more frequent but smaller changes... I won't go into it, but you can just try it some time. I found 60FPS (average) at 60Hz in one game had a lot of obvious screen tear. On a friend's monitor, 60FPS at 144Hz had far less screen tear... I also tried 180FPS average at 60Hz, again with VSYNC OFF, and while I could look closely and see subtle flickering, it was not as obvious as 60FPS (ish) at 60Hz.)
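
The refresh/FPS ratios being talked about here are easy to sanity-check with a bit of arithmetic (the combinations below are just the examples from this post):

# Illustrative arithmetic for the tearing examples above: how many monitor
# refreshes occur per rendered frame (or new frames per refresh when FPS > Hz).
examples = [
    (48, 144),   # 48 FPS on a 144 Hz panel
    (60, 60),    # 60 FPS on a 60 Hz panel
    (60, 144),   # 60 FPS on a 144 Hz panel
    (180, 60),   # 180 FPS on a 60 Hz panel
]

for fps, hz in examples:
    if fps <= hz:
        print(f"{fps} FPS @ {hz} Hz: {hz / fps:.1f} refreshes per new frame")
    else:
        print(f"{fps} FPS @ {hz} Hz: {fps / hz:.1f} new frames per refresh "
              "(each refresh shows slices of several frames, so the tears are smaller)")

The same arithmetic is behind the half-refresh option further down: 144Hz / 2 = 72FPS as the lock target.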

Adaptive VSYNC is handy. I use it for a few games, like some Assassin's Creed games. In one of them I could not use VSYNC as I'd get sudden stutter if I dropped below 60FPS, but VSYNC OFF had lots of screen tear, so either option was flawed.

I then enabled Adaptive VSYNC when that first came out from NVIDIA (NVIDIA Control Panel -> Manage 3D Settings...) and tweaked the game settings to get 60FPS about 90% of the time... when it dropped below 60FPS that was usually for short stretches (some spike in GPU or CPU load), so the screen tear wasn't even very obvious but the stutter was gone. Yay!

Adaptive HALF refresh is most important for high refresh monitors that do NOT have adaptive sync like GSync or FreeSync. 144Hz, for example, is hard to hit for VSYNC, and if VSYNC OFF still shows screen tear, what other option does that leave you? Basically to lock to 72FPS via the half refresh option. That maintains 72FPS VSYNC ON if the GPU is sufficient but automatically disables VSYNC if not.

GSYNC is rather simple since screen tear is a non-issue. Games like Tomb Raider run pretty well at a 50FPS average with an adaptive-sync monitor, so that's a good number to tweak towards, or higher, to balance visuals vs responsiveness... for faster shooters maybe aim for 100FPS or higher.
 
Solution
"...I was thinking that his GPU was boosting higher due to themal performance on the Intel rig(better case cooling probably)... hence the better graphics scores."

Who knows, but we do know that the overclock was higher on the first system. That could be down to the person setting it up differently in software, thermals in the case (case cooling etc.), or even the power supply not delivering power in quite the same way.

I don't even know what the difference was, but there's been a trend where a lot of pre-overclocked models have at most 5% left in them. People actually COMPLAIN about this, but what you really want is for the card to automatically extract as much performance as it can... I don't want to fiddle with it.

Actually, we used to lose a LOT of performance on some cards because at one point they didn't even have fan control, so manufacturers had to err on the side of caution and keep the frequency way down so the worst case would still work. Years ago people could get 30% overclocks or more in some cases. Then fans, voltage and frequency kept getting optimized to tune themselves dynamically.
 
Aug 13, 2018
Thermals is a good shout, I agree, but he ran the Time Spy test with the side of his case off. Also, we didn't see the temps get particularly high.

I have found I get the best results overclocking the card with MSI Afterburner, so we used that on my friend's PC as well.

I can run +145 on the core clock and +485 on the memory, but when we tried that on his PC, Time Spy crashed.

After trying different settings we got it to pass the Time Spy test with a valid score, but at lower core and memory clocks.

Which just got me curious as to why the card can't run the same overclock settings in his PC.

We are going to try his GTX 1070 in my PC and see if I can stabilise a higher overclock than he can.

 


Modern graphics cards dynamically adjust their frequencies, so just because you choose a particular OFFSET doesn't mean you actually improve by that much.

If the difference is SMALL, I'd just err on the side of caution and pick frequencies for STABILITY. I played around with overclocking my GTX1080 (EVGA) for a long time and found that MOST games worked well with a particular overclock but a couple did not.

That maximum, unstable overclock was only 5% higher than the default settings for my card, so I just aimed for about halfway, since 2 to 3% wasn't worth the hassle.

It's not always clear what's preventing your max overclock, especially if it's a very small difference compared to another machine. If it doesn't appear to be thermals and there's no obvious CPU bottleneck (GPU usage stays above 95%), then it may be power stability, as the voltage jumps around quite a bit.
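
One way to narrow it down during the run that crashes (again just a sketch, assuming a reasonably recent NVIDIA driver whose nvidia-smi exposes the throttle-reason fields) is to poll what the card itself says is limiting the clocks:

# Sketch: print why the GPU reports it is limiting clocks (power cap, thermal
# slowdown) alongside the clock, power and temperature, once per second.
# The clocks_throttle_reasons.* fields may not exist on very old drivers.
import subprocess
import time

FIELDS = ("clocks.gr,power.draw,temperature.gpu,"
          "clocks_throttle_reasons.sw_power_cap,"
          "clocks_throttle_reasons.hw_thermal_slowdown")

for _ in range(120):  # about two minutes of samples
    sample = subprocess.run(
        ["nvidia-smi", "--query-gpu=" + FIELDS, "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(sample)
    time.sleep(1)

If the power-cap reason reads Active on one PC but not the other at the same offset, the limit is more likely power delivery than thermals.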