[SOLVED] FPS & temp difference between two PCs?

Jul 30, 2021
Hi,
I’ve been having trouble with my PC crashing, and I’ve come to the conclusion that my GPU (MSI Duke 1070 Ti) is a dying card, after experiencing more severe problems like the PC booting while the card sometimes doesn’t (very inconsistent), and blue dots now appearing on screen.

To single out the problem, I inserted my brother’s GPU into my PC (because mine doesn’t fit in his build). He has an MSI 1070 Gaming X 6GB. I used the Heaven benchmark as a combined stress test and FPS test, and found that his GPU runs better in his build than in mine, although our PC components are roughly the same, if not better in mine.
We have different monitors, but the test was run several times on Ultra settings with everything else disabled, all at 1920×1080.
Usually I use a DP cable and run at 1440p on my 1070 Ti, but for the sake of the test I used only HDMI and only 1080p.


My specs:
i7-8700K (no OC), 3.7 GHz base / 4.7 GHz turbo
Samsung 970 Evo NVMe PCIe M.2 250GB
HyperX Fury DDR4 (2×8GB)
B360M PLUS (we have identical motherboards)
Monitor: Dell 32-inch 165Hz

His specs:
i7-8700, 3.2 GHz base / ~4.3 GHz average turbo
Samsung 970 Evo NVMe PCIe M.2 500GB (storage is practically full)
Corsair Vengeance LPX DDR4-2400 C14 (2×8GB)
Monitor: Acer KG271


Differences I observed:
Monitor sizes differ: 27-inch vs 32-inch.
My monitor wasn’t capped at 60Hz, though I believe HDMI doesn’t allow a higher refresh rate anyway.

Test results after a few runs:
My build: average 79.8 FPS, 64 °C
His build: average 85.6 FPS, 61 °C

I can’t understand how there is a 3-degree temperature difference for the same card in similar builds, along with an average 5 FPS increase at the lower temperature. I’m no expert, but my CPU can’t be bottlenecking, since it’s the superior version of his CPU and should, if anything, deliver better results.
My case is actually less cramped than his, and I have 3 case fans while he has 2.
Both tests were done with the side panel removed; room temperature was a cool 18 °C, with no sunlight and the windows open in both rooms.

The sole reason for trying this test was to see whether my GPU alone is to blame for the crashes. Even though no crash occurred, the temperature and performance differences alarm me, because my card consistently used to sit in the mid-to-high 70s when gaming for longer periods. After two weeks of doing my head in with different tests and solutions, I no longer know what the issue could be.

If anyone could potentially find some answers or solutions to all this mess that would be greatly appreciated.
Thanks - M
 
What version of Windows are you both on? In the taskbar search, type winver and it will tell you what version of Windows you are on.
Are all drivers up to date?
Did you DDU your 1070 Ti drivers and reinstall the 1070 drivers?
What driver version is his 1070 running in his computer?
What PSU are you both running (make and model)?
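To record the OS version the same way on both machines, a quick cross-platform check (a sketch assuming Python is installed; winver itself is Windows-only) could be:

```python
import platform

# Report OS name, release, and full build string for this machine,
# so the two rigs' versions can be compared side by side.
print(platform.system(), platform.release(), platform.version())
```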
 
Another thing to consider, besides the obvious, is 970 Evo performance.

You have the 970 EVO 250GB, while your friend has the 970 EVO 500GB version.

Aside from capacity, there is an actual difference in performance due to differences in the underlying hardware.

RANDOM READ (4KB, QD32)
250GB: Up to 200,000 IOPS
500GB: Up to 370,000 IOPS

There is a similar difference in write performance.

RANDOM WRITE (4KB, QD32)
250GB: Up to 350,000 IOPS
500GB, 1,000GB: Up to 450,000 IOPS
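For scale, those spec-sheet figures work out to roughly a 1.85× read and 1.29× write advantage for the 500GB model; a quick check using the IOPS numbers quoted above:

```python
# Samsung 970 EVO spec-sheet random I/O figures (4KB, QD32)
read_iops = {"250GB": 200_000, "500GB": 370_000}
write_iops = {"250GB": 350_000, "500GB": 450_000}

read_gain = read_iops["500GB"] / read_iops["250GB"]
write_gain = write_iops["500GB"] / write_iops["250GB"]
print(f"500GB advantage: {read_gain:.2f}x read, {write_gain:.2f}x write")
# -> 500GB advantage: 1.85x read, 1.29x write
```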
 
So if I understand correctly, the SSD performance explains the FPS difference, but what about the temperature difference? I don’t fully understand how one build produces fewer frames yet runs hotter.
 
Better-performing PC: Windows 10 version 2004 (OS build 19041.1110)

Worse-performing PC: Windows 10 version 20H2 (OS build 19042.1110)
 
It would be interesting to watch HWMonitor while running CPU-Z’s built-in CPU benchmark and compare scores between the two rigs, then simply run 'Stress CPU' for 10-20 minutes within CPU-Z to note the clock speeds each sustains. Power limits, boost duration limits (when enabled), and RAM speeds on the motherboards can drastically affect a CPU’s clock speed and throughput. An i7-8700 following Intel’s stock behavior and 65W TDP should likely only sustain an all-core turbo well below that of an 8700K, with several hundred MHz difference in observed clock speeds under load, even though the advertised peak turbo figures don’t make them sound all that different. (Many falsely claimed there was no difference if one did not intend to overclock, etc.)

One really cannot compare differences with differing resolutions, GPUs, etc.

(The SSD should matter next to nothing, save for how long the benchmark takes to actually load and begin, time between game levels, etc.)
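If the suggested stress run is logged (HWMonitor can record readings over time), averaging the sampled clocks makes the sustained gap concrete. A minimal sketch, with made-up sample values standing in for real logged data:

```python
# Hypothetical all-core clock samples (MHz), one per second,
# taken during a sustained stress run on each rig.
clocks_8700k = [4300, 4300, 4290, 4310, 4300]
clocks_8700 = [4000, 3950, 3980, 3960, 3970]

avg_k = sum(clocks_8700k) / len(clocks_8700k)
avg_non_k = sum(clocks_8700) / len(clocks_8700)
print(f"8700K sustains {avg_k:.0f} MHz vs {avg_non_k:.0f} MHz "
      f"({avg_k - avg_non_k:.0f} MHz gap)")
# -> 8700K sustains 4300 MHz vs 3972 MHz (328 MHz gap)
```

A gap of a few hundred MHz in this comparison would confirm the power-limit explanation above.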
 