News: New FurMark Version 2.0 Arrives After 16 Years

Status
Not open for further replies.

bit_user

Titan
Ambassador
I didn't even realize the 1.x was still relevant.
The thing about FurMark is that it seems both highly parallel and 100% compute-bound. So it's basically guaranteed to find a GPU's peak temperature, since there's no bottlenecking on memory, occupancy, or anything else to throttle it.

Make     Model         GFLOPS   GB/s   FLOP/B
AMD      RX 6950 XT     38707    576    67.2
AMD      RX 7900 XTX    46690    960    48.6
Intel    UHD 770          825     90     9.2
Intel    A770           17203    560    30.7
Nvidia   RTX 3090 Ti    33500   1008    33.2
Nvidia   RTX 4090       73100   1008    72.5

I used base clocks, since we're talking about an all-core workload; some GPUs could well throttle even below base. The one exception was the UHD 770, which I believe will run at its peak clock as long as there isn't also a heavy CPU load.
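As a rough sanity check on where those GFLOPS figures come from, here's a sketch; it assumes the usual 2 fp32 FLOPs per shader per clock (one FMA), and the shader counts and base clocks are published specs as I recall them, so treat the exact values as approximate. (Different architectures use different per-clock factors, so I only show the entries I double-checked.)

Code:
# Rough sketch: peak fp32 GFLOPS ~= shaders x 2 FLOPs/clock (one FMA) x base clock in GHz.
# Shader counts and base clocks below are published specs; exact figures may differ slightly.
def peak_gflops(shaders, base_clock_ghz, flops_per_clock=2):
    return shaders * flops_per_clock * base_clock_ghz

print(peak_gflops(16384, 2.23))  # RTX 4090    -> ~73,100 GFLOPS
print(peak_gflops(10752, 1.56))  # RTX 3090 Ti -> ~33,500 GFLOPS
print(peak_gflops(4096, 2.10))   # A770        -> ~17,200 GFLOPS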

Note how bandwidth-limited GPUs are! I computed fp32 ops per byte, but a single fp32 value is actually 4 bytes, so the ratio per value fetched is 4x higher still. This is probably why the bigger L2/L3 caches have helped AMD and Nvidia so much over the past couple of generations. You can probably tell that if you alleviated those bandwidth limitations and had enough threads in flight, these GPUs could easily burn a lot more power than they do on typical games & apps.
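If you want to reproduce the last column of the table, it's just peak compute divided by peak bandwidth; a quick sketch (variable names are mine):

Code:
# Sketch: arithmetic intensity the hardware can sustain = peak compute / peak bandwidth.
# GFLOPS / (GB/s) has units of FLOPs per byte; multiply by 4 for FLOPs per fp32 value.
gpus = {
    "RX 6950 XT":  (38707,  576),
    "RX 7900 XTX": (46690,  960),
    "UHD 770":     (  825,   90),
    "A770":        (17203,  560),
    "RTX 3090 Ti": (33500, 1008),
    "RTX 4090":    (73100, 1008),
}
for name, (gflops, gbps) in gpus.items():
    flops_per_byte = gflops / gbps
    print(f"{name:12s} {flops_per_byte:5.1f} FLOP/byte "
          f"({flops_per_byte * 4:6.1f} FLOP per fp32 value read)")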

It should also help explain why datacenter GPUs have something like 3x the memory bandwidth of high-end desktop models, but not a lot more raw compute: in HPC-type workloads, you're more often doing things like reading 2 values and outputting a 3rd.
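To put a number on "read 2 values, output a 3rd": a STREAM-triad-style kernel like a[i] = b[i] + s*c[i] does about 2 FLOPs per 12 bytes moved, i.e. roughly 0.17 FLOP/byte, a tiny fraction of the balance points in the table. A quick sketch (the kernel is just my illustration, not anything FurMark runs):

Code:
# Sketch: arithmetic intensity of a STREAM-triad-style kernel, a[i] = b[i] + s * c[i].
# Per element: 2 FLOPs (multiply + add) and 3 fp32 values moved = 12 bytes.
triad_intensity = 2 / 12             # ~0.17 FLOP/byte

rtx4090_balance = 73100 / 1008       # ~72.5 FLOP/byte, from the table above
print(f"triad needs {triad_intensity:.2f} FLOP/byte; "
      f"an RTX 4090 can consume {rtx4090_balance:.1f} FLOP/byte, "
      f"so such a kernel uses ~{triad_intensity / rtx4090_balance:.1%} of peak fp32 compute")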
 
I didn't even realize the 1.x was still relevant.


People used to fry their graphics cards with this! In my first two days with the 6700 XT, I used FurMark to burn in the card and lose some coil whine in the process. You can set the resolution and change the FPS behavior, but the power draw is always at max: my card gives a little more than 175 W when gaming, and in FurMark it goes way beyond that limit!

*edit: SP


[attached screenshot: OC score]

 

fireaza

Distinguished
...How is it that a GPU stress-test from 2007 can still be relevant anyway? If it was designed to stress GPUs from 2007, then surely it's a cakewalk for 2023 GPUs? Or if it's still stressful for 2023 GPUs, then how on Earth did 2007 GPUs not instantly catch fire while rendering at 1 frame per hour?
 

bit_user

Titan
Ambassador
...How is it that a GPU stress-test from 2007 can still be relevant anyway?
It just does a lot of shader arithmetic. No matter how fast a GPU is, a benchmark like that can saturate it, although you might get absurd framerates like 900 fps.
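Back-of-the-envelope version (the per-frame cost below is a made-up placeholder, picked so the fast GPU lands near that 900 fps figure):

Code:
# Sketch: if each frame costs a fixed amount of shader math, FPS just scales with GFLOPS
# while shader utilization stays pinned at 100%. The per-frame cost is a made-up number.
gflops_per_frame = 80   # hypothetical shader work per frame at some fixed resolution

for name, gflops in [("hypothetical 2007-era GPU (~500 GFLOPS)", 500),
                     ("RTX 4090 (73100 GFLOPS)", 73100)]:
    print(f"{name}: ~{gflops / gflops_per_frame:.0f} fps, both at full load")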

If it was designed to stress GPUs from 2007, then surely it's a cakewalk for 2023 GPUs?
You could also say that about cryptomining. Newer GPUs do it faster, but a GPU can never do it too fast.

if it's still stressful for 2023 GPUs, then how on Earth did 2007 GPUs not instantly catch fire while rendering at 1 frame per hour?
Because GPUs typically ramp up their fans and then throttle their clock speeds as their temperature increases or their power limit is exceeded.
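Here's a toy model of that control loop (purely illustrative; the thresholds and step sizes are invented, not any vendor's actual firmware logic):

Code:
# Toy model of fan-ramp-then-throttle behavior. All thresholds and steps are invented.
def adjust(temp_c, power_w, fan_pct, clock_mhz,
           temp_target=83, power_limit=350, clock_min=1000):
    if power_w > power_limit:
        clock_mhz = max(clock_min, clock_mhz - 50)      # over power limit: shed clocks now
    elif temp_c > temp_target:
        if fan_pct < 100:
            fan_pct = min(100, fan_pct + 10)            # getting hot: spin the fan up first
        else:
            clock_mhz = max(clock_min, clock_mhz - 50)  # fan already maxed: throttle
    return fan_pct, clock_mhz

print(adjust(temp_c=90, power_w=300, fan_pct=100, clock_mhz=2200))  # -> (100, 2150)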
 