Guys,
We live in a world where all our techno-gadgets operate based upon engineering standards and specifications. As such, when engineers develop specifications, they do so using well-established standards and consistent test procedures that normalize, minimize or eliminate variables.
Conversely, most users approach thermal testing in a haphazard fashion without any organized or logical methodology, which is why there's so much confusion in our computer enthusiast community concerning the topic of processor temperatures. Numbers get flung around like gorilla poo in a cage.
Unfamiliar terminology and specifications, misconceptions and widespread misinformation, and conflicting opinions and inconsistent test procedures leave users uncertain of how to properly check cooling performance. Moreover, when ambient temperature isn't mentioned, and load and idle test conditions aren't defined, the processor temperatures you see on various websites and forums can be highly misleading.
“Stress” tests vary widely and can be characterized into two categories: stability tests, which are fluctuating workloads, and thermal tests, which are steady workloads. Utilities that don't overload or underload your processor will give you a valid thermal baseline. Here's a comparison of utilities grouped as thermal and stability tests according to % of TDP, averaged across six processor generations at stock settings, rounded to the nearest 5%:
Although these tests range from 70% to 130% TDP workload, Windows Task Manager interprets every test as 100% CPU Utilization, which is processor resource activity, not actual %TDP workload.
Processor temperatures respond directly to Power consumption (Watts), which is driven by workload. As workload increases, CPU resource activity can only increase to a maximum of 100% Utilization, while CPU Power consumption can increase well above 100% TDP, especially if overclocked. So when observing thermal performance, it's much more relevant to monitor Power consumption than CPU Utilization.
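To make the distinction concrete, here's a minimal sketch of the %TDP arithmetic. The TDP and Package Power figures below are hypothetical examples chosen to be consistent with the percentages discussed in this post; substitute your processor's specified TDP and an actual Package Power reading from a monitoring utility.

```python
# Sketch: % TDP workload vs. % CPU Utilization.
# Temperatures track Power consumption (Watts), not Utilization, so
# %TDP is the more relevant metric. The wattages below are hypothetical
# illustration values, not measurements.

def percent_tdp(package_power_watts: float, tdp_watts: float) -> float:
    """Express measured Package Power as a percentage of TDP."""
    return package_power_watts / tdp_watts * 100

TDP = 125.0  # example: a processor specified at 125 Watts

# Each of these would show 100% CPU Utilization in Task Manager,
# yet the actual workloads span well below and above 100% TDP:
examples = [
    ("Light stress test", 100.0),   # underload
    ("Balanced stress test", 125.0),  # ~100% TDP thermal baseline
    ("Heavy AVX stress test", 162.5),  # overload
]
for name, watts in examples:
    print(f"{name}: {percent_tdp(watts, TDP):.0f}% TDP at 100% Utilization")
```

The point of the sketch: Utilization saturates at 100%, while Power (and therefore temperature) keeps climbing with heavier instruction mixes.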
ManOfArc, to address your question, as you can see in the above table, IntelBurn Test is about a 110% workload. Since IBT is actually an overload, I would recommend that you do not use it. Instead, the CineBench R23 Multi Core test and Prime95 Small FFTs with all AVX test selections disabled are appropriate for testing at 100% workload. And as a footnote, IBT was not authored by Intel; it was instead authored by someone with ample arrogance to call himself "AgentGOD". Also, the fluctuating workload in IBT is similar to LinX and Linpack.
Concerning processor temperatures, there are three major variables: environment, hardware and software, all of which must be accounted for when discussing this topic. Additionally, since processor temperatures are directly related to standards and specifications, it is absolutely critical to be very specific, such as clearly stating ambient temperature and exact test conditions. It is not nearly adequate to simply say "just run OCCT" or "just run AIDA64" or "just run Prime95" as most users do, because of the number of variables involved.
For example, AIDA64 has 4 CPU related tests (CPU, FPU, Cache, Memory) which have 15 possible combinations that produce 15 different workloads and 15 different processor temperatures. OCCT and Prime95 both have steady-state and fluctuating AVX / AVX2 / AVX-512, and non-AVX workloads, each combination of which can have drastically different effects on Power consumption and processor temperatures. When users fling numbers around and don't bother to precisely define their test conditions, what could have been a meaningful apples-to-apples comparison only results in thermal fruit salad in a blender, which makes the topic about as clear as mud.
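The "15 combinations" figure is just the count of non-empty subsets of AIDA64's four CPU-related tests, which a few lines of Python can enumerate (the workloads and temperatures each combination produces are, of course, hardware-dependent):

```python
# Sketch: why AIDA64's four CPU-related tests yield 15 combinations.
# Any non-empty subset of the four tests can be selected together,
# and each subset drives a different workload and temperature.
from itertools import combinations

tests = ["CPU", "FPU", "Cache", "Memory"]
subsets = [combo for r in range(1, len(tests) + 1)
           for combo in combinations(tests, r)]

print(len(subsets))  # 4 + 6 + 4 + 1 = 15
for combo in subsets:
    print(" + ".join(combo))
```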
If OCCT's first test, called "CPU", is configured for Small Data Set, Normal Mode, Steady Load and SSE Instruction Set (no AVX), then it's a steady-state workload at more than 97% that's nearly identical to the 100% workload of Prime95 Small FFTs with all AVX test selections disabled. Although the Multi Core workload in CineBench R23 (as well as R20) is relatively steady at 100%, since it pauses between rendering cycles, it is not a purely steady-state workload. However, when correctly configured as described above, these three test utilities all produce workloads within just a few Watts of one another, as well as processor temperatures within a degree or so of one another. If you require proof, then run these tests for yourself and compare Power consumption and temperatures.
In recent games with AVX, as well as real-world apps with AVX such as those used for rendering or transcoding, the AVX code is less intensive than Prime95 or OCCT with AVX. So when heavy, fluctuating AVX workloads in games or apps spike to "peak" Power consumption, Core temperatures will typically approach, but not exceed, P95 Small FFTs (no AVX) or OCCT (no AVX). As drea.drechsler has previously pointed out, the CineBench R23 Multi Core test shown above in the %TDP table is a good example of a utility which replicates heavy, real-world AVX workloads. If you just game and never use your rig for highly demanding workloads such as rendering or transcoding, then it may be more appropriate to test using CPU-Z - Bench - Stress CPU, which is nearly an 80% workload.
Regardless, simply watching numerical values does not reveal the big picture; to gain a better perspective and understanding of the nature of each workload, it is always best to observe Power consumption and thermal behavior with utilities that provide graphs.
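Even without graphing software, a logged Power trace makes the steady vs. fluctuating distinction obvious. Here's a sketch using synthetic power samples (invented for illustration, not measurements); in practice you would export a sensor log from a monitoring utility and summarize or plot its Package Power column over time:

```python
# Sketch: telling a steady-state workload from a fluctuating one by
# summarizing logged Power consumption. The two 60-second "logs" below
# are synthetic illustration data at 1 sample per second (Watts).
from statistics import mean, stdev
import math

steady = [125.0 + 0.5 * math.sin(t) for t in range(60)]            # flat trace
fluctuating = [95.0 + 40.0 * abs(math.sin(t / 5)) for t in range(60)]  # sawtooth-like trace

for name, log in [("steady", steady), ("fluctuating", fluctuating)]:
    print(f"{name}: mean {mean(log):.1f} W, swing +/-{stdev(log):.1f} W")
```

A steady workload shows a swing of well under a Watt; a fluctuating workload swings by tens of Watts at a similar average, which is exactly what the graphs reveal at a glance.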
CT
