[SOLVED] Is it safe to use Intel Burn Test on an AMD FX CPU?

ManOfArc

Reputable
Jul 8, 2017
308
7
4,785
0
I have an FX-8350 (yeah, I know) that I am overclocking (4.7GHz).
I notice if I run Prime95 (default settings), the core temp stays around 58C, give or take, with no apparent throttling. But if I use IBT, the temps go up to 70C or more and the CPU cores will then randomly throttle down to 1408 MHz and back up to 4715 MHz. The OC passes both stress tests, but not w/o throttling using IBT. Why? Should I just stay with P95?

FX-8350
212 EVO w/push-pull
Asus M5A99X-EVO
16GB DDR3 1866
RX 590
 

CompuTronix

Intel Master
Moderator
Guys,

We live in a world where all our techno-gadgets operate based upon engineering standards and specifications. As such, when engineers develop specifications, it is done using well established standards and consistent test procedures which either normalize, minimize or eliminate variables.

Conversely, most users approach thermal testing in a haphazard fashion without any organized or logical methodology, which is why there's so much confusion in our computer enthusiast community concerning the topic of processor temperatures. Numbers get flung around like gorilla poo in a cage.

Unfamiliar terminology and specifications, misconceptions and widespread misinformation, and conflicting opinions and inconsistent test procedures leave users uncertain of how to properly check cooling performance. Moreover, when ambient temperature isn't mentioned, and load and idle test conditions aren't defined, the processor temperatures you see on various websites and forums can be highly misleading.

“Stress” tests vary widely and can be grouped into two categories: stability tests, which are fluctuating workloads, and thermal tests, which are steady workloads. Utilities that don't overload or underload your processor will give you a valid thermal baseline. Here’s a comparison of utilities grouped as thermal and stability tests according to % of TDP, averaged across six processor Generations at stock settings and rounded to the nearest 5%:

[Table: %TDP workload by utility, thermal vs stability tests]
Although these tests range from 70% to 130% TDP workload, Windows Task Manager interprets every test as 100% CPU Utilization, which is processor resource activity, not actual %TDP workload. Processor temperatures respond directly to Power consumption (Watts), which is driven by workload. As workload increases, CPU resource activity can only increase to a maximum of 100% Utilization, while CPU Power consumption can increase well above 100% TDP, especially if overclocked. So when observing thermal performance, it's much more relevant to monitor Power consumption than CPU Utilization.
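The %TDP idea above is just measured package power divided by rated TDP. A minimal sketch (the 162.5 W reading is an invented illustrative number, not a measurement):

```python
def pct_tdp(package_watts: float, tdp_watts: float) -> float:
    """Workload expressed as a percentage of the processor's rated TDP."""
    return package_watts / tdp_watts * 100.0

# The FX-8350 is rated at 125 W TDP. A hypothetical 162.5 W draw under
# load would be a 130% TDP workload, even though Task Manager would
# still show only 100% CPU Utilization:
print(round(pct_tdp(162.5, 125.0)))  # -> 130
```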

ManOfArc, to address your question, as you can see in the above table, IntelBurnTest is about a 110% workload. Since IBT is actually an overload, I would recommend that you not use it. Instead, the CineBench R23 Multi Core test and Prime95 Small FFTs with all AVX test selections disabled are appropriate for testing at 100% workload. And as a footnote, IBT was not authored by Intel; it was instead authored by someone with ample arrogance to call himself "AgentGOD". Also, the fluctuating workload in IBT is similar to LinX and Linpack.

Concerning processor temperatures, there are 3 major variables: environment, hardware and software, all of which must be accounted for when discussing this topic. Additionally, since processor temperatures are directly related to standards and specifications, it is absolutely critical to be very specific, such as clearly stating ambient temperature and exact test conditions. It is not nearly adequate to simply say "just run OCCT" or "just run AIDA64" or "just run Prime95" as most users do, because of the number of variables involved.

For example, AIDA64 has 4 CPU related tests (CPU, FPU, Cache, Memory) which have 15 possible combinations that produce 15 different workloads and 15 different processor temperatures. OCCT and Prime95 both have steady-state and fluctuating AVX / AVX2 / AVX-512, and non-AVX workloads, each combination of which can have drastically different effects on Power consumption and processor temperatures. When users fling numbers around and don't bother to precisely define their test conditions, what could have been a meaningful apples-to-apples comparison only results in thermal fruit salad in a blender, which makes the topic about as clear as mud.

If OCCT's first test, called "CPU", is configured for Small Data Set, Normal Mode, Steady Load and SSE Instruction Set (no AVX), then it's a steady-state workload at more than 97% that's nearly identical to the 100% workload of Prime95 Small FFTs with all AVX test selections disabled. Although the Multi Core workload in CineBench R23 (as well as R20) is relatively steady at 100%, since it pauses between rendering cycles, it is not a purely steady-state workload. However, when correctly configured as described above, these three test utilities all produce workloads within just a few Watts of one another, as well as processor temperatures within a degree or so of one another. If you require proof, then run these tests for yourself and compare Power consumption and temperatures.
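One way to run that comparison yourself is to log power during each test and average the samples. A minimal sketch, assuming a CSV export from your monitoring tool; the column name used here is an assumption and will differ between tools:

```python
import csv
from statistics import mean

def avg_package_power(csv_path: str, column: str = "CPU Package Power [W]") -> float:
    """Average the logged package-power samples from one test run."""
    with open(csv_path, newline="") as f:
        samples = [float(row[column]) for row in csv.DictReader(f) if row.get(column)]
    return mean(samples)

# Compare, e.g., avg_package_power("p95_smallfft.csv") against
# avg_package_power("occt_sse_steady.csv"); per the post above, correctly
# configured runs should land within a few Watts of each other.
```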

In recent games with AVX, as well as real-world apps with AVX such as those used for rendering or transcoding, the AVX code is less intensive than Prime95 or OCCT with AVX. So when heavy, fluctuating AVX workloads in games or apps spike to "peak" Power consumption, Core temperatures will typically approach, but not exceed P95 Small FFTs (no AVX) or OCCT (no AVX). As drea.drechsler has previously pointed out, the CineBench R23 Multi Core test shown above in the %TDP table is a good example of a utility which replicates heavy, real-world AVX workloads. If you just game and never use your rig for highly demanding workloads such as rendering or transcoding, then it may be more appropriate to test using CPU-Z - Bench - Stress CPU, which is nearly an 80% workload.

Regardless, simply watching numerical values does not reveal the big picture; to gain a better perspective and understanding of the nature of each workload, it is always best to observe Power consumption and thermal behavior with utilities that provide graphs.

CT :sol:
 
Reactions: SamirD
Why does it matter? Do you run Intel Burn Test all day long or use it for something productive...or entertaining? I'd like to think the stories of people stress-testing their CPUs to death are just myth.

If it runs it at all that's good enough, now move on to the fun stuff.
 
Last edited:

sonofjesse

Reputable
Jul 27, 2016
392
76
4,790
9
I think a lot of people spend more time benchmarking and stress testing than using the PC. The older I get, the more happy I am when I press the power button that the PC powers on and works.....lol.

8350 is a good chip, that is what my grandma has. I would just run it and be happy, and run it until it dies. I don't think you will see these temps with normal use, just stress testing.
 
Depending on your use case, sometimes stress testing is vital. I have some systems that run in an 80F environment. Stress testing them prior to deployment was crucial to make sure the cooling would keep up at the highest loads. I wouldn't have even worried if they were in a 30F environment.
 
80F is 26C. That's not really that hot of an environment for a PC.

30F is -1C, which you won't have as a room temperature.
Actually, that matters considering the cpu can sit +20C above the ambient temp even without load.

Oh, you haven't been to Canada have you? There are guys that run rack servers in sheds in the backyard. They can end up being far colder than 30F.
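For reference, the Fahrenheit figures in this exchange convert with the standard formula C = (F - 32) x 5/9:

```python
def f_to_c(f: float) -> float:
    """Convert Fahrenheit to Celsius."""
    return (f - 32.0) * 5.0 / 9.0

print(round(f_to_c(80), 1))  # -> 26.7 (a warm but ordinary room)
print(round(f_to_c(30), 1))  # -> -1.1 (below freezing)
```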
 

Karadjgne

Titan
Ambassador
Stop please.
Throw out all of the above, it's absolutely useless.
You have an FX cpu. It has no thermal reader on the core. It's physically impossible to get an accurate temperature of something without that reader.

There's only 1 way to get any semblance of thermal performance from an FX cpu and that's with thermal margins, as read by AMD Overdrive.

So throw out any pre-conceived notions or ideas of an actual temperature, it does not apply to FX. Thermal margins are an algorithm taken from voltages, loads, cores used and other information and mixed up into a number representing the amount of thermal headroom you have left in the cpu. Change any condition and the number changes. But it is not a temperature as such.

With Intels it's like having a measuring cup of 100ml, and temperature represents exactly how much water to the ml is in the cup. With thermal margins, it's how much space is left, not measured in ml. Use one more core, add 0.1 more volts, increase usage 10% more and that space gets smaller. If that space hits 0, there's no more thermal headroom, the cores are maxed.

A TM in the 40's is great; that's idle/low/light loads. By the 30's you're running multiple tabs or light gaming. By the 20's you're doing a decent amount of work and the cpu is quite warm. By the teens you're gaming hard and the cpu is toasty warm. By 10 or under you're cooking and reaching the limits of cooling capacity. By 0 or negatives, you're past the point of shutdowns and possibly damaging the cpu.

That's how to measure temps on an FX. Not by the exact number itself, but by what that number represents. Trying to equate measurements taken from a cpu that has no way of reading a temperature is not going to turn out as expected. Ever.
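For what it's worth, those rough margin bands could be encoded as a toy lookup; the labels paraphrase this post, not any AMD specification:

```python
def describe_margin(tm_c: float) -> str:
    """Map an AMD Overdrive thermal margin (in C) to the rough bands above."""
    if tm_c >= 40:
        return "idle / light load"
    if tm_c >= 30:
        return "multiple tabs, light gaming"
    if tm_c >= 20:
        return "decent workload, cpu quite warm"
    if tm_c >= 10:
        return "heavy gaming, cpu toasty"
    if tm_c > 0:
        return "cooking, near cooling limits"
    return "no headroom left: shutdown/damage territory"

print(describe_margin(25))   # -> decent workload, cpu quite warm
print(describe_margin(4.5))  # -> cooking, near cooling limits
```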

It's a guaranteed fact that your cores are not reading 58°C with Prime95 and 70°C with IBT, and not throttling with both, not when the TjMax for all the FX cpus is 62°C. TjMax being the exact thermal limit of the cores as measured using engineering samples by AMD technicians and engineers. At 58°C, that'd equate to 96-98°C on an Intel, guaranteed throttling, and 70°C would be closer to 110°C+, well into shutdown procedures 5 minutes ago. The numbers are wrong. Totally.

Get AMD Overdrive, check your TM. See what kind of room you have left with those programs. That'll tell you what's safe and dangerous and that's all that matters, not the exact number.
 
Last edited:

ManOfArc

Reputable
Jul 8, 2017
308
7
4,785
0
Running a fan at 100% is never useless when it comes to cooling.
Thank you for that explanation. AMD Overdrive showed a thermal margin fluctuating between 24C and 4.5C. (yes, I need better cooling) But the clock speed never dropped. All cores stayed at 4716 MHz. So... no throttling. :giggle:
But no one answered my question; is it safe to use INTEL Burn Test on an AMD CPU?
 
I think a lot of people spend more time benchmarking and stress testing than using the PC. The older I get, the more happy I am when I press the power button that the PC powers on and works.....lol.
....
LOL...yeah, that's my take on it too. I used to stress stupidly over the stupid stress tests my system would fail...now I'm happy, if it turns on.

It took me a while, but I've gotten smarter about stress tests. These EXTREME, synthetic tests (Intel Burn Test, Prime95, AIDA64, etc.) are just dumb to run the way they say to. ESPECIALLY with modern processors, they are completely unrealistic with such tightly looped AVX instructions. Just run them a few minutes to see if it's stable and be done, regardless of the cooling or temps it's getting to.

But it IS a good idea to know your thermal capability; just do it by running a real-world test. The best I know of is Cinebench R23: it has a 10 minute and a 30 minute test that's easy to run with no special setup. It's real-world, using actual production rendering routines on a real image. Your system MUST be thermally stable through that, because that's what it should be able to do in a real-world application.

Another good one (free) is H.264 or H.265 transcoding using Handbrake, but you have to spend some time to set it up and have a decent sized video to transcode.
 
Last edited:

Karadjgne

Titan
Ambassador
It's just software. There's nothing saying that just because it has Intel in the name it can't be used on FX.

As far as the software itself goes, IBT is roughly a 130% load on the cpu, and Prime95 small fft is 100% load, so there will be a difference in temp.

But since you have a TM of mid 20's and single digits, you are fine. Even running Blender or other ridiculously cpu-killing editing software isn't going to equal what IBT is capable of.

@drea.drechsler to be fair, Prime95 isn't a stress test. It's a viable program for finding Mersenne primes that happens to include a stress/temp tester to validate stability and thermal ability of the pc prior to a bunch of bsod's happening.

So I could see it getting actual daily or even constant use by a select few really oddball people.
 
Last edited:
... to be fair, Prime95 isn't a stress test.
....
Ahhhh..but I disagree, even though I agree it does do real 'work', turning out contributions to the Mersenne prime search when participating in the distributed project. The thing most anybody discovers about distributed computing projects is you HAVE to de-rate any overclock or you end up contributing garbage results after days (or weeks) of calculations.

In the 'torture test' mode, and ESPECIALLY when running 'small FFT's only' (classically the preferred test for most overclockers to accept stability claims), it's quite purposefully running far more efficiently than it does running normally, in order to return quick results and facilitate derating. I agree there's a practical purpose for it to help de-rate your system, so yes it's useful for that. But you have to be careful what you're testing, as 'large FFT's only' can be pretty darn easy on the system and not useful either.

Balanced just cycles between them and you're guaranteed to come across a bunch of 'small fft's' that run at least "130% of load". That just doesn't ever happen when transcoding a video or rendering an image.

What it comes down to is the 'torture testing', which is what I'm referring to, is pretty much synthetic and not representative of real-world activity, not even its own 'real-world' when running distributed work units that have far more variability.
 
Last edited:

Karadjgne

Titan
Ambassador
True. But you have to draw a line somewhere. And the small fft test gives a good clear load (unlike Aida or OCCT) which is sort of important for baseline comparisons with AIOs that run 15-30 minutes to stabilize.

It's even more important to have a single standard, something anyone can interpret and recognise. Saying you get 75° in p95 sfft, that's easy. Golden. Cooling is good. Even pushing games hard you'll not see that number, or very far beyond in AVX heavy games.

But with the variables in Aida, OCCT, IBT, CB versions, who uses linpack, who doesn't, what degree of ram is used, what degree of cpu is used etc., nothing is solid. There's no base for comparison to real world.

Prime is good for temps, that's basically it. The newer versions changed how they did things, use different coding etc, same results, but it's not really reliable for stability. Cinebench, Asus RealBench, even Aida are much better for stability now.

It's like saying a pitcher has a 100 mph fastball. That's only true 6 times out of 10; 3 times out of 10 it's 99 mph, and once it's 101 mph, but it's a good number to compare against the backup pitcher's 96 mph fastball. It doesn't mean his curve ball is that fast, or his slider, or that he'll throw 100 mph balls the entire 9 innings; he might only throw 3 of them. It's a measure of what's possible, a baseline to judge from, if not necessarily a 'real world' number.
 
Last edited:
True. But you have to draw a line somewhere. And the small fft test gives a good clear load (unlike Aida or OCCT) which is sort of important for baseline comparisons with AIOs that run 15-30 minutes to stabilize.
....
Prime is good for temps, that's basically it. The newer versions changed how they did things, use different coding etc, same results, but it's not really reliable for stability. Cinebench, Asus RealBench, even Aida are much better for stability now.
I'm not really concerned if it can't pass an extended prime95 temp test as I'll never run into that in a real-world scenario. What I want to know is I can give it any video transcode job and never worry about coming back after 1/2 hour to a BSOD. So I'm going to run CB23, which is about as bad a real-world job as any I can imagine, for at least 1/2 hour before saying a tweak is good.

I do look for an adequate temp margin here...and adequate is purely judgmental, but I'm not looking to convince anyone else it's good.

One thing I DO use P95 for is to test out my 3700's PBO performance when tweaking it. PBO leaves it to the algorithm to pull clocks back as it heats up, so the right FFT size (128K, the hottest one I've found) makes a very good test to check the clock it drops to at highest temp. I like to use it because the code it's processing is so darn tightly looped and unvarying that the clocks stay very stable. Or about as stable as Ryzen can be! It can be useful, and that's one way I use it.
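That clock stability is easy to quantify from logged samples; a sketch with invented numbers, not real Ryzen telemetry:

```python
from statistics import mean, pstdev

# Hypothetical per-second core clocks logged during a fixed 128K-FFT run:
samples_mhz = [4192, 4190, 4193, 4191, 4192]

# A small standard deviation relative to the mean is what "very stable
# clocks" means here; a fluctuating workload would show a much wider spread.
print(f"mean {mean(samples_mhz):.0f} MHz, stdev {pstdev(samples_mhz):.1f} MHz")
```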
 

Karadjgne

Titan
Ambassador
Exactly. Using Prime thermally, with a stable output, is a perfect use on a Ryzen; trying to do that with Aida or OCCT, which both use varying amounts of linpack per test, would have your clocks all over the place. And running hotter, with a stable heat output, than anything your transcoding will produce simply means that you'll not get a thermal bsod no matter what transcode workload you run.

You won't get that same assurance with CB, since it too is a variable according to actual render workload, those little squares in that breakfast nook don't all render at the same speeds.

So passing an extended Prime run can be a benefit, if it stays within acceptable limits (whether those are based on opinion or fact). If extended Prime is hitting 100°C or north of that, obviously you'd have something to think about when it comes to cooling solutions and a 4hr transcode.

Just looking at it from a different perspective. You happen to use the right tool for your job. I'm just saying I agree with you, and that it can have wider implications and applications, but at the same time, limitations.
 

