FX6300 + M5A78LM-X Overclock

Aug 4, 2018
Hi, this is my first post, so if I'm not doing it correctly let me know. Anyway, I was wondering about my overclock because something seems a little weird.

As I'm sure many know, as far as I can tell you can't adjust the vcore voltage on this board, and the VRM cooling is bad. Honestly I don't know, but anyway:

I got a stable overclock up to 4.6 GHz with a cheap DeepCool cooler (an Ice Blade 100, I think) and I'm worried something is off. People have said you can't get overclocks like that on this chip without extreme heat and vcore, but I'm only hitting about 67°C max with this overclock. For reference, the cooler is $10.

Anyway, is there a chance something is wrong, or do I just have a lucky CPU? Thanks.
 

Most likely APM isn't disabled, so the CPU will stay within its TDP (95 W, in the case of a 6300) regardless of how high you clock it. What that means is it is constantly throttling back when put under any kind of load, so it never has a chance to put out much heat.
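To get a feel for why APM clamps the clocks, here's a rough back-of-the-envelope estimate. Dynamic power scales roughly linearly with frequency and with the square of voltage; the ~3.5 GHz / ~1.3 V stock figures are assumptions for illustration, not measurements from this chip:

```python
# Rough dynamic-power scaling: P ~ P_stock * (f/f_stock) * (V/V_stock)^2.
# Stock figures for an FX-6300 (~3.5 GHz base at ~1.3 V, 95 W TDP) are
# assumed here for illustration, not measured on this particular chip.
def estimated_power(freq_ghz, vcore, stock_freq=3.5, stock_vcore=1.3, tdp=95):
    return tdp * (freq_ghz / stock_freq) * (vcore / stock_vcore) ** 2

# 4.6 GHz even at stock voltage already lands near 125 W, well past the
# 95 W TDP, so APM pulls the clocks back under any sustained load.
print(round(estimated_power(4.6, 1.3)))  # ~125 W
```

This is why the low temperatures aren't a sign of a golden chip: with APM active, the CPU simply never sustains the power draw that 4.6 GHz under full load would require.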

I had an ASUS M5A88-M motherboard and it didn't have a BIOS setting to disable APM, so it did the same thing. With a Hyper TX3, the hottest my CPU (also a 6300) would get was 55-60°C at 4.6 GHz, and I thought that was great until I found a utility to disable APM. That's when I realized I needed a bigger cooler - except the VRMs would start throttling because they overheat like crazy! So I cut the OC back to 4.4 GHz instead, the most that board with its little 4+1 VRM could handle.

I'd be surprised if yours, with a 3+1 VRM, will hold a 6300 even at 4.2 GHz, but you won't know that unless you can get the CPU to stop throttling itself to stay within TDP.

 
 
Aug 4, 2018
Honestly I don't know. It seems to stay mostly at 4599 MHz and then drop to 2999 sometimes, but it's usually at 4999. It's just weird because most people said 4.4 is the max before you have to start playing with the vcore, and it's a cheap board so I don't want to fry anything. It's my first build and tons of things are done wrong, so I don't know - like my GPU isn't properly screwed in, same with the motherboard.
 


That's the problem, as there may be no setting to disable APM in the BIOS. It was only provided on the more feature-rich boards, ones with robust VRMs. It was also oddly worded at times: on my son's Gigabyte GA-970-DS3 it's called 'HPC', for 'High Performance Computing', and you have to enable it. On my M5A88-M I used a command-line utility called AmdMsrTweaker. I'd run it from a batch file when I wanted to disable APM and 'go hot'; otherwise I'd let the CPU throttle itself if it ever needed to, though I don't think it did, even playing games.

But in your case the throttling you see may just as likely be from the VRMs overheating, or from Cool'n'Quiet on the Balanced power plan. You need to figure out the source of your throttling to know what to do, but ultimately you'll never get 4.6 GHz Prime95-stable on a 3-phase CPU VRM... you just never will.
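As a rough illustration of why a 3-phase VRM is the wall here: the 125 W and 1.3 V figures come from this thread, and the per-phase currents below are idealized, ignoring conversion losses and uneven phase sharing:

```python
# Idealized VRM output load: current = power / Vcore, split across phases.
# Ignores conversion losses and assumes perfectly even phase sharing,
# so real per-phase stress is somewhat worse than this.
def amps_per_phase(watts, vcore, phases):
    return watts / vcore / phases

# A ~125 W overclocked load at 1.3 V is ~96 A total. On a 3-phase VRM
# that's ~32 A per phase, versus ~24 A per phase on a 4-phase board -
# and budget boards use small FETs with little or no heatsinking.
print(round(amps_per_phase(125, 1.3, 3)))  # ~32 A per phase
print(round(amps_per_phase(125, 1.3, 4)))  # ~24 A per phase
```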

 
Aug 4, 2018
Yeah, I figured something was off. I'm really not sure how to fix this, and honestly I think even with the throttling it's still an improvement. In benchmarks (I checked Prime95) the clock never dipped, but as you said that doesn't mean anything. But when I ran PassMark, the CPU score went from the 50th percentile to the 61st, so maybe it's not that big a deal.
 


That's true... but there are some things to consider.

What do you use this 4.6 GHz beast for? Do you do encoding or rendering or other productivity tasks that will load up all the cores as much as they can? If you do, then it will throttle itself back to whatever the APM, or the VRM heat, will allow.

But if all you do is gaming, then remember it never uses all 6 cores heavily: only the main thread in games ever sees heavy loading; the secondary threads in even the best multi-threaded games are rarely loaded much. That's why my 6300 never throttled: it just never got to its TDP, and even the VRMs weren't overheating. I still had a 4.4 GHz clock speed for that primary thread all the way through any game.

But when I wanted to encode a video I'd 'go hot' and the CPU would heat up until the poor VRM screamed, and then it would slow down for a bit. Even then, though, it was faster through a benchmark job than it would be if I lowered the clock to the point the VRMs wouldn't throttle... about 4.2 GHz, I think.

Oh yeah, BTW: I glued heatsinks onto the VRM FETs and positioned a fan to blow on them, which delayed VRM thermal throttling quite a bit.
 
Aug 4, 2018
Yeah, primarily gaming, but really not very demanding stuff. Like, don't get me wrong, I like PC games, but the stuff I play isn't exactly triple-A. I play GTA 5 rarely, no mods and offline. Apart from that, just Fortnite and Dolphin. I don't really do any video editing, just light tasks, one program at a time. Dolphin is largely single-core dependent, so I guess for my purposes this OC hits 4.6 the way I need it to? The performance boost I got was pretty nice across all these games.
 


I'm not really sure how video editing would play with CPU threads, since the encoding I do is down-sampling short videos so they can be sent to family to watch on phones and such. But I can imagine some tasks within the one program taking advantage of multi-threaded execution if they can. Since there are millions of pixels in one frame, potentially all needing the same kind of manipulation for one edit, it's a problem that inherently benefits from parallel processing, and even if it's mostly done on the GPU, there will be some 'housekeeping'-type tasks the CPU could be called on to help with. You might open Task Manager and check how CPU threads are being utilized while working in your program.

I'm pretty sure if Dolphin uses just one thread it's getting to see 4.6G performance whenever it needs it.

 
Well... as to the down-sampling: that's just 'resizing' a video so I can e-mail it at a smaller file size. I may shoot a video of our dog playing at 1080p resolution on my camera, which is nice high-def suitable for displaying on a TV. But even 5 minutes of that is HUGE. So I'll down-sample it to 320x200, a really nice size for displaying on a phone, and send a much smaller file to my wife.
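The size difference is easy to see from the pixel counts alone (frame dimensions from the post above; this ignores codec efficiency, which matters at least as much as resolution for the final file size):

```python
# Pixels per frame at the two resolutions mentioned above.
hd = 1920 * 1080    # 1080p source: 2,073,600 pixels per frame
phone = 320 * 200   # downsampled target: 64,000 pixels per frame

# Roughly 32x fewer pixels per frame after downsampling, which is
# why the e-mailed file ends up so much smaller.
print(hd // phone)  # 32
```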

To do that, every pixel in every frame of the video has to be 'analyzed' to determine whether it is to be used in the finished picture and what color value it should have. Since every one is being analyzed, if you can put more than one processor to work, each on its own pixels, it just goes faster.

When you are editing a video, your program may make a change in the same way: say, a change in gamma value to 'brighten' a scene. There may be thousands of frames in that scene, each with millions of pixels that need the same calculations performed on them to determine what the resulting color value should be. The program could throw that task to one processor, processing one pixel at a time, or it might be able to throw it to all six processors, processing six at a time, to get it done faster.
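A minimal sketch of that idea in Python: the same gamma curve applied to every pixel value, with the work optionally split into one chunk per core. The gamma value, pixel data, and six-worker count are made up for illustration; real editors do this in optimized native code or on the GPU:

```python
# Sketch of per-pixel gamma correction, serial vs. split across workers.
# Values are illustrative: GAMMA and the fake "frame" are made up.
from multiprocessing import Pool

GAMMA = 0.8  # < 1.0 brightens the image

def adjust_chunk(pixels):
    """Apply the same gamma curve to one chunk of 8-bit pixel values."""
    return [round(255 * (p / 255) ** GAMMA) for p in pixels]

def adjust_parallel(pixels, workers=6):
    """Split the pixel list into chunks and map them across processes."""
    size = max(1, len(pixels) // workers)
    chunks = [pixels[i:i + size] for i in range(0, len(pixels), size)]
    with Pool(workers) as pool:
        results = pool.map(adjust_chunk, chunks)
    return [p for chunk in results for p in chunk]

if __name__ == "__main__":
    frame = list(range(256))  # stand-in for one frame's pixel values
    print(adjust_parallel(frame)[:4])
```

Every chunk is independent, which is exactly why this kind of edit parallelizes so cleanly: no worker ever needs another worker's pixels.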

Or better yet: it might use your GPU, which has (possibly) hundreds of processors all perfectly optimized for doing just this type of calculation, since that's what they do in games. But these are highly specialized processors: they need to have the data fed to them just right, so a general-purpose processor (your CPU) has to take the original video frame apart, feed the data to the GPU processors pixel by pixel, and then take the output and put it back into the frame in just the right place. Because there are so many GPU processors at work, even that 'housekeeping', as I call it, will be very demanding, so more CPU cores at work will keep the GPU stream processors (or CUDA cores or whatever) from stalling.

That may grossly over-simplify it, but that's how I understand it to work. And I've no idea if there are any video editing packages that use GPU processors to assist in edits, but it seems feasible, so I throw it out there. The point is: check your video editing software, as it may be more 'multi-threaded' than you realize!
 
Aug 4, 2018
Oh, got it, that makes a lot more sense hahaha. But honestly I don't do anything like that. All I do is light web browsing and video playback, not rendering, as well as some documents, a Rainmeter desktop, and a few emulators and games. Am I going to feel that 4.6 GHz in those workloads, or are the results not real? It's 1.3 V at 4.6 GHz with a 12 percent PassMark score increase, so the question is: is that real, or is something else going on?
 


If PassMark shows an improvement in its score, then it's an improvement in actual performance. The only question is whether it's measuring something relevant to what you're doing.

More generally, at an undemanding workload it is probably very real. Even just navigating around in Windows with one or two apps open, using multiple cores, is not very demanding, but the gain can be real.

You can see this in Task Manager: right-click on the task bar at the bottom and click Task Manager in the list so it opens. Then look at the Performance tab, the CPU graph. If it's just one graph, that's overall CPU utilization, so right-click on it, hover over 'Change graph to', and select 'Logical processors' to see the utilization of each core. It's a 60-second graph: go navigate around in Windows, opening and closing apps, minimizing and maximizing, view a video in your app, things like that. Come back to Task Manager and you'll see how the cores have been used. All that low-level activity shows there is both related and unrelated work going on in parallel as Windows' scheduler throws the apps' work, and its own background work, out to the available cores as needed.

The processor can do it at 4.6 GHz if needed, even though it's not on an intense enough scale or duration to heat up the processor or VRM and induce throttling. That high clock can make the entire Windows experience smoother, as it completes each thread quickly, with very little lag between doing something and 'seeing' it on the screen.

But then, with 6 cores/threads in play, whether you'll feel 4.6 GHz vs. 4.1 GHz in Windows alone is becoming more and more doubtful. A lot of people are saying that overclocking is becoming irrelevant unless you do truly parallel productivity tasks like rendering or encoding. I note that on my Ryzen 7 (an 8-core, 16-thread processor) the Windows Balanced power plan keeps 4 cores parked most of the time. It doesn't even use them, to conserve power, unless I bring on some heavy work like an encoding task - which is now done in only a minute or two for those short little videos I down-sample. I don't really need this system, but AMD made high-performance computing so cheap I couldn't resist!

So: is it real? I'd say yes. Is it NOTICEABLE, though? Probably not, depending on what you do. But it's free (or cheap, as in my case), so why not do it and learn something in the process!

 
Solution


Task Manager only shows utilization, which says nothing about what speed the core is running at.

You have to get a utility like HWiNFO64 or HWMonitor, something that will report per-core clock speed as well as core utilization, to start figuring that out. At least that's how I do it.

If core utilization is really low and the clock speed is dropping back, then it's probably just Cool'n'Quiet putting processor domains into sleep states to conserve power when it can; nothing to fret over. If utilization is high and the speed drop-back is moderate, it's probably exceeding TDP. But if the speed drop is pretty extreme, maybe as low as 800 MHz, it is probably VRM throttling. This is what I found on my M5A88-M, at least.

I think you'll probably see Cool'n'Quiet kicking in under light loads, but it will be TDP throttling otherwise. The reason is that with TDP throttling at 95 W (your processor's rating), you'd never put a 125 W (or more) load on the VRM to induce throttling there. I think your VRM is most likely good for a 95 W processor, but that would be about it.
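Those rules of thumb can be written down as a little decision helper. The thresholds below are illustrative guesses, not calibrated to HWiNFO64 or any particular board:

```python
# Sketch of the throttling rule of thumb above: given a core's
# utilization (%) and current clock (MHz), guess why it slowed down.
# Thresholds are illustrative only, not calibrated to any tool.
def diagnose_throttle(util_pct, clock_mhz, oc_mhz=4600):
    if clock_mhz >= oc_mhz * 0.98:
        return "no throttling"
    if util_pct < 20:
        return "Cool'n'Quiet (idle power saving)"
    if clock_mhz <= 1000:
        return "VRM thermal throttling (extreme drop)"
    return "TDP/APM throttling (moderate drop under load)"

print(diagnose_throttle(95, 4600))   # fully loaded and holding the OC
print(diagnose_throttle(5, 1400))    # idle core dropped by Cool'n'Quiet
print(diagnose_throttle(100, 800))   # loaded but crawling: VRM throttling
print(diagnose_throttle(100, 3000))  # loaded, partial drop: TDP/APM
```

The key input is the *pairing* of utilization and clock, which is exactly why Task Manager alone (utilization only) can't tell these cases apart.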
 
Aug 4, 2018

Yeah, I've never seen it drop to around 800, but I've definitely seen it go to 3 GHz, though only for a couple of seconds; then it goes right back up to where it was, at 4.9. This is when running Prime95; I know that's synthetic, though.