Feature AMD Athlon vs Intel Pentium: Which Cheap Chips Are Best?


joeblowsmynose

Distinguished
...trim ....

Cinebench MT is only representative of heavily multi-threaded workloads with near-perfect scaling. Very little consumer software falls in that category.

But let's not ignore that it is the single most important benchmark for anyone who does rendering, CPU encoding, or any kind of multithreaded simulation, and it's also a very important metric for people who do more than one thing at a time on their PC (which is me). This group of consumers is certainly growing, so I wouldn't downplay the usefulness of multithreaded benchmarks, unless SuperPi and/or playing mostly old games is all one does.

Since my GPU is the bottleneck in gaming (as is everyone's), single-core performance isn't much of a concern for my gaming requirements. But if I paired a 1080 Ti with a budget CPU, it would suddenly become very important ... Since no one actually does that (it bottlenecks the CPU), how important is the metric (bottlenecked-CPU results) in real-world scenarios? (Unless one plans on going quad SLI ... then you have to consider the CPU bottleneck as an important metric, but how many people are running quad 1080s or better? Far fewer than those needing multithreaded workloads, I would assume.)

Opening a browser 0.1 seconds faster than the competitor's CPU is good for some bragging rights, I guess, but in reality, do people who open a browser with a slightly slower single-core-performance CPU choose a CPU for that extra 1/10 of a second? No. No one cares.

Now I'm downplaying the importance of single-core performance, lol. I didn't really mean to do that, but rather to bring some clarity to what each one (single- vs. multi-core performance) really means for the end user's experience and expectations. Both are important, especially considering use cases.
 

InvalidError

Titan
Moderator
Since no one actually does that (it bottlenecks the CPU), how important is the metric (bottlenecked-CPU results) in real-world scenarios?
CPU bottlenecks happen ALL of the time in gaming benchmarks: game performance scaling suffers greatly the second a performance-critical thread hits a CPU bottleneck, and that bottleneck is dictated by single-threaded performance. Your overall CPU usage can be 10-12% on an 8C16T CPU and you can already be bottlenecked by single-threaded performance in games like WoW, which are very lightly threaded, because a single thread is pegging its core/thread at 100% and can't go any faster regardless of how many more CPU cores/threads sit unused. That's why Intel CPUs still dominate AMD's CPUs in most games despite AMD's core and thread count advantages: performance-critical threads can't run any faster than single-thread performance allows, and the CPU with the strongest single-thread performance wins.
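To put rough numbers on that claim, here is a back-of-envelope sketch with made-up thread loads (not measurements from WoW or any specific game):

```python
# Hypothetical sketch: why low *overall* CPU usage can still mean a
# single-thread bottleneck on an 8-core / 16-thread CPU.

LOGICAL_THREADS = 16          # 8C16T CPU (assumption)
main_thread_load = 1.00       # performance-critical game thread pegged at 100%
helper_thread_loads = [0.30, 0.25, 0.20]  # made-up light worker/background threads

total_load = main_thread_load + sum(helper_thread_loads)
overall_usage = total_load / LOGICAL_THREADS * 100

print(f"Overall CPU usage reported by the OS: {overall_usage:.1f}%")
# -> roughly 10.9% overall, yet the main thread cannot go any faster,
#    so the game is already single-thread bottlenecked.
```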
 

logainofhades

Titan
Moderator
Even with cheap boards? There are no boards where this didn't happen?
I'm not really following the improvements.

ASRock has been updating the BIOS on even their A320 boards for AGESA 1.0.0.6, with the last one being 12/25/18. I would say any RAM or XFR limitations, at this point, would be due to the chipset's capability. With B350 you probably stand a better shot at everything working. I'd never recommend an A320 anyway.
 

joeblowsmynose

Distinguished
CPU bottlenecks happen ALL of the time in gaming benchmarks: game performance scaling suffers greatly the second a performance-critical thread hits a CPU bottleneck, and that bottleneck is dictated by single-threaded performance. Your overall CPU usage can be 10-12% on an 8C16T CPU and you can already be bottlenecked by single-threaded performance in games like WoW, which are very lightly threaded, because a single thread is pegging its core/thread at 100% and can't go any faster regardless of how many more CPU cores/threads sit unused. That's why Intel CPUs still dominate AMD's CPUs in most games despite AMD's core and thread count advantages: performance-critical threads can't run any faster than single-thread performance allows, and the CPU with the strongest single-thread performance wins.

You're getting a bit pedantic and off topic ... :)

You are correct that CPU bottlenecks happen all the time in gaming benchmarks - but they are generally induced by an unrealistic GPU/CPU pairing, or an unrealistic pairing of resolution and GPU -- like in this budget processor review, for example: an $800 GPU paired with a $55 CPU? Of course that is going to introduce bottlenecks on all the CPUs tested here - you've rather stated the obvious ... The question is: would anyone actually do that in the real world? Anyone? Bueller?

If the single-thread bottleneck were a constant regardless of GPU, then it would be an important and valuable metric for understanding real-world performance. But this isn't the case; as soon as the GPU is bottlenecked, which it has to be for maximum gaming performance, the single-core "bottleneck" doesn't really exist. If it did, Ryzen and the i7 wouldn't have nearly the exact same benchmark results at 4K with a 1080 Ti or better, or with a suitably appropriate GPU for the CPU, which we never see in benchmarks. Just because that benchmark doesn't exist doesn't mean the value of the information it would provide doesn't exist.
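(A rough way to picture this, with purely hypothetical frame times rather than benchmark data: effective FPS is set by whichever of the CPU or GPU stage is slower per frame.)

```python
# Hypothetical sketch: in a GPU-bound 4K scenario, CPUs with different
# single-thread speed deliver the same FPS. All numbers are made up.

def effective_fps(cpu_ms_per_frame, gpu_ms_per_frame):
    """FPS is limited by the slower of the CPU and GPU stages."""
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

gpu_ms_4k = 16.0                 # assumed GPU frame time at 4K ultra (~62 FPS ceiling)
fast_single_thread_cpu = 6.0     # assumed CPU frame time (strong single-thread)
slower_single_thread_cpu = 8.0   # assumed CPU frame time (weaker single-thread)

print(effective_fps(fast_single_thread_cpu, gpu_ms_4k))    # ~62.5 FPS
print(effective_fps(slower_single_thread_cpu, gpu_ms_4k))  # ~62.5 FPS
# While the GPU is the bottleneck, both CPUs deliver the same frame rate.
```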

Single-core performance is important but sometimes overrated when applied to real-world gaming and some other scenarios. Why is it overrated? Because of what you stated: CPU bottlenecks happen in gaming benchmarks ... but not so much in real-world gaming scenarios.

I'll add the disclaimer that I quite enjoy image quality in my games ... sure, if you turn all your settings down and make your game look like crap to eke out 346 fps over 322 fps, then the CPU becomes a little more important - I'll give all you "single thread" guys that one. ;) I suppose there are a few people out there who would do this, but I personally don't know any.

I don't really think we're debating, just emphasizing our individual points. :)
 

InvalidError

Titan
Moderator
You are correct that CPU bottlenecks happen all the time in gaming benchmarks - but they are generally induced by an unrealistic GPU/CPU pairing, or an unrealistic pairing of resolution and GPU
There is no such thing as an unrealistic pairing; different people prioritize different things even if they seem 'unrealistic' to you, and the primary purpose of a CPU review isn't to be realistic - it is to explore the limits of what the CPU is able to do along different metrics to cover the broadest field of use-cases possible for a given amount of testing effort. That's why CPU benchmarks use overkill GPUs: to minimize the GPU as a test variable as much as possible, and vice versa - you can't draw conclusions when measurements are heavily tainted by unrelated variables.

as soon as the GPU is bottlenecked, which it has to be for maximum gaming performance, the single-core "bottleneck" doesn't really exist.
Wrong. While most modern CPUs may be able to keep up with 60 Hz, the same cannot be said about 120+ Hz. If you want to achieve the maximum FPS possible, such as to frame-lock your 144-240 Hz monitor, you can reduce graphics details to alleviate the GPU bottleneck, and now the CPU has to work that much harder to pump all of those extra frames out. That's what you'd do if FPS were your primary concern rather than GPU-busting eye-candy.
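(Sketching that with the same kind of back-of-envelope model and made-up frame times: lowering detail shrinks the GPU's per-frame cost, so the CPU becomes the limit when chasing a 144 Hz target.)

```python
# Hypothetical sketch: dropping detail settings exposes the CPU limit.
# All frame times below are assumptions for illustration only.

def effective_fps(cpu_ms, gpu_ms):
    # FPS is limited by the slower of the CPU and GPU stages.
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 8.0                               # assumed CPU-side frame time (~125 FPS ceiling)

print(effective_fps(cpu_ms, gpu_ms=16.0))  # ultra details: ~62 FPS, GPU-bound
print(effective_fps(cpu_ms, gpu_ms=5.0))   # low details:   ~125 FPS, now CPU-bound
# At low details the GPU could do 200 FPS, but the CPU caps it at ~125,
# short of a 144 Hz frame-lock target.
```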

Different people, different priorities. There is no one-size-fits-all scenario; whining because benchmarks weren't performed exactly the way you wanted them to be, or aren't relevant to you, is futile.
 

joeblowsmynose

Distinguished
There is no such thing as an unrealistic pairing ... the primary purpose of a CPU review isn't to be realistic

Let's replace the word "unrealistic" with "inappropriate for representing real-world scenarios" then, if you wish to play the semantics game.

While I agree that most CPU gaming reviews are very "unrealistic" (this one included), the primary goal of a CPU review is not to be realistic? If a CPU review can't have some relationship to reality, how much value does it really have? Particularly to less-than-expert users, who are likely unable to "interpret" the unrealistic benchmark results into what they might expect in real life. Just because a reviewer fails to represent what performance will be like in real-world scenarios (regardless of the level of "unrealism" included) doesn't mean that reviews have a goal of being unrealistic. It's just sloppy reviewing. Period.

My view is that "unrealism" in reviews needs to be balanced with real-world scenarios (which this review lacks) so that readers who are not the most tech-savvy can be properly educated without being expected to "interpret" the results (which they do not do - they look at the graphs and draw conclusions from them).

Back full circle to my initial point.


Wrong. While most modern CPUs may be able to keep up with 60 Hz, the same cannot be said about 120+ Hz. If you want to achieve the maximum FPS possible, such as to frame-lock your 144-240 Hz monitor, you can reduce graphics details to alleviate the GPU bottleneck, and now the CPU has to work that much harder to pump all of those extra frames out. That's what you'd do if FPS were your primary concern rather than GPU-busting eye-candy.

Not wrong. Unless you own a 2080 Ti, if you need better game performance you tend to upgrade the GPU, not the CPU (unless you stupidly paired your $55 CPU with an $800 GPU). In case you didn't notice, I left a disclaimer on that post that addresses the scenario you described here, so I won't address it again. It's only "wrong" in that one scenario, and it's "right" in all others. Back full circle to my previous post.
 

InvalidError

Titan
Moderator
Not wrong. Unless you own a 2080 Ti, if you need better game performance you tend to upgrade the GPU
If I want more FPS in a GPU-bound game, I reduce details - it costs nothing.

The only people who need those super-expensive GPUs are those who refuse to compromise. For everyone else, lowering details to achieve higher performance with a lower-end GPU up to whatever the CPU is capable of is always an option.
 

joeblowsmynose

Distinguished
If I want more FPS in a GPU-bound game, I reduce details - it costs nothing.

The only people who need those super-expensive GPUs are those who refuse to compromise. For everyone else, lowering details to achieve higher performance with a lower-end GPU up to whatever the CPU is capable of is always an option.

Well, making my game look like crap isn't much of an option for me, so I'll never go below "high" (out of a roughly four-tiered quality system: low, medium, high, ultra). If I'm already playing something like Dota, CS:GO, or Fortnite, or whatever (the type of game where you want to maximize FPS, although I don't play many games of that style), the FPS is already 100+ at max settings, so no tuning is generally needed ... (Like the average person, I do not have a 144 or 240 Hz monitor, nor do I care to spend big bucks to get one ...)

I also don't buy the top-end GPUs; rather, I try to find that sweet spot between performance and price, which used to be mid-high, but GPU pricing is a bit screwy these days.

At the point people need to start dropping their quality to medium or low, a new GPU is quite often just around the corner anyway (granted that gaming is important to them) ... and when you get a new one, you get a proper and real performance increase if that is actually important.

I don't know, maybe things have changed in the last few years, but I tend to believe people want to push their quality settings in most games to whatever their GPU can handle.

I get that there are those who "need" gaming trophies and the like and will happily play at 480p resolution with a 1980s level of quality (okay, I'm exaggerating a little, but hence my previous disclaimer giving you that one), but generally those people are few, I am not one of them, and I assume the vast majority of people aren't either.
 

InvalidError

Titan
Moderator
At the point people need to start dropping their quality to medium or low, a new GPU is quite often just around the corner anyway (granted that gaming is important to them) ... and when you get a new one, you get a proper and real performance increase if that is actually important.
Gaming can be "important to them" but not at 4K 240 Hz Ultra/Insane.

60 fps is important to me in most of what little gaming I do aside from WoW. Details and resolution aren't, since pixel-response blur washes most of it away when actually playing. If I want to stare at how pretty a game can look while I'm sitting still, I can max out everything and stay there at 5-10 fps to have a look. That's how I ended up using my HD 5770 until its 1GB of VRAM became problematic with Legion.

If I like a game enough to care about what it looks like, I can always replay it when I upgrade the GPU again in however many years. Replaying Portal 2 on a 50" 4k60 TV was nice.
 
There's only a couple of scenarios where you need this kind of low power.

  • Firewall appliance: Either would work, but Intel does have a superior integrated NIC.
  • File server / NAS / backup: Not CPU-intensive.
  • Multimedia transcoder server (e.g. Plex or DLNA): AMD does well with a good overclock, so a draw.
  • Retro games (e.g. RetroArch): AMD would be better.
  • Point of sale: Either would work, but these are not CPU-intensive.
  • Mining: Only if you have at most 2 cards, due to limited PCIe lanes. CPU speed is irrelevant.
  • Cloud-based web server: Intel would win here, but this is more of a system-on-demand / blade application for small hosters like WordPress.
I wouldn't recommend these devices if you were doing anything like:
  • Rendering or artwork (e.g. Photoshop/Illustrator)
  • Multimedia composition (e.g. Adobe Premiere or Vegas)
  • Serious heavy-duty office work (e.g. complex Excel sheets or database access)
  • Gaming at anything > 720p
  • A web server running either Apache or IIS
While Intel may win in a few cases, you have to realistically look at the practical application.