Feature AMD Athlon vs Intel Pentium: Which Cheap Chips Are Best?

abryant

Asst. Managing Editor
Staff member
May 16, 2016
20 years ago, AMD's Athlons and Intel's Pentiums were performance CPUs. Today, these brands are relegated to entry-level options. But which is better? Read more here.

KEVIN CARBOTTE
@kcarbotte

Kevin Carbotte is a Contributing Writer for Tom's Hardware US. He writes news and reviews of graphics cards and virtual reality hardware.
 

joeblowsmynose

Distinguished
Athlon beats Pentium in every render test except one ... "Winner, Intel" ... ??


Also, just to make a correction ... where it reads "That said, if you’re going to be pairing your system with a discrete graphics card, you should turn to Intel."

Should read: ... "That said, if you're going to be pairing your budget CPU with an $800 video card, you should turn to Intel, but if your video card for your budget system is $300 or less, it won't make any difference at all ... Winner, AMD."
 

voodoobunny

Distinguished
Apr 10, 2009
Why does AMD never get credit for upgradability? If you buy an Intel CPU now ... in a year, you won't be able to upgrade to the latest generation of CPUs because they will use an incompatible socket. With AMD, you should be able to upgrade to next year's high-end CPUs with minimal problems. Shouldn't that be worth calling out?
 

Math Geek

Titan
Ambassador
seems like the right conclusion to me. the little bit more you get from intel at this low budget is not worth it overall. considering the current prices of intel "budget" chips, there really is no comparison. the pentium is really competing with the 2200G and even higher amd chips at the prices they are going for. intel REALLY loses out then, with amd giving you more cores for the same money.
 

shabbo

Distinguished
Nov 29, 2011
Appears quite biased towards Intel. AMD wins in every category; it's a slam dunk. Just say it as it is for a change.

1) The gaming performance winner should be AMD. It can hold its own without a discrete GPU, and if you do decide to get a GPU, you're not going to be looking at a $700 card trying to "squeeze the most out of it". Come on man, AMD even saves you an extra $30 to put towards a GPU.

2) In rendering, AMD did better overall. Also, regarding Photoshop: why in the world would anyone paying a monthly subscription for Photoshop buy a processor that costs less than the subscription?

3) In productivity, AMD also did better overall. How is Intel the winner here? At most it should be a draw.

4) Motherboards. What about future upgradability? Your AMD Athlon 200GE can easily be upgraded to a Ryzen 3000, which is crucial on a budget.
 
Mar 4, 2019
Motherboards: AMD's B450 is notably left out, and it comes with StoreMI (aka FuzeDrive) for free. On a budget, pairing a cheap 256GB SSD with a 1TB+ HDD makes a lot of sense for a better overall experience.
 
Mar 4, 2019
I also think the cooling and rendering categories are entirely moot in this range. There's really no way to differentiate cooling on products designed to be used out of the box - if anything, AMD bundles a cooler with way more TDP headroom and a larger, quieter fan. And when you consider that with a realistic GPU pairing these processors are essentially equal, AMD should get the lead on gaming for its Vega cores. Then there's the fact that Intel's boards for these processors are effectively disposable as far as upgrades go; AMD's boards deserve the win there for longevity alone.

Adjusted real-world scores:
AMD: 6/7
Intel: 1/7
 

joeblowsmynose

Distinguished
Huh? Blender is the only render test where the Athlons beat the Pentiums.


I may have been reading a couple of those graphs backwards. I also don't consider single-core rendering tests a valid rendering benchmark -- no one has rendered on a single core in the last 10 years, nor will anyone in the future, so I dismiss those unless I'm looking at relative single-core performance - not rendering performance.

It just doesn't work as a rendering benchmark, especially when one considers the difference in multi-threading efficiency between CPU vendors.

POV-Ray also went to the Athlon over the Pentium in the multi-core test.
 

InvalidError

Titan
Moderator
I also don't consider single-core rendering tests a valid rendering benchmark -- no one has rendered on a single core in the last 10 years, nor will anyone in the future, so I dismiss those unless I'm looking at relative single-core performance - not rendering performance.
And this is precisely the point: where else are you going to get at least somewhat realistic numbers to compare per-thread/core throughput and IPC? You need something that gets as close as possible to pure single-threaded and there aren't many such things left. Cinebench single-thread serves this purpose quite well regardless of how unrealistic it would be for an actual production environment. The same could be said of just about any benchmark aimed at one very specific aspect of the whole architecture: L1, L2, L3, L4 and memory benchmarks serve absolutely no purpose in real-world software but are still essential for characterizing the architecture's performance so you may be able to guesstimate software performance between architectures based on how the software tends to hammer the cache and memory hierarchy.

If CPU 'A' has 50% better single-threaded performance than CPU 'B' in Cinebench single-thread, chances are that games which hit a single-thread performance bottleneck on CPU 'B' will run the better part of 50% faster on CPU 'A', provided the GPU is fast enough to keep up. Still very useful to know despite most software being threaded at least to some extent.
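To put rough numbers on that, here's a quick Python sketch - the scores and FPS below are invented for illustration, not measurements:

Code:
# Hypothetical single-thread benchmark scores for two CPUs plus a measured
# FPS figure on the slower one; all numbers are made up.
cinebench_1t_a = 180   # CPU 'A' single-thread score
cinebench_1t_b = 120   # CPU 'B' single-thread score
fps_on_b = 60          # FPS on CPU 'B' in a game bottlenecked on one thread

ratio = cinebench_1t_a / cinebench_1t_b   # 1.5, i.e. "50% better"
est_ceiling_on_a = fps_on_b * ratio       # ~90 FPS upper estimate
print(f"estimated CPU-side ceiling on 'A': ~{est_ceiling_on_a:.0f} FPS "
      f"(only reachable if the GPU keeps up)")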
 
Also, just to make a correction ... where it reads "That said, if you’re going to be pairing your system with a discrete graphics card, you should turn to Intel."

Should read: ... "That said, if you're going to be pairing your budget CPU with an $800 video card, you should turn to Intel, but if your video card for your budget system is $300 or less, it won't make any difference at all ... Winner, AMD."
It still makes a huge difference even at 1080p; not everybody games with every setting at full uber-ultra.
 
Why does AMD never get credit for upgradability? If you buy an Intel CPU now ... in a year, you won't be able to upgrade to the latest generation of CPUs because they will use an incompatible socket. With AMD, you should be able to upgrade to next year's high-end CPUs with minimal problems. Shouldn't that be worth calling out?
Were you around when Zen+ came out? If you have an old mobo, you lose all the BIOS improvements and memory controller improvements, so basically with a low-end mobo, even if you buy Zen+, you're getting plain Zen...
 
And this is precisely the point: where else are you going to get at least somewhat realistic numbers to compare per-thread/core throughput and IPC? You need something that gets as close as possible to pure single-threaded and there aren't many such things left. Cinebench single-thread serves this purpose quite well regardless of how unrealistic it would be for an actual production environment.
There hasn't been anything left for decades now that comes anywhere near full IPC; just look at the gains from SMT, even in Cinebench: 35-40% of each and every core sits idle if you only run one thread.
That was the whole point of AMD going splitsies on the integer/FPU cores in the Bulldozer line: no real software uses all the execution resources most of the time.

If you want to find out a single core's maximum limits these days, you need to run multiple threads on one core.
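If you want to actually try that, here's a rough Linux-only Python sketch. It assumes logical CPUs 0 and 1 are SMT siblings of the same physical core, which is not true on every system - check /sys/devices/system/cpu/cpu0/topology/thread_siblings_list first:

Code:
# Compare one thread vs two threads pinned to the same physical core to see
# how much throughput SMT recovers from an otherwise-idle core.
import os
import time
from multiprocessing import Process

def burn(n):
    # Integer-heavy busy loop standing in for one benchmark thread.
    x = 0
    for i in range(n):
        x = (x * 31 + i) % 1000003
    return x

def timed_run(cpus, workers, n):
    procs = []
    start = time.perf_counter()
    for _ in range(workers):
        p = Process(target=burn, args=(n,))
        p.start()
        os.sched_setaffinity(p.pid, cpus)  # pin to the given logical CPUs
        procs.append(p)
    for p in procs:
        p.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    N = 20_000_000
    t1 = timed_run({0}, 1, N)        # one thread on the core
    t2 = timed_run({0, 1}, 2, N)     # two threads sharing the physical core
    # 2.0 would be perfect scaling, 1.0 would be no SMT benefit at all.
    print(f"SMT throughput scaling: {2 * t1 / t2:.2f}x")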
 

InvalidError

Titan
Moderator
There hasn't been anything left for decades now that comes anywhere near full IPC; just look at the gains from SMT, even in Cinebench: 35-40% of each and every core sits idle if you only run one thread. That was the whole point of AMD going splitsies on the integer/FPU cores in the Bulldozer line: no real software uses all the execution resources most of the time.
No, the point of AMD having two INT pipelines sharing a FLOAT pipeline is because relatively little software at the time used heavy floating-point math. SMT on the other hand operates on the premise that no typical single-thread instruction flow can hog 100% of the CPU's execution resources at any given time due to dependencies, so one or more additional threads are introduced so the scheduler has multiple independent flows to pick from to fill all execution ports most of the time.

If you want to find out a single core's maximum limits these days, you need to run multiple threads on one core.
You got this backwards. The point of single-threaded performance is not to find out individual cores' absolute maximum throughput, it is to find out how well the core can handle single-threaded tasks or a game/application that has a performance-critical thread such as the main control thread in countless games, which is why Ryzen gets left behind in so many games despite having a massive core and thread advantage over Intel. All of the multi-threaded performance in the world does you no good if you end up bottlenecked by a critical thread hitting the CPU's single-threaded performance limit.
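The textbook version of that last point is Amdahl's law. A toy Python example, with the serialized fraction simply assumed to be 40%:

Code:
# Amdahl's law: if fraction 's' of the frame's work is stuck on one
# critical thread, extra cores can only speed up the remaining (1 - s).
def amdahl_speedup(s, cores):
    return 1.0 / (s + (1.0 - s) / cores)

for cores in (2, 4, 8, 16):
    print(f"{cores:2d} cores: {amdahl_speedup(0.4, cores):.2f}x")
# With 40% serialized, even 16 cores yield only ~2.3x; beyond that point,
# only faster single-threaded performance moves the needle.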
 

logainofhades

Titan
Moderator
Were you around when Zen+ came out? If you have an old mobo, you lose all the BIOS improvements and memory controller improvements, so basically with a low-end mobo, even if you buy Zen+, you're getting plain Zen...


BIOS stability and memory compatibility have been vastly improved via BIOS updates. Even support for XFR2 has been added that way. The only thing you are really going to miss out on with Ryzen 3000 is PCI-E 4.0. Given that no single GPU can fully saturate PCI-E 3.0, this isn't exactly a deal breaker for most users.
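The bandwidth math backs that up - a quick Python sanity check using the standard per-lane transfer rates and 128b/130b encoding:

Code:
# Usable one-way bandwidth of a x16 slot: GT/s per lane * encoding
# efficiency * 16 lanes / 8 bits per byte.
for gen, gts in (("3.0", 8), ("4.0", 16)):
    gbytes = gts * (128 / 130) * 16 / 8
    print(f"PCIe {gen} x16: ~{gbytes:.1f} GB/s each way")
# ~15.8 GB/s vs ~31.5 GB/s; typical GPU traffic sits well below even the
# 3.0 figure outside of corner cases.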
 

joeblowsmynose

Distinguished
And this is precisely the point: where else are you going to get at least somewhat realistic numbers to compare per-thread/core throughput and IPC? You need something that gets as close as possible to pure single-threaded and there aren't many such things left. Cinebench single-thread serves this purpose quite well regardless of how unrealistic it would be for an actual production environment. The same could be said of just about any benchmark aimed at one very specific aspect of the whole architecture: L1, L2, L3, L4 and memory benchmarks serve absolutely no purpose in real-world software but are still essential for characterizing the architecture's performance so you may be able to guesstimate software performance between architectures based on how the software tends to hammer the cache and memory hierarchy.

If CPU 'A' has 50% better single-threaded performance than CPU 'B' in Cinebench single-thread, chances are that games which hit a single-thread performance bottleneck on CPU 'B' will run the better part of 50% faster on CPU 'A', provided the GPU is fast enough to keep up. Still very useful to know despite most software being threaded at least to some extent.

Gaming performance has nothing to do with 3D rendering ... Game benchmarks are already in this review and show game performance results; single-core rendering can give you relative single-core performance. A category measuring the processor's ability to render is neither of those ... rendering is done with the CPU fully loaded, therefore a category called "rendering performance" should measure rendering the way rendering is actually done - all cores loaded - because that gives you the rendering performance results the category is named for.

I have an appreciation for single-core rendering results as a category called "single core performance". But they aren't a "rendering performance" metric - only a "single core performance" metric ... not the same thing. It's conflating one performance metric with another.
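For what I mean by measuring rendering the way rendering is done, here's a toy Python sketch of an all-core vs single-core test; the "tile" workload is a made-up stand-in, not a real renderer:

Code:
# Saturate every core with independent tiles, the way real renderers do,
# and compare against the same work on a single worker.
import os
import time
from multiprocessing import Pool

def render_tile(seed):
    # Stand-in for ray tracing one tile: pure CPU-bound arithmetic.
    acc = 0.0
    for i in range(1, 200_000):
        acc += ((seed * i) % 7919) ** 0.5
    return acc

def bench(workers, tiles=64):
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(render_tile, range(tiles))
    return time.perf_counter() - start

if __name__ == "__main__":
    t_one = bench(1)
    t_all = bench(os.cpu_count())
    print(f"1 worker: {t_one:.1f}s, {os.cpu_count()} workers: {t_all:.1f}s")
    print(f"all-core scaling: {t_one / t_all:.2f}x")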
 

joeblowsmynose

Distinguished
It still makes a huge difference even at 1080p; not everybody games with every setting at full uber-ultra.
1080p? huh?

I was talking about the fact that no one will pair a Pentium with an $800 video card, which is what the recommendation is based on. If you plug a 1050 Ti (a very likely and realistic pairing with these procs) into either the Athlon or the Pentium, they are going to perform the same. Even an 8700K and a Pentium perform the same on a 1050 Ti ...
 

InvalidError

Titan
Moderator
Gaming performance has nothing to do with 3D rendering ...
The general single-threaded performance of most CPUs lands in the same ballpark regardless of application, give or take a few points if one benchmark hammers a specific subset of instructions more than the rest and causes an execution-unit bottleneck. Trying to reduce the likelihood of execution-port bottlenecks is why you see Intel shuffle the instruction mix across execution ports between architecture refreshes and increase the number of ports with major refreshes like Sandy, Haswell, Skylake and Ice Lake.
 

joeblowsmynose

Distinguished
The general single-threaded performance of most CPUs lands in the same ballpark regardless of application, give or take a few points if one benchmark hammers a specific subset of instructions more than the rest and causes an execution-unit bottleneck. Trying to reduce the likelihood of execution-port bottlenecks is why you see Intel shuffle the instruction mix across execution ports between architecture refreshes and increase the number of ports with major refreshes like Sandy, Haswell, Skylake and Ice Lake.

Gaming performance still has nothing to do with 3D rendering performance; that's why a 6700K dominates a Ryzen 1700 in games (notably when the CPU is the bottleneck), while the Ryzen nearly doubles the 6700K's performance in 3D rendering.

So I'll reiterate and paraphrase my main point: single-core rendering scores belong in a category called "single core performance", and rendering performance should be measured with tests that saturate the CPU's cores - just like in reality.
 
No, the point of AMD having two INT pipelines sharing a FLOAT pipeline is because relatively little software at the time used heavy floating-point math. SMT on the other hand operates on the premise that no typical single-thread instruction flow can hog 100% of the CPU's execution resources at any given time due to dependencies, so one or more additional threads are introduced so the scheduler has multiple independent flows to pick from to fill all execution ports most of the time.
The Athlon X4 had 3 ALUs + 3 AGUs per core:
https://en.wikipedia.org/wiki/Athlon#/media/File:Athlon_arch.png
Now you would think that a new arch would be better and have more ALUs; even 4 ALUs would be a ~33% increase, so that would be not bad.
Bulldozer has 2+2 per core, so half of what it should have had at minimum, and a decrease from the previous CPUs; people were really annoyed by this back in the day.
https://en.wikipedia.org/wiki/Bulld..._Bulldozer_block_diagram_(CPU_core_block).png
You got this backwards. The point of single-threaded performance is not to find out individual cores' absolute maximum throughput, it is to find out how well the core can handle single-threaded tasks or a game/application that has a performance-critical thread such as the main control thread in countless games, which is why Ryzen gets left behind in so many games despite having a massive core and thread advantage over Intel. All of the multi-threaded performance in the world does you no good if you end up bottlenecked by a critical thread hitting the CPU's single-threaded performance limit.
Yes.
Has nothing to do with what I said though.
As I said, if your bench leaves ~40% of your IPC unused, it can't very well show you the per-thread/core throughput and IPC, now can it?!
 
1080p? huh?

I was talking about the fact that no one will pair a pentium with an $800 video card, of which the recommendation is based upon. If you plug in a 1050ti (which is a very likely and realistic pairing with these procs) into either the Athlon or the Pentium, they are going to perform the same . Even a 8700k and a pentium perform the same on a 1050ti ...
Yes, that's what I'm saying as well: the 1050 Ti will heavily bottleneck a Pentium and won't give you all the FPS the CPU can deliver unless you play at a lower resolution or quality.
The tests with a 1080 Ti show you what the CPU can do, and you have to set your settings accordingly if you want to reach those numbers with a weaker GPU.
 

InvalidError

Titan
Moderator
The Athlon X4 had 3 ALUs + 3 AGUs per core:
https://en.wikipedia.org/wiki/Athlon#/media/File:Athlon_arch.png
Now you would think that a new arch would be better and have more ALUs; even 4 ALUs would be a ~33% increase, so that would be not bad.
You would only think that if you were ignorant of how instruction schedulers work and practical limits on how much instruction-level parallelism can be extracted out of a single instruction flow. In practice, just because you double execution resources available to one thread doesn't mean you'll be able to double the number of eligible instructions to feed to those resources, and this is why architectures designed for maximum efficiency and multi-threaded throughput have four or more threads per core so the scheduler does not need to work anywhere as hard to keep most execution units busy most of the time.

As I said, if your bench leaves ~40% of your IPC unused, it can't very well show you the per-thread/core throughput and IPC, now can it?!
And as I wrote, countless games and applications are heavily dependent on single-threaded performance, so single-threaded benchmarks are still absolutely necessary. Cinebench MT is only representative of heavily multi-threaded workloads with near-perfect scaling. Very little consumer software falls in that category.
 

joeblowsmynose

Distinguished
Yes, that's what I'm saying as well: the 1050 Ti will heavily bottleneck a Pentium and won't give you all the FPS the CPU can deliver unless you play at a lower resolution or quality.
The tests with a 1080 Ti show you what the CPU can do, and you have to set your settings accordingly if you want to reach those numbers with a weaker GPU.

I don't think you got that quite right (backwards, in fact, I believe, unless semantics are heavily at play here) ... a Pentium and a 6700K get roughly the same performance on a 1050 Ti. If the 1050 Ti were causing a bottleneck at the Pentium (processor) level as you stated, then that bottleneck would be lifted on a 6700K, giving better performance than the Pentium ... no?

But the reality is that this is not the case, so the premise that the Pentium processor is bottlenecked with a 1050 Ti is dead wrong. The CPU is the bottleneck with a 1080 Ti, and the GPU is the bottleneck with a 1050 Ti. Bottlenecking the GPU is what you want to happen; otherwise you've wasted money on too powerful a GPU and/or didn't build your rig correctly.
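You can capture the whole argument in a tiny model (Python; every number below is hypothetical, just to show the shape of it):

Code:
# Toy model of the CPU/GPU bottleneck point: delivered FPS is roughly the
# lower of what the CPU can feed and what the GPU can draw. All caps below
# are invented for illustration, not measurements.
def delivered_fps(cpu_cap, gpu_cap):
    return min(cpu_cap, gpu_cap)

cpus = [("Pentium", 90), ("8700K", 160)]     # hypothetical CPU-side FPS caps
gpus = [("1050 Ti", 55), ("1080 Ti", 140)]   # hypothetical GPU-side FPS caps

for cpu_name, cpu_cap in cpus:
    for gpu_name, gpu_cap in gpus:
        print(f"{cpu_name} + {gpu_name}: ~{delivered_fps(cpu_cap, gpu_cap)} FPS")

On the 1050 Ti both CPUs land on the same ~55 FPS GPU cap, which is exactly the point: the GPU is the bottleneck there, so the faster CPU buys you nothing.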
 