Picking A Sub-$200 Gaming CPU: FX, An APU, Or A Pentium?

Status
Not open for further replies.
[citation][nom]elbert[/nom]This multitasking benchmark shows how important an underused core becomes in real-world gaming. The 955 pulls ahead of the i3-2100, and the X4 645 loses fewer FPS than the X3 455.[/citation]

"Pulls Ahead"? The 955 and i3-2100 numbers look like a dead-on tie to me.
 
[citation][nom]Cleeve[/nom]"Pulls Ahead"? The 955 and i3-2100 numbers look like a dead-on tie to me.[/citation]
The i3-2100 drops to 45.7 FPS where the 955 only drops to 47.3 FPS. The very important part is that the i3-2100 loses almost 5 FPS off its minimum, whereas the 955 only loses 1.5 FPS. In multitasking, the i3-2100 gets very near the 925 and the i3-550.
 
[citation][nom]scannall[/nom]I don't get the point of using a discrete GPU here if you are talking about low-end gaming setups. The unlocked A8-3870K is $144 at Newegg as I type this. So what Intel CPU + discrete GPU could I get for the same $144? And would it perform as well? Very poorly written article that misses the entire point of its title.[/citation]

First, the article is extremely well titled; you're simply having a problem thinking it through. Think about it: "Sub-$200 Gaming CPU" implies a CPU for gaming, and if you're buying a CPU for gaming, you will pair it with a discrete graphics card.

Secondly, look at the results. A $144 A8-3870K will probably get utterly destroyed by an $80 Pentium G630 plus a $70 Radeon 6570, a combination with better CPU and GPU potential. And for what, a $6 price spread? Actually, this is a test I'd like to run in the near future, and from what I've seen, the A8-3870K doesn't have a chance. Even overclocked, it will have a hard time against that setup.

 
I think you should add "console games" to the benchmarks, like NBA 2K12 or PES.
In MY OPINION, a Skyrim/BF3 gamer wouldn't save the $50; they would buy the 2500K.
That's only because we're talking about less than $200. Without that restriction, those games don't add anything to the benchmarks. In MY OPINION!
 
It's a great in-depth article and all, but I'm feeling a little like: "Breaking news! Faster CPUs with a big GPU are faster than slower CPUs with a big GPU!" xD!

Nice finding on the Pentiums anyway, but just looking at the specs already told me the overall outcome of the article. It's sad to see that, gaming-wise, the whole Intel lineup takes all the limelight away from AMD.

And it would have been nice to see the really low end tested at medium/low 720p/1080p settings for a truly budget gaming system (HTPC + light gaming, as I call it). I'm sure Hybrid CrossFire and the motherboard's goodies would hurt Intel's low end by a lot. A notebook review along those lines would be very nice as well.

Thanks for the article as usual, Mr Don!

Cheers!
 
[citation][nom]elbert[/nom]The i3-2100 drops to 45.7 FPS where the 955 only drops to 47.3 FPS. The very important part is that the i3-2100 loses almost 5 FPS off its minimum, whereas the 955 only loses 1.5 FPS. In multitasking, the i3-2100 gets very near the 925 and the i3-550.[/citation]

Yeah but think about it. The result is a virtual tie, in a game that we've just shown is GPU-bound... the difference in frame rate is quite low compared to other CPUs that take a real significant hit when multitasking. The i3-2100 is pretty good.

Also, the RAR multitask is an extreme example I chose to make a point. Nobody is actually going to compress RAR files while gaming unless they're trying to destroy their performance. Under lighter loads that you might actually see when multitasking, such as torrenting, the difference becomes even smaller.

Like it or not, in the real world the i3-2100 has a huge gaming advantage.
 
Seeing the FX's performance with one core of each module disabled would prove interesting, but ultimately the only effect would be to show whether AMD's design theory is fundamentally flawed. Even if it works, you're left with a 4-core 8120/8150 that still delivers less IPC than Intel's Sandy Bridge chips while costing the same or more, and now without even the advantage of four extra partial cores. Unless this were to magically increase the IPC beyond Sandy Bridge, which simply isn't possible, it would do nothing to change their value for the better.

If you want to imagine the effect this would have, a 20% overclock on the chip will roughly show you what the performance would be. Is it better? Yes. But a 4-core (1 core/module) FX would still perform significantly below an i5 Sandy Bridge while costing more. There's just too much ground to make up. It would, as you theorized, simply bring it in line with Phenom II. But then you'd be paying $200+ for the equivalent of what the Phenom II 955 has delivered at $115 for as long as anyone can remember. Still a horrible value move :\
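To make that "imagine a 20% overclock" point concrete, here is a minimal back-of-the-envelope sketch. All the benchmark scores below are made-up placeholders for illustration, not measured results from the article:

```python
# Rough sketch: estimate the effect of a ~20% per-core speedup
# (e.g. from disabling one core per module, approximated here as
# equivalent to a 20% overclock). The scores are hypothetical.

def scaled_score(baseline: float, gain: float = 0.20) -> float:
    """Scale a single-threaded benchmark score by a fractional gain."""
    return baseline * (1.0 + gain)

fx_single = 100.0        # hypothetical FX single-thread score
i5_sb_single = 140.0     # hypothetical i5 Sandy Bridge score

boosted = scaled_score(fx_single)   # 100 * 1.2 = 120.0
print(f"Boosted FX: {boosted:.0f} vs i5 SB: {i5_sb_single:.0f}")
# Even with the ~20% uplift, the gap to Sandy Bridge remains large.
```

The takeaway is just arithmetic: a 20% gain on a lower baseline doesn't close a 40% deficit.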

SLI/CrossFire requires more CPU than single cards, so two mid-range GPUs together would actually be impacted more by a lower-end CPU than a single high-end card. I've also read that AMD cards tend to require more CPU than Nvidia cards, so it would be interesting, for thoroughness's sake, to re-test this with a GTX 580 to see what difference that makes in the rankings.
 
[citation][nom]elbert[/nom]The i3 drops to 45.7 where the 955 only drops to 47.3 FPS. The very important part is that the i3 loses almost 5 FPS off its minimum, whereas the 955 only loses 1.5 FPS. In multitasking, the i3 gets very near the 925 and the i3-550.[/citation]

That may be true, but the difference between the two isn't noticeable anyway. By that I mean it's very unlikely that you or anyone else would notice the difference between 45.7 FPS and 47.3 FPS, and that's assuming all else is equal; there could be other factors in either system. Yes, having more cores matters, but it isn't a big enough difference in your example.
 
[citation][nom]Cleeve[/nom]Yeah but think about it. The result is a virtual tie, in a game that we've just shown is GPU-bound... the difference in frame rate is quite low compared to other CPUs that take a really significant hit when multitasking. The i3-2100 is pretty good. Also, the RAR multitask is an extreme example I chose to make a point. Nobody is actually going to compress RAR files while gaming unless they're trying to destroy their performance. Under lighter loads that you might actually see when multitasking, such as torrenting, the difference becomes even smaller. Like it or not, in the real world the i3-2100 has a huge gaming advantage.[/citation]
That would depend on whether the i3-2100 impacts latency more in online games. Latency is the huge monkey in the room when you don't have underused cores.
 
[citation][nom]blazorthon[/nom]I'm still waiting for an in depth exploration of an FX 8 core with one core from each module disabled, how well it would perform and overclock compared to regular FX 8 core CPUs and the rest of the CPUs worth buying. It has been shown that disabling 1 core from each module improves single-threaded performance significantly (about 10-15% as shown in a previous test). [/citation]

As mentioned on the test setup page, we did use the two Windows updates that optimize Bulldozer core usage, giving priority to free modules before utilizing the second core in each module.

AMD claims up to a 10% performance increase using these updates, but we didn't see anything impressive, although it's possible the performance was worse without the patches applied...
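The scheduling policy those patches implement (fill idle modules before loading a module's second core) can be sketched as toy logic. This is purely illustrative pseudologic in Python, not the actual Windows scheduler:

```python
# Toy model of Bulldozer module-aware thread placement: prefer a core
# in a completely idle module over the second core of a busy module.
# Illustrative only; the real Windows scheduler is far more involved.

from typing import List, Optional, Tuple

def pick_core(modules: List[List[bool]]) -> Optional[Tuple[int, int]]:
    """modules[m][c] is True if core c of module m is busy.
    Return (module, core) for the next thread, or None if all are busy."""
    # First pass: take a core in a completely idle module.
    for m, cores in enumerate(modules):
        if not any(cores):
            return (m, 0)
    # Second pass: settle for any idle core, even a shared module's second.
    for m, cores in enumerate(modules):
        for c, busy in enumerate(cores):
            if not busy:
                return (m, c)
    return None

# Four Bulldozer-style modules, two cores each; module 0's first core busy.
state = [[True, False], [False, False], [False, False], [False, False]]
print(pick_core(state))  # -> (1, 0): a free module beats module 0's second core
```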

 
If you wanted a truly heavy multitasking task a gamer might see, the ideal test would be a 720p/1080p stream of the game in question to a site like Twitch.tv. This is your best chance to separate the AMD quad cores from the Intel dual cores. (And maybe even give the FX 8-cores a chance at a win?)

Not only is this growing in popularity at a fairly rapid rate, but live encoding of these HD streams while playing the game at the same time places a significant demand on your CPU. (The encoding alone can eat through two cores on its own pretty thoroughly.) It's also pretty easy to benchmark decently, because not only can you monitor the game's FPS traditionally, but the encoder will also keep track of frames dropped due to insufficient CPU resources.

Doing something like this, a quad core is a virtual requirement, and I could actually see the FXs with 6 or 8 cores showing an advantage depending on how demanding the stream encoding is. (A game that already taxes all 4 cores pretty heavily could actually benefit from 6+ cores while streaming, for example.)
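The scoring idea described above, pairing the game's FPS with the encoder's dropped-frame count, could be boiled down to a tiny helper. The numbers and field names here are hypothetical, just to show the shape of the metric:

```python
# Sketch of how a streaming benchmark could be scored: alongside the
# game's FPS, track what fraction of encoded frames the encoder had to
# drop because the CPU couldn't keep up. All figures are hypothetical.

def drop_rate(frames_sent: int, frames_dropped: int) -> float:
    """Fraction of encoded frames dropped due to CPU starvation."""
    total = frames_sent + frames_dropped
    return frames_dropped / total if total else 0.0

# Hypothetical run: a quad core keeps up, a dual core starves the encoder.
quad = drop_rate(frames_sent=1780, frames_dropped=20)    # ~1.1%
dual = drop_rate(frames_sent=1200, frames_dropped=600)   # ~33%
print(f"quad-core drop rate: {quad:.1%}, dual-core drop rate: {dual:.1%}")
```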

I'd love to see this tested in depth by Toms. :)
 
I wouldn't suggest the FX. In the forums, getting gamers to see the importance of an i5 over an i3 is hard enough. Even the i5-2300s are better options than a short-lived test-lab game win with an i3. This reminds me of the days in 2006 when single cores were still beating dual cores.
 
[citation][nom]Cleeve[/nom]As mentioned in the test setup page, we did use the two windows updates that optimize bulldozer core usage, giving priority to free modules before utilizing the second core in each module. AMD claims up to a 10% performance increase using these updates, but we didn't see anything impressive, although it's possible the performance was worse without the patches applied...[/citation]

Please read my comment and notice how your response has nothing to do with what I said. I didn't say anything about the hot patches and they might not even make a significant difference in the experiment I asked for. I asked for 1 core out of each module to be disabled in an FX 8 core CPU to compare the single threaded performance against the same CPU without disabling cores and that has nothing to do with the hot patches. Sure they may change the numbers a little but they aren't a replacement for what I asked for.
 
[citation][nom]xenol[/nom]How is it that the FX-8150 was beaten a few times by the 6150 and the 4150?[/citation]

Games don't use its 8 cores, so the increased clock speed of the lower-core-count CPUs made a difference. Games often don't even use 4 cores, so 8 slow cores get beaten by 4 or 6 faster ones.
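That cores-versus-clock trade-off is simple arithmetic. Here is a deliberately crude model (it ignores IPC, Turbo, and cache; the clocks are the stock figures quoted later in the thread) for a game that only spawns four threads:

```python
# Back-of-the-envelope throughput for a game that only spawns N threads:
# usable work ~ min(threads, cores) * clock. Crude model: ignores IPC,
# Turbo, and cache; clocks are the stock figures quoted in the thread.

def game_throughput(cores: int, clock_ghz: float, game_threads: int = 4) -> float:
    return min(game_threads, cores) * clock_ghz

fx8120 = game_throughput(cores=8, clock_ghz=3.1)   # 4 threads * 3.1 = 12.4
fx4100 = game_throughput(cores=4, clock_ghz=3.6)   # 4 threads * 3.6 = 14.4
print(fx8120, fx4100)  # the higher-clocked quad wins when only 4 threads run
```

Under this model, the extra four FX-8120 cores sit idle, so the FX-4100's 500MHz clock advantage carries the day.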
 

It's the much lower-clocked FX-8120 being benchmarked. The FX-6100 and FX-4100 have a pretty good clock advantage: 200MHz for the FX-6100 and 500MHz for the FX-4100. When overclocked, the FX-8120 makes a huge leap forward, overtaking all but the i5s.
 
[citation][nom]reynod[/nom]Don, the article is a bit misleading ... till I read it a second time. All of these systems tested had an external high-end graphics card bolted in ... a 7970. I think many started reading the article thinking the CPUs with on-chip GPUs were being used ... like I did ... then went "um er ??" when I realised some didn't. Rerun the benchies for the CPUs that have an on-die GPU and see what results you get. The AMD processors crush the Intel offerings across the board. Compare all of these again and correct me if I am wrong. Let's recommend the processors on the basis of gaming results with the on-die GPU/CPU combination ...[/citation]

Good point.
 


It's not a good point, because no gamer using an Intel CPU uses the integrated graphics for gaming, and most AMD buyers don't use integrated graphics either. Why would you buy a CPU based on a feature that you won't use? Granted, you might use the integrated graphics for a purpose besides gaming, like Quick Sync, but this article is about cheap gaming CPUs, not garbage gaming CPUs and transcoding.

Most gamers don't buy Llano and the other APUs unless they are on an insanely tight budget, and even then an Intel Pentium or (dual-core) Celeron with a discrete card would be better and still cheaper. AMD A8s like the 3870K cost about the same as a cheap Pentium plus a Radeon 6670 while being much slower than the Pentium system without an overclock, and they probably couldn't overtake the Intel system even with one. The A8 would win in applications that use more than 2 threads, but it would still lose to the Pentium system in gaming. Beyond that, the Intel system is much more upgradeable, since it could take a faster graphics card or a faster CPU, whilst the A8-3870K is the best Llano you can get. You can only upgrade the graphics at that point, and that will cost more than getting the Pentium with a faster card.

The point is that AMD seems pretty much finished as far as gaming CPUs go right now. If the experiment I proposed above is attempted, then maybe the 8-core FX CPUs can be shown to be useful, but even then they won't beat Intel on performance or price.
 
[citation][nom]blazorthon[/nom]Please read my comment and notice how your response has nothing to do with what I said. [/citation]

Please read my comment and soak it in.

The patches I refer to do *exactly* what you said, they prioritize one module per thread.
 
I think this article is completely fair in every way.

Let's take a look at AMD for a second. They made us wait, they made us listen to the hype, then they released a budget processor with the "FX" moniker. They should have replaced it with "BDGT", so the owners could claim it stood for "Bulldozer GT", and everyone else could claim it stood for "Budget". Then, at least, they'd both be right depending on what benchmark test was being run.

Now on to Llano... they named a chip the "A8-3870K". It looks strikingly similar to "i7-2600K", and at first glance, to the uninitiated, it would appear to be more powerful given that it has higher numbers all around.

If you ask me, AMD begged to be thrown to the wolves.

For those arguing the Llano chips shouldn't be in a benchmark like this, I first refer to the above. Also, Llano chips cannot Hybrid CrossFire with any and every AMD graphics chip out there, only a very select few budget ones; anything else just won't work. I think they fully deserved to be in this benchmark, because over time most gamers would ditch their integrated solution eventually, so this test shows all the CPUs in their best possible light. You can then surmise from there that it all goes downhill for everyone, at varying rates, as you add more "noise" to the equation (AV scanners, etc.).

Now, as far as comparing Llano plus a discrete card in Hybrid CrossFire vs. an i3 with a single GPU of the same model, I believe it's already been proven the i3 will win. Then, if you take into account that Llano can use 1866 RAM speeds, and up to 2000 on certain boards, you are just throwing more money at an already budget system. In addition, you'd have to add a little more RAM to the Llano system to compensate for the share dedicated to the integrated graphics. So now you are adding the fastest possible RAM, and more of it, to Llano just to try to make it a fair fight with an i3 + single GPU (god forbid you should CrossFire two of them with an i3). Factoring all of that in, actually trying to set up a test bench with all of those variables should prove more tedious than it's worth, and the idea should wither on the vine, so to speak. More power to anyone who wants to bother and throw all those "extra fast" goodies in there, but most people buying Llano would pick up cheap DDR3-1600 CAS 9 RAM, probably 8GB of it, and go to town.

I own the A8-3870K, but haven't fired it up yet. Admittedly, I was more excited about it before I read this article, but now I'm well grounded and know exactly what to expect from it with its integrated graphics (based on previous benchmarks, it all ties together nicely now).
 