Picking A Sub-$200 Gaming CPU: FX, An APU, Or A Pentium?

Status
Not open for further replies.
"It would be unfair to use more expensive memory for AMD since these are meant to be budget systems and that would raise their price."

-uses a motherboard that costs 50 dollars more for the Intel systems-
 

Funny how the rules change when the situation does. I got to thinking how it wasn't fair to cripple an Intel i7 920 by using only two memory sticks, but now it's perfectly fair to cripple the AMD FX CPUs by using slower-than-supported memory.
 
[citation][nom]Cleeve[/nom]This is a CPU comparo, not a memory comparo. Are you suggesting that we should stack the deck and give the AMD CPUs faster memory? [/citation]

Are you serious? Do you really expect that the memory configuration has no impact on CPU performance? You are stacking the deck for Intel if you DON'T use the best supported memory for each CPU.

You can mention it in the article along with all the cost consequences, but that doesn't make the gimped memory configuration any more valid.
 
[citation][nom]noob2222[/nom]As for the 2500k, yes, I saw that; the point is, why even have it there other than to show how superior Intel is. As for the memory, Tom's has tested it over and over on Intel systems, showing you gain 1%, but I have yet to see anything along the same lines with AMD. Their architectures are massively different, and so are the results. Test both systems with 1866 memory; don't cripple AMD just because Intel doesn't benefit. It's not AMD's fault that Intel can't utilize faster memory.[/citation]

Exactly this. You removed the GPU bottleneck by using the best GPU there is, and then you refuse to do the same with memory. Hey, why not boot from a DVD live system next time instead of an SSD?
 
[citation][nom]kinggraves[/nom]It does also prove the point, though, that in GPU-bound games, none of these offerings really are a bottleneck (except the A4, which doesn't even belong here). The reality is that most games out at this point are still not really optimized for multithreading. Games take years to develop, so that isn't too surprising. When some of these engines were first started, quad cores weren't even out. More games in the future should be using multiple cores. What disappoints me is actually the weak overclocks I've seen so far on the unlocked Llano. 3.6 vs 4.0 on the 955 mainly accounts for the .3 difference between them, but considering it's a lower-TDP part, I'd think that Llano could be pushed a bit further than the Phenom IIs. Anyway, the Llanos are brought into this because they're the only Stars-based chips on a process similar to SB chips, meaning they can actually compete on a wattage basis. Bulldozer was supposed to be the 32nm CPU part, but... well, that didn't turn out as expected. It's pretty clear, though, that Llano performs as well as an Athlon II while using less energy, so they could have likely had the same results from a Phenom II on a 32nm process. Anyway, Ivy isn't going to be a major improvement in CPU performance; it's a wattage reduction/better IGP upgrade. Piledriver has a lot of room for improvement, so hopefully they manage to fix everything wrong so far and get a solution that runs well. If AMD drops out of the market, we won't be discussing sub-$200 processors much longer.[/citation]


Bulldozer "is" a 32nm part.

serendipiti :

One thing to consider is platform value instead of only processor value. You can get an inexpensive AMD FM1 board and go with the integrated GPU, which has some gaming capability, on a very low budget. What really surprises me is that the integrated GPU on Llano did not show itself in the tests (in gaming tests, shouldn't it give some advantage over the Phenoms?). Perhaps the 7970 is too powerful a GPU to reveal any benefit from the crossfired APU? (Were the AMD APUs crossfired?)

There is no such thing as crossfired APUs. You probably meant Hybrid CrossFire.

elbert :

Comparing the dual- and triple-core CPUs may require a VisionTek Bigfoot Killer 2100 Gaming Network Card to get the same latency.

What are you saying? What does a network card have to do with a CPU? It's been a long time since an integrated network card had any impact on CPU latency.

BigMack70 :

The one time TH actually did test the difference between memory speeds, the gain was most noticeable going to 1600 MHz. 1866 WILL give slightly better results, but you don't have to use it to get results. The faster memory would also be put to better use with the IGP, mainly if you're overclocking, since faster memory also makes the graphics memory faster. It really doesn't take a lot of RAM either, usually 512MB, and you could bump it to 1-2GB if you wanted. It's really grasping at straws to fault the Llano for utilizing faster memory.

Why anyone would use 512MB with a Llano system is beyond my understanding. Or are you just trolling?

Certainly not since the flop called the AMD Phenom... Pentiums have been rocking (or at least remaining competitive in) the mid-range since shortly after Conroe came out.

Bulldozer was an even bigger flop unfortunately
 
I doubt better memory could have that much impact on overall CPU IPC for either the Stars or BD architecture... even tests on early Conroe systems showed memory speed was not as important as the L1, L2, and cache architecture.

You can't make up that much with system memory speed for the fact that BD's internal cache latency is poor, and its prefetch is about as complicated as my nanna's tax return.

You're grasping at straws.


 
[citation][nom]Cleeve[/nom]With the vast majority of folks actually running 32-bit windows with a 3 GB usable limit, I would argue the results would have been skewed with 8 or 12 GB of RAM on a 64-bit OS.[/citation]
The vast majority of people with either a GTX 580 or HD 7970 are not running them with 4GB of RAM, much less an AMD Athlon CPU; a tiny minority -- who knows? I see bizarre setups every day. The problem is that the Test Configuration is Windows 7 x64, and again, a $550+ GPU plus $24 of RAM makes no common sense. There are folks out there doing all sorts of odd stuff, and an article is the last place I expect to see it repeated. You can argue all day and night, but you should know better. I 'get' the CPU comparison, but the added factor of RAM bottlenecking just confuses things needlessly.

It's just as odd with this article and 4GB + i5-2500K -> http://www.tomshardware.com/reviews/radeon-hd-7970-benchmark-tahiti-gcn,3104-9.html

I could see these 4GB articles if a 2x2GB kit were $200 or more, but now you can get a 2x4GB kit for $35 and a 4x4GB kit for $80. In every forum, articles with odd test configs are being ridiculed needlessly; use at minimum 8GB, or better, 16GB.

[citation][nom]BigMack70[/nom]I have an older Phenom II x6 @ 4.0 GHz and a 7970 and the setup works great. Sure, the 7970 is bottlenecked a bit... (Skyrim/SCII), but it still puts out 50-60 fps. It's not worth $300 for me to change platforms to a 2500k. Eventually it will be worth that amount of money to switch to intel, but not now.[/citation]
A 'bit' is more like 30%~40% bottlenecked, and even more if you're running 4GB of RAM.

My first sentence was "This article is fine to demonstrate the CPU's." The details are in the wrong direction, and add an unneeded variable.
 


As much as you enjoy blowing it out of proportion, the memory difference is irrelevant at 1080p.

You're grasping at straws to find flaws in this review so you can feel better about your brand preference. If that's your goal, just stay off the internet and stop reading CPU reviews until AMD comes out with something competitive.

For those of us curious to actually learn something, this stuff is interesting.
 


I disagree. The GPU bottleneck is significant, but gains with more memory are barely detectable even in lab tests.

Plus, all CPUs suffered the same 'disadvantage'. And calling it a disadvantage is a real stretch.

You're nitpicking about things that have no impact. Devil's advocacy for its own sake is a waste of time.
 
The following is just an observation, not meant to offend people who like and support AMD and their products.
It looks like FX owners (and calves) got their feelings hurt.
I guess they thought that despite FX's (relatively) poor stock performance, they could leverage FX's higher memory support, the Windows scheduler patch, easier overclockability, etc. to mitigate its disadvantage against Sandy Bridge CPUs, especially considering AMD's position (pre-Zambezi) in the value gaming CPU segment.
Then came this article, which showed that a (4-module) 8-core FX-8120 couldn't beat a 2-core (4 logical cores) Core i3-2100 at gaming... even after overclocking past 4 GHz. That, I think, was quite tough to digest.
AMD motherboard chipsets like the 990X/990FX, which offer X58-level features at a lower price, are hamstrung by FX. I wonder how much they'll bottleneck crossfired graphics cards, e.g. 2x 6850s or 2x 7850s (if December 2011's $1200 PC is any indication).
calf - clueless AMD-loving fanboys.
edit: I'm really sorry for my bad English.
 
Guys I have to agree with Don.

The best I saw in a previous review was 2 frames per second from faster RAM... given the same overall clocks.

http://www.anandtech.com/show/4503/sandy-bridge-memory-scaling-choosing-the-best-ddr3/6

The results weren't very stimulating, were they? Just as expected, gaming with faster memory just doesn't make any notable difference. I could have potentially lowered the resolution and settings in an attempt to produce some sort of difference, but I felt that testing these games at the settings they're most likely to be played at was far more enlightening. If you want better gaming performance, the GPU is the best component to upgrade—no news there.

http://www.bit-tech.net/hardware/memory/2011/01/11/the-best-memory-for-sandy-bridge/6

http://www.bit-tech.net/hardware/memory/2011/01/11/the-best-memory-for-sandy-bridge/10

However, our testing shows that memory rated at over 1,866MHz doesn't give much extra performance. Worse still, in some applications only 1,333MHz memory gives a performance penalty, meaning that 1,600MHz memory is fine.

If you're doing anything other than heavy multi-tasking - this goes for gamers in particular - then a 1,600MHz or 1,866MHz kit is plenty. You could opt for CL8, as we saw some advantage in the video encoding test, but we wouldn't obsess over this factor, especially if a CL9 kit is much cheaper.

Our German reviewers (Achim and Roos) did some benching on Phenom too ...

http://www.tomshardware.com/reviews/phenom-ii-ddr3,2319-6.html

The game benchmarks all show small performance improvements if faster and quicker memory is used. However, DDR3-1600 does not provide the extra bang you would expect. In fact, DDR3-1333 and other settings with low latency settings prove to be the best for games.
 
How did Tom's miss out the whole of Intel's Sandy Bridge Celeron range?

You guys who think Pentiums are good value for money should have a look at the Celeron G530 -- around 90% as fast as the Pentium and barely more expensive than a SINGLE-CORE AMD Sempron 140. The best CPU for the money by far, and by the time it isn't, you can upgrade to a 2500K or Ivy Bridge when the price drops and AMD finally gets a decent CPU out the door that can compete with the current Intel chips; in turn, you get a free CPU to keep for an HTPC build or something.
 
[citation][nom]dj christian[/nom]Bulldozer "is" a 32nm part.[/citation]

I never said it wasn't. Bulldozer was a disappointment and underperforms in regards to what a 32nm part should do. Llano performs like an Athlon II at similar clock speeds would do if shrunk to 32nm.

[citation][nom]dj christian[/nom]There is no thing as crossfired APUs. [/citation]

Hybrid crossfire still counts as "crossfire". If you're getting into the technicality of the wording, it's actually called "Dual Graphics" in the APU/GPU combination.

[citation][nom]dj christian[/nom] Why anyone would use 512mb with a Llano system is out of my understanding. Are are you just trolling?[/citation]

512MB is actually the usual system default for the Llano IGP. I wouldn't suggest using it with that little either, but it isn't as if you need to purchase 4GB just because you have an IGP.

[citation][nom]de5_roy[/nom]the following is just an observation, not meant to offend people who like and support amd and their products.
...
calf - clueless amd loving fanboys.[/citation]

Are you sure you weren't out to offend anyone? This last statement doesn't seem like an observation.

[citation][nom]Cleeve[/nom]As much as you enjoy blowing it out of proportion, he memory difference is irrelevant at 1080p.You're grasping at straws to find flaws in this review so you can feel better about your brand preference. If that's your goal, just stay off the internet and stop reading CPU reviews until AMD comes out with something competitive. For those of us curious to actually learn something, this stuff is interesting.[/citation]

You should expect people to nitpick when you conduct benchmarking for a professional site and your testing methodology is flawed. You chose to standardize memory based on price, as this is a "budget" comparison. That's fine, so we're keeping price as a constant. You then use a motherboard for the Intel systems that costs more than the AMD motherboards. At that point, performance becomes the factor and price is irrelevant.

Now, different memory isn't going to add 20 FPS to AMD, and you can still get a decent Intel board for $120, but that doesn't matter. People want to see testing done properly, with a consistent ideology. If you tested with price in mind, then the Intel board should be in the same price range. They should all be the same brand, to target similar quality, so the Biostar is out too. You can then state that the Intel board does not have 8x PCIe lanes, which is an overall disadvantage to the platform at that price but not a flaw in the CPU itself; just get a better board if you want the 8x lanes.

Alternatively, you could have kept performance as the constant and used both the ideal motherboards and memory to give the best possible results for each CPU with the same motherboard features. Instead you stuck with price as the variable at one point and performance as the variable at another. It doesn't matter how little difference it makes; the method is flawed. You could have spared yourself the trouble by just using "ideal" memory speeds regardless of price, and it would have made little difference.

People do not need to get off the internet; there are plenty of websites on the internet with a high-school-level comprehension of the scientific testing process.
 


First, I don't agree that it's flawed. I believe it's your analysis of the situation that is flawed.

Second, it's not a price issue, it's a standardization issue. The same amount, speed, and latency of RAM was applied to all platforms. Price is only a factor when it comes to the CPU, as all other components are standardized except the motherboards. As for motherboards, your PCIe lanes concern doesn't make sense: all boards are using a single card and all have 16x PCIe support; the second slot only comes into play for CrossFire/SLI. Our motherboard comparos have shown for years that performance differences are irrelevant between boards with the same chipset.

You're complaining about things that have a minuscule impact on results (or none at all), and even if they did, changing them would increase the performance of the entire spectrum of CPUs we've tested. And since the CPU is the variable we're trying to isolate, your reasoning is invalid.

You're protesting for the sake of argument. Unless you're protesting for the sake of defending a brand preference, in which case... yeah, the internet is better off without you. :)
 

No, I am not out to offend anyone. I can't control how oversensitive people will feel, though.
Like I said before, this is just an observation. After FX came out, I noticed some people pitching Windows 7 scheduler improvements of up to 30%, superior performance with overclocking, etc. Most of them turned out to be 8150 owners and people who wanted to buy an 8150 and wanted to justify their decision.
This time it's the 8120 and 4100 owners, and users who want to upgrade to those CPUs and want to justify their decision.
In reality, people will buy whatever they want to buy, regardless of what benchmarks or reviews say.
I have one for Intel too - m.i.l.f., in case Intel pulls a Zambezi. Unfortunately, a brief googling brought up some... rather unpleasant results for that acronym... :sweat: ;-(
 
[citation][nom]BigMack70[/nom]...
QED: It makes NO sense for someone with a decent AM3 platform and single GPU setup to spend $300 upgrading to Intel at the moment. Do I wish I had a 2500k? YES! But is it worth anywhere near $300 for me to switch? Not even close.[/citation]
Exactly. My teeth hurt from the grinding I've been doing over buying a 990FX in anticipation of BD, and then how badly it sucked, but I'm not some pathetic AMD fanboi grasping at straws here. I'm also not an Intel fanboi happily whaling away at his opponents long after they're down. I just like knowing what the best budget parts are, and it's pretty clear they are no longer AMD. Insofar as "budget" may also refer to the total power available (e.g. in a rig that needs to be tiny to fit in a confined space like a sleeper-cab), I'm still hoping to see some benchmarks on the 35W "T" parts.

 


I absolutely agree with you, if you have an existing AM3 platform it can provide decent gaming ability as long as you keep your detail settings in check and control the bottleneck. It's not worth a pure CPU upgrade for gaming unless you can afford the graphics punch to go with it.

Just be mindful of what you have and how it's best used; it's best served with a decent single graphics card, just don't go blowing a wad of cash on a couple of 7970's in CrossFire. :) It sounds like you're savvy to this info already, which will serve you very well.
 

I did some playing around today for fun.

FX-8120 @ 4.7 GHz, 5870

1333 memory
Sandra memtest - 15.3 GB/s
3DMark physics - 7116
Metro 2033 1024x768 - 104 fps <--- the 5870 was 100% bottlenecked at 1900x1200; even at 4.0 GHz, fps was 53.3
Civ V maxed settings - 15.336 fps (FRAPS, units benchmark)

2133 memory
Sandra memtest - 21.1 GB/s <-- ~38% in a synthetic test
3DMark physics - 7958 <-- ~12%
Metro 2033 - 111 fps <--- 6.7%
Civ V maxed settings - 18.567 fps <--- that's serious performance gains, 21%?!!

Just for kicks, let's try to find the clock speed difference that equals 1333 memory.

4.0 GHz at 2133
Sandra memtest - 18.2 GB/s
3DMark - 6820
Metro 2033 - 103.8
Civ V - 15.131 fps

Probably closer to 4.1 GHz, but I'm not going to spend all day testing minuscule details.

Pair that with Madshrimps' memory testing seen here; BD does not behave the same as Intel when it comes to memory.

http://www.tomshardware.com/reviews/quad-channel-ddr3-memory-review,3100-10.html

A gain of ~1% for Intel vs 5%+ for AMD FX.

As you can see from my personal testing, running 1333 memory on the FX chips is like taking away 600 MHz.

Granted, mine is running at 2133, but that's what I paid for: on sale, $62 for 8GB. Couldn't be happier getting ~10% more performance from spending $12 more on extreme high-end memory.
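For anyone double-checking the deltas in those runs, they're plain percentage arithmetic on the posted scores. A quick sketch (the helper function and labels are just illustrative, not from any benchmark tool):

```python
# Relative gains of DDR3-2133 over DDR3-1333 on the FX-8120 @ 4.7 GHz
# with a 5870, using the scores posted above.

def gain(fast, slow):
    """Relative improvement of `fast` over `slow`, as a percentage."""
    return (fast / slow - 1.0) * 100.0

results = {
    "Sandra memtest (GB/s)": gain(21.1, 15.3),      # ~37.9%
    "3DMark physics":        gain(7958, 7116),      # ~11.8%
    "Metro 2033 (fps)":      gain(111, 104),        # ~6.7%
    "Civ V units (fps)":     gain(18.567, 15.336),  # ~21.1%
}

for name, pct in results.items():
    print(f"{name}: +{pct:.1f}%")
```

Nothing exotic, but it makes the posted deltas easy to re-check: the synthetic bandwidth gain is huge, while the GPU-limited Metro run barely moves.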

 


Yeah, but your Civ frame rates are still below 20 FPS, so how useful is that result? Set the details to something realistic and playable.

And you've set Metro to low res where any CPU gets 60 FPS+ all the time. In fact, because of this review we're probably dropping Metro from CPU comparisons. The game isn't really ideal to test CPU ability since it chokes on graphics when it's at high details, and at low details everything gets above 60 FPS.

Plus, you're making the assumption that the Intel CPUs wouldn't also show performance gains from the faster memory.

Don't get me wrong dude, I'm not hating on your 8120. That's an impressive overclock and I'm glad you're happy with it.

But without sticking that memory in an i3-2100 system with the same settings, I don't think you've proved much when it comes to gaming. Frankly, I'd be surprised if an i3-2100 couldn't do better than 20 FPS in Civ at stock at those settings. Heck, if you want to send me your Civ save file, I'll even give it a try with 1333 MHz RAM if you'd like to compare.


 
Great article, I had been looking for more info on how the 4100 and 6100 performed, but couldn't find much.

I was hoping you could share some of the settings for the overclock on the FX-4100, as I recently purchased one. In addition, I bought a Cooler Master Hyper 212 EVO 120mm heatsink/fan and an Asus 970 mobo. Would there be much room beyond the test settings with my parts? Hopefully this wasn't a double post.
 
Civ V has a built-in bench mode, just in case you weren't aware.

Unit Benchmark. This benchmark is designed to stress test the user's system by executing a parallel series of animation and rendering tasks. The workload will heavily stress the CPU as well as the GPU and driver. It is designed as a total throughput test to evaluate a user's system and determine where the bottlenecks occur. Results reported are similar to the Late Game View benchmark; however, the display settings are overridden for consistency. Therefore, changes to the graphics settings will not impact the test except for setting resolution. To run this test, run the application with the command line argument "-Benchmark Units", case insensitive.

My setting is 1900x1200 with my 5870. That's why I used the unit benchmark mode. Civ's output is a single performance number, so I FRAPSed the first 60 seconds, when the shadows are turned on. There are 3 different settings used during the 5-minute bench.
 


I knew about the bench, just wasn't sure if you used a save file when you mentioned FRAPS. I thought the bench spit out a log file so you don't need FRAPS?

Is your 5870 at stock clocks?


 