Best Gaming CPUs For The Money: January 2012 (Archive)



Well that's certainly odd... I thought the 660Ti was supposed to be superior to the 7850 and I KNOW the i5 should beat the Athlon.
 
Watch: no matter what I post, there will be a "that benchmark isn't fair." Instead of whining, why not take your 4-core FX, run the same bench, and see where it lands?

Here is the extremely CPU-bound "Action Scene" benchmark built into Far Cry 2, DX10 mode, max settings, averaged over 3 loops.

[Image: Far Cry 2 benchmark results]
 

Lol. No need to try to make yourself out to be the victim bro. I'm just curious to see the results in a modern game. Not sure why you seem to be taking someone disagreeing with your opinion so personally.

Here is the extremely CPU-bound "Action Scene" benchmark built into Far Cry 2, DX10 mode, max settings, averaged over 3 loops.

[Image: Far Cry 2 benchmark results]

See, NOW you actually have some quantifiable benchmarks posted. Albeit not in the most modern game, as Far Cry 2 is now six years old, it's still more quantifiable gaming performance than synthetics. Is it just me, or is that an extremely high OC on your 7850? Weren't the stock clocks somewhere around 850 MHz?

EDIT: P.S. I'm interested to see what kind of results an FX model CPU can pull in a similar rig. I don't own one myself.
 
Just want to point out that, last August, TH posted a review of the 750K with quite a few benchmarks from 8 games, multiple productivity applications, and even some synthetics, and the FX-4350 typically won out. Here is the link.

If you want to convince anyone that this data is wrong, you probably have your work cut out for you, as it looks to me like the only real difference between the 750K and the 760K is the stock clock rates.
 


Yeah, I know. It's reviews like that that kill me, because they didn't overclock both chips to the same clock. There really is no point at all in buying an AMD CPU unless you overclock, since Haswell Pentiums, i3s, and non-K i5s will outperform them otherwise. Bulldozer- and Piledriver-based chips especially suck until you clock them pretty high.

The 750K is a GPU-disabled Trinity; the 760K is a GPU-disabled Richland. There are some TDP management differences, but they are very similar since both are Piledriver-based. I would have bought a 750K, but the 760K was on sale for the same price.
 


As far as I know, you are the one who is whining about how the Athlon is a better gaming chip. You even went as far as saying that the FX-6300 is not worth recommending over the 750K, which is no wonder seeing how you think an i3 is generally faster than the FX-6300 in games without overclocking. But now you are asking me to disable a module and run a benchmark in a game that I don't have, not to mention that my HD 4850 is much weaker than your overclocked card.

If we assume that the FX-4300 doesn't benefit at all from its L3 cache, it should still perform almost identically to the Trinity chip at the same frequency in synthetic benchmarks, since they have the exact same Piledriver modules; a simple Google search for a Cinebench R11.5 result will prove this point. So saying that the Athlon KILLS and DESTROYS the FX-4300 in synthetic benchmarks is quite ridiculous. Someone also mentioned in a previous post that beyond 4.5 GHz the Athlon hits the wall of diminishing returns and you get very little extra performance from overclocking it any higher, while the FX-4300 scales better up to 5 GHz. That in particular is an interesting point, since you are mainly interested in overclocking to run benchmarks and get high scores.

You refused to explain why you crippled the FX-4300 rig with slower RAM and a weaker GPU, or why the 3DMark Physics score has a degree of GPU dependency that is higher with AMD GPUs. You posted the result you scored with that FF demo; I took a look at those forums and found rigs with SB/IB i5 and i7 processors and weaker GPUs than the HD 7850 scoring lower than you, some of them even overclocked, while SLI and CrossFire results were incredibly high in comparison, which proves, if anything, that the benchmark itself is GPU bound. That is not to mention that some of the other results are plainly inconsistent, and there is even an easy way to cheat in the benchmark by pressing Alt+Tab.

This is why the benchmark results I referred to earlier are much more consistent; and I remember you were the one whining back then, when I posted the TechReport benchmarks, about them not being good enough for you. The same graphics card is used with the same settings, the same driver version, and the same OS, which yields more objective and consistent results than people with all kinds of configurations posting their own numbers.
 


I can definitely see how that makes a difference. Yet at the same time, if the 4350 could easily OC to 6 GHz (yes, impossible, I know) and the 760K got capped around 4.5 GHz, would it be fair to run both at the same 4.5 GHz clock?

More importantly to this discussion, the 4350 at stock settings was beating the OC'ed 750K in many of those benchmarks. Doesn't that pretty strongly prove that the OC frequency isn't what allowed the 4350 to best the 750K?
 


Let's see, I never said the Athlon destroys anything. I simply said the L3 doesn't help. Complain all you want about my benchmarks; you have posted nothing of your own. This information about the L3 sucking in BD/PD is nothing new. I'm not sure why the Tom's Hardware community is the last to know.

BTW you are searching the wrong thread on Overclock.net for the FF bench, so you can stop calling me a cheater.

http://www.overclock.net/t/1363297/square-enix-final-fantasy-xiv-a-realm-reborn-official-benchmark-exploration/360

 


They also used a slow CPU-NB clock and slow RAM speed/latency on the Athlon compared to the FX-4350, which had an overclocked NB and HT. It's little details like this that make a big difference in the benches they ran.

I'm running 2600MHz NB and DDR3-2400CL10. The fast RAM is what makes the L3 cache an obsolete waste of transistor space.

An interesting little bench if anyone wants to run it:

[image]


 



"Let's see, I never said the Athlon destroys anything."
"jmrainwa said:
I really dont understand the final analysis of the most inexpensive CPUs. It claims a significant performance boost from the 750k to the FX6300 (5.75% avg increase based on the graph for a 50% price increase or $40) but a less of a boost from the 6300 to the 4130 (7.25% avg increase for 4% price increase or $5). So a .14% avg gain per dollar vs a 1.45% avg gain per dollar. Am I missing something?


You got it right. I've benched so many AMD chips, anyways I can tell you that the FX-6300 and FX-8300 only pull ahead of the 750K/760K chips if the application is so well threaded that it can fully utilize the extra cores, provided same overclock speeds. The architecture limits the usefulness of the L3 cache such that it has much higher latency than its predecessor K10/K10.5 Phenoms. In fact, if you use AIDA or Sandra, you can actually measure this yourself. You'll see that an A series chip is just as fast without L3 cache provided it has high RAM speed. On top of that, all piledriver cores have a fairly large L2 cache. Tom's needs to get better at testing and stop making erroneous statements like "being held back due to no L3 cache". It just isn't true."

“I challenge YOU to disable two of your FX-6300 cores and try and out bench my 760K. You will lose, because you are too foolhardy.”

I couldn't care less about your benchmarks, actually, or whether you believe that the 750K is better than the FX chips or not; it simply doesn't matter to me. You have refused to accept the results of every single gaming benchmark that has been posted here, which to me are far more consistent and trustworthy than this ridiculous FF benchmark you keep referring to, where people with all kinds of different hardware and OS configurations posted results that seemed either GPU bound or simply inconsistent at times. But I guess that is the only kind of benchmark you accept as valid. The i5-3570K + GTX 660 Ti rig was running the same benchmark whose result you posted, and it is from the same thread (the Character Creation benchmark). I didn't accuse you of cheating; I was merely doubting the validity of such an inconsistent and unmethodical way of benchmarking, where results can easily be manipulated or seem odd.
"Piledriver based chips especially suck until you clock them pretty high." That generally isn't true, especially in newer, better-optimized titles.
 


Then I accept your capitulation, though it is sad that you couldn't even post one of your own benches.

 
"I'm running 2600MHz NB and DDR3-2400CL10. The fast RAM is what makes the L3 cache an obsolete waste of transistor space.."
http://www.youtube.com/watch?v=dWgzA2C61z4
"Then I accept your capitulation, though it is sad that you couldn't even post one of your own benches."
You seem delusional to me, since you have yet to provide any gaming benchmark that contradicts the results of the many benchmarks I and others have posted. And you won't be able to.
 
Hello Tom's, hello friends. My question is: I have an i5-2400 processor, a Gigabyte Z77 motherboard, 1x8 GB and 2x4 GB of DDR3-1333, a 4 GB 270X GPU, and a 120 GB 840 SSD. Is the change to an i5-4670K and a Z87 motherboard justified?
 
I think Damric has made some good points about the value of L3, particularly in comparison to the value of fast[er] RAM. It appears clearly visible in [his synthetic] benchmarks. How well it translates to actual games will no doubt vary, but it should be intuitively obvious that it won't be the reverse of what is shown in the synthetics.
At the end of the day though, if your chosen CPU running at your clocks is running your games to your satisfaction, what difference do benchmarks (synthetic or "real") actually make? If your hobby is benchmarking, then it will be hard to find "real" (i.e. gaming) benchmarks that are as consistent and repeatable as the synthetics.
 


This has all been a very interesting discussion (once the personal attacks stopped). At the end of the day, though, I'm left with the basic question of why the bulk of gaming tests out there show the FX-series CPUs to be superior, clock for clock, to the FM2 CPUs.

Looking for other tests of L3 impact on gaming, you have this dated article:
http://www.tomshardware.com/reviews/athlon-l3-cache,2416-6.html

Here, the L3 cache seems to have some impact depending on the title: 0% for some, around 5% for a couple, and a maximum of roughly 20% for Left 4 Dead.

Is there a flaw in the methodology here?

At the same time, this article indicates the poor implementation of L3 cache for FX CPUs, so this may be really an issue with the FX CPU architecture and not necessarily L3 cache as a whole:
http://www.extremetech.com/computing/100583-analyzing-bulldozers-scaling-single-thread-performance/3

At the end of the day though, I'll reference this article as an example of what you typically see with CPU scaling in gaming among the AMD cpu's:
http://www.tomshardware.com/reviews/piledriver-k10-cpu-overclocking,3584-15.html

If you compare the 750K @ 4.3 GHz to the FX-4350 @ 4.7 GHz, and scale the FPS measured by the 750K up according to the difference in clocks, you'll find that the FX-4350 still generally comes out ahead. For the GPU-intensive titles (i.e. Crysis 3 and Far Cry 3) the difference is negligible. However, if you look at the CPU-intensive titles, especially in the more likely to be CPU-bound, lower-resolution scenarios, I'm seeing a distinct advantage for the FX-4350.

Take Skyrim, for example: scale the 750K results up to the 4350's clock and you get 77.3 FPS * (4.7/4.3) = 84.5 FPS, which is about 5% shy of the 88.8 FPS attained by the 4350.

Even more interesting is the difference with the Phenom II X4 965 BE, which, according to the linked article above, has a much better implementation of L3 cache. In Skyrim, scaling the Phenom II up to 4.7 GHz would yield 97.4 FPS, which is a whopping 15% better than the scaled 750K and almost 10% better than the FX-4350.
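If anyone wants to double-check or reuse that arithmetic, here is a quick sketch of it in plain C. The FPS and clock figures are just the Skyrim numbers quoted above; the scaling itself assumes purely CPU-bound performance that rises linearly with clock, which is obviously a simplification.

#include <stdio.h>

int main(void) {
    /* Skyrim numbers from the linked Tom's article, as quoted above. */
    const double target_clk = 4.7;                  /* GHz, the FX-4350's clock  */
    const double a750k_fps = 77.3, a750k_clk = 4.3; /* Athlon X4 750K @ 4.3 GHz  */
    const double fx4350_fps = 88.8;                 /* FX-4350 @ 4.7 GHz         */
    const double ph2_scaled = 97.4;                 /* Phenom II, already scaled */

    /* Scale the 750K result up by the clock ratio. */
    const double a750k_scaled = a750k_fps * (target_clk / a750k_clk);

    printf("750K scaled to 4.7 GHz  : %.1f FPS\n", a750k_scaled);
    printf("FX-4350 vs scaled 750K  : +%.1f%%\n", 100.0 * (fx4350_fps / a750k_scaled - 1.0));
    printf("Phenom II vs scaled 750K: +%.1f%%\n", 100.0 * (ph2_scaled / a750k_scaled - 1.0));
    printf("Phenom II vs FX-4350    : +%.1f%%\n", 100.0 * (ph2_scaled / fx4350_fps - 1.0));
    return 0;
}

It prints roughly 84.5 FPS, +5%, +15%, and +10%, which is where the percentages above come from.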

Looking at the test setup, it looks like all three use the same memory setup (correction - the Phenom II is running 1600/8 with the other two running 1866/1875/9), but perhaps the 750K can scale memory up higher, which could also make a difference.

However, if everything else is the same with these CPUs, it would appear that the Phenom II > FX > 750k.

Understand that the Phenom II has four full cores, whereas the other two have "modules". But Skyrim tends to scale based on single-threaded CPU performance.

Any thoughts on this? Is this Tom's test flawed? It would be good to know.
 
This is a quote from mdocod over at overclock.net. The guy is a rocket scientist.

Regarding L3.

L3 makes a significant difference in performance if it is substantially faster than system RAM (higher bandwidth and lower latency). Unfortunately, the Bulldozer/Piledriver architecture represents a regression in L3 performance compared to the K10 architecture. When we factor in that regression, combined with the improved memory controller (which can run higher speeds/bandwidth with lower effective latency), the delta in performance between using the L3 cache and just using system RAM instead is minimal on PD.

Unfortunately, when the CPU is sharing a memory controller with the GPU, the performance of the system memory (from the CPU's perspective) tanks, and there would be significant advantages to having L3 available to the CPU in this configuration. Pretty silly, eh? On FX, L3 isn't important because the system memory is dedicated and the L3 is too slow to offer advantages. Where L3 would have the best chance of improving performance is on the platform that doesn't have room on the die for it.

In contrast, the L3 performance on the Intel platform is comparable to the L2 performance on the AMD platform; as such, L3 makes a BIG difference in performance on a Haswell chip.

http://www.agner.org/optimize/instruction_tables.pdf

Note the ~77 cycle latency for L3 on PD as measured by Sandra, and the L2 latency that is basically on par with Intel's L3 latency. This is in line with what has been measured by other third-party analyses for compiler optimization considerations.

For reference, store/load latency for system memory is generally ~125-250 cycles for either platform IIRC depending on the conditions (which execution pipe is issuing the load/store call, and what type of load/store instruction is being issued etc), so L3 on PD isn't much better than just going to system memory. See instruction latency for FXSAVE, FXRSTOR, XSAVE, and XRSTOR operations.

The fact that the L2 is so HUGE on PD, combined with the L3 being so slow, just doesn't leave a lot of opportunity for L3 to offer much benefit.

If you can understand that, you can understand why the L3 on PD/BD is wasted transistor space.
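If you don't have AIDA or Sandra handy, a rough pointer-chasing loop will show the same effect. This is only a sketch, not the method those tools actually use, and the working-set sizes are guesses meant to land roughly in L2, in L3, and out in system RAM on a Piledriver chip; adjust them for whatever you're running.

/* Build and run with something like: gcc -O2 chase.c -o chase && ./chase */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    /* Rough working-set sizes: ~1 MiB (inside a PD module's 2 MiB L2),
       ~6 MiB (bigger than L2, inside an FX-4350/8350's 8 MiB L3),
       ~64 MiB (out in system RAM). */
    const size_t sizes[] = { 1u << 20, 6u << 20, 64u << 20 };

    for (int s = 0; s < 3; s++) {
        const size_t n = sizes[s] / sizeof(size_t);
        size_t *buf = malloc(n * sizeof(size_t));
        if (!buf) return 1;

        /* Sattolo's shuffle builds one big random cycle, so every load
           depends on the previous one and the prefetchers can't help. */
        for (size_t i = 0; i < n; i++) buf[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t t = buf[i]; buf[i] = buf[j]; buf[j] = t;
        }

        const size_t iters = 20u * 1000 * 1000;
        size_t idx = 0;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < iters; i++) idx = buf[idx];  /* dependent loads */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
        printf("%2zu MiB working set: ~%.1f ns per load (idx %zu)\n",
               sizes[s] >> 20, ns / (double)iters, idx);
        free(buf);
    }
    return 0;
}

If mdocod is right, the middle size won't look much better than the largest one on PD, while on an Intel chip the gap between the two should be obvious.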
 


Yes, that makes sense, and it is consistent with the article I had linked above. FX was a step back in L3 cache from Phenom.

But how do you account for the better clock-for-clock FPS performance in the Tomshardware article I linked?
 

It may not be the exact same clock, but the 4.3 GHz 750K is close enough to the stock 4.2 GHz 4350 in the test. Most of those scores actually show the OC'd 750K right with the stock 4350 (the compression tests and some of the more demanding games seem to hit it pretty hard, though). But yes, their claim that "We know that a lack of L3 cache stifled performance. After all, the stock 3.4 GHz Phenom II X4 fared significantly better" doesn't make sense when the 750K kept up with the 4350 in nearly every test.


This makes me curious about the Intel / AMD efficiency and IPC situation right now. I've been operating under the notion that AMD's modules simply can't execute as fast as Intel's. This makes it seem like memory starvation is what's causing the AMD modules to fall behind. If that gets reworked and fixed, will we get back to better performance competition?
 