Best Gaming CPUs For The Money: January 2012 (Archive)



Well, it is true that the Core i3-4130 has much better per-clock performance than the FX-6300, and sadly the majority of games out today do not use many cores; four threads seems to be the sweet spot at the moment. That may change now that developers have to consider heavy multithreading due to the console hardware, and DX12 should help with that a lot. Also, I was working within the confines of his question: an i5 wasn't an option, though yes, an i5 would be a much better choice, as you get 4 cores/4 threads rather than the i3's 2 cores/4 threads :)

 

A higher temperature difference lets the SAME cooling system move more heat. Of course, bad thermal compound inside the chip is a bad idea. But given that bad thermal compound arrangement (and the cooler and whatnot), you can dissipate more wattage with a temperature difference of 60C than with 40C.

(∆Q/∆t = -K×A×∆T/x)
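Plugging rough numbers into that formula makes the point: with identical k, A, and x (the values below are just illustrative assumptions, not real compound specs), a 60C delta moves 1.5x the wattage of a 40C delta.

```python
# Fourier's law of conduction: dQ/dt = k * A * dT / x
# k, A, x are illustrative assumptions, not measured TIM data.
k = 1.0      # thermal conductivity of the compound, W/(m*K)
A = 1.0e-4   # contact area, m^2 (~1 cm^2 die)
x = 1.0e-4   # compound thickness, m (~0.1 mm)

def heat_flow_watts(delta_t_c):
    """Heat conducted through the same joint at a given temperature difference."""
    return k * A * delta_t_c / x

for dT in (40, 60):
    print(f"dT = {dT} C -> {heat_flow_watts(dT):.0f} W")
# dT = 40 C -> 40 W
# dT = 60 C -> 60 W   (1.5x the heat through the SAME joint)
```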
 


All I'm saying is I would not use the term "efficient". I completely understand the mechanics of heat transfer - you would never describe a system with a crappy thermal compound that drives the temperature higher as "efficient". The temperature is higher because it is inefficient. Therefore, you need a higher temperature in order to achieve the desired heat transfer.
 


You got it right. I've benched so many AMD chips, and I can tell you that the FX-6300 and FX-8300 only pull ahead of the 750K/760K chips if the application is so well threaded that it can fully utilize the extra cores, at the same overclock speeds. The architecture limits the usefulness of the L3 cache such that it has much higher latency than it did on the K10/K10.5 Phenoms. In fact, if you use AIDA or Sandra, you can measure this yourself: you'll see that an A-series chip is just as fast without L3 cache provided it has fast RAM. On top of that, all Piledriver cores have a fairly large L2 cache. Tom's needs to get better at testing and stop making erroneous statements like "being held back due to no L3 cache". It just isn't true.
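If you don't have AIDA or Sandra handy, a crude random pointer-chase shows the same general idea. This is only a sketch: in Python the interpreter overhead swamps the absolute numbers (those tools isolate the hardware latency properly), and the working-set sizes and step count below are arbitrary assumptions.

```python
import random, time
from array import array

def chase_latency_ns(size_bytes, steps=1_000_000):
    """Crude dependent-load chase over a working set of roughly size_bytes."""
    n = max(size_bytes // 8, 2)               # 8 bytes per array slot
    perm = list(range(n))
    random.shuffle(perm)
    nxt = array('q', [0] * n)
    for i in range(n):                        # build one random cycle so each
        nxt[perm[i]] = perm[(i + 1) % n]      # load depends on the previous one
    idx, t0 = 0, time.perf_counter()
    for _ in range(steps):
        idx = nxt[idx]
    return (time.perf_counter() - t0) / steps * 1e9

for kb in (32, 256, 4096, 65536):             # roughly L1 -> L2 -> L3 -> DRAM sized sets
    print(f"{kb:6d} KB working set: ~{chase_latency_ns(kb * 1024):.0f} ns per load")
```

What matters is where the per-load time jumps as the working set spills out of each cache level; how big that jump is from L2 to L3 versus L2 to fast DDR3 is exactly what is being argued here.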

 
^^^^^^^^
What he said.

The PhIIs have a 258 mm² die. The Trinity/Richland APUs have a 246 mm² die, of which roughly 50% is GPU logic. The Athlon 750K (with a 200 MHz head start) performs essentially the same as a PhII 955BE.

The L3 gave a nice boost to the Stars cores, but there was a point of diminishing returns (Intel hit the same wall with the early 'Core' processors -- Tom's did an article on it IIRC). The hexacore Thubans (1MB L3 per core) ran just dandy against the Denebs (1.5MB L3 per core).

AMD is doing the right thing in re-engineering the arch and focusing on the 'SIMD Engine Array' instead of L3 cache, while expanding logic for new instructions.

 
$25 E8400 is better though. 😛


Agreed, and the Xeon E3110, a Core 2 E8400 clone I've seen for as low as $16, is an even better deal. I still have my old E8400; I paid $40 for it three years ago when I built my first gaming system. Seems like yesterday I was fooling with E8400s and Nvidia GTS 250 cards and was amazed at how much better they were than the media cards like the GT 240 and 9500 GT I had before I built that system.
 
Anyone know what the deal is with those Galaxy GT 640s? They were dirt cheap about seven months ago; you could pick one up on eBay or Amazon for about $42 shipped, and now they're back up to $65 and higher. I'm actually using one of those in one of my systems right now as a backup until Newegg sends me a functioning GTX 650 Ti that I can put in my bedroom system.
 



Yeah, it would be nice if AMD still had its own fab facilities instead of having to rely on GLOFLO and TSMC.
 

It took Intel ten years to get from 32nm to 22nm, and AMD still hasn't made the change. If they can make a 28nm GPU, they can make a 28nm CPU.
 


AMD does not have a foundry, so they have no control over that. They contract with GLOFLO and TSMC, and they switch to the new process as soon as yields become profitable and worthwhile.
 

Ten years ago, Intel was on 90nm with the P4 Prescott introduction. Sandy Bridge was 32nm, Ivy/Haswell on 22nm, Broadwell/Skylake on 14nm.

Intel goes through one process generation roughly every other year, maybe 2.5 years for 14nm.
 


Actually, it is a very bad example. If the game is poorly optimized it will run poorly on any hardware, no matter what it is. Like I said, I bet you won't be able to tell the difference if you played the game on an FX-8350 or on an i5-3550 like the one they used for benching. What is irrelevant is the minimal performance difference in these 'relatively' unoptimized games. What is relevant is that better-optimized games show an advantage when running on a larger number of cores/threads, and that is not theoretical, since it is evident with a game like Crysis 3. See how a SB i3 chip (which was recommended as a better gaming chip than even the FX-8350 on some websites when the FX chip first came out) does in Crysis 3, for example.
 


Is that in gaming benchmarks or synthetic benchmarks?
It looks to me like the A10-5800K is at the bottom of these charts, and that is a 400 MHz faster chip, btw:

http://techreport.com/r.x/a10-6800k/c3-fps.gif
http://techreport.com/r.x/a10-6800k/c3-99th.gif
http://techreport.com/r.x/a10-6800k/fc3-fps.gif
http://techreport.com/r.x/a10-6800k/fc3-99th.gif
http://techreport.com/r.x/a10-6800k/tr-fps.gif
http://techreport.com/r.x/a10-6800k/tr-99th.gif
http://techreport.com/r.x/a10-6800k/metro-fps.gif




 

That's not true. You can clearly see performance is different on different hardware. The poor optimization includes things like limited multithreading, which only serves to make single-threaded CPU performance that much more important.

As for Crysis 3, the gap between an FX-8350 and a Core i3 is only a few FPS, so by your arbitrary rules not significant.
 


Pretty sloppy benches there, with noob settings galore 😛 I slay the highest-clocked FX-4300 with the same GPU using my lower-clocked Athlon 760K in every bench over at 3DMark. Perhaps the L3 cache is slower than my RAM, holding them back 😀

 


Why are they sloppy benches? Can you back up your claims with some benchmarks?
 


Actually, the gap is pretty small and you are exaggerating it; it is not like the i5 is leading by +20 fps or anything, and with DirectX 12, multi-threaded performance will matter much more. As for Crysis 3, I can post a lot of benchmarks where the i3 gets slaughtered, but even if you take a better look at the benchmarks I have posted, you will notice that except for the A-series APUs, the i3 has the highest 99th-percentile frame time, and that is a much better indicator of the smoothness of a gaming experience than FPS.
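To make the 99th-percentile point concrete, here is a quick sketch (the frame-time traces are made up, not pulled from any of the linked charts) of how two runs with practically the same average FPS can have very different 99th-percentile frame times:

```python
# Two hypothetical frame-time traces (milliseconds) with similar averages:
# one smooth, one with a few long frames (stutter).
smooth  = [16.7] * 100
stutter = [15.5] * 97 + [60.0] * 3

def avg_fps(frame_times_ms):
    return 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))

def p99_frame_time(frame_times_ms):
    ordered = sorted(frame_times_ms)
    return ordered[int(round(0.99 * (len(ordered) - 1)))]

for name, trace in (("smooth", smooth), ("stutter", stutter)):
    print(f"{name}: {avg_fps(trace):.1f} avg fps, "
          f"{p99_frame_time(trace):.1f} ms 99th-percentile frame time")
# smooth : 59.9 avg fps, 16.7 ms
# stutter: 59.4 avg fps, 60.0 ms  -- same average FPS, far worse smoothness
```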
 


Because they didn't even set up the test rigs optimally, for one. And yes, I always back up my statements with MY OWN benches. I keep extensive logs for every test build, but let's look at the Futuremark database since these are similar to games :)

So you claim that an FX-4300 or whatever should beat an Athlon 760K because of the L3 cache. I'm telling you no, it won't, because the L3 cache has piss-poor latency, which doesn't make it any faster than some good, fine-tuned DDR3.

I love the 3dmark database because it's so easy to search. I think I like it better even than HWBOT. This is a submission I made back in January with Athlon 760K at 4500MHz, NB at 2600MHz, RAM at 2400MT/s, and HD 7850 at 1200/1500.

Looks like it's up against a similarly clocked FX-4300, with unknown RAM clocks and lower GPU clocks. It's OK; we are really only interested in the CPU part of the test.

http://www.3dmark.com/compare/fs/1458979/fs/1930096

Physics Score: boom, the Athlon wins, 5143.0 to 4996.0. OK, so your theory didn't hold true there. Granted, I had a 32 MHz advantage, but it is plain obvious that the L3 cache didn't make a difference.

I can go down the line with all the 3DMarks if you like; I've benched them all at one point or another to pad my HWBOT points.

Or if you can find a benchmark with any FX-4300/4350 or whatever + HD 7850/R7 265, I'll run it and compare, or you can just browse my HWBOT page or whatever.

It's probably unfair, because I've benched so many AMD chips that I already know what's going to win ^^
 
I asked you whether you have any actual gaming benchmarks to support your claim; I wasn't asking about synthetic benchmarks. What I understood from this: "You got it right. I've benched so many AMD chips, and I can tell you that the FX-6300 and FX-8300 only pull ahead of the 750K/760K chips if the application is so well threaded that it can fully utilize the extra cores, at the same overclock speeds. The architecture limits the usefulness of the L3 cache such that it has much higher latency than it did on the K10/K10.5 Phenoms. In fact, if you use AIDA or Sandra, you can measure this yourself: you'll see that an A-series chip is just as fast without L3 cache provided it has fast RAM. On top of that, all Piledriver cores have a fairly large L2 cache. Tom's needs to get better at testing and stop making erroneous statements like "being held back due to no L3 cache". It just isn't true."
is that even an FX-6xxx or FX-8xxx chip isn't any better than the 750K and that the FX-6300 doesn't deserve a place on that recommendation list. You say the L3 cache is worthless, but I have seen even an FX-4170 Bulldozer chip pulling ahead of the 750K in ACTUAL gaming benchmarks. And what did you mean by setting up testing rigs optimally? If the benchmark results supported your claims, would that pass as an optimal setup for you?
I guess TechSpot didn't set up their testing rigs optimally either:
http://static.techspot.com/articles-info/642/bench/CPU_03.png
In Metro: Last Light, even the FX-4100 is beating the A10-5700 with an 11 fps lead, and that is with its piss-poor-latency L3 cache:
http://static.techspot.com/articles-info/670/bench/CPU_01.png
Far Cry 3:
http://static.techspot.com/articles-info/615/bench/CPU_01.png
Another example of the A10-5700 getting slaughtered by the FX-4100, with a 19 fps difference in Splinter Cell: Blacklist:
http://static.techspot.com/articles-info/706/bench/CPU_001.png

Edit: And by the way, about the synthetic benchmarks, why didn't you use RAM of the same speed on both rigs? Also, the HD 7850 in the Athlon rig has its core overclocked 170 MHz higher and its graphics memory clocked 149 MHz higher. The Physics score can actually be influenced by GPU speed and by RAM speed as well, so it is not exactly a "pure" CPU test. From what I read, the degree of GPU dependency also varies depending on the GPU used to run the test.
 

If you define that gap as pretty small, then so is the gap to the Core i3 in Crysis 3. You can't have it both ways.
 
The gap in fps in that particular benchmark may be small, but the frame latency on the i3 is much more evident. A benchmark in a more demanding sequence of the game shows the clear fps difference between the i3 and the FX-8350. If you look at the TechSpot benchmarks I just posted, you will see a 27 fps difference between the two chips, and that is with the i3 averaging about 36 fps. The gap in ESO is small not only by my definition: a 5 fps difference in average and a 3 fps difference in minimum fps at these frame rates will hardly even be noticeable, unlike the difference in Crysis 3 in its more demanding parts.
 
There isn't a big gap between the FX-8350 and Core i3-3220 in Crysis 3. The newer Core i3 CPUs would perform better, closing the gap even more.

[Attached chart: Crysis3-CPU.png]
 