Best Gaming CPUs For The Money: January 2012 (Archive)

Status
Not open for further replies.
[citation][nom]Cleeve[/nom]Read the page.[/citation]
Ah, I did, but I stopped reading before this:
Where we do see the potential for Sandy Bridge-E to drive additional performance is in processor-bound games like World of Warcraft or the multiplayer component of Battlefield 3. If you're running a three- or four-way array of graphics cards already, there's a good chance that you already own more than enough rendering muscle. An overclocked Core i7-3960X or -3930K could help the rest of your platform catch up to an insanely powerful arrangement of GPUs.

So I kind of missed it. In fact, back when the 2500K came out and you guys said there was no point getting the i7 for the above reason (or whatever you wrote back then), I HAD read this; it just sort of faded from memory over the months.

Then I looked at this again and thought: wait, if 8 threads don't help, then what's the point of 12? I never considered the rest of the platform.

On that note, I'd still like to see whether any modern game can use more than 4 cores/threads. Maybe you could try that with Crysis 3 and BioShock when they release? I'd love to see an in-depth CPU article exploring how things like LLC, core count, and instructions like AVX and SSE4 help games.

P.S. What happened to Best Configs? It's been over a month since voting was supposed to start...
 
So if I read this right:
Where we do see the potential for Sandy Bridge-E to drive additional performance is in processor-bound games like World of Warcraft or the multiplayer component of Battlefield 3. If you're running a three- or four-way array of graphics cards already, there's a good chance that you already own more than enough rendering muscle. An overclocked Core i7-3960X or -3930K could help the rest of your platform catch up to an insanely powerful arrangement of GPUs.

I would benefit from the i7 because I play WoW? I was leaning toward the i5-3570K, but I also do some work in Lightroom, so would the i7 be a better fit? I'll have plenty of GPU; I just thought the extra CPU cost wasn't worth the ROI.
 
[citation][nom]Jayhawk226[/nom]So if I read this right:I would benefit from the I7 because I play WOW? I was leaning toward the i5-3570K, but I also do some work in lightroom and the I7 would be a better fit then? I'll have plenty of GPU, I just thought that the extra CPU cost wasn't worth the ROI.[/citation]

You might have misunderstood.

It's saying that if you're running 3 or 4 graphics cards in tandem (SLI/CrossFire), a Core i7 *platform* like LGA 2011 will help bandwidth.

For a light game like WoW, I can't imagine this is the case, even at high resolutions with AA enabled.
A Core i5 is probably as high as you'd aim for good price/performance.

You don't need multiple graphics cards and crazy-high bandwidth for a silky smooth WoW experience...

 
[citation][nom]Cleeve[/nom]You might have misunderstood.It's saying that if you're running 3 or 4 graphics cards in tandem (SLI/CrossFire), a Core i7 *platform* like LGA 2011 will help bandwidth.For a light game like WoW, I can't imagine this is the case, even at high resolutions with AA enabled.A Core i5 is probably as high as you'd aim for good price/performance. You don't need multiple graphics cards and crazy-high bandwidth for a silky smooth WoW experience...[/citation]

Okay, thanks. Yeah, I was planning on one card, but one of the better ones; I'm just looking for the best-priced card in the 2nd or 3rd tier. I have a 1st-gen i5 with an HD 5770 and I play WoW at pretty high settings. I'm just spending more on this build.
 
I've come to learn in my time on Tom's that 2 powerful cores are better than 4 weaker ones, so the Pentium should still be recommended for its superior architecture compared to the Athlon.
But since Tom's points out that more and more games are taking advantage of 4 cores, this shifts some credibility toward the Athlon. There still aren't enough games out there that take full advantage of all 4 cores, though; most just utilize 2 cores and a little bit of the others.
So Tom's should have waited a few more months, or a year, until most games utilize all 4 cores to their full extent before recommending a weaker 4-core Athlon over a powerful 2-core Pentium.
Also, since I've just learnt that new games do indeed benefit from Hyper-Threading, shouldn't that fact in itself eliminate all competing AMD CPUs near that price point? They'd now be rendered irrelevant by this newfound ability of the i3 to benefit from Hyper-Threading that wasn't present before, no?
 
[citation][nom]mohit9206[/nom]Also, since I've just learnt that new games do indeed benefit from Hyper-Threading, shouldn't that fact in itself eliminate all competing AMD CPUs near that price point? They'd now be rendered irrelevant by this newfound ability of the i3 to benefit from Hyper-Threading that wasn't present before, no?[/citation]

Hyperthreading has helped the Core i3 in games for a couple years at the very least.
There are lots of benchmarks to prove it; this is not new. That's why the Core i3 has been recommended for gaming duty for a long time.

As far as the i3 eliminating competing CPUs near the price point, isn't that the case? You should notice the i3-3220 gets the sole recommendation at $130. The FX-4300 gets an honorable mention for coming close and having an unlocked multiplier for overclocking duty, something the i3 does not possess.
 
[citation][nom]fourzeronine[/nom]I'm really dumb. its 4 cores. windows lists as 2 cores 4 threads so that is properly scheduled u nubs.[/citation]
The only thing that separates a Bulldozer module from any other core is that the module has two integer clusters. The rest of the module looks like a normal core: 1 branch predictor, 1 decoder, 1 dispatcher, 1 FPU, 1 L2 cache. While it performs like 2 cores during pure integer arithmetic (though the shared resources will limit that performance too), anything else the module does will perform like 1 core.
 
[citation][nom]Jayhawk226[/nom]So if I read this right:I would benefit from the I7 because I play WOW? I was leaning toward the i5-3570K, but I also do some work in lightroom and the I7 would be a better fit then? I'll have plenty of GPU, I just thought that the extra CPU cost wasn't worth the ROI.[/citation]
In WoW, the i7 would be better only in situations that were really CPU heavy, such as raids and PvP. The i5 will still perform admirably, and only show any difference in instances where there is tons of stuff (large PvP and raids).
 
OK, AMD gains an EOL recommendation on price point $ and cores.
Bulldozer/Piledriver still get a pass, though the FX-6300 and FX-8350 (8320) are becoming popular.
The A8-3870K functions in the Athlon II 640 envelope.
Trinity gets a pass on the Piledriver architecture?
I still think the Pentiums offer an upgrade path, and can do well.
 
[citation][nom]Fokissed[/nom]In WoW, the i7 would be better only in situations that were really CPU heavy, such as raids and PvP. The i5 will still perform admirably, and only show any difference in instances where there is tons of stuff (large PvP and raids).[/citation]

I have been noticing a lot of lag in raids and world bosses. I was chalking that up to a combination of my old i5 and my older video card. Since I am trying to put a little more into this build, it might be a good idea to put the i7 in, then. My laptop has an i7 and a lesser card than my desktop, and it seems to do about the same, so maybe that is a CPU limitation.
 


No, it has two physical BD modules. It's subjective whether or not what's inside each of those modules can fully be considered two CPU cores.
We all know the BD architecture does not fit the same definition of what we've considered a CPU core for years. What the OS sees has no impact on reality.
 
I've become accustomed over the last year or so to seeing AMD CPUs at the bottom of performance charts in game benchmark articles. I am a little skeptical that somehow AMD has suddenly become a good choice, although I will withhold judgement at least until that new sub-$200 article is out; I've long been a proponent of cost awareness.
A couple of recent $500 SBM builds have done wonders with Pentiums and top-tier video cards, so it also seems that any claim that a Pentium is suddenly a poor choice needs also to be taken with a grain of salt.
Hopefully, the new article includes average frame-times as well as just FPS, so it might tell us if weaker CPUs experience more jitter, even if FPS appears to remain high.
 

Frame time is simply the inverse of instantaneous FPS, so minimum FPS measures the exact same thing. Average frame time is no better than average FPS at revealing phenomena like microstuttering.
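As a toy illustration of that equivalence (a Python sketch with made-up frame times, not benchmark data): a single 100 ms hitch is visible in min FPS, which carries the same information as max frame time, while both averages smooth it over.

```python
# Toy frame-time trace (ms): 59 smooth ~60 FPS frames plus one 100 ms hitch.
# frame_time_ms = 1000 / instantaneous_fps, so min FPS and max frame time
# are the same measurement expressed two ways.
frame_times_ms = [16.7] * 59 + [100.0]

avg_frame_time = sum(frame_times_ms) / len(frame_times_ms)
avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)  # frames / seconds
min_fps = 1000.0 / max(frame_times_ms)

print(f"avg frame time: {avg_frame_time:.1f} ms")  # ~18.1 ms -> looks fine
print(f"avg FPS:        {avg_fps:.1f}")            # ~55.3 -> also looks fine
print(f"min FPS:        {min_fps:.1f}")            # 10.0 -> exposes the hitch
```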
 


It's funny that you should use that choice of words. What we actually said was:

"We're not suggesting that dual-core Pentiums are suddenly bad gaming processors. In most games, they're still quite potent. But with new titles like Far Cry 3 utilizing additional threads at higher detail settings and CPU-dependent FXAA becoming more popular, the recommendations need to be shifted. Our gaming CPU hierarchy chart is modified this month, too, accommodating our evolving perspective. This may get tweaked in the months to come, though."
 


Our upcoming review includes average, 75th, and 95th percentile frame times (in addition to AVG/MIN FPS and frame rate over time).

It's a new approach we're trying out to see if we can get a meaningful, easy-to-read chart that gives a good impression of microstuttering.
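For anyone curious why percentiles help here, a minimal Python sketch (synthetic data and a nearest-rank percentile; this is just an illustration of the general idea, not Tom's actual methodology):

```python
import random

random.seed(1)
# Synthetic trace: 94 smooth ~17 ms frames plus 6 microstutter spikes at 45 ms.
frame_times = [16.7 + random.random() for _ in range(94)] + [45.0] * 6
random.shuffle(frame_times)

def percentile(samples, p):
    """Nearest-rank percentile of a list of frame times (ms)."""
    ordered = sorted(samples)
    k = max(0, round(p / 100 * len(ordered)) - 1)
    return ordered[k]

avg = sum(frame_times) / len(frame_times)
print(f"avg:  {avg:.1f} ms")                          # looks smooth
print(f"75th: {percentile(frame_times, 75):.1f} ms")  # still looks smooth
print(f"95th: {percentile(frame_times, 95):.1f} ms")  # 45.0 -> exposes the spikes
```

The average and 75th percentile barely register the hitches, while the 95th percentile lands squarely on them, which is the kind of thing an easy-to-read microstuttering chart needs to show.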
 


Unfortunately, no. The review I'm talking about is a CPU comparison, not a GPU comparison.

In addition, SLI/Crossfire frame times do not report correctly using standard tools so we're looking into a way to measure per-frame latency with them.
 


I will wager a shiny new US nickel ...

Anandtech - The Trinity GPU


Tom's - AMD Radeon HD 6670 And 6570: Turkeys Or Turkish Delights?


The Richland APU pairs 32nm Piledriver CPU cores with 40nm *SIMD Engine* Radeon 2.0 cores (read that to mean: 'not 28nm GCN cores').

It's actually a good thing. AMD still has some nice headroom with 'Radeon 2.0 cores', with big gains possible in overall performance and efficiency. It would help them refine some HSA logic before the Steamroller shrink, too.





 
[citation][nom]Fokissed[/nom]The only thing that separates a Bulldozer module from any other core is that the module has two integer clusters. The rest of the module looks like a normal core: 1 branch predictor, 1 decoder, 1 dispatcher, 1 FPU, 1 L2 cache. While it performs like 2 cores during pure integer arithmetic (though the shared resources will limit that performance too), anything else the module does will perform like 1 core.[/citation]

There are only two types of CPU math AFAIK, integer and floating point, and both perform as they should; some of your info on the architecture is incorrect. That one FPU, just FYI, can execute two 128-bit instructions or one 256-bit instruction, whereas the previous generation (Phenom II/Athlon II/Sempron) had one 128-bit unit and lacked 256-bit support completely. Trinity-versus-Llano benchmarks, such as those Tom's has provided in previous articles, prove that Bulldozer/Piledriver does in fact perform like two cores per module in both types of workloads.

It is only being recognized in Windows 8, AFAIK, as a dual core with four threads because MS is piggybacking optimization for the throughput bottleneck of Bulldozer and Piledriver (which is to be fixed in the next version, Steamroller) on Hyper-Threading support. So, no, Bulldozer and Piledriver don't perform like single cores in any circumstance except in software that doesn't support enough threads, in which case they'd work pretty much the same even on any other multi-core CPU with a similarly higher-than-utilized core count for that workload.
 


Turks is VLIW5. Trinity uses VLIW4. They're not even the same architecture. I know for a fact that Trinity does not use a Turks GPU; it uses a die-shrunk, cut-down version of Cayman, as proven by Anand and many others. I also know that even Llano used a 32nm GPU, a die-shrunk old VLIW5 implementation similar to a cut-down Turks. Richland may not use 28nm GCN cores, but instead further-modified VLIW4 or even VLIW5 cores again; it is assuredly not 40nm, though. That would be a big step backwards from Trinity and Llano in process technology.

For example, Llano uses the Sumo GPU and it is 32nm. Look it up. That Anand article from which you got your slide is the very same article that states that Trinity uses a die-shrunk (to 32nm) VLIW4 architecture based on the Cayman GPU. Tom's and every other modern review of Trinity that I've read or even heard of says this.

Also, as I recall, Radeon 2.0 (in reference to AMD second generation DX11 architecture) specifically refers to VLIW4, not VLIW5. Turks is a VLIW5 GPU and not even the version used in Radeon 6800, but the version used in Radeon 5000 and 6700 and all lower end Radeon 6600/6500/6400 cards' GPUs.

Also, if the red squares for the SIMD units in the Turks slide are the units in each stream processing core, then there's a problem because VLIW5 has five units per core, not four. That's why it is called VLIW5 IIRC given that VLIW4 has four units per stream processing core.

Quoted from your Anand link:
Trinity's GPU is probably the most well understood part of the chip, seeing as how it's basically a cut down Cayman from AMD's Northern Islands family. The VLIW4 design features 6 SIMD engines, each with 16 VLIW4 arrays, for a total of up to 384 cores. The A10 SKUs get 384 cores while the lower end A8 and A6 parts get 256 and 192, respectively. FP64 is supported but at 1/16 the FP32 rate.

As AMD never released any low-end Northern Islands VLIW4 parts, Trinity's GPU is a bit unique. It technically has fewer cores than Llano's GPU, but as we saw with AMD's transition from VLIW5 to VLIW4, the loss didn't really impact performance but rather drove up efficiency. Remember that most of the time that 5th unit in AMD's VLIW5 architectures went unused.

The design features 24 texture units and 8 ROPs, in line with what you'd expect from what's effectively 1/4 of a Cayman/Radeon HD 6970. Clock speeds are obviously lower than a full blown Cayman, but not by a ton. Trinity's GPU runs at a normal maximum of 497MHz and can turbo up as high as 686MHz.
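The shader-count arithmetic in that quote checks out. As a quick sanity check (note: the 4- and 3-engine figures for the A8 and A6 are my inference from the quoted core counts, not something the article states outright):

```python
# VLIW4 shader count = SIMD engines x VLIW4 arrays per engine x 4 lanes per array
def vliw4_cores(simd_engines: int, arrays_per_engine: int = 16, lanes: int = 4) -> int:
    return simd_engines * arrays_per_engine * lanes

print(vliw4_cores(6))  # A10: 384
print(vliw4_cores(4))  # A8:  256 (assuming 4 engines)
print(vliw4_cores(3))  # A6:  192 (assuming 3 engines)
```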

Also, haven't we had this discussion before?

EDIT: I just remembered, Sumo, the GPU of Llano, was based on the Redwood core from some of the low end Radeon 5500 cards. I could check to make sure, but I think that it was basically a die-shrunken Radeon 5550's GPU.
 
[citation][nom]Sakkura[/nom]Woah, what? I can't wait to see that benchmark data now! 'Hyperthreading = useless for gaming' has been the mantra for years and years, this could really shake things up.[/citation]

Things have been like that for a while. For example, BF3 supports up to 8 threads. If I have a 3770K, all 8 threads will take some load when needed. This doesn't affect your FPS much at all yet (on average, you get 1 FPS more than a 3570K). But for games that are single-threaded, it can affect performance. If you're running a single-threaded game that's putting heavy load on core 1, thread 1, and another program puts medium load on core 1, thread 2, it could bottleneck your graphics card for as long as that program continues medium-heavy load on that thread. 99% of the time it doesn't bottleneck at all, but sometimes there are load spikes that make you stutter for a second, lowering your average FPS.
 
Funny thing is that Microcenter, at least here in MD, charges between $169.99 and $189.99 every day for the i5-3570K, which changes the entire dynamic. I picked mine up for $169.99, though in hindsight I should have sprung for the i7-3770K at $229.99...
 
[citation][nom]Cleeve[/nom]Tigerdirect is having a short-term sale, and Microcenter has no on-line option so you have to live close to one. Regardless, it doesn't have an impact on the recommendations as the Core i5 is a much better performer in games for the $.You need to do a better job of criticizing. Try reading the first page, it'll help.[/citation]
What about Newegg??? online sales same as Tigerdirect...
 
funny thing is that Microcenter at least here in MD, charges between 169.99-189.99 everyday for the i5 3570K which changes the entire dynamic. I picked mine up for 169.99 though in hindsight I should have sprung for the i7 3770K for 229.99...

The only problem is that MC's deals like that are in-store only, and most people (even within the continental USA) do not have good access to such a store. I'd spend about as much in gas as I'd save on the CPU, granted the discounted mobo deals might make up for that (but definitely not for the far-from-short trip).
 
[citation][nom]wolf2q[/nom]What about Newegg??? online sales same as Tigerdirect...[/citation]

What about it? Sales come and go on a daily basis. We're looking at average online pricing here, not short term promotions.

Read the article before commenting on it, please. In this case, the last two paragraphs on the first page answer your concerns nicely. 😉
 
Picked up an FX-8320 from Microcenter the other day for only 160 bucks. I'm having a blast trying to overclock it with my Cooler Master Hyper 212 EVO. As a bonus feature, when I'm testing stability I have to leave the side of my case cracked open to keep the CPU at reasonable temps, so my room gets nice and warm!
 