Picking A Sub-$200 Gaming CPU: FX, An APU, Or A Pentium?



They don't do exactly what I suggested unless they actually disable the unused core in each module. The point is to leave the second core in each module completely idle and then overclock the processor a little more to compensate, hopefully increasing single-threaded performance. Besides all that, the hotfixes are reported to be less effective than the scheduler fix built into Windows 8, so even if Windows 8 addresses this properly, the Windows 7 hotfixes still wouldn't be a good substitute for my proposed experiment.

I read that article, and no part of it said these patches do that. They may do something similar, but similar isn't the same. It was a good article, but it did NOT cover my suggestion, no matter how much you seem to think it did. I'm not trying to be rude or offensive; I just want to see this experiment brought to light, and since I can't do it myself, I brought it up here. I won't be an ass about it if you don't want to try it, but I would have expected you to know the difference between disabling the cores and what those hotfixes and the Windows 8 scheduling fix do, based on the info in the hotfix article. Well, not so much the Windows 8 fix, but it should be at least somewhat better than the hotfixes, if not by much.

I'm asking you to please give it a try, nothing more.
 
I take benchmarks with a grain of salt... they don't truly reproduce real-world environments. To say they do... is a sign of stupidity. There are millions upon millions of different scenarios, and it's impossible to benchmark the real world because it's just too expensive... one of the reasons PC Bench stopped benchmarking was the expense; it was handed over to someone else. Point is... benchmarks are a good tool to get ideas from, but they are in no way tools to live by. I notice a lot of people in the forums treat benchmarks as gods, yet don't take into account the millions of different environments going on in the "real" world. In professional audio the relationship is very similar; I mix on multimillion-dollar audio systems like V-DOSC, K1, VerTec, and Meyer, to name a few. I can make each system sound just as good as the other in any environment you put me in. B.B. King said it best... it's not the gear... it's the person. Granted, it's different in the computer world when it comes to the gear and the person... but those are wise words to live by in anything you do. I run Mac OS boxes, AMD boxes, and Intel boxes, and each does its job well for what it's intended for. Bottom line... you buy your gear for yourself, to make YOU happy. Intel has a great hold on the gaming market... nothing wrong with that. AMD definitely needs to get back in the game if they are trying to target this market. With that said... for all you fanboys... my TRS-80 still does my dishes and takes the trash out... something neither Intel nor AMD can do! lol
 
Sorry, AMD fanboys, Intel has really left AMD in the dust this time. I got talked into the six-core AMD based on price, then went out and got the i7-2600 (not the K), and I can really feel the difference (subjectively of course, not with real-world testing) with the same graphics card and RAID 0 array on both platforms. Not to mention the amount of heat that AMD proc spews out, barely staying below throttling temperatures with the crappy stock heatsink and fan. It sounded like my case was about to take off during gaming as the fan on that stock heatsink went to "ludicrous speed"... AMD will really need to step up its game with the next CPU generation, or we will be talking about when AMD used to compete.
 
After thinking about it for a while, since Llanos are missing L3 cache, maybe a good matchup would be Llano vs. Core 2 (two- and four-core models like a Q6600, overclocked to match clock for clock). The best testing I could do would be:

Core 2 Quad Q6600 at 3.0, 3.2, and 3.6GHz with 8GB of DDR2 RAM at 1066 (5-5-5-15, 2T)

vs

A8-3870K at 3.0, 3.2, and 3.6GHz with 8GB of DDR3 RAM at 1600 (7-8-7-24, XX)

with both paired with a GTX 260 and GTX 570. I have a funny feeling the Q6600 might pull ahead despite the slower memory. I'm kinda lazy though.
 


Clock for clock, I think Core 2 is in line with Phenom II, or very close. You need to realize that it has a huge L2 cache to make up for its lack of L3; the Core 2 Quads can have up to 12MB of L2. Core 2 had two cores per die, and the Quads had two dual-core dies; each die had up to 6MB of L2 shared between its two cores. I don't think the cache was shared between the dies, and I'm pretty sure the two dies used the FSB to communicate with each other.
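If anyone wants to sanity-check the shared-L2 layout on a live system, Linux exposes cache topology through sysfs. Here's a quick Python sketch (the sysfs paths follow the standard layout, but kernels vary, so treat this as illustrative rather than definitive):

import glob

# List which logical CPUs share each L2 cache (Linux only).
# On a Core 2 Quad you'd expect two pairs, one per die; on a chip
# with per-core L2, each entry would name a single CPU.
for cache_dir in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cache/index*")):
    try:
        with open(cache_dir + "/level") as f:
            if f.read().strip() != "2":
                continue
        with open(cache_dir + "/shared_cpu_list") as f:
            cpu = cache_dir.split("/")[5]  # e.g. "cpu0"
            print(cpu, "shares its L2 with CPUs:", f.read().strip())
    except FileNotFoundError:
        pass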

Also, Core 2 could use DDR3. There are DDR3 LGA 775 motherboards, and I've talked with another forum-goer who had a Core 2 Quad system with DDR3 memory. I don't think they support frequencies above 1066MHz without overclocking.

I think that Core 2 would beat Llano/Athlon II handily regardless of whether it had DDR2 or DDR3, but it would be a decent test.
 
Why oh why are they benching at 1920x1080?! That is soooo stupid. In many cases performance will be GPU-limited. They should be benching at much lower resolutions to show the difference between CPUs.

What a worthless article.
 


I'll rephrase my question: can multiple GPUs spread the CPU workload across multiple cores better than a setup with fewer GPUs but the same total GPU performance? Would a GTX 295 spread its CPU work around better than a GTX 570, or would a Radeon 4870 X2 spread its CPU work across more cores better than a Radeon 6970?

Basically, do multiple GPUs use multiple CPU cores better than a single GPU can, assuming the single GPU has roughly the same performance as the multiple GPU setup?
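One crude way to test that empirically: run the game and sample per-core load in the background. A minimal sketch using the third-party psutil package (the one-second interval and 30-sample count are arbitrary choices, not anything from this thread):

import psutil  # third-party: pip install psutil

# Print per-core CPU utilization once a second while the game runs,
# to see whether the driver's work is spread across cores or piled
# onto one or two of them.
for _ in range(30):
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    print(" ".join(f"{load:5.1f}" for load in per_core))

Run it once with the single-GPU card and once with the dual-GPU card at matched settings; a flatter spread across cores in the second run would suggest the answer is yes.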
 


I don't think any current game is too GPU-limited at 1080p with a Radeon 7970, so it should be unnecessary to lower the resolution. This way the frame rates stay realistic; I think this method is better than using low resolutions as long as the video card is fast enough, and you can't beat the 7970 right now without going multi-GPU. Everything here should be CPU-limited because of how fast the 7970 is. I admit I skipped most of the benchmarks, but I'm pretty sure the 7970 can handle almost everything, if not everything, at 1080p.
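The logic behind that is easy to turn into a rule of thumb: rerun a benchmark at a much lower resolution, and if the frame rate barely moves, the CPU was the limiter all along. A tiny Python sketch of that heuristic (the 5% tolerance and the sample numbers are my own assumptions, not figures from the article):

# Classify a benchmark result as CPU- or GPU-bound by comparing the
# frame rate at the test resolution against a low-resolution rerun.
def likely_bottleneck(fps_high_res, fps_low_res, tolerance=0.05):
    if fps_low_res <= fps_high_res * (1 + tolerance):
        return "CPU-bound"  # dropping resolution barely helped
    return "GPU-bound"

print(likely_bottleneck(fps_high_res=62.0, fps_low_res=63.5))  # CPU-bound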
 
It would be nice to see a multithreaded test on current CPUs. In fact, it could settle a lot of questions about TRUE performance. Not only in games... ¬¬
 
Gonna go outside the scope of this article for Llano's sake, since it's in need of some loving.

For a sub-$100 micro-ATX board, you can get five SATA III ports (RAID 0, 1, 10), four DIMM slots supporting up to 32GB of RAM, PCIe x16, x4, and x1 slots, as well as PCI, two USB 3.0 ports in the back, a USB 3.0 header for the front, and other misc goodies... pretty nice for the price and the micro-ATX package, so it's definitely not all bad, even more so if you steer clear of discrete graphics.
 
[citation][nom]reasonablevoice[/nom]Why oh why are they benching at 1920x1080?! That is soooo stupid. In many cases performance will be GPU-limited. They should be benching at much lower resolutions to show the difference between CPUs. What a worthless article.[/citation]

Then why do only two titles [the two DX11 ones] show any signs of a GPU bottleneck?

*crickets*
 
First of all, this is exactly the kind of lead article I hope to see on Tom's on Mondays; makes the weekend wait worthwhile.
Second, with even a few H61 boards now offering SATA 6Gb/s and USB 3.0, AMD cannot even claim the mainstream chipset edge anymore. While I'd like to see some productivity benchmarks too, it looks like a rational individual has no more reason to favor AMD, and even less reason to buy AMD (to upgrade an existing system may be about the only one).
BUT...although I'm a little perturbed by the Skyrim numbers, my 970BE didn't suddenly slow to an unusable crawl. It is still handling my needs, although the incentive is there to pass it down to my wife as an upgrade and build SB; if Piledriver sucks, that's likely what I'll do.
Third, I'm not sure there was any reason to go over $150 on these CPUs; after all once an i5 is included, there is no possible doubt about the outcome.
Finally, there was one class of processor that REALLY needed to be in here, and that's the "T" suffixed 35W versions. Where do they fit in? Are they as anemic as their reduced clock speeds imply, or does an i3-2120T perform at least as well as a G860?
 
[citation][nom]jtt283[/nom]First of all, this is exactly the kind of lead article I hope to see on Tom's on Mondays; makes the weekend wait worthwhile.Second, with even a few H61 boards now offering SATA 6Gb/s and USB 3.0, AMD cannot even claim the mainstream chipset edge anymore. While I'd like to see some productivity benchmarks too, it looks like a rational individual has no more reason to favor AMD, and even less reason to buy AMD (to upgrade an existing system may be about the only one).BUT...although I'm a little perturbed by the Skyrim numbers, my 970BE didn't suddenly slow to an unusable crawl. It is still handling my needs, although the incentive is there to pass it down to my wife as an upgrade and build SB; if Piledriver sucks, that's likely what I'll do.Third, I'm not sure there was any reason to go over $150 on these CPUs; after all once an i5 is included, there is no possible doubt about the outcome.Finally, there was one class of processor that REALLY needed to be in here, and that's the "T" suffixed 35W versions. Where do they fit in? Are they as anemic as their reduced clock speeds imply, or does an i3-2120T perform at least as well as a G860?[/citation]

I loved your comment. You suggested AMD might no longer have a niche where they are the favorite. You also mentioned the "T" chips which I totally forgot about. That's the groundwork right there for a new, and interesting, article.

The question posed being... who can build the cheapest, lowest-power solution that includes one boot drive, four storage drives, and light gaming capability, measured at the wall? And which scores best in streaming HD content, storage performance benchmarks, etc.?
 


Why waste time on a worthless comparison? We know that Llano would win solely because of its GPU. An Intel Pentium can be paired with a Radeon 6670 for less money than the A8-3870K, and the Pentium system would perform much better.
 


That would be problematic for Intel, because they would then have a monopoly, and that's illegal. Intel would probably pay AMD to stay in business to avoid that situation, unless another company took over for AMD.
 


Um... what is the most common gaming resolution, Alex?



What use is data from resolutions nobody uses?



To each his own... I was thinking the same about your comment. 😀
 
[citation][nom]jezus53[/nom]I completely agree. I've been wanting to build a somewhat light gaming machine based on these APUs but I haven't really found anyone that tests them all as they are. Instead they throw in a discrete card and scream Intel is better. Though that is true with discrete graphics, I want to know how it does with the GPU on die because I know the APUs will destroy the Intel CPUs when it comes to all around performance based on integrated graphics. But I do still like this article, it was very well done.[/citation]

I believe PCPer.com did a comparison between Llano and the i3 with only their IGPs. Llano looked playable in all the games tested.

 


The 10% efficiency number you quoted was also quoted by AMD in relation to the scheduler patch, and what that patch does (in theory) is prioritize one thread per BD module instead of one thread per core. That's the gist of AMD's beef with the way Windows 7 handled Bulldozer by default, and that's why it was patched. Before the patch, Windows 7 treated every core the same, which meant two threads could end up running in the same BD module; that was inefficient because they had to share the module's resources. It's better from a performance standpoint to give each BD module one thread until every module has at least one, then add second threads to modules as needed.

It'd be nice to experiment with shutting off half of each BD module, but there's no way I know of that'd let us try that; unless I'm mistaken, that functionality is not exposed in any BIOS. I'm not sure what kind of miracles you expect from a couple hundred MHz of extra overclock traded for disabling half of each module, but with the efficiency issue already addressed by the scheduler patch, I'm not sure it's reasonable to expect an advantage.

Especially since AMD claims fine-grained power gating for BD cores in the first place. In theory, what you propose (half of each module shut off and its TDP headroom applied to the rest of the chip for higher turbo clocks) should already be happening, according to AMD.
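For what it's worth, the one-thread-per-module idea can at least be approximated from user space with CPU affinity, even without a BIOS switch. Here's a minimal Linux sketch in Python; the pairing of logical CPUs into modules is an assumption you'd want to verify against /sys/devices/system/cpu/cpu*/topology first:

import os

# Assumed mapping (verify on your own chip): logical CPUs (0,1), (2,3),
# (4,5), (6,7) are the paired cores of the four Bulldozer modules.
MODULE_PAIRS = [(0, 1), (2, 3), (4, 5), (6, 7)]

# Keep only the first core of each module so every second core sits
# idle, approximating the "half a module" experiment in software.
one_per_module = {first for first, _second in MODULE_PAIRS}

os.sched_setaffinity(0, one_per_module)  # 0 = the current process
print("Restricted to CPUs:", sorted(os.sched_getaffinity(0)))

On Windows 7, something like "start /affinity 0x55 game.exe" should be the rough equivalent, assuming the same core numbering. Note that affinity only keeps the sibling cores idle; it won't power-gate them or free up TDP headroom for turbo the way a real BIOS option might, so it can only test half of the experiment.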


 
Entertaining article at best. The 8120 is the slowest base-clocked CPU here. As for the article title... why put the i5-2500K in there and overclock it to 4.0GHz for every test? I don't see the point, other than to show this is strictly another Intel-biased article.

AMD FX CPUs support 1866 memory, so my question is: why is EVERY ARTICLE done with 1333? Using DDR3-2000 memory chips and clocking them at 1333 proves what? You're testing one platform against the other; if a platform supports faster memory at STOCK, so be it, use it.

Also, overclocking... why only push the 8120 to 4.2GHz while putting the rest at 4.5?
The article proves one thing: if you're not overclocking, don't bother with the slowest stock-speed CPU.
 
Noob2222, I'm not going to downvote that, because I think you have a point; unfortunately, I suspect it is a very small one. Based on other articles on memory timings, I'd be surprised if there were even a 5% difference, which wouldn't change the conclusions of this article. Unfortunately, it looks like you're grasping at straws. I lurk down at the extreme budget end, which means I have built a lot of AMD rigs over the past few years, but I'm not sure I see doing another one. I think that's a damn shame, and even if they don't mean to compete at the high end, I really hope AMD is still willing to compete in the midrange and at the low end.
 
This is probably the least useful set of benchmarks ever on Tom's Hardware. It is of no use to anyone building a budget PC, which seems to be the point of using sub-$200 processors.

The use of a $470 graphics card with these builds negates the entire point of the comparison. What would have been far more useful is a 6850 or 6870. The results could be quite different. We saw exactly that in the $500 SBM builds last year, where the i3-2100 was resoundingly spanked by a Phenom II 955.
Since the FX-4100 can game on par with, or better than, the 955, it should do the same.

As usual, criticism of the FX architecture is based on a failure to understand that architecture. AMD shot themselves in the foot by calling these processors quad-, hex-, and 8-core. They are really 2-, 3-, and 4-core parts with a small part of a traditional core duplicated so that each module (which should really be called a core) can run two equal threads. It's a hardware implementation of hyperthreading.
 