Gaming Shoot-Out: 18 CPUs And APUs Under $200, Benchmarked

I have looked through them again, and I conclude the following:

1) In Metro and other GPU-dependent titles, frame rates are identical, yet AMD on average has the better latencies.

2) In CPU-oriented games the Intel parts push higher FPS, but the latencies trade back and forth with AMD.

In short, this myth that Intel is much better is exactly that: a myth. In fact, it's so disastrously close that the people at Intel's benchmarketing HQ are slightly concerned their perverted lies are being found out. Clearly the difference exists purely at a synthetic level.
 

mayankleoboy1

Distinguished
Aug 11, 2010
2,497
0
19,810
[citation][nom]sarinaide[/nom]Oh, I am sorry, all motherboards perform exactly the same; who knows why we spend $500 on a Rampage when a cheap H61 trick will perform exactly the same. #rolleyes#[/citation]

"#rolleyes# " does not make your point valid.
There have been enough benchmarks in all review sites that show there is practically no difference between cheap-ass and 'extreme end" mobos for STOCK SPEEDS. And very specially for PCI-E and memory interface.
If you OC, then of course there is a difference. Nobody denies it.
 

No problem, I understood that from the content.
You can edit your posts, though; you can do that from the forum thread. Editing options don't show up in the article's comment section.
In the comments section, click the "Read the comments on the forums" link and go to the comment thread. There you will have editing buttons at the bottom of each of your posts, and you can edit them like regular forum posts.
 

The_Trutherizer

Distinguished
Jul 21, 2008
509
0
18,980


That's pretty cool. I never really paid attention to that link.
 

AndrewJacksonZA

Distinguished
Aug 11, 2011
596
106
19,160
I know that you're exclusively testing CPUs and not the whole ecosystem, but it struck me as I read this last page that, in real life, one would not use a budget CPU for serious gaming, nor pair a super-expensive GPU with one.

What would the results have been if you had used a little 6670 with these CPUs? Would the AMD APUs have performed better because of their ability to CrossFire with it where appropriate?

Disclaimer: I am running an Intel E6750 @ 2.66GHz and an AMD 6670 at home on my gaming PC, and I'm happily running all my games at maximum detail at the maximum resolution my monitor supports, which is 1280x1024.
 

rdlazar

Honorable
Feb 2, 2013
2
0
10,510
[citation][nom]shikamaru31789[/nom]I'm no expert, but I'd think it would perform a little better than an A10. I'd think that having the integrated GPU disabled would give the CPU a bit more oomph, but then the core clock is a bit slower than the A10's, so who knows. If AMD ever does release it, it'd need to be at a fairly low price point to be a better buy than the Phenom II X4 965 at $95. I still think the Phenom II would be a better buy regardless, because you can get an AM3+ mobo and a Phenom II now and upgrade to an 8350 later once games are optimized for 6-8 cores (which might be sooner rather than later if the rumors about the NextBox and PS4 using 8-core AMD CPUs are true).[/citation]

Well, in my country the only Phenom available is the 955BE, and the price difference between it and the 750K is about $15, so I'm not really sure which one to get.
 

ojas

Distinguished
Feb 25, 2011
2,924
0
20,810
[citation][nom]The_Trutherizer[/nom]A company with a cross licensing agreement with AMD and who uses and optimises for AMD's 64bit instructions quite enthusiastically...[/citation]
Yes, but then I guess it's up to AMD to optimize for Intel's x86-64 implementation (since there are slight variations)?

The caveat on Intel's compiler is that it may not optimize for non-Intel CPUs to the same extent: it may optimize for the common, shared parts of the ISA, while reserving its best optimizations for the implementations and extensions native to Intel's own chips.

See, an ISA can be implemented in different ways, and so can extensions like AVX, SSE, etc. Take Piledriver and Ivy Bridge: two different ways to do the same thing, based on the same concept.

So I wouldn't expect Intel to optimize for AMD's implementations.

I think it's safe to say that unless you're in the chip industry, you'll most likely end up oversimplifying things, as we really don't know as much about how a CPU works as the engineers at Intel/AMD do.
I mean, sure, a lot of people have read about the x86 ISA and the 8086 processor in college, but I highly doubt the 8086 looks anything remotely like today's chips in terms of logic design.

So yeah, Intel not optimizing for AMD's processors might not be as simple as you think. Seriously, man, someone has to sit and do that, and be paid to do that... there's a cost on Intel's part. Why would they incur it? To help AMD? WHY WOULD THEY DO THAT?

I'm not saying Intel should be allowed to bribe vendors like they did, that's anti-competitive, but making your own product perform better surely isn't? It's not like their compilers make performance any worse on non-Intel processors... nor is it necessary to even use them.

AMD should make its own compiler available that optimizes code for its CPUs/APUs, as simple as that.

Anyway:
https://en.wikipedia.org/wiki/X86-64
https://en.wikipedia.org/wiki/X86-64#Differences_between_AMD64_and_Intel_64
Although nearly identical, there are some differences between the two instruction sets in the semantics of a few seldom used machine instructions (and/or situations), which are mainly used for system programming. Compilers generally produce executables (i.e. machine code) that avoid any differences, at least for ordinary application programs. This is therefore of interest mainly to developers of compilers, operating systems and similar, which must deal with individual and special system instructions.
There's a list of differences, go through them.
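One concrete example from that list: early Intel 64 chips lacked LAHF/SAHF in 64-bit mode, so code that wants those instructions has to check a CPUID feature bit first. A minimal sketch in C (assuming GCC/Clang on x86-64; __get_cpuid is the compiler's cpuid.h helper):

[code]
#include <stdio.h>
#include <cpuid.h> /* GCC/Clang wrapper around the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Extended leaf 0x80000001: ECX bit 0 reports whether LAHF/SAHF
       work in 64-bit mode -- one of the few places where early AMD64
       and Intel 64 parts actually diverged. */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & 1))
        printf("LAHF/SAHF usable in 64-bit mode\n");
    else
        printf("LAHF/SAHF unavailable; a fallback sequence is needed\n");
    return 0;
}
[/code]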
 
[citation][nom]de5_Roy[/nom]sigh... 'everywhere else'? the only 'mistake' they made was linking the z77 up5 while they used the up7, and the same thing with the 990fx (ud7/ud5) mobos. from the specs, the ud5 looks like anything but a mid-range board (despite the price). do you think they buy the latest high-end gear every time they do a review, or do they ask vendors to participate by sending their available stuff? i didn't quite understand these bits, but it looks like you just contradicted yourself in the same post. tbh, i don't think the highest-end amd mobo woulda made any difference if the fps diff. was within margin of error.[/citation]

We're using Gigabyte's Z77X-UP7 for our LGA 1155-based platform. This is the company's flagship offering for that processor interface. The similar (but lower-priced) Z77X-UP5 earned our Recommended Buy award in Six $220-280 Z77 Express-Based Motherboards, Reviewed, so the -UP7 appears to be a great choice for our testing.

For the Socket AM3+-based system, we're using Gigabyte's 990FXA-UD5 motherboard. This product achieved the highest CPU overclock in our Five $160 To $240 990FX-Based Socket AM3+ Motherboards comparison, and distinguishes itself with true four-way SLI support.

De5_Roy, there we are: they used a top-line Z77 board, while on the AMD side they only used a mid-range 990FX board in the UD5, which is about on par with the GD80 and Fatal1ty but far from the full performance of the UD7 and Crosshair V. A motherboard is generally worth anywhere from one frame to as many as 5-6 frames, which is very significant. I have not argued against the Intel parts pushing higher FPS in some (limited) titles, but the AMD parts may have been bottlenecked by the motherboard, and the simple fact that it is not a top-line board makes the comparison easy to dispute.
 

marraco

Distinguished
Jan 6, 2007
671
0
18,990
Years ago, I asked Tomshardware to do an analysis of the lowest framerates. I was completely ignored. Then I went to Techreport and ranted about the same issue, under the same username that I use here.

Techreport understood. Then I asked for the results to be sorted from lowest to highest, and Techreport understood that too. Yet they refused to display the results as framerates and chose to display frame times instead.
I think that framerates, even if they are meaningless for single frames, are easier to understand because they are more intuitive.

After Techreport started doing its "Inside the second" reports, I went back to Tomshardware and asked again for this type of analysis, giving what Techreport was doing as an example. Tomshardware completely failed to get it and answered with some garbage about also doing "games" reviews. TH didn't understand it at all.

Now that AMD is fixing its drivers thanks to Techreport's work, suddenly all the hardware websites are copying Techreport, but like Tomshardware, most don't get it.

The problem is those moments when, after buying the best hardware, I don't get the promised experience and I perceive a lot of small frame freezes (Far Cry 2 comes to mind).

I do not care at all if "n" video cards give me 250 FPS. What I care about are those instants when I perceive small freezes. They break immersion. They are the showstopper.

The point is:
1- Yes, I choose cards based on their worst-percentile framerates, not average framerates. So I disagree with Tomshardware's conclusion.
2- No, the way Tomshardware displays the results is not as good as seeing all the individual frames sorted from lowest to highest (obviously, only the worst frames need comparing). Showing just one or two percentiles doesn't tell the complete story; the line graph does. That's what Techreport does, and for good reason. (A minimal version of this kind of analysis is sketched at the end of this post.)

3- There is something that even Techreport can't do. They use FRAPS, but FRAPS does not report what is actually sent to the monitor.
For example, if a game engine can't draw a new frame in time, it may choose to show the last frame again. That's reported as two frames, but it is felt as a single frame: a freeze. I think Far Cry 2 does that, because it reports a high number of frames on my GTX 670, with very low latencies, yet I see constant, intermittent freezes. I think the game engine itself is "cheating", but to analyze that you need specialized hardware capable of capturing the monitor output and comparing each frame to the last one to check for differences.
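A minimal sketch of the worst-frames view from points 1 and 2, in C, using made-up frame times (a real run would feed in a FRAPS-style frametime log instead):

[code]
#include <stdio.h>
#include <stdlib.h>

/* Ascending comparison for qsort on doubles. */
static int cmp_asc(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    /* Hypothetical per-frame render times in milliseconds. */
    double ms[] = { 16.6, 16.9, 17.1, 16.5, 95.0, 16.8, 17.0, 33.4, 16.7, 16.6 };
    size_t n = sizeof ms / sizeof ms[0];

    qsort(ms, n, sizeof ms[0], cmp_asc);

    /* A simple index-based 95th-percentile frame time, comparable to the
       article's percentile bars. */
    printf("95th percentile frame time: %.1f ms\n",
           ms[(size_t)(0.95 * (double)(n - 1))]);

    /* The sorted worst-frames view, also expressed as instantaneous FPS
       (1000 / frame time) since framerates are more intuitive. */
    for (size_t i = n; i-- > n - 3; )
        printf("worst frame: %5.1f ms (%5.1f fps)\n", ms[i], 1000.0 / ms[i]);
    return 0;
}
[/code]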
 

kettu

Distinguished
May 28, 2009
243
0
18,710
[citation][nom]cleeve[/nom]To summarize, latency is only relevant if it's significant enough to notice. If it's not significant (and really, it wasn't in any of the tests we took except maybe in some dual-core examples), then, obviously, the frame rate is the relevant measurement.*IF* the latency *WAS* horrible, say, with a high-FPS CPU, then in that case latency would be taken into account in the recommendations. But the latencies were very small, and so they don't really factor in much. Any CPUs that could handle at least four threads did great, the latencies are so imperceptible that they don't matter.[/citation]

So, were the fps differences significant enough to notice? Or what made the fps "the relevant measurement"?
 

sarcasm

Distinguished
Oct 5, 2011
51
0
18,630
If I were building an "as-is" system for a casual user, I'd pick the FX-6300 over the i3 any day. But if for some reason the user is planning to upgrade, then the Intel path is the way to go. Also, an i3 is still a good option for a low-power HTPC.
 

levin70

Distinguished
Oct 4, 2010
17
0
18,510
Please re-run all of these tests with overclocks on the AMD chips; then let's talk. It's quite deceptive to run tests like this, pitting non-overclockable Intel chips against overclockable AMD chips, without overclocking. That's what you buy an AMD chip for. And not some wimpy overclock using the stock heatsink; make it real. You test all this hardware and then you put out this drivel.
 

Fokissed

Distinguished
Feb 10, 2010
392
0
18,810
[citation][nom]mayankleoboy1[/nom]Downvoted you. FACT: 99.9% of Windows SOFTWARE DEVS (not just game devs) use MS Visual Studio, which has its own proprietary, vendor-neutral compiler, so all games perform according to the raw power of the processor itself. FACT: the Intel compiler produces faster code for AMD processors than AMD's own compiler does. FACT: AMD does not have a compiler that produces Windows executables; it produces Linux-only code.[/citation]
Everything you've said is completely true; however, Intel compilers have had questionable SSE2/SSE3 checks in their CPU dispatcher in the past. http://en.wikipedia.org/wiki/Intel_C%2B%2B_Compiler#Criticism
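For anyone curious what that criticism looks like in practice: the complaint was that the dispatcher selected code paths based on the CPUID vendor string rather than on the advertised feature bits alone. A toy illustration of the pattern in C (not Intel's actual code; __get_cpuid is GCC/Clang's cpuid.h helper):

[code]
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = { 0 };

    /* Leaf 0 returns the 12-byte vendor string in EBX, EDX, ECX order. */
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);

    /* Leaf 1, EDX bit 26: the vendor-neutral SSE2 feature flag. */
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    int has_sse2 = (edx >> 26) & 1;

    /* The criticized pattern: gating the fast path on the vendor string
       instead of relying on the feature bit alone. */
    if (has_sse2 && strcmp(vendor, "GenuineIntel") == 0)
        puts("dispatch: vendor-gated SSE2 fast path");
    else if (has_sse2)
        puts("dispatch: SSE2 present, but generic path taken anyway");
    else
        puts("dispatch: baseline x86 path");
    return 0;
}
[/code]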
 

Fokissed

Distinguished
Feb 10, 2010
392
0
18,810
[citation][nom]marraco[/nom]Years ago, I asked Tomshardware to do an analysis of the lowest framerates. I was completely ignored. Then I went to Techreport and ranted about the same issue, under the same username that I use here. Techreport understood. Then I asked for the results to be sorted from lowest to highest, and Techreport understood that too. Yet they refused to display the results as framerates and chose to display frame times instead. I think that framerates, even if they are meaningless for single frames, are easier to understand because they are more intuitive. After Techreport started doing its "Inside the second" reports, I went back to Tomshardware and asked again for this type of analysis, giving what Techreport was doing as an example. Tomshardware completely failed to get it and answered with some garbage about also doing "games" reviews. TH didn't understand it at all. Now that AMD is fixing its drivers thanks to Techreport's work, suddenly all the hardware websites are copying Techreport, but like Tomshardware, most don't get it. The problem is those moments when, after buying the best hardware, I don't get the promised experience and I perceive a lot of small frame freezes (Far Cry 2 comes to mind). I do not care at all if "n" video cards give me 250 FPS. What I care about are those instants when I perceive small freezes. They break immersion. They are the showstopper. The point is: 1- Yes, I choose cards based on their worst-percentile framerates, not average framerates. So I disagree with Tomshardware's conclusion. 2- No, the way Tomshardware displays the results is not as good as seeing all the individual frames sorted from lowest to highest (obviously, only the worst frames need comparing). Showing just one or two percentiles doesn't tell the complete story; the line graph does. That's what Techreport does, and for good reason. 3- There is something that even Techreport can't do. They use FRAPS, but FRAPS does not report what is actually sent to the monitor. For example, if a game engine can't draw a new frame in time, it may choose to show the last frame again. That's reported as two frames, but it is felt as a single frame: a freeze. I think Far Cry 2 does that, because it reports a high number of frames on my GTX 670, with very low latencies, yet I see constant, intermittent freezes. I think the game engine itself is "cheating", but to analyze that you need specialized hardware capable of capturing the monitor output and comparing each frame to the last one to check for differences.[/citation]
Well, you could design some hardware that decodes every frame from the output and XORs each frame with the previous one, then checks for an all-zero buffer. That would have to be done out-of-place, making the hardware quite expensive.
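The comparison step itself is cheap in software terms; the expensive part is capturing the real monitor output. A sketch of just the duplicate-frame check, in C, assuming two captured frames arrive as raw byte buffers:

[code]
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Returns 1 if the two captured frames are byte-identical, i.e. the
   XOR of the buffers is all zeros -- a likely duplicated/frozen frame. */
static int frames_identical(const uint8_t *prev, const uint8_t *cur, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= prev[i] ^ cur[i]; /* stays 0 only if every byte matches */
    return diff == 0;
}

int main(void)
{
    /* Tiny stand-in buffers; real frames would be width*height*4 bytes. */
    uint8_t a[16] = { 0 }, b[16] = { 0 };
    printf("duplicate: %s\n", frames_identical(a, b, sizeof a) ? "yes" : "no");
    b[7] = 1; /* one changed pixel byte */
    printf("duplicate: %s\n", frames_identical(a, b, sizeof a) ? "yes" : "no");
    return 0;
}
[/code]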
 

Guest

Guest
Thanks for the review. I have a point that I hope to explain intelligibly.

You did a great job explaining how much latency there is and how often it happens. However, the 75th/95th percentile graph only represents how often the latency occurs:
Frame-Latency-Demo.png


It does not show when the latency happens, as your first graph does:
Frame-Time-Demo.png


Comparing the two graphs, you can see that the line graph shows when the highest latency occurs. Why is that meaningful? If the highest latency occurs during a "valley" of otherwise low latency, it is much more noticeable, whereas if it occurs during an already-high stretch of latency, it is less noticeable. (A rough sketch of flagging such spikes follows this post.)

I understand that publishing a review is time-consuming, and adding another graph to each benchmark could add several hours, but omitting the timing (pardon the pun) of the latency may not fully convey the overall stutter in gaming.


thanks.
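A rough sketch of that "spike in a valley" idea in C: compare each frame time to a short moving average of the preceding frames, so a spike that interrupts smooth play is flagged while the same spike inside an already-rough patch is not (the frame times are invented for illustration):

[code]
#include <stdio.h>

#define WIN 5 /* frames of recent history used as the local baseline */

int main(void)
{
    /* Hypothetical frame times (ms): one spike out of a smooth valley,
       then the same spike inside an already-rough patch. */
    double ms[] = { 16, 16, 17, 16, 16, 16, 70, 40, 45, 42, 70, 44 };
    int n = sizeof ms / sizeof ms[0];

    for (int i = WIN; i < n; i++) {
        double base = 0;
        for (int j = i - WIN; j < i; j++)
            base += ms[j];
        base /= WIN;
        /* Flag frames that take far longer than the local baseline:
           these are the stutters a player actually notices. */
        if (ms[i] > 2.0 * base)
            printf("frame %2d: %5.1f ms vs local avg %4.1f ms -> visible stutter\n",
                   i, ms[i], base);
    }
    return 0;
}
[/code]

With this data, only the 70 ms frame at index 6 (arriving after ~16 ms frames) is flagged; the identical 70 ms frame at index 10, sitting in an already-slow stretch, is not.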
 

logainofhades

Titan
Moderator
No matter how you look at it, for most people this comes down to budget. For those on a tight budget, AMD is probably still the better buy; those who can afford to will buy Intel. The APUs from AMD have kind of impressed me with their overall performance. It would be nice to have something so that, should my graphics card fail, my system isn't dead in the water and I could still game at decent FPS at low settings while waiting on a replacement. That is something someone on a budget should keep in mind, I think. Not everyone is like me, with spare GPUs lying around should one fail.
 

...
how about this - if there's proof that the ud5 indeed inhibits performance compared to the ud7, only then should it be taken into account. (edit: are you sure you weren't comparing rev 1.0 motherboards? afaik those usually have issues that get ironed out in later revisions.) i doubt it'll show up at stock settings unless the ud5 has sub-par power delivery or inferior bios firmware that causes throttling. i've seen some benchmarks where boards do bottleneck gaming performance with intel cpus, but those differences were in double digits, in certain cases 10~20 fps (i forget the exact figures) on average. that's why i tend to ignore a 5-6 fps difference, if there's even any.
 

cleeve

Illustrious


You do not understand the difference between chipsets.

The motherboard chipset has no impact on CPU performance. It only impacts features.

There is no performance bottleneck whatsoever.




 

cleeve

Illustrious


If you can find credible evidence that the chipset affects CPU performance, I'll re-test.

You won't. That's because the motherboard has no impact.

Ever since the memory controller moved onto the CPU, motherboards have offered nothing but features. Performance has nothing to do with them.

You do not understand what you are protesting.
 

Guest

Guest
Does anybody else think it's a bit suspicious that Tom's is showing the FX CPUs performing well below even the 3550, while the 8350 has been shown to be capable of beating, or at least competing with, the 3770K in many modern games?
 