FX Vs. Core i7: Exploring CPU Bottlenecks And AMD CrossFire

[citation][nom]ricardok[/nom]Catalyst 12.10?? WTF?? The tearing is being addressed with 13.20 beta and you could have used 13.10 too if you wished. Why stay with an old 12.10 driver? That's not even close to fairness.. I wish I could see the same tearing test done on 13.20![/citation]
Did you read the article? Both systems use the same graphics cards with the same drivers. They are comparing CPUs. C. P. U.
 

oxiide

Distinguished
[citation][nom]Crashman[/nom]Scaling-down the CPU when running $800 in graphics is completely unrealistic. The 3570K wasn't missing at all, the only thing that was missing was a faster consumer-oriented AMD CPU.[/citation]
It's not about realism, in my opinion. This is an article comparing two CPUs from two manufacturers, and products should be compared against their closest competing product. I could just as easily assert that it's not "realistic" to imply that someone buying an FX-8350 is comparing it against a $300 i7. By this logic, the FX-8350 isn't a realistic choice in this system either.

Besides, someone spending $800 on graphics is [hopefully] in the know enough to tell whether or not their usage demands Hyper-Threading. If it doesn't, an i5-3570K is a reasonable choice no matter how much they're spending on graphics.
 

ojas

Distinguished
[citation][nom]bystander[/nom]Every game in the list is threaded. What are you going on about? Skyrim shows improvements up to 4 threads, and large improvements up to 3. The point is, the vast majority of the games in existence do not use more than 4 cores effectively and it would be extremely biased to only include those games. The idea is to get a typical cross section of games with some variety, which he has done, though it would have been nice to see more games in general.[/citation]
I had quoted you where you had said that using more well-threaded games would somehow be biased.

Anyway, the point wasn't "threaded", it was "well threaded", which means 4 or MORE cores.

The point of the article was TO IDENTIFY A BOTTLENECK. "WILL xyz CPU hold back a pair of flagship GPUs?". I've not really seen any results here that weren't expected.

[citation][nom]Crashman[/nom]Scaling-down the CPU when running $800 in graphics is completely unrealistic. The 3570K wasn't missing at all, the only thing that was missing was a faster consumer-oriented AMD CPU.[/citation]
But...but weren't you looking for a situation where a top tier CPU is a bottleneck (in ways that we don't know already)?

Problem is, AMD has ONE top-tier desktop CPU, while Intel has three for gaming: the 3570K, the 3770K, and, according to Don in his "Best CPUs for the Money" segment, the 3930K.

Simply comparing the 8350 and the 3770K like you would do for a regular benchmark doesn't make too much sense.

You're saying it isn't realistic for the 3570K to be used; I'd say it would be equally unrealistic for an 8350 to be used. They are priced similarly and perform (on average) similarly. And it's well known that Intel>AMD for gaming ever since Sandy Bridge came out. So I don't see the point of making the "unrealistic" argument.

Might as well skip the debate entirely and use all four processors... I mean, you are making a value comparison as well.

I hope you can see the point I'm trying to make; I'm not sure I'm putting it across well.

A processor bottleneck would mean that performance is limited by cache latency/size, clock speed, core/thread count, or architectural/IPC differences. Which one are you looking for?

Or are you looking for a platform bottleneck? In which case, wouldn't it be better to compare IVB to SNB-E?

Or are you looking at the rendering pipeline, and how the CPU affects that? If I'm not wrong, the pipeline would be affected by how the game chooses to implement it: Does rendering happen on an exclusive core/thread or on a core/thread that has other work? Does cache come into the picture? Do clock speeds and IPC matter?

Again, I don't know about the rendering pipeline in detail; I'm just writing down whatever questions are in my head, so forgive me if none of this actually applies.
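
To make the "exclusive core/thread" question concrete, here's a minimal C++ sketch of the idea (purely illustrative; I'm not claiming any real engine is structured this way): one thread runs the simulation, and a second thread does nothing but take finished frames and submit them.

[code]
// Minimal sketch of a dedicated render thread (illustrative only; real engines
// are far more elaborate). The simulation thread produces per-frame data and a
// second thread consumes it, so rendering gets a core/thread of its own.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

struct Frame { int id; };            // stand-in for per-frame draw data

std::mutex m;
std::condition_variable cv;
Frame pending{};
bool frame_ready = false;
bool quit = false;

void render_thread() {
    while (true) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [] { return frame_ready || quit; });
        if (quit && !frame_ready) break;
        Frame f = pending;
        frame_ready = false;
        lk.unlock();
        // stand-in for draw-call submission / Present()
        std::this_thread::sleep_for(std::chrono::milliseconds(5));
        std::cout << "rendered frame " << f.id << "\n";
    }
}

int main() {
    std::thread renderer(render_thread);
    for (int i = 1; i <= 10; ++i) {
        // stand-in for game logic / physics / AI for this frame
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        {
            std::lock_guard<std::mutex> lk(m);
            pending = Frame{i};      // a slow renderer would simply see frames overwritten here
            frame_ready = true;
        }
        cv.notify_one();
    }
    {
        std::lock_guard<std::mutex> lk(m);
        quit = true;
    }
    cv.notify_one();
    renderer.join();
}
[/code]

If the simulation part of each frame takes longer than the rendering part, the render thread just sits waiting; that, roughly, is what I'd call a CPU-side bottleneck.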
 
[citation][nom]ojas[/nom]I had quoted you, where you had said that using more games that were well threaded would be biased somehow.Anyway, the point wasn't threaded, was well threaded, which meant 4 or MORE cores.The point of the article was TO IDENTIFY A BOTTLENECK. "WILL xyz CPU hold back a pair of flagship GPUs?". I've not really seen any results here that weren't expected.But...but weren't you looking for a situation where a top tier CPU is a bottleneck (in ways that we don't know already)?[/citation]
It is biased because you seem to only want highly threaded games. They included a couple of the best examples possible yet that isn't enough for you.
 
i think it would help to think of the comparisons as two $2000 s.b.m.-style performance builds based on intel and amd flagship cpus, focussed on gaming. $2k gaming-focussed performance builds have been done before iirc. that way one can count both cpu performance and 7970 cfx performance. whiners might whine, "amd doesn't compete in the high end anymore blah blah..." but that does not mean one can't build a performance pc with an fx8350. disregarding the price, the fx8350 is amd's highest-end desktop cpu. keep in mind that the previous flagship fx8150 had a much higher, core i7-level price at launch and for a while after that.
when the games demand gpu muscle, both intel and amd are adequate (which, in c.a.l.f. speak - amd is as good as intel / amd can do anything that intel can do), but when the games demand cpu muscle, amd performs worse than intel. which is why a faster desktop amd cpu is needed for gaming. this is not to poke fun at amd, just to point out that they need to do better. similarly, gpu-bottlenecked gaming rigs will help hide fx's inadequacy.

it was amusing to see amd-biased people saying stuff like 'amd is value/low level', 'very few people would use a 7970', etc. how insulting is that to amd? a ~$200 cpu is Not a value chip. a celeron, an sb/ivb pentium, an athlon ii x3 445 or an a6 apu is a value chip. a lot of people, especially those who run multi-monitor setups, run cards like 7970s, or a couple of them. it's as if amd fanboys are berating their own favorite brand.
 

ojas

Distinguished
[citation][nom]bystander[/nom]It is biased because you seem to only want highly threaded games. They included a couple of the best examples possible yet that isn't enough for you.[/citation]
well, 2 not-so-new games isn't really enough.

And, I mean, we are discussing processors with 8 threads here, aren't we? Why should I NOT want to see highly threaded games? I don't see what you're getting at. Would it become biased because of the shared floating-point resources on Piledriver?

Again, I'm not sure what exactly Thomas set out to benchmark here. Or at least, I'm not sure what he ended up benchmarking. It just seemed to be a regular performance comparison, with some stuttering analysis (which was something I hadn't seen before for these two processors, so yup, I learned something there).

Processor bottleneck, OK, but what specific type? What's the logic behind the entire exercise? What are you trying to isolate? ARE you trying to isolate anything?

Listen, it's not my intention to troll, argue with you, or bash the article. I've been active on this site for almost two years now. I wish to see it grow, and I'd love to help it. In that respect, I'm just trying to be helpful.

So assume I'm not clear on what happened here. Could someone explain? Don't tell me to read the article. I've done that. I don't get the point, beyond the frame analysis.

The objective was to find a bottleneck. What sort? The sort that would hold back a high-end multi-GPU setup at higher resolutions. So we have to look at things specific to a multi-GPU setup. Most of the results can be replicated with a single GPU too. Also, in some cases, it's simply an increase in graphics load shifting the bottleneck to the GPUs.

You want something like the CPU comparison charts in this AnandTech piece:

http://www.anandtech.com/show/6670/dragging-core2duo-into-2013-time-for-an-upgrade

That's where you can sit and analyze stuff like cores vs. modules and the effect of threads. However, the AT charts could do with an SNB/IVB Pentium and the FX-8350, and probably the 3570K too. Right now we can't say, looking at those charts, whether a Core 2 Quad would have matched the A10 or the i3 (it would probably match the A10, from all the data I've seen so far).

Consider this: FreeSpace 2 SCP gets bottlenecked on my Core 2 Quad Q8400 but not on my friend's 3570K (120 fps on his vs. 45-50 fps on mine for the same scene; the engine caps it at 120 fps). I know for sure it's not a GPU bottleneck because the GPU utilization is well below 50% (GTX 560). And this is the SSE2-optimized Inferno build.

Sleeping Dogs, Normal AA: CPU bottlenecked, High AA: GPU bottlenecked.

Planetside 2: CPU bottlenecked.
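
(For what it's worth, here's a rough C++ sketch of logging that same utilization counter, assuming an Nvidia card, a POSIX system, and nvidia-smi on the PATH; this isn't the tool I actually used, just the idea. Sustained readings well below ~90-100% while the game runs uncapped usually point at the CPU rather than the GPU.)

[code]
// Rough GPU-utilization logger (sketch only). Assumes nvidia-smi is installed
// and on the PATH, and a POSIX popen(). Prints one sample per second.
#include <chrono>
#include <cstdio>
#include <iostream>
#include <thread>

int main() {
    const char* cmd =
        "nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits";
    for (int i = 0; i < 60; ++i) {                       // sample for one minute
        FILE* p = popen(cmd, "r");
        if (!p) { std::cerr << "failed to run nvidia-smi\n"; return 1; }
        char buf[32] = {};
        if (fgets(buf, sizeof buf, p)) std::cout << "GPU load (%): " << buf;
        pclose(p);
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    return 0;
}
[/code]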
 

Crashman

Polypheme
Former Staff
[citation][nom]ojas[/nom]You're saying it isn't realistic for the 3570K to be used, i'd say it would be equally unrealistic for a 8350 to be used, they are priced similarly, and perform (on an average) similarly. And, it's well known that Intel>AMD for gaming ever since Sandy Bridge came out. So i don't see the point of making the "unrealistic" argument.Might as well skip the debate entirely, and use all four processors...i mean, you are making a value comparison as well.[/citation]All four would have been nice, but would have cut into time quite a bit. The article instead tried to pick a "best gaming processor" from each company, after seeing that the 3770K topped the high-end gaming benchmarks in the 3970X's review. The point from there was to reduce CPU bottlenecks, not create them.
 
[citation][nom]ojas[/nom]well, 2 not-so-new games isn't really enough.And, i mean, we are discussing processors with 8-threads here, aren't we? Why should i NOT want to see highly threaded games?[/citation]
The reason is, he is giving you a cross section of your typical games and from my experience, he actually did you a solid by including a higher percentage of highly threaded games than typically found.

There are VERY few games that use more than 4 threads. Probably fewer than you can count on one hand, yet he included two that show improvement with more than 4 cores if you go CrossFire, which he did.

Your charts have a problem: they only show a single 7970. It takes CrossFire/SLI to expose many CPU bottlenecks.

I will agree with one thought: it would have been nice had he included more games. I would have liked to see StarCraft 2 as well as a few more games, but that does take more time. I just disagree with the idea that he didn't include enough multithreaded games. He included more than are typically found.
 

Crashman

Polypheme
Former Staff
[citation][nom]bystander[/nom]The reason is, he is giving you a cross section of your typical games and from my experience, he actually did you a solid by including a higher percentage of highly threaded games than typically found.There are VERY few games that use more than 4 threads. Probably less than you can count on 1 hand, yet he included 2 that show improvement with more than 4 cores if you go crossfire, which he did.Your charts have a problem, they are only showing a single 7970. It takes crossfire/SLI to show many CPU bottlenecks.I will agree with one thought, it would have been nice had he included more games. I would have liked to see Starcraft 2 as well as a few more games, but that does take more time. I just disagree with the idea that he didn't include enough multithreaded games. He included more than typically found.[/citation]Heheh, AvP and Metro 2033 were put in as last-minute replacements for StarCraft 2. SC2 doesn't support Eyefinity resolutions.
 
[citation][nom]Crashman[/nom]Heheh, AvP and Metro 2033 were put in as last-minute replacements for StarCraft 2. SC2 doesn't support Eyefinity resolutions.[/citation]
I wouldn't have minded not seeing Eyefinity results. SC2 should show interesting results for CPU power.
 

zipperhead

Honorable
What's with all this "actual" rig testing? If NASA can simulate the birth of the Universe, why can't all this testing and comparing be dumped into a model and neural-networked for per-user "best" comparisons? i.e. have a wee app on your comp that you turn on every now and then when you're not d/l-ing pron... that records how you use your system, what apps and games, and then every now and then uploads that profile to the cloud to be injected into the model, and voila, you're instantly told what the best hardware platform is for you. Slap a wee sliding-scale bar on there for how much dosh you wanna spend, and then that's it... forever. No more mindless apples-and-oranges comparisons. Jim != Bob, therefore Jim's best rig likely != Bob's best rig. Just model + cloud + profile = best per person.
 

ojas

Distinguished
@Thomas, bystander: Thanks for the replies.

[citation][nom]Crashman[/nom]All four would have been nice, but cut into time quite a bit. The article instead tried to pick a "best gaming processor" from each company, after seeing that the 3770K topped the high-end gaming benchmarks in the 3970X's review. The point from there was to reduce CPU bottlenecks, not create them.[/citation]
Um, see, I understand what you're trying to say: that the 3570K, being technically inferior to the i7, would be a potential bottleneck. (BTW, I know this stuff takes time, and I respect the effort you've put into this; I'm just discussing this for the future and my own understanding. I honestly don't expect (or want) you to re-do the article or anything.)

What i'm saying is this:
1. What was the bottleneck you were trying to reduce? As in, you must know it exists, right? From all the data we've seen, the 3570K isn't a bottleneck in games, at least not in comparison with the i7.
2. "Best gaming CPU" from Intel would be the 3930K, within reasonable limits, according to your own site, otherwise it's the 3570K. You're saying that the 3970X doesn't do much over the 3770K, I'm saying the 3770K won't do much over the 3570K. Same architecture, clock speeds, chipset and number of cores/threads being used; only difference is cache. Why then, would the 3570K possibly create a bottleneck? Unless of course the L3 cache is involved in rendering.

[citation][nom]bystander[/nom]The reason is, he is giving you a cross section of your typical games and from my experience, he actually did you a solid by including a higher percentage of highly threaded games than typically found.[/citation]
It's the same suite that's almost always used on Tom's. So this is the same number that's typically found here. Deus Ex: HR and Sleeping Dogs both support Eyefinity too, and could have been added in place of SC2.

Tom's did a DE:HR review. Check it out; that's one game that shows evidence of major CPU bottlenecks (see the CPU clock and CPU core scaling charts):

http://www.tomshardware.com/reviews/deus-ex-human-revolution-performance-benchmark,3012-7.html

The advantage of the above benchmark is this:
There’s a huge drop in performance between the quad-core i5-2500K and the dual-core, Hyper-Threaded Core i3-2100 at the same clock rate, suggesting that this engine is optimized for threading. Having said that, there’s no notable difference between the Phenom II X4 and X6, so the game does not appear to use more than four cores. At less than four cores, Phenom II performance is drastically reduced, and the dual-core model doesn’t satisfy at all.

Actually, i'll just post a few charts and results from Tom's and Anand Tech. I've found some good stuff, i think.

P.S. Pointing out that it's one GPU in those charts seems pointless; the same or similar order will establish itself with two cards unless it's a GPU bottleneck. Or the gap might even widen, because the Core 2 Duo has platform limitations too.
 

ojas

Distinguished
AT Benches:
Metro 2033, 1080p: http://www.anandtech.com/bench/CPU/338?i=551.552.288.287.443.523.363.434.203.435.677.676.444.675

Metro 2033, 1024x768: http://www.anandtech.com/bench/CPU/337?i=551.552.287.443.523.363.288.434.444.203.435.677.675.676

You see a response to core/thread count till a point, after which it's architecture, and then IPC.

Dirt 3, 1200p: http://www.anandtech.com/bench/CPU/337?i=551.552.287.443.523.363.288.434.444.203.435.677.675.676

Architecture and/or memory bandwidth (as Thomas suggested), then thread count, but notice some sort of response to cache allowing the i5-2400 to equal the i3-3220 despite being clocked lower; though that may be because of Turbo Boost. In fact, more likely because of Turbo Boost, I think. So the final response is to clock speeds, then, though I'm curious as to what's happening with the i7-3820.

Skyrim, Win 8, 1680x1050: http://www.anandtech.com/bench/CPU/505?i=701.551.702.288.697.698.699.700.434.677.47

Clock speeds and architectural (and possibly platform/chipset) issues seem to be the i7-920's problem, though I wonder what's happening with the i3... it could be that Skyrim favours more integer ops or more cache, giving the FX chips a win there. Between the i3 and the i5-2500K, I'm sure clock speeds are part of the equation, but then Skyrim is well threaded up to 4 threads, so it should be able to use the additional resources. I suspect it is indeed the added integer resources.

Beyond the 2500K it seems to be simply a case of higher clock speeds and IPC, though I suspect that thread scheduling is taking a toll on the 3770K, and that for Skyrim a 4C/4T arrangement is optimal.

Diablo 3, Win 8, 1680x1050: http://www.anandtech.com/bench/CPU/506?i=551.701.697.702.288.699.698.700.434.677.47

Seems very similar to Skyrim, just more heavily dependent on clock speeds.

Batman: Arkham Asylum, 1680 x 1050: http://www.anandtech.com/bench/CPU/59?i=99.142.192.102.107.157.109.117.191.47.146.144.145.147.120.122.105.119.112.54.97.118.143.106.121.89.123
Older CPUs; I just have absolutely no clue what's happening here. :lol:
 

ojas

Distinguished
Tom's Stuff:
Crysis 2, 720p: http://media.bestofmicro.com/Z/P/364165/original/game_crysis_ii.png
Didn't embed the chart because it's huge.
Mafia 2, 720p: http://media.bestofmicro.com/Z/Q/364166/original/game_mafia_ii.png

Then there's Fritz, which is integer performance dependent (at least, that's a logical assumption), so it almost completely mirrors Sandra's Dhrystone results.
http://media.bestofmicro.com/0/B/364187/original/res_syn_fritz.png
http://media.bestofmicro.com/0/J/364195/original/res_syn_sandra_cpu_alu.png

Multi-monitor dual-GPU benchmarks with a 2600K:
[chart: Metro 2033 at 5760x1080]

Compare:
[chart: Metro 2033 Very High, 2013 CPU bottleneck charts]

There were more, but they used 8xMSAA, so I didn't post them. But yeah, the point was, it's a clear GPU bottleneck. Not useful for looking at CPU bottlenecks, IMO.

MoH:
[chart: Medal of Honor CPU scaling]

Double the GPUs and triple the horizontal resolution, and I wouldn't expect anything different in a Frostbite 2 game. Compare:
[chart: BF3 Ultra, 2013 CPU bottleneck charts]

EDIT: Though I think I should mention here that... CrossFire has a problem in BF3, so we might as well not talk about this. I wonder why two 680s weren't used for this. Or rather, why use BF3 for testing that involves CrossFire? We saw this in the SBM, too, I remember.

EDIT 2: The problem seems to be exclusive to 1080p; I don't think we'd see 66 fps at 5760x1080 had CF not been working.

FC3 would have been the ideal candidate for this article, since we don't know if it uses more than 4 cores, because we're too GPU bottlenecked with a single card, even at 1080p.
[chart: Far Cry 3 CPU scaling]

This is how the game behaves with a 3960X and mid-range GPUs in CrossFire/SLI:
[chart: FC3 High preset, 1920x1080]

[chart: FC3 High preset, 5760x1080]


Finally, Chris' 3970X benchmarks:
[chart: Skyrim, 1680x1050]

[chart: Skyrim, 1920x1080]

[chart: Skyrim, 2560x1600]

On the other hand, The Elder Scrolls V: Skyrim does exhibit greater sensitivity to platform performance.

Four Sandy Bridge-E/EP-based CPUs enjoy the lead at 1680x1050, suggesting that some combination of high clock rates and large shared L3 caches help drive performance.

As with any workload that increasingly emphasizes some other component, however, scaling up to 1920x1080 and then 2560x1600 quickly levels off average frame rates. Our highest resolution tips the scales in favor of Intel’s Ivy Bridge architecture. Sandy Bridge-derived CPUs clump together in the middle, while AMD’s portfolio lags behind (albeit by less than 10 FPS, on average, under the High settings preset).

Now add two GPUs to the highest res above, and you'll see the 1680x1050 chart repeat itself.

Which is why I've not been satisfied with this article. I knew the Skyrim results before I had even seen them (I'm talking about the fps bar graphs, not the frame times).

Hope I managed to get all this across...

Cheers
 
Guest

Guest
Couldn't read the charts at all. There should be buttons for each of those graphs so they don't overlap each other. As it is now, it's unreadable.
 

Crashman

Polypheme
Former Staff
Ojas, I only need to look at the last chart to see that 60.8 > 60.6. I know the differences are too small to mean anything, but they are differences, and the article is trying to present a best-case scenario from the hardware side (so that nobody can claim the test platform was hindered).

As for the games not being picked to highlight CPU differences, that should be a good thing, right? No looking for specific games to hurt one brand or the other, just a general impression of average differences?

And the point? We've been hearing for years that because ATI graphics were more CPU-dependent than Nvidia's, Intel processors should be preferred for CrossFire. That sounds like an Intel-biased statement worth investigating, especially since Piledriver has pushed AMD's performance.
 

HKILLER

Honorable
I'm not really a pro, but I've had 6 different computers by now, and to be honest, an AMD CPU with Nvidia graphics has always given me the best results! Nice price and nice performance... My next system is going to be an FX-8350 with an EVGA GTX 580 Classified on the same motherboard used in this review...
 

Crashman

Polypheme
Former Staff
[citation][nom]HKILLER[/nom]I'm not really pro but i've had 6 different computers by now and to be honest AMD CPU with Nvidia Graphic has always gave me the best result!nice price and nice performance...my next system is going to be FX 8350 with Evga GTX 580 Classified with the same Mother Board used in this review...[/citation]Hey, do you think there should be another article comparing these results with SLI results?
 

ojas

Distinguished
[citation][nom]Crashman[/nom]Ojas, I only need to look at the last chart to see that 60.8>60.6. I know the differences are too small to mean anything, but they are differences and the article is trying to present a best-case scenario from the hardware side (so that nobody can claim the test platform was hindered).As for the games not being picked to highlight CPU differences, that should be a good thing right? No looking for specific games to hurt one brand or the other, just a general impression of average differences?And the point? We've been hearing for years that because ATI graphics were more CPU-dependant than Nvidia, Intel processors should be preferred for CrossFire. That sounds like an Intel-biased statement worth investigating, especially since Piledriver has pushed AMD's performance.[/citation]
True, it's 0.2 fps of a difference, but then there's also a 100 MHz difference. Wouldn't they be identical at, say, 4.4 GHz, like you used in the article? And then... in the first Skyrim chart (and the AT bench) you have the 3570K leading the 3770K by 0.4 fps. But then, I guess you're only talking about higher resolutions.

I only wished to see more CPU-dependent games, because I thought the point of the article was exploring bottlenecks...
AMD and Intel continue serving up increasingly faster CPUs. But graphics card performance is accelerating even faster. Is there still such a thing as processor-bound gaming?
I wouldn't expect to see CPU bottlenecks in GPU-limited games, which is why I've been jumping up and down in confusion.

But then I guess you meant specifically for CrossFire and all; at least that's what you're saying now... it didn't come across clearly in the introduction. Neither was I aware of the "pair ATI/AMD with Intel" statement.

The introduction is sort of... not too relevant, then. It talks about processor-bound gaming at high resolutions, of not comparing price, and mentions something about pairing AMD GPUs with AMD CPUs, trying to find a contradiction to "hard data" that you already have...

But the article ends up reading like a regular performance comparison and stuttering analysis, along with an efficiency and value comparison, so I got completely thrown off track as to what's happening here.

See, to my mind, saying that increasing graphics load should somehow increase CPU load is counterintuitive. I think that notion of the CPU "holding back" a fast GPU had more to do with the CPU preparing frames too slowly, so that the number of frames the GPU could process per second would fall. Which is fine, but I don't think that component has mattered for a long time, beyond stuttering. I think it's now about what happens BEFORE the CPU has to prepare frames. In fact, I think it's always been about that with discrete-GPU systems, but hey, I won't claim to know what I don't, so I'll stop here. But yeah, the original holding back could have had a bit to do with the platform as well.

In the two-odd (simple 2D) games I've helped code, we called the draw function at the end, and though we weren't using the GPU, I suppose that's how these games work as well. You compute everything, then draw the screen.

In that case, I'd assume the CPU's actual involvement with frames is small.

With BF3, for example, if I don't cap the fps at, say, 50, then my GPU sees ~98% utilization. If I cap it at 50 fps, it's usually lower than 75%. With the cap, the CPU obviously never feeds the GPU more than 50 frames per second, so the bottleneck shifts.

But in BF3 MP games, with the same GPU settings, the game is choppy; the fps bounces between 30 and my 50 fps cap because the CPU can't keep up.

So here, the problem is clearly before the frame-drawing part on the CPU side. At least, as far as I can see.
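
A toy C++ loop shows the same shift (the numbers are made up; 20 ms is just 1/50 fps):

[code]
// Toy frame-rate cap illustrating how the bottleneck shifts (made-up numbers).
// At a 50 fps cap the budget is 20 ms per frame. If the CPU and GPU portions
// fit inside that, the loop sleeps and the GPU sits partly idle (my single-
// player case). If the CPU portion alone blows past 20 ms, the fps drops below
// the cap (my multiplayer case).
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const auto budget = std::chrono::milliseconds(20);               // 50 fps cap
    for (int frame = 0; frame < 100; ++frame) {
        auto start = clock::now();

        std::this_thread::sleep_for(std::chrono::milliseconds(8));   // pretend CPU work
        std::this_thread::sleep_for(std::chrono::milliseconds(6));   // pretend GPU work

        auto elapsed = clock::now() - start;
        if (elapsed < budget)
            std::this_thread::sleep_for(budget - elapsed);           // GPU idles here

        std::chrono::duration<double, std::milli> total = clock::now() - start;
        std::cout << "frame " << frame << ": " << total.count() << " ms (~"
                  << 1000.0 / total.count() << " fps)\n";
    }
    return 0;
}
[/code]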

Thus, in the article, if you wanted to bust the CF myth (if it is a myth), you'd need to compare two CPUs, one from Intel and one from AMD, that have identical single-GPU performance in a particular game, and then look at the CF benchmark. If you then saw issues with CF on the AMD but not on the Intel, you'd have found the bottleneck you set out to find.

I'm not contradicting myself, btw; I was unaware of the CF-specific agenda. Though... shouldn't you have run a similarly performing SLI setup on the AMD and compared it to the CF one? Looking at two different CPUs running the same CF setup ends up comparing the CPUs, not determining whether CF is a problem on AMD or not, since you can't know that without an SLI setup for comparison (and/or a single-GPU setup to compare the CF against).

There's a free Intel tool that lets you analyze the render pipeline in great detail. I'll see if I can find the link to it. It was too complicated for me to understand the traces. Or maybe it wasn't, and I was too lazy... but maybe you'd find it more useful.
 
[citation][nom]ojas[/nom]AT Benches:Metro 2033, 1080p: http://www.anandtech.com/bench/CPU [...] 76.444.675Metro 2033, 1024x768: http://www.anandtech.com/bench/CPU [...] 77.675.676You see a response to core/thread count till a point, after which it's architecture, and then IPC.Dirt 3, 1200p: http://www.anandtech.com/bench/CPU [...] 77.675.676Architecture and/or memory bandwidth (as Thomas suggested), then thread count, but notice some sort of response to cache allowing the i5-2400 to equal the i3-3220 despite being slower; though that may be because of Turbo Boost. In fact, more likely because of Turbo boost i think. So the final response is to clock speeds then, though i'm curious as to what's happening to the i7-3820.Skyrim, Win 8, 1680x1050: http://www.anandtech.com/bench/CPU [...] 434.677.47Clock speeds and architectural (and possibly platform/chipset) issues seem to be the i7-920's problem, though i wonder what's happening to the i3...could be that Skyrim favours more integer ops or more cache, giving the FX chips some win. Between the i3 and the i5 2500K, i'm sure clock speeds are a part of the equation, but then Skyrim's well threaded till 4Ts, so it should be able to use the addtional resources. I suspect it is indeed the added integer resources.Beyond the 2500K it seems to be simply a case of higher clock speeds and IPC, though i suspect that thread scheduling is taking a toll on the 3770K, and for Skyrim, it's possible that a 4C/4T arrangement is optimal.Diablo 3, Win 8, 1680x1050: http://www.anandtech.com/bench/CPU [...] 434.677.47Seems very similar to Skyrim, just more heavily dependent on clock speeds.Batman: Arkham Asylum, 1680 x 1050: http://www.anandtech.com/bench/CPU [...] 121.89.123Older CPUs, just have absolutely no clue on what's happening here.[/citation]
Frankly, I don't really trust those charts from AT, where you just plug in random hardware and it spits out results. On average, I'm sure they are correct, but I have my doubts if they actually tested every one of those systems in those games. It seems more like they take a few benchmarks to create a pattern which is then reflected in the charts. I much prefer finding articles like the above where they do test it, and let you know the hardware that went with it. They have similar charts here on THG (though not as nice to use), and they are often really inaccurate.

As far as this goes:
p.s. pointing out that it's one GPU in those charts seems pointless, the same or similar order will establish itself with two cards unless it's a GPU bottleneck. Or, it might aggravate because the Core 2 Duo will have platform limitations too.
As you pointed out, the reason I would suggest CrossFire be used is to try and remove the GPU bottleneck. Using really small resolutions can also help remove that from the equation.
 

ojas

Distinguished
@bystander: those charts are the results of the hardware they test, and they very closely match Tom's own results (see Skyrim, for example). The only odd set of scores is the Batman one.

The Core 2 Quad Q8400 result for Batman actually matches my own benchmarks fairly closely, assuming PhysX was "off". I get an average of 179 fps with the same settings, but at 1024x768 with a GTX 560. I think AT is using a GTX 280 there, but at 1680x1050, so I guess the drop is believable, though I have a feeling it should be more.

But yeah, they usually do a review of, say, a CPU, and those same benchmark numbers get used for that comparison tool. It's just like what Tom's does for its charts, except that you can isolate and compare results. I don't see the problem with that...

That's why you won't find Skyrim-on-Windows-8 results for the Q8400: they didn't re-run the benchmark.

AT is as legit as Tom's, and always has seemed to be.
 
i read that nvidia's fxaa benefits from more cpu power. the latest kepler cards seem to be a bit more cpu dependent than previous-gen cards. i think exploring the cpu and gpu bottlenecks in multi-monitor sli rigs is worth considering.
 

ojas

Distinguished
OK, I poked around Intel's site and came across a case study where they used GPA to analyze and identify rendering bottlenecks in code:

http://software.intel.com/sites/default/files/minuslabgpacasestudy-v1.pdf

It's interesting, to say the least, and it sort of steps outside the scope of this article, because it assumes access to the game's source code. If anything, it's just an indicator of how GPA could be used to analyze things like you're trying to in the article.

P.S. Our game was single-threaded, coded in C++, used the Allegro graphics library, and called the draw function at the end.
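
Roughly like this, an Allegro 5-style skeleton written from memory (our actual code was different, and may well have been on an older Allegro version, so treat this as a sketch of the structure rather than what we shipped):

[code]
// Sketch of "compute everything, then draw at the end" (Allegro 5 style, from
// memory; needs the allegro5 and allegro_primitives libraries to build).
#include <allegro5/allegro.h>
#include <allegro5/allegro_primitives.h>

int main() {
    if (!al_init() || !al_init_primitives_addon()) return 1;
    ALLEGRO_DISPLAY* display = al_create_display(640, 480);
    if (!display) return 1;

    float x = 0.0f;
    for (int frame = 0; frame < 600; ++frame) {
        // --- game logic first: input, physics, AI, collision... ---
        x += 1.0f;
        if (x > 640.0f) x = 0.0f;

        // --- drawing only at the very end of the frame ---
        al_clear_to_color(al_map_rgb(0, 0, 0));
        al_draw_filled_rectangle(x, 220.0f, x + 20.0f, 260.0f, al_map_rgb(200, 50, 50));
        al_flip_display();

        al_rest(1.0 / 60.0);   // crude pacing; a timer + event queue is the proper way
    }

    al_destroy_display(display);
    return 0;
}
[/code]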
 

HKILLER

Honorable
[citation][nom]Crashman[/nom]Hey, do you think there should be another article comparing these results with SLI results?[/citation]
Agreed...
 