[citation][nom]Crashman[/nom]Ojas, I only need to look at the last chart to see that 60.8>60.6. I know the differences are too small to mean anything, but they are differences and the article is trying to present a best-case scenario from the hardware side (so that nobody can claim the test platform was hindered). As for the games not being picked to highlight CPU differences, that should be a good thing right? No looking for specific games to hurt one brand or the other, just a general impression of average differences? And the point? We've been hearing for years that because ATI graphics were more CPU-dependant than Nvidia, Intel processors should be preferred for CrossFire. That sounds like an Intel-biased statement worth investigating, especially since Piledriver has pushed AMD's performance.[/citation]
True, it's a 0.2 fps difference, but then there's also a 100 MHz difference. Wouldn't they be identical at, say, 4.4 GHz, like you used in the article? And then... in the first Skyrim chart (and the AT bench) you have the 3570K leading the 3770K by 0.4 fps. But then, I guess you're only talking about higher resolutions.
I only wished to see more CPU-dependent games, because I thought the point of the article was exploring bottlenecks...
"AMD and Intel continue serving up increasingly faster CPUs. But graphics card performance is accelerating even faster. Is there still such a thing as processor-bound gaming?"
I wouldn't expect to see CPU bottlenecks in GPU-limited games, which is why I've been jumping up and down in confusion.
But then I guess you meant specifically for CrossFire and all; at least that's what you're saying now. It didn't come across clearly in the introduction, and I wasn't aware of the "pair ATI/AMD with Intel" statement either.
The introduction is sort of... not too relevant, then. It talks about processor-bound gaming at high resolutions, about not comparing price, and mentions something about pairing AMD GPUs with AMD CPUs, trying to find a contradiction to "hard data" that you already have.
But the article ends up reading like a regular performance comparison and stuttering analysis, along with an efficiency and value comparison, so I got completely thrown off track as to what's happening here.
See, to my mind, saying that increasing graphics load should somehow increase CPU load is counterintuitive. I think the notion of the CPU "holding back" a fast GPU had more to do with the CPU preparing frames too slowly, so that the number of frames the GPU could process per second would fall. Which is fine, but I don't think that component has mattered for a long time, beyond stuttering. I think it's now about what happens BEFORE the CPU has to prepare frames. In fact, I think it's always about that with discrete-GPU systems, but hey, I won't claim to know what I don't, so I'll stop here. But yeah, the original holding-back could have had a bit to do with the platform as well.
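To put a rough number on what I mean (this is just my own mental model with made-up timings, not anything from the article): once frames are buffered, whichever side is slower sets the pace, something like:
[code]
#include <algorithm>
#include <cstdio>

int main() {
    // Hypothetical per-frame timings, purely for illustration.
    double cpu_ms = 8.0;   // time the CPU needs to prepare one frame
    double gpu_ms = 14.0;  // time the GPU needs to render one frame

    // With enough buffering, the displayed frame rate is governed by
    // whichever side is slower, not by the sum of the two.
    double frame_ms = std::max(cpu_ms, gpu_ms);
    std::printf("~%.0f fps, %s-bound\n", 1000.0 / frame_ms,
                cpu_ms > gpu_ms ? "CPU" : "GPU");
    return 0;
}
[/code]
So raising the graphics load only grows the GPU's share of that; it doesn't make the CPU's job any bigger.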
In the two-odd (simple 2D) games I've helped code, we called the draw function at the end, and though we weren't using the GPU, I suppose that's how these games work as well: you compute everything, then draw the screen.
In that case, I'd assume the actual involvement with frames on the CPU side is small.
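For what it's worth, the loop in those little games looked roughly like this (the names and structure are made up for illustration; it's not the actual code we wrote):
[code]
#include <chrono>

// Stand-ins for the real game code; names are just for illustration.
void process_input() {}
void update_simulation(double /*dt*/) {}
void submit_draw_calls() {}  // in a real engine this is where the GPU gets its work

int main() {
    using clock = std::chrono::steady_clock;
    auto last = clock::now();

    for (int frame = 0; frame < 3; ++frame) {  // a real game loops until quit
        auto now = clock::now();
        double dt = std::chrono::duration<double>(now - last).count();
        last = now;

        process_input();        // everything the CPU works out *before* drawing
        update_simulation(dt);  // game logic, AI, physics, etc.
        submit_draw_calls();    // only at the very end does the frame get drawn
    }
    return 0;
}
[/code]
Everything above submit_draw_calls() is the "before the frames" work I'm talking about.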
With BF3, for example, if I don't cap the frame rate at, say, 50 fps, my GPU sits at ~98% utilization. If I cap it at 50 fps, it's usually below 75%. Obviously, the CPU side isn't letting the GPU process more than 50 fps, so the bottleneck shifts.
But in BF3 multiplayer, with the same GPU settings, the game is choppy; the frame rate bounces between 30 fps and my 50 fps cap because the CPU can't keep up.
So here the problem is clearly before the frame-drawing part on the CPU side, at least as far as I can see.
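Here's a toy sketch of how I picture that 50 fps cap behaving, with made-up CPU timings and the GPU side ignored completely:
[code]
#include <algorithm>
#include <cstdio>

int main() {
    const double target_ms = 1000.0 / 50.0;  // 20 ms frame budget at a 50 fps cap

    // Hypothetical CPU frame-preparation times, e.g. single-player vs. a busy MP server.
    const double cpu_work_ms[] = {10.0, 15.0, 30.0};

    for (double work : cpu_work_ms) {
        // If the CPU finishes early, the limiter idles away the rest of the budget,
        // so the GPU ends up waiting (lower utilization). If the CPU overruns,
        // there's nothing left to trim and the frame rate drops below the cap.
        double idle_ms = std::max(0.0, target_ms - work);
        double frame_ms = work + idle_ms;
        std::printf("CPU %.0f ms -> %.1f fps\n", work, 1000.0 / frame_ms);
    }
    return 0;
}
[/code]
That matches what I see: capped and smooth when the CPU has headroom, bouncing below the cap when it doesn't.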
Thus, in the article, you'd need to compare two CPUs, one from Intel and one from AMD, that deliver identical single-GPU performance in a particular game, and then look at the CrossFire benchmark, if you wanted to bust the CrossFire myth (if it is a myth). If you saw issues with CrossFire on the AMD CPU but not on the Intel one, you'd have found the bottleneck you set out to find.
I'm not contradicting myself, by the way; I was unaware of the CrossFire-specific agenda.
Though... shouldn't you have compared a similarly performing SLI setup on the AMD CPU against the CrossFire one? Looking at two different CPUs running the same CrossFire setup ends up comparing the CPUs, not determining whether CrossFire is a problem on AMD or not, since you can't know that without an SLI setup for comparison (and/or a single-GPU setup to compare the CrossFire results against).
There's an Intel tool you can use to analyze the render pipeline in great detail. I'll see if I can find the link. It's free, but it was too complicated for me to understand the traces. Or maybe it wasn't, and I was just too lazy... but maybe you'd find it more useful.