i7 920 vs Phenom II 965 with an ATI 5870 (Finally!)



That is a very interesting driver development, though the other two games still favor the i7. By the way, I checked my links. You were right, one was with Catalyst 8.6, but the other was with 8.8.

What is noteworthy, though, is that in that pcgameshardware.com review they are "only" using a single HD4870x2. They are most likely bottlenecked by the GPU. Compare with this:

http://www.tomshardware.co.uk/core-i5-gaming,review-31673-6.html

There the single-card results are slightly higher than in your link, maybe because Tom's is not using AF. But in this Tom's link all CPUs are pretty much equal at 1680x1050 with 4xAA/no AF, while the test with 2x HD4870x2 shows the i7 pulling ahead, suggesting a graphics bottleneck with a single card.

EDIT:
You added another link while I was posting. More interesting driver developments, I see. That's good news for PII owners, though they don't test with a PII CPU in that other link, or with an i7. Still, I'd assume Nehalem wouldn't benefit much, if at all, since they might be fully utilising the GPU already. And I think it's a safe bet that PII would get better too, since the Far Cry 2 numbers in your other link show that. It's premature for me to speculate with this small sample size, but I'd guess PII and i7 will now perform more similarly with dual-GPU ATI solutions (different story with Nvidia GPUs). With 4 GPUs, though, the i7 still seems stronger. But like I said earlier, PII is plenty powerful for modern games. It's just that the i7 is a little bit more powerful.
 


At the time it was. It changed fast due to the flow of new tech, but my old P4 system can still play TF2 maxed and get 30 FPS. Not as nice as my new system, but still playable.

No one plays at low res anymore, but it's still the only true way to test a CPU's potential in a game. It may be useless at higher res, but it still shows the current and future potential of a CPU.
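If it helps, here's a toy way to picture why low res isolates the CPU: treat fps as capped by whichever of the CPU or GPU is slower. All the numbers below are made up purely to illustrate the point.

```python
# Toy frame-rate model: each frame needs CPU work and GPU work, and the
# slower of the two caps your fps. Numbers are invented for illustration.

def fps(cpu_fps_cap, gpu_fps_cap):
    """Effective fps is limited by whichever side is slower."""
    return min(cpu_fps_cap, gpu_fps_cap)

cpu_a = 120  # hypothetical faster CPU: can prepare 120 frames/s
cpu_b = 90   # hypothetical slower CPU: can prepare 90 frames/s

# At low resolution the GPU is barely loaded, so the CPU difference shows:
print(fps(cpu_a, gpu_fps_cap=300), fps(cpu_b, gpu_fps_cap=300))  # 120 vs 90

# At high resolution the GPU becomes the wall and both CPUs look identical:
print(fps(cpu_a, gpu_fps_cap=60), fps(cpu_b, gpu_fps_cap=60))    # 60 vs 60
```

Low res just pushes the GPU cap out of the way so the CPU cap is the one you measure.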

I'm not saying Phenom II is not a good chip for future-proofing for a time, or even for current gaming. Just that I doubt it will last as long as a Core i7 will. That's what people went on about for AM2+: a drop-in replacement, future-proofing. Seems like it only applies to certain aspects...



The very original CrossFire was actually better than SLI in many ways. For one, it did more than just split the frame into upper and lower halves, which did change performance. Second, it could mix and match GPUs: you needed one CrossFire Edition card, and the other could be an equal or lower GPU, since they connected together via an external dongle.

But that changed with the current CrossFire, and performance slid a bit as a result.

And it's not strange to see a GTX 285 beating CrossFire 5850s when nVidia is now pretty much the only GPU company games get optimized for. This allows even older-gen nVidia GPUs to keep up with, and even outpace, ATI's next-gen GPUs.

ATI was there too, but they stopped after being bought by AMD. Not a great move, since now they have to rely on driver optimizations, which normally only bring 10-15%.



The Core i7 has the ability to feed more than two cards faster than most CPUs out there. Drivers don't make a major difference in EVERY game or for every OS/GPU. Most of the time it is GPU-specific, game-specific or OS-specific, and normally it's for the current and past gen.

My old HD2900 stopped getting major performance updates at about the 8.8 drivers. Sometimes there would be an all-around update, but rarely. Currently it's the 4800/5800 series getting the performance updates, and it's on a per-game basis.

Catalyst 9.8 upped AMD-platform CrossFire performance. Intel is planning the same thing, but it's called cheating when Intel does it and just "platform enhancements" when AMD does. Won't get into that though.

Still, let's see two 5870 X2s in the game and see which CPU bottlenecks first on a clock-per-clock basis... oh wait... we have to do it at stock, since a Core i7 at 2.66-2.8GHz keeping up is "good enough" and clock per clock apparently can't be done anymore.

Man, it just seems like no matter what, the rules always change to suit their perspective of what it should be. During the Athlon X2 days it was clock per clock. Now that Intel has the top tier, it's stock vs stock.
 



So you think it's more fair that the i7 be overclocked to 3.4GHz?
 


If it's for a true comparison, then yes. Clock per clock tells you that at a given speed these CPUs do this. Of course, turn Turbo off for a 100% fair comparison. In the enthusiast world we do OC anyway.

But my main point is how certain fans of AMD tend to change their tune. When Intel became the better clock-per-clock and performance-per-watt CPU, it became about value. And now it's stock vs stock.

It's a one-way street. I say be fair: use dual-channel memory for the Core i7, turn off Turbo and even SMT, just to get a one-on-one IPC comparison. Then turn them all back on to show what the CPU is fully capable of.

It's kinda like AMD's YouTube marketing. They take a decent AMD system price-wise and then throw balls-to-the-wall, unnecessarily high-end Intel parts against it to create a giant price gap and make theirs more appealing. I know it's marketing, but if you match price points and then compare, it's a different story. A Core i7 920 keeps up with most Phenom IIs in a stock test; when OCed, it tends to beat them (in gaming).

But I guess some people don't want that to be seen.
 


The reason it should be stock vs stock is that clock speed isn't the only thing that affects the performance of a CPU. Stock vs stock lets you take one processor at $X and another processor at $Y and compare their value.
 


Wow, how messed up did you get that? Intel started the clock speed nonsense when they were losing heavily on IPC. It's only now that IPC is so important to Intel, because AMD has such a clock speed advantage.

It's no wonder you feel the way you do when you have that point so badly messed up, jimmy.
 
You guys are arguing about how the i7 is so much better than Phenom II while the Phenom II holds its own against it... but do you realize Phenom II was made to compete against the Core 2 architecture?
 
Hm. It says right here on Tom's that the QX6850/QX6800 (chosen because they are the same platform, so they would perform like a Q6600 OCed to those clocks, as you know) land near your 3.0GHz Phenom IIs.

http://www.tomshardware.com/charts/2009-desktop-cpu-charts/Performance-Index,1407.html

And I'm not saying the Q6600 is better than the Phenom II at all; that would be silly. Phenom II is better all around. But clock for clock it's mighty close.
 


No, Intel started the GHz race. AMD's Athlon started the clock-per-clock comparison, since their chips ran slower but performed like such-and-such speed. During the Athlon X2 days it moved to performance per watt. Then, just recently, when Intel holds the lead, it becomes stock vs stock.

I understand stock vs stock, but it doesn't give a full view of what the CPUs can truly do against each other. Sure, a Core i7 920 can keep up with most higher-clocked C2Qs and Phenom IIs, but what if it were clocked to their speed? And it's not really unfair, since it can clock to about the same speed as the fastest stock Phenom II.

As I said, an all-around comparison would be nice: stock vs stock, same speed, performance per watt. But it just seems it never works that way, only in whatever way benefits someone's view, and that mainly seems to go in favor of AMD.
 


There was no GHz "race". I know AMD was using the + sign for their models (2800+, 3000+, 3200+, etc.), but the fact of the matter is different companies use different techniques. AMD used to have a higher IPC; now Intel does. The comparison needs to be stock vs stock because there's a price difference, and people want to justify the price difference when they pay for what they get. The i7 920 costs about $90 more than the 965 BE (now, with the recent price drop). The 965 BE runs at 3.4GHz. Now suppose AMD came out with a processor that also costs $290. Although it might run at a much higher clock speed (3.8-4.0GHz), who's to say the i7 would still perform better?
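To put the value point in numbers, here's a quick sketch. The prices follow what I said above (i7 920 about $90 more than the 965 BE); the fps figures are placeholders, not real benchmark results.

```python
# Rough price/performance comparison. Prices follow the post above;
# the fps numbers are placeholders for whatever benchmark you care about.

def fps_per_dollar(avg_fps, price):
    return avg_fps / price

i7_920 = fps_per_dollar(avg_fps=75, price=290)  # ~$290 per the post
x4_965 = fps_per_dollar(avg_fps=73, price=200)  # roughly $90 cheaper

print(f"i7 920: {i7_920:.3f} fps/$")
print(f"965 BE: {x4_965:.3f} fps/$")
# Near-identical fps at a $90 gap makes the cheaper chip the better value.
```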
 



Imagine if Intel released an i7 with a 3.4GHz stock clock and priced it at $300. AMD would be a thing of the past (except for the budget part of the market, of course), considering how many more people give a *** about stock clocks than about clock-for-clock and max OC headroom. (Not that I want that to happen =P)
 


If there was a Core i7 at 3.4GHz, it wouldn't be $300. It would be a lot more than that.
 
Intel does that crap 'cause they can. Not saying it's OK; I've always hated the Intel Extreme series' prices, like we all do. But they could always release a locked-multiplier CPU at 3.4GHz. My point is that if Intel did high stock clocks like AMD, which they can, they would dominate the market (arguable, I know, but you see my point), even though they already do. Then again, that would confuse the noobs into thinking it was as fast as the Extremes because of the abundance of stock benchmarks, so they wouldn't do it; not to mention the stock power consumption would be pretty high.
 


Another way to test a CPU is to remove the GPU bottleneck and then test at high resolutions and high details. That's a real-world test. I prefer that.






That might be an interesting test from the technology buff's point of view, but I wouldn't call it fair. Not from the consumer's point of view, at least. Sure, all the technology buffs are consumers, but most consumers are not technology buffs. And most people run their computers at stock clocks.



I agree and disagree.

In my opinion, if you overclock one then the other should also be overclocked. Both should be clocked as high as they can go and then compared; that is a fair win for the i7. But I wholeheartedly agree that an all-around comparison is the way to go. Only, the same clock speeds should be used when the stock clocks are the same or when they have the same overclock limit. I know you weren't trying to make an exhaustive list, but for me the most important one is the price/performance comparison.
 


Hello JennyH,

I think you aren't looking at those results properly (the Catalyst 9.8 results, that is). Situations where Crossfire technology is employed can shine some light on a performance bottleneck elsewhere in the system. Using the results you showed us, one can hypothesize that the Catalyst 9.8 drivers brought in more efficient CPU code. This code alleviates some of the CPU bottleneck that was present (helping all the slower CPUs) but of course still leaves you limited by the GPU (therefore not helping the Core i7 much at all).

In this image you can see that the improved CPU code helps a bit, but the title is already quite GPU limited.
[Image: Catalyst9.8-Crysis.png]


Here the improved CPU code helps the lower-end products (i5 750 and 940 BE) catch up to the Core i7, which is already hitting the GPU bottleneck.
[Image: Catalyst9.8-FC2.png]


Grid proves the hypothesis, as it is a CPU limited game. The more efficient CPU code means nothing here, as the title is not being held back by the GPU but rather by the CPU. Since there is no GPU bottleneck, this is the true performance scaling between these processors.
[Image: Catalyst9.8-RDG.png]


EDIT: The links don't work. They're essentially the graphs here: http://www.pcgameshardware.com/aid,692942/Catalyst-98-reviewed-HD-4870-X2-up-to-47-percent-faster-failing-in-Anno-1404/Practice/
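To make the hypothesis concrete, here's a toy model of what the 9.8 driver appears to do; every fps cap and efficiency factor in it is invented purely for illustration.

```python
# Toy model of the Catalyst 9.8 effect: the driver's CPU overhead caps how
# fast each CPU can feed the GPUs. Cutting that overhead helps slower CPUs
# until everyone hits the same GPU wall. All numbers are invented.

GPU_WALL = 75  # frames/s the graphics subsystem can deliver, tops

def fps(cpu_throughput, driver_efficiency, gpu_wall=GPU_WALL):
    # The CPU-side cap scales with how efficient the driver's CPU code is.
    return min(cpu_throughput * driver_efficiency, gpu_wall)

for name, cpu in [("Core i7", 110), ("i5 750", 95), ("PhII 940", 90)]:
    old = fps(cpu, driver_efficiency=0.7)  # pre-9.8: heavy CPU overhead
    new = fps(cpu, driver_efficiency=1.0)  # 9.8: leaner CPU code
    print(f"{name:9s} old driver: {old:5.1f} fps -> new driver: {new:5.1f} fps")

# The slower CPUs gain a lot; the i7 was already at the GPU wall, so it
# gains almost nothing, exactly the pattern in the graphs above.
```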
 




In Race Driver: Grid the results are sobering since the new driver doesn't deliver any fps differences worth mentioning.

According to that, Grid wasn't actually one of the optimised games. Best guess is that Grid is easily capable of max fps on any reasonable system and AMD didn't think it was worth spending time on. If they had, you'd probably see a flatter line, something like you see on the others, or at least a bit closer than what is there.

Don't forget this is comparing a stock 940 BE (DDR2) vs an i7 @ 3.5GHz, so yes, there will be a difference, and there should be. It would have been a lot better if we'd seen a 965 BE in the mix instead of the 940, tbh.
 

I wasn't offering you an opinion; I was attempting to tell you the way it is.

When I see the following statement:
but in the Release Notes AMD notes massive performance improvements for Crossfire and CrossfireX setups, especially in CPU limited scenes

I interpret it this way:

Massive improvements with Crossfire and CrossfireX setups in CPU limited scenes. This tells us that AMD spent some time optimizing CPU code (and I would also think it is safe to assume that they threaded a lot of this code to take advantage of more cores, as the Far Cry 2 results clearly show when comparing an E6600 with a Q6600).

But any way you look at it, there is a GPU bottleneck present. We see this quite obviously, as most of the results even out with the new Catalyst 9.8 drivers. They increase performance, but in doing so hit the GPU bottleneck. This is all confirmed when we look at the Grid results.

Again, you're not objective in your approach therefore I am attempting to rationalize with you.

Another thing worth mentioning is the fact that the QPI link is not compatible with the PCI Express bus. The Intel IOH (X58) actually tunnels the signal from the PCI Express bus through the QPI link to the CPU, and the same occurs in the other direction. You can view a patent filed by Intel right here: http://www.patentstorm.us/patents/7210000/description.html

This adds a degree of complexity to the process (and some latency to the transfer), which could account for why, when faced with a GPU bottleneck scenario, Intel systems lag AMD systems by a few frames per second.
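As a back-of-the-envelope sketch of how little latency it would take (the per-transfer penalty and transfer counts below are pure guesses, not measurements), even a tiny tunnelling cost adds up over a frame:

```python
# Back-of-the-envelope: how a small per-transfer latency on the
# PCIe -> QPI tunnel could shave a few fps. Every number here is a
# guess for illustration, not a measurement.

base_frame_ms = 1000 / 75  # ~13.3 ms per frame at 75 fps
transfers     = 5000       # guessed PCIe transactions per frame
extra_ns      = 50         # guessed added tunnelling latency on each

penalty_ms = transfers * extra_ns / 1e6         # 0.25 ms per frame
new_fps    = 1000 / (base_frame_ms + penalty_ms)

print(f"penalty per frame: {penalty_ms:.2f} ms -> {new_fps:.1f} fps")
# A fraction of a millisecond per frame is enough to explain a gap of
# a frame or two per second at a GPU/CPU wall.
```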

Of course, when we look at other CPU intensive applications, Intel completely dominates. Gaming (the area where there is communication between the PCI Express bus and the QPI link) is the only area where we see this phenomenon rear its ugly head.

All that having been said, a Core i7 processor at the same clock speed is significantly faster than a Phenom II based processor. Gaming is the only area where we see these sorts of discrepancies.
 
I wasn't offering you an opinion; I was attempting to tell you the way it is.

When I see the following statement:
but in the Release Notes AMD notes massive performance improvements for Crossfire and CrossfireX setups, especially in CPU limited scenes

I interpret it this way:


1) I'm a lot more interested in the facts than in your misinformed interpretations, tbh.

2) Those same release notes make no mention whatsoever of Grid.

----

The following performance gains are noticed with this release of Catalyst™ 9.8:

Battleforge DirectX 10/DirectX 10.1 - performance improves up to 15-50% in CPU limited settings with the largest gains in CrossfireX configurations
Company of Heroes DirectX 10 - performance improves by up to 10-77% in CPU limited settings
Crysis DirectX 10 - Dual CrossfireX performance improves as much as 10% and Quad CrossfireX performance improves as much as 34% in CPU limited settings
Crysis Warhead DirectX 10 - Dual CrossfireX performance improves as much as 7% and Quad CrossfireX performance improves as much as 69% in CPU limited settings
Far Cry 2 DirectX 10 - Dual CrossfireX performance improves as much as 50% and Quad CrossfireX performance improves as much as 88% in CPU limited settings
Tom Clancy’s H.A.W.X. DirectX 10/DirectX 10.1 - Dual CrossfireX performance improves up to 40% in CPU limited settings with Quad CrossfireX performance improving up to 60% in CPU limited settings
Unigine Tropics OpenGL - performance improves 5-20%
Unigine Tropics DirectX 10 - Quad CrossfireX performance improves 5-20% in CPU limited settings
World in Conflict DirectX 10 - performance improves by 5-10%

----

Once again, Grid was not improved in Catalyst 9.8, probably because there was no need to improve a game that runs flawlessly at maximum settings anyway. You cannot use one game that wasn't optimised in Catalyst 9.8 as justification that the i7 is better in Crossfire gaming. My point was that the previous benchmarks used Catalyst 9.6 and can safely be ignored in terms of today's experience.

Massive improvements with Crossfire and CrossfireX setups in CPU limited scenes. This tells us that AMD spent some time optimizing CPU code (and I would also think it is safe to assume that they threaded a lot of this code to take advantage of more cores, as the Far Cry 2 results clearly show when comparing an E6600 with a Q6600).

See, six months ago this would have been "proof" that the i7 was a far superior gaming CPU, but now it's just a driver issue that has been resolved. Is the game now bottlenecked by the graphics cards on *all* reasonably powerful systems? It looks that way, doesn't it?

71 fps on the Q6600
73 fps on the Phenom II 940
74 fps on the i5
75 fps on the i7

So we've hit a GPU bottleneck and that is holding back the i7? That is what you are trying to say, isn't it? And with better GPUs we'll see better results, right?

http://www.anandtech.com/video/showdoc.aspx?i=3650&p=5

Crossfire 5850 equals Crossfire 5870. Far Cry 2 is *CPU limited* at 75fps, not GPU. I think that's worth repeating with a quote from Anand.

Have you ever wondered about what point Far Cry 2 becomes CPU limited? Well now we know. The 5850 in Crossfire manages to turn in the same score as the 5870 in Crossfire: 75fps. We’re CPU limited even at these high resolutions and settings.

See that number, 75fps? Now, we *know* this can't be a GPU bottleneck, because we *know* that the 5870 is a superior card to the 5850. We also know from experience that those numbers would be higher than 75fps with two such incredible graphics cards if the GPUs were the limit.

Now look at those numbers again:

71 fps on the Q6600
73 fps on the Phenom II 940
74 fps on the i5
75 fps on the i7

What's the bottleneck? The bottleneck is the CPU, at 75fps, and the Phenom II 940 @ 3.0GHz is almost equal to the i7 @ 3.5GHz.
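You can sanity-check that inference mechanically. In a min(CPU, GPU) picture, a faster GPU only raises fps when the GPU is the binding limit; the sketch below just applies that to the numbers quoted above.

```python
# Sanity check on the bottleneck inference: in a min(CPU, GPU) world,
# swapping in a faster GPU only helps when the GPU is the binding limit.
# If unequal cards give equal fps, the CPU side must be the wall.

cf_5850 = 75  # Crossfire 5850, from the AnandTech result quoted above
cf_5870 = 75  # Crossfire 5870, same settings

if cf_5870 > cf_5850:
    print("GPU-limited: the faster cards pulled ahead")
else:
    print("CPU-limited: a strictly faster GPU pair gained nothing")

# And the CPU spread at that wall, from the quoted results:
results = {"Q6600": 71, "PhII 940": 73, "i5": 74, "i7": 75}
spread = max(results.values()) - min(results.values())
print(f"CPU spread at the wall: {spread} fps")  # 4 fps across four CPUs
```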

Again, you're not objective in your approach therefore I am attempting to rationalize with you.

You'd do a lot better if you got some facts instead of attempting to interpret or rationalise.
 


It's probably a case of what was tested, actually. Far Cry 2 has a "benchmarking" tool that throws up completely different results from actual gaming.

Edit - I probably don't have to mention that Intel systems do better in the benchmarking tool, and not quite so well in the actual gaming results, do I?
 


I doubt that's the reason. The numbers Tom's gets with a single HD4870x2 are in the ballpark. It's just that when they add another HD4870x2 they significantly increase the fps they're getting. That suggests the i7 is not the bottleneck there, but that the graphics subsystem is. It is more likely that there is a driver issue with the HD5000 series cards.
 


Yet we aren't seeing it anywhere else, just in Far Cry 2? And at the same 75fps bottleneck we've seen from two different sites, using the same 3.5GHz i7 and different graphics cards?

How many 'coincidences' does it take before you accept that the i7 is not any better than Phenom II in gaming? And wait, are we now talking about quadfire instead of crossfire? If so, that is almost guaranteed to be driver related again.
 

Come on, the i7 is better than Phenom II in gaming if you have a good GPU.
 


God almighty what is WRONG with you people?

http://www.anandtech.com/bench/default.aspx?p=102&p2=47

The i7 is not better than Phenom II in gaming, period. Look at that Left 4 Dead benchmark at the bottom; that goes against everything Intel fanboys have been saying about the i7 being better at low res or in CPU-bound gaming. You cannot get a game more CPU-bound than Left 4 Dead, and guess what? The Phenom II 965 BE is faster than the i7 920.

http://www.anandtech.com/bench/default.aspx?b=48

There is the entire list with all the new CPUs, and the newer i7s are on top, and the i5 also *just* beats the Phenom II 965. But you know what, when the 975 BE is released it will go right to the top, because THESE CPUS ARE ALMOST IDENTICAL IN GAMING.

Only an absolute muppet cannot take these very clear facts at face value. Left 4 Dead at lowish res (1680x1050), with NO AA/AF, has the Phenom II 965 scoring 6.4 fps higher than the i7 920.