I7 920 vs Phenom II 965 with an ATI 5870.(Finally!)



The mobo was fine. And until DDR2 is proven to be slower than DDR3, let's not get into that. DDR3 has no advantage over DDR2. Hell, DDR2 only started to push better performance over DDR about 2 years ago. If memory performance truly mattered in games, it would show.

And let's look at your Legion Hardware results. Let's take the most intense game: Crysis Warhead.

http://www.legionhardware.com/document.php?id=869&p=19

So at 2GHz a Core i7 gets 40FPS while the Phenom II X4 965 gets 32FPS. At 4GHz the Core i7 gets 43FPS, and the Phenom II gets 43FPS as well. A 2.8GHz Phenom II gets the same FPS as a Core i7 at 2GHz, so a Core i7 at 2GHz can give the same results as a Phenom II at 2.8GHz. That means it takes more processing power for a Phenom II X4 than for a Core i7 920 to push the 5870.
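A quick back-of-the-envelope check on those numbers (just the arithmetic from the post; the 40FPS figure for the 2.8GHz Phenom II is inferred from "a 2.8GHz Phenom II gets the same FPS as a Core i7 at 2GHz"):

```python
# Back-of-the-envelope math from the Legion Hardware Crysis Warhead
# numbers quoted above. fps_per_ghz is just an illustrative helper,
# not a real benchmark metric.
def fps_per_ghz(fps, ghz):
    """FPS delivered per GHz of clock: a crude per-clock comparison."""
    return fps / ghz

i7_per_clock = fps_per_ghz(40, 2.0)       # 20.0
phenom_per_clock = fps_per_ghz(40, 2.8)   # ~14.3

# The Phenom II needs roughly 2.8/2.0 = 1.4x the clock to match
# the i7's frame rate here, i.e. ~40% more clock-for-clock work.
print(i7_per_clock, round(phenom_per_clock, 1), 2.8 / 2.0)
```

Each game will land somewhere different, but this is the clock-for-clock gap the quoted numbers imply.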

Of course each game is different, but I am just looking at it logically. The most intense game, the one that needs an uber system to run, is a good way to do that.

Of course price always comes into it. Phenom is great for budget; hell, I want an X3 720 for my HTPC when I can afford to build one. But there will be times when, for the price, a Core i7 will eat a Phenom II up. And with newer GPUs coming, I am willing to bet a Core i7 will handle them better than a Phenom II.
 

randomizer

Champion
Moderator

Clock cycles at speeds above 3.2GHz are interpolated and not real clock cycles. :sol:
 

SuperFly03

Distinguished
Dec 2, 2004
2,514
0
20,790


No, instead it adds another wrinkle to the decision about building a rig. If you look at the resolution and settings used you will see a theme: 25x16 resolution (currently the highest available in a consumer display) and 4x AA or higher. These are extremely demanding settings. Yes, the 5970 is a beast of a GPU; however, if you were to turn the resolution down to more common settings you would see a larger disparity between Phenom II and i7, because the FPS would be more CPU bound than GPU bound. At 25x16 you are GPU bound, meaning the CPU is waiting on the GPU to process the information before passing more information along.
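The CPU-bound vs. GPU-bound point above can be sketched with a toy frame-time model (the millisecond figures are made up purely for illustration, not real benchmark numbers):

```python
# Toy model: each frame takes max(cpu_time, gpu_time), because the
# faster component sits idle waiting on the slower one.
def fps(cpu_ms, gpu_ms):
    """Frames per second when the slower of CPU/GPU gates each frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# At 25x16 with heavy AA the GPU dominates: both CPUs land on the
# same FPS, so the CPU difference is invisible.
print(fps(cpu_ms=8.0, gpu_ms=25.0))   # "i7" rig:        40.0
print(fps(cpu_ms=11.0, gpu_ms=25.0))  # "Phenom II" rig: 40.0

# Drop the resolution and the GPU's work gets cheap; now the frame
# rate is gated by the CPU and the gap shows up.
print(fps(cpu_ms=8.0, gpu_ms=5.0))    # "i7" rig:        125.0
print(fps(cpu_ms=11.0, gpu_ms=5.0))   # "Phenom II" rig: ~90.9
```

Same two hypothetical CPUs in both runs; only the GPU load changed, which is exactly why identical FPS at 25x16 tells you nothing about the CPUs.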

Let's not make a bigger deal out of this than it is. Yes, when you are GPU bound the AMD vs. Intel argument is moot. That is admitted. However, if you are CPU bound then the i7 will be faster. Now, you have to assess whether the additional cost (6GB vs. 4GB, additional motherboard cost) is worth the extra performance. For me it was. I bought an i7 and took the SOB above 4.2GHz 24/7 stable. It's a beast of a CPU. Yes, I use all that raw power.

The above isn't true for everyone. At 16x10 a 4890 or 5850 will do, and a Phenom II will be sufficient, but it won't be the fastest. Outside of gaming... it's another story, but that isn't the point of this thread.
 

ElMoIsEviL

Distinguished

You're starting to sound like me :p

**runs away**

I've explained this to her like 5 or 6 times before and she simply ignores it. But who knows... your English is better than mine so maybe it will sink in.
 

ElMoIsEviL

Distinguished

LMAO!!!

Post of the month :p
 

ElMoIsEviL

Distinguished

The consensus of opinion...

Whose consensus... yours?

We all acknowledge that those results have a few issues:

1. PCI Express bus used was 1 x16 and 1 x8 for the AMD rig.
2. DDR2 was used

BUT...
IF we look at the performance differences between DDR2 and DDR3 in an AMD rig, there basically are none (you even used to claim this yourself).

And if we look at the performance difference between an x8 PCI Express 2.0 slot and an x16, it is trivial... truly trivial (1-3FPS).

Nothing which could make up for the Phenom II X4's shortfall of over 60FPS in some instances.

The fact is that GPU bottlenecks are the only reason why a Phenom II X4 system can keep up with, and at times surpass, a Core i7 system. That is pretty much it. <--- and that's the consensus amongst the knowledgeable members of nearly every hardware forum.

Don't believe me? Try visiting other forums and pose the very same question. Or would you prefer if I put the question to a poll on other forums?
 

SuperFly03

Distinguished
Dec 2, 2004
2,514
0
20,790


Let's not get carried away here...
 

jennyh

Splendid
The consensus of opinion...

Whose consensus... yours?

We all acknowledge that those results have a few issues:

1. PCI Express bus used was 1 x16 and 1 x8 for the AMD rig.
2. DDR2 was used

As mentioned a few times already by myself and another poster... there is no such thing as x16 + x8 PCI-E. It is either x8/x8 or x16/x1.

BUT...
IF we look at the performance differences between DDR2 and DDR3 in an AMD rig, there basically are none (you even used to claim this yourself).

And if we look at the performance difference between an x8 PCI Express 2.0 slot and an x16, it is trivial... truly trivial (1-3FPS).

Nothing which could make up for the Phenom II X4's shortfall of over 60FPS in some instances.

The fact is that GPU bottlenecks are the only reason why a Phenom II X4 system can keep up with, and at times surpass, a Core i7 system. That is pretty much it. <--- and that's the consensus amongst the knowledgeable members of nearly every hardware forum.

Don't believe me? Try visiting other forums and pose the very same question. Or would you prefer if I put the question to a poll on other forums?

Once again you simply fail to point out the only fact that matters: FPS above 60 is meaningless.

On top of that, I already mentioned what the real reason was. Sure, the mobo isn't terrible, but it clearly isn't optimal. In fact it's about the slowest 790 mobo available.

The real reason? When the i7 had a large lead (if you count 150 vs. 120 FPS as a large lead) a few months ago, before ATI released better drivers, people claimed it was proof that the i7 was faster in gaming. It wasn't; the new drivers proved that, and any advantage the i7 had in multi-GPU setups was lost.

How much faster do you think two 5870s are compared to a 5970? Are you telling me that the Phenom II bottlenecks somewhere in between? My, what a coincidence that would be... if true. Which it isn't.

When ATI releases new Crossfire drivers again, it will be proven once more. The reason the i7 works better out of the box? That almost certainly has something to do with the games being written on i7s, but who knows?

Until the i7 can hold on to a lead below 60FPS, it's the worst choice for gaming. Dirt 2 once again shows that the i7 copes worse with the most demanding games.

Dirt 2 - http://www.tomshardware.com/reviews/dirt-2-performance-benchmark,2508-10.html

Remember this from over a year ago? http://www.bit-tech.net/hardware/cpus/2008/11/03/intel-core-i7-920-945-965-review/5

There are a lot more of those benchmarks out there. When the system is running a graphically advanced game - one that stresses the gpu a lot - the i7 loses any advantage it has. This is what matters, this is why the i7 is the poor gaming choice.
 

jennyh

Splendid


This is where you are wrong, it's where elmo is wrong and wuzy too.

We've already seen this argument many times. We've heard it for over a year. Since the i7 was released 13 months ago and paired with a GTX 280, Intel fanboys have been waiting on faster cards to prove its superiority.

The GTX 285, GTX 295, HD 4890, GTX 275, HD 5850, HD 5870 and HD 5970 have been released since then. The i7 has been benched with all of them.

You know what, Jimmy? The i7 still loses below 60FPS, and it will still lose below 60FPS even with the next-gen cards, not that it will matter much by then.
 
Here's what I mean about Xbit Labs.
Nvidia Corp. first demonstrated its code-named GF100 (NV60, G300, GT300, etc) graphics processing units (GPU) based on Fermi architecture over three months ago. Since the product is still not released, it is not a surprise that there are loads of speculations about performance of the next-generation GeForce. According to a representative at Nvidia, the company is happy about performance of the novelty.

“We expect [Fermi] to be the fastest GPU in every single segment. The performance numbers that we have [obtained] internally just [confirms] that. So, we are happy about this and are just finalizing everything on the driver side,”
http://www.xbitlabs.com/news/video/display/20091226130553_Nvidia_Is_Happy_With_Performance_of_GeForce_GF100_Fermi_Graphics_Card.html

The next day?

Nvidia Corp. will only release its next-generation GeForce graphics chip code-named GF100 (NV60, GT300, G300, etc) in March, 2010, a media report citing market rumours suggests. Even though the timeframe still belongs to previously announced launch schedule of code-named Fermi products, it was generally thought that the novelties will be released in January or February.

The leading supplier of graphics processors has reportedly notified its partners that it would officially launch the GeForce “GF100” graphics chip in March ’10, reports DigiTimes web-site. In addition, Nvidia plans to release a rather mysterious code-named GF104 chip in Q2 2010.
http://www.xbitlabs.com/news/video/display/20091228080808_Nvidia_to_Release_Next_Generation_GeForce_Graphics_Chip_in_March_Rumours.html

Now, they can be a good source, or they can do this. Their consistency isn't the best.
Their power ratings are often off, as are their GPU findings, compared to the majority of other sites.
 

SuperFly03

Distinguished
Dec 2, 2004
2,514
0
20,790


Don't you find it a little ironic that ATI... a division of AMD... can't code drivers for their own processor... wait a minute. The mix of GPU drivers and CPU speed doesn't really make any sense.
 

SuperFly03

Distinguished
Dec 2, 2004
2,514
0
20,790


Is this what you are trying to hang your hat on?

http://www.pcgameshardware.com/aid,692942/Catalyst-98-reviewed-HD-4870-X2-up-to-47-percent-faster-failing-in-Anno-1404/Practice/

 

jennyh

Splendid
That was one of the links, yes. Notice that Anand still hasn't upgraded his benchmark, and in Far Cry 2 the Phenom still scores 50FPS or so. Click the Far Cry 2 tab on that and you'll see what I mean.

http://www2.ati.com/relnotes/Catalyst_98_release_notes.pdf

You can see a bunch of games where the Phenom II was performing poorly in Crossfire configurations. This led to suggestions that the i7 was far better in multi-GPU setups, but it had more to do with bad drivers than anything else.
 

SuperFly03

Distinguished
Dec 2, 2004
2,514
0
20,790


You can't hang your hat on a driver revision 4 revisions back.

If you notice, the C2Qs also got a massive speed boost, indicating that it wasn't a Phenom-specific problem. Was it a driver issue? Yeah, probably. However, the i7's lack of improvement shows that the i7 architecture compensated for the poor driver coding, so when the coding was fixed to bring the others back up to par... the i7 showed no improvement.

The improvements seem to be limited to older quad cores. Note the Conroe shows no improvement and the Lynnfield shows very little, but the Phenom X4 and Kentsfield do.
 

jennyh

Splendid


I agree on the surface it doesn't make much sense.

But the results are there to be seen. Something was holding back the Phenom II and a driver revision brought it up to parity with the i5 and i7.

Right now the drivers for the 5xxx series are still immature, and that is the main reason some games in that Xbit Labs review had the i7 well ahead in FPS.

ATI should fix it in cases where it matters, but then again if the Phenom II is scoring 100+ fps I can't see any pressing need for them to do so when their time would be better spent making real improvements.
 

jennyh

Splendid


Yes, I know it wasn't Phenom-specific; the Q6600 also showed nice improvements.

I've been saying all thread that the i7 performs better 'out of the box'. I have no idea why that is, but at a guess I'd say it's because games are coded on i7s and not on Phenom IIs. That is why it takes the ATI driver team a few months to get their cards working perfectly with other CPUs.

As for hanging my hat on that revision... the reason it has slipped again is clearly that we are on to a new series of cards that need more driver work. The i7 is once again working better out of the box, and the games need optimising for the Phenom II again.
 

SuperFly03

Distinguished
Dec 2, 2004
2,514
0
20,790


Meaning, again, the results are GPU bound, or rather something else is at play. The C2Q and i7 should not be getting the same frame rates as each other... nor should the Phenom X4, unless they are bound by something. It should be noted that they also disabled SMT and Turbo... both very useful features of the i7.



So you are blaming the Phenom's failings on ATI's driver coders?
 

SuperFly03

Distinguished
Dec 2, 2004
2,514
0
20,790


That is an utterly absurd statement.

Because they were coded on i7's? Grasping at straws are we?
 

jennyh

Splendid
It's pretty clear that something in the old drivers was holding back the Phenom II and even the C2Q, and that clearly wasn't holding back the i7.

How hard is that to figure out? All that matters is that, after the drivers were fixed, the Phenom II became pretty much equal to the i7 *even with Crossfire*.
 

jennyh

Splendid


Yes, of course the games are GPU bound. The point is, before the drivers were fixed only the i7 was bound by the GPU; the Phenom and C2Q were bound by something in the drivers not working properly.

All CPUs of that standard *should* bottleneck around the same point, which is the GPU bottleneck. There is never 60FPS between the i7 and Phenom II in *any* game, not for any hardware reason. Thinking that is even possible is absolutely ludicrous; it is clearly a driver issue.

It should also be noted that the i7 was overclocked to 3.5GHz while the Phenom II, a 940 BE, was at 3GHz, btw.
 

SuperFly03

Distinguished
Dec 2, 2004
2,514
0
20,790




Pretty sure I already stated that.

You can't hang your hat on driver fixes that will come in the future. You have to make your decision based on the information available at the time. Now, you also have to realize there was little overall benefit outside of Far Cry 2, which indicates, yes, a game-specific issue. However, the other games did not change.
 
Read my old posts on this, where I credited Intel for the i7 already handling this much better, but the biggest improvements, percentage-wise, were on the Phenom II compared to other CPUs.
At the time, some wondered whether AMD had set up some kind of proprietary scenario.