i7 vs. Phenom II X4


andygoaly
Feb 16, 2010
I am building my own computer for gaming, and I am torn between the i7 920 and the Phenom II X4. I am not looking to do any serious overclocking; I am just trying to find the faster of the two. Any advice is appreciated :)
 



Yes: fewer fights might happen. Well, except for the two minor points where you are completely wrong in your post:

1. The i5 is not better than the PhII out of the box. (In an earlier post you resorted to the tired old "clock-for-clock" argument when you realized that the i5 doesn't have a chance of winning an out-of-the-box argument.)

2. There is no definitive "winner" or "loser" in benchmarks between the i5 and PhII at overclocked speeds; the scaling for the two is not the same as frequency increases. In addition, the PhII gains more GPU power as core frequency is increased. BTW: thanks for your completely baseless arguments in earlier posts. They made me do some number crunching in the Futuremark ORB, looking at the 5870 GPU results.

In 3DMark Vantage, the GPU scores for the PhII increase slightly faster than the i5's GPU scores. Don't believe me? Go do your own number crunching. This additional GPU power can easily explain some of the game results in various reviews done with overclocked PhII and i5/i7 chips. It also rather ruins the entire "get an Intel if you want more graphics power" myth. You can then look at the 5970 results and explore the validity of the "Intel is better for more than one video card" myth.

But other than those two minuscule things your post is good. (At least the parts that don't depend on those two things being true.)

EDIT: When I mention number crunching in the Futuremark ORB, I DO mean using the statistical averages. Some people seem to have a problem with anomalous results, focusing on them instead of on the norm of the data. But I'm sure this little addendum to my post won't help those kinds of people anyway.
 
It would be sad and unfortunate if the OP started this thread just to be a troll.

Otherwise, I would say that for purely gaming purposes, a Phenom II is the better CPU in terms of monetary value, since you'll spend less on the CPU and even less on the motherboard. Phenom II mobos cost $80-$170, while i7 mobos cost $150-$300+.
The i7 is a beast of a CPU, but it only shines when used in heavy applications, not gaming.
 
Out of the box I put the Phenom II as less than or equal to the i5. I don't see anything false in that.

I feel 3DMark is iffy because you can't see whether CrossFire is used or what clocks they run on the cards. If you can show me where an evenly clocked Phenom II beats the i5 in a non-GPU-bottlenecked situation, I'll change the post.

I just don't see how you can think this when even Yorkfield outperforms the Phenom II clock for clock.
 


The Phenom II has to be downclocked and the game benched at low resolution in order for it to count now? 😀
 
No, running at low resolutions makes the CPU work as hard as possible relative to the graphics card (physics and other CPU workloads do not decrease at low resolution), showing the capabilities of the CPU. Running at high resolutions with tons of anti-aliasing is what makes sense when testing graphics cards, as it puts the most workload on the graphics. In both cases you are minimizing the chance of a bottleneck from other components, which ensures that any performance differences seen are due to the part in question.
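
To make that concrete, here's a toy frame-time model. All the numbers are invented, and real pipelines overlap CPU and GPU work, but the bottleneck intuition holds:

```python
# Toy model: frame rate is paced by the slower of the CPU's and GPU's
# per-frame work. Illustrative numbers only, not measured data.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """FPS when the slower stage paces the pipeline."""
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_fast = 5.0   # hypothetical CPU needing 5 ms of game logic per frame
cpu_slow = 10.0  # hypothetical CPU needing 10 ms per frame

for res, gpu_ms in [("640x480", 2.0), ("1920x1200 + 4xAA", 16.0)]:
    print(f"{res}: fast CPU {fps(cpu_fast, gpu_ms):.0f} FPS, "
          f"slow CPU {fps(cpu_slow, gpu_ms):.0f} FPS")

# 640x480:          200 vs 100 FPS -> the CPUs separate cleanly
# 1920x1200 + 4xAA: ~62 vs ~62 FPS -> the GPU hides the gap (it isn't gone)
```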
 


Comparing how to bench a CPU vs. how to bench a GPU is not even remotely the same thing.

CPUs in a game at high res are known to make little to no difference. That's why a Q9650 @ 4GHz has no problem keeping up with the PhII and i5/i7 in most cases. At high res, though, the GPU does more work, so that's how you see which GPU is better.

Then again, if it doesn't say AMD/ATI it's all false, right? That's what I saw from your posts on the STO review...

CPUs are very different. The only way in a game to show which CPU has an advantage is to lower the resolution so the CPU actually does the work. Does that mean this is the only way to tell? No. Even high res can tell you some things, like bottlenecks: in some highly threaded games a C2Q can get FSB-limited, though rarely. But one CPU beating another by 2-5FPS at 1920x1080 by no means makes it better.

BTW, in your hand-picked benchmarks you forgot to mention that the i5 750 is clocked at 2.66GHz and the Phenom II at 3.4GHz, so it's not a true comparison.
 
Jimmy: actually, I would say that a 2.66 (2.8 with Turbo) i5 vs. a 3.4 PhII is fair. If a user buys one or the other (for roughly the same price) and doesn't overclock, that is representative of the speed that user will see. Look at price:performance, performance:watt, or similar, not performance:clock.
 
I personally like a mix/match of all of them. Clock for clock is important overall. If a 2.8GHz CPU performs as well as a 3.4GHz one, then it makes a difference. Hell, that's what AMD used for their CPU naming before Phenom: Athlon 64 3200+, meaning it performed like a 3.2GHz CPU at a lower clock speed.

If, clock for clock, 2.8GHz = 3.4GHz, it means the performance per watt will be better as well.

But still, overall, each metric is worth looking at.
 
People see 2.8GHz on the i5 and 3.4GHz on the Phenom II and assume that means the i5 must be a lot better at 4GHz, but it's just not like that, because scaling is nowhere near linear.

The i5 tails off way before it reaches 4GHz.
 

Not necessarily. Some CPU architectures use less power at a given clock speed than others. Clock for clock is useless as a buying metric, aside from being an interesting technical spec.

If you have 2 CPUs, both $150, both quad cores, both 90W (actual usage, full load), and one happens to be at 3.2GHz and the other is at 2.2GHz, using different architectures so that the 3.2GHz model is 5% faster, do the clockspeeds actually matter that much? Yes, the 2.2GHz model is faster per clock, but (ignoring overclocking for now) they're the same price, use the same power, and the 3.2GHz model is faster overall. The logical choice for anyone looking to buy a CPU and not overclock is the 3.2GHz model, even though the 2.2GHz model is faster per clock.
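
A quick sketch of that arithmetic, using the hypothetical $150 / 90W chips from above (the scores are made up to match the 5% figure in the post):

```python
# Per-clock speed and overall speed are different metrics.
# Both chips here are hypothetical, matching the example above.

chips = {
    "3.2GHz model": {"ghz": 3.2, "score": 105},  # 5% faster overall
    "2.2GHz model": {"ghz": 2.2, "score": 100},
}

for name, c in chips.items():
    print(f"{name}: overall score {c['score']}, "
          f"per-clock {c['score'] / c['ghz']:.1f} points/GHz")

# 3.2GHz model: overall 105, per-clock 32.8
# 2.2GHz model: overall 100, per-clock 45.5
# The 2.2GHz chip wins clock-for-clock yet still delivers less total
# performance at stock -- which is what a non-overclocker actually gets.
```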

Now, if the 2.2GHz model overclocks much better, that would be a logical reason to get it (and a 2.66/2.8GHz i5 does seem to overclock better than a 3.4GHz PhII). That's an entirely separate issue though.
 

Actual CPU performance does scale basically linearly with clock speed. Any application that tails off is becoming bottlenecked by something else as the CPU performance increases.
 


=P He was just saying how a Phenom II would beat an i5 when the clocks are the same, so I asked for proof is all. :)

I've seen countless reviews use low resolutions and the reviewer explains why they use the low resolutions. Are all of these reviewers wrong? 😵

Has Tom's run countless pointless benchmarks when they use low resolutions? I know nobody games at these resolutions... but there is a reason reviewers use them.
 


I think jenny might be confusing bad scaling with GPU bottlenecking? I've seen synthetics of 4.0GHz i5s tearing it up.

~800MHz of extra comparable OC headroom is a hard thing to look past.
 
Synthetics and running a game at a lower resolution than intended are pointless except to try to prove whose CPU runs better on that particular benchmark. It has absolutely nothing to do with real-world usage. If I buy something to play a game at 1920x1200, I couldn't give a rat's ass how it runs at 640x480.
 


Running an i5 750 @ 4GHz is more like 1.34GHz of OC headroom, or 1.2GHz with Turbo included.

That's a high OC and is normally obtainable on air.

Most Phenom IIs hit 3.8GHz on air, some 4GHz, so why anyone would buy the highest-end Phenom II is beyond me when they all reach the same clock speed.
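
For what it's worth, the gap is even starker in relative terms. A back-of-the-envelope calc, assuming both chips top out around 4.0GHz on air (typical forum reports, not a guarantee for any individual chip):

```python
# Rough overclocking-headroom arithmetic from the posts above.
# The 4.0 GHz ceiling is an assumption, not a spec.

ceiling = 4.0
chips = {
    "i5 750 (2.66GHz base)":  2.66,
    "i5 750 (2.8GHz turbo)":  2.80,
    "Phenom II 965 (3.4GHz)": 3.40,
}

for name, base in chips.items():
    print(f"{name}: +{(ceiling - base) * 1000:.0f} MHz "
          f"({(ceiling - base) / base:.0%})")

# i5 750: +1340 MHz (50%) from base, +1200 MHz (43%) from turbo
# Phenom II 965: +600 MHz (18%)
# The relative gain is what shows up in benchmarks, which is why the
# i5's headroom is hard to ignore.
```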



Well, you might want to know what the CPU can do at that res. Depending on that performance, a CPU can show you how long it will last before becoming a bottleneck. In a game, the low end of the FPS spectrum is normally CPU-dependent, but as res, textures, and stuff like AA go up, it becomes less dependent.

If a CPU in a game gets 200FPS and another gets 100FPS, which CPU will likely become a bottleneck first for a better and faster GPU or a more demanding game? In most cases, the one that gets 100FPS. That means a newer GPU won't perform as well with the CPU throwing 100FPS as it will with the one throwing 200FPS.

But in the end that's only useful if you plan to keep the system for a longer period than the normal CPU/GPU life cycles (CPUs are about a year, GPUs about 6 months).
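
Plugging that into the same kind of toy bottleneck model as before (illustrative numbers only):

```python
# Which CPU caps a faster future GPU first? Numbers are made up.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """FPS when the slower stage paces the pipeline."""
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_200 = 5.0   # CPU good for 200 FPS on its own
cpu_100 = 10.0  # CPU good for 100 FPS on its own

for gpu, gpu_ms in [("today's GPU", 12.0), ("GPU 3x as fast", 4.0)]:
    print(f"{gpu}: {fps(cpu_200, gpu_ms):.0f} vs {fps(cpu_100, gpu_ms):.0f} FPS")

# today's GPU:    83 vs 83 FPS  -- both GPU-bound, the CPUs look identical
# GPU 3x as fast: 200 vs 100    -- the slower CPU becomes the cap first
```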
 



Yeah I know what ya mean.

And what I meant was compared to Phenom II. :)

Phenom II 965 = 400-600MHz
i5 = 1200-1400MHz

(1200-1400MHz) - (400-600MHz) ≈ 800MHz

An ~800MHz 'difference' is what I meant. 😛



The reason they show those is to see the CPU's actual performance when it is not bottlenecked by another component in the computer. (Low resolution is used so the GPU bottleneck is taken away; this gives us an idea of how the CPU will perform in the future.) The CPU is not a very important factor in gaming, but if your CPU is not strong enough it can hamper your performance dramatically. Most quad cores out there are actually overkill for today's GPUs. That is why in real-world, GPU-limited gaming we can't even tell the difference between the i7 920 and the i7 975 Extreme. Which one do you think will work better in the future? Which one do you think wins low-resolution and synthetic benchmarks? Yes, comparing CPUs of the same platform and CPUs of different platforms can give somewhat different results, but you catch my drift. :)

Now I'm not saying real-world results aren't important... they are. It's just nice to know how your CPU matches up against the others instead of always seeing a 1-5 FPS difference.

Some of us give a 'rat's ass' about future-proofing and don't like to buy a new CPU every time we upgrade. :)