Why is Core i7 920 better than Phenom 2 955



There will always be some people who will pay for the very best available, even if that best isn't good value (performance to cost ratio). They like the status symbol of having the best.

The rest of us are more sensible and consider the best bang for our buck. That's why I bought a Q6600 back when it represented excellent value. At the moment, though, AMD is pretty much back in the game. I've seen the benchmarks where the Phenom II has closed the gap. Then consider that an i7 is more expensive and must use DDR3. The Phenom II can run on DDR3-based motherboards, but those on a smaller budget can still buy compatible DDR2 motherboards. The Phenom II represents excellent value for money, even over the older 45nm Core 2 Quad line.

If I were buying a new PC now it would be AMD for sure. Intel offered something big with the Core 2 Duo, but in comparison the i7 doesn't seem to have made the same impact. I wouldn't go so far as to say the i7 is a failure, because it's not, but AMD is back to where they used to be: offering great value for the performance on offer. Go AMD!
 
The i7 has more L3 cache,
HT technology (which lets it present itself as an "8-core" CPU to the OS),
and a better memory controller (triple-channel at up to 2000MHz vs. dual-channel at up to 1600MHz... right? quick math below).
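Rough peak-bandwidth numbers for those two setups, as a quick back-of-the-envelope C sketch. It treats the quoted DDR "MHz" as MT/s and assumes 64-bit channels at 100% efficiency, so real-world throughput will be well below these peaks:

```c
/* Back-of-the-envelope peak memory bandwidth for the two setups quoted above.
 * Assumes 64-bit (8-byte) channels and perfect efficiency - real numbers are lower. */
#include <stdio.h>

static double peak_gb_s(int channels, int mt_per_s)
{
    return channels * (double)mt_per_s * 8.0 / 1000.0;   /* 8 bytes per transfer */
}

int main(void)
{
    printf("i7, triple-channel DDR3-2000: %.1f GB/s\n", peak_gb_s(3, 2000)); /* 48.0 */
    printf("PhII, dual-channel DDR3-1600: %.1f GB/s\n", peak_gb_s(2, 1600)); /* 25.6 */
    return 0;
}
```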

It would make an interesting article comparing AMD/Intel memory controllers (hint hint, Tom's & Anand).

""Channel"" doesn't seem to mean that much to AMD procs - for each 10% increase in the IMC/NB speed bandwidth increases 3-4% (dependent upon CPU clock speed) and latency decreases 3-4%. Crank that sucker up to 26-2700MHz - LOL

I think the 'base' DDR3 speed is 1333MHz (JEDEC spec) and that is what Intel/AMD recognize. There are some AM3 motherboards that have DDR3 2000MHz on their QVL lists so JEDEC is a little behind the curve (with Intel, too).

But something like 4x2GB DDR2-800 at 4-4-4-12 (IMC/NB at 2700MHz? 😀 ) versus 3x2GB DDR3-1600 at 8-8-8-24 might make an interesting comparison (and cost about the same).
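One reason that comparison would be interesting: the effective CAS latency of those two kits works out essentially identical, even though the DDR3 timings look "worse" on paper. A quick C sketch of the math (memory clock is half the DDR rate; this only looks at CAS and ignores the rest of the timing chain):

```c
/* Effective CAS latency in ns for the two kits above.
 * Memory clock = half the DDR data rate; only CAS is considered here. */
#include <stdio.h>

static double cas_ns(int ddr_rate_mt_s, int cas_cycles)
{
    double clock_mhz = ddr_rate_mt_s / 2.0;    /* DDR2-800 -> 400MHz, DDR3-1600 -> 800MHz */
    return cas_cycles * 1000.0 / clock_mhz;    /* cycles * ns per cycle */
}

int main(void)
{
    printf("DDR2-800  CL4: %.1f ns\n", cas_ns(800, 4));    /* 10.0 ns */
    printf("DDR3-1600 CL8: %.1f ns\n", cas_ns(1600, 8));   /* 10.0 ns */
    return 0;
}
```

So on paper the DDR3 kit gives up nothing on first-word latency while offering double the per-channel bandwidth.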

Regardless - with the intro of i7, 8-16GB mobos and the success of the AMD memory controller, the 'desktop' is not starved for memory or bandwidth.

 
OK, coming from a gamer's POV, keep this in mind: there's more of us gaming enthusiasts than any other group, possibly all of them combined.
From a gamer's POV, the i7 is simply not worth the cost:
"Summary
When we first set out to tackle this article, we weren't quite sure what to expect. We've seen both the Phenom II and Core i7 up close and personal, and have a healthy respect for each platform. However, we are performance junkies around here and pride ourselves as overclocking enthusiasts that try to extract maximum performance from our systems, regardless of platform. And we have been suitably impressed by the overclocking prowess of the Core i7 lineup, as have many other enthusiasts the world over. But we also know that very few people are also able to take advantage of much of those increases in real world tasks that can truly harness that power to a great degree.

We've seen the pendulum swing from Intel to AMD, and back and forth, and as gamers, we had assumed that the strengths of the Core i7 platform would prove too much for a Phenom II to overcome in terms of gaming performance. So today's investigation set about not to prove a certain viewpoint, but to try to illuminate the facts of the unknown differences, not the least of which were our own experiences. And as we must admit, we were rather surprised at the results.

There are a few conclusions we can now definitively draw after today's exploration, plain and simple. It is fact that a Core i7 gaming rig will give you better overall performance in terms of absolute numbers and framerates; the difference isn't very much, but it does exist. It is also fact that such a system will cost considerably more money as well for what is essentially almost the same performance. When the results are then applied against those cost differences, it is also fact that the Core i7 system then becomes a very expensive option, costing hundreds of dollars more for little to no performance increase in most games.

Where things get really interesting is when you equalize the costs between the platforms and look at what you get for gaming performance in return. For a current difference of $215, you can purchase a second Radeon 4890 to go with a Crossfire setup in your Phenom II system and it will utterly crush a Core i7 gaming setup that will have only one graphics card. And we must admit, that provides a very compelling reason to consider an AMD gaming system, especially when we consider that a Phenom II X4 can overclock very well and also easily handle just about any regular use application. Unless you're doing a ton of video encoding or workstation renderings and animations, the cost for performance difference is very difficult to justify, particularly for a gaming setup.

As we said at the outset, passion can be a good thing when harnessed. And in this instance, using the gaming value presented by an AMD Phenom II setup can effectively let gamers harness far more additional graphics horsepower for their hard-earned money. "
http://www.pureoverclock.com/review.php?id=794&page=12
Sorry folks, if you've bought an i7 for gaming, you've spent money you needn't have.
 


Actually it's speed and cost together. And no, as always, Intel did not copy AMD per se. AMD just bought the DEC Alpha design and used it. Intel has had many of the same technologies; AMD was just the first to implement them in full-scale consumer production.

QPI does not equal HTT. They work very, very differently; otherwise QPI would be the same speed as HTT 3.0, yet it's faster.



Kinda. Or many of the other various 64-bit CPUs that came and went back in the day.



Not that K8 did it right compared to Itanium. It's just that AMD went the easy route and got us stuck with x86 for a longer period. They basically grafted 64-bit onto x86, whereas Intel was trying to completely obliterate x86 by this time, or even earlier, with a superior 64-bit arch.

But the way people and the market are, change is near impossible.
 
I meant as far as popularity and ability.
People say AMD may have come in native first, but Intel got it right, in the same sense, regarding the desktop; so the same could be said of K8 vs. Itanium, as the 64-bit that's surpassing Itanium now is much like K8's, which was also popular, at the top, and worked on both server and desktop.
I think the "Itanium was meant to obliterate x86" line is a myth, made up as an excuse for its failure, or else Intel doesn't know something as simple as what drives a market. Can't have both; it's one or the other, and frankly, I don't think Intel's that stupid.
 


K8 was only popular since it was just x86 with 64-bit grafted on, but its 64-bit performance compared to Itanium was not as great.

And actually, Intel and HP both wanted to move away from x86. It's not really a failure since it has its own market niche that it controls.

But the only way we will ever see a true change is IF x86 hits a limit. Which it probably won't; the limits are more on the hardware side.
 
It controlled that market with promises never kept, and that's why it's now on the way out, with Nehalem and AMD's poorer 64-bit being used instead.
I know it's not exactly apples to apples, but close enough.
The market followed Intel, which wiped out several competitors. There is fear that LRB could be the same: lots of promise, only needs the SW, etc., etc.
I'm not disagreeing with you on the x86 issue other than who's to blame, as after all, maybe Intel was that stupid and thought they could change the world, but unfortunately, I'm not. I don't buy it, nor do I buy AMD's "influence" as responsible for keeping Intel's x86 license alive.
 


But most don't like change.

And even if that happens, it will cause major backlash.

"My computer can't run xyz, can I return it etc."
 


Yep, I know. But I think that x86 has been around long enough. If Intel can find a way to get IA64 to emulate x86 at the same performance levels while providing a superior 64-bit experience, then we can move away from x86 toward better processing power.

BUT that probably will never happen.
 
I was thinking of it like x64 systems, you know, tons more RAM.

But if x86 is the same as 32-bit, then what's up with the different names?

And what's better about x64 than x86? Don't we already have mainstream x64 procs? AMD64 on every other processor now: i7, Phenom II, C2D...

Sorry for the noob questions, but I actually read through all 17 pages and now I have a few questions.
 


IIRC, x86 started out life as 16-bit, with segmented memory addressing :). Sorta like CP/M-86 of the time. Then Intel grafted on 32-bit processing with the 80386.

However, according to Wiki, the term "x86" only became popular after the 80386 was introduced, and hence x86 usually refers to 32-bit mode, not the 16-bit "real" or backwards-compatible mode:

x86 is the most commercially successful instruction set architecture[1] in the history of personal computing. It derived from the model numbers, ending in "86", of the first few processor generations backward compatible with the original Intel 8086. Since then, many additions and extensions have been added to the x86 instruction set, almost consistently with full backwards compatibility.[2] The architecture has been implemented in processors from Intel, Cyrix, AMD, VIA, and many others.

As the x86 term became common after the introduction of the 80386, it usually implies a binary compatibility with the 32-bit instruction set of the 80386. This may sometimes be emphasized as x86-32 to distinguish it either from the original 16-bit x86-16 or from the newer 64-bit x86-64 (also called x64).[3] Although most x86 processors used in new personal computers and servers have 64-bit capabilities, to avoid compatibility problems with older computers or systems, the term x86-64 is often used to denote 64-bit software, with the term x86 implying only 32-bit.[4][5]

I wonder if somebody could boot DOS 1.21 on a 6-core Gulftown system, in 16-bit "real" mode :pt1cable: ...
 
OK, thanks for answering my question.

But what about the benefits of x64 over x32/x86 in PROCESSORS? I know about the RAM, but what is the benefit in processors, and don't we already have x64 procs?
such as:
AMD64
Athlon II
Pentium dual
C2D/C2Q
i7/ P II
 


x64 is mainly 64-bit addressing, which means instead of 4GB of virtually addressable memory total (usually reduced to a bit over 3GB since the video card memory is also mapped into that total), you now have up to 256 TB accessible (current CPUs implement 48 of the 64 address bits). If that becomes insufficient 😀 then it can be raised toward the exabyte level (giga < tera < peta < exa). Beyond that I guess we'll need x128 or maybe x256 :).

There are other features as well, such as 64-bit integer operations, extra general-purpose registers (16 instead of 8), and probably most important, the additional SSE instruction sets, which execute certain tasks incredibly faster than plain integer- or FP-coded algorithms.
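To put numbers on the addressing part, here's a minimal C sketch; the 256 TB figure assumes the 48 address bits that current x86-64 parts actually implement rather than the full 64:

```c
/* Address-space sizes behind the 32-bit vs. 64-bit discussion above.
 * 48 bits is what current x86-64 chips implement, not the full 64. */
#include <stdio.h>

int main(void)
{
    unsigned long long b32 = 1ULL << 32;   /* 4 GiB   */
    unsigned long long b48 = 1ULL << 48;   /* 256 TiB */
    printf("pointer size on this build: %zu bytes\n", sizeof(void *));
    printf("32-bit virtual space: %llu GiB\n", b32 >> 30);   /* 4   */
    printf("48-bit virtual space: %llu TiB\n", b48 >> 40);   /* 256 */
    /* the full 2^64 space would be 16 EiB */
    return 0;
}
```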
 
When you talk about addressing, are you talking about the processor's ability to address that, or the OS's? As I said, I know about x64 tech in RAM/OS, but what does that have to do with procs?

On the second part, are those the better things in x64 processors?
 


Both. You have to have 64-bit addressing registers (CPU hardware) as well as 64-bit instructions (OS) to accomplish this. Well, actually you could fudge it with the registers (i.e., the segmented addressing mode that legacy 16-bit x86 CPUs used to address a megabyte of memory, as 2^16 = 64K and 2^20 = 1M). But the micro-ops in the CPU had to support this, of course, and that is hardware.

Of course you can operate 32-bit OSes on x64 CPUs since the hardware is backwards-compatible. But you couldn't operate a 64-bit OS on a 32-bit CPU such as a 386 or 486, at least without some emulation (i.e., translate to 32-bit ops).

BTW, segmented addressing & emulation are not too satisfactory since they tend to hobble performance. IIRC, when the 16-bit 8086 was out, Motorola had the M68000, which had 32-bit registers internally but a 24-bit address bus, so it could physically address 16M of memory while its virtual address space was 4G. It was significantly better than the 8086, which is why Apple chose it to power the original Mac. Intel then came out with the 80286, which could address 16M physically and 1G virtually. Of course, in the early 80's memory chips were commonly 16K or 64K and quite expensive - a megabyte cost several hundred dollars, and a dollar back then was probably worth double what it is now. Now you can buy 12GB of DDR3 for under $100 - that's probably around 100,000 times cheaper per byte, not to mention an order of magnitude better in perf. So if you're an AMD perf-per-dollar type, that could be a million times better :).
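For anyone curious what that 16-bit "fudge" looked like in practice, here's a tiny C sketch of real-mode segment:offset addressing - the segment register is shifted left by 4 bits and added to the 16-bit offset, which is how a CPU with 16-bit registers reached roughly 1MB (2^20):

```c
/* Real-mode x86 segmented addressing: physical = segment * 16 + offset.
 * Two 16-bit registers combine to cover the 2^20 (1MB) range mentioned above. */
#include <stdio.h>

static unsigned long phys_addr(unsigned short segment, unsigned short offset)
{
    return ((unsigned long)segment << 4) + offset;
}

int main(void)
{
    /* 0xB800:0x0000 is the classic color text-mode video buffer */
    printf("B800:0000 -> 0x%05lX\n", phys_addr(0xB800, 0x0000)); /* 0xB8000  */
    printf("FFFF:FFFF -> 0x%05lX\n", phys_addr(0xFFFF, 0xFFFF)); /* 0x10FFEF */
    return 0;
}
```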
 


Hmm, this I did not know. So far I have W7 installed on an old laptop but never thought to check the mem usage since I only have 2 GB anyway. Lappy has a 7950 GTX with either 250 or 500MB video memory IIRC, but in either case XP 32 or W7 would report all 2GB of system memory. So, I'd have to install it on my old QX6700 DT with 4GB, that is if I weren't too lazy to check :). However since I need to reinstall XP, maybe I'll spend a couple extra hours & check. One thing I like about W7 is the fact that MS lost the silly floppy disc driver install. One thing I don't like about it is that I have to reinstall my MS 5000 bluetooth mouse every 3-4 days pretty regularly, as W7 RC loses the connection somehow.
 
I read some news articles when this feature of Win 7 was announced, and it appears to only be a feature of the Desktop Window Manager. That is, window surfaces no longer reside in both system and video memory. I don't think they ever mentioned anything about other situations like games.
 
Yeah, the redundancy was just too aggressive as gfx memory went up; they had to make the change.
You know M$, it's like: what should I set my VM at? What did they recommend, 2.5x?
They want as much leeway as possible, and that's why they're considered such hogs.
Problem is, much of this extra usage today over XP is DRM crap.
That's where people really need to point fingers. That's the main reason why DX10 can't be compatible with XP: the kernel approach has to fit in with DRM, separate, ya know, no piracy.
Just like the must-have HW connection from front end to back, just to allow for DRM.
GRRRRRRRRRRRRRRRRRRR
 
Instead of both system memory and the card in use claiming it, now it's just the card for anything 2D. Not sure about games, as that is driver-driven and works through there, but ultimately it isn't taken until it's used; same with games.