Overclocking: Core i7 Vs. Phenom II

Page 8
Status
Not open for further replies.
To the THG team...
A test proposition/request: on the same AM2+ platform, Athlon X2 vs. Phenom vs. Phenom II.
Upgrader's heaven or not?
 
[citation][nom]bernardv[/nom]This Phenom II overclock is very, very far from optimal. Here are some points: 1. A 790FX board should be used; FX is the top-end chipset. 2. Ph2 doesn't support ACC; check the forums where skilled people got this CPU beyond 4GHz on air - they knew what they were doing. 3. If you're doing benchmarks you should try to get an optimal RAM speed as well. Your RAM was running 30% below its stock 1066!!! You must raise it to do better.[/citation]
I agree on the memory issue. I'm running the cheapest DDR2-800 memory I could find on a Phenom 9550, and Sandra scores 9GB/s (8.91 to be exact).
 
heh, on another note, not all Intel Core i7s (at least the first batches/revisions) seem to be overclocking so well - an interesting read:

http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3502

"Manipulating voltages within the OS using motherboard specific software tools can circumvent this condition to an extent, but the voltage has to be ramped up in small steps. Either way, 4GHz stable on this particular processor is way more hassle than it could ever be worth in a 24/7 system so I am stuck at 3.8GHz. Ok, this seems a bit demanding of me, a free 1.2GHz overclock from stock and I’m nowhere near happy! In fact, at this point I was pretty much convinced that both of the motherboards I was testing were duff, not the CPU, especially when I looked over at some of Gary’s early results on the same boards, as well as results of forum members.

I conferred back with Gary on his results, he’s got three retail 920s in his repertoire (with a fourth on its way) and his results are erratic in this department too. One of his processors needs high levels of Vcore to make 4GHz possible, well in excess of 1.50V, and refuses to POST with Bclk ratios set higher than 200, regardless of VTT/Core voltages. The other two (3837A) are better than mine for IMC VTT scaling and can also handle 6GB triple channel memory at 2000+MHz with a little persuasion. Something I found impossible on my 3838A processor with 3GB of memory let alone 6GB. Also, both of his processors allow for 3.8GHz operation at stock or below stock core voltages with VTT near 1.15V. Both processors allow clock speeds to reach about 4.3GHz on 1.45V of VCore, but VTT required is near 1.425V on air cooling with a 2:8 memory ratio, change the memory ratio to 2:10 for DDR3-2000 and VTT requirements hit 1.50V, which also happens to be the maximum amount his processors will allow before throwing up a C1 code on POST."
 
o(o.0)o - I'm actually really surprised... is it me, or did AMD finally do something right? (maybe...) On air I'm at 4 GHz on my 920, and this is NORMAL (and Primed overnight at 1.35 Vcore) for just about anyone NOT on a stock cooler... the "V8" and Noctua are both *cheaper* than the Thermalright Extreme 120/TRUE heatsinks... I opted for a Thermalright TRUE Black (lapped CPU & heatsink) with 1600 MHz DDR3 RAM... the RAM for 6 gigs was barely over 200 bucks, and would have been 130ish, if I remember correctly, for 3 gigs - that's Corsair RAM, also overclocked a bit.

I was die-hard AMD until the XP3200+, which was a failure of a processor (wow, look at me on water cooling and a 400 MHz barely-stable overclock... also YEARS ago...), so seeing this after years of disappointment is nice... although, now that I've just built a new box, a bit late. SO... the "upgrade" price for me would have still been HUGE, seeing how my old DDR wasn't that great, the mobo wasn't AMD, and I've been going with nForce chipsets and Nvidia video cards for a while now... that "200-something" dollar difference is negligible when spending 2k on everything (case, mobo, CPU, RAM, power supply, 3x 640-gig hard drives, GTX 280, DVD burner, and HSF) [replaced my old computer to take it to the office with me... hehe, shhh, I swear I'm not gonna game at work -.- ]

So I've at least quadrupled my FPS in CS: Source, and in Warhammer Online I went from 10-20 fps with low-ball settings to a solid, constant 100 with top-end settings.

So, all in all, I'm extremely happy with my i7.

Once you figure out that FSB x multiplier is non-existent, it's pretty easy to overclock.
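For anyone used to FSB-era overclocking, the i7 arithmetic is base clock (Bclk) times CPU multiplier instead; a minimal sketch, with illustrative stock and overclocked values:

```python
# Core i7 clocking: core frequency = base clock (Bclk) x CPU multiplier.
# The figures below are illustrative, not a tuning guide.
def core_clock_mhz(bclk_mhz: float, multiplier: int) -> float:
    """Resulting core frequency in MHz."""
    return bclk_mhz * multiplier

stock = core_clock_mhz(133, 20)  # i7 920 stock: 133 x 20 = 2660 MHz
oc = core_clock_mhz(200, 20)     # a common overclock: 200 x 20 = 4000 MHz
```

Raising Bclk also raises the uncore and memory clocks, which is why memory ratios come into play as well.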

IMHO this still isn't *amazing* news for AMD... the i7 920 is the bottom of the new tier for Intel; the Phenom is the top for AMD. Albeit the price is still more for Intel... AMD is shaking in its pants at the 965, which, yes, can hit 5GHz, and yes, is "extreme"ly expensive.

Quickly enough, AMD will move to DDR3, the motherboards will get expensive while the RAM costs the same as Intel's, and Intel by that time will have new chips out, and these chips will go down in price - once again balancing things to where they were when I last bought my Core 2 Duo. For what I paid for that to build a whole system minus hard drives... it would have been completely noobish to build an AMD system.

I wish AMD would go back to being better than Intel flat out, but they're struggling, and I'm never going to pay them for failures again.
 
It would have been nice if they had tested the Phenom II with 8GB of RAM, or at least 6GB. DDR2 RAM is dirt cheap, and you can't get an accurate comparison with 4GB of RAM on the AMD. That might be the reason for the Phenom averaging 70fps in COD at 1920x1200. My system averages 40-45fps, and that's with an 8800GT and the old Phenom at 2.2GHz, and average CPU usage is 50%. It doesn't seem right that a quad at 3.6GHz would bottleneck at 70fps.
 
There have been lots of comments here complaining about the
supposedly lower than expected oc on the Phenom II. However, even if
the article had tested with the Ph2 running at 3.8GHz, it would not
have made any significant difference in the majority of tests (3.8
over 3.64 isn't even 5% better). The i7's overall performance margin
is so huge for many of the tests, especially for media encoding and
rendering, that the Ph2 would have to be at a much higher clock to
come close to the i7 scores (5GHz+). This clearly isn't the case for
the gaming results, but then the bottleneck for most games at high
levels of resolution/detail is not the CPU anyway. However, not
everyone uses systems just for gaming...
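The less-than-5% figure above is easy to verify; a quick check:

```python
# Relative gain of a 3.8 GHz clock over the article's 3.64 GHz overclock.
gain_pct = (3.8 - 3.64) / 3.64 * 100
# ~4.4%, i.e. under 5% as claimed - too small to change the rankings
```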

I'll be getting a new setup soon for video encoding (I have hundreds
of VHS recorded documentaries to digitise), so an i7 setup makes much
more sense for this, despite the initial higher system cost; as this
and other articles show, when under load the power consumption is
fairly similar, so an i7 setup (since it will finish conversions much
quicker) will use a lot less total power per conversion (ie.
performance per watt), and that means lower energy bills which will
more than make up for the difference in purchase price. So, for _me_,
the i7 is the best choice, unless it turns out that I could
get two Ph2 systems for the same price as one i7 system, but I won't
know that until the time comes (May/June).

But would I buy an i7 rig for gaming? No, not unless budget
did not matter. As many have pointed out, the bottleneck in games
is almost always the gfx card, not the CPU. Thus, given the same
budget, the price difference between the two alternatives could
be used to get a much better gfx card for a Ph2 system, or indeed
two gfx cards for CF/SLI.

For those games where there's a clear performance difference
between the two CPUs, the absolute frame rates are so high that
it doesn't matter anyway; for other games, the gfx card is the
bottleneck, so again the price difference favours AMD - buy a
better gfx setup with the spare cash. I'm also amused by anyone
making judgements based on games benchmark results where AA and
AF are not used; who in their right mind would buy a modern gfx
card of this kind and run their games with AA/AF both turned off?
It's interesting from a performance analysis point of view, but
not that relevant for making decisions. Likewise, I'm sure many
readers would want to see gaming results for 1680x1050, the most
common resolution used today AFAIK, and a mode which would more
likely reveal CPU performance differences, if any.

As has been said before, if budget isn't an issue, i7 is great. If
performance is what matters, i7 wins again. For certain types of
processing (media encoding, rendering, etc.), i7 also wins despite
the higher cost because the much faster completion times mean a lot
less power used which is money saved - more than making up for the
difference in system cost when the amount of time one is expecting
one's task to take is in the range of multiple months (I have about
2000 hours of material to archive).
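The performance-per-watt argument can be made concrete with a back-of-the-envelope sketch; the load wattage and encode times below are hypothetical, chosen only to show the shape of the calculation:

```python
# Energy per job = load power x time; with similar load draw, the system
# that finishes sooner uses less total energy per conversion.
# All numbers below are hypothetical.
def energy_kwh(load_watts: float, hours: float) -> float:
    return load_watts * hours / 1000.0

i7_per_tape = energy_kwh(250, 1.0)    # hypothetical: one tape encoded in 1 h
ph2_per_tape = energy_kwh(250, 1.5)   # hypothetical: same draw, 1.5 h per tape
# over ~2000 hours of source material, the saving scales linearly
```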

For price/performance though, AMD has a lot to offer, and for gaming
AMD also has a clear advantage if one is given a fixed budget and
asked to put together two systems using all of that money - spare
cash to have better gfx in the AMD system.

But hang on though, now we've hit the same thing which has been
annoying me about almost all the reviews of the Ph2 that I've read in
recent weeks. Surely the real demand here for the Ph2 is data about
how it performs as an upgrade? Lots of people have AM2 setups into
which they could put a newer card like a GTX285 or 4870, but is the
Ph2 worth getting as an upgrade over their existing Athlon64 X2?
Articles on review sites seem to focus too much on the idea of buying
a complete system, whereas I suspect a major proportion of purchase
interest in the Ph2 is from those who want to upgrade an existing AM2
platform, or even a setup that has the older Ph1.

For example, my gaming/general system has a 3.25GHz 6000+
(U120E cooler) and 8800GT (768 core); is the Ph2 worth bothering with
for a system like this for running newer games? What difference would
it make? How would it fare if paired with a newer gfx card for newer
games in such a mbd, compared to just sticking with the 6000+ and
putting in a newer gfx card? Do newer AM2+ mbds offer any advantage?
After reading dozens of reviews and articles about the Ph2, I still
haven't found an answer to this, yet I'm sure many would like to know.

Hence, cangelini, please can you put together a piece which shows how
Ph2 performs when used as an upgrade to an existing AM2 system that
has a decent Athlon64 X2. I don't mean comparing to the older range
of dual cores, but more the 6000+ and 6400+ CPUs. Take an AM2 board
with a typical older (but not old) card like an 8800GT or 9800GTX
(whatever, doesn't really matter), show what happens for gaming
performance with the CPU replaced by a Ph2 and whether or not it's
better to replace the gfx card instead, or indeed worth doing both,
and show to what extent there is any advantage in getting a newer AM2+
board. That is, just how much of a performance loss is there when Ph2
is used in an AM2 board? That's what I really want to know, and from
responses I've seen to my saying this on other forums I'm sure there
are many other 6000+ (or similar) users who'd like to know too. The
i7 is a new platform, so this isn't an issue, but Ph2 can be used in
older systems, so why has no site yet covered the upgrade angle?

Has anyone here bought a Ph2 as an upgrade to an older AM2 setup? If
so, how did it go?

Ian.

 
Quit crying, this test was as fair as it gets. As for the cooler complaint, I had my i7 920 at 3.8 on the stock cooler and it ran just fine. I scaled it back to 3.55 and got the memory clock close to stock speeds at 1600MHz so the timings would be normal; it benchmarks double the numbers of the highest Core 2 Quad in Sandra Lite.

As for the price, Intel comes up higher because they integrated the RAM controller on the chip, with three channels of DDR3. They did this because the reason most 775 chips were not interchangeable was that the configurations changed set to set; by putting more of this on the chip they hope to alleviate the issue. Plus, the X58 systems support both SLI and CrossFire.

Back to heatsinks: in a thermal dissipation test I read, Intel's stock heatsink for the Pentium D, which is similar to the i7's, was more efficient than 70% of the aftermarket heatsinks at the time, most of those tested being heatpipe designs. Most people say it sucks just because it LOOKS and FEELS like junk, but my bet is it works as well as any other non-heatpipe cooler, if not better.

The truth is AMD no longer compares to Intel; it just can't be done in a way that makes people happy, because they're too nostalgic about AMD to let it loose in a fair fight.
 
I guess I am puzzled: although the difference between the i7 test setup's memory (6GB) and the AMD test setup's memory (4GB) was mentioned in the writeup, how could any of the tests run be valid when the i7 used DDR3 and the AMD used DDR2??

I looked up the spec difference between DDR2 and DDR3, and they say:

The main benefit of DDR3 comes from the higher bandwidth made possible by DDR3's 8 bit deep prefetch buffer, in contrast to DDR2's 4 bit prefetch buffer or DDR's 2 bit buffer.

DDR3 modules can transfer data at a rate of 800–1600 MHz using both rising and falling edges of a 400–800 MHz I/O clock.

In comparison, DDR2's current range of data transfer rates is 400–800 MHz using a 200–400 MHz I/O clock, and DDR's range is 200–400 MHz based on a 100–200 MHz I/O clock. High-performance graphics was an initial driver of such bandwidth requirements, where high bandwidth data transfer between framebuffers is required.

I respect Tom's Hardware a lot, but doesn't anyone feel that the DDR2 vs DDR3 difference unfairly slants most of the testing results in favor of the i7 - why didn't you use DDR2 for BOTH test platforms??
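The transfer-rate figures quoted above follow directly from double-data-rate signalling (one transfer on each clock edge); a small sanity check of the numbers in the quote:

```python
# DDR transfers data on both rising and falling edges of the I/O clock,
# so effective data rate (MT/s) = 2 x I/O clock (MHz).
def ddr_data_rate(io_clock_mhz: int) -> int:
    return 2 * io_clock_mhz

assert ddr_data_rate(400) == 800    # DDR2 at a 400 MHz I/O clock
assert ddr_data_rate(800) == 1600   # DDR3 at an 800 MHz I/O clock
```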
 
[citation][nom]bmullan[/nom]I guess I am puzzled that although it was mentioned in the writeup about the difference in the i7 test setup memory of 6G and the AMD test setup memory of 4G --- how could any of the tests run be valid when the i7 used DDR3 and the AMD used DDR2 ??I looked up spec difference btwn DDR2 and DDR3 and they say:The main benefit of DDR3 comes from the higher bandwidth made possible by DDR3's 8 bit deep prefetch buffer, in contrast to DDR2's 4 bit prefetch buffer or DDR's 2 bit buffer.DDR3 modules can transfer data at a rate of 800–1600 MHz using both rising and falling edges of a 400–800 MHz I/O clock. In comparison, DDR2's current range of data transfer rates is 400–800 MHz using a 200–400 MHz I/O clock, and DDR's range is 200–400 MHz based on a 100–200 MHz I/O clock. High-performance graphics was an initial driver of such bandwidth requirements, where high bandwidth data transfer between framebuffers is required.I respect Toms Hardware alot but doesn't anyone feel that the DDR2 vs DDR3 difference slants most of the testing results in favor of the i7 unfairly -- why didn't you use DDR2 for BOTH test platform ??[/citation]

OK bmullan, I take it you're a noob with hardware - the i7 has an integrated tri-channel DDR3 controller - IT CANNOT USE DDR2.

Take a look at my previous posts about DDR1 vs DDR2 vs DDR3 - you actually lose performance between them, especially if they are clocked at the same speeds.

The benefit of DDR3 is higher clock speeds (design-wise), but the downside is a longer latency penalty (in some cases easily more than 50%, aka a longer wait between cycles, etc.) - DDR3 as a spec is meaningless.

History shows that with AMD, the switch between DDR1 (400MHz) and DDR2 533MHz, and even cheap 667MHz, showed A PERFORMANCE LOSS.

Conclusion: Intel with the newer platform at present IS FASTER than AMD (at what cost - different story).
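On the latency point: the penalty is measured in clock cycles, so absolute latency in nanoseconds depends on the memory clock too. A rough sketch with common but purely illustrative CAS figures:

```python
# Absolute CAS latency: cycles divided by the memory clock, which is half
# the effective data rate. The CL figures below are typical but illustrative.
def cas_latency_ns(cas_cycles: int, data_rate_mts: float) -> float:
    memory_clock_mhz = data_rate_mts / 2
    return cas_cycles / memory_clock_mhz * 1000

ddr2 = cas_latency_ns(5, 800)     # e.g. DDR2-800 CL5  -> 12.5 ns
ddr3 = cas_latency_ns(9, 1333)    # e.g. DDR3-1333 CL9 -> ~13.5 ns
```

So a DDR3 CL number that looks 50%+ higher in cycles can be only slightly worse in actual nanoseconds once the faster clock is factored in.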
 
[citation][nom]apache_lives[/nom]ok bmullan i take it your a noob with hardware - the i7 has a integrated tri-channel DDR3 controller - IT CANNOT USE DDR2take a look at my previous posts about DDR 1 vs DDR 2 vs DDR 3 - you actually loose performance between them, especially if they are clocked at the same speeds.The benifit of DDR3 is higher clock speeds (design wise) but the down side - you get a longer latency penalty (some cases more then 50% easily aka a longer wait time between cycles etc) - DDR3 as a spec is meaningless.History shows with AMD the switch between DDR1 (400mhz) and DDR2 533mhz and even cheap 667mhz showed A PERFORMANCE LOSS.Conclusion: Intel with the newer platform at present IS FASTER then AMD (at what cost - different story)[/citation]

Thanks for the knowledge! I've not spent time looking at the details of the i7, so I was unaware of the DDR3 requirement. But my point is still that the tests might have waited until the memory components could be equalized more, as I still don't see how having one setup with memory that has half the data transfer rate of the other creates meaningful results. The i7 may still trounce, but it would be interesting to see if any of the results changed.
 
What's the point of waiting when people have to make purchasing
decisions now? One can do little else but compare what is available
from the various vendors, coming at the question from different
angles to try and satisfy as broad an audience as possible: best
price, best performance, best power consumption, and so on.

Insisting that the memory should be in some way equal makes no
sense. One might as well insist that everything else was equal
too, in which case the products would be identical anyway.
The i7 platform is faster, but at a higher cost, though with
performance and power consumption advantages in some areas that
make the extra cost worthwhile. On the other hand, the AMD
platform is cheaper and clearly is more than enough to power the
slew of current games, while any difference in available budget
would wisely be used on a better gfx setup which is the main
bottleneck for games atm.

You can't compare a real product to vapourware. The two platforms
are what they are and people want to compare them now; no
doubt the comparisons will be revised once AMD is using DDR3, and
by then i7 will likely have moved on as well (price changes, clock
increases, cheaper mbds/RAM, who knows?) so the goal posts keep
moving, just as they have always done.

Ian.

 
Note that WinRAR's multi-threaded compression is not exactly the same as single-threaded compression. If you compare the resulting archives, you will note that they are not always the same. IME, the single-threaded one compresses better.
 
Isn't it obvious that 3.8GHz outperforms 3.64GHz? Compare with both CPUs overclocked to the same clock speed.
 
shrihara,

Simple clock speed is not, has never been and never will be an
indicator of overall performance. The existence of Core2 compared
to P4 is proof of this. Performance depends on a whole range of
factors (too many to list here). I'm amazed that anyone would
still think such a thing. My own benchmark work shows these
effects all too well.

Besides, the useful reality for potential buyers is to know how
much each CPU can be oc'd. If the i7 was deliberately set to
3.64GHz, there would be an equally vocal outcry that the i7 was
being unfairly restricted.

Since this article is about overclocking, it's perfectly logical
to oc each CPU to its best potential. Whether or not this process
has been done fairly in each case is another matter, eg. same
budget spent on cooling, etc.

If clock speed was all that matters, I wouldn't be using a 900MHz
SGI Fuel as my main desktop. 😀 (feels faster than my 6000+,
outperforms an Athlon64 2.2GHz)

Ian.
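The clock-vs-performance point above reduces to simple arithmetic: throughput is instructions per cycle (IPC) times cycles per second, and IPC differs wildly between designs. The IPC values below are purely illustrative:

```python
# Throughput = IPC x clock. A wide, efficient core at a lower clock can beat
# a narrow, deep-pipeline core at a higher clock. IPC values are made up.
def instructions_per_second(ipc: float, clock_ghz: float) -> float:
    return ipc * clock_ghz * 1e9

efficient_2ghz = instructions_per_second(2.0, 2.0)    # 4.0e9 instr/s
deep_pipe_3_8ghz = instructions_per_second(0.8, 3.8)  # ~3.0e9 instr/s
# the lower-clocked core wins despite a 1.8 GHz clock deficit
```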

 
This comparison doesn't seem fair to me, power-consumption-wise in particular. At load they are measuring the power usage on the Ph II at 1.6 volts. C'mon guys - you either got a bad chip or you don't know what you're doing with your overclock. I have a Phenom II chip at 3.7GHz at 1.45 volts, completely stable, no crashes or freezes, even leaving it on for a week straight. And yeah, chips can vary... but I believe cooling does have an effect on the overclocking potential of this chip. I'm using the new Sunbeam direct-contact cooler with 6 copper pipes; performance is near the Thermaltake 120. It handles higher voltages much better than the one you used.

It doesn't seem right comparing the two processors at different frequencies. Why not compare them both at the same speed? Because it's apparent to me the Phenom II can be overclocked further, with fewer volts, than you achieved.
 
[citation][nom]Anonymous[/nom]This comparison doesn't seem fair to me, power-consumption-wise in particular. At load they are measuring the power usage on the Ph II at 1.6 volts. C'mon guys - you either got a bad chip or you don't know what you're doing with your overclock. I have a Phenom II chip at 3.7GHz at 1.45 volts, completely stable, no crashes or freezes, even leaving it on for a week straight. And yeah, chips can vary... but I believe cooling does have an effect on the overclocking potential of this chip. I'm using the new Sunbeam direct-contact cooler with 6 copper pipes; performance is near the Thermaltake 120. It handles higher voltages much better than the one you used. It doesn't seem right comparing the two processors at different frequencies. Why not compare them both at the same speed? Because it's apparent to me the Phenom II can be overclocked further, with fewer volts, than you achieved.[/citation]

That's the same as my Q6600 - 3.5GHz at 1.45V, 1.6+ to get even 3.6GHz - meaning that sample is at its max, and the other reviewers that got theirs beyond 4GHz got those golden-sample chips, while the average still sits at or below 3.8GHz.

Who runs their 65nm/45nm CPU at 1.6V anyhow? That's too high even for a 90nm CPU.
 

Actually, you are wrong that "simple clock speed" has never been used as an indicator of overall system performance. It has only been in the last 8 years or so that manufacturers were hitting ceilings on how high they could raise frequencies. That is why there have been such advancements in the silicon: they were forced to innovate so they could keep their highly profitable margins and keep up market share against their competitors. Do you not remember the Intel P4/P3, AMD XP days?
 
cdrkeen,

Wrong again. Compare different CPUs at the same clock speeds
going back much more than 10 years and you'll see huge differences
in performance. SGI's R8000 at 90MHz could be 10X faster than a
Pentium 90 at certain 64-bit fp codes. Likewise, their R10K at 195MHz
produced SPECfp95 scores far in excess of Intel CPUs at the same
clock speed. Ditto for other CPUs from other vendors.

You can't compare based on clock speed alone, period. To say so
is just plain false.

Ian.

 
Ian, my point was that it hasn't always been the case that clock speed was not the decider of performance. I never said it was always that way before 8 years ago.
 
And my point is precisely the opposite, that no matter how far
back you go in mainstream CPU history, clock speed has never been
a useful measure of performance. Never. Check the CPU Info Centre
at Berkeley for a good historical perspective (download the main
PDF). Of course, if you're only going to think in terms of x86
CPUs then it's hardly surprising one might conclude that MHz
in some way correlates with performance, but that's an incredibly
narrow perspective to hold and is false anyway upon closer
inspection.

Even in the days of PIII, a motherboard could be far more important
than the CPU. I've seen a PIII/600 easily beat a PIII/1GHz+
because the 600 was on a far better mbd. And for a while there
were mbds for Intel's CPUs which allowed the CPU to thrash
the bus too much, killing performance.

Doesn't matter how far back you go, the same thing applies. Look
back to 1988: the MIPS R3000 at 33MHz is 3X faster for int and
almost 10X faster for fp than Intel's best 386/33 of the day.
Likewise, at around the same time, the 25MHz 68040 is 2.5X faster
for int and 5X faster for fp than the 386/33.

So like I say, it has indeed always been the case that clock speed is
not a decider of performance. I learned this lesson a long time
ago from working with Alpha CPUs - they look great on paper with
their high clocks (eg. 21164a @ 600MHz, at a time when the fastest
Intel was the PII/300) but are easily beaten by other vendors,
eg. MIPS R10K @ 250MHz. The 21164 was hurt by a small L1 cache;
Alpha didn't really get moving until the 21264, at which point
it was pretty cool, like having a clocked-up MIPS.

It irks me to see generalised comments about CPU performance that
are based solely around the legacy of x86. The world of CPUs is
much wider than that.

Ian.

 
It's not generalized. I don't have enough time to write a novel. When I get more time, I will respond to your comment to prove my statements further. I still don't agree, and I will find the sources to back that up.

Good day.
 
And btw, this discussion wasn't intended to be about the world's many different processors; if you want to make it broader, that's your prerogative, but I am thinking along the lines of this article. We could talk all day long about the differences between different-performing processors. It's not very relevant to what I was trying to point out.
 
x86 has always been somewhat more homogeneous, and of course MHz
has long been used for simple marketing, but it was never a proper
measure of performance, even before the days of HT, dual-cores, etc.
The 440BX was the first mbd to completely blow away the MHz
myth, with older CPUs running much better on that board than
higher-clocked CPUs on different boards.

If you're talking about different clocks within the same processor
type then of course one can draw some basic conclusions, but
using MHz to compare completely different CPUs is just dumb.

Ian.



 