The Gigahertz Battle: How Do Today's CPUs Stack Up?

@Senor_Bob

It should make zero difference. At the setting they had it at, it was effectively an E6600, and they did include a screenshot of CPU-Z.
I would agree that it "seems" like it is effectively an E6600, just as the QX6700 they labeled a Q6600 in the graphs "seems" like a Q6600. I'm not accusing them of lying; it just could have been more obvious, since many people zoom straight to the graphs. I wonder why they didn't call it what it actually was, the way they did with the E4300, which obviously isn't supposed to run at 2.4 GHz (according to Intel, not the THG community 😀). Actually, I guess it's not supposed to run at 2.4 GHz according to the THG community either; that's way too slow. I doubt it affected the outcome of the tests at all, but I would be curious to see whether an underclocked FX-62 would perform better than the 4600+ they used.

In fact, I would be more interested to see comparisons of the top processors against the lesser ones in the same line at equivalent clocks (although I'd rather see an OC'd e6600 than an underclocked X6800) to see how much/little performance overclockers are trading off versus buying the more expensive parts. I think we all already knew that C2D > X2 > P4 as someone previously stated.

I'm not an AMD nut (P4 Northwood user, in fact 🙁 ). I just thought the guy who suggested an underclocked FX had a possible point: that seems a more appropriate comparison to an underclocked Intel Extreme, since those two series are supposed to compete with each other (on the rare occasions that both vendors have a comparable product simultaneously).
 
But the P4 was MADE to be clocked high. It was built for speed. Hell, NetBurst had the capacity to operate at over 8 GHz.
Errr... where? Even 65 nm models weren't able to get past 6.2 GHz even when cooled with liquid nitrogen... And anyway, the point is to be able to use these processors in a desktop system, meaning on air, with a heatsink that fits inside a small case.

CPU-Z's record charts had a handful of high-7 GHz results last I checked (probably six months ago), and there was some news fairly recently about a group in Europe (I believe) that got either a P4-630 or a 631 (don't remember which) to over 8 GHz, admittedly using LN2 and customized hardware (as in, beefed-up caps and such). So technically it does have that capacity on nitrogen, but you're completely correct on the second half of that statement; good air cooling is the most you can rationally claim as the "design capability" of a chip.

Just nitpicking... (though didn't Intel promise us 10+ GHz NetBurst chips when they first rolled out the architecture?)
 
Eh, I'd like to point out that cost per processor was not within the scope of this article at all. THG has another article, published periodically, that focuses heavily on $/performance.

@Senor_Bob

It should make zero difference. At the setting they had it at, it was effectively an E6600, and they did include a screenshot of CPU-Z.

They really ought to have included a couple more AMD CPUs and some benchmarks that highlight AMD's I/O and memory capabilities.

According to THIS, the X6800 would benefit from the reduced multiplier on a 965, as its NBCC* (North Bridge Core Clock) = (11/9) x 266 = 325 MHz, vs. an E6600's NBCC of 266 MHz.

*Note: CPU-Z does not report NBCC.

That thread STILL hasn't been updated since November, and it doesn't say that at all. It says that it would be exactly the same as an E6600, since the concept they are discussing supposedly doesn't apply to the "Extreme" edition CPUs (which makes zero sense to me and makes me question the validity of any of the new statements they make). Since the NB is only used for memory access in the benchmarks run (no I/O, no graphics), the theoretically increased NBCC would not be a performance advantage, as it would not improve memory bandwidth or timings. If the Extreme CPU reacted the same way to having its multiplier lowered as other processors do, it could actually suffer a performance PENALTY due to an increased NB strap, if such a higher strap even existed on the mobo used in the test. At 325 MHz it shouldn't be bumping up to the 333 strap yet, but some mobos seem to move to higher straps early to achieve higher clock rates (though usually with reduced performance). If it went to a higher strap, the increased NB latencies would lower NB performance even with a higher clock rate.

So it shouldn't be doing that and even if it did it shouldn't matter.
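For what it's worth, the NBCC arithmetic being argued about is easy to check. A minimal sketch based on the formula quoted from that thread; the function name and structure are illustrative, not anything CPU-Z reports:

```python
# Sketch of the NBCC arithmetic from the quoted thread:
# NBCC = (stock multiplier / set multiplier) * FSB.
# Function name and structure are illustrative only.

def nbcc(stock_mult, set_mult, fsb_mhz):
    """North Bridge Core Clock when the CPU multiplier is lowered."""
    return (stock_mult / set_mult) * fsb_mhz

x6800_at_9x = nbcc(11, 9, 266)   # X6800 (11x stock) dropped to 9x
e6600_stock = nbcc(9, 9, 266)    # a native E6600 at the same FSB

print(round(x6800_at_9x), round(e6600_stock))  # 325 266
```

Which reproduces the 325 vs. 266 figures from the post above; whether the Extreme editions actually behave this way is exactly what's in dispute.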

If you take an X6800, lower the multiplier to 9x, and run stock memory speeds at a 1:1 ratio, the result is an exact duplicate of an E6600's spec. There is no benefit and no gain, just the performance of an E6600.
 
Errr... where? Even 65 nm models weren't able to get past 6.2 GHz even when cooled with liquid nitrogen... And anyway, the point is to be able to use these processors in a desktop system, meaning on air, with a heatsink that fits inside a small case.

I'm just saying that NetBurst is actually able to operate at clock speeds in excess of 8 GHz if you keep it cool enough. Intel themselves had Presler operating at, IIRC, 6 GHz on liquid nitrogen.

But as I said, it ran into the laws of physics and then crashed and burned.

Thing is, in this case you see that in practice the maximum clocks on air cooling, for all cores, are:
- P4: 4.5 GHz (a nice on-air overclock)
- K8: 3 GHz (the highest speed found at retail)
- C2D: 4 GHz (higher speeds aren't officially released yet)
The problem is, for the P4 not to look too bad it needs 200% of its opponents' clock speed, yet its max clock is (at best) only 150% of its worst competitor's (here, the 90 nm AMD).

Exactly, so why did they use a slow Pentium 4? Why not a 3.6 GHz Cedar Mill/Prescott processor?

For the sake of a fair test, the Pentium 4 needs to be run at its fastest speed, as raw clock is its only performance advantage.
 
Does this mean that the CPU charts are getting updated?
They assembled all those CPUs and spent all that time running the tests... yet they can't find an E6300, or don't have time to test one? I think "won't" is the more appropriate word. :x
Word. At the rate they're putting the E6300 off, they might as well add it along with the 4xxx CPUs at the same time. :evil:
 
You still miss the point of this article's test: to compare the performance per MHz of the three (four, counting Core) CPU architectures available at retail.
For maximum performance at peak efficiency, look at Tom's CPU charts: AMD 6000+/FX, C2D QX6x00, P4-D9xx.
What appears here is that the C2D is 15% faster than the K8 and Core at the same clock speed, and 60% faster than the P4.
It also appears that threaded apps run faster on multicore CPUs. No kidding.
End of story.
 
Wish you guys had overclocked an Athlon XP to put in this test. (I know it's possible; I used to have one at 2.6 GHz.)

I stated that earlier in the discussion; I have an XP-M that will run at 2.7, but I run it at 2.4 24/7. I'd love to see it stomp the NetBust around.
 
Here is a price/performance analysis for the Gigahertz Battle.

The prices are current prices from Newegg.
The performance is seconds to video-encode with DivX 6.5, as reported at
http://www.tomshardware.com/2007/03/26/the_gigahertz_battle/page13.html#video

Intel Core 2 Quad Q6600 Kentsfield.................115 sec......$846
Intel Core 2 Duo E6600 Conroe.......................116 sec......$313
Intel Core 2 Duo E4300 Allendale 1.8GHz.........125 sec......$169
AMD Athlon 64 X2 4600+(65W) Windsor AM2....148 sec......$126
AMD Athlon 64 X2 4600+ Toledo S939..............158 sec......$200
Intel Pentium 4 2.4B Northwood S478..............252 sec........$63

Seems like the Athlon 64 X2 4600+ AM2 is the clear winner for value.
Actually, at those prices, taking encoding time divided by price, the Pentium 4 is the clear winner.

My quick calculations (seconds / price):

Q6600: 0.136
E6600: 0.371
E4300: 0.740
4600+ 65 W: 1.175
4600+: 0.790
P4: 4.000

Greatest number wins; here's how I found them ranked:

1) Pentium 4
2) 4600+ 65watt
3) 4600+ standard
4) E4300
5) E6600
6) Q6600
 
Actually at those prices, time encoding divided by price, the Pentium 4 is the clear winner.

True, very good point, but unfortunately electricity isn't free in many parts of the world. The C2D and the (mid-range) Athlons have very similar energy consumption when you take the system as a whole into account (not just the CPU!).

The Pentium 4 goes from clear winner to dead last as fast as you can say "Dell". :wink:
 
Stop me if I'm wrong, but the P4 used here is a single-core affair. If you compare it to an Athlon 64 or a Core 2 Solo (which cost pretty much half the price of their dual-core siblings at the same frequency), then your math falls apart, since a single core at DivX encoding isn't twice as slow; it's merely 30-40% slower (if that).
For example, pit that P4 against an Athlon 64 3800+ (which runs at 2.4 GHz and costs 90 bucks).
See the P4 go down in flames.
 
Core 2 Duo is king, but how do other processors today compare on a clock-to-clock basis? We benchmarked comparable CPUs from AMD as well as Intel to see.
Rereading it, the funniest thing in this article is:
It would have been great to include Pentium 4 and Pentium D processors for socket 775, but it would have only been possible to reach a 2.4-GHz core clock speed at very limited FSB speeds.
8O ... if you clock a 2.8 GHz (FSB800) Prescott to 2.4 GHz, the FSB only drops to 685, and that's not a bottleneck even for a dual core (as JJ once proved, and because most mobile chips have a 667 MHz FSB).
Even if these guys don't want to make such assumptions, I'd point them to the Extreme Editions of both the P4 and the PD, on which they can swing the multiplier down without dropping the FSB...
The truth is that on half of those benches the old Northwood P4 beats the Prescott (and its derivatives), and they just wanted to spare it the n-th humiliation; the only moment in the whole of CPU history when real-world performance decreased instead of increasing. :lol:
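The FSB arithmetic in that claim is easy to verify. A quick sketch, assuming the Prescott keeps its stock multiplier while the clock is lowered; the function name is made up for illustration:

```python
# FSB arithmetic from the post above: clocking a 2.8 GHz (FSB800) Prescott
# down to 2.4 GHz with the multiplier locked only lowers the effective FSB
# proportionally. Function name is illustrative.

def fsb_for_target(stock_ghz, stock_fsb_mhz, target_ghz):
    """Effective FSB needed to hit the target clock with the multiplier fixed."""
    return stock_fsb_mhz * (target_ghz / stock_ghz)

print(round(fsb_for_target(2.8, 800, 2.4)))  # 686, i.e. the "FSB 685" in the post
```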
 
Stop me if I'm wrong, but the P4 used here is a single core affair.
That is correct.

If you compare it to an Athlon64 or a Core2 Solo (which cost pretty much half the price of their dual core siblings at the same frequency)
I don't think there is a Core 2 Solo. I am 99% sure the Core 2 line is dual-core only as of now (with quad-cores built from two dual-core dies in a single package). Nevertheless, let's take a look at the performance including a single-core A64:

Seconds / Price

Q6600: 0.136
E6600: 0.371
E4300: 0.740
4600+ 65 W: 1.175
4600+: 0.790
P4: 4.000

Greatest number wins; here's how I found them ranked:

1) Pentium 4
2) 4600+ 65watt
3) 4600+ standard
4) E4300
5) E6600
6) Q6600
Dividing encoding time by price is the wrong way to compare these numbers. You are actually rewarding the P4 for taking the longest (i.e. the P4 would win due to its time even if all of the prices were identical). A better metric would be seconds * price, where the P4 still wins, but not by nearly as much. If (as Mitch074 suggests) I add the Athlon64 4000+ (non-x2) that the article tested and mix some data a little (since the benchmark is for a Clawhammer core 4000+ and the price I could find is for a San Diego core 4000+) then the single core A64 wins.

Seconds Price price*sec
115 $846 97290 Intel Core 2 Quad Q6600
116 $313 36308 Intel Core 2 Duo E6600
158 $200 31600 AMD Athlon 64 X2 4600+ Toledo
125 $169 21125 Intel Core 2 Duo E4300 Allendale
148 $126 18648 AMD Athlon 64 X2 4600+(65W) Windsor
252 $63 15876 Intel Pentium 4 2.4B Northwood
185 *$74 *13690 AMD Athlon 64 4000+ Clawhammer/San Diego
* mixed Clawhammer performance data with San Diego price

I don't really see this test as the key performance criterion for these CPUs, and someone more ambitious could feel free to come up with a weighted average of price/performance across all the tests. Just remember to multiply price by the metric when high is bad (e.g., encoding time taken) and divide the metric by price when high is good (e.g., frames per second).
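The rule in that last paragraph is easy to mechanize. A minimal sketch using the encode times and Newegg prices quoted earlier in the thread; since encoding seconds is a "high is bad" metric, the lowest seconds × price wins:

```python
# Value ranking per the rule above: when "high is bad" (encoding seconds),
# multiply by price and treat the lowest product as the best value.
# Data: DivX 6.5 encode times and Newegg prices quoted earlier in the thread.

cpus = {
    "Q6600":        (115, 846),
    "E6600":        (116, 313),
    "4600+ Toledo": (158, 200),
    "E4300":        (125, 169),
    "4600+ 65W":    (148, 126),
    "P4 2.4B":      (252, 63),
    "A64 4000+":    (185, 74),   # Clawhammer time, San Diego price (mixed, as noted)
}

ranked = sorted(cpus.items(), key=lambda kv: kv[1][0] * kv[1][1])
for name, (sec, price) in ranked:
    print(f"{name:14s} {sec * price:6d}")
```

Run it and the single-core A64 comes out on top, with the P4 second, matching the table above.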
 
Stop me if I'm wrong, but the P4 used here is a single core affair.
That is correct.

If you compare it to an Athlon64 or a Core2 Solo (which cost pretty much half the price of their dual core siblings at the same frequency)
I don't think there is a Core 2 Solo. I am 99% sure the Core 2 line is dual core only as of now (and quad core using two dual cores in a single package). Nevertheless, let's take a look at the performance including a single-core A64:

[...]
Thanks for the clarification. About Core 2 Solo, I'm not sure they exist yet, but from what I've read here and there it should be Intel's solution for low-cost and/or low-power systems (such as embedded devices or small laptops).
 
Is there an "I ♥ AMD" shirt? I think he needs one of those too.

He's complaining about the difference between disabled and native caches. If that isn't nitpicking, I don't know what the hell is. The difference is nominal at best.

Raise an E4300's FSB strap to 266 and drop the multi to 7. Now compare its scores in cache-dependent tasks to an E6300's.

You mean there is ANOTHER baronmatrix here? *Sigh* :?
 
Intel Core Solo has been around for some time now.

http://www.intel.com/products/processor/coresolo/

Indeed it is, but that is Core Solo, not Core 2 Solo. Core Duo (Yonah) also exists, but it is not the same thing as Core 2 Duo (Conroe/Allendale/Merom). As far as I know, the Core line consists of laptop processors, whereas Core 2 covers both desktop and laptop.
 
Actually at those prices, time encoding divided by price, the Pentium 4 is the clear winner.
...
Greatest number wins... what I found them to be ranked:
1) Pentium 4
2) 4600+ 65watt
3) 4600+ standard
4) E4300
5) E6600
6) Q6600

True, the EncodingSec/$ figure is best for the P4, as it often is for old products whose price has dropped due to relative obsolescence. But what I'm looking for is the "sweet spot": the point on the price/performance curve with the maximum performance just before the cost starts increasing rapidly due to newness. Make a graph of my figures and you'll see that after the 4600+, the price increases rapidly with very little increase in performance.
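One way to put numbers on that "sweet spot" idea: walk up the price ladder and compute the marginal dollars paid per second of encode time saved at each step. A sketch using the thread's own figures; where to draw the cutoff is still a judgment call:

```python
# Marginal cost of performance: sort by price, then compute the extra dollars
# paid per second of encode time saved at each step up the ladder.
# Figures are the thread's DivX times and Newegg prices.

cpus = [  # (name, seconds, price), cheapest first
    ("P4 2.4B",      252,  63),
    ("4600+ 65W",    148, 126),
    ("E4300",        125, 169),
    ("4600+ Toledo", 158, 200),
    ("E6600",        116, 313),
    ("Q6600",        115, 846),
]

for (n1, s1, p1), (n2, s2, p2) in zip(cpus, cpus[1:]):
    saved = s1 - s2                  # seconds saved (negative if the pricier chip is slower)
    step = p2 - p1                   # extra dollars paid
    rate = step / saved if saved > 0 else float("inf")
    print(f"{n1} -> {n2}: ${step} for {saved}s saved (${rate:.2f}/s)")
```

The rate stays around a dollar or two per second saved through the 4600+/E4300 region, then the E6600-to-Q6600 step costs over $500 per second saved, which is exactly the knee in the curve described above.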
 
Putting the P4 into this mix was what was weird to me. You take a two-year-old product and treat it as a system of today. In the CPU world, the P4 is not a computer of today; it's what most of us are handing down.

Actually, the P4 is a five-year-old product; the P4 first broke 3 GHz in 2002. The Athlon 64 X2 4600+ is the two-year-old product; those came out in 2005.

When I go through these benchmarks looking for something I actually use, I find only one app: iTunes. How many folks out there use Ogg, LAME, 3ds Max, etc.?

Not many. Fact is, most of us don't and won't.

My main beef with the article is its focus on encoding and rendering with programs almost no one uses.

Why not use the mainstream packages? Why not Nero, Roxio Easy Media Creator, or Adobe CS3? How about some benchmarks of Firefox and IE7 rendering? There are a ton of mainstream apps to choose from, but these sites always use Ogg? LAME? DivX?

All that focus on niche products doesn't really tell anyone what they need to know: how the chip will affect the applications they actually run.
 
How many folks out there use ogg, Lame, 3ds Max, etc?

My main beef with the article is the focus on encoding and rendering using programs almost no one uses.

Why not use the mainstream packages? Why not use Nero, Roxio easy media creator, adobe CS 3?

I use LAME, and DivX comes closest to my main usage, which is video encoding. I presume these benchmarks are good for acid-testing specific aspects of the CPU, whereas Nero would mostly be testing your burner.

My complaint is that they keep changing versions, so it's hard to compare benchmarks to previous tests. But let's not get too OT.
 
Why not Sony Vegas or Pinnacle Studio 10, which are pretty mainstream, or DVD Shrink, VSO ConvertXtoDVD, or even a Folding@home work unit, since there are a lot of guys who fold?
 
It's not because you don't use these that nobody ever does!
Vorbis is used in many games. Prominent among them is... Microsoft's Halo. Look it up; there's a libvorbis.dll in the install directory!

LAME is used by several MP3 encoders. One is Sony's own limited CD-copy software (they use it illegally), but it's used in many, many, MANY cases.

Xvid has the advantage over DivX of outputting primarily MPEG-4 streams. Because its implementation is based on the specs, one can be certain that its output is valid. It gives a good idea of the encoding speed one can expect from both DivX and QuickTime, with minimal overhead. It is, moreover, available on more platforms than either of those products.

Being open source, the latest version always uses the latest tech available and may thus show each CPU in its best light.

Nero uses its own MPEG-4 encoder, which isn't optimized for quality and/or speed, while Xvid is. Roxio uses LAME, I think, and Adobe's product cannot easily be scripted to produce results (Xvid and LAME can run from the command line and/or be included in scripts easily, making such benchmarking much simpler).
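As a sketch of that kind of scripted benchmarking, here is a minimal command-line timing harness; the encoder invocation shown at the end is a hypothetical placeholder, not a recommended LAME command line:

```python
# Minimal wall-clock timing harness for command-line encoders, as discussed
# above. The encoder command is supplied by the caller; nothing here is
# specific to LAME or Xvid.

import subprocess
import time

def time_command(cmd, runs=3):
    """Run cmd several times; return the best wall-clock time in seconds."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical usage (adjust to your actual encoder invocation and files):
# time_command(["lame", "-b", "192", "input.wav", "output.mp3"])
```

Taking the best of several runs reduces noise from disk caching and background tasks, which matters when the differences between CPUs are only a few percent.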

Your "niche" products are ALL OVER the software you use; testing one of them gives results for DOZENS of your "mainstream" products.

Ignorant fool.
 
Folding would be a good comparison, I think. There are a lot of people folding, with many different configurations. Add in the different overclocks and you get a pretty good idea of what a particular platform is capable of.