The Gigahertz Battle: How Do Today's CPUs Stack Up?


How about some benchmarks with Firefox and IE 7 rendering?


Or maybe adding 6 + 9.

I bet real world comparisons like that would close the gap considerably.

The reality is that, at present, Intel consumer chips are faster, cheaper, and have a lower lifetime operating cost. I'm sure everyone would like AMD to be more competitive, and hopefully they will be in the future, but right now they're just not. No amount of benchmark tweaking is going to change that.
 
Just because you don't use these doesn't mean nobody does!
Vorbis is used in many games. Prominent among them is... Microsoft's Halo. Look it up - there's a libvorbis.dll in the install directory!

LAME is used by several MP3 encoders. One is Sony's own copy-restricted CD software (which uses it in violation of its license), but it shows up in many, many, MANY other products.

XviD has the advantage over DivX of outputting fully standard MPEG-4 streams. Because its implementation is based on the spec, one can be certain that its output is valid. It gives a good idea of the encoding speed one can expect from both DivX and QuickTime with minimal overhead, and it is available on more platforms than either of those products.

Since they are open source, the latest versions always use the latest techniques available and can thus show each CPU in its best light.

Nero uses its own MPEG-4 encoder, which isn't optimized for quality and/or speed - while XviD is. Roxio uses LAME, I think, and Adobe's product can't easily be scripted to produce results (XviD and LAME can be run from the command line and/or included in scripts easily, which makes this kind of benchmarking simpler).

Your 'niche' products are ALL OVER the software you use - testing one of them gives results for DOZENS of your 'mainstream' products.

Ignorant fool.


Actually you are the fool.

How the real application works is very important. Why use WinRAR when you can use the original Lempel-Ziv algorithm? That's your logic: test with the algorithm, not the application - so we end up with a bunch of 'synthetic' benchmarks masquerading as real-world applications.

If LAME encoding tells us nothing about how a burner like Roxio performs, then who cares how fast a processor is at LAME? If DivX encoding doesn't tell me how fast iTunes or Windows Media Player will work, what good is a DivX benchmark?

And finally, most importantly: if these programs (LAME, DivX, etc.) truly reflect real-world application usage ---->>> WHY NOT USE THE REAL-WORLD APP??? Your answer - lazy benchmarkers - is no good.
 
A burner app has NO need for CPU power past a certain point - that level was reached when the Pentium II went past 300 MHz, for on-the-fly MP3 encoding/decoding. What it requires is a fast burner, end of story.
Using LAME directly as a benchmark lets you test both Roxio and any other CD burner/ripper that uses LAME - on top of batch processors, video apps that encode with LAME, and so on. Same thing for Vorbis and XviD.

The fact that those apps don't ship with a GUI doesn't make them useless: most of your apps that provide the same service are nothing more than GUIs sitting on top of a DLL, which is either lame.dll or libvorbis.dll under another name (if renamed at all).
In fact, the only difference between those apps and a bench is that the app requests the encoder's services through an API call, while the bench passes parameters on the command line. Once the encoder is running on its own thread, the difference is nil.

It's like saying Linux is useless, has no future, and should disappear - while you spend all your time on the Web, looking things up with Google and browsing all sorts of fast, never-crashing websites.

That being so, why would benchmarkers spend time manually installing and launching 20 apps to get the same results that a simple command line, run once (or twice with different parameters), gives them?

You're free to waste your time - personally, I do use the command-line version of LAME to get high-speed, high-quality variable-bitrate encoding; same thing for Vorbis.
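
For what it's worth, scripting a batch encode like that takes a handful of lines. Here's a minimal sketch - the folder and file names are made up, it assumes the lame binary is on your PATH, and -V 2 is LAME's high-quality variable-bitrate setting:

```python
# Minimal sketch: batch-encode a folder of WAV files with the LAME command line.
# Assumes the `lame` binary is on the PATH; "rips" and the file names are made up.
import subprocess
from pathlib import Path

for wav in sorted(Path("rips").glob("*.wav")):
    mp3 = wav.with_suffix(".mp3")
    # -V 2 selects high-quality variable-bitrate encoding
    subprocess.run(["lame", "-V", "2", str(wav), str(mp3)], check=True)
```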
 
A burner app has NO need for CPU power past a certain point - that level was reached when the Pentium II went past 300 MHz, for on-the-fly MP3 encoding/decoding. What it requires is a fast burner, end of story.

No one questioned that. YOU stated that encoding speed didn't matter when burning to a CD - so what relevance does encoding speed have for burning? Certainly based on your own statements, it isn't relevant.

The fact that those apps don't ship with a GUI doesn't make them useless: most of your apps that provide the same service are nothing more than GUIs sitting on top of a DLL, which is either lame.dll or libvorbis.dll under another name (if renamed at all).

I'm quite aware of how libraries work - I've been programming on Unix boxes for about 15 years now.

However, you are making a lot of assertions and assumptions that are almost certainly false.

The core assumption you are making is that there is only one way to skin a cat - one way to do encoding. One algorithm. That's patently false.

It's like saying Linux is useless, has no future, and should disappear - while you spend all your time on the Web, looking things up with Google and browsing all sorts of fast, never-crashing websites.

I have no idea where you got that drivel.


That being so, why would benchmarkers spend time manually installing and launching 20 apps to get the same results that a simple command line, run once (or twice with different parameters), gives them?

Maybe they should use the programs the general public uses for their benchmarks, instead of trying to make it easy on themselves? There are lots of ways to automate virtually any benchmark - including any GUI bench. Apparently you don't know quite as much as you think you do.
 
Oh, I do know how to automate a bench. It's just that most GUI software doesn't offer scripting, so it has to be driven by an external automation tool, which adds overhead.
It's generally accepted, for example, that using tools like FRAPS skews results somewhat.

On the other hand, running the command-line lame.exe gives you:
- encoding time (to a thousandth of a second)
- CPU time used (since you rarely run lame.exe for real-time playback)
- the presets used
and all of that is generated on the fly by LAME, as plain text that you can redirect to a file (you know, pipes?) or simply display on screen (no unneeded disk access). A sketch of such a wrapper is below.
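
To be concrete, here's a minimal sketch of the kind of wrapper I mean - the file names are invented and it assumes lame is on the PATH; it times one encode and redirects LAME's own console output (which goes to stderr) to a log file instead of the screen:

```python
# Minimal timing sketch, not any review site's actual harness.
# Assumes `lame` is on the PATH; test.wav / test.mp3 / lame_log.txt are made up.
import subprocess
import time

cmd = ["lame", "-V", "2", "test.wav", "test.mp3"]

start = time.perf_counter()
with open("lame_log.txt", "w") as log:
    # LAME prints its progress and preset info to stderr; send it to the log
    # file so the run isn't slowed down by console output.
    subprocess.run(cmd, stdout=log, stderr=subprocess.STDOUT, check=True)
elapsed = time.perf_counter() - start

print(f"wall-clock encoding time: {elapsed:.3f} s")
```

Loop that over a couple of -V presets and you have your "different parameters" runs without ever touching a GUI.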

Now, if you'd rather measure processor efficiency with an application that WILL introduce hard-disk access overhead, graphics-subsystem overhead, and TCP/IP overhead - because it decided to pop up a wizard or phone home at that very instant - and generally use up system resources other than CPU time (which will impact CPU efficiency anyway), then go ahead.

You may have done 15 years of Unix programming; lucky you - then you don't know how screwed up the NT kernel's thread scheduler and memory manager are.

The main advantage of these command-line tools is that they produce REPRODUCIBLE results - while most of your mainstream apps, for the reasons I outlined before, just don't.

Making a test reproducible is one of the points of a benchmark, especially in an environment as complex as a PC. Using your 'generic apps' is like having different drivers test-drive the cars at different points in time, with their cell phones on and the weather changing, to decide which car is fastest.
Using LAME, Vorbis, and XviD encoding, on the other hand, means it's always the same driver testing the cars in a well-controlled environment.
 
(Y A W N N N N N . . . . smacks lips a few times . . . )

amd THIS . . . .intel THAT 😳

Where are the multi-threaded/64-bit apps to run on these big cache/multicore CPUs and fancy new architectures ???

Instead of flaming back and forth let's CURSE the software industry . . .

!@$#%%$#$$#!@ chain-smokin' Redbull drinkers . . .
 
