Updated SPEC benchmarks

Mephistopheles

I've read <A HREF="http://www.amdzone.com/articleview.cfm?ArticleID=1296" target="_new">AMDZone's</A> article on the G5 and its SPEC CPU 2000 benchmark scores; I thought it was amusing to see that even AMDZone only considered 3.06GHz-level scores to put next to the XP3200's. I've checked some interesting CPU2000 results at spec.org (that search engine gave me some trouble, by the way), and here's what I've found, in case anyone is interested (if you're not, just ignore this!). All numbers are base.

<b><font color=blue>Intel CPUs</font></b>
<i>3.0GHz P4</i>, Int <b>1164</b>, FP <b>1213</b>
<i>3.06GHz P4</i>, Int <b>1099</b>, FP <b>1092</b>
<i>1.0GHz Itanium 2 (McKinley)</i>, Int <b>810</b>, FP <b>1431</b>
<i>1.5GHz Itanium 2 (Madison)</i>, Int <b>1318</b>, FP <b>2104</b>
<b><font color=green>AMD</font></b>
<i>XP3200</i>, Int <b>1044</b>, FP <b>873</b>
<i>1.8GHz Opteron 144</i>, Int <b>1095</b>, FP <b>1122</b>
<b><font color=red>Apple</font></b>
<i>G5 2.0GHz</i>, Int <b>840</b>, FP <b>800</b>
<b>IBM</b>
<i>Power4 1700MHz</i>, Int <b>1113</b>, FP <b>1598</b>

<b>====UPDATE====</b>
<b>Update: Completely new figures for 1.3 and 1.5GHz Itanium 2</b>
Itanium 2 1.3GHz: int 875, fp 1770;
Itanium 2 1.5GHz: int 1077, fp 2041;
8x Itanium 2 1.3GHz: int 79.4, fp 141; (rates)
16x Itanium 2 1.3GHz: int 158, fp 278; (rates)
32x Itanium 2 1.3GHz: int 311, fp 541; (rates)
64x Itanium 2 1.3GHz: int 601, fp 1053; (rates)
4x Itanium 2 1.5GHz: int ???, fp 82.2; (rates)
8x Itanium 2 1.5GHz: int 98.3, fp 164; (rates)
16x Itanium 2 1.5GHz: int 195, fp 327; (rates)
32x Itanium 2 1.5GHz: int 385, fp 644; (rates)
<b>Update: Comparative Opteron rates</b>
4x844: int 46.1, fp 44.2; (rates)
4x842: int 41.5, fp 40.6; (rates)
4x840: int 37.4, fp 37.3; (rates)
2x244: int 24.2, fp 24.7; (rates)
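
If anyone wants to play with the scaling themselves, here's a minimal sketch in C that just redoes the arithmetic on the 1.3GHz Itanium 2 fp rate figures listed above (assuming, as listed, that those are SPECfp_rate2000 base results). Nothing here is measured; it only divides the published rates by the CPU count:

<pre>
/* Quick scaling check on the 1.3GHz Itanium 2 SPECfp_rate2000 base figures
 * quoted above; the numbers are copied from the post, not re-measured. */
#include <stdio.h>

int main(void)
{
    const int    cpus[]    = { 8, 16, 32, 64 };
    const double fp_rate[] = { 141.0, 278.0, 541.0, 1053.0 };
    const double per_cpu_8way = fp_rate[0] / cpus[0];   /* 8-way box as the reference */

    for (int i = 0; i < 4; i++) {
        double per_cpu = fp_rate[i] / cpus[i];
        printf("%2d CPUs: rate %6.0f, %5.2f per CPU, %5.1f%% of 8-way efficiency\n",
               cpus[i], fp_rate[i], per_cpu, 100.0 * per_cpu / per_cpu_8way);
    }
    return 0;
}
</pre>

On those numbers, per-CPU throughput only drops from about 17.6 at 8-way to about 16.5 at 64-way, so the fp rate scaling looks close to linear.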

<i><b>Update: New SPEC scores for Madison have appeared in SPEC's database. Bear in mind that they come from SGI, a company that traditionally posts lower SPEC results than HP.</b> HP has already posted considerably better integer results, and those are in the "Intel CPUs" section above... SPEC rates have been benchmarked too, for those of you interested in scalability. Included here for convenience are the prices of those processors (a rough cost-per-point sketch follows this update):
<b>
Opteron 840, $749
Opteron 842, $1299
Opteron 844, $2149
Opteron 244, $800±50
Opteron 144, $670 ( :smile: !!!)
Itanium 2 1.3GHz, $1200 ( :smile: ... not bad)
Itanium 2 1.5GHz, $4200</b></i>

<b>====End of Update...====</b>
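
And since the prices are listed above, here's a rough cost-per-point sketch using only the CPU list prices and the fp rate figures from the update. It ignores boards, memory and everything else, so treat it as an illustration of the arithmetic rather than a buying guide:

<pre>
/* Rough CPU-cost-per-rate-point comparison, using only the list prices and
 * SPECfp_rate figures quoted above (boards, memory, etc. are ignored). */
#include <stdio.h>

struct box { const char *name; int chips; double price_per_chip; double fp_rate; };

int main(void)
{
    const struct box boxes[] = {
        { "4x Opteron 844",       4, 2149.0,  44.2 },
        { "4x Opteron 842",       4, 1299.0,  40.6 },
        { "4x Opteron 840",       4,  749.0,  37.3 },
        { "8x Itanium 2 1.3GHz",  8, 1200.0, 141.0 },
        { "4x Itanium 2 1.5GHz",  4, 4200.0,  82.2 },
    };

    for (int i = 0; i < 5; i++) {
        double cpu_cost = boxes[i].chips * boxes[i].price_per_chip;
        printf("%-22s $%7.0f in CPUs, $%4.0f per fp_rate point\n",
               boxes[i].name, cpu_cost, cpu_cost / boxes[i].fp_rate);
    }
    return 0;
}
</pre>

By that crude measure the 8x 1.3GHz Itanium 2 box actually has the cheapest CPUs per fp_rate point (about $68), while the Opteron 844 and the 1.5GHz Itanium 2 carry a big premium (roughly $195-205 per point).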

I found those numbers very interesting... Note that, while the XP3200 is faster than the 2.0GHz CPU used in the G5, the 3.0GHz P4 is also considerably faster than the XP3200. I've read somewhere that the 3.2GHz P4 scores around 1250 or so in both FP and int base... so the 3200 is no match.

Of course, the other processors are for different market segments, and we should keep that in mind when looking at the numbers...

Anyway, going over to the server CPUs, the Opteron looks good. In fact, it looks very good, just not invincible-good.

So what can we really expect from the Athlon 64, if it launches at 2.0GHz? I'd say Opteron's architecture is already a good indication of A64 performance levels (am I thinking right here?...), so a 2.0GHz part wouldn't score much more than 20% over the Opteron 144, or around 1300 or so... which is an excellent score <i>today</i> and is more than enough to compete with the 3.2GHz Northwood, but what about Prescott?...
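
Just to show where that guess comes from, here's the back-of-the-envelope version: it scales the Opteron 144 base scores linearly with clock from 1.8GHz to 2.0GHz. Pure clock scaling gives about 1220 int / 1250 fp; the ~1300 figure assumes a little extra on top of clock alone, so take all of it as a rough estimate only.

<pre>
/* Back-of-the-envelope only: scale the Opteron 144 base scores (1.8GHz)
 * linearly with clock to 2.0GHz. Real results rarely scale perfectly with
 * clock, so treat the output as a ballpark figure. */
#include <stdio.h>

int main(void)
{
    const double int_144 = 1095.0, fp_144 = 1122.0;  /* Opteron 144 base, from the table above */
    const double clock_ratio = 2.0 / 1.8;

    printf("2.0GHz estimate: int ~%.0f, fp ~%.0f\n",
           int_144 * clock_ratio, fp_144 * clock_ratio);
    return 0;
}
</pre>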

Then again, this <i>is</i> just one particular benchmark and, as such, shouldn't be considered the "final truth"...
What do you people think?... Is SPEC a good indication of performance?... It's synthetic anyway.
<P ID="edit"><FONT SIZE=-1><EM>Edited by Mephistopheles on 07/01/03 10:14 PM.</EM></FONT></P>
 
Yes for Intel and AMD. IBM kind of cheats, though: normally the L3 cache is shared between modules, but in SPEC only one module is tested, so all of that huge 128MB L3 goes to one module, unlike the real world where it would be split eight ways.
Apple's score wasn't made with the fastest compiler; you can expect around 1000 int and 900 FP from the chip.

The Madison scores from SGI on Linux are 1050 and 2050, but SGI's 1GHz McKinley results are 600/1200; GCC is slower than HP's compiler. They should reach roughly 1200/2100, maybe a bit more or less, and around 2200 FP with Intel's compiler.

The 3.2GHz P4 scores in the 1220/1220 range for both.

[-peep-] french
 
Yes, I saw that on SPEC's site... the IBM setup had a completely different cache architecture...

So a 1.5GHz, 6MB-cache Madison might score about 1150±50 int_base and 2150±50 fp_base? That floating-point performance looks good... and the integer performance has caught up excellently... we'll hopefully be seeing more on that around the net in the following weeks...

Deerfield might just be an excellent choice for the workstation market... let's see how well it stacks up.

IA-64 is using a rather impressive design, isn't it?... a 1.5GHz chip has godly FP performance... talk about IPC...
 
<i>Deerfield might just be an excellent choice for the workstation market... let's see how well it stacks up.</i>

In some software, like 3D Studio and the like, it still performs well below a P4; and the Xeon MP is also slower than the plain Xeon in some software, even with its large cache. Where the workload doesn't suit a high-IPC design, clock speed will be king.

Actually, it has about 4 to 6 times better IPC than a P4. The core is good, but it needs faster I/O access.

[-peep-] french
 
Don't forget SPEC is as much a compiler benchmark as it is a CPU benchmark; Intel's ICC compiler is head and shoulders above the others (especially at SSE2 vectorizing), which is a big part of why the Intel CPUs score so well on SPEC. Commercial software compiled with ICC might (and often does) benefit from this speed boost, so it's not invalid to look at SPEC, but please note that the large majority of software is compiled with GCC or Microsoft's compilers, not ICC. When you look at SPEC scores compiled with one of those compilers, you see an entirely different picture.
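
To make that concrete, here's a minimal sketch of the kind of loop where the compiler difference shows up (my own illustration, nothing from the SPEC suite itself); the flags in the comment are examples and vary by compiler version:

<pre>
/* saxpy-style loop: the kind of code an auto-vectorizing compiler such as ICC
 * can turn into packed SSE2 instructions, while GCC 3.x (no auto-vectorizer)
 * emits scalar code. Illustrative, version-dependent flags:
 *   icc -O3 -xW saxpy.c                   (ICC, SSE2 code generation for the P4)
 *   gcc -O2 -msse2 -mfpmath=sse saxpy.c   (scalar SSE math, no auto-vectorization)
 */
void saxpy(int n, float a, const float *x, float *y)
{
    int i;
    for (i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];   /* independent iterations, easy to vectorize */
}
</pre>

ICC can turn a loop like that into packed SSE2 instructions, while GCC 3.x emits scalar code, which is a big part of why the same chip posts very different SPEC numbers depending on the compiler.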

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 
I wouldn't say the integer performance is bad at all. Consider that it's a 1GHz part putting out around 800; at twice the clock it would already be well ahead of the competition.
However, the clock speed is what limits it, and if they do improve it, the gain shouldn't be just 50%, since you get both the higher clock and the bigger cache.

--
If I could see the Matrix, I'd tell you I am only seeing 0s inside your head! :tongue:
 
The compiler is part of CPU performance, like nVidia's "The Way It's Meant to be Played" program.

ICC produces about 10 to 15% faster integer code compared to the average compiler; GCC is not too bad. On x86 every corporation uses the Intel compiler, and the Opteron is tested with ICC even though there are some 64-bit compilers for x86-64.

[-peep-] french
 
Um, the nVidia program is nothing but a way to market their GPUs and a way to prevent programmers from coding for all platforms!
I'd say compilers are in a completely different class: something legal and honest.

--
If I could see the Matrix, I'd tell you I am only seeing 0s inside your head! :tongue:
 
I'm not considering buying Madison myself.

I work at a physics institute; we write our own software for our own purposes and compile it ourselves.

I'm interested in its capabilities only as a tool for doing research: programming our own experiments and simulations on it, that is all. We're perfectly capable of recompiling our code if needed, and we can learn to use a processor to its fullest.
<i>IM sure he has the 4k to shell out for a processor which has roughly 300 applications for</i>
What were you thinking? That I plan on running Doom 3 on Madison? Of course not. Madison is a server/high-performance solution, and you're thinking <i>desktop</i>.

And it's a departmental/institutional purchase and many people might be using the computer. How do you guys think that institutions purchase Xeon racks? With spare change? (btw, we've got a few of those around too)
 
Most of the high-end systems are built to do one or two things specifically. The lower down the "food chain" in terms of market, the bigger install base you have in terms of applications. Server markets have traditionally not had a very difficult time transitioning to new architectures, because there isn't such a huge install base of software to be ported as there is in the desktop world, and server software companies are usually very quick to port their software.

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
 
<i>The lower down the "food chain" in terms of market, the bigger install base you have in terms of applications.</i>
That makes sense, and that's what I was talking about. The Xeon racks we have around here don't have a large software base at all; in fact, they mostly just have an operating system. You can then use them remotely with any software that anyone around this physics institute cared to write. You can also recompile.

But you're definitely not going to be using any proprietary software at all: Red Hat Linux and open-source programs only. Completely different from "lower down the food chain".
 
In Québec there's a new supercomputer for the Université de Montréal with only the OS on it; you can rent the machine for a block of time and test whatever you want, for free if you're coming from a university as a scientist. It's a supercomputer shared by four universities.

[-peep-] french
 
Yes, ISV support in exchange for good, bug-free coding for nVidia cards.

Neverwinter Nights was a good example of nVidia's grip: a Radeon 9700 runs it about as fast as a GeForce 2, and the DX8 features are disabled in the game if you don't use an nVidia card.

That's part of buying an nVidia card: you buy more than the hardware itself, you buy the drivers, the game support, and an assurance that most games and apps will run fine and fast.

It's like the way Intel uses its compiler and publishes optimization guides for Xeon and Itanium; their chipsets also have an edge.

SPEC is a platform test. Compiler, CPU, chipset and OS all need to be good at everything.

[-peep-] french
 
That is false. nVidia made sure NWN would NOT be coded for ATi at all. There is a difference between encouraging coding for your platform and telling the programmer to shut out every other platform. This is not competition, this is unfair cheating.
Just like with the drivers recently, nVidia is only cheating, not providing fair competition.

This "The Way It's Meant to be Played" BS is just PR from nVidia, and anyone who falls for it has to be a damn fool or an nVidiot.

--
If I could see the Matrix, I'd tell you I am only seeing 0s inside your head! :tongue:
 
Yes, a cheap move, sure, but I don't care; I want my game to work. It was working, but I shouldn't have to read a FAQ to make it work. Did the programmers put in anti-ATI code? I don't think so.

nVidia cheats, yes, but ATI had the Quack issue and buggy 8500 drivers on AMD in the early days.

Both have a poor record lately, since the target audience is younger and has nothing to do with corporations. Have you heard of cheating or ranting around the Quadro or FireGL lines? No, because those relate to corporations. Do you hear Intel or AMD babbling? Almost never, especially on Intel's side: they never leak something before they need to, they don't create hype, they stay professional. AMD should do the same and rein in their fans; the little stories don't help AMD sell big systems in that market, and those IT managers are sensitive to reputation.

[-peep-] french
 
<i>Yes, a cheap move, sure, but I don't care; I want my game to work.</i>
Two things:
1) Sure, if you like clippy graphics and low-quality images, just for better performance.
2) Most GF FX owners have had nothing but instabilities operating their lovely "stable and high quality" NV3x.

<i>nVidia cheats, yes, but ATI had the Quack issue and buggy 8500 drivers on AMD in the early days.</i>
The difference is that ATi did it once, ADMITTED TO IT, and stopped. nVidia still cheats, still DENIES it, and refuses to remove the detection cheats; ATi removed them in Catalyst 3.5. If nVidia continues, I won't care at all if they go under; it would be their own undoing. We consumers will only buy from the companies that deliver solid products and provide us with some reliability, like with drivers. Can the same be said about nVidia?

--
If I could see the Matrix, I'd tell you I am only seeing 0s inside your head! :tongue:
 
<i>2) Most GF FX owners have had nothing but instabilities operating their lovely "stable and high quality" NV3x.</i>

I don't own a GeForce FX.

<i>Sure, if you like clippy graphics and low-quality images, just for better performance.</i>

That's what I got in NWN on my ATI card; on my GeForce it runs fine.

<i>The difference is that ATi did it once, ADMITTED TO IT, and stopped. nVidia still cheats, still DENIES it, and refuses to remove the detection cheats; ATi removed them in Catalyst 3.5. If nVidia continues, I won't care at all if they go under; it would be their own undoing. We consumers will only buy from the companies that deliver solid products and provide us with some reliability, like with drivers. Can the same be said about nVidia?</i>

Whether they cheated or not is not my point; the point is that ISVs support nVidia more than anyone else. Also, the cheats were only present in 3DMark03.

[-peep-] french
 
nVidia's support is starting to fall, that's how I see it.
Their DirectX 9 performance is weak. CineFX isn't successful at all because it is proprietary.
ATi plays by the general rules. Who will be chosen in the long run?
We'll see soon, but this "The Way It's Meant to be Played" alliance is bullcrap; even UT2003 isn't THAT pro-nVidia, and I'd be hard-pressed to believe it has ANY nVidia optimizations that ATi didn't get.

--
If I could see the Matrix, I'd tell you I am only seeing 0s inside your head! :tongue:
 
<i>We'll see soon, but this "The Way It's Meant to be Played" alliance is bullcrap; even UT2003 isn't THAT pro-nVidia, and I'd be hard-pressed to believe it has ANY nVidia optimizations that ATi didn't get.</i>

But they were quick not to put in any SSE instructions, to make sure AMD looks good.

[-peep-] french
 
If it supported SSE, it would support the Pentium 4 as well, logically.
Even then, I wasn't aware that SSE and SSE2 had THAT much of an impact in games. Have you seen the latest "improvements" from them? Are there any? Does anyone have proof that they can help a lot?

--
If I could see the Matrix, I'd tell you I am only seeing 0s inside your head! :tongue:
 
Well, for games I don't really know, because they're mostly not open source, so they can't be recompiled into different versions for testing at all (that would be illegal)...

So it's hard to tell. Aces Hardware has posted an interesting <A HREF="http://www.aceshardware.com/" target="_new">news article</A> which compares various compilers with and without the P4's SSE2 optimizations when compiling a floating-point benchmark (a logical area to look for differences, as the P4's x87 FPU isn't the greatest and could use the help). In that article, you can see how enabling SSE2 makes a huge difference when using Intel's compilers. I do not know, however, which compiler is mostly used for games, but using an Intel one should make a huge difference, while using GCC, for example, will cripple Intel's platform rather considerably. Unreal Tournament could, for instance, be using GCC instead of Intel's compiler. What would be even more "conspiracy-theory" would be to consider that AMD might even have paid the UT developers to do so... it would make sense in my mind.
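
For anyone who wants to try the comparison themselves, here's a minimal sketch of that style of test, not the actual benchmark from the article: build the same little floating-point kernel with different compilers and flags and compare the times.

<pre>
/* Minimal sketch (not the benchmark from the Aces Hardware article): time the
 * same small floating-point kernel built by different compilers/flags and
 * compare the results. With ICC and SSE2 enabled the loop below typically
 * vectorizes; with GCC 3.x it stays scalar. */
#include <stdio.h>
#include <time.h>

#define N 1000000

static double a[N], b[N], c[N];

int main(void)
{
    int i, rep;
    clock_t t0, t1;

    for (i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

    t0 = clock();
    for (rep = 0; rep < 100; rep++)
        for (i = 0; i < N; i++)
            c[i] = a[i] * b[i] + c[i];   /* multiply-add kernel */
    t1 = clock();

    /* print c[0] so the compiler cannot discard the loop as dead code */
    printf("c[0]=%f  elapsed=%.2fs\n", c[0], (double)(t1 - t0) / CLOCKS_PER_SEC);
    return 0;
}
</pre>

Building that with ICC plus SSE2 versus GCC and comparing the timings is roughly the experiment behind the graph linked below, just done much more carefully.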

What kind of operations does a game do, anyway? More integer operations or more floating-point ones?... I can't actually get a grasp on that... But I would tend to think that they're more floating-point-oriented.

<i>Edit: If you're interested but don't want to dig through Aces Hardware, you can just check <A HREF="http://www.aceshardware.com/files/news/images/flops_compiler_graph06302003.gif" target="_new">this graph</A>, which compares how much floating-point performance can be extracted from the P4 using different compilers and different compiler settings. Compilers from Intel, GCC and Microsoft's Visual Studio are compared. Check it out, it's interesting.</i>