Opteron 150 vs. Xeon 3.6 Nocona

darko21

Our preliminary look at Intel's 64-bit Xeon 3.6GHz Nocona (which happens to be identical to the Intel 3.6F Pentium 4) stirred up a bit of controversy. The two largest concerns were:

We tested Intel's Xeon server processor against an Athlon desktop CPU.
We chose poor benchmarks to illustrate the capabilities of those processors.
Fortunately, with the help of the other editors at AnandTech, we managed to complete an entire retest of the Nocona platform against an Opteron 150 CPU. We also managed to find an internet connection stable enough for this editor to redraft an entire performance analysis on his vacation.

<A HREF="http://www.anandtech.com/linux/showdoc.aspx?i=2163" target="_new"> Opteron 150 vs. Nocona 3.6 </A>

If I glanced at a spilled box of toothpicks on the floor, could I tell you how many are in the pile? Not a chance. But then again, I don't have to buy my underwear at Kmart.
 
We chose poor benchmarks to illustrate the capabilities of those processors.
**ROFL** It wasn't any better this time around, but at least it's better than nothing. Now if only the gross disparities could be <i>explained</i>. Is it the processor, the benchmark, or even the compiler that makes the differences so large? Or is there even another reason?

<pre><b><font color=red>"Build a man a fire and he's warm for the rest of the evening.
Set a man on fire and he's warm for the rest of his life." - Steve Taylor</font></b></pre>
 
Congrats to Anand for actually listening to well-founded criticism, and responding so quickly. I haven't fully read it yet, just glanced over it, but so far it looks like:
1) They did a decent job this time, listening to valuable input from their readers. THG, take notice!
2) Nocona gets its ass whipped in pretty much any application benchmark, sometimes really badly too.
3) We *still* have no 32- and 64-bit numbers for gauging EM64T 🙁

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 
I like Anand's comment about the Opteron being "the highest performing AMD workstation money can buy,"
while the "Xeon 3.6/Pentium 4 3.6F is the highest performing Intel workstation money can't buy."
Now I am waiting for some "interesting" posts from certain people.
Bob
 
Personally, I think if he thinks multiprocessor configurations will benefit Nocona over the Opteron, he is dreaming.

"After all is said and done it became difficult (nearly impossible?) to justify the Xeon processor in a UP configuration over the Opteron 150, <font color=red>but perhaps we will see significant changes in dual and four way configurations.</font> We have Linux benchmark shootout between the two processors coming up, as well as a Windows analysis too."


Just a guess here, but I'd think 90nm Opterons at 2.6GHz will be widely available before Nocona @ 3.6 (2 or 3 weeks). **EDIT** It could be a little longer for server chips, but from what I have read, 90nm mobiles are shipping right now, so it could be a couple of months for Opterons on 90nm. **EDIT** If they make one with 2MB of cache and the extra features (SSE3, improved prefetch, etc.), it will be embarrassing for Intel, especially in multiprocessor systems. And I've got a feeling a 64-bit OS with 64-bit apps will benefit the Opteron even more.

Just a guess, time will tell.





If I glanced at a spilled box of toothpicks on the floor, could I tell you how many are in the pile? Not a chance. But then again, I don't have to buy my underwear at Kmart. <P ID="edit"><FONT SIZE=-1><EM>Edited by darko21 on 08/12/04 06:18 PM.</EM></FONT></P>
 
Now I am waiting for some "interesting" posts from certain people.
Don't hold your breath. The certain nameless fanboi who started the certain "interesting" thread plugging the botched results...I expect he will quietly ignore the new, more accurate results, now that they don't actually support his position.

As for Kubicki's Linux expertise or lack thereof, he seems knowledgeable enough about Linux, judging from his comments. I think he just got in a hurry and got sloppy the first time around. Probably his limited time with the Xeon box had a lot to do with that. Not saying it was an excusable mistake, just a plausible scenario.

<i>"Intel's ICH6R SouthBridge, now featuring RAID -1"

"RAID-minus-one?"

"Yeah. You have two hard drives, neither of which can actually boot."</i>
 
I am glad they made adjustments and apologized for the problems beforehand. It takes a lot for a reviewer to do that, and it makes me feel better about the quality of their reviews. I've always respected the job Anandtech does, and after this I was pretty caught off guard. This new review may still not be perfect, but at least they admit that this time, and seem a bit more cautious in their take on the numbers.
 
My thinking is that Nocona did quite well.
It is almost a surprise that it runs 64-bit at all.
As far as I am aware, it is still memory-limited to <4 gigs.
It really hasn't had the development time needed to get real optimization in Linux.
Do I think it's as good as the Opteron? Almost.
Will the dolts who buy Xeon over Opteron be happy? Probably.
EM64T seems to work. Looks like we will be stuck with X86 for a while yet.
 
This is obviously an interesting article!

However, I'm inclined to think that the gargantuan difference suggested between Intel and AMD is exaggerated. It's probably not that big. It is well known that ICC produces dramatically better results on the P4/Netburst architecture than GCC does; after all, if Nocona were really that far behind Opteron, P4s would be lagging terribly behind the A64, which they're not. Not by that huge a margin, anyway. And previous Xeon generations were <i>not so terribly overpowered</i> by Opteron. They were, of course, overpowered, and I'd still expect them to be, but this is a little too much.

Most people I know use ICC when dealing with Intel architecture.

I'm sure some of you will think that I'm pulling this out of my hat, but there's actual proof of that. Take a look at this quote from Aces:
As you can see from the results, <b>Intel's C/C++ compiler is the leader</b> when SSE2 is used in all but the second test, where it is beaten by MS Visual Studio .NET 2003. In five of the tests, ICC 7.1 with SSE2 puts quite a bit of distance between it and the rest of the field, presumably it is able to vectorize these tests. In contrast, GCC 3.2 is often the weakest seen in these results. As Andi Kleen explains on the General Message Board, <b>GCC is not particularly optimized for the Pentium 4</b>
I know many of you will go screaming at me because the AMD64 architecture also features SSE2, but I'm venturing a guess that ICC would benefit Intel more than AMD, and change the performance figures by a lot. The full compiler comparison can be found <A HREF="http://www.aceshardware.com/read_news.jsp?id=75000387" target="_new">in this link</A>. Granted, this is old, but I seriously doubt that in one year GCC became faster than ICC for Intel's CPUs. Would so much change? (Does anyone have more info on that? Did something change dramatically? Am I terribly off?)
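To make the compiler angle concrete, here is a rough sketch of the kind of same-source, different-compiler comparison being discussed. The file name `bench.c` is hypothetical, and the exact flags vary by compiler version; the ones shown are era-appropriate guesses for GCC 3.x and ICC 7.1, so check your own compiler's documentation before trusting them.

```shell
# Hypothetical invocations for a same-source compiler shootout.
# "bench.c" is a placeholder; flags are illustrative, not authoritative.

# GCC tuned for the Pentium 4 / Nocona, with SSE2 enabled:
gcc -O3 -march=pentium4 -msse2 -o bench_p4 bench.c

# GCC tuned for the Opteron (the k8 target appeared around GCC 3.3):
gcc -O3 -march=k8 -o bench_k8 bench.c

# ICC 7.1 generating SSE2 code for the Pentium 4 (-xW in that era):
icc -O3 -xW -o bench_icc bench.c
```

The point is that a "Linux shootout" compiled only with GCC measures GCC's Pentium 4 codegen as much as it measures the CPU itself.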

Heck, just consider that the results show Nocona trailing Opteron at half the performance, and only rarely does Nocona get near Opteron performance. I agree the Opteron is better, but not that much better. They said it themselves:
As we can see above, the difference between the two CPUs seems exaggerated and difficult to trust
Indeed, it does.

It's still highly disappointing that we have no idea of the Xeon's 64-bit capabilities... like 64-bit vs. 32-bit on Nocona, and 64-bit vs. 32-bit on Opteron. Why didn't they do that? It would be much more insightful.

<P ID="edit"><FONT SIZE=-1><EM>Edited by Mephistopheles on 08/13/04 02:11 AM.</EM></FONT></P>
 
Point is, just about *NO ONE* uses ICC under Linux, so using GCC for a "Linux shootout" seems not only reasonable, it's really the only option. For a Windows test, using MSVS would make the most sense.

You do bring up an interesting point though, and indeed, if the Pentium 4 doesn't fall as far behind the A64 on Windows as Nocona does here, it's quite likely that at least a big part of the difference is due to heroic compiler optimization efforts at Intel.

>Indeed, it does.

Funny, when Intel leads by some huge margin, there isn't any reason to doubt it? I don't find these results one bit surprising.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 
If ICC does show the opposite of the results with GCC, then what does that leave to conclude? If an Opteron runs better with GCC while the Xeon performs better with ICC, then it's a wash. The only thing you could do is compare the top score from GCC versus the top score from ICC, I suppose, but it seems hard to make any sort of comparison when the optimizations have gotten to such a point.
 
>Indeed, it does.

Funny, when Intel leads by some huge margin, there isn't any reason to doubt it? I don't find these results one bit surprising.
I never said that. You're right, it was also unbelievable that Nocona could lead by that huge an amount.

I mean, if one processor were so superior to the other, the other would have been eliminated by now, wouldn't it?
 
The only thing you could do is compare the top score from GCC versus the top score from ICC, I suppose, but it seems hard to make any sort of comparison when the optimizations have gotten to such a point.
Well, in theory, if I understand correctly, software only helps the hardware achieve a certain result or complete a task. Therefore, the best thing to do would be to compare each platform at its best: GCC with the Opteron and ICC with Nocona... Otherwise, you're <i>still</i> not comparing Intel's best with AMD's best.

Just my thoughts.
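As a toy illustration of that "best compiler per platform" idea, here is a minimal sketch. The scores below are made-up numbers for illustration only, not real benchmark results.

```python
# Sketch of the "compare each platform at its best" aggregation suggested
# above. Scores are hypothetical; higher is assumed to be better.

def best_per_platform(results):
    """For each platform, keep the highest score across compilers.

    results: {platform: {compiler: score}}
    Returns {platform: (best_compiler, best_score)}.
    """
    return {
        platform: max(scores.items(), key=lambda kv: kv[1])
        for platform, scores in results.items()
    }

results = {
    "Opteron 150": {"gcc": 100.0, "icc": 92.0},
    "Xeon 3.6 Nocona": {"gcc": 70.0, "icc": 88.0},
}

print(best_per_platform(results))
# Each platform is represented by whichever compiler flatters it most.
```

Of course, as noted above, this tells you peak potential, not what shipping binaries built with the common toolchain will actually do.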
 
Not if you are mainly benchmarking borderline cases, like tiny apps with a very limited set of instructions and a tiny memory footprint (like the prime-generating algorithm), where differences between CPU architectures can be HUGE; for instance, the working set could happen to fit into L2 on one CPU but not the other, or performance could be entirely dominated by the speed of a single feature (SSE performance, memory access latency, whatever...).

Also not when the result could be 200% algorithm-dependent (e.g., one 3D renderer could well be 2x faster on a certain CPU, while another program using a vastly different approach could show the exact opposite).

Therefore, the larger and more complex the benchmarked app becomes, the more telling the result is, IMHO, because the chance of giving unreasonable weight to a minor feature or performance factor becomes smaller. For that same reason, I think MySQL performance is much more telling (at least for database-type workloads) than prime or encryption algorithms (unless you buy a server to do nothing but that). And for that reason too, the best possible benchmark is the app you will actually run on that machine.
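The L2-fit point is easy to quantify. A classic byte-per-candidate prime sieve has a working set of roughly one byte per number tested, so whether it lives in cache depends entirely on the sieve limit and the cache size. The limit and cache sizes below are hypothetical, chosen only to show how the same tiny benchmark can sit in one CPU's L2 and spill out of another's.

```python
# Rough working-set estimate for a byte-per-number prime sieve,
# illustrating the cache-fit effect described above.
# The 512 KB vs 1 MB L2 sizes are illustrative, not tied to specific CPUs.

def sieve_footprint_bytes(limit):
    # A bytearray-based sieve of Eratosthenes stores one flag per candidate.
    return limit

def fits_in_l2(limit, l2_bytes):
    return sieve_footprint_bytes(limit) <= l2_bytes

L2_512K = 512 * 1024   # 524,288 bytes
L2_1M = 1024 * 1024    # 1,048,576 bytes

# A sieve up to 800,000 fits in a 1 MB L2 but not in a 512 KB one,
# so the same "tiny" benchmark stresses the two machines very differently.
print(fits_in_l2(800_000, L2_1M))    # True
print(fits_in_l2(800_000, L2_512K))  # False
```

Once the working set spills out of L2, the benchmark stops measuring the core and starts measuring the memory subsystem, which is exactly why such microbenchmarks can swing so wildly between architectures.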

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 
> GCC with Opteron and ICC with Nocona... Or else, you're
>still not comparing Intel's best with AMD's best.

I guess that depends what you are trying to determine. If you want to determine, which hardware/software combination could provide the fastest results, then yes.

If you want to give a more user-centric conclusion, like which hardware platform will likely perform best "overall" (for a certain OS), you should use the tools that are being used by the developers whose software you will run (which means GCC on Linux, and mostly MSVC and GCC for Windows). Maybe also ICC for small, critical pieces of code, to get an idea of "peak performance," but just about no one uses Intel's compiler for compiling complex projects on either Windows or Linux.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 
And for that reason too, the best possible benchmark is the app you will actually run on that machine.
For the end user, that much is undeniable. If you plan on using 3DSMAX, that's obviously the only benchmark that should interest you.

If you plan on running Doom, the same thing applies (in fact, NVIDIA knows that! That's why they're running that whole Doom propaganda!)
 
lmao!

How can practically the same CPU have such different test results?


Anandtech, wow, you just lost a lot of credibility with me.

-------
<A HREF="http://www.albinoblacksheep.com/flash/you.html" target="_new">please dont click here! </A>

Brand name whores are stupid!
 
Don't hold your breath. The certain nameless fanboi who started the certain "interesting" thread plugging the botched results...I expect he will quietly ignore the new, more accurate results, now that they don't actually support his position.

<b>I eagerly wait to see what he has to say.</b>

-------
<A HREF="http://www.albinoblacksheep.com/flash/you.html" target="_new">please dont click here! </A>

Brand name whores are stupid!
 
>how can practically the same CPU have such different test
>results?

Well, not accidentally mixing up the results helps, as does using the correct compiler flags..

>Anandtech, wow, you just lost a lot of credibility with me

On the contrary, I applaud them for listening to valid criticism and reacting the way they did, instead of the more typical THG approach of ignoring it or maintaining they were right (remember the fake P4 pic?). Anandtech, and Kristopher specifically, just gained a lot of credibility with me... it takes a man to admit you were wrong, and it takes dedication to rectify it so quickly.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 
yea good points.


I guess Anandtech ain't so bad...

-------
<A HREF="http://www.albinoblacksheep.com/flash/you.html" target="_new">please dont click here! </A>

Brand name whores are stupid!