Intel Xeon and AMD Opteron Battle Head to Head


zeezee

Distinguished
Jun 19, 2004
142
0
18,680
C'mon, let's be fair. A typical decision to buy an Opteron or a Xeon is not based on their WinRAR or DivX encoding performance.

In addition, this is what they say in the article:

We arranged to get two 1U server systems from the UK-based OEM and channel provider Boston Ltd...

Today we will be looking at processors from these two firms designed for the server market.

Though sometimes they refer to the processors or the market as "Server/Workstation".

If they really wanted to look at workstation performance, why on earth would they order a server configuration with an onboard graphics controller, etc. from a Linux cluster specialist and, funnily enough, install Win2K3 Enterprise, which is surely not a workstation OS?

Even if we assume that they benchmarked these processors' workstation performance, they could still have mentioned their intent, e.g. "This article reviews these processors' workstation performance", or something along those lines.

Nope. I believe this article is poorly written and horribly biased but equally entertaining.
 

zeezee

Distinguished
Jun 19, 2004
142
0
18,680
One more spectacular quote:

At over 40 W, the RAM consumes a large amount of power...

Somebody better tell them that [the amount of] power is measured in Watts. At 40 Watts, the RAM consumes 40 Watts of power.
 

bga

Distinguished
Mar 20, 2006
272
0
18,780
C'mon, let's be fair. A typical decision to buy an Opteron or a Xeon is not based on their WinRAR or DivX encoding performance.
Not on its own, but along with the other benchmarks it measures CPU application performance. Certainly video encoding is something many people use workstation (and server) CPU performance for today.

We arranged to get two 1U server systems from the UK-based OEM and channel provider Boston Ltd...
Today we will be looking at processors from these two firms designed for the server market.
Though sometimes they refer to the processors or the market as "Server/Workstation". If they really wanted to look at workstation performance, why on earth would they order a server configuration with an onboard graphics controller, etc. from a Linux cluster specialist and, funnily enough, install Win2K3 Enterprise, which is surely not a workstation OS?

Xeon and Opteron are used extensively both in workstations and in servers. As long as they are testing CPU-bound applications, there is no problem using a server as the testbed. Any graphics testing would not be valid with such underpowered graphics, but they are not running any benchmark that depends on graphics performance. And Win2K3 does not perform that differently from Windows XP on CPU benchmarks, so no problem there either.

Nope. I believe this article is poorly written and horribly biased but equally entertaining.

Apart from the error you found :trophy: in the article on power usage, I cannot agree with you. The tests are valid for a comparison of CPU performance, mostly for workstation usage, but also with some server relevance (like video encoding, e.g. a RealMedia/Windows Media server).
 

bga

Distinguished
Mar 20, 2006
272
0
18,780
One more spectacular quote:

At over 40 W, the RAM consumes a large amount of power...
Somebody better tell them that [the amount of] power is measured in Watts. At 40 Watts, the RAM consumes 40 Watts of power.

What's wrong with what they are writing? It makes perfect sense to me.
The sentence shows they think that 40 W of power draw for RAM is a lot, which is certainly a drawback of FB-DIMM memory (another RAM failure from Intel, in my opinion).
 

SinaM

Distinguished
Oct 12, 2006
4
0
18,510
Hi there friend,

Just wanted to quote a little piece from Intel's website:

"In modern mainstream processors, x86 program instructions (macro-ops) are broken down into small pieces, called micro-ops, before being sent down the processor pipeline to be processed. Micro-op fusion "fuses" micro-ops derived from the same macro-op to reduce the number of micro-ops that need to be executed. Reduction in the number of micro-ops results in more efficient scheduling and better performance at lower power."

I hope that helps? :)

As for the server/workstation comment... as mentioned before, it has been noted :)

Thanks for the feedback!


Cheers
 

zeezee

Distinguished
Jun 19, 2004
142
0
18,680
Certainly video encoding is something many people use workstation (and server) CPU performance for today.

Xeon and Opteron are used extensively both in workstations and in servers.

And Win2K3 does not perform that differently from Windows XP on CPU benchmarks, so no problem there either.

but also with some server relevance (like video encoding, e.g. a RealMedia/Windows Media server).

No comments on the above. I guess you are arguing just for the sake of arguing. I only wish you had seen/played with a Win2K3 Server once in your life.
 

zeezee

Distinguished
Jun 19, 2004
142
0
18,680
Hi there friend,

Just wanted to quote a little piece from Intel's website:

"In modern mainstream processors, x86 program instructions (macro-ops) are broken down into small pieces, called micro-ops, before being sent down the processor pipeline to be processed. Micro-op fusion "fuses" micro-ops derived from the same macro-op to reduce the number of micro-ops that need to be executed. Reduction in the number of micro-ops results in more efficient scheduling and better performance at lower power."

I hope that helps? :)

As for the server/workstation comment... as mentioned before, it has been noted :)

Thanks for the feedback!


Cheers

Let's limit ourselves to the test. Intel's quotes on their website are far from impressing me, as they are manipulative, inaccurate and one-sided. The same site claimed that Intel was the performance leader when Opterons were mopping the floor with their Xeons. Anyway, let's stick to the discussion.

Yes, if you look only at the total energy needed to execute the program, i.e. when the OS does nothing else but execute that particular program, the energy consumption will decrease (and the performance will increase) due to the optimization of micro-ops within a single x86 instruction (micro-op fusion) and the optimization across multiple x86 instructions (macro-op fusion).

Consequently, the total energy the Xeon needs to finish the task will decrease (watt-hours), but the power it draws at any given moment with this optimized scheme will increase. Imagine: with the unoptimized code, the integer/FP execution units would wait until the memory loads had completed, or the loads/stores would stall waiting for calculation results. With the optimized code, all units work at the same time, resulting in quicker execution (a performance increase) but increased power per unit of time.

Therefore, unless you attached the power meter to the DivX encoding task itself, you should have recorded higher power consumption per minute but fewer minutes to execute the program. And this isn't what happened in your test.
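
To put numbers on it (these figures are made up purely for illustration, they are not from the article): energy is power multiplied by time, so a fused run can draw more watts and still cost fewer watt-hours, which is exactly what a "power under load" snapshot cannot show.

/* Made-up numbers, just to separate watts (power) from watt-hours (energy). */
#include <stdio.h>

int main(void)
{
    /* Hypothetical run without op fusion: lower draw, longer run time. */
    double unfused_watts = 80.0, unfused_hours = 1.00;
    /* Hypothetical run with op fusion: higher draw, shorter run time.  */
    double fused_watts   = 90.0, fused_hours   = 0.80;

    printf("unfused: %.0f W x %.2f h = %.0f Wh\n",
           unfused_watts, unfused_hours, unfused_watts * unfused_hours);
    printf("fused:   %.0f W x %.2f h = %.0f Wh\n",
           fused_watts, fused_hours, fused_watts * fused_hours);
    /* 80 Wh vs 72 Wh: the instantaneous reading goes up, the energy per
     * task goes down -- only a task-based measurement can capture that. */
    return 0;
}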

Finally, in a typical server environment (as in your test setup), there is always something for the CPU to do. If it finishes one task earlier, it will have something else to do, which will keep the power consumption at roughly the same level (this is my assumption; arguing that the CPU might sit idle once the task is complete will not change the overall concept. If the CPU runs idle, that means a more powerful CPU than needed was incorrectly chosen for the server).

Sorry, too much talk. The op fusions are great design improvements which improve CPU performance. As far as power saving is concerned, they come nowhere near explaining a 30% difference in power consumption in your test. Therefore, I still stand by my position.

On a side note, after selling power-hungry heating equipment for years, with IT departments paying higher electricity bills than the production machinery, Intel will naturally relate any improvement in their design to power savings on their website. That's why I find their explanations manipulative and inaccurate.

And one question: how did you configure the Win2K3 Enterprise installation? Which server roles did you assign?

Thanks & Regards
 

bga

Distinguished
Mar 20, 2006
272
0
18,780
I only wish you had seen/played with a Win2K3 Server once in your life.

I only work with them daily. Before Windows Server 2003 I worked with Windows 2000 Server, Windows NT4 Server, Windows NT 3.51 Server, OS/2 2.0, and OS/2 LAN Manager Server versions 1 and 2. In fact, I evaluated and specified OS/2 LAN Manager Server (and Windows NT Server) as a standard in a major multinational corporation.
So yes, I have seen/played way too much with Windows Server for my own good :lol:
 

bga

Distinguished
Mar 20, 2006
272
0
18,780
Intel's quotes on their website are far from impressing me, as they are manipulative, inaccurate and one-sided.

Well, technically Intel is correct, if you presume that better performance leads to shorter execution time, and that the processor can then return to a power-saving state earlier.

Consequently, the total energy the Xeon needs to finish the task will decrease (watt-hours), but the power it draws at any given moment with this optimized scheme will increase.

Why would it increase? Real power savings are only made when parts of the CPU can be shut down, and power management is not so fine-grained that optimized code execution leads to increased power usage. Sorry, CPUs don't work that way (as you saw in the test).

Finally, in a typical server environment (as in your test setup), there is always something for the CPU to do.

Any proper test should disable these services. I am sure that THG doesn't run with AD, DHCP, IIS or any such things enabled when benchmarking CPUs.

As far as power saving is concerned, they come nowhere near explaining a 30% difference in power consumption in your test. Therefore, I still stand by my position.

As I said earlier, here we agree: Intel is a little quick to claim op fusion as a power saver, even though theoretically it is true (if the processor can return to a power-saving state), and it should not be repeated uncritically by journalists. Anyway, the Core architecture has tons of power-saving features due to its mobile roots. Better to mention them and the 65 nm process than to confuse the picture with the relatively irrelevant op fusion.
 

Johanthegnarler

Distinguished
Nov 24, 2003
895
0
18,980
You are very right, because in the end... no matter how much we want to fulfill our dorky needs... the companies our companies make us buy from are really in control.

At least in my company I can only make suggestions; then the big guys talk to the salespeople on the other end and get pwnt. Then I set them up... and then we do it all over again.

Luckily our servers are useless. The automotive sales industry is so freakin' boring and lame... it's not even worth the effort to try and have a sweet server. They just buy what is sold to them.
 

zeezee

Distinguished
Jun 19, 2004
142
0
18,680
Intel's quotes on their website are far from impressing me, as they are manipulative, inaccurate and one-sided.

Well, technically Intel is correct, if you presume that better performance leads to shorter execution time, and that the processor can then return to a power-saving state earlier.

That's what I am saying. Just tell me whether that is what happened in the test. Their power consumption numbers are analyzed under three headings: idle, idle with power management, and CPU load. Their measurements are not task-based.

Unless they measured the total energy to complete a task, say DivX encoding, or the total consumption until the slowest of the three CPUs had finished executing whatever workload they ran, micro/macro-op fusion will effectively show up as higher power consumption.

Consequently, the total energy the Xeon needs to finish the task will decrease (watt-hours), but the power it draws at any given moment with this optimized scheme will increase.

Why would it increase? Real power savings are only made when parts of the CPU can be shut down, and power management is not so fine-grained that optimized code execution leads to increased power usage. Sorry, CPUs don't work that way (as you saw in the test).

I have a problem with what I see in the test; that's why we are having this discussion. For the sake of my nerves, let's not use this test as an example of anything.

As far as the way CPUs work goes, if a CMP/jump pair is executed in a shorter amount of time as a result of macro-op fusion, more CMP/jump pairs will be fused and executed. This will reduce the idle time in the execution units, and they will add, multiply, subtract, AND, OR and XOR more numbers.
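
As a rough sketch (my own example with an ordinary loop, nothing taken from the article): the loop test below normally compiles to a CMP followed by a conditional jump, and it is exactly that pair that macro-op fusion merges into a single micro-op, freeing a slot every iteration for the actual arithmetic.

/* Illustration only: the loop condition "i < n" typically compiles to a
 * CMP followed by a conditional jump (Jcc). On a Core-based Xeon the
 * decoder can fuse that CMP/Jcc pair into one micro-op, so the branch
 * costs one issue/retire slot per iteration instead of two, leaving
 * more slots for the adds. The fusion is done in hardware; the C code
 * itself does not change. */
#include <stdio.h>

int main(void)
{
    long sum = 0, n = 1000000;

    for (long i = 0; i < n; ++i)   /* "i < n" -> CMP + Jcc, a fusable pair */
        sum += i;                  /* the work that fills the freed slot   */

    printf("%ld\n", sum);
    return 0;
}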

Hope you see my point.

Finally, in a typical server environment (as in your test setup), there is always something for the CPU to do.

Any proper test should disable these services. I am sure that THG doesn't run with AD, DHCP, IIS or any such things enabled when benchmarking CPUs.

Not just those. Since you work with server OSes every day, tomorrow log on to a Win2K3 box, right-click My Computer and select Properties. Go to the Advanced tab and click Settings in the Performance group. You will understand what my problem is when you see the Processor scheduling group in the window that opens.

On top of everything, what is the logic of using an operating system in a test while disabling the functions, such as DHCP, IIS, AD, etc., that the operating system is designed to run?

Funnily enough, you are probably right: most likely they didn't assign any of these roles during the test. They must have simply installed Win2K3 and run the tests... but not with the intention of a pure CPU performance measurement. Otherwise, one of the authors wouldn't have responded immediately with a copy/paste from Intel's site :)

Anyway, have fun.
 

dean7

Distinguished
Aug 15, 2006
1,559
0
19,780
Yeah... sadly, I think that's how most large organizations are. Once you get to 10,000+ employees, with server engineering teams of 40+ people, it's not like you can just read this THG article and say "hey everybody, let's buy this CPU!"

Now, if you were smart and felt very strongly about it, you could arrange a departmental meeting and do a presentation. But by the time you do the presentation and shift the buying around to CPU X for application type Y, a newer CPU will already be out.
 

bga

Distinguished
Mar 20, 2006
272
0
18,780
Once you get to 10,000+ employees, with server engineering teams of 40+ people, it's not like you can just read this THG article and say "hey everybody, let's buy this CPU!"
Now, if you were smart and felt very strongly about it, you could arrange a departmental meeting and do a presentation. But by the time you do the presentation and shift the buying around to CPU X for application type Y, a newer CPU will already be out.

That's a heavy organisation :!: At the end of the 1980s I worked for a 230,000+ employee company, and I was charged with specifying servers and establishing standards. The standards had to go through an IT committee, but those people were more interested in the business and organizational aspects of the standards than in the technical stuff. I pretty much decided alone what technologies to buy, and from whom within the 3-4 pre-approved vendors. Even today the same company does not have more than 4 people in that role, so 40+ people just for evaluating server technology - that's a big group to discuss with.
Does your company (if you work there?) still buy P3 servers? :lol:
 

none34

Distinguished
Oct 31, 2006
1
0
18,510
I think comparing the performance under different OSes would be fairer, because I hear that AMD does not perform as well under Windows. I advise running the jobs under Linux.
 

hdc090360

Distinguished
Nov 1, 2006
1
0
18,510
I think one point that nobody has made is: YES, the 5100-series processors are superior to the much older Opteron! No surprises there. However, if anyone thinks that the Xeon would be as fast, as low-powered, and have 64-bit extensions if not for AMD, then go read an Intel processor roadmap from 3 years ago. Our servers would all be running Itanics or 32-bit Netburst (shudder), and processors would cost twice as much.

Three cheers to Intel for releasing a great processor. Now let's get behind AMD so processors keep getting better and cheaper. We need both companies.

P.S. I can't work out why you would run desktop benchmarks on a server and expect them to mean anything. The processor in a server is only part of the story.
 

Belaird

Distinguished
Nov 2, 2006
1
0
18,510
I'm sorry, but in my opinion these benchmarks have no valid bearing on a server environment. They are fine for the desktop and the power user, but in real business-world use, under extreme loads and performance requirements, they don't compare.
I have used both Intel Xeons and Opterons side by side in heavy transaction server applications as well as in a VMware ESX environment, and the Opteron still wins. The Woodcrest series does close the gap, and is very close to the new Opteron Rev. K series, but in a 70-100k transactions/sec environment my money goes to the more cost-effective solution, the Opteron. I've switched almost 50% of my server environment to Opteron, and unless Intel can produce a box that can really deliver 30-50% better performance per cost than the Opterons, I have no reason to switch.
 

songkeung

Distinguished
Nov 3, 2006
3
0
18,510
I just wonder whether anyone has taken a look at the 4-way (dual-core) servers. It seems Intel has not yet brought its new micro-architecture to its Xeon MP series.

As a result, for 4-way dual-core servers, AMD still beats Intel hard.

HP DL580 G4 (Intel CPUs) - SPECfp_rate = 105
HP DL585 G3 (AMD CPUs) - SPECfp_rate = 154

Source: http://www.spec.org
 

bga

Distinguished
Mar 20, 2006
272
0
18,780
I just wonder whether anyone has taken a look at the 4-way (dual-core) servers. It seems Intel has not yet brought its new micro-architecture to its Xeon MP series.
As a result, for 4-way dual-core servers, AMD still beats Intel hard.
Please look at the earlier discussion in this thread before posting. Yes, Xeon MP is based on Netburst and is therefore deficient. Tigerton is Intel's codename for the Core 2-based Xeon MP 4-socket, 16-CPU :!: platform.
 

bga

Distinguished
Mar 20, 2006
272
0
18,780
I'm sorry, but in my opinion these benchmarks have no valid bearing on a server environment. They are fine for the desktop and the power user, but in real business-world use, under extreme loads and performance requirements, they don't compare.
As discussed earlier in this thread, THG is using an AMD and an Intel platform that are as similar as possible, and then trying to compare Xeon against Opteron as a pure CPU comparison. It does not claim to be a server test, even though it is using server hardware and a server OS (which is indeed confusing).

I have used both Intel Xeons and Opterons side by side in heavy transaction server applications as well as in a VMware ESX environment, and the Opteron still wins.

That is a rather strange result :eek: Nobody I have heard of has gotten similar results. The Xeon 5100 clearly wins in every server test I have seen.

For real server tests, please look at:

http://www.anandtech.com/IT/showdoc.aspx?i=2793

http://www.anandtech.com/IT/showdoc.aspx?i=2772&p=11

Xeon Woodcrest clearly wins. Maybe you should look at your setup to see if there are any problems with the testing methodology or a configuration error.

If you want to see a clear Opteron win, go to 4-socket systems, where Intel's Xeon MP is still Netburst-based. Look at:

http://www.anandtech.com/IT/showdoc.aspx?i=2745&p=5

Ouch, that must hurt for Intel :twisted: I don't really know why anybody would buy Intel's Netburst-based Xeons. They are beaten in 2P and badly beaten in 4P.
 

diplomat696

Distinguished
Nov 30, 2004
275
0
18,780
I have a question for you guys. I was talking to a friend last night about the Intel Xeon 3.2 GHz 130 W dual-core processor (65 nm).

On paper this looks like it is faster than the Core 2 Duo Extreme, however I am sure that this probably isn't true based on the prices.

Can someone explain to me which is the faster chip for a workstation PC between, say, the Xeon I mentioned and a Core 2 chip (not a Core 2 Extreme, perhaps the E6400 or something like that, which is in a similar price range to the Xeon I mentioned above)?

3.2 GHz with 4 MB cache sounds good to me, but I'm guessing there is something in the CPU architecture of the Xeon which is either older or doesn't run as fast when compared to the Core 2 Duos.

Any information would be greatly appreciated.
 

bga

Distinguished
Mar 20, 2006
272
0
18,780
I have a question for you guys. I was talking to a friend last night about the Intel Xeon 3.2 GHz 130 W dual-core processor (65 nm). On paper this looks like it is faster than the Core 2 Duo Extreme, however I am sure that this probably isn't true based on the prices.
No, it is based on the older Netburst architecture.

Can someone explain to me which is the faster chip for a workstation PC between, say, the Xeon I mentioned and a Core 2 chip (not a Core 2 Extreme, perhaps the E6400 or something like that, which is in a similar price range to the Xeon I mentioned above)?

On pure CPU performance, the E6400 is probably 10-20% faster than the 3.2 GHz Xeon (Dempsey), from what I can see in a quick look at THG's CPU guide.
But the Xeon platform (motherboard and RAM) is much more expensive than the Core 2 platform, so for the same money you will get much more performance from a Core 2 platform. If you need a two-processor platform or massive amounts of RAM (over 4 GB), then consider the lower-clocked 5100-series Xeons, which are Core 2 based. They will still be faster than a Netburst-based Xeon.
 

diplomat696

Distinguished
Nov 30, 2004
275
0
18,780
Yeah, that's what I figured. I need to stop my friend from buying one of those and tell him to get a damn Core 2 Duo ASAP :)

Or maybe I shouldn't, and just pwn his benchmarks after he gets everything running, mwahahahahaahhaha
 

turpit

Splendid
Feb 12, 2006
6,373
0
25,780
Good article, but its a moog point. New technology beats old technology. Big news there. :roll:

So Opteron beating Netburst grade chips was a moot point as well? New Technology beating old technology?



No No No...It was a "moog" point. This type of point is one made by "horde" moops, unable to comprehend the meaning of the word "moot" :wink: