AMD vs. Intel for Specific Server Apps

I also wish to go Opteron; however, the partners I'm dealing with view AMD as a poor, unreliable company (VERY hard to persuade).

Just get RealVNC and connect to my server for a demonstration.

www.top500.org - How many Xeons vs. Opterons do you see in the Top500? I mean, these old ones need to wake the hell up and smell the coffee... ignorance is not an excuse. How many Australian IT projects have failed in the last 6 months alone? Do they really want their names added to the growing list of failures? (Hell, they probably do.) Just look at the problems Customs had right before Christmas (really picked a good time, didn't they?). The cause was underpowered systems: they heavily overestimated what could be done, knew the risks, and did it anyway. (The one person at fault was advised it 'simply would not work' hundreds of times, but did it anyway.)

HDD performance isn't my rig's strongest point (see sig), but I am beating the iRAM drives in 'some' areas (not seek time, obviously) using a Tyan K8WE S2895 with Opteron 270s. (Don't bother with the 250s and the single-core stuff: get maximum benefit today, with x64 options for the future sitting there ready to go and 2 processor cores per socket.) Having the option to move up to 8 GB Reg ECC PC3200 (128x4), using NUMA (built into Windows XP Pro x64 Edition and the server editions), will help heaps in Terminal Server.

Not that Intel's Xeons are bad, but we've had enough issues with them anyway (mostly people thinking HyperThreading = double performance 😛, sad but very true; for most of the stuff we do it is better off disabled - I have a long document explaining in great detail exactly what happens, just mail me). The quad-Opteron is my 'home PC' (hehehe); I just wanted to hammer them and check for any differences. 😛 Intel need to start pushing those Xeon LVs really soon - they've lost significant market share to AMD's Opterons since around Q4 2003.

It is all about unbiased technical documentation, and using whatever performs best, with the best price/performance ratio, power consumption, TCO, reliability, etc. for the tasks at hand... I can help you here.

Note: You are not going to be using Ghost, or 'typical' imaging, on the server(s) anyway because of the RAID-5/6 controllers (it won't work; use their imaging / rebuild software), and that is the only real reason I continue to keep a few Intel systems around. (Intel IDE controllers work better with Ghost in a DOS environment, but that is about it.)

For workstations in a workplace, go Intel; there are advantages. But for the heavy metal, where performance is required and IDE controllers are not even being used, the same excuses don't really work, do they? 😛

The reason some admins fear anything but Intel is that they don't want to look like a dick, as they lack experience in everything but Intel CPUs, platforms, chipsets, etc.... They also fear being replaced by younger / smarter people.... It is a very lame excuse; they are only half a tech when you think about it. (Become familiar with ALL hardware computing platforms you lay your eyes on, and some you don't.... That is the best advice I can give you to help your workplace move to a new level, possibly with you at the helm one day.)

Also: Real Australians use http://www.StaticIce.Com.Au 😉
 
China's second-biggest cluster is Opteron-based.

Some of the largest military, scientific and academic users in the US and around the world use Opterons.

As I stated above, at LEAST 45 of the top 500 supercomputers on Earth are Opteron-based machines, and the vast majority of the Top 500 run Linux, BSD or Unix. Almost none run windoze.

This is despite the fact Intel has a virtual monopoly and has been involved in illegal anti-competitive behavior for decades.

Penguin Computing, HP/Compaq, IBM, Sun and others all make Opteron systems.

Please note that HP makes AMD systems, which is commendable, despite the fact that those systems compete against Intel - which happens to be HP's partner in developing the Itanium processor. Also, HP has been pushing Linux, and I applaud them for that.

I would recommend custom-built machines if that is possible; otherwise get them from a major vendor like Penguin Computing, HP/Compaq, IBM, Sun, etc.
 
I have to disagree with "HT is a minus".
In most cases HT brings you a lot more performance, especially in multitasking. It fails at DivX, but at most other tasks it is at least the equivalent of a non-HT CPU running at the same clock.
The pros heavily outweigh the cons.
HT isn't the innovation of the 21st century, but it certainly makes a difference - in most cases an improvement.
 

Mate, you are ****ing clueless, aren't you?

Where do I start?
- "HT brings you a lot more performance in most cases": no, you gain about +20% throughput at the cost of (possibly up to) double the response times. With SSE2 you may gain up to +99% (double), but such cases are 'extremely rare', not 'most cases' - not even close.
- "HT fails at DivX": no it doesn't, it performs about 85% better when HT is used.
- "The pros heavily outweigh the cons": half truth, you can argue it either way.
- Server tasks are usually memory-performance dependent; the CPUs would ideally sit at around 30% load (averaged over 6 hours) and memory is the main bottleneck, and HyperThreading hurts memory performance (read a TechDoc or WhitePaper one day and get a clue). Add to this that AMD now has a NUMA-capable platform which aggregates memory bandwidth (each processor acts like a northbridge for the other) to 12.8 GB/sec or 25.6 GB/sec - peaks, yes, but still well over the Xeons, which typically have a 6.4 GB/sec peak shared by 4 logical processors, which adds overhead. (See the rough numbers sketched after this list.)
- We are not talking about video encoding here (which, surprising to some, does not actually scale up/down with memory performance - it is mostly CPU and registers, so HT actually helps here, yet you claim it doesn't? Sheesh, Idiot, everyone here can show you HT helping DivX; care to RealVNC someone's box?). Nor are we talking about physics engines in games, or machines that are performing mostly authentication (sure they do some, but not 'mostly'), which, since en/decryption benefits from HT / high SSE2 performance, would perform better with HT on.
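A quick back-of-the-envelope version of that bandwidth point (peak figures only, taken from the post above; the socket and core counts assume a dual-socket Opteron 270 box vs. a dual-socket Xeon with HT on, and real sustained numbers will be lower):

    # Peak memory bandwidth per (logical) processor, using the peak figures above.
    # Illustrative only - these are theoretical peaks, not sustained throughput.

    # Dual-socket Opteron 270 with NUMA: each socket has its own dual-channel
    # PC3200 controller (~6.4 GB/s), so the platform aggregates to ~12.8 GB/s.
    opteron_sockets = 2
    opteron_bw_per_socket = 6.4                 # GB/s, peak
    opteron_cores = opteron_sockets * 2         # Opteron 270 = 2 cores per socket
    opteron_total = opteron_sockets * opteron_bw_per_socket

    # Dual-socket Xeon with HT on: one shared front-side bus feeding 4 logical CPUs.
    xeon_fsb_bw = 6.4                           # GB/s, peak, shared
    xeon_logical = 2 * 2                        # 2 sockets x 2 logical CPUs each

    print(f"Opteron: {opteron_total:.1f} GB/s total, "
          f"{opteron_total / opteron_cores:.1f} GB/s per core (peak)")
    print(f"Xeon:    {xeon_fsb_bw:.1f} GB/s total, "
          f"{xeon_fsb_bw / xeon_logical:.1f} GB/s per logical CPU (peak)")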

Sun Microsystems' implementation of multithreading was far more thought out. Go read about it; it should keep you busy and educated for about 2 weeks or so.

Also, when moving to an x64 server operating system, the Opterons gain more in Dhrystone / MIPS than the Xeons do.

HT doesn't run one thread +20% faster; it runs two threads, but instead of each being executed at half speed, they execute +20% faster than half speed. Hence the apparent gain. Anyone dealing with real-world applications knows this already.
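To put rough numbers on that (the +20% figure is the claim from the posts above; everything else is just the arithmetic that follows from it):

    # Throughput vs. response-time trade-off of HT, as described above.
    # The +20% figure comes from the thread; the rest is derived arithmetic.

    single_thread_rate = 1.0                 # work/sec for one thread with the core to itself

    # Two threads time-slicing one core with HT off: each runs at ~half speed.
    rate_no_ht = single_thread_rate / 2      # 0.50 each

    # With HT on, each of the two threads runs ~20% faster than that half speed.
    rate_ht = rate_no_ht * 1.2               # 0.60 each

    aggregate_no_ht = 2 * rate_no_ht         # 1.0  -> baseline throughput
    aggregate_ht = 2 * rate_ht               # 1.2  -> ~ +20% throughput

    # But any single request finishes slower than it would with the core to itself:
    response_alone = 1.0 / single_thread_rate   # 1.00 s for a unit of work
    response_ht = 1.0 / rate_ht                 # ~1.67 s while both threads are busy

    print(f"throughput gain: {aggregate_ht / aggregate_no_ht - 1:+.0%}")
    print(f"per-request slowdown vs. running alone: {response_ht / response_alone:.2f}x")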

http://www.xbitlabs.com/articles/cpu/display/replay.html
http://www.xbitlabs.com/articles/cpu/display/netburst-2.html
Get reading, and may I suggest, get learning as well. 😛

Also bear in mind that H/T has security flaws:
http://www.daemonology.net/hyperthreading-considered-harmful/
http://www.daemonology.net/papers/htt.pdf
Now I know you may like backdoors into Terminal Servers, but the admins don't want you there, remember?
 
I do not wish to be truculent, but this is not true. Hyperthreading hurts Intel's server performance and causes security issues.

Intel insists that their CPUs be benchmarked with Hyperthreading turned OFF.

As you can see here:

http://www.spec.org/cpu2000/results/res2006q1/

and here:

http://www.spec.org/cpu2000/results/cpu2000.html [ WARNING: VERY LARGE FILE ]

Intel CPUs score a bit better with Hyperthreading turned OFF in most server tasks.
 
Intel CPUs score a bit better with Hyperthreading turned OFF.

Should be edited to:
"Intel CPUs score a bit better with Hyperthreading turned OFF when used in most server based applications / roles... a minority of cases do exist where it helps. eg: a server performing almost only authentication tasks, where throughput of logins is more important than response".

Just for that 'all inclusive' finishing touch 😉

============================================
Now back to notes:

One must bear in mind that almost none of the servers in the original post are performing 'almost exclusively' authentication-based tasks. The Terminal Server is a so-so grey area..... However, an Opteron 270 would be a far better investment and would last quite some time.... It appears this 'department' doesn't upgrade nearly often enough, so it is justified TCO-wise: it may very well keep performing well enough, even for more users, for 1-2 years longer than it is required to, even if they 'forget' to upgrade it every 2 years or so to keep it scaling with demand.

If you have 100 users, try and plan for at least 150, if not 300. That way you save your own ass, and Government jobs are all about 'saving your own ass' these days. (At least in Australia).

At least if your managers screw it up, it is them that are liable, not you. If it works (well, regardless really - keep your own notes), document the project and add a summary to your resumé, so that if you need to leave in a hurry (eg: a dumb project coming up where they want you as the scapegoat) you are well prepared for it.
 
I went ahead and changed it to:

Intel CPUs score a bit better with Hyperthreading turned OFF in most server tasks.

Which should cover it.

However, an Opteron 270 would be a far better investment and would last quite some time....

I totally agree with your comment about the Opteron being a much better investment.

Furthermore, I would argue that AMD CPUs in conjunction with Linux, BSD and other open-source software would offer a MUCH better TCO and would provide a lot more value to Australian taxpayers!


I believe the OP's superiors would be in a lot of trouble with the taxpaying voters in Australia if the public were to find out the organization could have saved tens of thousands of Australian dollars by going with AMD and open source.
 
Why don't you get some reading:
http://www.2cpu.com/articles/43_3.html
I have a HT P4 and I probably know what I'm saying. I don't need benchmarks. Benchmarks are a crap way to test something, and in some you'll find X>Y and in others X<Y.
I don't do server applications, but I don't see why having an extra fake core wouldn't help.
For me, HT on means:
better multitasking
better performance in certain games
And HT off means:
better performance in other games
For usual single-threaded apps it doesn't make much of a difference (and the difference is 20% at MOST).
Stop basing everything on facts that come out of your small mind. We are all good at theory, but when it comes to something practical, things change.
And HT can be turned off whenever needed (THAT makes it an improvement).
As for servers, keep whining LOL - the author's boss doesn't trust AMD and thus will go for Intel.
EDIT
After further reading, it seems that HT brings no improvement on servers.
I admit it's nice that you pointed that out.

P.S. Thx for calling me Idiot, fact-based freak ^**%.
 
And, as likely covered above:

- Replacing / fixing hardware using staff in the same building, if they can do it themselves, vs. having to wait... sometimes quite some time.

- Why wait 2-4 hours for a '3rd party OEM company tech' (for example) to replace a HDD in a failed array when you can do it yourselves faster, and at a lower cost? It takes 5 minutes and is the piss-easiest 'preventative fix' you'll ever perform on a server. (The drives will fail eventually; just have tested-each-quarter spares on hand to use as hot-spares.)

- Can you really be without a server for 2 hours, while simply waiting for the '3rd party OEM company tech' to arrive, just to perform 5 minutes of work?

- The same applies for data backups, mirroring (and clustering if used, etc.). How long can a given 'server / backend' core business machine (S/W or H/W) be out of action before money is really getting burned paying 100 (?) people to 'wait for the servers to come back online with all their data'?
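To make the 'money getting burned' point concrete, here is a trivial cost sketch - every figure in it (headcount, hourly cost, outage lengths) is a made-up assumption, so plug in your own:

    # Rough cost of waiting for a 3rd-party tech vs. swapping the drive yourself.
    # All numbers are assumptions for illustration; substitute your own.

    staff_waiting = 100              # people idle while the server is down (assumed)
    loaded_hourly_cost = 45.0        # AUD/person/hour, wages + overheads (assumed)

    outage_with_vendor_hours = 2.5   # wait for the OEM tech + 5 minutes of work
    outage_diy_hours = 0.25          # walk to the rack and swap in the tested hot-spare

    def outage_cost(hours: float) -> float:
        """Payroll burned while staff wait for the server to come back online."""
        return staff_waiting * loaded_hourly_cost * hours

    print(f"vendor call-out: ${outage_cost(outage_with_vendor_hours):,.0f}")
    print(f"on-site swap:    ${outage_cost(outage_diy_hours):,.0f}")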

(I think wusy, or someone else, may have covered this already in more depth; call this a 'page 2 reminder'.)
 
I agree and would advise every organization to keep spare hardware on site whenever possible!

I speak from experience: having to wait for a vendor to drive to your location, or having to wait days for a part to be shipped, can cost a lot of time, money, sleep and peace of mind, and can jeopardize your job security.

This is yet another reason why you should NOT run all those critical services on a single box as many of us have pointed out above.
 
Why don't you get some reading:
http://www.2cpu.com/articles/43_3.html
I have a HT P4 and I probably know what I'm saying. I don't need benchmarks. Benchmarks are a crap way to test something, and in some you'll find X>Y and in others X<Y.
I don't do server applications, but I don't see why having an extra fake core wouldn't help.
For me, HT on means:
better multitasking
better performance in certain games
And HT off means:
better performance in other games
For usual single-threaded apps it doesn't make much of a difference (and the difference is 20% at MOST).
Stop basing everything on facts that come out of your small mind. We are all good at theory, but when it comes to something practical, things change.
And HT can be turned off whenever needed (THAT makes it an improvement).
As for servers, keep whining LOL - the author's boss doesn't trust AMD and thus will go for Intel.
EDIT
After further reading, it seems that HT brings no improvement on servers.
I admit it's nice that you pointed that out.

P.S. Thx for calling me Idiot, fact-based freak ^**%.

Note: the comments below are not directed at you; they are just notes to assist anyone skimming this forum thread for insight into the situation at hand. It was simply easier to use your above post as a reference to save time, so even though it says I am replying to you, it is more of a general reply to some of the issues present.

Each bolded section is given a number, from 1, counting upward.

1 - What, only 'probably'? We are talking Xeons here anyway, likely the older 533 FSB models as well, all sharing that 4.3 GB/sec. 😛

2 - We are doing server apps, and sometimes it will help and sometimes it won't. Use the right benchmark for the task; that way you don't get any X>Y then Y>X scenarios, because you've planned for them all.

3 - Exactly, but servers may be running the same single-threaded process and have it loaded and executing on 4 processors (logical or otherwise). With HT ON they max out the chips and each 'instance' of the code slows to 60% performance: you gain throughput at the cost of response in most server applications when HT is ON (as many have said above). Like I said, +20%, and +20% of 'half' (or 50%, which is what one would normally expect when executing 2 processes on one CPU with HT OFF) is equal to 60%. 😛 (And that is the best case, as you've said, so it is even less effective 'typically'.)

4 - Why the hell would anyone base this on FICTIONAL information? (BannanaBoat. laaalaaalaaa.....) (Best comment ever.)

5 - I wish my mind was small; it would beat dealing with ignorance on a daily basis, as well as on the TG forums (the last place I'd expect to see it).... Ignorance to you could well be bliss, thx for the warning 😉 (I am kidding of course; our posts complement each other.)

6 - If one's practical and theoretical scenarios differ, then the theory is wrong, or is 'an incomplete, scaled-down version for simplicity'; this is normal in the business world..... the real world.

7 - You can't just go rebooting servers every 2-24 hours to toggle HT on and off for the task at hand. Normally servers are managed remotely and you can't even get into the BIOS to turn HT on/off. Using processor affinity locks doesn't help either (prefetching, OoO and register renaming can't look as many instructions ahead - only 40 instead of 80, or only 80 instead of 160 - so either way you may as well have just left HT off); see the affinity sketch after this list. --- Sort of a flip-side comment; did you change your mind while writing your post?

8 - I do sincerely apologise for calling you an idiot (above), though. You ain't so bad; it's just that I am on a personal crusade to turn Australian Government IT around for the better. :) I did underestimate you based on the post alone, and that was a mistake on my part. 😉
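(Regarding the affinity point in item 7: for reference, this is roughly how you would pin a process to the physical cores and skip the HT siblings, instead of rebooting to toggle HT in the BIOS. The psutil package and the core numbering here are my own assumptions - check which logical CPUs are actually HT siblings on your box. The point above still stands: even pinned, you may as well have left HT off.)

    # Sketch: pin the current process to assumed physical cores only (skip HT siblings).
    # Requires the third-party psutil package; core IDs 0 and 2 are an assumption.
    import psutil

    p = psutil.Process()          # or psutil.Process(<pid of the service>)
    p.cpu_affinity([0, 2])        # assumed: logical CPUs 0 and 2 are distinct physical cores
    print(p.cpu_affinity())       # confirm the new affinity mask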

Please don't take offense at this post; you've helped us both point out a few things that are often omitted.

Also note that your linked article ("Hyper-Threading Performance Analysis - DivX and Conclusion", published 2002-09-30 by 'Jim_') was, for starters, created by someone simply called 'Jim_' on the internet 😛, and it uses DivX software from prior to Sept 2002... The current versions of most video encoders (we are in Feb 2006 now, well over three years from the article's publish date and the 'old version' software used) are actually gaining from HyperThreading: WME9 x64 (4 isolated threads) has been out for a month, and WME9 (x86/32, 2 isolated threads) even longer. But yes, an upper-mid-range Opteron, even a single-core one running only in x86 32-bit protected mode, can encode video just as fast as a high-clocked Pentium 4 / Xeon with HT on or off.


Note: It appears we are actually in agreement on several points.
 
Probably = certainly, in that context.
Settled.
Linux, stop trying so hard, Intel is the only way. ><
"I also wish to go Opteron; however, the partners I'm dealing with view AMD as a poor, unreliable company"
^^
I'm so Intel-biased I can't stand it. :lol:
 
Not to take away from anyone or prove anyone wrong... but take a look at this URL:

http://www.microsoft.com/technet/prodtechnol/sql/2000/plan/ssmsam.mspx

In about the second paragraph, you'll read this:

"To date, the largest configurations supported by SQL Server 2000 running on Windows Server 2003 are 32 processors with 512 GB of memory. These systems demonstrate excellent SMP scalability, both on audited benchmarks and in real applications. Today, a single CPU can support 14,000 users accessing a 1-terabyte database, an 8-processor node can support more than 92,000 concurrent users accessing a SQL Server managing billions of records on a 8-terabyte disk array, and a 32-CPU node can support 290,000 users accessing a SQL Server database hosted on 24-terabyte disk array. The largest of these servers are capable of processing more than 1 billion business transactions per day."

"Today, a single CPU can support 14,000 users accessing a 1-terabyte database"

Now of course, you're nowhere near those numbers on networking and I/Os... but it gives you an idea of what a CPU can handle... which is [IMO] not your problem...

You didn't say [or I didn't see] what OS you're running... but when you're installing Windows Server 2003, it gives you the option to install all the stuff that you're running on one system. So sitting there and saying that you need 3 or 4 servers? [IMO] No... why would Microsoft give you the option to put it all on one system if it's not designed for it... [given your size]... meaning if you had 1000 users, then of course, utilize another server for certain things... but in your case... no.

I also see you have 2 GB of RAM installed... I recently read about a Microsoft/HP study on the amount of RAM utilized when using Terminal Servers... and in the report they say that each connection uses about 8 to 12 MB of RAM and can go up to and over 50 MB... depending on what's being used.

The URL below is another study/report that will give you the same numbers:

http://www.sessioncomputing.com/scaling.htm

About 85% of the way down, you'll see this answer:

"256MB for Windows Server + 10-15MB for each logged on user + RAM required by applications, i.e. a user can easily consume 40-50MB or RAM by running Outlook, Word, Excel and Internet Explorer simultaneously. Since many applications use dynamic memory allocation and share memory, you can't just multiply the number of users times the amount of RAM one user consumes. I generally recommend 30-40MB per user as a baseline for users of Microsoft Office."


You might want to think about updating your server OS if you're not running 2003... or maybe your system just needs a service.

I would also look into moving your drives to a RAID setup, for quicker I/Os...

But as far as what type of CPU... I wouldn't worry; you're way above and beyond what you'll need....

IMHO
 
Nice articles btw:

Depends on what the SQL is used for (as we both know).

Those 'best case' examples indicate an 'average' user can only perform 20 transactions per hour... (and you need to plan for your heavier, more productive users, or else they will complain about performance).

Now, in hard-working places, people tend to do more than one transaction every 3 minutes; they could be doing 1 every 5 seconds.... or, in reality, 1 every 15-30 seconds.

100 users - 2,000 queries per hour could very well actually be:
100 users - up to 36,000 queries per hour, or:
50 users - up to 18,000 queries per hour, etc...
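As a sanity check on those figures (the transaction intervals are the ones assumed above; the point is simply how fast 'average' numbers blow out):

    # Per-user transaction intervals turned into hourly query load.
    # Intervals are illustrative assumptions from the discussion above.

    def queries_per_hour(users: int, seconds_between_transactions: float) -> int:
        return int(users * 3600 / seconds_between_transactions)

    print(queries_per_hour(100, 180))   # 1 every 3 minutes  ->  2,000/hour
    print(queries_per_hour(100, 10))    # 1 every 10 seconds -> 36,000/hour
    print(queries_per_hour(50, 10))     # 50 users, same rate -> 18,000/hour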

Check: are your users still complaining about slow response? If so, then do something, because adding staff will only make it slower; adding hardware may be the cheapest way to increase productivity.

Averages are dangerous things, as they exclude spikes, and the spike load may be a 40x heavier load (on some components) than the 'average'. Perhaps these averages included idle time, lunch breaks, etc. 😛

It totally depends on what the SQL is being used for, how 'fast' the users work, and how well the software they use to access the SQL Server is designed, configured, coded, etc.

Wanting 6-8 GB on a terminal server (or rather, split over two terminal servers) for well-performing terminal sessions wouldn't be out of the realm of possibility here either, even with only 50 sessions; it depends which applications staff are running, how memory-heavy they are, and so on. You need to plan against this so the system isn't 'slow' on the day it is implemented.... and hopefully it runs fine for at least 2 years afterwards... Perhaps staff are also using applications that burn resources (there are many around, thanks to IT booms, poor devs of the era, and low-quality software rush jobs)... If they are just using e-mail and Word, and nothing else, then yeah, it is enough.... It also depends how fast the sessions' connections are: the slower they are, the less they can hit the server (which is a double-edged sword productivity-wise anyway).

Sure anything would be an improvement on their current server though.

Once all the infrastructure is 'cool'... the biggest improvements can most likely only be made in the software the users are running, eg: better designed, doesn't 'machine-gun' the SQL Server, etc... Getting developers to fix code, on the other hand... is that even possible?

The people who made it could have:
- coded it 4 years ago,
- pissed off, never to be seen again,
- likely been paid a large sum of money for something that may be 'designed' to 'keep them employed on an ongoing basis' (hint, hint, look here).

I've been around here long enough to see all of the above (sadly 🙁).
(Note: the poster and I both work in Australian IT, in Government sections; likely they are dealing with similar, if not the same, problems we've dealt with before.)
 
Also note that your linked article ("Hyper-Threading Performance Analysis - DivX and Conclusion", published 2002-09-30 by 'Jim_') was, for starters, created by someone simply called 'Jim_' on the internet 😛, and it uses DivX software from prior to Sept 2002... The current versions of most video encoders (we are in Feb 2006 now, well over three years from the article's publish date and the 'old version' software used) are actually gaining from HyperThreading: WME9 x64 (4 isolated threads) has been out for a month, and WME9 (x86/32, 2 isolated threads) even longer. But yes, an upper-mid-range Opteron, even a single-core one running only in x86 32-bit protected mode, can encode video just as fast as a high-clocked Pentium 4 / Xeon with HT on or off.


Note: It appears we are actually in agreement on several points.
LOL, you aren't finding anything based on the "new" version on the Internet. I searched myself; everything dates back to before 2003.
And having a HT P4 (a crappy 2.4 GHz), I wish it did offer better, but it doesn't. Probably my old progs are outdated (and they are; I only update NAV and gfx drivers + games), but heck, this CPU is outdated as well.
So let's see what's on the list: an Intel 650 (?) 3.6 GHz CPU with HT + an Arctic Freezer, of course (or something better; I don't want it to keep me warm during the summer :? ). Then I'll redo the DivX tests with updated software. Until then (and it'll take a while, since I'm starting this rig from 0 and I need more $$$), I remain to be convinced. 8)
 
While HT might not be best in a server situation (it can be disabled), I have found a definite speed increase, both in normal everyday life and in benchmarks, by having HTT turned on. Ripping CDs with Windows Media Player uses two threads... something very rare. Usually when I run benchmarks, I only use about 55% CPU, because one thread is taking 50% and the other side is free for everything else, but when I use WMP to rip CDs to MP3, it uses 100%. If you have apps that take advantage of HTT, you have a definite advantage. :)
 
I agree with you there, Windshear, and you're right: HT does make a noticeable difference in the desktop world. Of course, the AMD X2 is pretty impressive, I must say. I'm really into this new machine of mine. Could this be the end of Intel for Luminaris? Hhhmmmm :?

Now, in the server market, that's a different ballgame, my friend. AMD just shines there, and I know that from experience.
 
Have you ever considered Sun servers?

They're excellent performers and are very affordable (you can buy one starting at US$745). They even come with Solaris 10, Red Hat or even Windows Server 2003.

Take a look here and here

You can show the benchmarks to your boss and maybe he'll change his mind.

(Sun doesn't use Intel processors. You can guess why.) :wink: