Overclocking Intel’s Xeon E5620: Quad-Core 32 nm At 4+ GHz


JOSHSKORN

Distinguished
Oct 26, 2009
2,395
19
19,795
I wonder if it's possible and also if it'd be useful to do a test of various server configurations for game hosting. Say for instance we want to build a game server and don't know what parts are necessary for the amount of players we want to support without investing too much into specifications we don't necessarily need. Like say I hosted a 64-player server of Battlefield or CoD or however the max amount of players are. Would a Core i7 be necessary or would a Dual-Core do the job with the same overall player experience? Would also want to consider other variables: memory, GPU. I realize results would also vary depending on the server location, its speed, and the player's location and speed, too, along with their system's specs.
 

cangelini

Contributing Editor
Editor
Jul 4, 2008
1,878
9
19,795
[citation][nom]JOSHSKORN[/nom]I wonder if it's possible and also if it'd be useful to do a test of various server configurations for game hosting. Say for instance we want to build a game server and don't know what parts are necessary for the amount of players we want to support without investing too much into specifications we don't necessarily need. Like say I hosted a 64-player server of Battlefield or CoD or however the max amount of players are. Would a Core i7 be necessary or would a Dual-Core do the job with the same overall player experience? Would also want to consider other variables: memory, GPU. I realize results would also vary depending on the server location, its speed, and the player's location and speed, too, along with their system's specs.[/citation]

Josh, if you have any ideas on testing, I'm all ears! We're currently working with Intel on server/workstation coverage (AMD has thus far been fairly unreceptive to seeing its Opteron processors tested).

Regards,
Chris
 
G

Guest

Guest
You could set up a small network with very fast LAN speeds (10 Gbps maybe?). You can test ping and responsiveness on the clients, and check CPU/memory usage on the server. By eliminating the connection as a bottleneck and testing many different games with dedicated servers, one can actually get a good idea of what is needed to eliminate the bottlenecks produced by the hardware itself.
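If it helps, here is a rough sketch of the server-side logging I have in mind, using the psutil Python library (the 60-second duration, 1-second interval, and file name are just placeholders):

[code]
# Sketch: sample server CPU and memory usage while a dedicated game
# server is running, so you can see where the hardware actually tops out.
import csv
import time

import psutil  # third-party: pip install psutil

DURATION_S = 60      # how long to sample (placeholder)
INTERVAL_S = 1.0     # one sample per second

with open("server_load.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "cpu_percent", "mem_percent"])
    start = time.time()
    while time.time() - start < DURATION_S:
        cpu = psutil.cpu_percent(interval=INTERVAL_S)  # averaged over the interval
        mem = psutil.virtual_memory().percent
        writer.writerow([round(time.time() - start, 1), cpu, mem])
[/code]

Run that while ramping up the player count on the clients, and the CPU/memory ceiling shows up in the log instead of having to guess.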
 

Moshu78

Distinguished
Apr 27, 2010
27
0
18,540
Dear Chris,

thank you for the review, but your benchmarks show that you were GPU-bottlenecked almost all of the time.
Let me explain: take Metro 2033 or Just Cause 2... the Xeon running at 2.4 GHz provided the same FPS as when it ran at 4 GHz. That means your GPU is the bottleneck, since the increase in CPU speed (and therefore in the number of frames sent to the GPU for processing each second) does not produce any visible increase in output... the GPU already has too much to process.
I also want to point out that enabling AA and AF in CPU tests puts additional stress on the GPU, bottlenecking the system even more. It should be forbidden to do so... since your goal is to test the CPU, not the GPU.

Please (and not only you, there is more than one article at Tom's like this) try to reconsider the testing methodology: what a bottleneck means, how you can detect it, and so on...

Since the GTX 480 was the bottleneck, most of the gaming results are useless except for seeing how many FPS a GTX 480 provides in these games and resolutions with AA/AF. But that wasn't the point of the article.

LE: I missed the text under the graphs... it seems you are aware of the issue. :) I would still like to see the CPU tests performed with more GPU muscle, or at lower resolutions/with older games. That way you'll be able to get to the really interesting part: where/when does the CPU become the bottleneck?
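To make the check concrete, here's the test I'd apply, in a few lines of Python (the FPS figures are made up purely to illustrate the idea):

[code]
# Sketch: if FPS barely moves when the CPU clock jumps from 2.4 to 4.0 GHz,
# the GPU (or something else) is the limiter, not the CPU.
def looks_gpu_bound(fps_low_clock, fps_high_clock, noise=0.03):
    """True if the FPS gain is within run-to-run noise (default 3%)."""
    return (fps_high_clock - fps_low_clock) / fps_low_clock <= noise

# Hypothetical numbers in the spirit of the Metro 2033 result:
print(looks_gpu_bound(42.0, 43.0))  # True  -> GPU-limited at these settings
print(looks_gpu_bound(60.0, 85.0))  # False -> the CPU was holding it back
[/code]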
 
G

Guest

Guest
Looks to me like a pointless exercise. I have been running an i7-860 at 4.05 GHz with low temps for more than a year now, so why pay for a motherboard that expensive, plus the chip?
 

Cryio

Distinguished
Oct 6, 2010
881
0
19,160
I have a question. Maybe two. First: since when is Just Cause 2 a DX11 game? I thought it was only DX10/10.1. And even if it is [though I doubt it], what are the differences between the DX10 and DX11 versions?
 

omoronovo

Distinguished
Feb 2, 2010
10
0
18,510
[citation][nom]blibba[/nom]Note: Higher clocked Xeons are available.[/citation]

However, I'm sure everyone is aware of how sharply Xeon prices rise above the lowest-of-the-low. I expect that by the time you get a Xeon capable of 4.5 GHz (a good speed to aim for with a 32 nm chip and good cooling), you would already be over the cost of a 970/980X/990X, especially considering how good a motherboard you would need - the Rampage III Extreme is possibly one of the most expensive X58 boards on the market, offsetting most of the gains you'd get over a 45 nm chip and a more wallet-friendly board, such as the Gigabyte GA-X58A-UD3R.
 

compton

Distinguished
Aug 30, 2010
197
0
18,680
This is one of the best articles in some time. I went AMD with the advent of the Phenom IIs despite never having owned or used them previously, and I didn't once long to go back to Intel for my processor needs. But I think that may have changed with the excellent 32 nm products. The 980X might be the cat's pajamas, but $1000 is too much unless you KNOW you need it (like 3x SLI 480s, or actually serious multithreaded workloads where TIME = $$$). The lowly i3 has seriously impressed the hell out of me on performance, heat, and price. Now this Xeon rears its head. While still pricey in absolute terms, it is a great value play. Intel has earned my business back with their SSDs -- now might be the time to get back in on their processors, even if Intel is content to keep this chip in the Xeon line. Thanks for the illumination.
 

K2N hater

Distinguished
Sep 15, 2009
617
0
18,980
Interesting article. I come to the conclusion that we could build a 2P Xeon box to overclock for a similar price to a single 980X, while being clearly cheaper to upgrade and having 8 physical cores + 8 HT cores.
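Back-of-the-envelope version (every price below is a rough placeholder, not a quote):

[code]
# Rough arithmetic behind the "2P Xeon box for 980X money" idea.
# All prices are ballpark placeholders, not actual quotes.
xeon_e5620 = 390          # per CPU, roughly its launch list price
dual_socket_board = 350   # placeholder for a basic 2P LGA1366 board
core_i7_980x = 1000       # the 980X's list price
x58_board = 200           # placeholder for a mid-range X58 board

two_socket_build = 2 * xeon_e5620 + dual_socket_board   # ~1130
single_980x_build = core_i7_980x + x58_board            # ~1200
print(two_socket_build, single_980x_build)
[/code]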

By the way, concerning power efficiency the top pick is the L5640. While not a cost-effective processor, a 60 W TDP for its six cores is quite impressive.
 
G

Guest

Guest
There is one additional benefit to using the Xeon: you can actually use ECC memory on your consumer-class Intel system. With the huge amounts of memory we use on our systems these days (and thus a much greater chance of a memory error), I find it hard to believe anyone would trust their main systems to non-ECC memory.
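Just to illustrate the scaling (the per-module error rate below is a made-up placeholder; only the trend matters):

[code]
# Toy illustration: the chance of at least one memory error grows quickly
# with the number of modules, even when the per-module rate stays fixed.
p_per_module = 0.01   # hypothetical chance of an error per module per month

def p_any_error(modules, p=p_per_module):
    return 1 - (1 - p) ** modules

print(p_any_error(2))   # ~0.020
print(p_any_error(6))   # ~0.059
print(p_any_error(12))  # ~0.114
[/code]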

Unfortunately, Intel has used its regained near-monopoly position to take that option away from its consumer chips. Until they see the light, I've been forced to use otherwise less powerful AMD CPUs on my main systems and to recommend likewise to my clients and acquaintances.
 

nevertell

Distinguished
Oct 18, 2009
335
0
18,780
[citation][nom]Moshu78[/nom]Dear Chris,thank you for the review but your benchmarks prove that you were GPU-bottlenecked almost all time.Letme explain: i.e. Metro 2033 or Just Cause 2... the Xenon running at 2.4 GHz provided the same FPS as when it ran at 4 GHz. That means your GPU is the bottleneck since the increase in CPU speed therefore the increase in the number of frames sent to the GPU for processing each second does not produce any visible output increase... so the GPU has too much to process already.I also want to point out that enabling the AA and AF in CPU tests puts additional stress on the GPU therefore bottlenecking the system even more. It should be forbidden to do so... since your goal is to thest the CPU not the GPU.Please try (and not only you, there is more than 1 article at Tom's) so try to reconsider the testing methodology, what bottleneck means and how can you detect it and so on...Since the 480 bottlenecked most of the gaming results are useless except for seeing how many FPS does a GF480 provide in games, resolutions and with AA/AF. But that wasn't the point of the article.LE: missed the text under the graphs... seems you are aware of the issue. Still would like to see the CPU tests performed on more GPU muscle or on lower resolutions/older games. This way you'll be able to get to the real interesting part: where/when does the CPU bottleneck?[/citation]
But they're testing whether or not it is necessary to use these kinds of CPUs in gaming PCs, and for that, you do need to run game-like setups.
 
Pretty good article. I wonder how an AMD Opteron 6128 Magny-Cours would stack up? Could you try OCing the 6128? It's only $275 on Newegg.
http://www.newegg.com/Product/Product.aspx?Item=N82E16819105266

I know the motherboard is going to be costly, though, but the Rampage III Formula isn't cheap either. If ASUS would add some OCing options to the KGPE-D16, it would put a smile on my face. The KGPE-D16 would be a nice SLI motherboard for this test because it has x16 PCIe slots. I think it would be easier to get ASUS to fix OCing on this mobo than to get Intel to make an enthusiast Xeon.
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131643
 
[citation][nom]blibba[/nom]Note: Higher clocked Xeons are available.[/citation]

The point of the E5620 is to get a 32 nm LGA1366 chip for less than the i7 970 sells for. The only other 32 nm Xeon that sells for less than $870 is the Xeon E5630, which is merely 133 MHz faster than the E5620 but costs a couple hundred bucks more. All of the rest of the 32 nm Xeons are more expensive still - more than even the i7 970 and i7 980X.

[citation][nom]elbert[/nom]Pretty good article. I wonder how a AMD Opteron 6128 Magny-Cours would stack up? Could you try OCing the 6128? Its only $275 on newegg.http://www.newegg.com/Product/Prod [...] 6819105266I know the motherboard tho is going to be costly but the Rampage III Formula isn't cheap either. If ASUS would add some OCing options to the ASUS KGPE-D16 would put a smile on my face. The ASUS KGPE-D16 would be a nice SLI motherboard for this test because its an X16 PCIE. I think it would be easier to get ASUS to fix OCing with this mobo than get Intel to make an enthusiast xeon.http://www.newegg.com/Product/Prod [...] 6813131643[/citation]

I have the setup you describe there: two Opteron 6128s sitting on an ASUS KGPE-D16. Note that I run Linux, and some of the programs they ran won't run under WINE. Here's roughly how it would stack up against the units being tested:

- 3DMark Vantage: won't run on my system. I'm predicting it will come in under the stock Xeon E5620 since the stock E5620 is quite a bit behind the 4 GHz units, and the 6-core i7 970 is barely faster than the other quad-core units at 4 GHz.

- Sandra Arithmetic & Multimedia: should beat any one of those there due to having 16 real cores.

- Sandra Memory Bandwidth: eight channels of DDR3-1333 have more than twice the bandwidth of their systems. The only question is whether or not Sandra likes NUMA. If it does, then two Opteron 6128s would be much, much faster. If not, then the result would be much lower. I'm downloading it right now and will update when I get to run it and can tell you for certain.

- COD2: lower score than the E5620 since this is a clock-speed-limited benchmark that does not scale beyond four cores.

- Metro 2033: would probably be similar to the other units since this is not a CPU-limited benchmark.

- DiRT2: would be slightly lower than the stock E5620 since we see no scaling advantage with the i7 970 and a small decrease in framerates with the stock E5620 versus the other chips.

- Just Cause 2: not a CPU-bound game, just like Metro 2033.

- iTunes 10: this is a single-threaded benchmark and the Opteron 6128s would be considerably slower than the stock E5620.

- Handbrake: should beat any of the chips there since this is well-threaded. I can't directly compare to their test since I don't have their same 1 GB VOB file to work on.

- DivX: should be the same story as Handbrake.

- XviD: not a very well-threaded program, and any of the chips there will beat two Opteron 6128s. XviD on Linux is poorly-threaded too.

- MainConcept: same as Handbrake and DivX, with the two 6128s likely being much faster than the Xeons and Core i7s.

- Photoshop: unknown. Photoshop loves Intel CPUs and is moderately-threaded, so I couldn't tell you if it would beat an i7 970.

- 3dsMax: the Linux version of this app scales very well, like the Windows version tested here appears to. The 6128s should beat the chips here handily.

- AVG: AVG isn't that well-threaded beyond four cores, so the 6128s would not do all that well in this application.

- WinRAR: same as AVG, it's not a very well threaded program.

- 7-Zip: very well-threaded, and two 6128s would be faster than any of the chips here.

- Temperatures above ambient: impossible to compare directly, but my 6128s run about 35 C over ambient (52-57 C) at full load using Dynatron A6 heatsinks with the fans at roughly 2000 rpm. The Dynatron A6s are far smaller than the units used to cool the LGA1366 chips.

- Power consumption: my system is obviously set up differently from theirs, but the CPU idle/load power consumption figures in my box are roughly in line with the 4 GHz chips and higher than the stock Xeon E5620. That is because I have two CPUs in the machine instead of just one like they do. A single Opteron 6128 has an idle power draw within a few watts of a single Xeon E5620 but consumes 20-30 W or so more power at full load.
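The general pattern in the list above is basically Amdahl's law. A rough sketch with assumed clocks and parallel fractions (illustrative numbers, not measurements):

[code]
# Why 16 slower cores win the well-threaded tests and lose the
# lightly-threaded ones. Parallel fractions below are assumptions.
def relative_throughput(clock_ghz, cores, parallel_fraction):
    """Per-core speed scaled by the Amdahl's-law speedup for `cores` cores."""
    speedup = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
    return clock_ghz * speedup

for p in (0.95, 0.50):  # a well-threaded encode vs. a mostly serial app
    opterons = relative_throughput(2.0, 16, p)  # 2x Opteron 6128 = 16 cores at 2.0 GHz
    xeon_oc = relative_throughput(4.0, 4, p)    # one E5620 at 4 GHz, 4 cores
    print(f"p={p}: 2x 6128 = {opterons:.1f}, E5620 @ 4 GHz = {xeon_oc:.1f}")
[/code]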
 
After reading http://www.tomshardware.com/reviews/game-performance-bottleneck,2738.html, one of my conclusions was that massive overclocking is not a requirement for decent gaming. That comment was up-voted nine times, and nothing in this article changes that opinion: the GPU(s) become the issue long before the CPU does. I might feel very differently about a render farm, but there, where time is money and the cost is justified, I think most businesses would just buy the faster chips outright rather than risk stability and longevity by overclocking.
 

amnotanoobie

Distinguished
Aug 27, 2006
1,493
0
19,360
[citation][nom]jtt283[/nom]After reading http://www.tomshardware.com/review [...] ,2738.html, one of my conclusions was that massive overclocking was not a requirement for decent gaming. [/citation]

At least now we know that there is a Xeon alternative to the i7-930. Power consumption and temperature, rather than stock performance, would be your main reasons for getting this chip.
 

theoutbound

Distinguished
Aug 30, 2010
141
0
18,680
I'm still not sure the Xeon would be any better value than an i7, given that you will also have to pay a premium for a board that supports the Xeon. Great article, Tom's.
 

nerrawg

Distinguished
Aug 22, 2008
500
0
18,990
Great article Chris; I like the way you record your reasoning and thoughts throughout. I would also like to second Moshu78: I think that after Thomas's Quad-SLI article it would be an interesting follow-on to know whether or not the extra cache (12 MB instead of 8) is involved in the excellent 3-way SLI scaling results he got. As this appears to be the only real difference between the 920/930 and the Xeon in this article, it might justify getting the Xeon for a high-end gaming rig instead of the i7 (it doesn't seem to be possible to take advantage of the 32 nm process's lower thermal output per clock, due to the frustratingly low multiplier).

Otherwise it would appear that, for gaming, the Core i7 and the Xeon are equal for all intents and purposes (but not in price).
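For reference, the multiplier math behind that complaint (assuming the stock 18x multiplier and 133 MHz base clock, and ignoring turbo bins):

[code]
# Why the low multiplier hurts: the E5620 runs 18 x 133 MHz = 2.4 GHz stock,
# so big overclocks have to come almost entirely from the base clock.
stock_bclk_mhz = 133.33
multiplier = 18
target_ghz = 4.0

required_bclk = target_ghz * 1000 / multiplier
print(f"Stock: {multiplier * stock_bclk_mhz / 1000:.2f} GHz")
print(f"BCLK needed for {target_ghz} GHz: {required_bclk:.0f} MHz "
      f"(~{required_bclk / stock_bclk_mhz:.0%} of stock)")
[/code]

That 220+ MHz base clock also drags the QPI, uncore, and memory clocks up with it unless you drop their multipliers, which is what makes the whole exercise so finicky.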
 

cangelini

Contributing Editor
Editor
Jul 4, 2008
1,878
9
19,795
[citation][nom]blibba[/nom]Note: Higher clocked Xeons are available.[/citation]

And significantly more expensive. If you're going to spend that much more, don't bother with a 2P-capable processor--just get the Core i7-970 or -980X :)

Regards,
Chris
 
G

Guest

Guest
Just wanted to note that the Xeon E5620 (and similar) will work in the Asus P6X58D series mobos as well, not just the ROG mobos. MSI's Big Bang Xpower will also work. Direct test on a Gigabyte X58A-UD3R rev. 2.0 did NOT work.
 

cangelini

Contributing Editor
Editor
Jul 4, 2008
1,878
9
19,795
[citation][nom]Moshu78[/nom]Dear Chris,thank you for the review but your benchmarks prove that you were GPU-bottlenecked almost all time.Letme explain: i.e. Metro 2033 or Just Cause 2... the Xenon running at 2.4 GHz provided the same FPS as when it ran at 4 GHz. That means your GPU is the bottleneck since the increase in CPU speed therefore the increase in the number of frames sent to the GPU for processing each second does not produce any visible output increase... so the GPU has too much to process already.I also want to point out that enabling the AA and AF in CPU tests puts additional stress on the GPU therefore bottlenecking the system even more. It should be forbidden to do so... since your goal is to thest the CPU not the GPU.Please try (and not only you, there is more than 1 article at Tom's) so try to reconsider the testing methodology, what bottleneck means and how can you detect it and so on...Since the 480 bottlenecked most of the gaming results are useless except for seeing how many FPS does a GF480 provide in games, resolutions and with AA/AF. But that wasn't the point of the article.LE: missed the text under the graphs... seems you are aware of the issue. Still would like to see the CPU tests performed on more GPU muscle or on lower resolutions/older games. This way you'll be able to get to the real interesting part: where/when does the CPU bottleneck?[/citation]

Moshu,

If a single GeForce GTX 480 is limiting your game performance and you'd need a second $500 card to break past it, then I have to believe you're in the group of folks able to spend more on a faster CPU, rather than trying to push a $300-$400 chip past a very stubborn BCLK ceiling.

Running SLI here just didn't make sense in my mind. I thought about using two cards, but because the PCIe bus was playing into my overclocking attempts, I limited this little test to just one :) As it is, I already fried the onboard NIC.

Thanks for weighing in!
Chris
 