Haswell-Based Xeon E3-1200: Three Generations, Benchmarked

Status
Not open for further replies.

dgingeri

Distinguished
I have two Dell T110 II servers, one with an E3-1230 and one with an E3-1220v2, that cost me less than $800 each. I can tell you, they are great little machines, perfect for self-teaching Windows Server or ESXi. I now have the E3-1220v2 set up as my Windows 2008r2 router/DNS/DHCP/file/print server, and it uses a mere 45W of power when idle. The E3-1230 is my ESXi 5.1 machine right now.
 

g-unit1111

Titan
Moderator


That's kind of the way I see it. I don't think the Xeon is anything to write home about the way some people on this board do, but the average user and/or gamer won't notice a lick of difference between an i5, an i7, and a low-end Xeon. I would only recommend them for workloads like Photoshop and heavy-duty CS5 use, but even then an i7-4770K or i7-4820K would be a better choice.
 

InvalidError

Titan
Moderator
While ARM chips may be doubling performance on a fairly regular basis, you need to keep in mind that ARM chips are starting from pretty far back. By the time they catch up with mainstream x86 chips, they will most likely hit the same IPC and frequency-scaling brick walls that x86 chips have, and they won't gain much ground beyond that.

The only real threat from ARM is to profit margins: once ARM catches up, it may become more difficult for Intel to maintain the large premiums they currently command across most markets.
 

the1kingbob

Distinguished
May 27, 2011
153
0
18,680
Did Tom's look at the AMD Opteron 6100, 6200, and 6300? That would make for an interesting comparison, since the underlying architecture changed from the 6200 to the 6300 (I think; maybe it was 6100 to 6200).
 

dgingeri

Distinguished
"I would only recommend them in instances of things like Photoshop and heavy duty CS5 usage, but even then an i7-4770K or i7-4820K would be a better choice." They wouldn't be any advantage in either case. The advantage of the Xeon isn't performance, but stability. It's use of ECC memory makes it much better for purposes like a high end workstation for an engineer or digital artist so their work isn't lost or interrupted by a memory error and crash or in servers where it can stay running reliably for months at a time.

In addition, the chipsets and platforms used with Xeons are held more stringently to industry standards, making them known quantities for device makers. Enterprise RAID controllers are frequently unsupported on a standard desktop system with a Core i7-4770 and Z87 chipset, while they are supported on a Xeon E3-1275 v3 with a C226 chipset, even though the actual silicon design is exactly the same between the two.

There really isn't any difference in the silicon itself between a Haswell Core i7 and a Haswell Xeon E3, so there won't be a performance difference. The difference is in the stability of equipment surrounding each.
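
For anyone curious what ECC actually buys you, here is a minimal, illustrative Python sketch of a Hamming(7,4) code, the same single-error-correcting idea that SECDED ECC DIMMs apply to whole 64-bit words. It is only a toy to show the principle, not how a memory controller actually lays out its check bits:

# Toy Hamming(7,4) encoder/corrector: one flipped bit is detected and fixed.
def encode(d):                      # d = list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]         # parity over codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]         # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]         # parity over positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

def correct(c):                     # c = 7-bit codeword, possibly corrupted
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3      # syndrome = 1-based position of the bad bit
    if pos:
        c[pos - 1] ^= 1             # flip it back
    return [c[2], c[4], c[5], c[6]] # recovered data bits

word = [1, 0, 1, 1]
damaged = encode(word)
damaged[4] ^= 1                     # simulate a single-bit memory error
assert correct(damaged) == word     # the original data comes back intact

On a real ECC system the memory controller does this transparently on every read and logs the corrected error, instead of letting a flipped bit silently corrupt data or crash the machine.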
 

pjkenned

Distinguished
Aug 6, 2011
12
0
18,510
InvalidError - these Xeons are not in the same league as the ARM chips; Avoton and Rangeley are the real ARM competitors. I JUST got two Avoton 8-core platforms in the lab as this article was going live (benchmarks here: http://forums.servethehome.com/processors-motherboards/2444-intel-avoton-c2750-benchmarks-supermicro-a1sai-2750f.html ). If ARM was targeting Centerton (the Atom S1260), they were targeting a platform way behind Avoton and the E3 reviewed above.

the1kingbob - I have AMD Opteron 3000, 4000, and 6000 series chips in the lab and use them daily. The Opteron 3300 series would be the closest platform, but its performance is significantly behind the Haswell Xeon E3-1275 V3. Those Opterons also do not have integrated GPUs like the E3-12x5 V1/V2/V3 chips, so they are hard to compare directly.
 

InvalidError

Titan
Moderator

And I never said they were - at least for now. But ARM might get there if they manage to sustain their current improvement pace for a few years while AMD and Intel remain stuck for all intents and purposes.

Yes, Intel released some cut-down x86 chips to compete with ARM in low-power market segments, but this is only a temporary fix, since Intel will likely add much of that stuff back in to keep up as ARM performance ramps up. The interesting part in 3-5 years will be where ARM goes once it hits the same steep diminishing-return slope AMD and Intel are on.
 

amk-aka-Phantom

Distinguished
Mar 10, 2011
3,004
0
20,860
What annoys me the most about this article is that it doesn't test the Xeons against desktop processors. I am trying to build a server for running a rather old financial application (written in Delphi) which talks to an MS SQL database, and the budget allows either one of the E3 v3 Xeons priced similarly to the i7-4770, or the 4770 itself. Now, there won't be any ECC RAM; I've done a lot of research and it appears to be a severely overrated feature, especially for small-scale servers. So that's already one major Xeon feature that won't be needed. But what about the performance? The 4770 and most E3 v3s appear very similar in clock speeds, number of cores/threads, and so on. I would like to see them benchmarked against each other in various applications.

Also, I don't understand people who actually buy prebuilt Dell/HP/etc. servers for small-scale stuff: they use poor-quality hardware (Seagate drives and not WD, for instance, some no-name RAM brands, etc.), the warranty is short, and the power supply and case cooling are freaking noisy and inefficient (we have a couple of machines here - one in an Intel ATX "server" enclosure and one a Dell blade server - and both run louder and hotter at idle than my gaming rig with 11 fans at full load)... And they cost twice as much as a custom-built rig (even with "server/workstation" grade hardware!) with the same or better specs. What's the point? *shrugs*

Another annoyance is how many threads on the Internet simply slap a "server grade" label on Xeons, chipsets like C226, Intel server boards, and so on, and say they're better for everything "professional" than quality desktop hardware just because "it is server grade hardware". I am really sick of hearing this. Tom's, can you please do a solid article comparing IB/Haswell Core i5/i7 (non-overclocked, because you can't OC Xeons, or at least they are not meant for it) with E3 v2/v3 and E5/E5 v2 Xeons? (Maybe E7 too, though those will shred an i5/i7 in all professional tasks due to the sheer number of cores and threads, despite using an outdated Nehalem-era architecture.)
 

InvalidError

Titan
Moderator

It depends on what your server does. For mission-critical stuff, a single undetected error at the wrong place over the system's entire lifespan can be several times more expensive than the extra cost of ECC.

I had a DIMM with a single bad bit on it that memtest86 did not catch on the first pass. I wasted a couple of days trying to figure out why my system had become so unstable before I decided to let memtest run overnight again, and in the morning it had found a dozen errors, all on the same bit at the exact same address. Then I spent two days trying to fix all the OS files that got damaged, gave up on that, and ended up spending two more days re-installing the OS and all my programs. Even at an entry-level wage of $15/hour, all the time wasted on that single bad bit would have cost over $400.
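
That is exactly the failure mode that ECC plus error reporting is meant to surface early. As a hedged illustration (assuming a Linux box on an ECC-capable platform with the EDAC driver loaded), the corrected/uncorrected error counters can be read straight out of sysfs; a rising ce_count is the early warning to swap the DIMM before it turns into days of cleanup:

#!/usr/bin/env python3
# Sketch: print per-memory-controller ECC error counts from Linux EDAC sysfs.
# Assumes an ECC platform with the EDAC driver loaded (otherwise the glob is empty).
import glob
import os

def read_counter(mc_path, name):
    with open(os.path.join(mc_path, name)) as f:
        return f.read().strip()

for mc in sorted(glob.glob("/sys/devices/system/edac/mc/mc[0-9]*")):
    ce = read_counter(mc, "ce_count")   # corrected (single-bit) errors
    ue = read_counter(mc, "ue_count")   # uncorrected (multi-bit) errors
    print(f"{os.path.basename(mc)}: corrected={ce} uncorrected={ue}")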
 

shompa

Distinguished
Apr 2, 2007
72
0
18,630
x86 has never been the fastest CPU; the fastest have always been RISC-based CPUs. Intel only became successful because of Windows/WinTel, plus the fact that its chips were fast enough and cheap. Not too many people wanted to buy $4,000 RISC processors.

The funny thing is that Intel is now facing the same problem. Today they have the $4,000 CPUs while ARM is fast enough.

And we have the 64-bit issue.
x86 uses EXTENSIONS.
ARM/RISC uses complete 64-bit instruction sets.

That's why 64-bit on Windows is 3% slower (and now all the IT "experts" believe that 64-bit has no merit except for more memory. They seem to forget that real computers were 64-bit in 1990, and there was no 4 GB of memory back then; it was for PERFORMANCE that RISC went 64-bit).

Intel can't and won't compete with ARM. ARM gets its 10-cent licensing fee per CPU. Intel, on the other hand, is used to making 80% profit on $400-$4,000 CPUs. That's why Intel can blow away billions pumping out a new CPU generation or revision each year. Let us all admit it: every update since Sandy Bridge has given us less than 20% more performance. Imagine any other company spending four designs and two node shrinks to get only 20%!

"ARM is too far behind."
ARM outsells Intel 100 to 1 today. That's a hint.
Take an A7 SoC and compare it to Intel per MHz and you will find that ARM is actually faster. The only thing making Intel faster today is that they clock higher and have more cores, because they can use 45-150 watts.

Who knows how fast an ARM chip would be if it used 150 watts.

We should all be happy with x86 dying. Real 64-bit is huge!
Plus, for customers it's better that a SoC costs $25, while Intel needs to spend a huge die area just on CISC, so Intel can never compete on price.

Intel will end up where RISC is today:
high-end servers and high-end PCs.
The rest will be RISC, like it was 15 years ago and like it should be. BTW, the cloud existed 15 years ago too, on Unix. It's funny how the dark ages of Windows turned the IT industry back 20 years. The question is who will save us again now that Steve Jobs is dead.

(And yes: it was Steve who brought us a working Unix on the desktop, which is why we have smartphones and tablets today. Tablets are now outselling PCs, and smartphones have been outselling PCs 3 to 1, all running Unix/Linux aside from MSFT's 3%.)
 

1991ATServerTower

Distinguished
May 6, 2013
141
4
18,715
Don't forget that Intel forces customers to choose between VT-d plus VT-x on one hand and overclocking on the other. They don't let you have both, in either the desktop or server lines.

For the same price one can get a Xeon with 8 threads and full virtualization support, or an overclockable i5 with 4 threads (whose warranty gets voided by overclocking...).

AMD doesn't do this with its unlocked CPUs or APUs, and even their multiplier-locked chips have all the features enabled. That's appreciated, because it lets the consumer decide what price/performance/features THEY want.

Intel doesn't care about anything but money, though. That's why there has never been a four-core Intel CPU for less than $185. That's why Intel disables useful features to create artificial market segments. I'm all for companies making money, but price fixing and purposely limiting products to entice people to spend more money than they really should have to is not OK.

Thankfully anyone who wants to use a lot of VMs can get full hardware support from AMD for as little as $160 (FX-8320) and have quite acceptable performance.
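
For anyone unsure which of these features their own box actually exposes, here is a rough, Linux-only Python sketch (my own illustration, not anything from the article): the vmx/svm CPU flags indicate VT-x/AMD-V support, and populated IOMMU groups under sysfs indicate that VT-d/AMD-Vi is actually enabled:

#!/usr/bin/env python3
# Rough check for hardware virtualization support on Linux.
# vmx = Intel VT-x, svm = AMD-V; populated /sys/kernel/iommu_groups
# means the IOMMU (VT-d / AMD-Vi) is enabled in firmware and the kernel.
import os

def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("CPU virtualization (vmx/svm):", bool(flags & {"vmx", "svm"}))

groups_dir = "/sys/kernel/iommu_groups"
has_iommu = os.path.isdir(groups_dir) and bool(os.listdir(groups_dir))
print("IOMMU (VT-d / AMD-Vi) groups present:", has_iommu)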
 

amk-aka-Phantom

Distinguished
Mar 10, 2011
3,004
0
20,860


While it's true that ECC would be helpful in such a situation, don't forget that it would not be able to compensate for a faulty RAM module for long; at some point you would still have to replace the module, and the sooner the better.
 

amk-aka-Phantom

Distinguished
Mar 10, 2011
3,004
0
20,860


To be honest, that's not a big deal. Overclockers generally don't run VMs, and professionals who run VMs usually can't be bothered to overclock. There might be some stability issue with VT-d under overclocking that led Intel to leave it out (I speculate)... As for VT-x, it IS present in unlocked Intel CPUs, going at least as far back as the i7-2600K.
 

kristi_metal

Honorable
Jun 18, 2012
40
0
10,540
I presume there is no big difference in performance between these CPUs and their desktop counterparts, the only differences being support for ECC memory and better binning. And no overclocking for these chips, either.
I hope I am not wrong.
 

1991ATServerTower

Distinguished
May 6, 2013
141
4
18,715


That's a completely unfounded and unhelpful generalization.

Intel has had VT-x since the Pentium 4. However, VT-x is only half of the virtualization technology available. Unlike Intel, AMD doesn't hamstring its virtualization support by breaking it into two pieces and withholding half of it to create artificial, price-inflated market segments. With AMD it's either there or it's not, but it has been there in its entirety on almost every chip since 2006.

Sorry, but your sweeping generalizations and placations do not a justification make.
 

dgingeri

Distinguished


You clearly have not worked with server-grade hardware. I have been for the last three years, and I can tell you, there is a major difference.

First off, ECC memory is not overrated. It makes a big difference. You don't want your mail server crashing every week due to memory errors, or files getting corrupted on your storage server.

Second, I have dealt with cheaply built yet overpriced servers (Supermicro, HP DL180, generic OEM parts) recently, and I have dealt with inexpensive, good-quality servers (Dell R520, R515, T110 II), all within the last couple of months. I know from experience that a good-quality server makes the difference between having a system that just works once it's set up, and running through weeks of troubleshooting only to find out it's a cheap Realtek NIC chip causing performance issues that slow down the whole system.

I own two Dell T110 II machines right now. They cost me, with a 3-year warranty, less than the equivalent components I would have had to put together to get equal performance. One has an E3-1230 and the other an E3-1220v2. One was slightly less than $800 and the other slightly over $600, with dual-port iSCSI/TOE-offload NICs, one with 8GB and the other with 16GB of memory. They are also near silent and extremely reliable. I use one as my router/DHCP/DNS server, among other functions.

Yes, they both came with Seagate drives, but they are Constellation ES drives. From my experience with an install base of over 3000 Seagate drives and over 1000 WD RE3 drives, along with others, the Seagate Constellation drives are by far the most reliable drives on the market right now. Hitachi Ultrastar would be slightly behind them, WD RE drives a ways behind those, and Toshiba/Fujitsu drives far in the back.

Third, yes, the performance of Xeon processors is nearly equal to that of the Core i5 and i7 series. Oddly, Core i3 chips since Ivy Bridge have also had ECC support for low-end server work. Intel just positions the different chips differently, with, for example, lower clock rates and higher core counts in the E5 line. (I'd love to find an E5-2400v2 chip with quad cores and a >3.2GHz clock rate, but the quad cores cap out at 2.2GHz. A single-socket LGA1356 board would be nice, too, but those don't seem to exist.) I believe it is all about providing tiers for VM hosts rather than high performance levels for servers. There really wouldn't be a point to comparing the two platforms.

Finally, as a professional systems admin, a DIYer, a gamer, my family's tech support and system builder, and an overclocker, all for over 20 years, I do many things with my system that most people don't. I run VMware Player on my main machine, which has a Core i7-3930K overclocked to 4.5GHz and 32GB of RAM, to give me practice and training on Windows and Linux server builds. I have all my storage on a separate machine running StarWind iSCSI. And I have the two Dell boxes, one of which runs VMware ESXi 5.1 for additional, long-term VM servers. I'm all over the place. I need to keep current on software to stay relevant in my career, but I also like to game and play around with various configs.

In short, you really don't know what you're talking about. You need more experience in the rest of the world before you go spouting out things like this.
 

iamtheking123

Distinguished
Sep 2, 2010
410
0
18,780
Just want to point out to everybody that there is additional validation work on the silicon for these Xeon parts that doesn't happen for i7 parts. So even though the design is the same (or similar), the Xeon is going to be more reliable. For a home user, though, 99.999% vs. 99.99999% doesn't matter.

The Xeon space is really sweet for Intel. 93% of the market is what I'd call a captive audience.
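
To put rough numbers on those availability figures (my arithmetic, not the poster's), the difference works out to minutes versus seconds of downtime per year:

# Downtime per year implied by the availability figures quoted above.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.99999, 0.9999999):
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.7%} uptime -> about {downtime_min:.2f} min "
          f"({downtime_min * 60:.0f} s) of downtime per year")

Roughly five minutes a year versus about three seconds, which really is noise for a home user.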
 

RooD

Distinguished
Jul 5, 2004
213
0
18,680
I game with my E3-1245 V2. It is a nice chip for the money and goes well with my GTX 780. I always buy Xeon chips for my rig; that way, when I retire the gaming PC, I can swap it over and run my servers.
 

Shneiky

Distinguished
amk-aka-Phantom, could you go home, please? Just the fact that you are saying Seagate is low quality and WD is "top of the line" made me dismiss all your comments as irrelevant. I guess you really do not know what it is like to saturate a render farm at 100% for 12 days and barely make the submission deadline for your client.

Xeons exist for a reason. The features the Xeon platform brings exist for a reason. If they do not fit your needs, then find what does, instead of spouting nonsense.
 
I would like to see a Xeon/i7 comparison for the content-creation hobbyist gamers out there. Yeah, it might be a niche area, but for people who want and use the extra L3 and eight threads, a $260 1230 or $275 1240 is a very attractive alternative to a $310 i7 ($320 for the K). Considering Haswell's OC limitations, how much performance do you give up between a 4770K and a 1240 v3? I'd like to see those three CPUs go head-to-head, especially on a performance-per-dollar basis. Considering the Xeons can use cheaper B85/H87 boards with no extra cooling, that could be a very smart way to go.
 