Intel Sandy Bridge-EP versus AMD Interlagos

All we have committed to is a) higher-speed DRAM support and b) a new memory controller that will have greater throughput.

Between those two there should be some very nice increases in performance.
 
Damn, 20MB of cache? You guys realize a small program could fit in the cache itself? And this is probably 8x the amount of RAM that was available in the days of DOS.

Quick question @MU and JS:
-Peak memory bandwidth using 4x DDR3-1600 is 51.2GB/s, more than double what mainstream DT gets at 21.3GB/s using 2x DDR3-1333, and almost 2x Nehalem
I'm assuming this is Quad Channel?
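Yes, those peak figures fall straight out of channels × transfer rate × bus width. A minimal sketch, assuming the standard 64-bit (8-byte) data bus per DDR3 channel:

```python
# Peak DRAM bandwidth = channels * transfer rate (MT/s) * bus width (bytes).
# Assumes a 64-bit (8-byte) data bus per channel, as on standard DDR3 DIMMs.
def peak_bandwidth_gbs(channels: int, mt_per_s: int, bus_bytes: int = 8) -> float:
    """Return theoretical peak bandwidth in GB/s."""
    return channels * mt_per_s * bus_bytes / 1000

print(peak_bandwidth_gbs(4, 1600))  # quad-channel DDR3-1600 -> 51.2 GB/s
print(peak_bandwidth_gbs(2, 1333))  # dual-channel DDR3-1333 -> ~21.3 GB/s
```

Real-world sustained bandwidth will of course come in below these theoretical peaks.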
 


Question: why is server memory significantly slower than desktop memory? Is it because of the JEDEC approvals it needs? And how does that work?

In addition, this is the best server motherboard that my research could find:

http://www.evga.com/products/moreinfo.asp?pn=270-WS-W555-A1

thanks

CG
 
^ The SR2 is by no means a true server motherboard; it's basically a 2P board geared toward overclocking rather than stability. It's aimed at high-end OC runs, mainly under water cooling or LN2/dry-ice.

As far as why ECC RAM is slower, from my understanding it is partly due to JEDEC validation and partly due to the nature of ECC itself.
 


I would think it would depend on the HTT links, though. When Intel put QPI out, it was faster than the 3rd-gen HTT link. Hopefully AMD does push up the HTT links; it would make the competition a bit spicier.

As for AMD hanging on, I think they wait to move to the next standard until pricing has come down, since Intel starts adoption right away. It's the safer bet, I would say.



Yep. All LGA2011 mobos and CPUs will be quad-channel DDR3, and all AMD server CPUs should be too. 640K was all DOS needed, so yeah... more than 8x (closer to 32x, actually).



ECC basically makes sure that every bit is correct. It's needed in the server world since the data is extremely important.
 


The basic idea of ECC is that it corrects the errors it finds. Due to electrical interference from other parts (or a stray cosmic ray), a single bit in DRAM can be flipped to the opposite value (0 to 1, or 1 to 0). Since the memory checks every bit in a 64-bit or 32-bit word, each access takes a little more time; in essence, it is slower. Normal desktop DRAM is not ECC, so it doesn't check every bit and can therefore perform faster.
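To make the "check every bit" idea concrete, here is a toy single-error-correcting Hamming code. Real registered DDR3 ECC uses a (72,64) SECDED code implemented in the memory controller; this miniature version just illustrates how a parity syndrome pinpoints and fixes a flipped bit:

```python
# Toy single-error-correcting Hamming code over 8 data bits, to illustrate
# how ECC locates and fixes a flipped bit. Real DDR3 ECC is a (72,64)
# SECDED code done in hardware; this is the same idea in miniature.

def encode(data_bits):
    """Place 8 data bits into a 12-bit codeword; positions 1,2,4,8 hold parity."""
    code = [0] * 13                       # 1-indexed; position 0 unused
    data_positions = [p for p in range(1, 13) if p not in (1, 2, 4, 8)]
    for pos, bit in zip(data_positions, data_bits):
        code[pos] = bit
    for p in (1, 2, 4, 8):                # parity p covers positions with bit p set
        code[p] = sum(code[i] for i in range(1, 13) if i & p) % 2
    return code[1:]

def correct(codeword):
    """Recompute parity; a nonzero syndrome is the 1-based position of the flip."""
    code = [0] + list(codeword)
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(code[i] for i in range(1, 13) if i & p) % 2:
            syndrome += p
    if syndrome:                          # single-bit error: flip it back
        code[syndrome] ^= 1
    return code[1:]

word = [1, 0, 1, 1, 0, 0, 1, 0]
cw = encode(word)
cw[5] ^= 1                                # simulate interference flipping bit 6
assert correct(cw) == encode(word)        # the error is found and corrected
```

The extra parity computation on every read is exactly where the small latency cost of ECC comes from.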
 
^ Thank you. Basically what I thought. But seriously, what's stopping ECC RAM from getting faster? I get that the check takes time, but couldn't the manufacturers just increase the speed of the chip that detects/corrects errors?

From my understanding, each ECC DIMM has a dedicated IC that checks for errors, so in theory, increasing the speed of this IC should allow for faster RAM, correct?
 
As you increase the speed of the memory you also increase the chance of memory errors. That is why there are customers I talk to who prefer either slower memory or underclocked memory. If you are in an environment where you are seeing high numbers of correctable errors, you take that action.

Overclocking CPUs will do the same thing. When you push them really fast, sometimes 2+2=5. Not a big deal in a game. A REALLY big problem in an application on a server.
 


So ECC memory is mostly useful for scientific, financial, or military servers?
 


ECC is mostly useful for any server.

Let's say you have a file-sharing server that 15 people rely on for all of the documents they are working on. A memory error causes the server to reboot. 15 people lose the document changes they were working on. Call that 20 minutes' worth of work each that needs to be recreated, plus the 5 minutes they waste waiting for the server to reboot and reinitialize so they can access their documents.

25 x 15 = 375 minutes of lost productivity.

That is not even counting the time of the technician to figure out what just happened. Someone has to go look at the logs and figure out why the server rebooted and everyone lost their work. Tack on another 30 minutes of a wasted technician's time.

If the average employee makes $75K a year, that is $0.625 per minute. In other words, that little memory error just cost your company $234.38 in user time alone. Now picture the users as executives or highly paid engineers, or scale up the number of users, and you can see how a single memory error could run into the thousands.
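Sketching that back-of-envelope math in code (the 120,000 working minutes per year is my assumption, roughly 50 weeks x 5 days x 8 hours; the technician's time is included at the same rate for simplicity):

```python
# Back-of-envelope cost of one memory-error reboot, using the numbers above.
# Assumes 2,000 working hours (120,000 minutes) per employee per year.
salary = 75_000
per_minute = salary / 120_000             # -> $0.625/min

users = 15
lost_work = 20                            # minutes of changes to recreate
reboot_wait = 5                           # minutes waiting for the server
user_minutes = users * (lost_work + reboot_wait)   # 15 * 25 = 375 minutes

tech_minutes = 30                         # technician digging through logs

total = (user_minutes + tech_minutes) * per_minute
print(f"${total:.2f}")                    # -> $253.12 for a single error
```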

If you experience one memory error per month that could be a significant amount of money. For the typical server, the cost of ECC memory is easily covered by saving the company from just one reboot.

I would never put a server in a company that didn't have ECC memory, regardless of the usage.
 


I read somewhere that AMD is going to give us more details this autumn.
 
Totally agree. You don't want to be the guy explaining to the CEO why your company can't do its billing, or pay its payroll, because you cheaped out on memory and all your data got garbled.
 
If you experience one memory error per month that could be a significant amount of money. For the typical server, the cost of ECC memory is easily covered by saving the company from just one reboot.

I would never put a server in a company that didn't have ECC memory, regardless of the usage.
Agreed on absolutely running ECC on a workstation or server.

@jf: Does AMD have a response to the new RAS features from Intel (in the form of the Xeon 7500, etc.)? From my understanding, these Xeons are targeting the market dominated by IBM POWER CPUs, where reliability is at the top of the list. Will or does AMD have similar features and target this market?

Any plans by AMD to work with a company like SeaMicro to build something like this: http://www.anandtech.com/show/3768/seamicro-announces-sm10000-server-with-512-atom-cpus-and-low-power-consumption/3
 
The Xeon 7500 series has some RAS features that do make it compete with POWER and, more importantly, Itanium.

My theory is Intel is probably spending billions on keeping Itanium alive, and it does not drive that much revenue for them. (Whenever they talk about Itanium revenue, they do so at the system level, not the processor level.)

Long term it might make more sense for them to transition Itanium apps to x86 than to continue to keep that platform. Some of those RAS features are good, but the real question is who would pay 2.5X the cost of the processors just to get them.

The 4P market (today) is ~4-5%. The number of people that will pay the premium is probably a pretty small sliver of an already small market.

We think the bigger market opportunity is bringing 4P technology and performance to the 2P market.

Every generation of processors adds new RAS features, so you will see new ones with Bulldozer.
 


Pretty much any server a company relies on will have ECC RAM. My work server doesn't, because we use it to store images (ISOs) and programs we need, nothing super important like finances.

That Atom system is pretty cool. I don't see it lasting very long, though, since Intel pushed out its 48-core Terascale-based CPU for testing a few months back. If it uses the same tech as Terascale, then a 48-core chip should be able to do the same job as 65 server systems, since Terascale was doing about 130 servers' worth of work while using only 62W of power.

As for Itanium, it was a good idea with bad execution. It's a true 64-bit architecture that doesn't rely on the x86 µarch at all. But as I said, it was executed badly: it emulated x86 instead, which gave a pretty large performance drop. That's why x86-64 is what we use.

If anything, for Intel at least, Terascale seems to be their future, along with photonics. As for AMD, they haven't really said anything beyond a few years out. I wouldn't doubt it if they were working on something much like Terascale.
 
@jf:

Can you tell what RAS features the Server variant of Bulldozer will have?
The 4P market (today) is ~4-5%. The number of people that will pay the premium is probably a pretty small sliver of an already small market.
True, but in the 4P+ market (basically, HPC), people are willing to pay a notable premium, I believe. Here the Xeon 7500 makes sense. Btw, is AMD even doing anything in the HPC area? If you look at the current Top500, Intel controls about 80% of the list. Any reason why AMD is not putting much effort into getting more market share here?

But yeah, Itanium was/is a failure imo. Intel should kill it off and be done with it. It was a good idea, but the execution, the marketing, the consumer response, etc. all $ucked.
 


1. No, I cannot comment on the features.

2. If you think that the 4P+ market is HPC, then you are really far off. If you look at the HPC market you will find few, if any, 4Ps, because the price/performance is so much better with 2P. We recently changed our pricing, and now 4P and 2P parts are the same price. This is causing a lot of people in the HPC world to consider 4P (greater density, less cabling, fewer physical systems to manage).

I would be willing to bet, with the price of the 7500 series, that there will be few, if any 4P HPC platforms based on that architecture.

Are we doing anything in the HPC area? Well, the #1 and #4 supercomputers in the world are both based on Opteron, and 7 of the top 21 are Opteron. There is only one 7500-based platform in the top 20, and I would be willing to bet they cut an amazingly good deal on those processors to get the win.
 
^ Good point. Yeah, you are right, most of the HPC systems are using 2P. So WHO is using 4P systems then?!

Also, I noticed that #2 (Nebulae) is using Tesla cards. Do you think AMD has an advantage over Intel in this regard? I would expect more Top500 computers to begin using GPGPU computing in a few years; I'd say 1/3 of the HPC systems will use GPGPU tech by 2020.
 
4P is database and virtualization, mostly.

There are more customers looking at GPGPU, but there are some underlying software things that need to happen in order to really have that become more mainstream for servers.

I am not sure that I would be comfortable putting a number on the GPGPU percentage, it is too difficult to determine at this point.
 
There are more customers looking at GPGPU, but there are some underlying software things that need to happen in order to really have that become more mainstream for servers.
True, but you know these things will eventually trickle down to the mainstream. Software is indeed holding things back; mainly, there need to be more languages that make multithreading easy to use.
If you think about it, a current-gen 2P server probably has about the same performance as the first supercomputers.
 
