Sandy Bridge-E: Core i7-3960X Is Fast, But Is It Any More Efficient?


fstrthnu

Distinguished
May 5, 2010
And yet more evidence that most people looking for a high-end processor will be perfectly fine with the i5-2500K or the 2600K.
 

sam_fisher

Distinguished
Dec 24, 2010
[citation][nom]fstrthnu[/nom]And yet more evidence that most people looking for a high-end processor will be perfectly fine with the i5-2500K or the 2600K[/citation]

I guess it just depends on what you're doing. If you have a high end workstation and are using programs that are going to utilise all 12 threads, quad channel memory and 40 lanes of PCIe, and you need that processing power then it's probably not a bad investment. Whereas for most users the 2500K or the 2600K will do fine.
 

benikens

Distinguished
Jun 8, 2011
Ironically, when it comes to performance, Intel’s Core i7-9360X is the real Bulldozer. Since its power consumption levels are lower than the Gulftown-based Core i7, it should also deliver amazing performance per watt as well. Is that really the case?

It's i7-3960x, not i7-9360x
 
Another informative, in-depth article about efficiency. Great work, guys!
The 3960X might very well be the $1K CPU that's worth the (over)price, unlike the older 980X.
SB-E shows that single-threaded performance, multi-threaded performance, and efficient power use can all be achieved by a 32nm, six-core, 130W TDP CPU (but you have to pay a lot for that).
When you bring price into the equation, the quad-core SB i5 and i7 (95W TDP) are the best way to go (I wonder how an i7-2700K would fare if it were tested alongside these CPUs).
 
Guest
I wanna know how it performs on DAW apps. I hope it will be included in future benchmarks.
 

ukee1593

Distinguished
Jun 8, 2009
I am still very pleased with my i5 2500 after reading this article. Sandy Bridge-E's efficiency might be impressive for a high-end CPU ... but it still can't beat the practicality of standard Sandy Bridge.

I can't wait until Ivy Bridge!
 

aldaia

Distinguished
Oct 22, 2010
[citation][nom]fstrthnu[/nom]And yet more evidence that most people looking for a high-end processor will be perfectly fine with the i5-2500K or the 2600K[/citation]
Agreed, the 2500K is still the sweet spot in the performance/power/cost trade-off. It's what I'd choose if I needed a replacement, considering the applications I run.
[citation][nom]sam_fisher[/nom]I guess it just depends on what you're doing. If you have a high end workstation and are using programs that are going to utilise all 12 threads, quad channel memory and 40 lanes of PCIe, and you need that processing power then it's probably not a bad investment. Whereas for most users the 2500K or the 2600K will do fine.[/citation]
Right, but if I had a truly parallel application, then a server with several interconnected nodes offers more bang for the buck. I would consider four nodes based on the 2500K, which would probably cost less than a single 3960X and offer much more computing power. It all depends on your application.
But clearly the 3960X is for a niche market, either because it truly fits your needs or for the "bragging rights" crowd.
 

AppleBlowsDonkeyBalls

Distinguished
Sep 30, 2010
[citation][nom]aldaia[/nom]Agreed, the 2500K is still the sweet spot in the performance/power/cost trade-off. It's what I'd choose if I needed a replacement, considering the applications I run. Right, but if I had a truly parallel application, then a server with several interconnected nodes offers more bang for the buck. I would consider four nodes based on the 2500K, which would probably cost less than a single 3960X and offer much more computing power. It all depends on your application. But clearly the 3960X is for a niche market, either because it truly fits your needs or for the "bragging rights" crowd.[/citation]

The i7-3930K is pretty decent for the price, though. At the same clocks as the 3960X it's the same speed, and all the reviews featuring both have it achieving the same overclocks, sometimes at lower voltage. Unless it's for bragging rights or e-peen, the 3930K is clearly the better choice, since the extra cache seems to be useless on the desktop and the 3960X isn't even better binned.


 
CaedenV

The more benchmarks I read, the happier I am with my i7 2600 :) It is right behind the new big boy, and it only cost $250 at my local computer hardware store, compared to $1,000 to shave off a few extra seconds.

What will be really interesting to see is what happens with the IB release. Last time, mainstream SB could meet or beat the old high-end chips for a third of the price. I wonder if the IB release will do the same thing, or if Intel will hold back the performance so as not to piss off their high-end buyers again.
 

AppleBlowsDonkeyBalls

Distinguished
Sep 30, 2010
[citation][nom]CaedenV[/nom]The more benchmarks I read the happier I am with my i7 2600. It is right behind the new big boy, and only cost $250 at my local computer hardware store compared to $1000 to get a few extra seconds off. What will be really interesting to see is what happens with the IB release. Last time the mainstream SB could meet or beat the old high end chips, for 1/3 the price. I wonder if the IB release will do the same thing, or if Intel will downplay the performance so as not to piss off their high-end buyers again.[/citation]

Ivy Bridge is a die shrink focused mostly on lowering power consumption and improving IGP performance. CPU performance improvements will be modest: according to AnandTech, 4-6% higher IPC than Sandy Bridge, and since Intel is focusing on power consumption, clock speeds won't be much higher than SB either, so figure about 5% there too. That works out to roughly 10% more CPU performance at most, so don't expect too much. Sandy Bridge-E will still be significantly faster in multi-threaded workloads.
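
Rough math behind that "about 10%" figure: IPC and clock gains multiply rather than add. A quick sketch using the 4-6% IPC and ~5% clock estimates above (these are estimates, not measured numbers):

[code]
# Combined CPU speedup from IPC and clock gains; the inputs are the
# estimates quoted above (4-6% IPC, ~5% clock), not measurements.

def combined_speedup(ipc_gain, clock_gain):
    # IPC and frequency improvements multiply, they don't add.
    return (1 + ipc_gain) * (1 + clock_gain) - 1

for ipc in (0.04, 0.06):
    total = combined_speedup(ipc, 0.05)
    print(f"IPC +{ipc:.0%}, clock +5% -> ~{total:.1%} overall")
# Prints roughly 9-11%, i.e. the "about 10% max" ballpark.
[/code]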
 

TeraMedia

Distinguished
Jan 26, 2006
Based on the results of some of the multi-threaded tests, it appears as if Turbo Boost on SB-E is getting modulated more often than on the 2600K. It would be very interesting to see a multi-threaded test in which Turbo Boost was turned off and both chips were locked at the same clock, e.g. 3.6 or 3.9 GHz, whatever the cooler will bear. Also supporting this idea: several of the configurations appear to max out at right around 200-210 watts peak power. So if the thermal limiter is kicking in on SB-E to keep it within its power budget, that could explain the "better, but not way better" gap between SB-E and the 2600K. Would such a test be feasible, Tom's?
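
One rough way to see why a shared package power budget could pinch SB-E harder than the 2600K is to divide the rated TDP by the active core count (130W over six cores vs. 95W over four, using the TDP figures mentioned earlier in the thread). This is only a sketch: the 200-210W readings are whole-system numbers, and real turbo behaviour also depends on voltage, uncore power, and temperature.

[code]
# Illustrative per-core power headroom under a fixed package budget (rated TDP).
# TDP and core counts are the published figures; everything else about real
# turbo behaviour (voltage, uncore, temperature) is ignored in this sketch.

chips = {
    "Core i7-3960X (SB-E)": {"tdp_w": 130, "cores": 6},
    "Core i7-2600K":        {"tdp_w": 95,  "cores": 4},
}

for name, c in chips.items():
    per_core = c["tdp_w"] / c["cores"]
    print(f"{name}: ~{per_core:.1f} W per core with all cores loaded")
# ~21.7 W/core for SB-E vs ~23.8 W/core for the 2600K: less per-core headroom,
# which would fit the "turbo gets modulated more often" observation.
[/code]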
 

danraies

Distinguished
Aug 5, 2011
I work in engineering, and many of our employees run heavily multithreaded applications on their personal machines, sometimes for days on end. This is obviously the kind of place where SB-E chips will thrive, unless IB blows them out of the water. Obviously these $1K chips are not the right choice for an enthusiast gaming PC, and they're arguably not the best choice for servers either, since they get outshined by cheaper chips spread over several nodes. But there are certainly applications where six or more cores at 3.3 GHz+ are worth $1K, and SB-E steps in where Bulldozer failed.
 
Guest
@danraies

Actually, no. Even with this mass of seething power, if one of their runs takes all day, this thing is only really going to shave off an hour or two; it won't make a drastic impact. And you might want to go back and revisit those Bulldozer benchmarks, because if I recall correctly, much to my surprise, Bulldozer did quite well in the productivity-app department.

If you really want to increase throughput and money is no object, look into a Tesla setup or offload all the work to a grid.
 

billj214

Distinguished
Jan 27, 2009
There are a lot more cost-conscious people reading this forum who know there is no reason to pay $1,000 for a ~10% performance gain. Intel knows there are people who have money to burn and will always buy the fastest CPU just because they can.
I have many friends who are speed junkies and want something just because it's the fastest, kind of like an addiction! Triple SLI or CrossFire with a water-cooled i7-990X, etc.

I'm sure everyone remembers being amazed at the Sandy Bridge launch; just wait for Ivy Bridge, because it will be the real "Bulldozer", IMO.
 

andywork78

Distinguished
Oct 31, 2011
I made a good choice with Bulldozer....
Ivy is great, really good.
But the price fails so hard...
Because getting the best of the best costs more than $1K....
Eeee, not for me....
 

mapesdhs

Distinguished
[citation][nom]aldaia[/nom]... Right, but if i have a truly highly parallel application, then, a server with several interconnected nodes offers more bang for the buck. ... [/citation]

A few points to ponder:

a) Many 'truly' parallel apps don't scale well when running across networked nodes, i.e. clusters; it depends on the granularity of the code. Some tasks just need as much compute power as possible in a single system. This can be mitigated somewhat with InfiniBand and other low-latency interconnects, but the latencies are still huge compared to local RAM access. If the code running on a particular chip needs little or no access to data held by a different node, that's great, and some codes are certainly like this, but others are definitely not, i.e. a cluster setup doesn't work at all (see the sketch after these points). When 2-socket and the rarer 4-socket boards can't deliver the required performance, companies use shared-memory systems instead, which are already available with up to 256 sockets (2,560 cores max using the Xeon E7 family), though it's often quite difficult to get codes to scale well beyond 64 CPUs (there are huge efforts underway in the cosmological community at the moment to achieve good scaling up to 512 CPUs, e.g. with the Cosmos machine).

b) Tasks that require massive processing often require a lot of RAM, way more than is supported on any consumer board. Multi-socket Xeon boards offer the required amount of RAM, at the expense of more costly CPUs, and they deliver the required performance too if that also matters. ANSYS is probably the most extreme example; one researcher told me his ideal workstation would be a single-CPU machine with 1TB of RAM (various shared-memory systems can have this much RAM and more, but more CPUs comes with it). X58 suffered from this somewhat: consumer boards only offered 24GB of RAM max (not enough for many pro tasks), and the reliability of such configs was pretty poor, if you remember back to when people first started trying to max out X58 boards with non-ECC DIMMs.

c) Many tasks are mission critical, i.e. memory errors cannot be tolerated, so ECC RAM is essential, something most consumer boards can't use (or consumer chips don't support). Indeed, some apps are not certified for use on anything other than ECC-based systems.

d) Some tasks also require enormous I/O throughput (usually via huge FC arrays), something again not possible with consumer boards (1GB/sec is far too low when a GIS dataset is 500GB+). Even modern render farms have to cope with data on that scale now for single-frame renders. It's often about so much more than just the CPU muscle inside the workstation, or as John Mashey put it years ago, "It's the bandwidth, stupid!" Sometimes it's not how fast one can process, it's how much one can process (raw core performance matters less than I/O capability). Indeed, render-farm management systems may even deselect cores so the remaining cores can make better use of the available memory bandwidth (it depends on the job), though this was more of an issue for the older FSB-based Xeons.
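
To put point (a) in more concrete terms, here is a minimal toy model: Amdahl's law with an extra per-node communication penalty. The serial fraction and the overhead term are illustrative assumptions, not measurements of any real code or interconnect.

[code]
# Toy scaling model for point (a): Amdahl's law plus a per-node communication cost.
# serial_frac and comm_cost are illustrative assumptions, not measured values.

def speedup(nodes, serial_frac, comm_cost):
    # Fixed-size speedup: serial part + parallel part spread over nodes
    # + communication overhead that grows with node count.
    return 1.0 / (serial_frac + (1.0 - serial_frac) / nodes + comm_cost * (nodes - 1))

for n in (1, 2, 4, 8, 16, 64):
    local = speedup(n, serial_frac=0.05, comm_cost=0.0)     # one big shared-memory box
    cluster = speedup(n, serial_frac=0.05, comm_cost=0.01)  # networked nodes, fine-grained code
    print(f"{n:3d} nodes: shared memory ~{local:4.1f}x, cluster ~{cluster:4.1f}x")
[/code]

With even a small per-node overhead, the cluster curve flattens out and then falls, which is exactly why fine-grained codes want as much compute as possible inside a single system.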

And then sometimes raw performance just rules, cost be damned. Studio frame rendering is certainly one such example; I've no doubt IB Xeons will be very popular for this. Thousands of cores are fairly typical at such places.

SB-E is great, but it's at the beginning of the true high-performance ladder, not the end.

Ian.

 