Intel Core i7-5960X, -5930K, And -5820K CPU Review: Haswell-E Rises

Status
Not open for further replies.
Um I'm a total noob. Can someone tell me approximately how much of an increase in performance I'd see using any of these over my i5 4670k? My CPU is not overclocked.
I'm running a 780 ti and Gskill Ripjaw 1600 RAM.
> Um I'm a total noob. Can someone tell me approximately how much of an increase in performance I'd see using any of these over my i5 4670k? My CPU is not overclocked.
> I'm running a 780 ti and Gskill Ripjaw 1600 RAM.
i see u are a total noob, and that is why u have a 780 ti and ( LOL 1600 ram)
 
Disappointed to not see a test with 2 or more Titan Blacks for 4K gaming vs. a Devil's Canyon CPU. It'd be good to understand if Haswell-E can help multi-GPU setups with top-end cards in frame rates/frame time variance etc. With a single card the GPU will always be the bottleneck on current-gen cards, so it's unlikely to show any difference.
 
Is it possible to see a side by side comparison between IvyB-E and Hsw-E? Yeah, we're looking back instead of forward, but I'm curious to see the actual difference between these two families, considering the biggest complaint about IvyB-E was the chip set it was paired up with...
 

That greatly depends on your usage scenarios of the machine. I'm not saying every SB-E system warrants an upgrade to HW-E, but you risk sounding like you're saying the exact opposite in that no SB-E at all should go to 2011-3.

My original point was that comparing pure clock rates across CPU generations is absurd. People conveniently forget that a Haswell @ 4.4GHz will meet or beat a SB @ 4.8GHz while likely using less power. And if you can use AVX2, the HW is the clear winner. Let's not forget the new extra features on current mboards like SATA Express, M.2, etc that weren't available on the original X79 models.
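To put rough numbers on that clock-vs-IPC point (just a sketch; the ~11% combined IPC uplift for SB→IVB→HSW is an assumed ballpark figure, not a benchmark):

```python
# Rough effective-throughput comparison: clock x relative IPC.
# The IPC uplift figures are assumptions (~5-6% per generation), not measurements.
def effective_perf(clock_ghz, relative_ipc):
    """Effective single-thread throughput in arbitrary units."""
    return clock_ghz * relative_ipc

sb = effective_perf(4.8, 1.00)  # Sandy Bridge @ 4.8 GHz (baseline IPC)
hw = effective_perf(4.4, 1.11)  # Haswell @ 4.4 GHz, assumed ~11% higher IPC

print(f"SB @ 4.8 GHz: {sb:.2f}  HW @ 4.4 GHz: {hw:.2f}")  # within ~2% of each other
```

Under those assumptions the 4.4 GHz Haswell edges out the 4.8 GHz Sandy Bridge, which is the point: raw clock comparisons across generations don't tell you much.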
 

Too bad the chipset still connects to the CPU with the good old DMI2 interface. If you want to avoid that bottleneck, you have to use add-in-boards or chips connected to CPU-hosted PCIE lanes.
 

Yeah, that was disappointing. I was hoping they'd up it to 3.0 for this one.
 
RedJaron writes:
> My original point was that comparing pure clock rates across CPU generations is absurd. ...


True, I was simplifying a bit, but what I meant was that for raw overall threaded performance,
the speed boost from an oc'd 5960X over an oc'd 3930K is not really that much, around 30%
to maybe 50% at best. I talked to someone yesterday who's in this very position; his opinion was
that it's not enough to justify the cost involved, in his case combined with the issue of the max RAM
for the new 5Ks still only being 64GB (Intel made a mistake there IMO, should have been more).

That's why I said I thought Intel would have been better off having some kind of mid-range 8-core,
rather than fiddle around with crippled PCIe for the low-end. Having only the top-end as the 8-core
makes the option a tad unaffordable to many.

IMO the cheaper option should just be the 5930K, middle an 8-core with lesser cache or whatever,
top-end a better 8 core with more cache, extra PCIe, etc. I guess Intel's still thoroughly wedded to
its paranoia of not harming XEON sales, when in reality anyone who might be affected in that way
can't afford XEONs anyway, but will look at these new CPUs and conclude they're not much of an
upgrade over the earliest SB-E, unless of course one's existing CPU isn't oc'd at all, or if the newer
SATA3 connectivity is useful, etc.

Btw, in legacy terms this means the 4820K is now a rather peculiar chip, being a 4-core but with
a full 40 PCIe V3 lanes. Given the way the dependency on CPU power reduces as GPU loading
and PCIe loading increases for multi-GPU/multi-screen/high-res setups, it could even be possible
that an X79/4820K system is faster than an X99/5820K system, and certainly cheaper atm. Weird.
I really don't get Intel's decision to meddle about with PCIe lane provision in this way.


> People conveniently forget that a Haswell @ 4.4GHz will meet or beat a SB @ 4.8GHz ...

Actually it's generally about the same or less; I did a lot of checking while prepping for posts
way back. Also, it's waaaay easier to get SB to reach 4.8+ than it is to have HW at 4.4+. One
doesn't even need water to have a 2700K at 5.0 (old TRUE + 2 quiet NDS 120s works fine);
takes me 3 mins to sort this in the BIOS with an ASUS M4E/Z.


> ... while likely using less power. ...

Really not that much of a factor in the discussions I've had with solo pro types.


> And if you can use AVX2, the HW is the clear winner. ...

Hardly relevant until apps make use of it.


> ... Let's not forget the new extra features on current mboards like SATA Express, M.2,
> etc that weren't available on the original X79 models.

That is indeed one area that may persuade some to upgrade (I'm certainly not keen on
using a Marvell SATA3 controller to run a RAID1), but it'll be person/app/situation-specific.

With the pricing as it is for the 8-core, as I say for most typical solo pros I don't think the
cost will be regarded as being worth the mild performance boost. It would certainly be
useful when deadlines are tight, but again the cost is a thorn.

What's infuriating is there's absolutely no reason why Intel couldn't rerelease the original
SB-E for X79, unlocked to 8-core, even shrink it for lower power. X79 could easily use an
8-core consumer CPU, Intel just chose not to because they didn't have to. And btw, check
back through the original reviews, there were plenty of expectations on the part of hw sites
that X79 would eventually have an 8-core option; would they have said all that without
initial hints from Intel?

I really don't get why Intel made the new middle-ground $600+ CPU only a 6-core. It's just
not enough IMO. If all they're doing is setting performance/feature levels based on what AMD
can do in contrast, well, that's a mistake; their main market here is upgrades over X79 I reckon.
As some reviewers have said, Intel is competing with itself here. HW-E is obviously great for a
solo pro who's never had a 6+ core CPU before, but for the cost involved it's just not that
much better than SB-E for most users.

Reading reviews, the wide oc variability of these new 5Ks is also worrying. I found several reviews
where HW-E samples wouldn't go over 4.2. Elsewhere, one site managed 4.7, though at a pretty
crazy voltage, and a heat/temp level I'd never be comfortable with.

I really hope Intel can make these CPUs better at some point. If they offered an 8-core middleground
CPU for around $600 to $700 (by that I really mean between 400 and 500 UKP) which is 2X faster than
a 6-core SB-E (ie. stock vs. stock, or oc vs. oc; whichever), then I think it'd be a worthy upgrade for an
existing SB-E setup.

As for IB-E, the rationale is even weaker of course.

Ian.

 
Here I come 4k surround ultra crysis 3 with the most xtreme build that has 5960x in it with quad 295x2 😀 ~~~~~ :) ~~~~~ 🙁 never mind... its still gonna run at 15 fps :_( ~~~~~ plus I got no money for that D:
 

Even DMI3 is a pretty modest bump: going from 5Gbps to 8Gbps per lane. I was expecting 10-16Gbps to avoid having to introduce yet another DMI revision when USB3.1, SATA-X and other interface updates get rolled in after Skylake.

Then again, with Intel starting to make PCIE lanes configurable between SATA and USB, I would not be too surprised if the next step after DMI3 was rolling most of those directly into the CPU to bypass the PCH altogether for all the core components.
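For anyone wondering why that per-lane bump matters less than it sounds, the usable bandwidth also depends on the line encoding (sketch; DMI2/DMI3 use the PCIe 2.0/3.0 signaling rates and encodings):

```python
# Usable bandwidth of the CPU<->PCH link: lanes x line rate x encoding efficiency.
# DMI2 runs 4 lanes at 5 GT/s with 8b/10b encoding; DMI3 runs 4 lanes at
# 8 GT/s with 128b/130b encoding (same as PCIe 2.0 vs 3.0).
def link_gbytes_per_s(lanes, gt_per_s, payload_bits, total_bits):
    usable_gbit = lanes * gt_per_s * payload_bits / total_bits
    return usable_gbit / 8  # bits -> bytes

dmi2 = link_gbytes_per_s(4, 5.0, 8, 10)     # ~2.0 GB/s each direction
dmi3 = link_gbytes_per_s(4, 8.0, 128, 130)  # ~3.9 GB/s each direction

print(f"DMI2 ~{dmi2:.1f} GB/s, DMI3 ~{dmi3:.1f} GB/s per direction")
```

So even DMI3 gives roughly one M.2 PCIe SSD's worth of bandwidth for the entire PCH: every SATA port, USB port, and chipset PCIe slot shares it.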
 
This is a comprehensive and fair shoot-out, as always.

However, I once again find myself wishing that some major test would include the type of software I use: a DAW (digital audio workstation).

Yes, some of the other test apps do a fairly good job of "simulating" a DAW's massively-threaded environment, but most don't totally apply for one important reason: a heavily laden DAW session is almost completely NON-dependent on the GPU.

Additionally, if one does MIDI composition in a DAW, it is possible, even likely, that you will have 3, 4, even 6 or 7 "virtual instrument" or sampler plugins concurrently streaming massive data from your drives. That places a rather different burden on parts of the system from even heavy video editing.

Maybe in the next go-round you guys could consider this?
 
Additionally-

what every DAW user would love to see is a test, using a heavily laden DAW as described above, comparing "officially supported" RAM speed against moderately overclocked RAM.

Rendering results like "2 seconds faster" mean nothing here. What's important are things like track count, number of streaming VSTi's running concurrently, and most importantly, how low the HW buffer can be set for a system using, say, 50% total cpu cycles.

While such tests are more the purview of DAW manufacturers and DAW forums, few if any people involved there have the resources to do this testing. Perhaps it would be enough for Tom's to create ONE large, dense test session, which pulls around 50% total cycles from a CPU in the middle of the pack, then test that for a few key parameters. Again, the critical one being how low the ASIO HW buffer can be set.
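For reference, the buffer setting maps to latency in a straightforward way (sketch; 48 kHz is just an assumed sample rate, and real round-trip latency adds converter and driver overhead on top of this figure):

```python
# One-way latency implied by an ASIO/hardware buffer setting:
# latency (ms) = buffer size in samples / sample rate in Hz * 1000.
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    return buffer_samples / sample_rate_hz * 1000

for buf in (64, 128, 256, 512):
    print(f"{buf:4d} samples @ 48 kHz -> {buffer_latency_ms(buf, 48_000):.2f} ms")
```

That's why "how low can the buffer go at 50% CPU load" is the number that matters: halving the buffer halves the monitoring latency, but doubles how often the CPU must service the audio callback without a dropout.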

- Or maybe I'm just dreaming ........
 

If DAW workloads scale so well with threading, it should only be a matter of time until DAW software starts using DirectCompute for even more performance - DSP stuff is one of those other compute-intensive things GPUs should be very good at.
 
So X99 is only available for the 2011 sockets? They won't be putting X99 in for the 1150. So when does Z97 get DDR4 support, etc.? If DDR4 offers lower power and better speeds and latency clock for clock, then laptops and desktops would benefit just as much!
 

Never.

The memory controller is built into the CPU so it is the CPU that determines what memory standards the motherboard can support. The chipset has absolutely no bearing on that aside from its compatibility with said CPU.

Since the 97-series are only intended for use with Haswell and Broadwell CPUs which only support DDR3, there will never be a 97-series motherboard with DDR4.

If you want a mainstream CPU that supports DDR4, you will have to wait for Skylake late next year, shortly after Broadwell-K's launch, assuming Intel does not delay it.
 



Though you have a point here, the guy buying such CPUs will most likely game above 1080p... but this would have implied using at least 2 GPUs in the test.



You said it. 295X, 2x290X, 2x780 Ti, 2x Titan. Any of these options would've been perfect for this review.
 



 


It's not a question of availability; S2011-3 is what HW-E is designed to use. Pairing X99 with S1150
is a mismatch of concepts; the one has nothing to do with the other.

InvalidError covered the rest nicely.

Ian.

 
Nice picture! You rarely see a good illustration that really shows how things work. I hope the picture is accurate so I don't look dumb! Lol

So still nothing with a 5ghz base clock?
 
 


Then you would end up with the Intel equivalent of the AMD FX-9590.
 