Intel Core i7-5960X, -5930K, And -5820K CPU Review: Haswell-E Rises

Status
Not open for further replies.
I would totally go for the i7-5930K; it has the best clock rate and would be sick! Paired with a bunch of RAM, a good video card, and a big hard drive, it would be a beast!
 
Something I would like to see Tom's Hardware start including in its CPU evaluations for gamers is not just in-game framerates, but also how well the CPU handles streaming while gaming.

I know for me this is the main reason I would consider buying a 6-core CPU (or even possibly an 8-core) over a cheaper 4-core. Then I can stream (or record locally) at a decent bitrate while still having good performance in game.
 
Interesting idea. I thought, though, that NVIDIA cards had some kind of tech in them that allowed game recording
without much impact on performance? Can't remember now...

Ian.

 

Both the latest AMD and NVIDIA GPUs have fixed-function encoding logic inside their GPUs and iGPUs capable of streaming: AMD calls it VCE (currently at versions 1.0 and 2.0, AFAIK), and NVIDIA's is NVENC, which powers ShadowPlay. Intel has Quick Sync, but I don't know whether it's used for streaming. AFAIK, AM3+ AMD CPUs (the iGPU-less ones) don't have this capability.
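For anyone wanting to try those encoders, here's a rough sketch (in Python, just building the command line) of how a hardware-encoded stream is typically set up with ffmpeg. The encoder names (`h264_nvenc` for NVIDIA, `h264_qsv` for Quick Sync) are real ffmpeg encoders, but the capture source and RTMP URL below are placeholders, and the whole thing assumes a Windows ffmpeg build with the relevant encoder compiled in:

```python
# Sketch: an ffmpeg command that offloads encoding to the GPU's fixed-function
# block (NVENC on NVIDIA, Quick Sync via h264_qsv on Intel), leaving the CPU
# cores free for the game. The RTMP URL is a placeholder.

def stream_command(encoder="h264_nvenc", bitrate="6000k",
                   url="rtmp://example.invalid/live/streamkey"):
    """Return an ffmpeg argv list for capturing the desktop and streaming it."""
    return [
        "ffmpeg",
        "-f", "gdigrab", "-framerate", "60", "-i", "desktop",  # Windows screen grab
        "-c:v", encoder,   # hardware encoder instead of CPU-side libx264
        "-b:v", bitrate,   # target stream bitrate
        "-f", "flv", url,  # RTMP expects an FLV container
    ]

cmd = stream_command()
print(" ".join(cmd))
```

Swapping the `encoder` argument is all it takes to target a different vendor's block, e.g. `stream_command(encoder="h264_qsv")` for Quick Sync.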
 
In the CPU monthly you keep saying that the 4690K is the recommended CPU for over $240. If it's so great, then why aren't you including it in the testing results?
 
5820K + MB + 16 GB DDR4-2133: $727
4790K + MB + 16 GB DDR3-1866: $599
4690K + MB + 16 GB DDR3-1866: $519
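Just to make the deltas explicit, here's the arithmetic on those quoted prices (a trivial sketch; the build labels are shorthand for the lines above):

```python
# Quick arithmetic on the platform prices quoted above (CPU + motherboard + 16 GB RAM).
builds = {
    "5820K + DDR4-2133": 727,
    "4790K + DDR3-1866": 599,
    "4690K + DDR3-1866": 519,
}
cheapest = builds["4690K + DDR3-1866"]
for name, price in sorted(builds.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${price} (+${price - cheapest} over the 4690K build)")
```

So the six-core platform carries roughly a $128 premium over the 4790K build and $208 over the 4690K build.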

If you buy a 4790K or 4690K, you will be stuck with an underperforming CPU and outdated memory. Given its track record, Intel will change the CPU pin configuration or the RAM slots on motherboards to accommodate DDR4 on the next generation of CPUs and motherboards. The only technological upgrade route for those who buy a 4790K or 4690K now is the GPU.
 

Under-performing? At the rate CPUs have been improving for the past five years, that "under-performing" CPU will remain practically on par with whatever comes out for the next four years. With DDR4 having slower timings at much higher prices than DDR3 modules with the same clock speed, DDR4 is currently a downgrade so DDR3 is far from being obsolete yet.

So I would not worry about current PCs being "obsolete" for another two or three years at the very least.

For now, DDR4 is a niche-market thing for servers and extreme enthusiasts. AMD has no plans to move to DDR4 yet and even Intel is not quite committed to forcing DDR4 on mainstream desktop since Skylake rumors say it will support both.
 


Yes and no. Technically, the next gen is Broadwell. We've only seen a tablet version of this CPU so far. There's no telling if Intel is even going to release anything beyond that with Broadwell, and the only thing it has to offer is slightly better iGPU performance and lower power consumption.

The next gen after that is Skylake, which will support DDR4, so it will definitely feature a new socket and new chipset. It also won't be released for at least another year. Since most things on the PC are more limited by storage than anything else, I don't see these making any significant difference anyway.

However, I would hardly call the 4790k an underperforming CPU. The 4690k is already top of the gaming CPU chart. The 4790k offers no extra performance in gaming, just like the Haswell-E series. Sure, Haswell-E has more cores, but no game right now utilizes more than 4 cores. We are entirely GPU limited in the game space right now. This may change in time, but by the time it really makes a difference, it will be time to upgrade again anyway.

The increases Intel has offered recently have been increasingly irrelevant. Intel itself is becoming less and less relevant. They're still top of the heap for PCs, but they're not quite as dominant as they once were. I built my older sister and her family a Core i5 2500 machine several years ago, and it is still more computer than they can use. Even that old iGPU is more than they can use. This is the case with most home and business use. There's just no software out there that uses what we have.

Heck, the reasons to upgrade to the newest software are even decreasing. The biggest reason is security, but even in that case, it is affecting fewer and fewer people. If the software isn't getting upgraded, what is the use of upgrading the hardware? My sister still uses Office 2007, and has no plans on moving away from it.

Face it, the high end desktop is becoming more and more elitist. Fewer and fewer people actually need that high end.

Now, if you're talking the server side, that's a whole different story.
 
Can someone tell me approximately how much of an increase in performance I'd see using any of these over my i5 4670k?
First, you should know those series are not designed for regular consumers.
To answer your question:
For gaming: probably ~5%.
Software: the only benefit is in multitasking.
I've seen a 4770 outperform a 5960X in some cases.

But are you going to build a $3,000 system just for gaming? That's ridiculous.
Stay with the i5; it's a very good "beast!" I'm using one myself.
Again, X99 is for workstations, not gaming.
Hope that helped :)
 
That's a biased response. There are people who have the money and like to build & use high-end
systems for gaming. They have multiple screens, play at high resolutions, have the details maxed
out, and they're happy to pay what's necessary to achieve this, i.e. 3- or 4-way SLI/CF if need be.
I saw one poster on a Skyrim thread who had four Titans.

If anything you have it completely the wrong way round. People of the category above are normal,
they form a proportion of the gaming market (and a very important section at that), they are members
of the public. They're not the majority by numbers, but they constitute a far more significant proportion
of many stores' income by value. I talked to a computer shop keeper in CA who told me that without
customers who buy the high-end parts, his store couldn't survive; he often makes a loss on things
like simple HDDs because he has to, just to draw the custom in. High-end gaming is like the hi-fi market:
it's where the money is and what drives the tech; lessons learned filter down over time, etc.

Sure, most people don't need this level of performance, but it's wrong to say these CPUs are only for
workstations. Indeed, in many cases pro users could not or would not use such systems at all, for
all sorts of reasons. They can be a good compromise on the way to being able to afford a proper
multi-socket workstation (I've helped lots of people in that regard), but it's often no trivial task for
a solo professional to work out what to buy, as usually it's not their field of expertise.

Recent mainstream chipsets like Z97 don't have enough PCIe lanes to support high-end gaming,
and I thoroughly dislike the storage compromises Z97 imposes if one wishes to have more than
one GPU. It's pretty obvious the 5820K is a good compromise for 2 GPUs on a board with a PLX
chip, though personally I'd still prefer the 5930K to lessen such issues as much as possible.
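To put those lane counts into bandwidth terms, here's a back-of-the-envelope calculation (a sketch, using the standard 8b/10b line encoding for PCIe 2.0 and 128b/130b for PCIe 3.0):

```python
# Rough per-lane and per-slot PCIe bandwidth, accounting for line encoding.
# PCIe 2.0: 5 GT/s with 8b/10b encoding; PCIe 3.0: 8 GT/s with 128b/130b.

def lane_gbps(gen):
    """Usable bandwidth of one lane in GB/s (decimal), one direction."""
    if gen == 2:
        return 5.0 * (8 / 10) / 8      # 0.5 GB/s per lane
    if gen == 3:
        return 8.0 * (128 / 130) / 8   # ~0.985 GB/s per lane
    raise ValueError("unsupported generation")

for lanes in (8, 16):
    print(f"PCIe 3.0 x{lanes}: {lane_gbps(3) * lanes:.1f} GB/s per direction")

# The 28-lane 5820K can feed two GPUs at x16/x8; the 40-lane 5930K manages x16/x16.
```

Even the x8 figure (~7.9 GB/s) is rarely the gaming bottleneck today, which is why the 5820K's 28 lanes are a workable compromise for dual-GPU builds.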

Saying it's ridiculous to spend $3K on a gaming system is so incredibly silly. If I want whacko
frame rates for the smoothest gaming experience at the highest possible resolutions &
detail, and I can afford to pay for it, why the heck shouldn't I be able to buy that? Supply &
demand. I applaud the fact we live in a world where such demands can be met, instead of
all having to wear the same clothing. I can't afford to buy such kit, but I'm glad there are people
who can & do as they help move the tech along, often in ways which help budget-starved
solo pros at the same time (I can see multiple 980s being very popular for AE users). What a
sucky world it would be if it were never possible to have things like the Veyron.

5% of gaming users? Maybe, but it's a heck of a lot more than 5% of revenue for many
store owners. Besides, lots of 'normal' gamers (to use your word) also like to be able
to process video, experiment with apps like Blender, etc. HT helps here, so the mainstream
i7 is a useful step up. Only DDR4 pricing is holding back 5820K adoption to the same people,
but that will change.

Fine if you don't want to push forward and explore the far horizon (or can't because you don't
have the money), feel free to stay safe in the mainstream, but that's no reason to suggest
others shouldn't push the boundaries of the tech world. What makes me laugh is the fact that
some aspects of 'mainstream' gaming we had a few years ago with X58 (lots of PCIe) are now
considered high-end. How did that happen?? To me, the modern PCIe compromises of Z97 feel
much more akin to the way P55 was setup compared to X58.

If I was younger, richer and had the free time, I wouldn't hesitate to put together a mega 5960X
multi-980 gaming rig with a bunch of screens and blow my mind with modded Skyrim. 8) I'm not,
and don't, but I'm glad there are people out there who are & do; more power to 'em.

Ian.

 
I don't think I've ever spent less than $3,000 on a gaming system (desktop) since 2003. My current build is going to run about $5,200. It is an expensive venture and something I have had to save for; however, I believe it is worth it if gaming with multiple GPUs at higher resolutions is your thing. For the heavy workload stuff I believe Xeons are the weapon of choice, though the 6-8 core enthusiast Intel processors offer somewhat of a middle ground for those who like to game and do heavier workloads.
 
Good for you! 8) Btw, an oc'd consumer 6-core is often about the same as a 10-core Xeon.
Xeons don't really shine until they're used on a multi-socket motherboard, or for
tasks that genuinely run better with a larger number of cores/threads even though the
clock is lower. One ironic side effect of a prosumer using an oc'd consumer chip: single-threaded
performance is better than any Xeon's, which can help a lot with the interactive aspects of tasks
like Maya, Pro/E, etc.

Ian.

 
Hi, does anybody know which test file is used for the Blender Cycles test?
Btw, Blender is at version 2.72, so testing with 2.68a is like testing on Windows 98.

Cheers, mib2berlin
 


They may use their own scene, but I know they've used the Mike Pan BMW scene in the past so it's quite possibly still that one. It's an old scene now.

For anything else, Blender 2.68a would probably be OK to test with, but given that Cycles is still under relatively heavy development, a lot has changed in 15 months.
 


I had a 2500K overclocked to 4.8, which I then replaced with a 2700K also overclocked to 4.8. The reason was that the 2500K hung up playing the latter stages of Empire TW when the calculations got too complicated. My 2700K was also starting to have problems. Some games are more than simple arcade shooters; they require high-powered GPUs to run at ultra settings while also taxing the CPU.

 
I run the 4690K @ 4.7 GHz on air with 2x 770 SC ACX in SLI. Plenty of CPU power for any SLI setup. Ideally, the 5820K is the CPU to buy today, especially with DDR4 memory.
 
Nice review.

I too am disappointed by Intel's choice to cut down the PCIe lanes on the 5820k.

Ideally, I am looking at building my next rig for the following purposes:

1) better-than-1080p gaming (REQUIRED)
2) near-HD video capture and PROCESSING (REQUIRED)
3) 3D / 4K / Oculus Rift gaming and possibly movie playback (DESIRED)
4) possibly feeding a 4K or 3D projection system or TV (DESIRED)

Keep in mind I am just starting to really look at hardware again.

I would love to see a comparison article, comparing the 5930, 5820 and possibly the 4790 and 4690 paired with a couple GTX 970s.

Some questions I have: is micro-stuttering still an issue with multi-GPU setups? How would G-Sync work or apply in these situations? And how about ~4K video, or 3D like the Oculus Rift?

I would like to see a comparison of what value these varied CPUs, platforms, and PCIe lane counts might bring to bear in terms of higher-end, higher-quality gaming on a 27" monitor or Oculus Rift setup.

I also place a high value on future proofing.

Thanks..
 


Take a look at this article:

http://www.techpowerup.com/reviews/NVIDIA/GTX_980_PCI-Express_Scaling/
 


Hey thanks for the reply.
I remember looking at PCIe bandwidth a few generations ago (before purchasing a 9800GX2), and I had not considered the effect of the newer PCIe revisions yet this time. The article was definitely helpful. Thank you for that.

I still would like to see some testing with actual dual card configurations like I mentioned, not only for in game stress tests, but also running high resolutions, 3D, video capture, and video processing. I am wondering when bus traffic limitations might impact concurrent processes.

Thanks again.
 
Generally speaking, any wallet strapped to the ass of a person with a brain will dodge any Intel processor with an "X" on it like the bubonic plague... the use case is so ridiculously small, yet somehow they're so ridiculously popular! However, this one time, for a change, there's actually a difference, which is nice. Certainly not enough to convince me, but still nice.
 

The bigger thing for me is: why did they move the voltage regulators (the FIVR) inside the CPU?
That truly makes no sense, since VRMs get *very* hot, and that will add greatly
to the heat the CPU has to dissipate.

 


My bad, I misunderstood.
 