Broadwell-E: Intel Core i7-6950X, 6900K, 6850K & 6800K Review

Hello Chris or anyone responsible,

What happened to the "Print" button? I can't find it anywhere. Has it been removed for good?

Do note that some people like me always prefer to read or print in "Single Page" format. Hence, I'd urge you to reinstate the dropped Print function so that it motivates me to visit your site more often.

Thank you
 
My 6800K is running a stable 4.4 GHz overclock at 70-80 degrees under full load rendering movies, with a Corsair H115i.
 

I see. Migrating to a common platform is good, but that doesn't mean good old features should be abandoned.

I hope the Print function will be added back soon. I'll send an e-mail to the Purch and Tom's folks too.

Thank you.
 
Managed to get an i7-6900K stable at 4.4 GHz with 1.35 V.
At 4.5 GHz it choked a lot, even at 1.45 V.
So I would guess its max frequency is 4.4 GHz.
 
It occurred to me that Intel really created a lot of confusion by calling the Extreme processors i7s. Why not i9s, or maybe even i8s, since they lack a GPU (even numbers could denote that)?
 


Three years of 3.0 to 4.0 GHz boost clocks is getting old. That's all I'm saying.
 


Clock frequency alone? Who said anything about alone? If Intel were less interested in onboard graphics on K CPUs, we most likely wouldn't be having this discussion.
 

You said you wanted a 5 GHz CPU. Having a CPU pipelined to hell for the sake of achieving higher clock frequencies does not necessarily yield any meaningful net performance improvement: the FX-9590 still gets beaten by Intel's Pentium and i3 in lightly threaded code due to piss-poor IPC, while drawing as much as four times the power. Few people are genuinely interested in going back to the days of ~200 W CPUs at home or in the datacenter. For those who are, IBM has the POWER8 and Intel has its 16-core multi-socket Xeons.

The only way we'll see significant performance increases in the future is extra cores, but mainstream software is either lightly or poorly threaded, which translates to little if any performance gain in most applications. Intel is not going to cannibalize future CPU sales by making affordable multi-core CPUs available before average people actually need them.
 

You specifically said "I want 5 GHz", with no other info. The point being made is that simply demanding higher clocks is pointless, because clock speed on its own is meaningless. Also, considering that most Intel CPUs can be overclocked to at least 4.5 GHz these days, a 5 GHz CPU is only about 10% better than a current overclocked CPU anyway.
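
For what it's worth, that "about 10%" figure checks out (a quick back-of-the-envelope calculation, assuming performance scales linearly with clock at equal IPC):

```python
# Assuming performance scales linearly with clock speed at equal IPC,
# a 5 GHz part vs. a 4.5 GHz overclock is only ~11% faster:
current_oc_ghz = 4.5
target_ghz = 5.0
gain = target_ghz / current_oc_ghz - 1
print(f"{gain:.1%}")  # 11.1%
```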

Regarding on board graphics: Intel already makes -K CPUs without integrated graphics in the form of LGA 2011 CPUs.
 

And Intel's LGA2011 CPUs aren't magically any faster on a per-core basis than the LGA115x CPUs; they only add cores, which go grossly under-used in 99.9% of mainstream software.

Single- or lightly-threaded applications may even take a performance hit from the increased latencies of the larger caches and more complex uncore circuitry.
 


Have to say I wouldn't mind a higher thermal socket limit if it meant one could have a top-end consumer CPU with lots of cores and a genuinely higher clock, but Intel has made it very clear via its pricing that even if it did or could make such a thing, it would be priced too high anyway, deliberately so. Need I remind everyone yet again that the 3930K was an 8-core with 2 cores disabled? We could have had a proper consumer high end a long time ago, but it didn't happen because there was simply no need for Intel to release an 8-core (a perfect example of a lack of competition slowing down progress) when back then AMD's best products couldn't even catch Intel's mainstream 4-core SB.




And if he doesn't want mad cooling, just get a used 2700K, an ASUS M4E, and a simple TRUE with an NF-P12: 5 GHz every time, with good temps and little noise, no problem. I've built seven so far.




Remember though, it can make a difference in some games that are coded for it, and the relevant CPUs do have more PCIe lanes, allowing for more flexible CF/SLI setups without compromising storage options. Plus, I think a lot more people these days meddle with home video/imaging, which with modern apps does make decent use of multiple cores (e.g. HandBrake, GIMP, etc.).

Also, there's the continued oddity of the venerable i7-4820K, a 4-core with no onboard gfx in a high-thermal-limit socket (which means huge headroom for oc'ing) that has a full 40 PCIe lanes, and thus would provide a faster platform in some gaming/pro scenarios than a 5820K or 6800K because it doesn't have to compromise on SLI/CF setups.

I've moaned about the PCIe crippling before (because I think it stinks). Apart from the later entry-level 6-core CPUs being affected in this way, it's a pity that the later products don't include a 4-core option (with full PCIe) either, because that would again allow those who want an absolute high clock (while benefiting from the newer IPC) to get something decent without running into the thermal ceiling so quickly, in the way that now affects X99 CPUs so badly. I hate the new naming style, but something like a 4c 6850K with the full 40 PCIe lanes would be kinda neat; I'd much prefer that over Skylake, which has to survive on mbds with nowhere near enough PCIe lanes and a lower thermal socket ceiling (so sick of seeing x8/x8 on Z170 mbd specs, caveats about lane allocation vs. storage choices, etc.).

And note that Intel already makes XEONs like this, e.g. the E5-2637 v4 and E5-1630 v4; the latter in particular is rather interesting: well priced (cheaper than the 6700K!), 40 PCIe lanes, a max Turbo of 4 GHz, and the same high thermal socket limit (140 W). I bet an unlocked consumer version of this would be rather popular (compared to Skylake it would be a bit like the way Nehalem was so attractive at its initial launch on X58), and its existence proves Intel could make such a thing if it wanted to; the product pretty much already exists.

But instead we get the crippled and confusingly named 6800K. *yawn*

Ian.

PS. Shameless plug! 8)
 

And what percentage of all games currently available would that be? Well under 1%. It will take a whole lot more than 1% of the market to justify CPUs with more than four cores in the mainstream. The number of people who are deep enough into heavy video editing to care about LGA2011 either already own an LGA2011 system regardless of cost or cannot justify the cost for their usage.

Lack of competition is a weak excuse when there is simply next to no demand in the mainstream for CPUs with more cores. Launching affordable 8C16T parts before they become a mainstream necessity would drastically reduce Intel's future income. AMD will probably do the same with Zen by pricing the 8C16T variant somewhere between the i7-6850K and i7-6900K.

Competition is not much of a factor in niche markets, and high-core-count CPUs will remain a niche market until a large proportion of mainstream applications become massively threaded with significant performance benefits. I do not expect either of those things to happen any time soon: it's been over six years since quad-core and 2C4T CPUs became budget mainstream, and the vast majority of software is still heavily, if not entirely, dependent on a single thread.
 
I recently wrote a utility program where almost all of it (99%) was parallelized. I was quite miffed that I was only seeing a 6x speedup on a 4C/8T Xeon 5620 compared to running it on a single thread.

Amdahl's Law was kicking in. 🙂
[Image: Amdahl's Law speedup curves]
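
For anyone who wants to check the numbers, here's a minimal sketch of the bound Amdahl's Law puts on that speedup (plain Python, my own illustration rather than the original utility):

```python
# Amdahl's Law: for a workload whose fraction p is parallelizable,
# the speedup on n workers is S(n) = 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# 99% parallel code on 8 hardware threads tops out near 7.5x before
# counting synchronization overhead or HT inefficiency, so an
# observed ~6x is roughly in line with the law.
print(amdahl_speedup(0.99, 8))      # ~7.48
print(amdahl_speedup(0.99, 10**6))  # ~100: the ceiling is 1 / (1 - p)
```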


But yeah, MAINSTREAM SOFTWARE... In fact, even that utility could have benefited from a higher clock speed and - perhaps, I'm not 100% sure - additional instructions and specialized circuitry that have been added in later CPUs.
 


What is your evidence for this? ED uses more than four cores, and many other games do too. And it's where the software side will inevitably move, and has to move, in the future. But one cannot expect the software to follow if the hardware isn't there to support it. Your argument is the wrong way round: games companies are not going to evolve the software if they can't see a hardware market that can support such efforts.

Please point me to any recent roundup of CPU usage/etc. issues for all currently available games. Indeed, Tom's hasn't done an article like this in ages.

When I was building a 2nd gaming PC for ED, I chose used X79 parts precisely because reviews showed ED does benefit from the extra cores, and I know from other reviews that there are plenty of games which now make use of more than 4 cores. I don't for a second believe it's as low as 1%. Cite your sources for that, please.





I'm not talking about heavy editing; I mean just what anyone meddling with home video is doing these days, which is bound to be HD, and pretty soon 4K. Heck, plenty of home users are trying to do this stuff with mainstream systems, which btw is why it would make sense for there to be a 6c/8c option in the mainstream segment. Intel could do it, but they choose not to.





False logic chain. Intel's the one that didn't bother to seriously improve its own products in any single step since SB, because there was no competition from AMD, which has resulted in loads of people feeling little need or value in upgrading. You're looking at it from the wrong end of the chicken/egg connection.





You have no evidence for that at all. You're speculating on a potential path through history which didn't happen.

Intel could have released an 8c 3930K; they chose not to, because they could milk more money in the meantime while AMD had nothing to offer. Fine, but then they just let it all stagnate: no improvements to the X79 chipset, rubbish improvements to mainstream CPUs, garbage interface material hobbling oc'ing with IB, crippled PCIe on later top-end CPUs. They've created a lack-of-demand mess of their own making. Don't take my word for it; read the reviews of every Intel release since SB! Conclusion after conclusion mentions things Intel has done that, one chip at a time, harm the attractiveness of this market for enthusiasts, overclockers, etc. Read the comments on those reviews from endless people who say they see no need to upgrade from their existing SB/IB/SB-E/IB-E, and in many cases even X58, systems.





Ah, the old niche-market argument. The problem with that is that in retail, the high-value items are where the real profits lie. One store owner told me it's the big stuff that keeps his shop open, even though he doesn't sell that many of them. The cheaper parts carry little margin, especially storage.





But who is going to bother coding in that way if the relevant products are so insanely expensive? 😀





Well, it's certainly not going to happen any time soon as long as Intel produces stuff that offers little incentive to upgrade, or costs more than an entire mainstream machine, while continuing to confuse purchasing decisions by crippling PCIe provision, etc. It's been years since SB-E, and yet the relevant CPUs have barely changed in I/O provision, which is bizarre when the top-end model now has 66% more cores.

This is a computer-tech chicken-and-egg catch-22, and it's entirely Intel's fault that we're now stuck. I just don't know why people in the tech world are so accepting of Intel's continued meddling with the value of what one receives for the money paid. Crippled PCIe is just one example, hence why I continue to contrast the 4820K with later abominations like the 5820K and 6800K. The massive price spike with the 6950X is another.

That XEON I mentioned proves they could produce something totally affordable with a decent spec for both X99 and the mainstream, but they don't. Likewise, the E5-2640 v4 proves they could make a 6c or even 8c part for the mainstream segment that still sits within the same power limits as Skylake, but they don't. I didn't buy Skylake for my 2nd gaming PC because it was just stupidly expensive; ditto the ridiculous price hike in DDR3 before the launch to make initial high DDR4 pricing look less outrageous (and now, miraculously, it's all dropped way back down again, what a surprise).

Anyone know any sales numbers for the 6950X?

I expect you'll continue to disagree with me, which of course doesn't bother me (and I get why someone would come to the conclusions you have), but siding with Intel in this way won't stop the PC market from continuing to degrade if Intel doesn't change direction and actually make stuff that's worth the money.

PC tech is becoming too expensive, so fewer people will buy into the high end; meanwhile there's no improvement in mainstream options even though the XEONs prove Intel could make them, so the viable hw market continues to decline and coders have no incentive to support more cores.


Intel, build it and they will come. Keep going as you are, and the market will just get worse and worse, while the value of used X58/P55 boards keeps climbing, because that's one of the few oc'ing options that's actually still a challenge for those who like doing it for fun.

Ian.

 

There is nothing surprising about that: if your code spends 16% of its time waiting on synchronization or interprocess communication of any form from other threads, your maximum scaling is 6x single-core performance (the inverse of your wait+overhead fraction), regardless of how many threads you split the load across. If you want code to scale to 100 cores, you need to reduce your overheads to less than 1% of execution time, including any performance degradation from cache thrashing between threads and bus snooping across multi-socket/multi-chassis systems.

Making code scale smoothly across a large core count requires very careful planning. Most everyday algorithms require a complete re-imagining to gain any sort of scaling. Some things, like parsing algorithms, are extremely difficult to parallelize since correctly interpreting the next character in the stream requires context from everything else before it.
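
As a toy illustration of that last point (a contrived Python sketch of my own, not anything from this thread): even counting delimiters correctly depends on quote state accumulated from every prior character, so you can't just split the input and process the chunks independently:

```python
# Contrived example: count commas that are *outside* quoted sections.
# The quote state at any position depends on all prior characters, so
# naively splitting the input into chunks gives a wrong answer.
def count_delims(text):
    in_quotes = False
    count = 0
    for ch in text:
        if ch == '"':
            in_quotes = not in_quotes
        elif ch == ',' and not in_quotes:
            count += 1
    return count

data = 'a,"b,c,d,e"'
print(count_delims(data))  # 1: only the first comma is outside quotes

# Splitting at the midpoint and summing per-chunk results is wrong,
# because the second chunk has no idea it starts inside quotes:
mid = len(data) // 2
print(count_delims(data[:mid]) + count_delims(data[mid:]))  # 3, incorrect
```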
 

It is basic logic: PC sales are dropping because four-year-old PCs are still more than good enough for the vast majority of people running the vast majority of their software, so there is little incentive to upgrade regardless of what new CPUs become available, at any price. For a company that wants to maximize profit over the long term, it makes no sense to make more processing power available at a lower price until volume more than makes up for the lower margins. Doing so would also hurt its low-end server and mid-range workstation margins, where the bulk of single-socket LGA2011 sales are.
 
zOMG. Are we really having this debate in 2016?

Look, the reason Intel doesn't sell >4-core CPUs into the mainstream market is that mainstream software generally makes poor use of them. Some of that is lazy programmers, but there are also lots of tasks that don't parallelize well, and those that do are generally better served by running on GPUs.

So, I hate to break it to you guys, but single-thread performance still matters! Adding more cores helps some tasks, but a linear increase in single-thread performance will result in a linear increase in the speed of all tasks!

So, all we're saying is that we were hoping for a CPU with Broadwell-level IPC @ 5 GHz. Of course, it wouldn't be terribly power efficient, but I hoped that Intel might have tuned their 14 nm process significantly, since the introduction of the original Broadwells.
 

As you have seen with every generation from Sandy to Skylake and will likely see again with Kaby and Cannon, that ain't happening. Intel is throwing every timing margin gain they get at increasing the execution dispatch port width, making the re-order buffer deeper, enhancing branch prediction, increasing cache and register file concurrency, shuffling the execution ports' functional assignments and various other IPC enhancements across the pipeline.

Increasing clock frequency is much lower on the priority list and gets whatever timing margins are left after architectural tweaks, which is how we end up with the occasional regressions in typical achievable overclocks.

At the end of the day, a 7% IPC improvement is worth about the same as a 300 MHz higher clock at 4 GHz, usually for a fraction of the power draw increase.
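
A quick sanity check of that equivalence (simple arithmetic, assuming performance scales as IPC × clock):

```python
# Performance ~ IPC * clock, so a 7% IPC gain at a fixed 4 GHz is
# roughly equivalent to running the old core ~280 MHz faster:
base_clock_ghz = 4.0
ipc_gain = 0.07
equivalent_mhz = base_clock_ghz * ipc_gain * 1000
print(f"~{equivalent_mhz:.0f} MHz")  # ~280 MHz, i.e. about 300 MHz
```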
 
Actually, Broadwell broke a trend by regressing in clock speed. This, coupled with the massive launch delays and the initial rollout of the lowest-speed SKUs first, had me hoping their 14 nm process had kinks that they might've worked out in time for the launch of Broadwell-E.

Also, you're neglecting the fact that the big architectural improvements tend to come in the second generation of a given node. There was legitimate reason to expect some clock speed increase in Broadwell.

In any case, I guess we get to wait and see what the Skylake-E parts will look like.
 

Ivy Bridge consistently overclocked 200-300 MHz lower than Sandy despite being 22 nm vs. 32 nm, and the first-generation Haswell failed to impress in the overclocking department by hitting frequencies another 100 MHz lower than Ivy.

As far as "big architectural changes" are concerned, I do not remember any. Making buffers deeper and wider, adding an execution port, shuffling instructions between ports, adding a register file bypass path, etc. are all relatively minor architectural tweaks and those account for the bulk of what has changed since Sandy.
 
I wasn't talking about overclocking. But overclocking on Ivy was worse because they switched from solder to an inferior TIM under the heat spreader. Also, per unit area, the heat dissipated by Ivy actually increased, further compounding the problem.

I assume Haswell used the dividends from the feature shrink to improve IPC, like you were saying.

Broadwell being a tick, the natural assumption would be that it'd clock higher. Clearly, they had issues with 14 nm that weren't properly sorted until Skylake.
 