Overclocking Intel’s Core i7-7700K: Kaby Lake Hits The Desktop!


AJC9988

Commendable
So, the best way to compare a new generation of chip that can be used on the same chipset and socket would be to compare equivalent scenarios in three categories: 1) a clock-for-clock comparison to show IPC improvements from the architectural changes, 2) a voltage-for-voltage overclocking comparison to see how much more speed you can achieve at the same voltage, and 3) a heat comparison, meaning overclocking toward the same thermal target while remaining completely stable. Breaking the comparison into these categories examines the expected performance gains of the new chip over the previous generation and gives consumers a better idea of whether the upgrade is worth it. For heat, it is best to first match temperatures under a uniform test, like Prime95 Small FFTs, then examine temperatures for different uses of the CPU (different tasks, to see whether the delta over ambient drops for, say, encoding workloads). As it stands, your article leaves the largest questions about the new chip's value as an upgrade up in the air. If you still have the chip, please consider expanding this review in that manner (so long as your NDA allows it).

Edit: After reading other comments, repeating the review on a second or third board to control for firmware inefficiencies would also be nice. I know I've asked for a lot here, but this is the type of information I enjoy comparing. Thank you for taking this into consideration.
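For what it's worth, the clock-for-clock leg of that comparison reduces to simple normalization. A minimal C++ sketch, using made-up placeholder scores rather than measured data:

[code]
#include <cstdio>

int main() {
    // Hypothetical benchmark scores with both chips locked to 4.0 GHz
    // (placeholder numbers, not measured results).
    const double score_skylake  = 1000.0;
    const double score_kabylake = 1005.0;
    const double clock_ghz      = 4.0;

    // Per-clock performance is score divided by frequency; at a matched
    // clock the IPC delta reduces to the plain score ratio.
    double ipc_delta = (score_kabylake / clock_ghz) / (score_skylake / clock_ghz) - 1.0;
    printf("clock-for-clock (IPC) delta: %+.2f%%\n", ipc_delta * 100.0);
    return 0;
}
[/code]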
 

InvalidError

Titan
Moderator

A volts-for-volts comparison is not really possible, since the voltage required to hit a given clock frequency can vary by more than 100 mV from the best to the worst chip of a given model, as can the power at a given frequency and voltage. You'd need to repeat the test across several samples to get an idea of what the yield of each CPU model actually looks like, and compare those statistics instead of individual chips.
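To put numbers on that, here's a minimal C++ sketch of the per-model statistics you'd want to compare; the Vcore values are hypothetical, just spread across the ~100 mV range mentioned above:

[code]
#include <cmath>
#include <cstdio>

int main() {
    // Hypothetical Vcore (volts) needed for a fixed 4.8 GHz across six
    // samples of the same SKU - invented numbers, not real silicon data.
    const double vcore[] = {1.280, 1.296, 1.312, 1.330, 1.344, 1.376};
    const int n = sizeof(vcore) / sizeof(vcore[0]);

    double sum = 0.0, sq = 0.0;
    for (int i = 0; i < n; ++i) {
        sum += vcore[i];
        sq  += vcore[i] * vcore[i];
    }
    const double mean  = sum / n;
    const double stdev = std::sqrt(sq / n - mean * mean);

    // Comparing these distributions across generations - not one chip
    // against one chip - is what a fair volts-for-volts test needs.
    printf("mean %.3f V, stdev %.3f V across %d samples\n", mean, stdev, n);
    return 0;
}
[/code]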
 

Crashman

Polypheme
Former Staff
IPC is unchanged because the architecture is unchanged. Heat varies between supposedly identical processors because some interface with the heat spreader better than others, and some have hot spots from minor manufacturing variances that aren't severe enough to stop the processor from working. So the best we can actually do is compare more boards. Doing any better would require getting batches of CPUs and comparing averages, and that's only after finding the best motherboard from which to extract those averages.

Sorry to say, finding an impeccable answer is both simpler and more complicated than your request suggests.

 

Olle P

Distinguished
It's come to my attention that the key performance difference between Skylake and Kaby Lake wasn't even tested here: the iGPU!
For a fair comparison, the interesting benchmarks are those run without a discrete graphics card.

I get that, but my point is that parallelism is of little to no interest when comparing CPU performance.
The raw computing power for running a single thread, with no parallelism whatsoever, is "expected to" (courtesy of Moore's Law) increase exponentially over time as new generations of CPU are developed. That's obviously not what's happened with Intel's CPUs since Sandy Bridge. The increase in (stock) clock speed combined with increased computing (not to be confused with threading) efficiency should have resulted in a more than eightfold performance difference between the 2700K and the 7700K.
 

TJ Hooker

Titan
Ambassador

I think you're confusing instruction level parallelism with thread level parallelism. ILP can allow a CPU to execute multiple instructions from a single thread concurrently, increasing instructions per clock (IPC) and improving single threaded performance. In other words, it contributes to "raw computing power", along with clock speed.

Also, Moore's Law says nothing about (single-threaded) CPU performance. It simply states that transistor density is expected to double every two years or so. Simply throwing more transistors at a CPU doesn't necessarily make it faster. I'm not sure how you arrived at an 'expected' 8-fold increase in performance from a 2700K to a 7700K.
 

InvalidError

Titan
Moderator

Instruction-level parallelism does matter, since that's what enables the CPU to execute multiple instructions from the same thread per cycle - it's called superscalar execution and has been present in x86 chips since the original Pentium.

Intel's current CPUs have to look up to 192 instructions ahead with all that speculative execution (branch prediction) to keep about four of their seven instruction issue ports busy most of the time. Even if you somehow increased all execution and scheduling resources tenfold without impacting timing closure (achievable clock frequency), branch mispredictions would still limit how many instructions can practically be executed out of order - you might only get another 20% of single-thread performance. That's 20% more performance for 10X the cost.

The rapidly increasing difficulty of extracting more single-thread IPC is why most high-performance computing is moving to simpler CPU architectures (lower IPC) with massive core counts instead - HPC developers are used to writing software with massive thread-level parallelism.
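To make the ILP point concrete, here's a hedged C++ sketch (a real compiler may transform both loops, so treat it as an illustration of the idea, not a rigorous benchmark). A serial dependency chain retires at most one add per cycle, while independent accumulators let the scheduler keep several issue ports busy in the same cycle:

[code]
// One long dependency chain: every add needs the previous result, so
// the out-of-order core cannot overlap them no matter how many ports it has.
double dependent_sum(const double* a, int n) {
    double s = 0.0;
    for (int i = 0; i < n; ++i)
        s += a[i];
    return s;
}

// Four independent chains: the adds can issue in parallel across the
// execution ports - this is instruction-level parallelism at work.
double independent_sum(const double* a, int n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    int i = 0;
    for (; i + 3 < n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; ++i)  // leftover elements
        s0 += a[i];
    return s0 + s1 + s2 + s3;
}
[/code]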
 


Makes sense.

I'm so 1440p-centric these days. I wasn't even thinking about the CPU constraints / bottlenecking at lower resolutions.
 

Olle P

Distinguished
That's simple. It spans a time frame of six years. With an expected doubling in performance (at a fixed cost) every two years, that's 2^3 = 8 times the performance!

Perhaps it's not that far off anyway, provided the performance of the iGPU is added to the performance of the CPU cores.
After all, the HD 630 (in the 7700K) is much more potent than the HD 3000 (in the 2700K).
LATE ADDITION: While the HD 630 is more potent than the HD 3000, the improvement over the HD 530 is still smaller than I expected.
 

SCANNERMAN777

Honorable
I see Intel is milking the increments for all they can get, as usual. Some things never seem to change. The chip shows promise as a better gaming chip at best, but the overall improvement over Skylake hardly makes the expenditure worth the extra trouble or cash. An Intel Classic at best. Perhaps in five more years they might come up with something worth getting. I wouldn't hold my breath.
 

TJ Hooker

Titan
Ambassador

Nobody ever claimed or expected it to be a major improvement over Skylake. It's been years since upgrading from one generation to the next was worthwhile, and I don't see that as an especially bad thing. It will probably cost the same as Skylake and perform a little better, so for anyone building a new system, or waiting on the cumulative improvements of several generations, it will be a good buy. Anyone else can safely ignore it. How is that milking?
 

SCANNERMAN777

Honorable


Right on the money~!
 

weilin

Distinguished


Intel hasn't targeted people who own the previous 2-3 generations in many years now... Kaby Lake is aimed at Nehalem/Westmere and Sandy Bridge owners. For them, this should be a 30-40% performance bump, which would be worthwhile for some, if not many.

Also, the driving force nowadays is no longer CPU power but I/O.
Ivy Bridge (Z7x chipsets) brought native USB 3.0 support (and the option to include Thunderbolt 1).
Haswell (Z8x chipsets) brought more USB 3.0 ports (2 to 4) and more SATA 6 Gb/s ports (2 to 6) (and the option to include Thunderbolt 1).
Broadwell (Z9x chipsets) added M.2 support (and the option to include Falcon Ridge for Thunderbolt 2).
Skylake (Z1x0 chipsets) added chipset PCIe 3.0 (and the option to include Alpine Ridge, which brings Thunderbolt 3, USB 3.1 with Type-C, and HDMI 2.0).
It will be interesting to see what the Kaby Lake chipsets bring to the table...

I own a Haswell (Z97) platform. My M.2 slot is limited to PCIe 2.0 x2, I have no USB 3.1, and no [strike]USB-C[/strike] USB Power Delivery support. My next upgrade will hinge not on computing power but on support for the new interfaces and standards, as I'm withholding my money for the first platform to come with [strike]native USB-C support[/strike] the complete feature set integrated into the chipset. This will become especially important once I upgrade to a USB-C phone...
 

Karadjgne

Titan
Ambassador
Here's my issue: 60 Hz 1080p monitors. They dominate monitor usage by a huge margin. Then add GPUs capable of minimums higher than 60 fps. After that, all you need is a CPU capable of keeping up, and that's Sandy/Ivy. My i5-3570K and i7-3770K are more than enough to handle any GPU on the market, and strong enough to maintain more than 60 fps at any setting. So what if Skylake is ~20% better? That's beyond what I need, basically useless. It's all top-end performance, and the top end is seldom used when every game I own shows ~50% CPU load.

Maybe if I had 1440p/4K monitors and were pushing a pair of 1070s/1080s, Skylake/Kaby Lake would make a visual difference, but honestly Intel's performance gains since Sandy Bridge have been far too small to warrant an upgrade.
 

Crashman

Polypheme
Former Staff
They all have native USB-C support. USB-C is just a connector; it can go on any USB port.
Really, just think about this. USB-C can currently carry Thunderbolt, USB 3.0, or USB 3.1, and thanks to a few spec updates, USB 3.0 is now officially called "USB 3.1 Gen 1".
I think what you're actually looking for is native support for USB 3.1. But all the add-on hardware used to raise the current limit on boards that use USB 3.1 add-in controllers for Type-C will still be there on boards with native USB 3.1 in the chipset, because that's power, not data, and it's provided by the motherboard, not the chipset.

 


I might be mistaken, but I don't think increasing the resolution really impacts CPU usage. It should be about the same for any given frame rate at any resolution; you'd mostly just be increasing GPU load.

 

weilin

Distinguished


I may have oversimplified my requirements down to just the connector; I can see how that could confuse others... I've more or less associated key features like USB PD with Type-C, because I don't think I'll ever see that standard implemented on a Type-A connector. That's my fault; I'll go edit my post.

What I'm looking for is USB 3.1 Gen 2, USB Power Delivery 2.0, and the USB Type-C connector. The other alternate modes (audio, DisplayPort, HDMI, Thunderbolt, etc.) are nice to have but not required. Yes, I realize a third-party controller is almost the same thing, but having it native in the chipset means all of this would share the same PCIe lanes as the native USB 3.x controller, leaving more PCIe lanes open for future expansion.

In the meantime I'm still looking for a third-party PCIe card that supports the features above for my Z97 board. It's surprisingly hard to find a PCIe card supporting the PD spec... You wouldn't happen to know of any, would you?
 

Karadjgne

Titan
Ambassador
As far as I know, many things in a game are CPU-bound, more so than GPU-bound, and maintaining the same settings requires more from the CPU at higher resolutions. Granted, most of the load lands on the GPU, but for things like shadows, movement, and viewing distances it's the CPU telling the GPU what to draw, and with 4x the pixels that's going to increase CPU activity. Even more so for AMD GPU owners, who run PhysX and the like on the CPU. Any decent CPU can keep up with the GPU needed for 1080p, but to keep up with the video power needed for 4K you'll need a CPU with some balls.
 

SCANNERMAN777

Honorable


Probably the wiser choice. I waited for X99 and I'm still using my X58. Now I'm waiting on a cooler that will work with a Xeon.
 

SCANNERMAN777

Honorable


Hear, hear. I recognize that tune only too well. It's one of the reasons I said on another thread that Intel is milking the increments for all they're worth. I've never really been able to fathom how Intel touts these piddly 2-5% improvements each year like this season's singing savior, then charges outrageous prices for them. A: no competition. Of course people buy, because everybody wants the latest and "greatest". But seriously, how great can a 5% (tops) improvement be? This is the reason I've kept my X58 platform so long.
 

InvalidError

Titan
Moderator

The small incremental performance improvements are an inevitable consequence of the largely linear (sequential) nature of program code: every CPU architecture eventually reaches a point where it cannot extract any further meaningful single-threaded performance from a thread. Intel reached that point five years ago, and AMD is catching up.

The same thing will happen to AMD, ARM, MIPS, and everyone else, regardless of instruction set. The only way forward beyond that point is multi-threading, but most developers avoid threading like the plague, so CPU manufacturers are in no hurry to meet nearly nonexistent demand and focus on fewer, incrementally faster cores instead of massive core-count increases.
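The ceiling InvalidError describes is usually quantified with Amdahl's law (my framing, not his): with a parallel fraction p of the work and n cores, speedup = 1 / ((1 - p) + p/n). A quick C++ sketch of how fast the returns diminish:

[code]
#include <cstdio>

// Amdahl's law: the serial fraction (1 - p) caps the speedup no matter
// how many cores you throw at the parallel part.
double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    // Even with 90% of the work parallelized, 16 cores only buy ~6.4x.
    const int cores[] = {2, 4, 8, 16};
    for (int n : cores)
        printf("%2d cores: %.2fx speedup\n", n, amdahl(0.90, n));
    return 0;
}
[/code]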
 

Karadjgne

Titan
Ambassador
Believe it or not, AMD has been far more innovative about CPU design than Intel. AMD had 6-core and 8-core general-use CPUs years ago; it took Intel's top-end LGA 2011 line to match them core for core. If only developers had run with thread usage and actually used those 8 cores, instead of concentrating on single-core performance in games, things between AMD and Intel would be vastly different. However, they programmed for the minimum core count people had, which was 2, and not the maximum, which was 8, thereby giving everyone decent access to games. Kinda sucks AMD got shafted by programming greed.
 

InvalidError

Titan
Moderator

That is much easier said than done. Tons of everyday algorithms and code structures don't thread well. If you have to put mutexes or other synchronization objects around high-traffic constructs, you lose tons of multi-threaded scaling to wait locks and gain next to no performance in the end, sometimes even getting worse performance than if you hadn't bothered making that part of the code multi-threaded.
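A minimal C++ sketch of that wait-lock problem, with hypothetical workloads: the first version serializes every increment on one shared mutex, while the usual fix gives each thread private state and merges once at the end (a production version would also pad the per-thread slots to avoid false sharing):

[code]
#include <mutex>
#include <thread>
#include <vector>

// Version 1: a high-traffic lock. Every increment waits on the mutex,
// so extra threads mostly add contention instead of throughput.
std::mutex m;
long shared_counter = 0;

void contended_worker(long iters) {
    for (long i = 0; i < iters; ++i) {
        std::lock_guard<std::mutex> lock(m);  // wait lock on every pass
        ++shared_counter;                     // tiny critical section
    }
}

// Version 2: per-thread private state, merged once at the end, so the
// hot loop needs no synchronization at all.
long scalable_sum(int nthreads, long iters) {
    std::vector<long> local(nthreads, 0);
    std::vector<std::thread> pool;
    for (int t = 0; t < nthreads; ++t)
        pool.emplace_back([&local, t, iters] {
            for (long i = 0; i < iters; ++i)
                ++local[t];                   // no shared state, no lock
        });
    for (auto& th : pool)
        th.join();
    long total = 0;
    for (long v : local)
        total += v;                           // single merge at the end
    return total;
}
[/code]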
 

TJ Hooker

Titan
Ambassador

I'm not sure I'd call that particularly innovative. Going from single to dual core? Yeah, for sure. Beyond that, increasing core count as it becomes beneficial and practical just seems like a natural progression. And the thing is, most "general use" computing doesn't seem to benefit all that much from the extra cores in hex- and octo-core CPUs, at least not yet.
 

Karadjgne

Titan
Ambassador
Then why the sudden influx from Intel? It can't be from the success of BF4 and its multi-thread superiority. There used to be a few LGA 2011 chips and a few Xeons; now the list of E5 Xeons is huge, LGA 2011 has a bigger following than ever, and people are begging for hexa-core laptops. Kinda makes me think more and more that Kaby Lake is a hold-off release and Cannon Lake is going to end up a game changer to answer Zen+. Just reading the article and seeing those 'beta' results wasn't exactly inspiring.
 

TJ Hooker

Titan
Ambassador

The key there was "general use" computing. Xeons are server chips, and LGA 2011 parts are high-end chips for prosumers and enthusiasts. I wasn't aware of people begging for hexa-core laptops, but I can only assume they're after a portable workstation. I wouldn't call any of those "general use". For most consumer applications, including gaming, there's little to no benefit to going beyond a mainstream i7, and even that is into diminishing returns over an i5. And even when people buy LGA 2011 CPUs, it doesn't necessarily mean they actually need the extra cores. To be honest, while some of the people I see on this site talking about buying an HEDT CPU seem to have a legit use case (e.g. video editing), many just seem like the kind of people who aren't concerned with performance per dollar and have more money than sense.
 