Overclocking Intel’s Core i7-7700K: Kaby Lake Hits The Desktop!

Status
Not open for further replies.
Er, dude, if you were already thinking of upgrading to the 6700K, why wouldn't you pick the 7700K? They're going to be similarly priced.
 


I definitely agree with this. I really don't get why people are complaining. We get an extra 200 MHz basically for free with this launch, and we can now use the Skylake architecture to its full potential. That's exactly what optimization means.

And it makes an even bigger difference for non-overclockers. The base and boost frequencies were affected the most, and they're really high now. Just like the 4790K improving on the 4770K: many people didn't even bother to overclock the 4790K, since its base frequency was already so high.




Go and read the Devil's Canyon review and come back.
This launch isn't a Haswell (i.e., Skylake); it's a Devil's Canyon (i.e., Kaby Lake). And I remember that review being quite positive.
And, to be honest, we are seeing a 7.5% performance improvement, about the same as we saw with Haswell. It might come from IPC or from clock speed, but it's performance nonetheless.
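For what it's worth, clock speed alone accounts for most of that figure. A quick back-of-envelope (the stock clocks are Intel's published figures; treating performance as proportional to clock is my own simplification, reasonable here since IPC is essentially unchanged between Skylake and Kaby Lake):

```python
# Stock clocks in GHz: i7-6700K -> i7-7700K.
clocks = {"base": (4.0, 4.2), "boost": (4.2, 4.5)}

for kind, (old, new) in clocks.items():
    gain = (new - old) / old * 100
    print(f"{kind}: {old} -> {new} GHz (+{gain:.1f}%)")
# base: 4.0 -> 4.2 GHz (+5.0%)
# boost: 4.2 -> 4.5 GHz (+7.1%)
```

The ~7.1% boost-clock bump lines up with the ~7.5% performance figure, which supports the "it's mostly clocks" reading.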
 
Remember that there was a 40W range between resetting firmware by CLR_CMOS, and resetting it through the GUI, under a certain oddball circumstance, which is why we guessed that a properly developed firmware would produce power numbers somewhere between those extremes. I would have said "closer to the lower number", given the types of changes Intel stated and the small TDP delta within Intel documents, but I didn't want to "say too much" before this CPU was tested on a second board.
 


Answer here:

http://www.tomshardware.co.uk/tgpublishing-acquired-by-best-of-media-uk,news-24829.html

Even though I have oft chided THG for listening to Mom and the old mantra "if ya don't have something nice to say, just don't say it," I have to compliment Crash here ... not many would have noticed, or put in the T & E to track this down.

And the logic of the "it's only an incremental improvement" complaint escapes me:

a) So were the previous 5 generations
b) You walk into a car dealer and they have last year's model and this year's model on the floor. This year's model is the same price, has slightly better acceleration, and a few convenience features built in. You gonna buy the 2017 model or the 2016 leftover?

How often do you see folks even thinking about dumping builds they bought during Black Friday sales ... or 3...4...5 months ago ... and posting to ask whether it makes sense to upgrade? The 10xx series was 50% faster than the 9xx, and yet no one was running around yelling "sell / upgrade".

But no doubt, some did :)

 


"For true performance enthusiasts, the real news is that Intel’s new mainstream-socket enthusiast CPU will reach new overclocking heights."

Shill much? Stock is 4.5 GHz, lol, and Skylake had no issue reaching 4.8 GHz (often at stock clocks). These are not "new heights," considering SB reached this with ease, and again - SO DID SKYLAKE.

This is almost literally the same CPU sold again under a different name. There won't be a price drop because it's the same processor. In fact, Skylake seemed to be more efficient.

 


Yup that must have been when TH became a shill factory. Like I said - I actually don't have a big problem with this CPU. I have a problem with a once scientific and decisive website just doing blatant marketing.
 

Oh, now I see what the problem is: You want to believe that Skylake could easily reach 4.80 GHz at a reasonable voltage with air cooling. I can assure you that our tests have shown that readers can expect the Core i7-6700K to reach 4.5 to 4.6 GHz at 1.30V using big air or mid-sized liquid cooling. I can assure you that sites claiming 4.80 GHz from Skylake are using far more voltage, with either far lower stability standards or far greater cooling. And I can assure you that we're seeing an actual 200 MHz gain that Intel credits to improvements in the die process. Finally, I can assure you that Intel's "die process improvement" claims appear completely credible at this time.

The problem isn't that we've lowered editorial standards. The problem is that you believe we should lower our testing standards, so that 4.80 GHz Skylakes seem ordinarily achievable. You would then have us ignore that those lower testing standards would also make 5.10 GHz Kaby Lakes seem ordinarily achievable, so that we could falsely claim that the clock improvements aren't noteworthy.

It's good to finally understand your perspective, thanks.
 
Ahah, yes, "Don't buy this, wait for that instead." Clearly a marketing play. Intel will obviously make a bunch more money by selling you the 7700K in a month rather than the 6700K today, even though they'll be priced similarly and have similar manufacturing costs. Because if that were not the case, I'd be a shill for the customer, and that idea would just eat your soul :)

 
Please test Starcraft II! I know it's not optimized, but a benchmark result from Starcraft is a good representation of how a CPU performs in most games!
Thanks.
 


I really can't tell if you are purposefully trying to ignore his point and deflect, or if you genuinely don't understand what he is trying to say, because what you said had nothing to do with his main point, which I thought was obvious. It's not the data he has a problem with; it's the OPINION of the author (you). You gave a very positive review of this CPU even though it offers so little over the previous generation. I mean, how can you be OK with that? As a tech enthusiast you should be as pissed as everyone else, and yet you created this nice little puff piece about it. You obviously have no journalistic integrity at all.

Also, as a hardware site that relies on its fans for readers, maybe it's not such a good idea to insult them, especially when you do so in a way that makes you look foolish for either not understanding his main point or proving it to be true.
 
Easy answer: No. I understand that his point is to deride, insult, and call out things that simply aren't there, just because the article recommends the 7700K over the 6700K in overclocking. I understand that like him, you want me to be angry that the 7700K has only a 200 MHz advantage over the 6700K. But what would my excuse be to call the processor junk? That its overclocking advantage isn't large enough? Are you the arbiter of "large enough"? Is he? What you're clearly not seeing is that any improvement is just that, an improvement. It overclocks better.

It's not as if the article told owners of the 6700K to ditch their working processor and buy a new one. In fact, it expressly said the opposite, to only replace your 6700K with a 7700K if your 6700K has already failed.

The article stated that more motherboards needed to be tested, and would be tested. We could say that not having more motherboards tested in the first piece is a problem, but then again, not getting the article out quickly would also have been a problem. So we published the first piece, and then an update. That's not a compromise of integrity; it's a concession to expediency. In case you missed it, here's the first update:
http://www.tomshardware.com/news/intel-core-i7-7700k-kaby-lake-overclocking-update,33119.html

You see, the problem with that nastiness is that it feigns outrage over problems that don't exist. The 7700K isn't a downgrade, and Intel isn't significantly revising its price structure (at least as far as anyone knows). So if you were hoping to buy and overclock a 6700K, you'll win by getting the 7700K instead. Anyone who calls that a loss should probably consider their own perspective.

 

Excellent summary. I did not feel that the article was suggesting I go out and buy something, at all; just that if I were already planning to buy, at what would likely be the same price, it made sense to get the 7700K, particularly if I'm interested in overclocking.

 

Well, I have to admit you are right on this. I worded it incorrectly. Probably should have said "what I hoped for".
 


I have no intention to upgrade to Kaby Lake. Just pointing out a fact. I am more than happy with my Skylake CPU.
 

Many people are still more than happy with their Sandy/Ivy Bridge CPUs too, and this is hurting Intel's desktop and laptop sales.

At the moment, I have no foreseeable need or plan to upgrade my four-year-old i5-3470. Keeping the same PC for four years is something I have never done before. In the past, three years was usually where the upgrade itch kicked into high gear. Nothing resembling an itch yet, one year past its usual due date.
 
I have almost entirely killed the upgrade itch by asking "How can I make this system better for me by spending money on 'xyzzy'?" There's typically no appreciable improvement, so I skip it. That does nothing for the "What if" itch though, but I've been able to satisfy that one with some of the things I've done for my reviews, and I can then describe the effects. If I think it answers questions people might have, it might be worth an experiment or two.
 
I will say that neither manufacturer is giving PC gamers performance gains big enough to make upgrading worthwhile. This might be another console thing. IDK.

On the other hand... it's somewhat relative. Whether tick/tock or whatever, there was a time we would really need to wait 2-3 years for an upgrade to be worthwhile. I went from 8086 > Pentium II > Pentium IV > Q9300 > 2600K > 4930K. With those gaps I was happy. Some will nitpick having to have the best of each class/gen.

I will say, again, that outside of Hecta's I'm not seeing anything in three gens worth the money to change to for gaming. I even recently reused my leftovers, plus a new case/PSU, to rebuild my 2600K, and it's kicking butt at 1080p.
 

I've seen this complaint countless times, and I have to ask... is it really that bad a thing? Basically, for the last several generations, if you buy an i5-xxxxK or i7-xxxxK, it's a pretty safe bet that you can go at least 3-5 years without upgrading and still not have to worry about a CPU bottleneck while gaming. This sounds like a pretty good thing to me. Are people really so eager to dump a bunch of money into a new platform every few years that they want their CPU to rapidly become obsolete, forcing an upgrade?
 
Looks like they are reaching the limits of this design they have incrementally improved since 2010.

A bit more MHz, but lots of watts and heat in exchange. Sounds like what happened with AMD; it's not yet as bad as the old AMD CPUs, but it's starting to look like that. The two previous generations were smaller improvements with little increase in heat and power consumption.

Well, wait and see what they have, or what they don't have.
 

It isn't a limit of the design, it is a fundamental limit of conventional serial computing: there is only so much instruction-level parallelism that can be extracted from a single instruction stream before dependencies make it cost-prohibitive to attempt achieving any more than that. Every mature CPU architecture will eventually reach that point, and ARM won't be an exception.

The only thing that can fix that is software developers embracing multi-threaded programming but most developers want to avoid the extra complexity unless it is absolutely necessary, so multi-threaded programming adoption in mainstream applications and games has been very slow.

There isn't much motivation for CPU manufacturers to offer massively multi-threaded mainstream CPUs when massively threaded mainstream software is still nearly nonexistent after a decade of quad-core desktop CPUs.
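For concreteness, here is the shape of that limit in a toy sketch (Python, illustrative only; an interpreter won't exhibit the hardware effect, but the dependency structure is the same in any language). A reduction where every step needs the previous result offers the core no instruction-level parallelism to extract, while splitting it into independent partial accumulators is exactly the transformation that exposes more:

```python
def serial_sum(xs):
    # One long dependency chain: each addition needs the previous
    # value of acc, so a superscalar core cannot overlap them.
    acc = 0
    for x in xs:
        acc = acc + x
    return acc

def split_sum(xs, lanes=4):
    # Four independent chains: the hardware (or a compiler, or
    # separate threads) can run these additions side by side,
    # with a short merge at the end.
    accs = [0] * lanes
    for i, x in enumerate(xs):
        accs[i % lanes] += x
    return sum(accs)

data = list(range(1, 101))
assert serial_sum(data) == split_sum(data) == 5050
```

Out-of-order hardware performs the second transformation automatically when it can see the independence; the point above is that for a single serial stream, the window in which it can look for such independence has hard, costly limits.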
 
I think you're missing the point.
The hardware design limits how fast a thread can be executed.
Earlier, around 1980-2010, you could expect a substantial performance increase per core with each new generation of CPUs.
That's obviously no longer true! (The CPU manufacturers have somewhat tried to make up for it by adding more cores and threads to the CPUs.)
 

I didn't miss the point, you missed mine.

Hardware design cannot overcome the basic fact that looking further ahead to find instruction-level parallelism in a single thread is cost-prohibitive. That's why Intel cannot push it much further than it already has. Throwing more transistors at that problem yields rapidly diminishing benefits. The same thing will happen to ARM.

It isn't merely a hardware limit, it transcends instruction sets and architectures. The only way to overcome it is thread-level parallelism and that's squarely a SOFTWARE challenge at this point.
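At its simplest, that software-side change looks like this: partition the work into independent pieces, run one thread per piece, and merge the results. A minimal sketch (Python's `threading` is used here only to show the structure; CPython's GIL keeps these threads from using multiple cores for pure computation, whereas the identical structure in C++, Rust, or Java runs truly in parallel):

```python
import threading

def threaded_sum(xs, nthreads=4):
    # Each thread computes an independent partial sum over its own
    # stride-partitioned slice; the main thread merges the results.
    results = [0] * nthreads

    def worker(i):
        results[i] = sum(xs[i::nthreads])

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()   # wait for all partial sums before merging
    return sum(results)

print(threaded_sum(list(range(1, 101))))  # 5050
```

Even in this trivial case you can see the extra complexity the post describes: shared state, partitioning, and synchronization (the `join` calls) that a plain serial loop never needs.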
 