Skylake: Intel's Core i7-6700K And i5-6600K

It's common slang in the USA, and it applies to video games as well; for example, when an update drastically reduces or eliminates damage, or removes something beneficial to the player.



 
I think we all know that it's going to suck; no need to confirm it. Your theory is on point, sir.



 
Very few users running these chips are going to be using integrated graphics anyhow. In order to make them more cost-effective for gaming enthusiasts, who will make up the majority of the market for these unlocked chips, it makes sense. Nobody is going to be trying to game with the integrated graphics on any serious level anyhow. Mostly the integrated graphics will be used for troubleshooting or for temporary use between cards or RMAs.

There will be some users who strictly want the processing performance and the ability to watch HD video, which is easily accomplished with any recent gen iGPU.
 
That will yield a BIG increase, especially with the CPU overhead issues in DX11 games that appear to afflict AMD's drivers. Even if AMD cards see a sudden surge with DX12, Maxwell 2.0 is improved enough over Maxwell 1.0 and Kepler to keep the 980 Ti at or near the top of the pecking order.

I'm in the unenviable position of having an entirely archaic setup. I could replace my PII X3 710/AMD 780/4GB DDR2 800MHz configuration with something a lot faster, but I'd still have an HD 4830 holding things back; likewise, upgrading the graphics would still leave untapped performance on the table thanks to the slow CPU. At least, in your situation, your CPU is still an enthusiast option. :)
 
That's good to hear. I loved the 4000, 5000, and 7000 series from AMD; they really kept me happy. Then I saw the 980 Ti and my jaw hit the floor: it doesn't need to be watercooled, runs on air, and outlasts the Fury X. I'm not an AMD fanboy, I love their hardware, but they fell so short this time around that I was extremely disappointed. I haven't used Nvidia since the 7800 GTX, so I'm looking forward to what this baby has in store.
 
Still 4 cores... I'm sticking with my Q6600.

You're insane. I upgraded from a Q6600 to an i7 920 and it <mod edit> blew it away.
Then I upgraded to a 2600K and that rolled all over the i7 920...

You literally have NO IDEA.


***Watch the language please, use of expletives is unnecessary.-DB
 
Considering the IPC stagnation in the last few architectures from Intel (ever since Nehalem really, although the Sandy Bridge improvement was somewhat noticeable as well), the IPC improvement in this generation seems downright impressive!

Too bad THG dropped the Visual C++ compilation benchmark. Professionally I'd be highly interested in seeing how the architectural improvements manifest there. I guess I'll have to go to other sites for that...
 
Is it just me, or are these tests really strange?
The Core i7-5960X is slower than the 5820K, and the 5820K is slower than most 4-core CPUs in every Photoshop/Office test.
Is this software really that bad at using multi-core CPUs?
And if Windows 10 is so much faster, why is there no test of the other CPUs running under this version of Windows?
Maybe I should just upgrade my OS and not my entire system...
 

Which is easier said than done: to increase clocks, you need to increase the timing margins through the combinational logic between flip-flops. To do that, you need to reduce the amount of logic between FFs, and this is typically achieved by adding pipeline stages. Adding pipeline stages increases instruction execution latency and ends up compounding data-dependency pipeline stalls - look at Intel's NetBurst and AMD's recent architectures. They both proved that seeking higher clock rates through longer pipelines does not provide much, if any, net performance gain beyond ~15 stages - whatever is gained on clocks is lost on stalls. The only thing you gain beyond that point is a more expensive power bill to feed those extra FFs and the associated extra control logic.

There are some chips like the Power7 which run at 5GHz stock, but those cores are much simpler: instead of relying on deep, speculative out-of-order execution to achieve good single-threaded performance, IBM scrapped most of that in favor of quad-threading the cores and relying on massive thread-level parallelism, making the Power7 a horrible CPU for lightly threaded workloads.

What happened when Qualcomm tried lengthening their ARM CPU's pipeline to achieve higher clock frequencies? They ended up with a less power-efficient chip that got beaten in benchmarks by slower-clocked chips. Achieving the best possible throughput is a delicate balancing act between pipeline length, how much work can be done by each stage within a given clock period, and how efficiently you can keep the pipeline running. Longer pipelines are much more difficult to run efficiently, since you run into far more data dependencies.
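
To make that concrete from the software side, here's a rough C sketch of my own (an illustrative microbenchmark, not anything from the article) comparing a single long dependency chain against the same amount of work split across independent chains. On a deep pipeline, the dependent version is limited by instruction latency, while the independent version lets the core overlap operations and keep the pipeline fed:

/* Illustrative microbenchmark (not from the article): one long
   dependency chain vs. the same work split into four independent
   chains. Compile with optimizations, e.g. cc -O2 depchain.c */
#include <stdio.h>
#include <time.h>

#define N 200000000UL

int main(void)
{
    /* Single chain: every iteration depends on the previous result,
       so the core mostly waits out the multiply/add latency. */
    clock_t t0 = clock();
    double a = 1.0;
    for (unsigned long i = 0; i < N; i++)
        a = a * 1.0000001 + 1.0;
    volatile double sink1 = a;            /* keep the result alive */
    double dependent_s = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Same number of operations split into four independent chains:
       the out-of-order scheduler can overlap them and hide latency. */
    t0 = clock();
    double x0 = 1.0, x1 = 1.0, x2 = 1.0, x3 = 1.0;
    for (unsigned long i = 0; i < N; i += 4) {
        x0 = x0 * 1.0000001 + 1.0;
        x1 = x1 * 1.0000001 + 1.0;
        x2 = x2 * 1.0000001 + 1.0;
        x3 = x3 * 1.0000001 + 1.0;
    }
    volatile double sink2 = x0 + x1 + x2 + x3;
    double independent_s = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("dependent chain:    %.2f s\n", dependent_s);
    printf("independent chains: %.2f s\n", independent_s);
    (void)sink1;
    (void)sink2;
    return 0;
}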
 


Just run the same benches on your 920.. but I'll tell you right now.. what you'd get out of a Skylake over the 920 would make you wonder why you didn't upgrade sooner..



Given the size of the chip, the process it's manufactured on, the materials it's made with and the heat output, it's actually quite hard to wring a lot more performance out of these things. You want large jumps in performance? You're going to need chips nearly as large as the old ones were.
 


I don't have all of these applications. Even if I did, I don't have the same input files, so the results wouldn't be comparable.

To be honest, the main benefit for me would be the occasional video encode and maybe some improved game performance (although my GPU is the weakest link for that). My 920 has done fine so far, and I haven't even overclocked it. In fact it's undervolted, so it will probably last forever.
 


Yep. I only meant from a business-strategy perspective. I agree that it basically sucks balls for the rest of us who would love to make use of the best tech.
 
Skylake was supposed to be the CPU generation that brought significant X86 performance gains. I don't see that. Intel is concentrating too much on iGPU performance, and there's no high-end competition. ~10% CPU performance gains per generation are to be expected now. Kaby Lake and Cannon Lake will probably be no different. It's depressing, but that's how it is 🙁
And probably no 6-core laptop chips for a few more years, if this is any indication.
 


I was thinking the same thing for a while, but then it got me wondering: do we NEED that much power?

Take a look at Windows 10 and DirectX 12, for instance; the performance gains over previous generations of Windows and APIs are staggering. Windows 10 and DX12 are very efficient at handling their workloads, and because of this they make computers feel 2x faster than before.

I'm not saying that more power is bad, but we really need to sort out our needs from our wants. Most of us don't need that type of power, but we want it.

Plus, like I was saying earlier, software is making huge improvements in optimizing the efficiency of workloads on hardware. So instead of putting all the stress of making faster PCs on the hardware, putting the stress on both hardware and software will be more beneficial, because the hardware side can also work on power savings and other ideas it wants to bring into its CPUs.
 


I think you may be looking at the gaming charts too closely. Gaming probably won't see major improvements, as it takes a game which is CPU bound for it to matter. Or someone who plays for high FPS, which means they should test these at lower settings.

It did show some nice improvements in the CPU tests. Only the 6-8 core CPUs were outperforming them on other tasks. When Skylake gets 6-8 cores, it'll do even better.

Anyway, we are at a point in time where you just don't upgrade often. I still have an i7 920 OC'ed, as I'm waiting for a major bump before upgrading. I'm very close now.
 
TechyInAZ: that is multimedia performance. Windows 10 doesn't increase X86 performance. It still takes just as long to render a window of tabs in Google Chrome (CPU bound), search my SSD, search my email database, compress files, or launch programs from my SSD (mostly CPU bound).

I know that games are usually GPU bound, so there's not much improvement from more X86 performance.
But look at the numbers on page 7 for the 4790K vs. the 6700K (4.0/4.4GHz vs. 4.0/4.2GHz base/turbo):

Blender: 16.6% improvement
Cinebench R15 single-threaded: 3.5% SLOWER
Cinebench R15 multi-threaded: 7.85% improvement
Adobe CC Media Encoder: 9.3% improvement
WinZip Pro 19: 11.9% improvement
SiSoft arithmetic single-threaded: 8.5% improvement, though the i5-6600K shows a 36% improvement (fluke or real?)

SiSoft arithmetic multi-threaded: 8.5% improvement
Page 8:
AutoCAD 2015 2D performance: 13.8% improvement (not bad)
AutoCAD 2015 3D performance: 10% improvement

On page 6, the office benches are not very impressive at all.

So a good two years after Haswell, Skylake isn't very impressive in terms of CPU performance increases for the base instruction set. Intel is spending too much time on new instructions, architecture changes and the iGPU rather than on raw X86 performance, IMHO.
Yeah, upgrading your mobo and CPU (or your laptop, unless you game and have no external GPU option) every 6+ years will be the norm for people in the know.
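
For reference, here's a rough sketch of the arithmetic behind percentages like those above, plus a crude per-clock comparison using the 4.4GHz vs. 4.2GHz turbo clocks mentioned earlier. The score values in this little C sketch are made-up placeholders, not numbers from the review:

/* Rough sketch of the arithmetic: raw gain and a crude per-clock gain.
   The two scores are hypothetical placeholders (higher = better); only
   the 4.4GHz vs 4.2GHz turbo clocks come from the comparison above. */
#include <stdio.h>

int main(void)
{
    double haswell_score = 100.0;    /* hypothetical 4790K result */
    double skylake_score = 116.6;    /* hypothetical 6700K result */

    /* Raw generational gain: (new - old) / old */
    double raw_gain = (skylake_score - haswell_score) / haswell_score;

    /* Per-clock gain: divide each score by its turbo clock first. */
    double per_clock_gain = (skylake_score / 4.2 - haswell_score / 4.4)
                          / (haswell_score / 4.4);

    printf("raw gain:       %.1f%%\n", raw_gain * 100.0);        /* 16.6 */
    printf("per-clock gain: %.1f%%\n", per_clock_gain * 100.0);  /* ~22.2 */
    return 0;
}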
 


This is likely due to them not having any real way left to improve performance on a per-instruction basis, so they are trying to find improvements through better instructions and architecture changes.

Unlike GPUs, CPUs are mostly used in a serial fashion, so they can't just toss more cores at the problem and see major improvements.
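
To put a rough number on that, Amdahl's law gives the maximum speedup from n cores as 1 / ((1 - p) + p / n), where p is the fraction of the work that can actually run in parallel. A small sketch (the 50% and 90% parallel fractions are just illustrative assumptions, not measurements of any real program):

/* Amdahl's law sketch: speedup = 1 / ((1 - p) + p / n).
   The parallel fractions are illustrative assumptions, not measurements. */
#include <stdio.h>

static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    const double fractions[] = { 0.50, 0.90 };   /* share of work that parallelizes */
    const int cores[] = { 2, 4, 8, 16 };

    for (int f = 0; f < 2; f++) {
        printf("p = %.0f%% parallel:\n", fractions[f] * 100.0);
        for (int c = 0; c < 4; c++)
            printf("  %2d cores -> %.2fx speedup\n",
                   cores[c], amdahl(fractions[f], cores[c]));
    }
    return 0;
}

With only half the work parallelizable, even 16 cores top out around 1.9x, which is why adding cores alone doesn't fix mostly serial software.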
 
Can't they increase the computational power of the integer and floating-point units, and add more units per core, so that work flows through the pipeline quicker, while increasing prefetch power to keep the pipeline full? I'm not a CPU expert, but that would be great. Per-core performance is what counts.
 

Intel's Haswell, Broadwell and presumably Skylake CPUs already try to look about 192 instructions into the future to keep their existing execution units (issue ports) busy on every cycle. Adding more execution units/ports would not help, since the scheduler is already struggling to find enough work for the ones it already has.

The problem is that normal software contains data dependencies (ex.: c = a+b; e = c*d, 'e' cannot be calculated until 'c' is known), conditional/indexed branches (if/then/else, case, switch, etc.), indirect branches (ex.: virtual function tables and remote procedure calls) and other constructs which limit how effective out-of-order and speculative execution can be at shuffling instruction order to minimize pipeline stalls.

You simply cannot brute-force your way through the intrinsically serial nature, in both code and data, of typical general-purpose code running on general-purpose CPUs. All the fancy tricks may help, but they produce rapidly diminishing returns.
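
A classic way to see the branch part of this for yourself is to run the exact same branchy loop over shuffled and then sorted data: the work is identical, but once the branch becomes unpredictable, speculation keeps guessing wrong and the pipeline pays for it. A rough sketch (the array size and the 128 threshold are arbitrary choices of mine, and depending on compiler flags the branch may get turned into a conditional move, which hides the effect):

/* Sketch: identical conditional sum over shuffled vs. sorted data.
   Same work either way, but the shuffled case makes the branch
   unpredictable, so speculative execution mispredicts far more often. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define COUNT (1u << 24)

static long long conditional_sum(const int *data, size_t n)
{
    long long sum = 0;
    for (size_t i = 0; i < n; i++)
        if (data[i] >= 128)          /* ~50/50 branch on random bytes */
            sum += data[i];
    return sum;
}

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int *data = malloc(COUNT * sizeof *data);
    if (!data)
        return 1;
    for (size_t i = 0; i < COUNT; i++)
        data[i] = rand() % 256;

    clock_t t0 = clock();
    long long shuffled_sum = conditional_sum(data, COUNT);
    double shuffled_s = (double)(clock() - t0) / CLOCKS_PER_SEC;

    qsort(data, COUNT, sizeof *data, cmp_int);    /* same values, now sorted */
    t0 = clock();
    long long sorted_sum = conditional_sum(data, COUNT);
    double sorted_s = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("shuffled: %.3f s (sum %lld)\n", shuffled_s, shuffled_sum);
    printf("sorted:   %.3f s (sum %lld)\n", sorted_s, sorted_sum);
    free(data);
    return 0;
}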
 


I'm no expert either, but you'd think that if it were easily possible, AMD or Intel would have done it already. Especially AMD, who is far behind Intel as it is.

Like I said before, CPUs are serial devices, and most things do not improve by throwing more units or cores at them. They have to improve the cores, not toss more at them. GPUs see a lot more advancement simply because everything is so parallel, allowing them to just add more units and see improvements.
 

Forgive me if this idea is just plain stupid (I'm not an expert on this), but what if Intel created something almost like a co-processor in place of the instruction scheduler? It would be like another core, or a processor within the processor, but it wouldn't have the capabilities of a full CPU core; instead, its purpose would be to schedule instructions. It would be able to blast through scheduling with ease. I don't know if it's possible, but maybe Intel can do something like that and then add more parts to the CPU?
 