Intel 7000 Series (Kaby Lake) MegaThread! FAQ & Resources



True, but nothing like a good kick in the rear from AMD to make it happen in a timely manner. Intel has been making people pay a premium for a while now and it's about time.

I know CannonLake will be decent, but I think Intel likely wasn't looking to make any major leaps in anything but efficiency or iGPU performance (i.e. the last 5 generations). But if Ryzen lives up to the hype and the "leaks", Intel will have to offer a bit more.

It's entirely possible we are looking at a repeat of history: the Athlon 64 kicked the crap out of the P4, then Intel pulled a rabbit out of the hat with Core 2. All it takes for Intel to wake up is some decent competition.
 


Probably named after some place near one of their fabs, much like almost every uArch is.



To be fair, if Intel were in full Core 2 overdrive and pushing 8-core/16-thread $300 top-end i7s for the mainstream, AMD would not have survived. They have barely had any sales for the past 3 years, and not as many as they would have wanted for the last 5. Do you think PD could have sold enough if you could have bought an i7-6900K for $300?

I think Intel knew this and purposely slowed down on certain things, because they did not want to be put at the center of AMD dying and torn apart for it, as they would have been.
 


AMD could probably file complaints about unfair competition and trade practices if Intel started those kinds of price wars. AMD does own the rights to the x64 architecture, though, so they could potentially invalidate the current licensing agreement with Intel, or threaten to.

One could argue that there are non-x86 alternatives to both AMD and Intel, so a monopoly isn't quite possible in this regard.

 



And there is the core (no pun intended) of the question: did Intel slow down deliberately, was it complacency, or were they possibly just treading water until a new architecture could be brought to market?

I understand the monopoly issue, but I'm not so sure Intel had as much choice about slowing down. Considering that it's been 6 generations now since their last significant performance increase, that is a huge amount of time to be voluntarily running in cruise mode.

I mention the Core 2 leap in performance because it seems it might take an architecture change to get performance moving ahead again. I guess we will see with CannonLake, but I have to admit, my expectations for CL's performance are pretty low.
 
My hope is that Intel is shunting resources to non-silicon R&D. But it is likely being directed at LGA3647 and their integration with the Altera FPGA platform.

HPC and cloud services (sigh) are the future. If trends continue we'll all have dumb terminals connecting to mainframes again. But we'll call it the cloud and think it is new.
 


IKR. I am still determined to avoid cloud computing, but it gets harder every day. I also remember the mainframe/terminal days with little fondness.

Meanwhile, our mainstream processors are already too small for their pin count on socket 1151. We are very close to the IPC wall with the present architecture. I don't believe that moving to 10nm will improve that very much, if at all.

More cores can push performance up in some workloads, but software is lagging badly with the cores we have now.
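
To put rough numbers on that, here's a quick Amdahl's law sketch (the parallel fractions are my own hypothetical picks, not measurements of any real application) showing why piling on cores does so little when the software isn't written for them:

```cpp
#include <cstdio>

// Amdahl's law: speedup = 1 / ((1 - p) + p / n)
// p = fraction of the work that can run in parallel, n = core count.
double amdahl_speedup(double parallel_fraction, int cores) {
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores);
}

int main() {
    // Hypothetical parallel fractions, not measured from any game or app.
    const double fractions[] = {0.50, 0.75, 0.95};
    const int core_counts[] = {4, 8, 16};

    for (double p : fractions) {
        for (int n : core_counts) {
            std::printf("parallel=%.0f%%  cores=%2d  speedup=%.2fx\n",
                        p * 100.0, n, amdahl_speedup(p, n));
        }
    }
}
```

Even code that is 75% parallel tops out around 3.4x on 16 cores, which is why the extra cores mostly sit idle today.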

Even though I just built a new system, I am intrigued by 6c/12t mainstream processors with graphics. The obvious other shoe is an 8c/16t mainstream processor with graphics at 10nm. I just don't see what the use case would be for general-purpose computing.
 
That's what I've been thinking: has Intel hit a wall with the current architecture and does it need a drastic change like P4 to Core 2? I doubt 10nm will bring much in the way of performance, maybe the average 5-10% from efficiency/clock speed gains.

I'm not sure if the reason more games aren't better threaded is that the performance gains aren't worth it, lazy devs, or just that each game seems to be a rehash of the previous one but with new graphics. I'd think that games like BF1 would reuse quite a bit of the previous code to save time and money.

Intel drives the market, and as long as they are selling dual-core and single-core CPUs, programs will try not to skew too far away from them for compatibility. If Intel made their lowest desktop and laptop part a quad core, then I think we would see hyperthreading become better used.
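
To illustrate what "better used" could look like, here's a minimal sketch (not taken from any real engine, just an assumption of how a program might do it) that sizes its worker pool from whatever logical thread count the CPU reports, so it scales up automatically once quad cores with hyperthreading become the floor:

```cpp
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    // Logical processors reported by the OS (cores x threads per core).
    // May return 0 if unknown, so fall back to a conservative default.
    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 2;

    std::printf("Spawning %u worker threads\n", workers);

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < workers; ++i) {
        pool.emplace_back([i] {
            // Placeholder workload; a real engine would pull jobs from a queue.
            std::printf("worker %u running\n", i);
        });
    }
    for (auto& t : pool) t.join();
}
```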

I asked before in a different thread, but... any progress on carbon or quantum computing? I saw an article a while back but nothing since. Is this something Intel is pursuing, or are they too far out to be worthwhile for now?
 
The games aren't threaded better because of the console chips. If the console core count goes up, all the new games will follow. We like to think that we are the mainstream gamers, but we are not, and it's not just the sales figures either. The game developers' choice is to optimize for a computer, which is what a console is, that is frozen in time for 5-10 years. That's a long time in computer years. The other choice is to try to get it to run on mountains of disparate hardware and firmware with a short shelf-life at the top of the heap. They prefer to optimize for the lowest common denominator and then let us throw compute at it to make it run better. Tough love, but we sign up for it.

Intel is working on carbon nanotubes; I don't know how serious they are. Everyone has a few scientists working on quantum computing. The reward for being first is substantial, but so are the risks. Intel is supposed to be working on a complete redesign of the architecture to launch in the 2020-2022 timeframe, and the timeframe is about the only information out there. Rumors about rumors.
 


Core 2 to Core i was pretty big, but to be fair it has been 6 generations since the last significant desktop performance increase.

Just as an example, Haswell to Skylake was double the memory bandwidth alone. A lot of the stuff improving around or in the CPU just is not being utilized by consumer desktops. It would be great if games could utilize the full force of a CPU's memory bandwidth and new instruction sets like AVX, because we would have seen vast improvements. Or even more cores.
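
For anyone wondering what "utilizing AVX" actually means, here's a toy sketch (not from any shipping game) that adds eight floats at a time with 256-bit AVX intrinsics instead of one at a time; build with -mavx on GCC/Clang or /arch:AVX on MSVC:

```cpp
#include <cstdio>
#include <immintrin.h>  // AVX intrinsics

int main() {
    // 32-byte alignment is required for the aligned load/store intrinsics.
    alignas(32) float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    alignas(32) float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    alignas(32) float c[8];

    __m256 va = _mm256_load_ps(a);       // load 8 floats at once
    __m256 vb = _mm256_load_ps(b);
    __m256 vc = _mm256_add_ps(va, vb);   // 8 additions in one instruction
    _mm256_store_ps(c, vc);

    for (float v : c) std::printf("%.0f ", v);  // prints "9 9 9 9 9 9 9 9"
    std::printf("\n");
}
```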

Not too sure on 10nm, but Intel seems to think it might be more than people expect. Per them, taken with a grain of salt, they will be able to fit 2x the transistors per chip compared to Samsung's 10nm.

https://liliputing.com/2017/03/intel-not-10nm-chips-equal-better.html

I actually would not be surprised, though, since everything I have heard is that Samsung's nodes are not true nodes and they are naming them that just for the sake of naming. Not knocking them, as they are decent, but I think Intel's 10nm will be more of a true 10nm.

And you can't really compare NetBurst-to-Conroe with current uArch changes. NetBurst was a pretty bad uArch with plenty of problems; high leakage until 65nm was one of them. To me that is why Ryzen is good but not overly impressive: it just didn't have a good uArch to beat. However, anything Intel does now has to beat their previous offering, and everything Intel has put out since Core 2 has been good parts with some nice new features.
 
 


If nothing else, Ryzen has shown itself to be proficient in the workstation space against Intel. Since these chips really aren't targeted at gamers and will go up to 10 cores, Intel may have to rethink their pricing scheme. However, the Intel platform is much more expansive, with quad-channel RAM and all the extra features.

I have to admit that I am surprised at the 140W TDP on Skylake-X. Ryzen is 8c/16t in a 95W envelope.
 


Interesting.



Depends on performance. Intel has the room to drop price if needed. I doubt they will undercut AMD though. AMD needs this influx of cash Ryzen should bring and Intel needs AMD to do well.



The platform is not as horribly expensive as people make it out to be. Sure, it is if you want the best motherboard and RAM, but even the lowest-end motherboard is almost as nice as some top-end mainstream boards and costs about the same. Hell, my current board was $300 and it is a Z87 board.

As for the TDP, I'm not sure why people focus on that so much. Intel always lists a higher TDP even if the chip doesn't hit it. Everything from the 6800K up to the 6950X has a 140W TDP.
 


Not expensive... expansive. Meaning that it has more features than AMD's current top end: quad-channel RAM, 44 PCIe lanes, etc. It has a lot more room to expand than anything competing with it. While AMD's CPUs may perform similarly, the chipset doesn't have the expansion capabilities that come built in with the Intel platform.

I think the TDP is an issue because it means the potential for a lot of excess heat to remove. Intel has been pushing efficiency, and AMD has been hammered for years about the TDP of the FX series. It only makes sense to call Intel out for being less efficient.
Intel = 6c/12t - 140W
AMD = 8c/16t - 95W
How times change...
 


Sorry I mixed up the word. That is true, 44 plus 24 on the board.

As for power, you are mixing up TDP with power draw. Look at these reviews:

http://www.tomshardware.com/reviews/amd-ryzen-7-1800x-cpu,4951-11.html

At stock settings the 1800X hits 144W power draw in the torture test, running at 3.8GHz.

http://www.tomshardware.com/reviews/intel-core-i7-broadwell-e-6950x-6900k-6850k-6800k,4587-7.html

The 6800K at stock stays under 100W, while at 4GHz (200MHz faster) it hits 118W.

http://www.tomshardware.com/reviews/intel-core-i7-broadwell-e-6950x-6900k-6850k-6800k,4587-9.html

A fairer comparison, 8c/16t vs 8c/16t: at stock the i7-6900K hits 141W, and at 4GHz it hits 153W, about 10W more than the 1800X for 200MHz more.

However, we have to keep in mind that these CPUs also have a quad-channel memory controller and more cache: the 1800X has 20MB of L2+L3, while the 6900K has 20MB of L3 cache by itself plus another 256KB of L2 per core (2MB) on top of that. Both of those create additional power draw, along with the larger PCIe controller you mentioned.

I would not call it less efficient either. In fact it is currently about even on efficiency, but that will probably change once Intel launches its 10nm process. Or it won't, if Intel decides to pack more cores into that extra space.
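
Napkin math on the torture-test figures quoted above, expressed as watts per core-GHz (a crude proxy that ignores IPC, cache, and the quad-channel uncore, so treat it as a sketch only):

```cpp
#include <cstdio>

// Crude efficiency proxy: package power divided by (cores x clock in GHz).
// Ignores IPC and uncore differences, so it's only a napkin-level comparison.
double watts_per_core_ghz(double watts, int cores, double ghz) {
    return watts / (cores * ghz);
}

int main() {
    // Torture-test numbers quoted above from the Tom's Hardware reviews.
    std::printf("1800X @ 3.8GHz: %.2f W per core-GHz\n",
                watts_per_core_ghz(144.0, 8, 3.8));  // ~4.7
    std::printf("6900K @ 4.0GHz: %.2f W per core-GHz\n",
                watts_per_core_ghz(153.0, 8, 4.0));  // ~4.8
}
```

By that rough measure the two chips really do land within a few percent of each other.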

I would say that Intel is pretty efficient but AMD is better. To be fair, even the i7-6950X overclocked in a torture test did not use as much power as the FX 9590 did at stock settings. The FX series just was not good. The FX 8150 was a 125W TDP chip but used a ton of power under load.

Again, Intel normally overstates their TDP rather than exceeding it.
 


I would characterize this as back to the original schedule. There are numerous other rumors about exactly what Skylake-X and Kaby Lake-X are. I confess to being completely confused. Meanwhile, Intel is as mum as usual about their plans.

Coffee Lake-X has me aggravated, because I don't care a fig about it. I want a Coffee Lake-S that will fit my Z270 board. It looks like I'm not getting that, though. I can wait until 2018 for it, as long as it shows up.

What do y'all think?
 


Preferably I want Intel to go back to a single platform: kill mainstream/enthusiast and merge them. I am sure they could create a setup where a lower-end CPU only supports dual-channel memory while the high-end options still support quad channel.

I think if they want to be truly competitive they will probably have to.
 
That would mean making boards with expensive wiring and sockets, which would kill off the $50 micro-ATX market for sure. Basically letting AMD absorb that market share.

I know they do it for yields, but it would be nice to have the HEDT chips be the top of the line architecture and have it trickle down to consumer, rather than the other way around.
 


Try overclocking it and you will see the difference between straining to reach 5GHz and hitting 5GHz with ease.

 
Part 2 of AdoredTV's Skylake-X review...

Some unusual benchmark results (in part 1 it loses to Ryzen and previous-gen parts in certain benchmarks) and also some poor gaming performance (part 1 as well)...
https://www.youtube.com/watch?v=FKfP8TEKqQU

And a harsh review from our very own Paul Alcorn at Tom's, check it out here:
http://www.tomshardware.com/reviews/intel-core-i9-7900x-skylake-x,5092.html

"Intel responded that the loses are a result of the Mesh architecture as opposed to the ring architecture on broadwell"

"Intel's mesh fabric and AMD's Infinity Fabric demonstrate how highly parallel architectures require more sophisticated interconnects. In some cases, they introduce performance regressions compared to simpler configurations that connect subsystems more directly."

So it looks like the mesh is causing problems after all... according to Intel and Tom's Hardware, that is!

Jay
 


"Boiling Lake". HAHAHAHA! It's too damn fitting.

Thanks for the video, it actually makes a lot of interesting points.

Cheers!