Intel Kaby Lake: 14nm+, Higher Clocks, New Media Engine


bit_user

Polypheme
Ambassador
At the risk of oversimplifying it, you can think of Xeon Phi as a GPU that's built from x86-64 cores, and devoid of the graphics-specific engines found in a conventional GPU.

The benefit is ease of programmability. Choose your language and your OS. Use existing binaries and libraries (on the new, second gen series).

The disadvantage is that GPUs still have higher raw performance (and higher performance per Watt), since they lack the legacy and single-thread performance optimizations of x86 cores.

So, people who can effectively rewrite their code in CUDA or OpenCL will do better with GPUs, whereas those with legacy code and even some server-oriented apps can easily use Xeon Phi (Knights Landing).
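To give a feel for the programmability argument, here's a minimal sketch of my own (plain C with OpenMP, nothing Phi-specific about it): the same loop that runs on any x86-64 box compiles and runs unmodified on a Knights Landing part, whereas getting it onto a GPU means rewriting it as a CUDA or OpenCL kernel and managing device memory yourself.

    /* Minimal OpenMP sketch: an ordinary parallel reduction that runs as-is
     * on any x86-64 CPU, Knights Landing included.
     * Compile with something like: gcc -O2 -fopenmp sum.c -o sum */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void)
    {
        const int n = 1 << 24;                /* ~16M elements */
        double *a = malloc(n * sizeof *a);
        double sum = 0.0;

        for (int i = 0; i < n; i++)
            a[i] = (double)i;

        /* The only parallel-programming effort required: one pragma. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += a[i];

        printf("sum = %.0f using up to %d threads\n", sum, omp_get_max_threads());
        free(a);
        return 0;
    }

The exact same source, same compiler, same libraries; the hardware underneath just happens to have a lot more cores.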

If you're interested in knowing more, there's a ton of info out there.
 

neblogai

Distinguished
When we are talking about Intel gaming CPUs, adding more cores or moving to 10nm or 7nm is not really needed, as it does not bring a dollar benefit. A smaller process does not bring anything useful here; you can see this from the fact that gaming CPUs already waste so much die area on graphics cores. It seems Intel is happy with the status quo: as long as games mostly use one or two fast cores and up to four in total, having the fastest cores is what matters most, and Intel has those and has no trouble producing them. Putting more cores into an affordable chip might be good for the industry and for gamers, but it is of no use to Intel. If 4 cores can still be sold as an enthusiast part, why bother with more, or deal with the issues of 10nm? Tech journalists don't care much either: when testing CPUs, they don't pick multithreaded, CPU-heavy titles, but a generic list of AAA titles made for the biggest market (i3 and i5). Everyone is happy not advancing, so we stay in the status quo.
 

InvalidError

Titan
Moderator

There are non-coherent schemes where most of the burden of managing coherency is shifted to the programmers, device drivers, and OS. One of the extensions to AGP/GART added such capabilities to provide more efficient memory access for GPUs, and those carried over to PCIe.
 

bit_user

Polypheme
Ambassador
I don't know how we got off on this tangent, but to bring it back to my original post, Sodani is talking about using MPI between partitions of 18 cores (9 tiles). So, once you hit a point of diminishing returns with cache-coherent shared memory, you break up the memory space, and communication between the partitions instead takes the form of message passing or something similar.
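As a rough illustration of the difference (a toy sketch in plain C with MPI; the rank numbers and data are just placeholders), each partition works only on its own private memory, and data crosses between partitions via explicit send/receive calls rather than loads and stores through a shared, coherent address space:

    /* Toy MPI sketch: two ranks (think: two separate coherence domains)
     * exchanging data by message passing instead of shared memory.
     * Build/run with something like: mpicc msg.c -o msg && mpirun -np 2 ./msg */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        double local[4] = {0};                 /* each rank's private buffer */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            for (int i = 0; i < 4; i++)
                local[i] = i * 1.5;            /* produce data in rank 0's memory */
            MPI_Send(local, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(local, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 got %.1f %.1f %.1f %.1f\n",
                   local[0], local[1], local[2], local[3]);
        }

        MPI_Finalize();
        return 0;
    }

Nothing has to track those buffers across partitions; the data movement is explicit and visible to the programmer.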

Typically, memory-mapped devices (PCI, AGP, PCIe, etc.) are treated as non-cacheable address ranges. You could view writes or DMA transfers to/from these addresses as a kind of message passing.
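To make that concrete with a hypothetical example: on Linux you can map a device's BAR through sysfs and poke it with plain stores, and the kernel sets that mapping up non-cacheable, so each write goes straight out to the device rather than sitting in a CPU cache. The PCI address and register offsets below are made up; real hardware obviously expects meaningful values.

    /* Hypothetical sketch: uncached MMIO access to a PCI BAR via sysfs (needs root).
     * The device address 0000:01:00.0 is a placeholder; assumes the BAR is at
     * least one page long. */
    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
        int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0", O_RDWR | O_SYNC);
        if (fd < 0) { perror("open"); return 1; }

        /* The kernel maps this region non-cacheable. */
        volatile uint32_t *bar = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, 0);
        if (bar == MAP_FAILED) { perror("mmap"); return 1; }

        bar[0] = 0x1;                 /* a store that goes straight to the device */
        uint32_t status = bar[1];     /* a load that really reads the device */
        printf("register at offset 4 reads back 0x%x\n", status);

        munmap((void *)bar, 4096);
        close(fd);
        return 0;
    }

Those stores behave like one-way messages to the device; there is no coherency traffic to manage for them.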

The HSA standard, pushed largely by AMD, actually provides a mechanism for cache coherency between CPUs and GPUs. This makes a lot of sense for APUs; otherwise, communication between them could involve unnecessary round trips through main memory.
 


We all know that the number of cores and threads doesn't translate directly to real-world performance. However, you're welcome to buy an AMD 8-core CPU for $150 that gets outperformed by a 2-core Intel CPU.

Funny how no one seems to complain about the price of a few extra percent of efficiency from a Platinum PSU.
 

InvalidError

Titan
Moderator

I think his point was more along the lines that on the desktop, the same or a nearly identical 2C4T CPU is called an i3 and costs less than half as much. I wouldn't pay a $200 laptop CPU tax either.
 

TJ Hooker

Titan
Ambassador
Why is anyone even paying attention to the prices in that table? You can't buy any of those CPUs yourself and install them, I doubt OEMs actually pay that price, and the final cost of a laptop can vary greatly even among models using the same CPU. That number seems largely meaningless from an end-user perspective.
 

InvalidError

Titan
Moderator

They are still a general indication of the minimum price point at which you can expect to see laptops based on those chips.

Also, unlike desktop chips, which carry an MSRP for single units that includes retailers' margins, those BGA chip prices are for trays of 1,000 from Intel's authorized distributors. Once you add the system integrators' and retailers' margins, you will usually end up paying considerably more for those chips than distributor pricing, even after the large OEMs' discounts.
 

Guest

Guest
The survey doesn't sound accurate. People spend twice as much time creating videos as streaming them?
 

bit_user

Polypheme
Ambassador
I had the same thought.

I assume this is due to sample bias: maybe they polled "creatives", or just users specifically interested in high-end laptops and tablets. If you do make videos, it tends to be a fairly time-consuming process.
 

hst101rox

Reputable
Hope that'll mean ~3.3GHz base / 4.2GHz turbo quad-core CPUs will come to top-end laptops. CPU progress sure has slowed; a Haswell CPU is still quite competitive clock for clock. Since this is the end of 'tocks' for a while, with the die shrink up next, there's really nothing on the horizon for x86 performance increases until 2019, I guess, if ever. I doubt the top-end clock speed will go up much beyond Skylake, but I could be pleasantly surprised.
 

Lightening02

Respectable
I'm most likely going to be upgrading my CPU from an A10-5800K soon. I was looking at getting an i5-6600K. Would it be worthwhile to get a Kaby Lake, since it comes out so soon, or should I just get a Skylake?
 


No, not worth the wait. Kaby Lake is basically Skylake with higher clocks.
 

InvalidError

Titan
Moderator

Desktop Kaby Lake will be coming out in early 2017, so you still have quite a few months of waiting left for that. If I wanted to upgrade in the near future, my main reason for holding back would be to see whether Zen's performance and pricing will force Intel's hand on prices.
 

bit_user

Polypheme
Ambassador
I don't even understand the logic behind this statement. Why should a user care whether performance comes from architectural improvements or from clock speed? As long as it doesn't come from increasing TDPs, performance is performance! And the delta shown for these mobile SKUs is the biggest single-generation jump since Sandy Bridge.

The question "should I upgrade now or wait" comes down to how much performance increase Lightening02 wants. For that, he should consult some benchmarks (I sure wish the CPU Charts were still being updated, *hint* *hint*) that include both his CPU and the Skylake he'd buy, find some relevant workloads, and look at the % delta. If the % is too small for him, then wait. It's really quite simple.
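For example (numbers made up purely to show the arithmetic): if his A10 managed 60fps in a relevant benchmark and an i5-6600K managed 90fps in the same test, the delta would be (90 - 60) / 60 = 50%, which is clearly worth acting on; if the gap were more like 5-10%, waiting would make more sense.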

For myself, I'm fairly certain I'll be upgrading my Sandy Bridge-E to Kaby Lake-X. I might consider a workstation version of Zen+, if it arrives around the same time.
 
I have an A10-6800K, and I would definitely wait for Kaby. Maybe 4-5 more months, but Skylake with higher clocks? It definitely seems to be worth it. Why pay the same for a processor with 300MHz less?
But of course, this is totally meaningless until we see the OCing potential...
 


Waiting 6 months for Kaby Lake is a long time for a slightly higher frequency; you can overclock to achieve that now. And given what his current CPU is, and the fact that he is able to wait, he obviously does not need it for professional purposes of any kind and is most likely a gamer.

My answer still remains: there is zero purpose in waiting so long for a chip; buy what is available now. He could wait for Cannonlake if he wants to. Should he do that? I don't think waiting another year and a half is a good idea. In the same way, I don't think waiting until March for whatever small improvement Kaby Lake brings is worth it either.
 

bit_user

Polypheme
Ambassador
First, you have no idea how well Kaby will overclock, but it's highly likely that an overclocked Kaby will be faster than overclocked Skylake. Furthermore, there's no reason to think it will overclock better/worse than if it had simply undergone architectural improvements.

Second, not all people are equally patient. Like I said, the only way to answer this question is by saying a Skylake i5 6600k (for instance) is X % faster at task Y. Then, let him decide if he wants to upgrade now, or wait.

And, in order to keep this exchange from being completely useless, here are some comparisons I found (neither is from this site, because the CPU Charts haven't been updated with Skylake and the Skylake review doesn't include his CPU):

http://www.cpu-world.com/Compare/521/AMD_A10-Series_A10-5800K_vs_Intel_Core_i5_i5-6600K.html
http://www.anandtech.com/bench/product/675?vs=1542

Note that the second review compares against the i7-6700K, which has a list price of $350. I wanted to use the i5-6600K in both, but it was inexplicably missing (despite being probably the most popular current CPU among gamers).

Here's Tom's Skylake review, but the closest CPU included in the benchmarks seems to be the A10-7700k:

http://www.tomshardware.com/reviews/skylake-intel-core-i7-6700k-core-i5-6600k,4252-5.html
 


The CPU he is looking for is an i5-6600K or the Kaby Lake equivalent of the i5-6600K; there is going to be a Skylake CPU that's better than the i5-6600K's Kaby Lake equivalent, and that is the i7-6700K. You're right, it is about that question: does he need more performance, and how much? Well, if the i5-6600K is not enough performance for him, then an i7-6700K will do. That's what is available now, so that's what you get right now. He's not talking about the i7-6700K's Kaby Lake equivalent; he's talking about the i5-6600K's Kaby Lake equivalent.

His CPU is old and underperforming. He needs a new one. He should get one now; he should not wait.
 

bit_user

Polypheme
Ambassador
From a quick look at the links I posted, it's true that the A10-5800K doesn't compare well against either K-series Skylake. I think we can agree that upgrading now would be a worthwhile improvement. It's up to Lightening02 to decide whether the (assumed) 12% further speedup of desktop Kaby is worth waiting until maybe Feb '17.
 


No, he can't decide! That's why he asked us instead of making a decision himself :p
 

Lightening02

Respectable
Well, I certainly started quite a long discussion. I use my computer for gaming, and I'm trying to do this upgrade for around $500-$600, the lower the better. That's why I'm not looking at an i7; $100 more is too much for me. I could wait until February or March, but how much would Kaby Lake processors and 200-series chipset motherboards cost at launch compared to Skylake parts that have been available for a while already? I'm not looking for massive performance improvements, just something that won't bottleneck my GTX 970 and gives me some room for future upgrades.

If you want some perspective on my current build, it's a cheap micro-ATX motherboard with 8GB of RAM and a processor OCed to 4.3GHz on a stock cooler. I'm impressed it hasn't caught on fire yet.
 

hst101rox

Reputable
I still have my Asus G50VT laptop as my main machine: a Core 2 Duo X9100, unlocked and overclocked to 3.45GHz, with an SSD. It is barely quick enough; it is acceptably fast for browsing in Chrome and for day trading with bloated software such as Thinkorswim and Fidelity ATP. I wouldn't wait for Kaby Lake. I'd just get a mid-range Skylake and be happy; the gains are so small. I'm not expecting more than 300MHz higher stock clocks. Going from your AMD to Skylake should provide quite the jolt. (EDIT: and nothing exciting until 2019 or later.)

As for me, I'll probably save up for an unlocked 6820HK laptop (probably an MSI GT72), then overclock it as far as it stays Prime95-stable and save that config. It should last me 10 years or so.

For a previous discussion from about a year ago on how CPU performance is barely increasing year to year, thanks to InvalidError for his expertise there and here: http://www.tomshardware.com/forum/id-2751074/skylake-intel-core-6700k-6600k/page-4.html
 

InvalidError

Titan
Moderator

It shouldn't, but it will require BIOS support, as usual for second-gen chips on first-gen motherboards of a given socket. And given what little performance improvement it will bring compared to Skylake, the upgrade won't be worth the trouble for most people anyway.
 