Intel Fires Back, Announces X-Series 18-core Core-i9 Skylake-X, Kaby Lake-X i7, i5, X299 Basin Falls


bit_user

Polypheme
Ambassador

You think it was even possible? After all the trouble they had sorting out 14 nm? And now having to delay 10 nm for another generation of desktop CPUs? Do you have any evidence to support this notion?
 


Absolutely technically possible (look at the Xeons), but the dies would have been larger. However, stagnating on core counts likely made Intel many extra billions in cash, so I can certainly see why they didn't. Plus, Intel still has a competitor around, which I'm certain is better than being deemed a monopoly, even though they have pretty much effectively been one. That slow-paced gig is up now, so Intel will move 6 cores to the mainstream this year or very early 2018, and I would venture to guess 8 cores within the next 12 to 18 months. For AMD's next Ryzen release, one would expect higher clocks from refining the process, a little better IPC, and maybe some tweaking of the CCX latency. Intel knows to have an 8-core ready for when that hits.

Also, if you missed it, Intel announced just the other day that Ice Lake is tape-in ready, which I found very funny. I would bet it has been damn near tape-in ready for over a year; they were just sitting on it until AMD finally got Ryzen out, so they could milk whatever they could out of the old architecture up until that point. Good plan business-wise; it sucks for us consumers though.
 

bit_user

Polypheme
Ambassador

I was talking specifically about IE's claim regarding process improvements. Not long ago, Intel had the lead on process. I don't think they got there or subsequently fell behind due to laziness. If there's something I'm missing, please educate me!
 


They still have a process lead:

https://www.extremetech.com/extreme/234681-intel-reportedly-wooing-apple-as-a-customer-for-its-arm-foundry-business

No one has a 14nm process close to Intel's yet. In fact, Samsung's 14nm, the one AMD is using, is very similar to TSMC's 16nm.

They have had working 10nm for a while, but it is hard and takes a ton of money to do. AMD is lucky they no longer have to deal with fabs themselves. With how much money they were bleeding, I doubt they could have made it to 14nm on their own.
 

InvalidError

Titan
Moderator

Just look at die sizes and how much of the area is dedicated to graphics versus CPU cores. The IGP accounts for as much die area as the CPU cores and L3 cache, if not more, and the die size, including the beefed-up IGP, has gone down almost 50% since Sandy Bridge. So yes, if Intel made a Sandy-sized chip without an IGP on 14nm, it could fit 16+ cores on it.
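
A rough back-of-the-envelope version of that area argument, in Python. The die sizes and per-core area below are approximate public ballparks I'm assuming purely for illustration, not measured figures:

```python
# Back-of-the-envelope: how many cores fit in a Sandy Bridge-sized die at
# 14 nm with no IGP? All numbers are rough assumed ballparks, not exact
# die measurements.

sandy_die_mm2 = 216.0      # quad-core Sandy Bridge + IGP @ 32 nm (approx.)
core_plus_l3_mm2 = 9.0     # one Skylake core + its L3 slice @ 14 nm (approx.)
uncore_mm2 = 50.0          # memory controller, system agent, I/O (assumed)

budget = sandy_die_mm2 - uncore_mm2
print(f"~{budget / core_plus_l3_mm2:.0f} cores")  # -> ~18 cores
```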

Another way to confirm it: the 6950X may have only 10 cores in a slightly larger die than SB, but it has three times as much L3 cache, two extra memory channels, 28 extra PCIe lanes, and a sizeable, seemingly dead/unused area about the size of three cores. Trim the features that aren't necessary on a mainstream CPU and you have enough room to squeeze ~16 cores into SB's footprint.

 


Correction: the i7-6950X only has 24 more PCIe lanes than SB; it has 40 total.
 

bit_user

Polypheme
Ambassador

Okay, so you weren't saying they were lazy on process, just not as aggressive on core count as you think they could've been.

Well, if you feel dual-channel memory is warranted for 4 cores, then when the aggregate compute throughput approximately doubles, why wouldn't you also double the memory bandwidth?

As for PCIe lanes, some people doing GPU compute actually need that much PCIe bandwidth, and you need memory bandwidth to feed those lanes. The main market for those CPUs is certainly as much workstations as it is gamerz.
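
To put numbers on the bandwidth-scaling argument, here is a small sketch. DDR4-2400 is an assumed example speed (19.2 GB/s per channel), and the core/channel pairings are illustrative:

```python
# Bandwidth-per-core arithmetic behind the "double the cores, double the
# channels" argument. DDR4-2400 is an assumed example speed.

channel_gbps = 2400e6 * 8 / 1e9   # one DDR4-2400 channel: 19.2 GB/s

for cores, channels in [(4, 2), (8, 4), (10, 4)]:
    bw = channels * channel_gbps
    print(f"{cores:2d} cores, {channels} ch: "
          f"{bw:5.1f} GB/s total, {bw / cores:4.1f} GB/s per core")

#  4 cores, 2 ch:  38.4 GB/s total,  9.6 GB/s per core
#  8 cores, 4 ch:  76.8 GB/s total,  9.6 GB/s per core
# 10 cores, 4 ch:  76.8 GB/s total,  7.7 GB/s per core
```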
 

InvalidError

Titan
Moderator

First, the hypothetical half-of-Moore's-Law CPU I'm describing would be a mainstream part - what would/should have happened to ~$300 CPUs had Intel not stagnated for the better part of 10 years - not an HEDT part. So it would sit on a cost-sensitive platform.
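
For concreteness, here is that hypothetical as arithmetic, assuming a quad-core mainstream baseline in 2011 (Sandy Bridge) and a 2-year doubling period for full-rate Moore's Law; both assumptions are just for illustration:

```python
# What "half of Moore's Law" implies for mainstream core counts, assuming
# a quad-core baseline in 2011 and a 2-year doubling period at full rate.

base_year, base_cores = 2011, 4

def cores(year, doubling_years):
    return base_cores * 2 ** ((year - base_year) / doubling_years)

for year in (2013, 2015, 2017):
    full = cores(year, 2)   # full-rate Moore's Law: doubling every 2 years
    half = cores(year, 4)   # half rate: doubling every 4 years
    print(f"{year}: full rate ~{full:.0f} cores, half rate ~{half:.0f} cores")

# 2013: full rate ~8 cores, half rate ~6 cores
# 2015: full rate ~16 cores, half rate ~8 cores
# 2017: full rate ~32 cores, half rate ~11 cores
```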

Memory-wise, if you look at 8-10 core Broadwell-E workstation benchmarks, most don't show all that much scaling going from dual to quad channel. Things may get a little different at 16+ cores; we'll see with ThreadRipper soon and i9 whenever that one launches. In mainstream computing, where data sets are typically smaller, the benefits from extra channels will be smaller still, and a mainstream platform cannot be burdened with significant costs that provide marginal benefits in most intended applications. The same goes for multi-GPU compute.

If AMD wanted to, it could squeeze 16 cores into a Sandy-sized die too. It would need to ditch the external fabric ports and do a Skylake-X-style cache shuffle and trim (double the L2 per core, halve the L3, for a 20% net reduction in total L2+L3 per CCX) to get there. Packing eight cores per CCX and merging their L3 shares should enable some further area savings and performance improvements too.
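
The 20% figure checks out against Zen's per-CCX cache sizes (4 cores x 512 KB L2 plus 8 MB shared L3); a quick sketch:

```python
# Checking the "20% net reduction" figure for a Skylake-X-style cache
# shuffle on one Zen CCX (4 cores x 512 KB L2, 8 MB shared L3).

cores = 4
l2_kb, l3_kb = 512, 8 * 1024

before = cores * l2_kb + l3_kb            # 2 MB L2 + 8 MB L3 = 10 MB
after = cores * (l2_kb * 2) + l3_kb // 2  # 4 MB L2 + 4 MB L3 = 8 MB

print(f"{before} KB -> {after} KB, {1 - after / before:.0%} smaller")
# 10240 KB -> 8192 KB, 20% smaller
```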

While AMD may look like a hero for bringing 16C32T to HEDT under $1,000, that is less than half-way to catching up with where things might have been had the market been properly motivated to keep moving. Of course, most home users having no foreseeable use for this much processing power doesn't help with said motivation where mainstream CPUs are concerned, especially when that level of processing power currently still sells for over 5X as much in the server space. GPUs are headed the same way, with newer chips designed for AI/datacenter first and gamers forced to pay increasingly more to get the newest "gaming" (neutered-compute) GPUs early.

While AMD may not catch up with half-rate Moore's Law, I suspect Ryzen/TR/EPYC won't be the last disruptive catch-up step.
 
