Intel's New Roadmap Revealed: 10nm Ice Lake In 2020, 14nm Cooper Lake 2019

Status
Not open for further replies.


Yeah, but here's the thing: this is all in theory, since no high-speed CPUs have been made on these processes to date. High-power CPU designs always give up some density for better leakage, active power, and performance. If we were talking about making SRAM, I might agree, since theory/white-paper specs usually hold up, but that is not the case for high-speed CPUs. For example, GloFo/IBM's design was aimed at high-power chips from the get-go, so it may already be closer to the density they will actually use, versus Intel's (unfabricated, so nobody really knows). I'm just saying we should take a wait-and-see approach before declaring one denser or better than the other until we see actual chips, because the specs you posted will not be what is used to make any Core i5/i7s or Ryzen 5/7s; all of the foundries use different densities for SRAM and for CPU logic.

 

bit_user

Titan
Ambassador

Then, why not do a case swap with your existing rig? At least you'll be able to enjoy the new case ...and keep it from mocking you.
 

FunSurfer

Distinguished
Any news on PCIe 4.0 support? (For using an 1180 Ti or 1280 Ti and an NVMe SSD with a mainstream MoBo that splits the PCIe lanes to x8/x8 when both slots are in use. Maybe 3.0 x8 won't be enough for an 1180 Ti.)
 

mlee 2500

Honorable


Well, that would be a bit like putting lipstick on a pig.

It's still a *GOOD* pig, more than sufficient for my current bacon needs, but probably not worth the effort of a transplant, especially since I keep kidding myself that Intel's going to deliver a processor worth upgrading to any day now....

Also I get emotionally attached to my hardware, and this one is like a comfortable longtime girlfriend. If I leave her, it's going to be for someone younger and hotter.

(well, maybe not hotter, but definitely cooler)
 


I expect around 2020 for both Intel and AMD. AMD has said they want to move to PCIe 4.0 when they replace AM4. I don't think Intel has said, but I would suspect Ice Lake is when Intel moves to PCIe 4.0, though they could do it sooner.

With that said, we won't have a single 1280 Ti or a single NVMe SSD pushing more than a 16-lane PCIe 3.0 slot can handle. So really, if there were more lanes of PCIe 3.0, there would be no bottleneck anytime soon. AMD teased a Z490 chipset but killed it; I wouldn't be surprised if something like that came out with Zen 2 Ryzen, which did have more lanes.
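For rough context on that claim, here is a back-of-the-envelope sketch using the published PCIe per-lane signaling rates (8 GT/s with 128b/130b encoding for 3.0); real-world throughput is somewhat lower after protocol overhead:

```python
# Approximate usable one-directional PCIe bandwidth in GB/s:
# per-lane rate (GT/s) * line-code efficiency / 8 bits-per-byte * lane count.
def pcie_gbs(rate_gt, lanes, encoding=128 / 130):
    return rate_gt * encoding / 8 * lanes

# PCIe 3.0 slot widths discussed above:
print(f"x16: {pcie_gbs(8, 16):.2f} GB/s")  # ~15.75 GB/s
print(f"x8:  {pcie_gbs(8, 8):.2f} GB/s")   # ~7.88 GB/s
```

Even an x8 link leaves several GB/s of headroom over what current GPUs actually stream over the bus, which is why halving the lanes tends to cost so little in games.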

 

bit_user

Titan
Ambassador

I guess you mean desktop, but this roadmap is for server. It looks like Cascade Lake will keep the Purley platform (socket LGA-3647), which is limited to PCIe 3.0, while Cooper Lake will launch on the Whitley platform (socket LGA-4189) with PCIe 4.0 support.

On the desktop, it's possible Intel could do a refresh of the current chipsets to add PCIe 4.0 support, but more likely they'll just wait until the generation after the 9000 series.
 

bit_user

Titan
Ambassador

@FunSurfer seems worried about a 2x 1180 Ti setup. I'd guess any performance impact will probably be limited to the low single-digit %, based on previous analysis.
 
Intel did not improve much at all after the 2013-2014 4th-gen i7s (Devil's Canyon): only minor things like DDR4, some extras such as M.2, and that failed Intel SSD (forgot the name). And benchmarking clearly shows 4th-gen single-core performance is the same as 8th gen (just missing 2c/4t), so I'm not really moving off 4th gen at all. I don't see the improvement.

Oh, and also: premium pricing for PCIe lanes..
 

mlee 2500

Honorable


On a shared *server* resource like those found in the cloud or other multi-user systems in the datacenter, the risk of someone peering into another user's or company's data by way of a shared cache is obvious.

For a desktop system it's a bit less obvious to me. I suppose a user application might leverage it to gain access to the same user's data that it's not supposed to have access to, but I cannot find even anecdotal evidence of this happening.

I'd be interested to hear about the specific desktop risk you're concerned with, and any real-world examples that have actually occurred.

 

InvalidError

Titan
Moderator

Side-channel attacks are highly timing-dependent. On servers, the machine may be handling certificates and other highly sensitive data hundreds of times per second, which gives an attacker many opportunities to attempt side-channel attacks. On a PC, passwords, certificates, and other sensitive data get processed only once every several minutes or longer, so side-channel attacks are orders of magnitude less likely to be successful.
 


Yeah, I understood. Anyone can move to an HEDT platform from Intel (well, the ones with more lanes) or AMD, and you will have plenty of PCIe 3.0 lanes. No single GPU is going to saturate an x16 PCIe slot, likely for a few more generations of GPUs. Don't get me wrong, PCIe 4.0 will be nice and 5.0 will be really nice, but it's not that PCIe 3.0 is holding most desktop/enthusiast builds back if you can move to an HEDT platform.
 

stdragon

Admirable


Meltdown was the "big one" and already patched at the OS level.

The other, Spectre (and there are many new variants being discovered as time goes on), is far more difficult to leverage. Unfortunately, it requires a BIOS update containing the latest CPU microcode, in addition to having mitigation enabled at the OS level.

The good news, at the desktop level at least, is that the most vulnerable vector, the web browser (Firefox, Chrome, the usual popular ones), is already patched against Spectre at the application level. But that's not to say malware containing the exploit couldn't arrive in an infected file or a worm on the network. The real scary part is that it could, in theory, scrape authentication from memory and go to town taking out portions of the network or other services. It's purely hypothetical and unlikely, but also exceedingly BAD if it turns out to be possible. So yeah, the threat is overblown with PoCs... until it's not! It's that threat that keeps network admins like myself up at night.
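As a practical aside (an illustrative sketch, not something from the post above): Linux kernels since 4.15 report per-CPU vulnerability and mitigation status under a sysfs directory, which is one quick way to see whether the microcode and kernel-side mitigations are actually in place:

```python
from pathlib import Path

# Linux 4.15+ exposes hardware-vulnerability status (spectre_v1, spectre_v2,
# meltdown, ...) as small text files under this sysfs directory.
def vuln_status(base="/sys/devices/system/cpu/vulnerabilities"):
    """Return {vulnerability_name: kernel-reported status}; {} on non-Linux."""
    p = Path(base)
    if not p.is_dir():
        return {}
    return {f.name: f.read_text().strip() for f in sorted(p.iterdir())}

for name, status in vuln_status().items():
    print(f"{name}: {status}")
```

On a patched box the spectre_v2 entry typically mentions the mitigation in use (e.g. retpoline or IBRS); "Vulnerable" means the BIOS/microcode or kernel update is still missing.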

I will say this, however: if the threat of Spectre emerges, I have no doubt it will find its way into a successful wave of ransomware. I found the link below interesting, as it contains some insights provided by SonicWall.

http://www.eweek.com/security/ransomware-attacks-spiked-in-first-half-of-2018-sonicwall-reports
 

bit_user

Titan
Ambassador

Moving to a i9 or TR isn't cheap, and comes with some performance compromises.

I currently have an aging HEDT, but I'm looking to replace it with a fast "mainstream" desktop. If I buy Intel, my only qualm is that the NVMe wouldn't be direct-connected to the CPU. I assume the difference would hardly be measurable, though.
 

stdragon

Admirable
Core count is somewhat of a bell curve in value. Dual-core doesn't really cut it these days, as the OS and apps such as browsers and games are multi-threaded... up to a point. Eventually, you'll have more cores than you really need. In addition, a process (or a single website) might be so complex that the only way to speed up execution is to throw more cycles at the pipeline (higher frequency).

IMHO, I'd say the best value now is a Core i5. If you wish to spend a little more, get the fastest-clocked i5 over a slower-clocked i7.
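The diminishing-returns point above can be put in rough numbers with Amdahl's law (a textbook model, not a measurement): if a fraction p of a workload parallelizes, the speedup on n cores is 1 / ((1 - p) + p / n).

```python
# Amdahl's law: ideal speedup on n cores when a fraction p of the work
# is parallelizable and the rest is strictly serial.
def amdahl_speedup(p, n):
    return 1 / ((1 - p) + p / n)

# With 80% parallel work, doubling cores past ~4 buys less and less:
for cores in (2, 4, 8, 16):
    print(f"{cores:2d} cores -> {amdahl_speedup(0.8, cores):.2f}x")
```

At p = 0.8 the curve flattens toward a hard ceiling of 5x no matter how many cores you add, which is the "more cores than you really need" region; past that, only frequency (or less serial work) helps.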
 

InvalidError

Titan
Moderator

Giving everything an x16-wide interface simply isn't practical.

Forget about GPUs. The reason PCIe 4.0/5.0 are getting pushed hard right now is NVMe SSDs, where PCIe 3.0 x4 is currently a severe bottleneck, and a higher lane count per drive would blow through even EPYC's 128-lane budget in no time when servers may have 10+ SSDs attached (e.g., mirrored or RAID 5 arrays).

There is no shortage of PCIe 3.0 x4 NVMe SSDs capable of saturating the interface, and the same will probably happen to PCIe 4.0 from launch, which is why PCIe 5.0 is being fast-tracked by the PCI-SIG and many equipment manufacturers are contemplating skipping 4.0 altogether. PCIe 4.0 is too little, too late for NVMe and will be one of the shortest-lived PCIe revisions yet.
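The lane-budget arithmetic behind that argument is easy to sketch (illustrative numbers only; EPYC's 128 lanes are from the post above):

```python
# Rough PCIe lane-budget check for a server full of NVMe drives.
TOTAL_LANES = 128  # EPYC's platform budget

def lanes_used(drives, lanes_per_drive):
    return drives * lanes_per_drive

# 10 drives at x4 each already consume 40 of 128 lanes, before GPUs,
# NICs, or inter-socket links; widening each drive to x8 doubles that.
print(lanes_used(10, 4), "of", TOTAL_LANES)  # 40 of 128
print(lanes_used(10, 8), "of", TOTAL_LANES)  # 80 of 128
```

Doubling per-lane bandwidth instead (4.0/5.0) keeps drives at x4 (or even x2) while raising throughput, which is exactly the trade-off being argued here.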
 

bit_user

Titan
Ambassador

You don't think deep learning or > 100 Gbps networking have anything to do with it?

AMD is rumored to be adopting PCIe 4.0 for both Vega 20 and 7 nm EPYC.
 

stdragon

Admirable
If you really need 10+ SSDs in RAID, most likely they'll sit behind a RAID controller. True, that controller would be the bottleneck limiting the potential throughput, but it would still achieve the goal of aggregation, presenting a single large volume. If you really do need the combined throughput of 10+ SSDs, at that point it might be easier to just use 256 GB+ of RAM for read/write caching.
 

InvalidError

Titan
Moderator

You aren't going to achieve remotely decent processing scaling on deep learning if your algorithm needs 40+ Gbps of bandwidth between nodes to do its thing; if your nodes can process data so fast that they require 100+ Gbps of network bandwidth, you may want to refactor your algorithm to increase the amount of local processing and reduce bandwidth. Servers are a fairly similar story: unless the server is doing little more than spitting out chunks of data from RAM, workloads rarely reach the point where a single server actually requires 100 Gbps in/out.

Server virtualization looks like the primary application to me: if a VM requires better storage and LAN performance or QoS than what the shared pool can provide, then PCIe 4.0/5.0 can afford dedicating NVMe drives and however many 10/25/40Gbps LAN ports (fail-over, multipath redundancy, load-sharing/LAG, etc.) to them without introducing major bottlenecks or obliterating the PCIe lane budget. Double the bandwidth, double the flexibility.
 

InvalidError

Titan
Moderator

With Intel's repeated delays and increasing number of in-between generations since Haswell, maybe sometime in 2021.
 