Intel's Future Chips: News, Rumours & Reviews


goldstone77

Distinguished
Aug 22, 2012
I wonder if it'll be enough to fend off TR3 though...

You can clearly see those are aimed at enticing people away from the Ry3900(X) and Ry3950X parts, but once TR3 arrives, it'll be another ball game, as the whole platform gets a proper lift in... everything.

Cheers!
Yeah, TR3 will trounce these 10th Gen parts; that is why they are trying to get them out before TR3 arrives, which, following the 3950X delay, is expected in November or later.
 
Yeah, TR3 will trounce these 10th Gen parts; that is why they are trying to get them out before TR3 arrives, which, following the 3950X delay, is expected in November or later.
The only "real" advantage Intel has, from what I've gathered, is full-throughput AVX-512 in these parts, but I wonder how much of an advantage it really is?

How big is the software ecosystem for these monster CPUs that actually puts to good use said instruction support?
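For reference, the kind of code that actually exploits it is hand-vectorized kernels along these lines (a minimal, purely illustrative sketch using AVX-512F intrinsics from <immintrin.h>; the function name is made up, and it assumes n is a multiple of 16 and that the compiler is targeting AVX-512F):

```cpp
#include <immintrin.h>
#include <cstddef>

// Illustrative only: add two float arrays 16 lanes at a time with AVX-512F.
// Assumes n is a multiple of 16 to keep the sketch short.
void add_arrays_avx512(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);                // load 16 floats from a
        __m512 vb = _mm512_loadu_ps(b + i);                // load 16 floats from b
        _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb));  // store 16 sums
    }
}
```

Outside of HPC, scientific libraries and a handful of codecs/renderers, I'm not sure how much consumer software is written like that.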

Cheers!
 

InvalidError

Titan
Moderator
According to the rumor mill, Intel has cancelled 10nm desktop processors until 2022, when we will see 7nm desktop products.
There were no 10nm desktop parts in Intel's leaked 2018-2021 roadmaps, so nothing to cancel there as they were not planned to begin with. Intel probably scrapped those years ago when it became clear that getting to commercially viable 10nm was going to take much longer (2-3 years longer) than expected.
 
According to the rumor mill, Intel has cancelled 10nm desktop processors until 2022, when we will see 7nm desktop products.
Take with a grain of salt.

EDIT: Source added (in German)
https://www.hardwareluxx.de/index.p...t-10-nm-plaene-fuer-den-desktop-komplett.html

Wouldn't be surprised. While Ice Lake does look to perform relatively well compared to Whiskey Lake, even with a clock speed disadvantage, in my mind it would make more sense to skip it given their very fast rollout of 7nm.

7nm should come along with the Golden Cove uArch, which should be an across-the-board performance increase. It's probably also not economical to go to 10nm for a year before a superior process tech is available.
 

InvalidError

Titan
Moderator
As I wrote a long time ago, desktop Icelake, should it ever happen at this point, will most likely be Broadwell II: a chip that may technically exist but is practically impossible to actually get, and that won't make sense anymore by the time people can get one because the next generation's launch will be imminent by then. Just like Broadwell, Icelake will be effectively exclusive to mobile, embedded and integrators.

From your article: "If Intel still has plans for 10nm desktop processors, they will likely succeed Rocket Lake-S, perhaps in late 2021", which is consistent with the lack of 10nm desktop parts on leaked roadmaps since the latest set I've seen only covers up to 2021. The update above the story also hints that Intel may intend to stretch the definition of 'desktop' by including 10nm server chips re-purposed for HEDT and mobile chips embedded in NUCs, AiO, pre-built, etc.
 
Yes, I noted that update as well. Pretty creative (well, not really; kind of lame, actually) if they include NUC products as desktop. HEDT I can somewhat see, because whether by original intent or not, many of those parts do make their way into desktop use, either through second-hand trickle-down or because they were bought with that use in mind in the first place. After re-reading that article, it's clear that it's more clickbait than anything else.
 
Yeah, Intel is only denying it to save face with investors. They won't like the idea of a product with that much research behind it not launching everywhere. However, in the tech world the best solution is to focus 10nm on high-margin areas (mobile, HPC/server) and then push it to lower-margin areas when they have more volume on the newer process. Short of core count, I honestly do not think Intel has much to worry about with even Zen 2+ or Zen 3. Intel and AMD are both at IPC walls that will need a lot of work to get around.

However, one possibility is that once 7nm is ready they push high-margin markets to it and then move the lower-margin ones to 10nm. So we may still get a 10nm mainstream desktop product. Not a sure thing, but it's possible.
 

InvalidError

Titan
Moderator
However, one possibility is that once 7nm is ready they push high-margin markets to it and then move the lower-margin ones to 10nm.
Maybe. Then again, FPGAs, server chips, networking/telecom chips, etc. have very long product life cycles (often 10+ years), so Intel could just dedicate whatever 10nm production capacity it has to maintaining those long-term support commitments instead of launching mainstream parts it cannot manufacture in sufficient volume with only two or three 10nm fabs and no plans to build or migrate any more, now that most of the 10nm war chest has been re-allocated to accelerating the migration to 7nm.

My bet is that 10nm will get mostly skipped for the mainstream aside from a few Broadwell-like cameos for people who are sufficiently desperate for an Intel-based upgrade.
 
Aug 29, 2019
More like the late 2000s; there's a reason why Intel went to servers first, as they would benefit the most from an implicitly parallel architecture. Intel would then ramp clock speed, get MSFT's server OSes to work out the architectural kinks, then eventually release to general consumers, probably around the time Windows 7 would have come around.

x86 and its descendant architectures have been tapped out for a very long time; why do you think Intel keeps coming up with dedicated instructions to do specific operations faster? And now that die shrinks are basically coming to an end, there really aren't any more ways to squeeze performance out of it. And adding more cores is only efficient up to a point; never mind that you're still limited by how fast an individual core is capable of going.
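POPCNT is a good concrete example of that trend; here is a rough, illustrative sketch (the function names are made up; __builtin_popcountll is a GCC/Clang builtin that typically compiles down to a single POPCNT instruction on CPUs that support it):

```cpp
#include <cstdint>

// The "generic" way: count set bits one loop iteration at a time.
int popcount_loop(std::uint64_t x) {
    int count = 0;
    while (x) {
        count += static_cast<int>(x & 1u);
        x >>= 1;
    }
    return count;
}

// The "dedicated instruction" way: a compiler builtin that usually becomes
// one POPCNT instruction when the target CPU has it.
int popcount_hw(std::uint64_t x) {
    return __builtin_popcountll(x);
}
```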

Of course, now that x86-64 is entrenched and has legacy SW support behind it, we're stuck with it until some other architecture can emulate x86 and x86-64 in hardware, which is going to be damn impossible. We're stuck with it now, probably for a few decades to come.

As a software developer who has been dealing with legacy software for the last 24 years, I would agree we are stuck with the architecture, but that is also what makes it so popular. In the large world of point-of-sale software, it is very hard to change the computers when you are dealing with 500+ installations, each with 8 or so computers.

The problem I see today is that it is very hard for computer and software manufacturers to think outside the box. The problem is not solved by just throwing more cores at it. What needs to happen is better instructions being added, not just cheap extensions of what exists, like the move from 32-bit to 64-bit. Yes, that served its purpose of giving applications more memory, but it also made software developers sloppy: in my earlier days I would count every clock cycle and make sure there were no memory leaks. In today's world the solution is just more memory and more cores.
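(Just to illustrate the leak point with a toy C++ sketch of my own; the struct and function names are made up:)

```cpp
#include <memory>

struct Record { int id; double total; };

// Sloppy: if we return early or an exception is thrown before the delete,
// the Record leaks.
void handle_sale_leaky() {
    Record* r = new Record{42, 19.99};
    // ... use r ...
    delete r;  // easy to forget on every exit path
}

// Careful: unique_ptr releases the Record automatically on every exit path.
void handle_sale_safe() {
    std::unique_ptr<Record> r(new Record{42, 19.99});
    // ... use r ...
}  // freed here, no leak possible
```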

I was very into computer architectures at my first job, and I heard early on about RISC vs. CISC. What I heard is that CISC manufacturers like Intel and AMD actually use some RISC technology internally to make instructions faster by allowing more of them to be executed per second. It has been hard to find a true performance comparison of RISC vs. CISC, but I believe RISC has a major problem in that it takes many more executed instructions for the same functionality; this may work well on some compiler benchmarks, but a major application or game will be at a disadvantage.

I think the key to getting outside the box is offloading some of the processing to other units. We are seeing this with GPUs, but it would be nice to see it at the I/O level too, including storage, networking and other communications.

Just some random thoughts from someone who has been developing in the computer business for a long while.
 

InvalidError

Titan
Moderator
It has been hard to find a true performance comparison of RISC vs. CISC
The distinction has been mostly dead for over a decade. RISC chips have become far more complex than they used to be thanks to instruction set bloat, and modern x86 CPUs are fundamentally RISC engines hidden behind an instruction decoder. With the uOP cache letting Ryzen and Netburst/Core CPUs bypass the instruction decoders altogether most of the time, RISC got robbed of its single biggest remaining advantage: no instruction decoding required for most operations.
 
My bet is that 10nm will get mostly skipped for the mainstream aside from a few Broadwell-like cameos for people who are sufficiently desperate for an Intel-based upgrade.

Agreed. Intel's bet that it could do 10nm without EUV has pretty much ended in "total failure". Intel should really take this chance to sit down and re-design or create a brand new CPU instead of iterating on Core again.
 

InvalidError

Titan
Moderator
Intel should really take this chance to sit down and re-design or create a brand new CPU instead of iterating on Core again.
Intel already has two generations' worth of new CPUs ready to go if it had the fab process to actually make them. As far as "brand new core architectures" go, there is no such thing to be had anymore. General-purpose CPU designers have re-evolved performance-oriented designs dozens of times over the past 20 years and they are all converging on nearly identical designs regardless of ISA; the only way to get any more performance out of individual cores on a given process is to more carefully balance details around the fab process' capabilities.
 
There are always new ways to do things. They simply haven't been thought of yet. As an engineer, I would think you would know that more than most. Lots of things that were improbable or not possible, until they were. Someday, there will be something so different from any architecture that is in use today that it will be "new" even if it originally derived from something existing. Maybe not anytime soon, but at some point, it will happen.
 

InvalidError

Titan
Moderator
As an engineer, I would think you would know that more than most. Lots of things that were improbable or not possible, until they were.
The only such thing I can think of is ditching conventional computing for quantum, which follows completely different rules. As far as conventional linear computing goes, I don't see how any designer could make any significant break from conventional architecture. Name one fundamental CPU core design improvement from the past 20 years that isn't something the DEC Alpha already had in the 90s and isn't natural progression like faster RAM and IO. The only one I can think of is the uOP cache, which the Alpha didn't need in the first place simply because it was a classic RISC ISA that didn't need x86-style complex decoding.

Practically all of the fundamental principles used in modern CPUs are 20+ years old, just scaled up as process tech and better design tools to model, simulate and manage complexity allow.

There is a very finite number of sensible ways to execute a sequential instruction flow (approximately one based on how similar all modern architectures are regardless of ISA) and I don't expect that to change for as long as programming remains fundamentally sequential.
 
The only such thing I can think of is ditching conventional computing for quantum, which follows completely different rules. As far as conventional linear computing goes, I don't see how any designer could make any significant break from conventional architecture. Name one fundamental CPU core design improvement from the past 20 years that isn't something the DEC Alpha already had in the 90s and isn't natural progression like faster RAM and IO. The only one I can think of is the uOP cache, which the Alpha didn't need in the first place simply because it was a classic RISC ISA that didn't need x86-style complex decoding.

Practically all of the fundamental principles used in modern CPUs are 20+ years old, just scaled up as process tech and better design tools to model, simulate and manage complexity allow.

There is a very finite number of sensible ways to execute a sequential instruction flow (approximately one based on how similar all modern architectures are regardless of ISA) and I don't expect that to change for as long as programming remains fundamentally sequential.

Good luck going quantum. The last time Intel tried to ditch x86 was Itanium, and no one wanted it.
 
Yeah, I definitely can't name anything like that. I don't have the first clue what it MIGHT be that changes that landscape, but eventually there will be something.

Heavier-than-air flight, nuclear energy, warm superconductors, force fields, invisibility and teleportation were all things considered to be impossible, and suggestions to the contrary were thoroughly debunked and laughed at, except that all those things are possible and exist now. So, I have no doubt this will be the same. Just because we can't or haven't thought of it THIS year, doesn't mean it WON'T be a thing, next year, or in ten years.
 

InvalidError

Titan
Moderator
Just because we can't or haven't thought of it THIS year, doesn't mean it WON'T be a thing, next year, or in ten years.
Finding a non-sequential way of executing sequential instructions would violate causality. That's like asking a physicist to break the laws of thermodynamics. Some things simply cannot be done no matter how much R&D you throw at them. If you want much faster software, you will have far better luck crossing your fingers that more developers can be bothered to write efficient massively multi-threaded code, giving CPU designers a reason to scale their products accordingly.
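(To be concrete about what I mean by that, here is a minimal, purely illustrative std::thread sketch that splits a trivially parallel loop across however many hardware threads the machine reports; the function and names are made up:)

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Scale a buffer in place using one worker per hardware thread.
// Illustrative only: real code would handle tiny inputs, thread pools, etc.
void scale_buffer(std::vector<float>& data, float factor) {
    unsigned workers_wanted = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (data.size() + workers_wanted - 1) / workers_wanted;
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < workers_wanted; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = std::min(data.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back([&data, factor, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                data[i] *= factor;       // each thread works on its own slice
        });
    }
    for (auto& w : workers) w.join();    // more cores => more slices, same code
}
```

Code written that way scales with core count more or less for free, which is exactly what would give CPU designers a reason to keep adding cores.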

Heavier-than-air flight was obviously possible, since thousands of bird, insect and even a few mammal species have been doing it for millions of years. Nuclear energy was also known to be possible for a while, since stars have been at it for billions of years; scientists just didn't have the tools necessary to determine how stars actually worked earlier, and we're still no closer to sustainable fusion than we were 50+ years ago, when scientists thought they would crack it within 30 years.
 
Yes, but you are saying that from a modern point of view, AFTER the fact, because we know it to be true now. 100 years ago, they would have said the same thing about those things that you are saying about this, and likely did. I get what you're saying, I just don't agree. I think there will be something, somewhere along the way, that makes the whole thing change. And it won't be anything anybody has thought of trying yet, or we'd have already heard something about it, so it will be something completely different. Or maybe we'll just be exactly where we are now in 50 years. Who knows. Not me.
 

Karadjgne

Titan
Ambassador
Something will have to give. Silicon as a base material is about used up. If Intel is having these kinds of issues now just trying to reach 7nm, wait until they try 5nm. They'll have to figure out a way for CPUs to run at 1.0 V or less just to stop physical burnout and bleeding, which won't happen anytime soon. So when game devs want to push BF10 on 16K monitors at 600Hz, it's going to take cores, not speed, to get the job done. Short code strings, fast IPC, multiple threads.

Or go backwards and take half the stuff back out of the CPU, like the memory controller, iGPU, etc., and free up I/O. Modular CPUs, multiple sockets for different performance levels and applications. Daughter boards. Infinity Fabric on a motherboard scale, not a CPU scale.

MSI already built a modular motherboard; I don't see why Intel couldn't do something similar with a CPU, but I'm betting that's already crossed the minds of some AMD folks.
 

InvalidError

Titan
Moderator
I think there will be something, somewhere along the way, that makes the whole thing change.
Some things never change because they are already about as good as they can get. The Harvard model, on which practically all CPUs and GPU CUs/SMs with an L1 I-cache are based, is 75 years old. If there were a significantly better way of executing programs in general-purpose CPUs, I think someone would have come up with it by now. AFAIK, there haven't even been so much as alternative proposals. ("Dataflow" is fundamentally Harvard with out-of-order execution on top. Can't blame Harvard for not getting that far ahead of its time, given they only had relays and tubes to work with.)
 

aldaia

Distinguished
Oct 22, 2010
Most, if not all, of the concepts used in modern microprocessors were developed many years ago, during the early years of computing. Since then we have basically rediscovered/adapted/refined them.

1959 - IBM 7030 Stretch
Interrupts, Memory Error Detection and Correction, Memory Interleaving, Memory Protection, Multiprogramming, Pipelining, Instruction prefetch, Operand prefetch, Speculative Execution, Branch prediction, Write buffer, Result forwarding

1962 - Burroughs D825
Symmetric MIMD multiprocessor (first 4-core computer ;-)

1962 - Atlas computer
Virtual Memory, paging

1964 - CDC 6600
Multiple functional units, out of order execution, multi-threading

1964 - Illiac IV
First SIMD computer

1964
IBM begins design of the Advanced Computer System (ACS), capable of issuing up to seven instructions per cycle (project scrapped in 1969).

1965 Maurice Wilkes
Cache memory

1967 IBM360/91
Dynamic instruction reordering, reservation station, register renaming

1968 IBM 2938 Array Processor
Precursor of the VLIWs of the '80s, which are the precursors of the Intel Itanium

1969
Work begins at Compass Inc. on a parallelizing FORTRAN compiler for the ILLIAC-IV called IVTRAN.
Honeywell delivers first Multics system (symmetric multiprocessor with up to 8 processors).

1971 - Intel 4004
Has none of the above advancements, but it's often cited as the first microprocessor (it's actually the second, but that is another story). Microprocessors would rediscover all of the above concepts during the '90s and 2000s, but in fact there is nothing new under the sun. Everything we know today in microarchitecture has its roots more than 50 years ago.
 

InvalidError

Titan
Moderator
Most, if not all, of the concepts used in modern microprocessors were developed many years ago, during the early years of computing.
And they are pretty obvious evolutions once you have the means of actually building them. None of these would have been realistically feasible before the transistor got perfected to a usable state in the mid-'50s, at which point computer scientists could try some new tricks out... and most of them were not practical to "re-discover" on microprocessors until ASIC technology could accommodate the extra transistors without negatively impacting performance (lower clocks) or disproportionately increasing cost, so most didn't get re-introduced until we got to 32 bits.
 