News: Intel details 14A performance and new 'Turbo Cells' that unlock maximum CPU and GPU frequency

I don't know about orders of magnitude, but then I'm no expert. It seems to me like power density and fabrication costs are going to be very big challenges.

New materials could be a game changer, though. That's where I'm pinning most of my hopes. Although, eh... I don't know if I really care that much whether computers get any faster or not.
I think neuromorphic chips with spiking neural network architectures will adapt extremely well to 3D, since they don't require full activity at all times and intrinsically have low power consumption. And what do you know... real brains are also 3D.

But new materials may help classical computers make that transition, and there will be obvious optimizations, such as further shrinking the distance between logic and memory. Based on projections of carbon nanotube-based 3DSoC, a 100-1000x improvement in performance is plausible.

On what to do with such performance, we may see previously unfathomable new applications emerge. But if you don't care about raising the bar, you'll probably like lowering the bar, e.g. putting the performance of today's top-of-the-line workstations inside a Pi Zero-like chip.

I get the sentiment about not caring about computers getting faster, but it applies better to our current situation. A Ryzen 9 9950X is about 90% faster than a Core i7-6700K in ST, and 7.4x in MT (PassMark). It's a good improvement, but not life-changing, especially in anything not multi-threaded, and you can easily use that old processor for almost everything people do today. Having said that, we probably won't see sudden huge gains out of nowhere without fierce competition, and we probably shouldn't expect monolithic 3D consumer chips before 2040.
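
If anyone wants to sanity-check those ratios, here's a minimal sketch; the PassMark scores below are approximate values plugged in for illustration, so substitute current numbers from cpubenchmark.net:

```python
# Approximate PassMark scores (assumed/rounded, not authoritative).
scores = {
    "Core i7-6700K": {"st": 2570, "mt": 8950},    # 2015-era Skylake flagship
    "Ryzen 9 9950X": {"st": 4810, "mt": 66500},   # 2024 Zen 5 flagship
}

old, new = scores["Core i7-6700K"], scores["Ryzen 9 9950X"]
print(f"ST uplift: {new['st'] / old['st'] - 1:.0%} faster")  # ~87%, i.e. "about 90%"
print(f"MT uplift: {new['mt'] / old['mt']:.1f}x")            # ~7.4x
```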
 
So, they plan on leaving the 6.2 GHz i9-14900KS in the dust?

FWIW, the numbers quoted for performance & efficiency increases are usually based on a simple reference design; for a long time, I think TSMC used an Arm Cortex-A7 core. It's hard to predict exactly how those figures will translate into metrics of actual products made on these nodes.

In the case of Arrow Lake, it seems like a big part of what held it back was the SoC architecture. So, even though the process node and microarchitecture were both substantially improved, those got kneecapped by a SoC design that prioritized other things above performance.
NVM.

The subtitle of the article is "I. Am. Speed." The capitalization and full stop after each word, versus the original Lightning McQueen quote, overemphasize that this is to be the ultimate in the category of speed.

It was to that statement that I replied, addressing the article's author.

If you feel he has guessed wrong, pinging me doesn't help. I understand that a redesign of the architecture is likely by the time this tapes out, so comparing it with an entirely different arch, or with current ones, is nothing more than speculation.

That was theirs to improve, and if they've done so, then that's the result.

Clocks alone are far from equating with performance: deep superscalar IPC at a slightly slower clock would surpass a faster-clocked processor without it. On the other hand, while efficiency is useful in IoT or mobile, most would prefer to burn a bit of juice for a substantial increase in performance, without making everything upstream obsolete.
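
To make that concrete, here's a rough sketch treating performance as IPC × clock; both IPC figures are invented for illustration, not measured values from any real core:

```python
# Crude model: performance scales roughly as IPC * frequency.
# (Ignores memory stalls, turbo/thermal behavior, etc.)
def perf_gips(ipc: float, ghz: float) -> float:
    return ipc * ghz  # billions of instructions retired per second

wide_slow   = perf_gips(ipc=6.0, ghz=5.5)  # deep superscalar core, lower clock
narrow_fast = perf_gips(ipc=4.5, ghz=6.2)  # narrower core, 14900KS-like clock

print(f"wide core   @ 5.5 GHz: {wide_slow:.1f} GIPS")    # 33.0
print(f"narrow core @ 6.2 GHz: {narrow_fast:.1f} GIPS")  # 27.9
# The slower-clocked, higher-IPC design wins despite the clock deficit.
```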

Written to defy translation or LLM-coached replies.
 
PowerDirect is the second iteration of Intel's backside power delivery, and on this schedule it will come to market at about the same time as TSMC's first iteration of backside power delivery.
TSMC's first-gen BSPD is the most advanced type. There are three levels of BSPD, and Intel's is the mid-tier: there is a less complex approach and a more complex approach. The latter is TSMC's preferred method, which is why they aren't bringing it out until A16.
 
I think neuromorphic chips with spiking neural network architectures will adapt extremely well to 3D, since they don't require full activity at all times and intrinsically have low power consumption. And what do you know... real brains are also 3D.
Yeah, good points. I'm still not very interested.
; )

But new materials may help classical computers make that transition, and there will be obvious optimizations, such as further shrinking the distance between logic and memory. Based on projections of carbon nanotube-based 3DSoC, a 100-1000x improvement in performance is plausible.
First, that's definitely talking about parallel performance, not sequential. Just want to point that out.
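
To put a number on why that distinction matters, here's a quick Amdahl's-law sketch; the 5% serial fraction is just an assumed figure for illustration:

```python
# Amdahl's law: overall speedup is capped by the serial fraction,
# no matter how large the parallel speedup gets.
def amdahl_speedup(parallel_speedup: float, serial_fraction: float) -> float:
    return 1 / (serial_fraction + (1 - serial_fraction) / parallel_speedup)

for s in (100, 1000):
    print(f"{s}x parallel, 5% serial -> {amdahl_speedup(s, 0.05):.1f}x overall")
# 100x parallel, 5% serial -> 16.8x overall
# 1000x parallel, 5% serial -> 19.6x overall
```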

Second, I've been hearing such fairy tales for I don't know how long. I'll start caring when there's an actual roadmap to deploying such technologies at scale. Last time I saw anything from IMEC, it had nothing remotely like that, and that roadmap went out like 10 years.

On what to do with such performance, we may see previously unfathomable new applications emerge.
Well, you'd better get busy fathoming!
: D

But if you don't care about raising the bar, you'll probably like lowering the bar, e.g. putting the performance of today's top-of-the-line workstations inside a Pi Zero-like chip.
A faster Pi would be nice. The Pi 5 is finally getting to the point of being a usable desktop.

we probably won't see sudden huge gains out of nowhere without fierce competition, and we probably shouldn't expect monolithic 3D consumer chips before 2040.
We'll need them by 2042, so you'd better get to work!

[Image: Battlefield 2042 cover art]

 