Discussion: Let's Guess What CPUs Will Be In 2030

jnjnilson6

Distinguished
How would CPUs in 2030 perform? Would the difference in performance between 2020 and 2030 be the same as the one between 2010 and 2020? How many cores and what nanometer technology would CPUs use by then? Could the difference between 2020 and 2030 be bigger than that between 2000 and 2010? At how many gigahertz would CPUs, in your opinion, run in the year 2030? Who do you think would be ahead at that point in time - Intel or AMD? How much faster would 2030 CPUs be than Ryzen 7000 and 13th Gen Intel Core? I know you cannot make a thorough estimate from this point in time, yet it would be cool to turn this into a guessing game where everyone shares their opinion.

So I would be glad to hear what you think. It is wonderful to go into the future that the darkened starlight on opulent magazines may cast its light into the beautiful chasms of processing power and illuminate the decades to come... Like a beatific visit to an old friend after a long separation. Some things are the same, however, there are new factors in the disposition of you both and new wrinkles on your countenance where there was nothing before; and wan smiles and a deepening of the retina from the years; in starlight and the night, illuminations of a previously vanished glory. All in all - an interesting symposium! :)

I hope you have a cool time writing!
 
  • Like
Reactions: alceryes

Eximo

Titan
Ambassador
2030 might be a bit of a stretch for predictions. But most of the planning and expectations for the next few generations of lithography are already pretty well mapped out.

Currently everyone is using some variation on FinFET 3D transistors. It is expected that 3nm, 2nm (20A), and 1nm (10A) are possible using current EUV techniques.

Gate-All-Around (or RibbonFET, as Intel calls its version) is the expected replacement to get to some of those smaller sizes. It has already been prototyped and validated to some extent.

Optical computing has been in the news again, building logic into crystal structures. But lab prototypes and proofs of concept take a long time to reach mass production, if they ever do. There is a lot of momentum behind the silicon integrated circuit industry, and a shift to something new is cost-prohibitive until they truly hit a brick wall.

An alternative to silicon would be a diamond substrate, but making perfect diamond wafers is a very energy-intensive process, so I don't think that will happen. Carbon nanotubes have also shown promise, but mass production is still a problem. As of now, they basically have to induce carbon nanotubes to form, then use an electron microscope to locate and manipulate them for experiments.

Quantum computing is still a ways off for the consumer. It would likely show up as part of an encryption scheme for the wider quantum internet and standard processors would be what you actually interact with. But I do not expect this by 2030, probably closer to 2050 or more.

What we can say for certain is that we are going to see more integration. Chiplets and universal interconnects have been agreed upon, and 3D stacking is already being used. So I think process nodes will become longer-lived, with various nodes used to construct a single product (AMD is already doing this; the I/O die is on a larger node than the processing cores).

For mobile processors, I imagine we will soon see something like an SoC that incorporates even the system memory onto a single substrate. Not great for upgradability, but it will certainly make for power efficiency and low latency.

As to performance and GHz: node shrinks generally bring lower clock speeds but greater density and power efficiency, so CPUs will probably drop back down to around 5GHz. GPUs have been creeping up in frequency of late, but the same rules apply there. Performance-wise, if we stick to the typical 10-15% increase per generation, we would be looking at roughly a doubling in performance from 2022 to 2030, in a like-for-like manner.
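A quick compound-growth sketch of that estimate (assuming roughly one generation per year, which is my assumption rather than something stated above):

```python
# Back-of-the-envelope check of the "10-15% per generation" figure above.
# Assumption (mine, not from the post): roughly one CPU generation per year,
# so about 8 generations between 2022 and 2030.
GENERATIONS = 8

for per_gen_gain in (0.10, 0.15):
    cumulative = (1 + per_gen_gain) ** GENERATIONS
    print(f"{per_gen_gain:.0%} per generation x {GENERATIONS} generations "
          f"-> ~{cumulative:.1f}x overall")

# Prints ~2.1x at 10% and ~3.1x at 15%: "roughly a doubling" (or a bit more)
# from 2022 to 2030, like for like.
```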

However, the trend over the last few years has been to push power to extreme limits, so there is a somewhat artificial inflation happening that can't really continue. Processors and GPUs can get larger, have more cores, etc., but we are already pretty much at the limit of what you should cram into a personal computer. It might be better to look at laptops and mobile for comparison.
 
  • Like
Reactions: jnjnilson6
However, the trend over the last few years has been to push power to extreme limits, so there is a somewhat artificial inflation happening that can't really continue.
That's just a byproduct of the processes getting better and better.
They can handle that much more power and that much more heat, so why should they artificially restrict them?!
Isn't that what people have accused Intel of doing for many years?
If you look at the leaks for both the 7950X and the 13900K, performance at 65W is ginormous for both.
The Steam Deck and all the other similar devices are just the beginning of the PC becoming portable, and not with mobile chips or the APUs of the Steam Deck and co., but with actual PC CPUs.

Desktop performance in 8 years might not be very different; there are only so many transistors they can fit into each core and only so much power a normal house can supply.
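For the household-power point, here is a rough sketch of the ceiling a single outlet imposes (assuming a typical North American 15 A / 120 V branch circuit; other regions allow more headroom):

```python
# Rough ceiling on what one wall outlet can feed a desktop PC, as a sanity
# check on the "only so much power a normal house can supply" point above.
# Assumptions (mine, not from the post): North American 15 A / 120 V circuit,
# the usual 80% derating for continuous loads, and ~90% PSU efficiency.
VOLTS, AMPS = 120, 15
continuous_watts = VOLTS * AMPS * 0.80   # ~1440 W continuously from the wall
dc_watts = continuous_watts * 0.90       # ~1296 W actually delivered to components

print(f"Continuous wall limit:      {continuous_watts:.0f} W")
print(f"Usable DC power (~90% PSU): {dc_watts:.0f} W")
```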
 
  • Like
Reactions: jnjnilson6

Eximo

Titan
Ambassador
That's just a byproduct of the processes getting better and better.
They can handle that much more power and that much more heat, so why should they artificially restrict them?!
Isn't that what people have accused Intel of doing for many years?
If you look at the leaks for both the 7950X and the 13900K, performance at 65W is ginormous for both.
The Steam Deck and all the other similar devices are just the beginning of the PC becoming portable, and not with mobile chips or the APUs of the Steam Deck and co., but with actual PC CPUs.

Desktop performance in 8 years might not be very different; there are only so many transistors they can fit into each core and only so much power a normal house can supply.

More or less what I was getting at: they are pushing the GHz at the expense of power. If you look at the laptop chips, efficiency between Intel and AMD is basically identical.

Just a matter of scale really. Mobile devices will always underperform desktop devices as long as we allow more power and cooling on the desktop. You need only look at something like the Mac Pro to get a feel for what the standards could be. To save power, all things could become NUCs. We wouldn't really be much worse off than where we were a few years ago on desktops anyway. If they drove the prices down, a NUC/laptop with an external GPU might be the way to go. But since powerful desktops are always going to be a minority, it doesn't really matter in the grand scheme.

I'm actually surprised we still have CPU sockets and entry level desktop parts. There was talk some years back of Intel going BGA for everything. I imagine the server industry convinced them otherwise.

Another big trend might be the death of the low power GPU. If Intel and AMD get into a serious fight about integrated graphics performance, we might see a more rapid increase in die area and completely eliminate the $100-150 GPU.
 
  • Like
Reactions: jnjnilson6
Just a matter of scale really. Mobile devices will always underperform desktop devices as long as we allow more power and cooling on the desktop.
That they will, but now there is a chance of getting a whole lot of performance at mobile power. If we get today's i5, max single/multi-core boost and all, on a Steam Deck-like device in a few years, it would be amazing.
I'm actually surprised we still have CPU sockets and entry level desktop parts. There was talk some years back of Intel going BGA for everything. I imagine the server industry convinced them otherwise.
No, Intel just introduced the J-series of CPUs BGA'd to the mobo and everybody lost their... sheet, fearing that they would do it with every CPU.
It was just another case of fubu.
 
  • Like
Reactions: jnjnilson6

Eximo

Titan
Ambassador
I wouldn't agree with that being the source of the fear. Intel was heavily pushing their NUC idea. Everything but the high end on laptops is BGA already. There was every indication they could more tightly integrate with their motherboard suppliers by forcing BGA chips on them. It comes with some advantages too: instead of testing for thousands of potential hardware combinations, there would only be a limited set.

I actually bought an AMD E-350 motherboard (Bobcat cores, the predecessor of the Jaguar chips in the consoles); it was my HTPC for a while. I have a few of those Celerons too.

Look at the late-model Xbox and PlayStation: an AMD BGA chip, GDDR5/GDDR6 memory, all in a small, cheap, and efficient package. Replace it every 5-7 years, which is what most people do anyway.

Intel's new NUCs are kind of that original modular computer idea (though the latest does have a CPU socket, RAM, and M.2 slots). The CPU module plugs into a PCIe bus, and then you plug a GPU into the bus. That could easily be expanded to include additional slots, and CPUs could come on cards again. Remember slot processors? That was all because they hadn't yet integrated the cache into the CPU.
 
  • Like
Reactions: jnjnilson6

Eximo

Titan
Ambassador
Well, 128 cores might not be far off; there are Epyc server chips with that now. Intel is pushing 16 efficiency cores to the consumer on a single die today, with multi-chiplet coming relatively soon. AMD could do another node shrink or two and start making 16- or 32-core CCXs. If they are small enough and efficient enough, 4 of them on a consumer-grade chip isn't out of the question.
 
  • Like
Reactions: jnjnilson6
Well, 128 cores might not be far off; there are Epyc server chips with that now. Intel is pushing 16 efficiency cores to the consumer on a single die today, with multi-chiplet coming relatively soon. AMD could do another node shrink or two and start making 16- or 32-core CCXs. If they are small enough and efficient enough, 4 of them on a consumer-grade chip isn't out of the question.
The main problem with companies doing that is how to market it to consumers.
There is nothing that even demands a quad core, and that's not going to change for a long time.
And by demand I don't mean that more cores don't make things faster; it's just that nothing really needs them. If you are not pressed for time, you can run anything you want on 4 cores.
 
  • Like
Reactions: jnjnilson6

jnjnilson6

Distinguished
The main problem with companies doing that is how to market it to consumers.
There is nothing that even demands a quad core, and that's not going to change for a long time.
And by demand I don't mean that more cores don't make things faster; it's just that nothing really needs them. If you are not pressed for time, you can run anything you want on 4 cores.
That's very true! Of course, there's the small percentage of software engineers, animators and wild gamers who couldn't possibly perform their work on 4 cores only, but as you said, that percentage is very small and not at all representative of the mainstream user. And there already are more advanced processors and systems for these enthusiast users even today. So your point of view is very much correct. We should also keep in mind that if software were written better, demand for CPU power and RAM would significantly diminish; but that's one of the ills of the generation - badly written software. And while we may attain higher speeds and more cores, there's sadly no remedy for badly written software, because people write it for money instead of harboring a lifelong aspiration and unwavering zest. On a more positive note, though, it's good Intel and AMD are finally competing wildly like in the good old Pentium 4 days; that brings more power to the user at a lower price, something that six or seven years ago was far beyond the market's reach.
 

Eximo

Titan
Ambassador
I'm kind of hoping for a big pile of efficiency cores. Just the 16 efficiency cores is exactly what I would use in an HTPC. It would be nice if they made a socketed version of the low-end mobile parts; 2 P-cores and 6 E-cores would be pretty good. Right now I just have my i7-4770K underclocked to a single-core boost of 3.6GHz, but it more commonly sits at 3-3.2GHz when doing something. I randomly bought an A380 and installed it yesterday; haven't tried much with it yet.
 
  • Like
Reactions: jnjnilson6
I'm kind of hoping for a big pile of efficiency cores. Just the 16 efficiency cores is exactly what I would use in an HTPC. It would be nice if they made a socketed version of the low-end mobile parts; 2 P-cores and 6 E-cores would be pretty good. Right now I just have my i7-4770K underclocked to a single-core boost of 3.6GHz, but it more commonly sits at 3-3.2GHz when doing something. I randomly bought an A380 and installed it yesterday; haven't tried much with it yet.
Well, even Celerons got a second core at some point... maybe within the next 8 years we'll get a Pentium or even an i3 with a few E-cores.
 
  • Like
Reactions: jnjnilson6

jnjnilson6

Distinguished
I'm kind of hoping for a big pile of efficiency cores. Just the 16 efficiency cores is exactly what I would use in an HTPC. It would be nice if they made a socketed version of the low-end mobile parts; 2 P-cores and 6 E-cores would be pretty good. Right now I just have my i7-4770K underclocked to a single-core boost of 3.6GHz, but it more commonly sits at 3-3.2GHz when doing something. I randomly bought an A380 and installed it yesterday; haven't tried much with it yet.
Back in the day when I was using my i7-3770K, I was able to overclock it to 5.0 GHz, and it scored just a little bit more on Cinebench R11.5 than the i7-3930K at stock. It ran stable for hours under heavy workloads at that high clock too. I guess that's still a pretty good bit of performance today, though I no longer use that machine. I used a Corsair H110 water cooler.

I suppose you can do a lot of things with the i7-4770K today; let's say it sits in the middle between everyday performance and the enthusiast tier. Haswell used to be very good back in the day. Hope it serves you well for many more years to come.
 
  • Like
Reactions: Nephern

jnjnilson6

Distinguished
I have an AMD Athlon from 2005 and I'm thinking about my Ryzen 7 5700G and how far it's come.

Technology has gone so far.
The Core i9-12900K performs like a Pentium 4 HT @ 241.3 GHz.

We have witnessed a beautiful increase of between 78 and 79 times in the performance of the best available microprocessors over the last 20 years. The Core i9-12900K today is 78.8 times faster than the Pentium 4 HT @ 3.06 GHz (the fastest Pentium 4 in 2002). And as stated above, you'd need a Pentium 4 HT clocked at the incalculably high speed of 241.3 GHz to match the Core i9-12900K's generous performance.

*Core i9-12900K Score - 2956 points
Pentium 4 HT @ 3.06 GHz Score - 37.5 points
Pentium 4 HT @ 1.0 GHz Score - 12.25 points
*Pentium 4 HT @ 241.3 GHz Score - 2956 points

Performance points taken from UserBenchmark (64-core avg. multi-core mixed speed): https://cpu.userbenchmark.com/Compare/Intel-Pentium-4-306GHz-vs-Intel-Core-i9-12900K/m614vs4118
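A quick check of the arithmetic behind those figures, using the quoted UserBenchmark scores (the 241.3 GHz figure assumes performance scales linearly with clock speed, which is of course a simplification):

```python
# Reproduce the comparison above from the quoted UserBenchmark scores.
# Assumption (mine): performance scales linearly with clock speed, which real
# CPUs do not quite do, so the "equivalent GHz" figure is only illustrative.
i9_12900k = 2956.0     # Core i9-12900K, 64-core avg. multi-core mixed speed
p4_3ghz = 37.5         # Pentium 4 HT @ 3.06 GHz
p4_per_ghz = 12.25     # Pentium 4 HT @ 1.0 GHz

speedup = i9_12900k / p4_3ghz              # ~78.8x
equivalent_clock = i9_12900k / p4_per_ghz  # ~241.3 GHz

print(f"i9-12900K vs. Pentium 4 HT @ 3.06 GHz: ~{speedup:.1f}x faster")
print(f"Pentium 4 HT clock needed to match:    ~{equivalent_clock:.1f} GHz")
```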
 
  • Like
Reactions: Order 66

jnjnilson6

Distinguished
Can we talk about how AMD hasn't changed its logo since 1995?
I remember how I had a few HP Compaq NX6125 laptops in 2005 with the AMD Sempron 3300+ at 2.0 GHz. It was a good processor, and I could open up about 40 tabs, many of them on YouTube, and get good performance with 896 MB of RAM (896 because 128 MB were dedicated to the integrated GPU - the ATI Xpress 200M). For the same thing today you'd need a CPU with at least 2 cores and a 2.8 GHz clock speed and a good 8 to 10 GB of RAM. Just shows you how things have changed. About the logo, I do think it is cool, but then again, the Intel logo has been progressively becoming cooler and more modern with time. It seems AMD's logo was designed to be timeless, because it still looks modern to this day.
 
  • Like
Reactions: Nephern

Eximo

Titan
Ambassador
Back in the day when I was using my i7-3770K, I was able to overclock it to 5.0 GHz and it scored just a little bit more on Cinebench R11.5 than the i7-3930K at stock. It did run stable for hours doing hard tasks at that high clock too. I guess that's a pretty good bit of performance today too, despite I no longer use that machine. I used Corsair H110 water cooling.

I suppose you can do a lot of things with the i7-4770K today; let's say it is in the middle between everyday performance and the enthusiastic streak. Used to be very good back in the day, the Haswell. Hope it does you well for many more years to come.


Ehh, not sure how much longer I can stretch the 4770k. It works fine, but that build is mostly about efficiency, and there are far better options available. Mini-ITX motherboard prices have been holding me back for the most part. CPUs like the 12100 or even 10105 are quite cheap, but the motherboards are not. That and I would need more DDR4 or 5.

Intel has sort of promised that the low-end 13th Gen parts will have E-cores, so I might wait for those to launch and then take another stab at it. Since they do the E-cores in clusters of 4, I imagine we might see an i3-13100 with 4 E-cores and possibly an i5-13500 with 8. I suspect the 13400 would also have only 4, but I could be surprised. Depends on how good their yields are.
 
  • Like
Reactions: jnjnilson6
Maybe not in 2030, but sometime in the next ~50 years we will start to unlock the true power of 3D chip fabrication at the nm (if not Å) level.
CPUs will resemble metal spheres encased in some kind of dual-purpose contact pad/heatsink. The jump in processing power will be greater than the difference between today's top CPUs and the original Intel 4004.
 
  • Like
Reactions: jnjnilson6

Eximo

Titan
Ambassador
A sphere would be pretty tough to get heat out of; it maximizes volume versus surface area. I can see that working potentially. I don't recall the name of the equation, but there is a formula for getting computational work out of thermal energy. If each layer of the sphere used the waste heat of the previous layer as power, that might work. It would be a way to maximize computational power in as small a space as possible. But that concept is more about being powered from the innermost layer... I suppose inductive power might work.
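The equation being reached for here might be Landauer's limit, which bounds how little energy an irreversible bit operation can dissipate at a given temperature; a minimal sketch, offered as a guess rather than the poster's confirmed reference:

```python
import math

# Landauer's limit: the minimum energy dissipated per irreversible bit
# operation is E_min = k_B * T * ln(2). This is a guess at the formula
# alluded to above; the poster did not name it.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # roughly room temperature, K

e_min = k_B * T * math.log(2)   # joules per bit erased, ~2.87e-21 J
print(f"Landauer limit at {T:.0f} K: {e_min:.2e} J per bit")

# For scale: a 100 W budget spent only on Landauer-limited operations would
# allow on the order of 3.5e22 bit operations per second.
print(f"Ideal bit operations per second at 100 W: {100 / e_min:.1e}")
```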

IBM did research on a chip that was powered, cooled, and connected by liquid. It was quite large in terms of nm scale, but it worked. I can see them stacking dies with interconnects and running power-carrying fluid between layers as a first step. If your whole ground plane and power plane are a fluid, it should be pretty easy to get the heat out. The big question there would be longevity; we are talking tiny microchannels.
 
  • Like
Reactions: jnjnilson6
A sphere would be pretty tough to get heat out of; it maximizes volume versus surface area. I can see that working potentially. I don't recall the name of the equation, but there is a formula for getting computational work out of thermal energy. If each layer of the sphere used the waste heat of the previous layer as power, that might work. It would be a way to maximize computational power in as small a space as possible. But that concept is more about being powered from the innermost layer... I suppose inductive power might work.

IBM did research on a chip that was powered, cooled, and connected by liquid. It was quite large in terms of nm scale, but it worked. I can see them stacking dies with interconnects and running power-carrying fluid between layers as a first step. If your whole ground plane and power plane are a fluid, it should be pretty easy to get the heat out. The big question there would be longevity; we are talking tiny microchannels.
For closeness/speed of connectivity and computational density the sphere is it.
There are definitely some hurdles that our current technology can't overcome though. Give it time.
 
  • Like
Reactions: jnjnilson6

Nephern

Prominent
Building off of the sphere heatsink idea, maybe a block that opens up and goes around it, increasing the contact area.

I think ARM SoC chips will be out by then, now that I think about it; Apple has already done it in the form of the M1 Ultra chip.

Both Intel and AMD have ARM licenses but say they won't be using them any time soon. I think in the next 3-10 years the only thing available will be an ARM CPU. Even Intel has said that the current x86 platform is not as efficient as ARM chips could be, but they say that at the moment the x86 platform is still more efficient.
 
  • Like
Reactions: jnjnilson6