Discussion: What kind of GPUs will be out in 2030?

Nephern

Proper
Sep 20, 2022
What name will it have? This includes model name, manufacturer, and whether it's Nvidia, AMD, etc. (maybe another competitor).

How many gigs of VRAM will it have?

How fast will it be? This includes base clock speed, boost clock speed, and memory clock speed.

What kind of cooling system will it have? Will it be water-cooled, air-cooled (how many fans?), or passively cooled?

How much power will it draw?

How much will it cost?


(credited to jnjnilson6's CPU discussion)
 

jasonf2

Distinguished
In eight years there is a pretty significant chance that you are going to have seen at least one if not two disruptive technology shifts that make the questions you asked pretty much impossible to answer.
 

Nephern

Proper
Sep 20, 2022
Well, I understand and agree with your statements. Also, ARM SoC chips will most likely be widespread by then. Graphics cards will be a thing of the past.
 

Eximo

Titan
Ambassador
Well, we already have multi-chip GPUs. You can expect that trend to continue. The absolute final nail in the coffin for SLI and Crossfire: just pile more chips on, no need for a second card unless you want more compute performance.

The process nodes targeted for GPUs might start hitting a brick wall pretty soon, so parallelism makes sense. They might have a few process-node shrinks left in them, but they won't be able to keep that level of power output up. They will have to scale back on frequency and power output per chiplet. Spreading out into smaller chiplets will make cooling easier and allow for a lot more computational cores, hopefully enough that they can really scale back power without losing performance.

You can look at Apple and their use of the latest TSMC nodes, and the GPU power they can squeeze out of such a low-wattage chip.

That pretty much covers things through 2025. Beyond that, more and more ray tracing and AI/Deep Learning performance?

I would expect at least one doubling of memory density by 2030. So your top end cards might have 48/64GB of RAM and enterprise cards double that. They've been moving pretty fast these last two generations in that regard.
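That guess can be put into rough numbers. A minimal sketch, assuming a 24 GB flagship in 2022 as the starting point; both the starting capacity and the doubling cadence are illustrative assumptions, not a roadmap:

```python
# Toy projection of flagship consumer VRAM under the post's guess of
# "at least one doubling of memory density by 2030". The starting
# point (24 GB in 2022) and the cadence are illustrative assumptions.
def project_vram(start_gb, start_year, end_year, years_per_doubling):
    doublings = (end_year - start_year) // years_per_doubling
    return start_gb * 2 ** doublings

# One doubling over eight years: 24 GB in 2022 -> 48 GB in 2030
print(project_vram(24, 2022, 2030, years_per_doubling=8))  # 48
# A faster four-year cadence would give two doublings: 96 GB
print(project_vram(24, 2022, 2030, years_per_doubling=4))  # 96
```

With one doubling you land at 48 GB, in line with the 48/64 GB guess above; a faster cadence gets you closer to today's enterprise-class capacities.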
 

Eximo

Titan
Ambassador
Apparently AMD is pursuing 3D V-Cache for GPUs as well. That may let them get away with keeping GPU memory capacity lower for a while longer, or at least using lower-speed memory. Not sure that would have an impact on pricing; cache is the most expensive thing when it comes to silicon real estate, but stacking means it could be on a different node entirely.
 

mjbn1977

Distinguished
Considering the current price trend, they will be quite expensive...

But price aside, they will most likely be very much AI-driven. I think this whole DLSS and XeSS thing, maybe even FSR someday, will get really big going forward...
 

mjbn1977

Distinguished
Apparently AMD is pursuing 3D V-Cache for GPUs as well. That may let them get away with keeping GPU memory capacity lower for a while longer, or at least using lower-speed memory. Not sure that would have an impact on pricing; cache is the most expensive thing when it comes to silicon real estate, but stacking means it could be on a different node entirely.
Cache is not a new invention. They all have cache. But a bigger cache can help in certain games. The problem with a larger cache is that it needs to fit somewhere on the die. Space on the die is limited, and the larger the chip, the lower the yield. Combined with the drastically increasing prices for wafers on the smaller manufacturing nodes, it is not as simple as just going with more cache. The more space you use for cache, the more space you lose for compute units, ray tracing, and other things. What good is a large cache if you do not have enough compute units to feed it? It's all about finding the best possible combination on the die.
 

Eximo

Titan
Ambassador
Cache is not a new invention. They all have cache. But a bigger cache can help in certain games. The problem with a larger cache is that it needs to fit somewhere on the die. Space on the die is limited, and the larger the chip, the lower the yield. Combined with the drastically increasing prices for wafers on the smaller manufacturing nodes, it is not as simple as just going with more cache. The more space you use for cache, the more space you lose for compute units, ray tracing, and other things. What good is a large cache if you do not have enough compute units to feed it? It's all about finding the best possible combination on the die.
These would be stacked chips, so the area is a completely separate silicon chip and would affect yields in a completely different way. All they have to make room for is the vias/interface. If they do it right, they could keep on stacking until thermal limits stop them.

And it is as simple as going for more cache when you are just adding on top. That is what makes the Ryzen 5800X3D still competitive in gaming despite being a generation behind.
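For scale, the 5800X3D example works out as a simple tally (cache sizes from AMD's published specs for that part):

```python
# The 5800X3D's L3 as a simple tally: the V-Cache die is stacked on
# top of the CCD, so the extra 64 MB costs no additional base-die area.
base_l3_mb = 32      # L3 on the base CCD (same as a plain 5800X)
stacked_l3_mb = 64   # L3 added by the stacked V-Cache die
total_l3_mb = base_l3_mb + stacked_l3_mb
print(total_l3_mb)   # 96
```

Tripling the L3 without growing the base die is exactly the "just add it on top" point being made here.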
 

kanewolf

Titan
Moderator
What name will it have? This includes model name, manufacturer, and whether it's Nvidia, AMD, etc. (maybe another competitor).

How many gigs of VRAM will it have?

How fast will it be? This includes base clock speed, boost clock speed, and memory clock speed.

What kind of cooling system will it have? Will it be water-cooled, air-cooled (how many fans?), or passively cooled?

How much power will it draw?

How much will it cost?


(credited to jnjnilson6's CPU discussion)
There won't be any. A new microorganism will contaminate ALL chip making environments. No new chips will be made from 2029 - 2035 ...
 

mjbn1977

Distinguished
These would be stacked chips, so the area is a completely separate silicon chip and would affect yields in a completely different way. All they have to make room for is the vias/interface. If they do it right, they could keep on stacking until thermal limits stop them.

And it is as simple as going for more cache when you are just adding on top. That is what makes the Ryzen 5800X3D still competitive in gaming despite being a generation behind.
The cache he is talking about is usually not stacked, since it is on the same die... it's different from video RAM...

Same with CPUs... the "3D" cache is on the actual CPU die, part of the silicon that has everything else. The 5800X3D is nothing more than a clever byproduct of AMD's Epyc line, which has larger caches: good for gaming, but bad for productivity.
 

mjbn1977

Distinguished
But that wasn't the cache I was referencing. Specifically, 3D V-Cache for AMD GPUs.

Not that they haven't also increased normal cache with the 7000 series; they certainly did.
Steve from Gamers Nexus just made an architectural video about those cards, and I really think the 3D V-Cache is on the actual main die... let me check. We're talking about Level 3, correct? That is usually what the marketing name "3D cache" refers to.
 

mjbn1977

Distinguished
Steve from Gamers Nexus just made an architectural video about those cards, and I really think the 3D V-Cache is on the actual main die... let me check. We're talking about Level 3, correct? That is usually what the marketing name "3D cache" refers to.
Yeah... you are right, Eximo. The "Infinity Cache" is on the chiplets around the main die; the L2 cache is on the main die (GCD).

View: https://youtu.be/9iEDpXyFLFU
 

Eximo

Titan
Ambassador
Yeah... you are right, Eximo. The "Infinity Cache" is on the chiplets around the main die; the L2 cache is on the main die (GCD).

View: https://youtu.be/9iEDpXyFLFU
That is the current design coming to market soon with the Infinity Cache, which is what I was referring to when I said they did increase it. They are now talking about a possible refresh that would add 3D V-Cache to the MCDs, for even MORE cache.

Each MCD has 16 MB; they would probably do something like add 32 MB to each with another layer, if we go by the convention they established with the 5800X3D. It also makes a lot of sense in terms of area. They are already dealing with the chiplets, so there would be some increased loss when stacking fails, but each MCD could be tested before being assembled into a package with a GCD.
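The arithmetic behind that guess can be sketched out. This assumes six MCDs at 16 MB each (the Navi 31 configuration discussed above); the +32 MB-per-layer figure is the speculation from the post, not a confirmed spec:

```python
# Sketch of the hypothetical stacked-MCD refresh discussed above,
# assuming 6 MCDs at 16 MB each, plus a speculative 32 MB per MCD
# for each stacked cache layer.
mcds = 6
base_per_mcd_mb = 16
added_per_layer_mb = 32

def total_infinity_cache(layers):
    return mcds * (base_per_mcd_mb + layers * added_per_layer_mb)

print(total_infinity_cache(0))  # 96  (current design)
print(total_infinity_cache(1))  # 288 (one stacked layer per MCD)
```

One stacked layer would triple the total Infinity Cache, mirroring what the 5800X3D did for CPU L3.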

A fascinating departure from monolithic designs. I know Nvidia has plans for multi-chip GPUs, just nothing on the market yet besides some of their existing Grid cards. But those are more NVLink-bridged than truly integrated.
 
