News AMD exec sarcastically teases RX 90700.05XTXT Max GPU with 320GB of VRAM and '1.2 Gigawatts' power supply requirement

I know he's poking fun at things, but I can't help feeling that with a (lot) bit of work, if they doubled the CU count of the 9070 XT to 128 CUs, it could be a monster.

Yeah, I know it's not that simple, but I'm gonna dream
 
Guess we will all need a personal thorium reactor to power our GPUs from here on out!
Wasn't the Mr. Fusion invented like 3 years ago in that timeline?

It also feels like they announced this particular card about a month and a half early, but maybe they are planning a real announcement for the beginning of April.
 
Here's the thing: why not allow the board partners to decide if the 9070 XT deserves 32 GB?
Because if AMD releases a 9080 XT with 24 GB of RAM, people will complain that the 9070 XT had more memory. Hence chip manufacturers decide how much VRAM goes on a card. I also don't understand how many driver-level changes need to be made to accommodate/optimize for higher memory.
Also, from what I understand, it's not the AIBs that buy the memory. The chip manufacturer buys the memory and sends it to the AIB along with the chip.
 
They can't double the number of CUs on a monolithic die; the chip is already 490 mm^2 (bigger than the 4080's 436 mm^2), and the maximum die size TSMC can process is 750 mm^2.
Initially the highest end was the 80 series; then they introduced the 90 series, which was two GPUs on the same board communicating via Nvidia SLI or AMD CrossFire.
With current graphics card sizes, you should be able to squeeze three GPUs onto the same board.
[Image: NVIDIA GeForce GTX Titan Z PCB]
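On the die-size point, here's a rough back-of-envelope in Python, using the figures quoted above; the 60% CU-array share is purely my own assumption for illustration:

```python
# Back-of-envelope: what would a doubled-CU monolithic die look like?
# All inputs are illustrative assumptions, not official figures.

die_area_mm2 = 490        # die size quoted in the post above
reticle_limit_mm2 = 750   # TSMC maximum quoted in the post above
cu_array_share = 0.60     # ASSUMPTION: fraction of the die occupied by the CU array

cu_area = die_area_mm2 * cu_array_share   # area spent on CUs today
other_area = die_area_mm2 - cu_area       # memory controllers, I/O, media, etc.

# Doubling CUs roughly doubles the CU-array area; the rest is (optimistically) held fixed.
doubled_die = other_area + 2 * cu_area

print(f"Estimated doubled-CU die: {doubled_die:.0f} mm^2 "
      f"(limit {reticle_limit_mm2} mm^2, fits: {doubled_die <= reticle_limit_mm2})")
```

Even with everything that isn't CUs held constant, the estimate lands past the quoted reticle limit, which is the point being made.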
 
And this is why Nvidia is kicking AMD's a… in the GPU game… too busy making jokes while they should be making a better product. Even the Instinct MI300 AI accelerators, while competitive against Hopper H100s, are still behind Blackwell.
 
There isn't any risk of that specific scenario. No 384-bit card this generation, and no 3 GB modules since GDDR6 is being used.
I wonder if GPUs could use the 4 GB GDDR6W modules to get 32 GB if they use a bus switch to keep each RAM chip at 32-bit. GDDR6W is weird in that it's 2×2 GB modules in the same package using a 64-bit bus. However, I do not know if this is even possible.
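Quick sanity math on that idea (the 256-bit bus and per-device figures below are assumptions for illustration; I'm not claiming any of these parts exist in these configurations):

```python
# Simple capacity math for a GDDR memory subsystem:
# capacity = (bus width / per-device interface width) * per-device capacity

def total_vram_gb(bus_width_bits: int, device_io_bits: int, device_gb: int) -> int:
    """Number of memory devices that fit on the bus, times capacity per device."""
    devices = bus_width_bits // device_io_bits
    return devices * device_gb

# ASSUMED baseline: 256-bit bus, 2 GB GDDR6 chips on 32-bit channels -> 16 GB
print(total_vram_gb(256, 32, 2))   # 16

# GDDR6W as described above: 4 GB packages, but each exposes a 64-bit interface,
# so only 4 fit on the same 256-bit bus -> still 16 GB...
print(total_vram_gb(256, 64, 4))   # 16

# ...which is why the question is whether each package could run at 32 bits instead:
print(total_vram_gb(256, 32, 4))   # 32 (hypothetical; unclear if the parts allow it)
```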
 
Only time will tell, but I think AMD is being a bit smarter with this GPU release.

They tempered expectations a while back, saying that they weren't competing in the top-tier segment.
We've had a few leaks supposedly showing the RX 9070 XT landing somewhere very close to the RTX 5080 in straight rasterization.

Supposedly the new 9000-series cards drastically improve RT performance, finally bringing AMD up to parity with NVIDIA (just RT, not DLSS/FSR or frame generation).

Price and whether or not the above is true will decide AMD's 9000-series fate. If they are very aggressive with pricing, and the performance turns out to be as good as or better than the leaks, the 9070 XT may turn out to be AMD's 1080 Ti moment. Wouldn't that be something.
 
Technically, if a card had mid-range xx70-series specs from either team in terms of core clocks, shader units, etc., but had more VRAM (24 or 32 GB), would that be enough to run games at MEDIUM settings in 4K? What I mean to say is, are the VRAM numbers the limiting factor in playing at a higher resolution with lower settings, since higher-res textures require more VRAM? I wonder how different a game would look between 2K High/Ultra and 4K Medium.
Could be an interesting niche for a graphics card, since VRAM isn't as expensive as bigger and better cores. Something different instead of pursuing lower res with higher settings.
 
Currently, 16GB is more than enough for medium quality settings at 4K in most (if not all) AAA games. Only a few of the latest AAA titles at high (or even ultra) settings utilize over 16GB, even at 4K.

VRAM is only the limiting factor when the VRAM buffer is completely exhausted AND the GPU's rasterization performance is enough to make good use of the extra VRAM. Hardware Unboxed made a very informative video recently showing what happens to performance once a GPU's VRAM is exhausted - highly recommended watch. In the past, some GPU manufacturers have doubled the amount of VRAM on cards for no other reason than as a gimmick to sell more cards, with the comparatively low raster performance of the card negating any uplift the extra VRAM might have provided.

IMO, a >16GB RX 9070 XT would be absolutely useless to a gamer if all the other specs (memory speed, bus width, etc.) remain the same. It may have some niche uses in AI/LLM/SD or more memory-intensive production apps, though.
 

I was just thinking about it in the sense that, right now, between the popular gaming resolutions - 1080p, 2K, 4K, probably some madlads at 8K, and all the widescreen resolutions in between - the difference in the number of pixels is huge compared with the difference between popular resolutions, say, 10 years ago. Obviously the texture sizes go up proportionately with these.

The cards nowadays seem to be targeting high settings at 1080p for the xx60 and xx70, 2K for the xx80, and 4K for the xx90. I was just wondering out loud if having more VRAM would help a lower-tier card run at a higher resolution but at a lower quality setting, with the increased pixel density making up for that somewhat.
 
Where hardware capabilities are concerned, there is a MAJOR gap between 4K and 8K. Some madlads may launch games in 8K just to say they can do it, but I very much doubt they actually play games in 8K. In best-case scenarios, 8K may be playable (above what I would call the 30 FPS floor) in some older AAA titles, but they'll still have issues with artifacts, instability, and crashes.

I think the xx90, xx80, and the upcoming 9070 XT are all targeting 4K gaming. More VRAM could potentially help lower-tier cards perform better, but only if they had some more raster power 'in the tank,' so to speak. Historically, this has not been the case.
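To put the resolution gaps in numbers, here's a quick sketch; the 4 bytes per pixel is just an assumption for a single uncompressed RGBA render target, purely for scale:

```python
# Pixel counts and a single-buffer size estimate at common gaming resolutions.
resolutions = {
    "1080p": (1920, 1080),
    "1440p (2K)": (2560, 1440),
    "4K": (3840, 2160),
    "8K": (7680, 4320),
}

BYTES_PER_PIXEL = 4  # ASSUMPTION: one uncompressed 8-bit RGBA target, for scale only
base = 1920 * 1080

for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name:>11}: {pixels / 1e6:5.1f} MP, {pixels / base:4.1f}x 1080p, "
          f"~{pixels * BYTES_PER_PIXEL / 2**20:6.1f} MiB per buffer")
```

Render targets scale with resolution like this (8K is 4x the pixels of 4K and 16x 1080p), but texture memory mostly tracks the quality setting rather than the output resolution, which is part of why extra VRAM alone rarely turns a mid-range card into a 4K card.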
 
