News RX 9070 XT and RX 9070 specs reportedly leaked — up to 4,096 SPs, 16GB VRAM, and 2.9 GHz boost

Considering these numbers of SPs, CUs, RAM and watts, the performance, if the leaked benchmarks are confirmed, is amazing! 👍

This is what a real generation-over-generation improvement looks like. Not doubling the SPs and wattage (and maybe the price...) to delude the masses into thinking something has improved. Also, the node is the same, so again, if the leaks are confirmed, this RDNA4 (in particular the XT version) will be the mid-to-high-end king this year (closer to the true high end for the XT...). That is what consumers have already shown they want to buy, just look at the Steam surveys...
 
Considering these numbers of SPs, CUs, RAM and watts, the performance, if the leaked benchmarks are confirmed, is amazing! 👍

This is what a real generation-over-generation improvement looks like. Not doubling the SPs and wattage (and maybe the price...) to delude the masses into thinking something has improved. Also, the node is the same, so again, if the leaks are confirmed, this RDNA4 (in particular the XT version) will be the mid-to-high-end king this year (closer to the true high end for the XT...). That is what consumers have already shown they want to buy, just look at the Steam surveys...
Also, to add to your optimism: given the stream processor count, the manufacturing process, and the continued use of GDDR6, these should be much cheaper to make than the current-gen RDNA3 top dies. This is to say, if AMD was able to discount the 7900GRE to about $600, then these may actually hit $400 territory at some point, which would be fantastic. I hope I'm not too far off the mark here.

For comparison, the 7800XT is ~346 mm² on N5 and this one will be ~390 mm² on N4. And the 6700XT was ~335 mm² on N7.
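As a quick back-of-the-envelope sketch of how those die sizes translate into dies per wafer (part of the cost argument above), the snippet below uses the usual gross-dies approximation for a 300 mm wafer. The 390 mm² figure is the rumored one from the leak; yield and per-node wafer pricing, which this sketch leaves out, are what would actually determine the final cost.

```python
import math

WAFER_DIAMETER_MM = 300  # standard 300 mm wafer

def dies_per_wafer(die_area_mm2: float) -> float:
    """Classic approximation for gross dies on a round wafer (ignores yield)."""
    radius = WAFER_DIAMETER_MM / 2
    return (math.pi * radius ** 2 / die_area_mm2
            - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))

# Die areas from the comparison above.
dies = {
    "6700 XT (~335 mm², N7)": 335,
    "7800 XT (~346 mm², N5)": 346,
    "9070 XT (~390 mm², N4, rumored)": 390,
}

for name, area in dies.items():
    print(f"{name}: ~{dies_per_wafer(area):.0f} gross dies per 300 mm wafer")

# Per-die cost would then be (wafer price for that node) / (good dies after yield),
# which is why node and die size together drive the manufacturing cost.
```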

Regards.
 
The non-XT pricing is going to be interesting. It looks pretty good, but AMD may price it too high in order to upsell buyers to the XT.

Could they have not offered a 9080XT with 16GB GDDR7 as a flagship?
They can't switch from GDDR6 to GDDR7 without a completely different memory controller, possibly more. And if it's not clearly faster than the 7900 XTX, then it probably doesn't need more memory bandwidth.

It would be funny if we see a 32 GB model though.
 
Could they have not offered a 9080XT with 16GB GDDR7 as a flagship?
AMD learned their lesson. The money-is-no-object consumers are always going to buy Nvidia, even if AMD is competitive in that sector. They have decided not to waste resources chasing the top of the market and instead to focus on taking market share in the second-best tier, where customers aren't as brand-loyal and are much more likely to choose the best price/performance option.
 
Considering all this... all I care about is the PRICE. It definitely looks nice, just like RDNA3 did. If they price it similarly badly (which I have confidence they will)... it won't sell; conversely, if they come in at something like $400-450, they will have a hard time keeping any stock on the shelves.
 
They can't switch from GDDR6 to GDDR7 without a completely different memory controller, possibly more. And if it's not clearly faster than the 7900 XTX, then it probably doesn't need more memory bandwidth.

It would be funny if we see a 32 GB model though.
Nvidia has done dies with both GDDR5 and GDDR6 support, so it is theoretically possible AMD could have done something similar and given it both controllers... but unless memory bandwidth is the bottleneck, the gains are small.

The 1650 came in GDDR5 and GDDR6 versions, where the GDDR6 one got 50% more memory bandwidth. Average performance increase was... 6.4%.

And compared to the RTX 5070 with GDDR7, AMD is only down 5% on bandwidth, so it's not like GDDR6 has put them at a huge bandwidth deficit, although Nvidia getting there with a smaller bus width and fewer chips might give them a production cost advantage.
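For anyone who wants to check the "~5% down on bandwidth" arithmetic, here is a small sketch. The 9070 XT configuration (256-bit bus, 20 Gbps GDDR6) is assumed from the leak; the RTX 5070 figures (192-bit, 28 Gbps GDDR7) are Nvidia's announced specs.

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# Assumed leaked config for the 9070 XT; announced config for the 5070.
rx_9070_xt = peak_bandwidth_gb_s(256, 20)   # 640 GB/s
rtx_5070 = peak_bandwidth_gb_s(192, 28)     # 672 GB/s

deficit = (1 - rx_9070_xt / rtx_5070) * 100
print(f"RX 9070 XT: {rx_9070_xt:.0f} GB/s")
print(f"RTX 5070:   {rtx_5070:.0f} GB/s")
print(f"AMD deficit: {deficit:.1f}%")   # ~4.8%, i.e. roughly the 5% mentioned above
```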
 
Nvidia has done dies with both GDDR5 and GDDR6 support, so it is theoretically possible AMD could have done something similar and given it both controllers... but unless memory bandwidth is the bottleneck, the gains are small.

The 1650 came in GDDR5 and GDDR6 versions, where the GDDR6 one got 50% more memory bandwidth. Average performance increase was... 6.4%.

And compared to the RTX 5070 with GDDR7, AMD is only down 5% on bandwidth, so it's not like GDDR6 has put them at a huge bandwidth deficit, although Nvidia getting there with a smaller bus width and fewer chips might give them a production cost advantage.
You can't with GDDR7, as it uses PAM3 and GDDR5 and 6 use PAM4 signaling. You'd have to build 2 completely different IMCs if you want to support both.

Regards.
 
AMD learned their lesson. The money-is-no-object consumers are always going to buy Nvidia, even if AMD is competitive in that sector.
That is true. But at least the 7900XTX was a decent competitor for the 4080... This time around, it looks like we only have competitors for the 70 series. If they were to release another competitor to the 80 series, I would seriously consider AMD. But as it is, they don't offer a product that caters to my needs/budget, which is unfortunate.
 
You can't with GDDR7, as it uses PAM3 and GDDR5 and 6 use PAM4 signaling. You'd have to build 2 completely different IMCs if you want to support both.

Regards.
GDDR5/6 is actually NRZ/PAM2; only GDDR6X is PAM4. But Nvidia has also made dies that support both GDDR6 and GDDR6X, and when I said it "is theoretically possible AMD could have done something similar and given it both controllers", I thought it was fairly clear in context that I was saying they would have needed to give it two memory controllers to support both types of VRAM.
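To make the distinction concrete, here is a tiny sketch of the bits-per-symbol math behind those signaling schemes, which is why a PHY and memory controller built for one can't simply drive another.

```python
import math

# Voltage levels per transferred symbol for each signaling scheme.
schemes = {
    "NRZ/PAM2 (GDDR5, GDDR6)": 2,
    "PAM3 (GDDR7)": 3,
    "PAM4 (GDDR6X)": 4,
}

for name, levels in schemes.items():
    print(f"{name}: {levels} levels -> up to {math.log2(levels):.2f} bits per symbol")

# GDDR7 in practice encodes 3 bits across 2 PAM3 symbols (1.5 bits/symbol),
# a little under the theoretical log2(3) ≈ 1.58.
```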
 
That is true. But at least the 7900XTX was a decent competitor for the 4080... This time around, it looks like we only have competitors for the 70 series. If they were to release another competitor to the 80 series, I would seriously consider AMD. But as it is, they don't offer a product that caters to my needs/budget, which is unfortunate.
The 5080 won't be much faster than the 4080. What is there to compete with, exactly, when both GPUs are so close in rendering and the 9070 XT is expected to perform like the 4080? Next gen is shaping up to be about multi-frame generation, and one 5080 won't ever be as good as a 9070 XT plus a dedicated GPU setup. Now compare a $1,000 5080 solution to a 9070 XT + 9060 for $800 and you'll see what I'm talking about.
 
If the 9070XT performs like a 7900GRE in raster and a 7900XTX in RT, then it might be worth $500 at most. The non-XT might sell well at $400; it would be a decent bit faster than the 7800XT at a lower price (and power) point.

The 5070 doesn't look like it's going to pull much ahead of the 4070 Ti in raw performance, and the 7900GRE is already faster than that outside of RT.
 
Considering these numbers of SPs, CUs, RAM and watts, the performance, if the leaked benchmarks are confirmed, is amazing! 👍

This is what a real generation-over-generation improvement looks like. Not doubling the SPs and wattage (and maybe the price...) to delude the masses into thinking something has improved. Also, the node is the same, so again, if the leaks are confirmed, this RDNA4 (in particular the XT version) will be the mid-to-high-end king this year (closer to the true high end for the XT...). That is what consumers have already shown they want to buy, just look at the Steam surveys...
The node went from TSMC N5 to N4. Nothing massive by any means, but it does provide a small bump in PPA.

I'm thinking the cost could definitely be extremely competitive -- maybe a $499 launch MSRP? If FSR4 is really good, AMD is back in the running. Get devs back on board and then release a higher-end card each generation. I mean heck, AMD likes to do these resets, if the past is anything to go by.
 
The 5070 will sell like hotcakes with the fake frames... everyone wants 320x240 resolution at 280fps.
I mean you jest, but think about how many people have their minds blown when others talk about DLSS, even though they shouldn't use it unless they have to. Personally I hate every single one of these technologies; the latency and artifacts on all of them are not great. The only times I've been forced to use them recently are when playing Remnant II on an Arc A770 16GB I have at my in-laws', and on my laptop with a GTX 1650. XeSS generally worked a little better than FSR 3 for me on those cards, and that game is generally forgiving on the latency side, but it has also made some twitchier boss fights more of a chore than they would have been otherwise.

I played with DLSS when I was on my RTX 2070 and 3080, but the "extra performance" wasn't worth the latency and artifact trade-off. Maybe DLSS 4 and FSR 4 will be better, and I'm sure we will get to a point where they are amazing and an integral part of every GPU, but we aren't there yet and we won't be for a generation or two. Native rasterization performance is still king, for now.
 
Could they have not offered a 9080XT with 16GB GDDR7 as a flagship?
Still not enough CUs to be able to chew through the bandwidth of 256-bit GDDR7. Now, GDDR6X, yeah, probably. GDDR7 is in kind of short supply and priced high -- only Nvidia can command that kind of price premium today.
 
I think the issue of not chasing the high end most likely comes down to RDNA having a problem with scaling: something in its internal makeup doesn't like scaling up. RDNA 4 is a soft fix of RDNA 3, and my assumption is that it may be because some of the resources on the GPU die are shared, so scaling creates a bottleneck. That's my theory, anyway.