News AMD estimates of Radeon RX 9070 XT performance leaked: 42% – 66% faster than Radeon RX 7900 GRE

Then why not $300? Surely that's an even bigger winner?

I think folks are getting their hopes way too high, and setting themselves up for disappointment. First, if $750 was deemed by all and sundry to be a "good" price for the 5070 Ti, and the 9070 XT can match or slightly exceed that, asking for a 33% discount--50% off given the Ti's going street price--does fall into the wishful thinking category.

Second, there won't be a reference card, so pricing is entirely up to the AIBs. "Winning market share" (for AMD) isn't on the AIBs' menu. Given the current GPU crunch, their most likely route is the same one they're taking with Nvidia cards: one "base" SKU with a small allocation, and the rest OC models with markups.

Third, indications are that a tariff surcharge is already in effect for the 5070 Ti (plus an extra 10% to take advantage of the GPU crunch, so +20% altogether). So, whatever MSRP AMD puts out, it's reasonable to expect similar price inflation for RDNA4.

All that said, let's play the "guess the MSRP" game.

Going by the TPU numbers (in Owen's video) plus some guesswork, the 9070 XT should slightly outperform the 5070 Ti. The 9070, given its closer positioning to the 9070 XT than the 5070's to the 5070 Ti, should substantially outperform the 5070. Also factor in the extreme shortage of midrange GPUs at the moment.

Then, my SWAG is that the 9070 will be priced on par with the 5070 at $550 MSRP; the 9070 XT will be $650 to $700. These would give AMD a substantial $/perf advantage. Given the current environment--with no Nvidia cards at MSRP--this should net sales wins for AMD.

Numbers above are for MSRP. As said, I expect MSRP models to be instantly OOS, and most available AIB models (in the US) to be at least 10% higher to account for the tariff, and more likely closer to 20%, as with the 5070 Ti.
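As a quick sketch of what those markups would look like at my guessed MSRPs (hypothetical numbers, nothing confirmed):

```python
# Hypothetical street prices at the guessed MSRPs, with 10% and 20% markups.
for msrp in (550, 650, 700):
    print(f"${msrp} MSRP -> ${msrp * 1.10:.0f} (+10%) to ${msrp * 1.20:.0f} (+20%)")
```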

That's a very bad take. AMD can indeed price the 9070 XT around $550--based on reported packaging prices, N3 yields, and known/speculated TSMC pricing for AMD--and still have roughly a 20% margin on it. Considering Nvidia is leaving partners with barely 10% margins, AMD partners must be salivating at the prospect of AMD allowing them to go nuts with special variants of this chip.

How about joining us in reality? The launch price is irrelevant because the market will determine the price. AMD could launch these at $150 and almost no one would pay that price, because scalpers will clean house on anything perceived as a great value. The only chance gamers have of landing a good-value card is getting lucky, or buying at the end of a product's life when it is being replaced. In the US, the 9800X3D is still selling for about $100 over MSRP.
I'm so sorry Intel and nVidia have conditioned you to expect the worst all the time with launches. I hope you can get better over time :)

Also, the 9800X3D is back at MSRP in most regions. The USA is, well, getting price hikes thanks to you-know-who.

Regards.
 
There were definitely points over the past couple of years when you could get an RTX 4000-series card, including even the 4090, for near MSRP.


Bad example. Arrow Lake's poor gaming performance created an unprecedented surge in demand for that model. AMD wasn't ready for it.
For the millionth time, it's NOT about MSRP. Gamers universally trashed the 40-series pricing outside of the 4090. The cards still spent most of their life above MSRP. The market will determine the price of a product. If no one wants them (AKA AMD GPUs), they go on discount below MSRP; if they are in demand (Nvidia GPUs), they will sell for above to way above MSRP. Gamers live in their own world and think they can dictate the market by complaining. With few exceptions, they can't, because they don't buy enough cards and they are too cheap compared to the markets competing with them for cards.
 
That's a very bad take. AMD can indeed price the 9070 XT around $550--based on reported packaging prices, N3 yields, and known/speculated TSMC pricing for AMD--and still have roughly a 20% margin on it. Considering Nvidia is leaving partners with barely 10% margins, AMD partners must be salivating at the prospect of AMD allowing them to go nuts with special variants of this chip.


I'm so sorry Intel and nVidia have conditioned you to expect the worst all the time with launches. I hope you can get better over time :)

Also, the 9800X3D is back at MSRP in most regions. The USA is, well, getting price hikes thanks to you-know-who.

Regards.
If you took a moment to stop worshipping at the altar of AMD, you would see that CPU prices are so high because AMD is the one that drove them up there, not Intel.
 
Honestly, if the new card can do more with less, that's impressive. Essentially, I think the reason they haven't gone with a high end is that the chiplet design might not work well on RDNA 4, since it's assumed this is a monolithic die. And because it's more simplified, that's why we are seeing larger gains. Dropping the chiplet design removes the complexity, and I'm expecting power to come down on these cards--either that, or they are going to push these midrange parts to their limit.
 
I managed to copy the table out of their HTML source, which enabled me to copy & paste it into Excel. From there, I could compute the following speedups on RT vs. non-RT games:

RX 9070 (non-XT) vs. RX 7900 GRE:

Category   | 1440p Ultra | 4K Ultra
Raster     | +17%        | +19%
Raytracing | +26%        | +26%



RX 9070 XT vs. RX 7900 GRE:

Category   | 1440p Ultra | 4K Ultra
Raster     | +33%        | +37%
Raytracing | +50%        | +53%


In both cases, we can clearly see the bigger speedup is in RT, which is in line with what has been revealed so far: that RDNA4 mostly improves in the areas of RT and AI.
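For anyone who wants to reproduce this without Excel, here's a minimal sketch of the arithmetic in Python (the FPS values are hypothetical placeholders, not the actual TPU data):

```python
# Compute percent speedups of a new card vs. a baseline, per category.
# FPS values below are hypothetical placeholders, not the actual TPU data.
baseline_fps = {"raster_1440p": 100.0, "raster_4k": 60.0,
                "rt_1440p": 50.0, "rt_4k": 30.0}
new_fps      = {"raster_1440p": 133.0, "raster_4k": 82.0,
                "rt_1440p": 75.0, "rt_4k": 46.0}

for category, base in baseline_fps.items():
    speedup = (new_fps[category] / base - 1) * 100
    print(f"{category}: +{speedup:.0f}%")
```

With the real per-game tables in place of the placeholders, this is exactly the computation behind the percentages above.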
If these numbers are real, this card competes with the 4080... and is probably a little behind the 5070 Ti.
 
I suppose. However, to give buyers a reason to buy a new product, it seems that the improvement over the current generation leader would have value.
I get that for halo card buyers, but that's not who AMD is targeting.

If they offer something really good relative to what the bulk of gamers currently have, and it's both affordable AND available (and, ahem, NOT missing any ROPs), then they've got serious potential.
 
I managed to copy the table out of their HTML source, which enabled me to copy & paste it into Excel. From there, I could compute the following speedups on RT vs. non-RT games:

RX 9070 (non-XT) vs. RX 7900 GRE:

Category   | 1440p Ultra | 4K Ultra
Raster     | +17%        | +19%
Raytracing | +26%        | +26%



RX 9070 XT vs. RX 7900 GRE:

Category   | 1440p Ultra | 4K Ultra
Raster     | +33%        | +37%
Raytracing | +50%        | +53%


In both cases, we can clearly see the bigger speedup is in RT, which is in line with what has been revealed so far: that RDNA4 mostly improves in the areas of RT and AI.
I know there are some title differences between AMD's list and TPU's, but there are shared titles; barring any outliers, and assuming the card's performance is consistent, the 9070 XT's performance puts it right below the 5070 Ti at 1440p and below the regular 4080 at 4K for raster.

[Attached chart: TPU average FPS at 2560x1440]
 
"AMD claims that the upcoming Radeon RX 9070 XT is 42% – 168% faster...
performance gains reach 164% –168%, again according to the numbers published by VideoCardz."

It is not 168% faster but 68% faster at most (the 9070 XT is 168% of the 7900 GRE's performance; refer to the VideoCardz chart).
Thank you for pointing this out.
This is not the first time; I find it embarrassing that a website like this continues to confuse percentages.
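A quick worked example of the distinction, with made-up numbers:

```python
# "168% of" vs. "168% faster": the mix-up being called out above.
baseline_fps = 100.0
new_fps = 168.0                      # the new card scores 168% OF the baseline
faster_pct = (new_fps / baseline_fps - 1) * 100
print(f"{faster_pct:.0f}% faster")   # -> 68% faster, not 168%
```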
 
Isn't this the same thing as Nvidia estimating the "5070 will offer the same performance as the 4090"? The GPU memory bandwidth doesn't support the claim, as the 9070 is still using 20 Gbps GDDR6 on a 256-bit bus. That is 640 GB/s of memory bandwidth: respectable, but nowhere near the 960 GB/s of the XTX. It does beat out the 7900 GRE, which is only 576 GB/s, and the 7800 XT at 624 GB/s.

For comparison, the 5080 uses 256-bit 30 Gbps GDDR7 for 960 GB/s of memory bandwidth, the 5070 Ti is 896 GB/s, and the 5070 is 672 GB/s. And while memory bandwidth isn't everything, it does tend to define the ballpark in which a card operates. This is why Nvidia uses a ridiculously large memory bus on its halo product but then severely restricts everything down the chain.
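All the bandwidth figures above come from the same simple formula; a minimal sketch:

```python
# Memory bandwidth (GB/s) = bus width (bits) / 8 bits-per-byte * per-pin data rate (Gbps).
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(256, 20))  # RX 9070 XT, GDDR6 -> 640.0
print(bandwidth_gb_s(256, 30))  # RTX 5080, GDDR7   -> 960.0
print(bandwidth_gb_s(192, 28))  # RTX 5070, GDDR7   -> 672.0
```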

A final note: those numbers mixed in RT performance, which should have been a huge red flag. Rasterization and ray-tracing performance need to be separated, as they are very different workloads, especially since RT relies on specialized hardware instructions that might not even be present in previous generations. Lumping them together allows those specialized RT enhancements to artificially raise the average "reported" performance increase. Expect the real numbers to be much lower on release day.
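To illustrate the averaging point with made-up numbers (borrowing the raster/RT splits computed earlier in the thread):

```python
# Hypothetical per-category gains, loosely based on the 9070 XT splits above.
raster_gains = [33, 37]   # % faster in raster tests (1440p, 4K)
rt_gains     = [50, 53]   # % faster in RT tests (1440p, 4K)

# A blended average lands well above the raster-only figure,
# even though raster is what most buyers will actually run.
all_gains = raster_gains + rt_gains
print(f"blended: +{sum(all_gains) / len(all_gains):.0f}%")           # -> +43%
print(f"raster-only: +{sum(raster_gains) / len(raster_gains):.0f}%") # -> +35%
```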
 
If they're keen on comparing it to the 7900 GRE, does that mean it'll be hovering around the same MSRP? Would that be a reasonable assumption?

The launch MSRP of the 7900 GRE, for comparison, was $550. If AMD does indeed launch the 9070 XT at that price, that's a no-arguments winner this generation.

Come on AMD, you said you wanted market share. This is it.

Regards.
The price is rumored to be $749 for the 9070 XT:
https://www.pcgamesn.com/amd/radeon-rx-9070-xt-high-price-rumor
 
This seems to serve no purpose other than to confuse those who don't know the naming/numbering structure.

Why compare the new XT, which is apparently replacing the XTX (unless they pull an Nvidia and release a Ti equivalent as an XTX), to their lowest-end 7900, other than to brag about larger percentage gains?

Using the chart @Elusive Ruse posted, that basically means subtracting 16.91% from the headline number for it to have any actual meaning.
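In other words, rebasing the headline numbers onto the XTX looks like this (hypothetical arithmetic using the article's low-end +42% figure):

```python
# If the 7900 GRE is 16.91% slower than the 7900 XTX (per the posted chart),
# a "+X% vs. GRE" claim can be rebased onto the XTX baseline.
gre_vs_xtx = 1 - 0.1691            # GRE performance as a fraction of XTX
gain_vs_gre = 1.42                 # the article's low-end "+42% vs. GRE"
gain_vs_xtx = gain_vs_gre * gre_vs_xtx - 1
print(f"+{gain_vs_xtx * 100:.0f}% vs. 7900 XTX")   # -> +18%
```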
 
Why compare the new XT, which is apparently replacing the XTX (unless they pull an Nvidia and release a Ti equivalent as an XTX), to their lowest-end 7900, other than to brag about larger percentage gains?

Umm ... AMD is not replacing the 7900 XTX model this generation, and they have outright said they are not making high-end GPUs this generation.

https://www.tomshardware.com/pc-com...ck-hyunh-talks-new-strategy-for-gaming-market

https://www.pcmag.com/opinions/ditc...-cards-could-give-amd-an-edge-in-the-gpu-wars

https://www.pcgamesn.com/amd/new-radeon-8000-series-strategy

The 9070s are aimed at competing against Nvidia's xx70 and xx70 Ti models.
 
If you took a moment to stop worshipping at the altar of AMD, you would see that CPU prices are so high because AMD is the one that drove them up there, not Intel.
I partially agree. CPU prices went up as core counts increased, but have since slowly come down (esp. inflation-adjusted) as mainstream core counts have plateaued.

Ryzen initially undercut Intel; later, the 5950X did set a new high-water mark for modern, mainstream CPUs.
 
while memory bandwidth isn't everything, it does tend to define the ballpark in which a card operates. This is why Nvidia uses a ridiculously large memory bus on its halo product but then severely restricts everything down the chain.
That's a good question.

The RTX 4090 used 384-bit GDDR6X at 21 Gbps. The RTX 5090 uses 512-bit GDDR7 at 28 Gbps, yet the amount of compute increased by only 19.7% to 26.8% (depending on base or boost clocks). They could've gone with 384-bit GDDR7 at 28 Gbps and gotten 33.3% more memory bandwidth, which should've been more than enough. Or, like the RTX 5080, even boosted memory speed to 30 Gbps, for a total of 42.9% more memory bandwidth!

So, why'd Nvidia go so wide with the RTX 5090? It has to be AI, which should be starved for both bandwidth and capacity. On the capacity front, going to 512-bit meant they could build workstation & server cards with this die featuring 64 GB, via a clamshell configuration of 16 Gb dies. With 24 Gb dies, they could reach 96 GB, which is in line with rumors.
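The capacity arithmetic, as a quick sketch:

```python
# VRAM capacity = (bus width / 32 bits per GDDR chip) * (2x if clamshell) * die density.
def vram_gb(bus_width_bits: int, die_gbit: int, clamshell: bool) -> float:
    chips = bus_width_bits // 32 * (2 if clamshell else 1)
    return chips * die_gbit / 8  # Gbit -> GB

print(vram_gb(512, 16, clamshell=False))  # RTX 5090 as shipped -> 32.0 GB
print(vram_gb(512, 16, clamshell=True))   # clamshell workstation card -> 64.0 GB
print(vram_gb(512, 24, clamshell=True))   # with 24 Gb dies -> 96.0 GB
```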

A final note: those numbers mixed in RT performance, which should have been a huge red flag. Rasterization and ray-tracing performance need to be separated
Yup, that's why I went to the trouble of separating them in post #10! You're welcome. :)
 
I'm pretty sure AMD will mess this up again. There's no competition; the last few GPU generations make it look like Nvidia and AMD have some sort of under-the-table deal between the CEO cousins, because Nvidia messed up more than AMD, but somehow AMD wasn't able to profit.
 
As far as we know, there's no 9080 XT; maybe it comes along in a year or two, but there have been no reports.
Ah. I could've sworn there was some mention. But I can't recall for certain, and can't find it now.

Maybe it mentioned that there *won't* be a 9080 or 9080 XT, and I forgot the negation.

In any case, I don't think a new gen's midrange outperforming the previous gen's halo card was ever the typical experience. At least, not that I can recall offhand.
 