News AMD Simulated RX 7800 XT Performs Similarly to RX 6800 XT

Even if it only matches the performance of the existing RX 6800 XT, if it has a similar price and uses less power while providing a few new extras (like AV1 encoding support), it could still be a decent card.

No, that would be a very good reason not to buy it, and for TH and all other reputable review sites to not recommend it for purchase.
 
There are two bars in that graph, simulated and "calculated", so the theoretical performance should land somewhere between the two, with the latter likely closer to what we should all expect.

But as with everything, it'll come down to price. I think by now AMD knows this can't be over $500, unless they want it to sit on shelves forever.

Regards.
 
while rumors indicate the RX 7800 XT will use a Navi 22 GCD with 60 (possibly 64) CUs.

No, it is NAVI 32. Correct the typo.

I agree with the other points you made in the article, but the simulated specs assume that the card would feature 70 Compute Units, memory clocked at 20 Gbps and a game clock likely above 2 GHz, and none of that is set in stone. This will cause a huge discrepancy.

It is unlikely the 7800XT will sport 70 CUs.

While he achieved this feat for the VRAM via the memtestcl program, the simulation is far from perfect, since RDNA 3 no longer allows the cards to be emulated directly. This is an educated guess at most.

This is evident from the results: the simulated GPU shows only a small performance advantage over the previous-gen RX 6800 XT SKU, roughly 4% (1080p), 8% (1440p) and 12.5% (2160p). That won't necessarily hold in real-world gaming, since the NAVI 32 silicon and the specs are NOT even finalized yet.

It is unlikely AMD would release a card that is only 4/8/12% faster than the previous gen offering.
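
To put some rough numbers on why those assumptions matter, here's a back-of-the-envelope sketch (Python, purely for illustration). The 70 CU count, the ~2.4 GHz boost clock and the 256-bit / 20 Gbps memory setup are all rumored or assumed values, not confirmed silicon; the RX 6800 XT figures are its official specs.

```python
# Back-of-the-envelope comparison of the rumored "RX 7800 XT" specs vs. the RX 6800 XT.
# Every "rumored" figure here is an assumption, not confirmed silicon.

def fp32_tflops(cus, boost_ghz, shaders_per_cu=64):
    """Theoretical FP32 throughput in TFLOPS: shaders x 2 FLOPs (FMA) per clock x clock."""
    # RDNA 3's dual-issue FP32 can double the paper number, but it rarely shows up
    # in games, so it is ignored here.
    return cus * shaders_per_cu * 2 * boost_ghz / 1000

def mem_bandwidth(bus_bits, data_rate_gbps):
    """Memory bandwidth in GB/s: bus width in bytes x effective data rate."""
    return bus_bits / 8 * data_rate_gbps

known_6800xt   = fp32_tflops(72, 2.25)   # ~20.7 TFLOPS at the official boost clock
rumored_7800xt = fp32_tflops(70, 2.4)    # ~21.5 TFLOPS with an assumed ~2.4 GHz boost

bw_6800xt  = mem_bandwidth(256, 16)      # 512 GB/s
bw_rumored = mem_bandwidth(256, 20)      # 640 GB/s, if the 256-bit / 20 Gbps rumor holds

print(f"Compute uplift:   {rumored_7800xt / known_6800xt - 1:+.1%}")  # roughly +4%
print(f"Bandwidth uplift: {bw_rumored / bw_6800xt - 1:+.1%}")         # +25%, before Infinity Cache
```

A paper compute uplift in the low single digits lines up with the small advantage the simulation shows at 1080p, which is exactly why the unconfirmed CU count and clocks matter so much here.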
 
Also, using a NAVI 31 die to simulate a NAVI 32 die makes little sense. Just because the pro card features a 256-bit memory interface doesn't mean the final Navi 32 silicon will also sport the same bus width. It might, but at this point this is just an educated guess.
 
AMD keeps pulling Nvidia moves this gen. It's more than a little disheartening.

So true but it is better than nothing.
 
Looks like this really confirms the end of Moore's Law: 5% gen-to-gen performance increases for 10% price increases are now the new norm.
 
If a 7800XT performs the same as a 6800XT I would call that an epic failure.
Depends on the price. See, the RX 6800XT is currently selling for $100 less than its launch price, in spite of inflation! If the RX 7800XT comes out with slightly better performance, better raytracing performance, matrix cores for AI, and sells for about the same as the RX 6800XT's current price, I'd struggle to see why that's a bad thing, considering the RX 6800XT is now priced rather competitively.

On the flip side, if you look at the specs, it does seem awfully strange for a next-gen design and process node to perform so similarly. I guess it has 2 fewer CUs and just 3/4 of the cache. So, maybe they were counting on better FSR or something to help make up the difference.

However, the main focus should be on perf/$ and perf/W. Just ignore the model names and look at those two things. Those are the only things that ultimately matter, and if they're good, then we don't have a problem.
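
To make that concrete, here's a minimal sketch of what that comparison looks like; every FPS, price and wattage figure in it is a placeholder I made up for illustration, not measured review data.

```python
# Minimal perf/$ and perf/W comparison. All FPS, price and power figures below are
# placeholders for illustration only, not measured review data.

cards = {
    # name: (avg_fps_1440p, price_usd, board_power_w)
    "RX 6800 XT (street price)": (100.0, 530, 300),
    "Hypothetical RX 7800 XT":   (105.0, 550, 265),
}

for name, (fps, price, watts) in cards.items():
    print(f"{name:28}  perf/$: {fps / price:.3f} fps/$   perf/W: {fps / watts:.3f} fps/W")
```

Whatever the real numbers turn out to be, if the new card wins on both of those metrics, the model number on the box stops mattering.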
 
They can't launch it because it's way too close to the 6800XT, and they will likely try to charge $700 for it. Even at $600 it's dead in the water.

The 7900XT gets you about 20% more than the 6800XT. As everyone says, the 7900XT should have been the 7800XT. But then AMD couldn't charge you $900 for an 800-class card without revolts.

AMD, like NVIDIA, painted themselves into a corner with this generation's pricing and performance gains. Greed made them look stupid.

The 7800XT will sit just like the other cards. Gamers are fed up.
 
Differences could be put down to unoptimized drivers, so any simulation of any type is really just speculation as to actual real world gaming performance.
 
As long as the price is competitive for its performance level, why wouldn't they recommend it? Makes no sense.
Because someone like me may be easily fooled by these companies, but not Tom's Hardware. TH is fully aware of how powerful the current generation of GPUs could be, even in the mainstream class of cards, if Nvidia and AMD set aside their moronic pursuit of insatiable greed. Yet these companies are being extremely meager with anything under $1K (and under $1,600 in Nvidia's case, since it even wanted to butcher its 4080 card; remember the "unlaunched" 4080?!).
It is expected of Tom's Hardware to inform readers of such facts.
 
So, you're saying that no new cards are comparable on perf/$ with RTX 3000 and RX 6000 models?

Or, how exactly are you defining "priced competitively"?
3060ti is better than 4060ti because of memory.

We are stuck at 8GB of memory, when clearly games now need more than that at even 1080p. Prices of 8GB of memory have dropped considerably. It will cost ~$30 extra for an additional 8GB, and likely less as AMD buys in bulk.

So we have basically what are 3-year-old cards at the same price with little uplift in performance. Yet games are getting more demanding. Lots of UE 5 and 5.1 games are coming, and Nanite and Lumen really push things hard.

There is zero incentive to buy other than having a really old card and a small power supply.
 
So, you're saying that no new cards are comparable on perf/$ with RTX 3000 and RX 6000 models?

Or, how exactly are you defining "priced competitively"?
New cards cannot perform the same as prior generations. This isn't just GPUs. Every CPU generation gets faster. Same with SSDs.

What we are seeing is a non-competitive market. The industry is fleecing the whales because the COVID and Ethereum shortages showed them that a lot of people will pay anything to get their flashy new toy. Along with the whales are a lot of regular people who are just accepting that their chosen hobby is going to screw them. That isn't sustainable.
The underlying issue is that older cards are not dropping in value very much at all. That means newer ones are being priced higher and higher.
 
Looks like this really confirms the end of Moore's Law: 5% gen-to-gen performance increases for 10% price increases are now the new norm.
Moore's Law would still stand in a free market. In a duopoly, those two players can do whatever they want, and it's up to consumers whether they tolerate it or not. Thank god GPUs are not necessary for humans to survive on this planet.
 
Well, now we know why AMD didn't release it. This is all AMD's fault, specifically, that idiot Sasa Marinkovic. His "creative" (read dishonest) nomenclature of the RX 7000-series cards is to blame for this.

The RX 7900 XTX should have been the RX 7800 (XT)
The RX 7900 XT should have been the RX 7700 (XT)
etc...

For anyone who doesn't understand why the cards should've been named the numbers I showed, I'll give a brief explanation of how the standard consumer-gaming video card tier system works. Most people here already know and understand it but for those who don't...

Levels below level 5 are essentially glorified video adapters for office and HTPC applications. They don't always follow typical naming conventions because their names are, for the most part, irrelevant. The top-level cards of this type would be the RX 6400 and GTX 1650.

Level 5 is the "entry-level" gaming tier, generally populated by people who are new to PC gaming and tend to play e-Sports games like Rocket League, CS:GO, PUBG, Valorant, Overwatch, Fortnite and League of Legends at 1080p60Hz mid-settings, 1080p144Hz potato settings, or some AAA titles at 1080p potato settings. Level 5 cards are also sometimes used as glorified video adapters for HTPCs because of their relatively low power consumption. This is the only tier in which fewer than 8GB of VRAM can be found. Cards in this tier include the RX 6500 XT and RTX 3050.

Level-6 is the most popular tier by a country mile, so it's referred to as "mainstream gaming". The most popular cards in the world are found in this tier, like the legendary GTX 1060. Modern examples would be the RX 6600, RTX 3060, RX 7600 and RTX 4060 families. Without a doubt, these cards offer a lower cost-per-frame than any other tier (even lower than level 5), making them the natural "go-to" for most gamers. They use the number 6 as their designation. This tier is popular with gamers playing AAA games at 1080p60Hz ultra settings or e-Sports titles at 1080p144Hz ultra settings. This is the highest tier in which 8GB of VRAM is still considered acceptable.

Then we have level-7, known as the "high-end gaming" tier. This tier is traditionally most popular among gamers who aim to play AAA titles at 1440p60Hz with ultra settings. In this day and age, the absolute minimum amount of VRAM for level-7 cards is considered to be 10-12GB depending on who you ask. Regardless, 8GB is not recommended at this level and above. Cards in this tier are identified by the number 7, like the RTX 3070 and RX 6700.

Level-8 is the enthusiast tier with 8 as the identifying number like the RTX 3080 and RX 6800. This tier will have top-tier silicon that is from a lower bin or is slightly cut-down with (usually) large amounts of VRAM. This tier will have much better value than level-9 and is often the "go-to" for knowledgeable enthusiast gamers who want performance but know better than to pay level-9 prices for gaming performance increases that are of questionable utility. They generally aim for 2160p60Hz at medium settings or 1440p122Hz at ultra settings in AAA titles.

The halo tier of any generation is what I call "level-9" because 9 is used as the important number by both major players like the RX 6900 XT and RTX 3090 Ti. These cards will have top-tier silicon that is pushed to the max (within a reasonable power envelope) and are generally a terrible value for gaming with far more VRAM than gaming could possibly need in the next ten years. They are most often purchased by prosumers, people with more money than brains or people trying to compensate for something.

Level-9 cards aren't always easy to define because they have no theoretical upper limit when it comes to performance, amount of VRAM or price. Therefore, I define them like this:
"A level-9 halo card must be fast enough that their competitor's level-8 card is considered non-competitive based on performance or the fact that there is a level-8 card that is a far more natural competitor in both performance and price."

Since that definition could be considered subjective, put it this way: as long as there is a level-8 card in the lineup that competes directly in performance with the competition's level-8 card, the faster card above it can be called level-9.

AMD adhered to this very well with RDNA and RDNA2. RDNA had no card that was competitive with the RTX 2080, so the fastest card was called the RX 5700 XT and competed with the RTX 2070. The RX 6900 XT from RDNA2 isn't tremendously faster than the RTX 3080, but the RX 6800 XT is the obvious natural rival to the RTX 3080 in both performance and price, which means that the RX 6900 XT name is still valid.

Neither the RX 7900 XT nor the RX 7900 XTX even comes close to meeting these requirements. The RX 7900 XTX is the natural performance rival of the RTX 4080, not the RTX 4090. Therefore, it is a level-8 card and should've been named the RX 7800 XT.

The RX 7900 XT is about 10% faster than the RTX 4070 Ti, but it is the 4070 Ti's natural rival in price and is far closer to it than to the RTX 4080 in performance, with the 4070 Ti being 9% slower and the 4080 being 16% faster. Therefore, it is a level-7 card and should've been named the RX 7700 XT.

This stupidity by AMD marketing has resulted in a cascading effect on the entire RX 7000 product stack, although not as badly as with the top two cards. The so-called "RX 7800 XT" is actually a base level-7 card similar to the RTX 4070. The "RX 7700 XT" will be a level-6 card similar to the RTX 4060 or 4060 Ti, and the RX 7600 is actually a level-5 card that would compete with the RTX 4050 (if released). It has long been predicted that the RTX 4060 will be more than just slightly faster than the RX 7600, which would confirm this.

In the past, when AMD didn't (or couldn't) compete at the halo-level, they just didn't use level-9 naming because they knew that they'd be absolutely skewered by reviewers if they did.

This time around, however, Sasa Marinkovic correctly recognised that the tech press was overly obsessed with nVidia. He saw that they would be completely distracted by nVidia's attempt to pass off the RTX 4070 Ti as the "RTX 4080 12GB". This is because nVidia had pulled similar crap several times in the past while AMD had not. The tech press was blindsided by this move because it was completely unexpected.

However, that doesn't make him a genius; that makes him a stupid arsehole. He's an arsehole because he pulled this crap in the first place. He's galactic-level stupid because, by the time the tech press realised what had happened (and of course it was only a matter of time), it was far too late for AMD to pull the 180° turn that nVidia did and "un-release" these cards.

This remarkably bad decision meant that in the long-term, AMD would have egg on its face (and rightfully so) without any way to remove it. This is AMD's own fault and it deserves every negative consequence that comes from this action.

The situation was far from hopeless. AMD was given an early Christmas gift from nVidia when the GeForce pricing was revealed. How AMD managed to foul this up is beyond my understanding (and I'm glad to be incapable of understanding this level of incompetence and stupidity) but I know that the responsibility for this falls squarely in the lap of their director of gaming marketing, Sasa Marinkovic.

It is patently clear that, through the decisions made by Mr. Marinkovic, AMD somehow managed to snatch defeat from the jaws of victory. His employment should be terminated and, going forward, no company should ever hire him for any kind of executive position. Not because he's unethical (most executives are unethical psychopaths anyway) but because he clearly can't properly analyze an industry landscape and act accordingly in the best interests of the corporation he works for. He is also incapable of keeping his mouth shut and not making an a$$ of himself in public (or on Twitter), and so he damages the very brand that he's supposed to be marketing.
 
Well, now we know why AMD didn't release it. This is all AMD's fault, specifically, that idiot Sasa Marinkovic. His "creative" (read dishonest) nomenclature of the RX 7000-series cards is to blame for this.

The RX 7900 XTX should have been the RX 7800 (XT)
The RX 7900 XT should have been the RX 7700 (XT)
etc...
I get your point, but that is too far off. The 7900XTX is better than the 4080 and the 7900XT is worse. Dropping them to 7800/7700 is too much. Definitely not 7900 though; that was a serious overreach, especially for the 7900XT. Maybe 7800XT and 7800 instead?

Ultimately Nvidia is in the driver's seat. They have such sticky buyers that the 7900XT could have been sold for $100 and Nvidia would still sell more 4080 cards.