AMD Big Navi and RDNA 2 GPUs: Release Date, Specs, Everything We Know

If AMD has the option, they should blow past Nvidia's Ampere line in rasterization performance; then it will become a battle of useful features after that.

Yeah, but AMD has a horrible track record when trying to compete with Nvidia in that space. The 295X, Fury X, Radeon VII, and Vega were all hyped and looked on paper to be the 'Nvidia killer', yet all ultimately fell flat for various reasons and ended up only competing with an x70 card in the end. I've seen it over and over, to the point that I don't even get excited anymore. I'll wait till the benchmarks hit to see how everything shakes out.

As much as I would love to see AMD compete in that space with Nvidia, I have to stay optimistic yet realistic. I'm fully expecting 'Big Navi' to end up being 'big disappointment', especially in the wake of Nvidia announcing that the 3070 will match or outperform the 2080 Ti for only 500 bucks. AMD has their work cut out for them, and if Big Navi cannot clearly outperform a 2080 Ti at its rumored price (which, if the rumored specs hold true, it won't, or not by much), Nvidia is going to eat its sales alive with that 3070 sitting there for 500. At that point, Big Navi will join the ranks of FX as one of the biggest flops in AMD's history.

We'll have to wait and see the benchmarks to give final judgement, but in reality, Big Navi is 2 years too late.
 
At that point, Big Navi will join the ranks of FX as one of the biggest flops in AMD's history.
FX was a failure because it sacrificed IPC and power for clock frequencies that failed to deliver performance, in a way very similar to NetBurst. Worst case, Big Navi is more of the same Navi, which means sub-par performance per watt and nothing worthy of the enthusiast tag when compared against the RTX 3000 lineup. It would still be salvageable as long as AMD can price it in a way that makes sense on a performance-per-dollar basis.

For Big Navi to be an FX-esque failure, it would need to be so bad that AMD struggles to shift inventory even at rock-bottom prices, like the end of Vega, when GPUs were selling at prices that barely covered the cost of the HBM.
 
A 50% higher performing space heater is greatly needed here in Oymyakon, Russia.

Hopefully we will be able to grab a Full Navi in the Fall of 2020 when the blizzards subside.

Our AMD Radeon R9 290X is starting to show its age and is not heating like it used to.
Comment of the year!! (proud ex-owner of 290 x 2) 😁
 
PS5 has 36 CUs clocked at up to 2.23 GHz. Xbox Series X has 52 CUs clocked at up to 1.825 GHz. Notice how more CUs ended up with a much lower clockspeed? That's my assumption for a 72 CU RDNA2 GPU -- it will not clock as high as the PS5, and possibly not even as high as the XBSX. But we don't know for sure -- a higher TDP could certainly allow for higher clocks. 300W and 2.0-2.1GHz? Maybe!
I believe it was stated that the GPU in the XBSX was purposely locked at 1.825 GHz, not because it couldn't go higher. They likely wanted to maintain a certain TDP, since it's a small-form-factor design and had to meet certain specifications; hence it's smaller than the PS5. They also said they didn't want a variable clock like what Sony is doing with the PS5.
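To make the clock-vs-CU trade-off above easy to check, here's a quick sketch of the usual back-of-the-envelope FP32 math (assuming the standard RDNA figures of 64 shaders per CU and 2 FLOPs per clock; the CU counts and clocks are the ones quoted above, and the 72 CU part is purely hypothetical):

# FP32 TFLOPS = CUs * shaders per CU * 2 FLOPs per clock (FMA) * clock in GHz / 1000
def rdna_tflops(cus, clock_ghz, shaders_per_cu=64):
    return cus * shaders_per_cu * 2 * clock_ghz / 1000

print(f"PS5  (36 CU @ 2.23 GHz):  {rdna_tflops(36, 2.23):.2f} TFLOPS")   # ~10.3
print(f"XBSX (52 CU @ 1.825 GHz): {rdna_tflops(52, 1.825):.2f} TFLOPS")  # ~12.1
for clk in (1.825, 2.0, 2.1):  # hypothetical 72 CU part at various clocks
    print(f"72 CU @ {clk} GHz: {rdna_tflops(72, clk):.2f} TFLOPS")

Run as-is, it prints roughly 10.3, 12.1, and 16.8-19.4 TFLOPS, which is why the clock the big chip can sustain matters so much.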
 
Well, I will probably get an RTX 3080 if I can get my hands on one. If RDNA2 (Big Navi) is any good, I could always sell the RTX 3080 if need be. RDNA2 should be able to beat the 3080 with 80 CUs for rasterization, RT, features??? I do suspect that AMD would have liked higher prices from Nvidia :)
 
Well, I will probably get an RTX 3080 if I can get my hands on one. If RDNA2 (Big Navi) is any good, I could always sell the RTX 3080 if need be. RDNA2 should be able to beat the 3080 with 80 CUs for rasterization, RT, features??? I do suspect that AMD would have liked higher prices from Nvidia :)
No, not unless AMD dramatically reworks its architecture. The 3080 has 68 SMs, which was roughly the equivalent of 68 CUs in Turing. But with Ampere, each SM now has twice as many FP32 CUDA cores, so 68 SMs is effectively the same as 136 CUs. Based on current expectations, even the RTX 3070 might be faster than Big Navi. We'll find out for sure maybe next month, certainly by November, but Nvidia definitely threw down the gauntlet this round.
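For anyone who wants the raw lane counts behind that "68 SMs is effectively 136 CUs" comparison, here's a minimal sketch; the 128 FP32 cores per Ampere SM and 64 per RDNA CU are assumptions based on the doubling described above and the rumored 80 CU / 5120 shader part:

turing_fp32_per_sm = 64    # Turing: 64 FP32 CUDA cores per SM
ampere_fp32_per_sm = 128   # Ampere: doubled FP32 cores per SM
rdna_fp32_per_cu = 64      # RDNA: 64 stream processors per CU

rtx_3080_lanes = 68 * ampere_fp32_per_sm   # 8704 FP32 cores
big_navi_lanes = 80 * rdna_fp32_per_cu     # 5120 stream processors (rumored 80 CU part)

print(rtx_3080_lanes, big_navi_lanes)                            # 8704 5120
print(f"{rtx_3080_lanes / big_navi_lanes:.2f}x the FP32 lanes")  # ~1.70x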
 
Well, I will probably get an RTX 3080 if I can get my hands on one. If RDNA2 (Big Navi) is any good, I could always sell the RTX 3080 if need be. RDNA2 should be able to beat the 3080 with 80 CUs for rasterization, RT, features??? I do suspect that AMD would have liked higher prices from Nvidia :)
Assuming all that's happening is a perf/watt improvement, then optimistically RDNA2 can achieve close to 2080 Ti performance in the same power envelope as the RX 5700 XT. If we push that same perf/watt out to 300W, which I would assume is a reasonable target for a flagship, this puts it at potentially twice the performance of the 5700 XT, but likely slower than the 3080.
 
Assuming all that's happening is a perf/watt improvement, then optimistically RDNA2 can achieve close to 2080 Ti performance in the same power envelope as the RX 5700 XT. If we push that same perf/watt out to 300W, which I would assume is a reasonable target for a flagship, this puts it at potentially twice the performance of the 5700 XT, but likely slower than the 3080.
In raw compute, the RTX 3080 is nearly three times as fast as the 2080 (30 TFLOPS vs. 10.1 TFLOPS). Bandwidth might be holding it back, as it's nowhere near 3X the bandwidth. More likely, the CPU is also limiting performance in games. If AMD goes from 225W to 300W (33% more power) and is still 50% higher perf per watt, that would be twice the theoretical compute. But that would still only be basically equal to the RTX 3070's 20.4 TFLOPS.

How this all plays out in real-world gaming performance? We shall see, but it's looking like Nvidia may have done it again. And by "done it again" I mean Nvidia may have released new GPUs that AMD's best upcoming part won't be able to match on performance, leaving only price. That would suggest RX 6900 XT (or whatever the top card is called) would be $500 or less.
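Here's the scaling math from the first paragraph in code form, taking the RX 5700 XT's roughly 9.75 TFLOPS boost-clock figure as an assumed baseline:

# Scaling sketch: 225W -> 300W at a claimed 1.5x perf/watt uplift
base_tflops = 9.75   # RX 5700 XT FP32 at boost clock (approximate, an assumption here)
base_power = 225     # W
new_power = 300      # W

new_tflops = base_tflops * (new_power / base_power) * 1.5   # 1.33x power * 1.5x perf/watt = 2x
print(f"{new_tflops:.1f} TFLOPS vs. the RTX 3070's 20.4 TFLOPS")  # ~19.5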
 
That's just shy of the specs of the 2080 Ti, if memory serves. I mean, this would have been nice if AMD had dropped it two years ago... the problem is they are poised to compete against Nvidia's top-tier Turing, while Ampere was just announced, lol. Nvidia states the 3070 will match or outperform the 2080 Ti for only 500 bucks. AMD has a steep, steep hill to climb right now and is, seemingly yet again, a generation behind.
It was pointed out in the very next post that their math was indeed wrong. >_>

It's hard to say exactly how the top-end RDNA2 card might perform, but if they go with a 300 watt design, and indeed offer around 50% more performance-per-watt, then we could be looking at performance closer to the 3080 than the 3070. And that could be why Nvidia priced those cards more aggressively than they did last generation. Although it's also difficult to say how raytracing performance might compare at this point.

Yeah, but AMD has a horrible track record when trying to compete with Nvidia in that space. The 295X, Fury X, Radeon VII, and Vega were all hyped and looked on paper to be the 'Nvidia killer', yet all ultimately fell flat for various reasons and ended up only competing with an x70 card in the end. I've seen it over and over, to the point that I don't even get excited anymore. I'll wait till the benchmarks hit to see how everything shakes out.
Keep in mind, Nvidia has shifted product names to higher price tiers in recent years. The 2060 is what would have traditionally been considered a "70" card, while the 2070 would have been an "80" card in the past. So the 5700 XT's performance ended up above what would have previously been marketed as an "80" card, even if Nvidia countered it with some price adjustments in the form of the "SUPER" models.

As for the 2080 Ti, it was always a poor value relative to the rest of the series from a price-to-performance standpoint, so the next-generation $500 card offering similar performance might not be too surprising. Even at 4K resolution, a 2080 Ti is only around 40% faster than a 2070 SUPER, after all, despite costing over twice as much. If AMD doubles core counts over the 5700 XT as they are rumored to be doing with their upcoming high-end card, they should have no trouble outperforming the 2080 Ti by a decent margin.
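As a rough illustration of that value argument (launch prices assumed here: $1,199 for the 2080 Ti Founders Edition and $499 for the 2070 SUPER), the price-to-performance gap looks like this:

# Price-to-performance at 4K, 2080 Ti vs. 2070 SUPER (relative performance from the post above)
ti_price, ti_perf = 1199, 1.40     # ~40% faster at 4K
sup_price, sup_perf = 499, 1.00

print(f"Price ratio: {ti_price / sup_price:.2f}x")  # ~2.40x
print(f"Perf per dollar vs. the 2070 SUPER: {(ti_perf / ti_price) / (sup_perf / sup_price):.2f}x")  # ~0.58x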

In raw compute, the RTX 3080 is nearly three times as fast as the 2080 (30 TFLOPS vs. 10.1 TFLOPS). Bandwidth might be holding it back, as it's nowhere near 3X the bandwidth. More likely, the CPU is also limiting performance in games. If AMD goes from 225W to 300W (33% more power) and is still 50% higher perf per watt, that would be twice the theoretical compute. But that would still only be basically equal to the RTX 3070's 20.4 TFLOPS.
As I pointed out in the 3070 thread, the compute performance between these different architectures doesn't seem to directly equate to gaming performance. Nvidia claimed the 3080 is "up to" twice as fast as a 2080, not three times as fast. If it were down to CPU limitations, they would have claimed "up to" three times as fast to account for those games where performance is still primarily graphics-limited at high resolutions. The same goes for their performance claims about the 3090. And if we follow that trend of the claimed performance being about two-thirds of what the Tflops might otherwise imply, that should put the 3070 at roughly the performance level of the 2080 Ti, a card that's only around 50% faster than a 5700 XT at graphics-limited resolutions. I wouldn't expect an 80 CU RDNA2 card to have much trouble outperforming that in most cases.

Even in this article, you pointed out how the ~10 Tflop 5700 XT was almost as fast as the ~14 Tflop Radeon VII, a card with over 40% more compute performance. AMD increased gaming performance relative to compute performance with the 5000 series, but it's very possible Nvidia could have gone the other way with their new architecture, adding a disproportionate amount of compute performance relative to gaming performance.
 
As I pointed out in the 3070 thread, the compute performance between these different architectures doesn't seem to directly equate to gaming performance. Nvidia claimed the 3080 is "up to" twice as fast as a 2080, not three times as fast. If it were down to CPU limitations, they would have claimed "up to" three times as fast to account for those games where performance is still primarily graphics-limited at high resolutions. The same goes for their performance claims about the 3090. And if we follow that trend of the claimed performance being about two-thirds of what the Tflops might otherwise imply, that should put the 3070 at roughly the performance level of the 2080 Ti, a card that's only around 50% faster than a 5700 XT at graphics-limited resolutions. I wouldn't expect an 80 CU RDNA2 card to have much trouble outperforming that in most cases.

Even in this article, you pointed out how the ~10 Tflop 5700 XT was almost as fast as the ~14 Tflop Radeon VII, a card with over 40% more compute performance. AMD increased gaming performance relative to compute performance with the 5000 series, but it's very possible Nvidia could have gone the other way with their new architecture, adding a disproportionate amount of compute performance relative to gaming performance.
If you can't get three times the performance at reasonable settings due to CPU / PC limitations, Nvidia would be smart to just say "over twice as fast" rather than "up to three times as fast." And of course GPU bandwidth didn't increase by anything close to 3X, which is probably holding back performance some.

It's like how Nvidia lists boost clocks that are very conservative. If you live in a very warm climate or have really crappy case cooling, maybe you only get the boost clock, but on my testbed, 20-series cards usually boost to around 1850-1900 MHz. That's 100-300 MHz higher (depending on the GPU) than the official boost clock. So saying 2X when it might be more than that might be a similar deal. We'll find out soon enough.

TFLOPS don't always translate directly to gaming performance, but Nvidia already had good efficiency on its shader cores, and it says Ampere delivers 1.9X the performance per watt of Turing. For AMD, GCN was a very inefficient architecture, and Navi at least started to correct that; worse performance per watt and GPU utilization is the opposite direction. Basically, I think the TFLOPS will prove accurate for workloads that aren't bandwidth limited.
 
TFLOPS don't always translate directly to gaming performance, but Nvidia already had good efficiency on its shader cores, and it says Ampere delivers 1.9X the performance per watt of Turing.
Maybe in certain specific compute workloads, but I really doubt those performance-per-watt gains apply to typical gaming workloads, or to the card as a whole.

Looking at the gaming performance data Nvidia has made available for these cards, including that 4K Doom Eternal video comparing the 3080 side-by-side with the 2080 Ti, and a chart showing percentage increases of the 3080 and 3070 over the 2070 SUPER in a handful of games, it looks like the 3070 typically gets somewhere around 40% more gaming performance than a 2070 SUPER, which should place its performance roughly on par with a 2080 Ti, or maybe slightly better in some titles. And the 3080 is shown to get around 35-40% more performance than the 3070 in the limited selection of games they showed data for, or around 45% more performance on average than a 2080 Ti in Doom Eternal at 4K.

So, if the 3070 performs roughly similar to a 2080 Ti, we can get at least some approximation of the efficiency gains based on the TDPs of those cards. They rate the 2080 Ti as a 250 watt card, while the 3070 is considered a 225 watt card and the 3080 a 320 watt card. Assuming these TDPs are also mostly representative of relative gaming power draw, that would make the 3070 not much more than 10% more efficient than a 2080 Ti. And the 3080 appears to typically be around 40% faster than a 2080 Ti (or 45% in Doom Eternal), but it has a 28% higher TDP. So again, if those TDP ratings are relatively comparable between generations, that would similarly work out to only around a 10-15% efficiency gain over Turing at typical gaming workloads.

And again, these relative performance gains that they have shown depict a huge difference in the ratio of actual gaming performance to the Teraflop numbers they have listed for the cards. The 3070 appears to perform more like a 13-14 Tflop Turing card, while the 3080 performs more like a 20 Tflop Turing card, both around 33% lower than what the compute numbers they listed would imply for the previous architecture. The performance gains seem rather good compared to what we got with the previous generation of cards, but not as good as what those looking just at those Tflop numbers might expect. Someone expecting a "20 Tflop" 3070 to be over twice as fast as a 2070 SUPER is bound to be a bit disappointed if they find it to only be around 40% faster.
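A quick sketch of that efficiency arithmetic, using the TDPs and relative performance figures quoted above and assuming TDP is roughly representative of gaming power draw:

# Efficiency gain = relative performance / relative TDP - 1
def perf_per_watt_gain(perf_ratio, new_tdp, old_tdp):
    return perf_ratio / (new_tdp / old_tdp) - 1

print(f"3070 (~2080 Ti perf, 225W vs 250W):   {perf_per_watt_gain(1.00, 225, 250):.0%}")  # ~11%
print(f"3080 (+40% vs 2080 Ti, 320W vs 250W): {perf_per_watt_gain(1.40, 320, 250):.0%}")  # ~9%
print(f"3080 (+45%, Doom Eternal):            {perf_per_watt_gain(1.45, 320, 250):.0%}")  # ~13%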
 
Maybe in certain specific compute workloads, but I really doubt those performance-per-watt gains apply to typical gaming workloads, or to the card as a whole.

Looking at the gaming performance data Nvidia has made available for these cards, including that 4K Doom Eternal video comparing the 3080 side-by-side with the 2080 Ti, and a chart showing percentage increases of the 3080 and 3070 over the 2070 SUPER in a handful of games, it looks like the 3070 typically gets somewhere around 40% more gaming performance than a 2070 SUPER, which should place its performance roughly on par with a 2080 Ti, or maybe slightly better in some titles. And the 3080 is shown to get around 35-40% more performance than the 3070 in the limited selection of games they showed data for, or around 45% more performance on average than a 2080 Ti in Doom Eternal at 4K.

So, if the 3070 performs roughly similar to a 2080 Ti, we can get at least some approximation of the efficiency gains based on the TDPs of those cards. They rate the 2080 Ti as a 250 watt card, while the 3070 is considered a 225 watt card and the 3080 a 320 watt card. Assuming these TDPs are also mostly representative of relative gaming power draw, that would make the 3070 not much more than 10% more efficient than a 2080 Ti. And the 3080 appears to typically be around 40% faster than a 2080 Ti (or 45% in Doom Eternal), but it has a 28% higher TDP. So again, if those TDP ratings are relatively comparable between generations, that would similarly work out to only around a 10-15% efficiency gain over Turing at typical gaming workloads.

And again, these relative performance gains that they have shown depict a huge difference in the ratio of actual gaming performance to the Teraflop numbers they have listed for the cards. The 3070 appears to perform more like a 13-14 Tflop Turing card, while the 3080 performs more like a 20 Tflop Turing card, both around 33% lower than what the compute numbers they listed would imply for the previous architecture. The performance gains seem rather good compared to what we got with the previous generation of cards, but not as good as what those looking just at those Tflop numbers might expect. Someone expecting a "20 Tflop" 3070 to be over twice as fast as a 2070 SUPER is bound to be a bit disappointed if they find it to only be around 40% faster.
There's definitely fuzzy math going on, because we don't know exactly what Nvidia is reporting and how that truly impacts performance. For example:

RTX 3090: 35.7 TFLOPS @ 350W = 0.10 TFLOPS/W
RTX 3080: 29.8 TFLOPS @ 320W = 0.093 TFLOPS/W
RTX 3070: 20.4 TFLOPS @ 220W = 0.093 TFLOPS/W

RTX 2080 Ti: 14.2 TFLOPS @ 260W = 0.055 TFLOPS/W
RTX 2080: 10.6 TFLOPS @ 215W = 0.049 TFLOPS/W
RTX 2070: 7.9 TFLOPS @ 185W = 0.043 TFLOPS/W

But this assumes two things: first, that the GPUs all hit their rated FP32 TFLOPS (no more, no less); second, that the GPUs also hit their exact TDPs (no more, no less). Based on the above, the efficiency of Ampere is anywhere from 1.86X (3090 vs. 2080 Ti) to 2.17X (3070 vs. 2070) that of Turing. There's Jensen's 1.9X, and that's probably exactly where the number came from.
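Plugging the table above into code reproduces that range (again assuming the rated TFLOPS and TDPs are both hit exactly):

# TFLOPS-per-watt ratios from the rated specs listed above
ampere = {"RTX 3090": (35.7, 350), "RTX 3080": (29.8, 320), "RTX 3070": (20.4, 220)}
turing = {"RTX 2080 Ti": (14.2, 260), "RTX 2080": (10.6, 215), "RTX 2070": (7.9, 185)}

def tpw(card):
    tflops, tdp = card
    return tflops / tdp

for new, old in [("RTX 3090", "RTX 2080 Ti"), ("RTX 3080", "RTX 2080"), ("RTX 3070", "RTX 2070")]:
    print(f"{new} vs {old}: {tpw(ampere[new]) / tpw(turing[old]):.2f}x")
# -> roughly 1.87x, 1.89x, and 2.17x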

The thing is, we know theoretical TDPs and performance metrics are notoriously unreliable, particularly for untested hardware. We have tested Turing and Pascal and know that those GPUs often exceed their rated boost clocks (higher perf) and are pretty close to TDP in GPU-limited benchmarks. We know that AMD's Navi 1x tends to come in below the boost clock and close to the game clock, pretty much right at TDP -- but way above TDP for FurMark.

Ampere is brand new, untested, and we have no data right now. What if Ampere only hits TDP in ray tracing workloads but comes in well below TDP for traditional gaming workloads? What if Ampere boosts substantially higher than the rated clocks? What if Ampere is CPU limited even at 4K in many games? These are all good questions.

Most likely, Ampere is "up to" 1.9X the performance per watt of Turing -- maybe even a bit more in a few instances. And if you run games at 1080p or 1440p, the efficiency is going to be lower. And in some games even at 4K, efficiency is going to be lower. We'll find out more in the next two weeks or so of testing I guess.

I'm confident there will be situations where the 1.9X materializes, and equally confident there will be tests where the perf/watt gains are lower than that. Just like with Navi 1x -- it wasn't 50% overall for sure. It was sometimes a bit more, sometimes less, and on average somewhere around 40-50% better. Depending on what GPUs you compare, which games you use for testing, etc.
 
I agree that it's hard to say exactly how close the cards will come to those TDP ratings under typical gaming workloads, but it seems a stretch to assume those Tflop increases will materialize in actual games. Again, literally all the performance data Nvidia has shown so far puts the actual gaming performance per Tflop at about 2/3 of what it was with Turing. If there were some game where a 3070 performed twice as fast as a 2070 SUPER, or where a 3080 performed three times as fast, you'd think they would show that, right?

They didn't show a single outlier coming anywhere remotely close to those performance levels, though. Out of the six games they listed relative performance data for in that chart, only Minecraft RTX managed a little over 50% more performance than a 2070 SUPER using a 3070, and more than double the performance of a 2070 SUPER using a 3080; that's an increase only about half as large as the increase in TFLOPs might imply. All the other examples were notably lower than that. If there were some game showing greater performance increases, I would expect their marketing department to have featured that instead of a poorly-received title like Wolfenstein: Youngblood. They wouldn't have even bothered including that game unless it's about as good as the performance gains get for current AAA games running on the 30-series cards.

Nvidia may have increased FP32 performance per core, but that doesn't seem to have much of an impact on performance, at least in today's games, so the higher Tflops are not going to be a meaningful way to compare efficiency. And that goes back to what I was saying about the article before. Assuming AMD hasn't made similar changes to their architecture with RDNA 2, I wouldn't expect their FP32 Tflops to exceed those of the 3070. However, the actual gaming performance of such a card could still easily be closer to the level of the 3080.
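To put a number on that "about 2/3" ratio, here's a small sketch that converts the relative performance figures above into Turing-equivalent TFLOPS (the ~9.06 TFLOPS 2070 SUPER baseline and the rounded ~1.4x/~2x performance ratios are assumptions):

# "Turing-equivalent TFLOPS" implied by the relative performance figures above
baseline_tflops = 9.06   # 2070 SUPER, approximate FP32 at boost clock

for card, listed, perf_vs_2070s in [("RTX 3070", 20.4, 1.40), ("RTX 3080", 29.8, 2.00)]:
    effective = baseline_tflops * perf_vs_2070s
    print(f"{card}: listed {listed} TFLOPS, games like a ~{effective:.1f} TFLOPS Turing card "
          f"({effective / listed:.0%} of listed)")
# -> ~12.7 and ~18.1 Turing-equivalent TFLOPS, i.e. roughly two-thirds of the listed numbers
# (in the same ballpark as the 13-14 and ~20 Tflop figures above, depending on the baseline used)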
 
I agree that it's hard to say exactly how close the cards will come to those TDP ratings under typical gaming workloads, but it seems a stretch to assume those Tflop increases will materialize in actual games. Again, literally all the performance data Nvidia has shown so far puts the actual gaming performance per Tflop at about 2/3 of what it was with Turing. If there were some game where a 3070 performed twice as fast as a 2070 SUPER, or where a 3080 performed three times as fast, you'd think they would show that, right?

They didn't show a single outlier coming anywhere remotely close to those performance levels, though. Out of the six games they listed relative performance data for in that chart, only Minecraft RTX managed a little over 50% more performance than a 2070 SUPER using a 3070, and more than double the performance of a 2070 SUPER using a 3080; that's an increase only about half as large as the increase in TFLOPs might imply. All the other examples were notably lower than that. If there were some game showing greater performance increases, I would expect their marketing department to have featured that instead of a poorly-received title like Wolfenstein: Youngblood. They wouldn't have even bothered including that game unless it's about as good as the performance gains get for current AAA games running on the 30-series cards.

Nvidia may have increased FP32 performance per core, but that doesn't seem to have much of an impact on performance, at least in today's games, so the higher Tflops are not going to be a meaningful way to compare efficiency. And that goes back to what I was saying about the article before. Assuming AMD hasn't made similar changes to their architecture with RDNA 2, I wouldn't expect their FP32 Tflops to exceed those of the 3070. However, the actual gaming performance of such a card could still easily be closer to the level of the 3080.
Yes, which is exactly my point earlier. First, Nvidia doesn't want to oversell the performance upgrade. Second, there are clearly other factors besides TFLOPS that impact performance -- bandwidth being a big one. I think there will be cases where compute workloads will get a performance boost closer to what the TFLOPS show, but will there be real-world scenarios where that happens? Almost certainly not at launch. Maybe a year or so later, though, we'll see even better performance out of Ampere.

Look at Turing over time. At launch, going by my old benchmarks at PC Gamer (https://www.pcgamer.com/geforce-rtx-2070-founders-edition-review/), RTX 2070 FE was 32.4% faster than GTX 1070 FE at 1080p ultra, 36.4% faster at 1440p ultra, and 38.4% faster at 4K ultra. Now, I'm not using those exact same gaming benchmarks two years later, but I do have current numbers on the same two video cards from the past month. Today, RTX 2070 FE vs. GTX 1070 FE, the 2070 is 39.6% faster at 1080p ultra, 46.1% faster at 1440p ultra, and 49.4% faster at 4K ultra. That's basically a 10% (give or take) delta in two years -- without any DLSS or RT usage.

A lot of that probably comes from games doing more complex shader calculations where the concurrent FP + INT of Turing becomes more useful. Maybe some of it is Nvidia paying more attention to driver/game optimizations for Turing and less attention to driver/game optimizations for Pascal. But whatever happens at the Ampere launch, there's historical precedent to assume the gap between Ampere and Turing will grow over the coming years. That happened with Maxwell vs. Kepler and Pascal vs. Maxwell as well (and Navi vs. Polaris and Polaris vs. Hawaii).
 
But don't fall into the "evil corporation" BS - all corporations exist for one single, solitary purpose: revenue/profits for the shareholders. AMD is not evil, Intel is not evil. Facebook IS evil. And Nvidia is not evil.
Actually, if you take the definition of evil into account, they're ALL evil. However, saying that AMD, Intel, Facebook and nVidia are all the same is a definite false equivalency. That's like saying that no criminal is any worse than any other. These four corporations do NOT have the same past records of deeds, not even close. It's like comparing a shoplifter, a mob boss, a crooked politician and a bank robber, respectively, and saying "They're all the same". It sounds ridiculous because it is.
AMD wanted to charge $600 for the 16GB card and has decided to drop it to $550. If this card were as fast as or faster than the 3080 and had 60% more memory while already being $100 cheaper, why would the announcement of the 3080 persuade AMD to drop the price by $50? It looks like Navi 2 will be slightly faster than a 2080 Ti, which is pretty much what many people already thought was the best-case scenario.
AMD dropped the price because they know where they stand in most gamers' heads. Most gamers won't buy a Radeon over a GeForce unless AMD can sway them with a lower price, because most gamers with nVidia cards are comfortable with them and would rather not switch brands unless persuaded to do so. A lower price is that persuasion, and it can be very effective, especially if the gamer is experienced enough to have owned both and to know that they're not really that different.
Just like it has been for years, it's going to be Nvidia at the top by themselves and AMD a few years behind.
If that were the case, AMD would be dropping their prices FAR more than $50 and nVidia would have their prices through the roof. That's the way it's always been. The Xbox Series X has already shown 12 teraflops, and that's a console! I agree that ATi's offerings had been pretty "meh" in the years between Fiji and Navi (Polaris, Vega, Radeon VII), but historically that hasn't been the norm. Keep in mind that ATi has been hamstrung by AMD dedicating the lion's share of its R&D budget to Ryzen (which was the right thing for them to do) and was forced to keep using GCN long after it had stopped being a good idea (Fiji should have been the last GCN GPU).

Now that EPYC has a number of lucrative design wins under its belt, we're seeing ATi finally get something more than a shoestring budget and that's where the split between the gaming architecture (RDNA) and the data centre compute architecture (CDNA) can finally happen. GCN was made because AMD was almost bankrupt and couldn't afford two separate architectures for gaming and data centres. This is why GCN cards were so good at GPU compute.

Do you remember how the mining craze decimated the availability of only Tahiti and Polaris cards and drove their prices through the roof? That was because their hash rates were MUCH higher than those of Maxwell or Pascal cards, and that's because those GeForce cards were optimised for gaming. That's not a knock on nVidia, because gaming cards should be optimised for gaming. The Radeon cards, if their architecture had also been optimised for gaming, would have been far better than they were and wouldn't have been bought up left and right by miners, leaving gamers everywhere to buy GTX 1060s (which are STILL the most common video card type among Steam users).

Now that ATi has been given enough of a budget by AMD to finally optimise their architecture properly, we should see much stronger showings from the Red Team and that's good for all of us.
 
5120 stream processors... that's double that of the 5700 XT, plus 50% better performance per watt.

At the very least, we will see something with ~2x the performance of the 5700 XT at similar power consumption. It should be faster than the 2080 Ti and perhaps on par with the RTX 3070.

However, 50% better performance per watt is unlikely to mean 2x the performance at the same power, or the same performance at half the power. It's usually somewhere in the middle... so around 20-30% more performance at 20-30% less power.

So, with this I think it should come in closer to RTX 3080 performance @ 320W, pretty much on par with my own prediction. It's not going to touch the RTX 3090, and I believe AMD never intended it to.

So, similar to RTX 3080 performance, but @ $599.
 
5120 stream processors... that's double that of the 5700 XT, plus 50% better performance per watt.

At the very least, we will see something with ~2x the performance of the 5700 XT at similar power consumption. It should be faster than the 2080 Ti and perhaps on par with the RTX 3070.

However, 50% better performance per watt is unlikely to mean 2x the performance at the same power, or the same performance at half the power. It's usually somewhere in the middle... so around 20-30% more performance at 20-30% less power.

So, with this I think it should come in closer to RTX 3080 performance @ 320W, pretty much on par with my own prediction. It's not going to touch the RTX 3090, and I believe AMD never intended it to.

So, similar to RTX 3080 performance, but @ $599.
As I mention in the article, how AMD gets to 1.5X performance per watt matters. RDNA2 could theoretically do 60 fps in game X using 150W, while RDNA1 might do 60 fps in the same game using 225W. That's technically -- and provably -- 50% better performance per watt. However, AMD could then crank the power up to 300W (double the previous power use) and only see performance improve by 50%. Now performance per watt drops to only being 12.5% better!

This is, AFAICT, exactly what Nvidia is doing with Ampere when it says it's 1.9X better performance per watt. It can achieve the same performance as Turing while using 47.5% less power! But in reality Nvidia is pushing power use to 320W on the 3080 and 350W on the 3090.

Performance gains decline steeply as you move to the right of the voltage/frequency scale.
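The hypothetical above, worked through, shows why the measurement point matters:

# Perf/watt from the 60 fps example: the answer depends on where you measure it
rdna1 = (60, 225)        # fps, watts
rdna2_low = (60, 150)    # same fps at lower power
rdna2_high = (90, 300)   # power doubled, performance only +50%

def ppw(fps_watts):
    fps, watts = fps_watts
    return fps / watts

print(f"At 150W: {ppw(rdna2_low) / ppw(rdna1):.3f}x the perf/watt of RDNA1")   # 1.500x
print(f"At 300W: {ppw(rdna2_high) / ppw(rdna1):.3f}x the perf/watt of RDNA1")  # 1.125x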
 
5120 stream processors... that's double that of the 5700 XT, plus 50% better performance per watt.

Shader count and performance per watt aren't independent variables. AMD knew that with more shader cores it would have to dial back the clocks to keep power consumption at a reasonable level. That assumption is built into the performance-per-watt estimate.
 
Hmm, there is one part of this article that is actually ticking me off...
"nVidia's 3070 and 3080 are priced aggressively"

Erm, they really are NOT! It's absolute price gouging. Since when has it been ok for £700 to be the price of what's essentially a MID RANGE graphics card!? nVidia can go and suck a donkey. I'm gonna give my money to AMD out of principle.
 
Since when has it been ok for £700 to be the price of what's essentially a MID RANGE graphics card!?
It is priced aggressively relative to the performance per dollar of previous products, and AMD won't be much better, if at all, since AMD wants the most profit per wafer too.

The unfortunate reality of modern gaming is that GPUs are designed for datacenters, research, and supercomputers first. Gamers now have to compete for wafers with companies and organizations willing to pay $3-10k per 600+ mm² GPU. The prices of modern higher-end GPUs recover some of that opportunity cost, and I expect it to only get worse over time as GPGPU sales continue to increase.
 
Hmm, there is one part of this article that is actually ticking me off...
"nVidia's 3070 and 3080 are priced aggressively"

Erm, they really are NOT! It's absolute price gouging. Since when has it been ok for £700 to be the price of what's essentially a MID RANGE graphics card!? nVidia can go and suck a donkey. I'm gonna give my money to AMD out of principle.
To be fair, who decides what's considered "high-end" and what's considered "mid-range"?

Look at the ATI Radeon 9800 Pro, for instance, a graphics card that was considered "high-end" back in 2003. Despite being faster than anything else on the market at the time, it only had a 218mm² graphics chip and a 47 watt TDP. As far as the size of the graphics chip, power draw and heat output are concerned, a GTX 1650 would be the most comparable card among Nvidia's current lineup. But the 9800 Pro launched for $400, or roughly $575 in today's money, taking inflation into account.

The graphics chip in an RTX 3070, by comparison, is nearly twice as large, and it has multiple times the power draw and heat output, with a far more advanced cooling system required to accommodate that. A card with a chip that size would be considered extremely "high-end" by the standards of that time, though a market for even "higher-end" enthusiast hardware has since emerged. The graphics chip in a 3080 is roughly three times the size of what was used in a 9800 Pro, and there are additional costs for the more advanced power delivery and cooling systems compared to that card. The 9800 Pro looked like this, despite being priced roughly in-between the RTX 3070 and 3080 at launch...

https://images.hothardware.com/static/migratedcontent/reviews/images/atir9800pro/cardkit.htm

Even if we look back more recently, the GTX 980 Ti was released a little over five years ago with a slightly smaller graphics chip than the RTX 3080, at a price that works out to over $700 after adjusting for inflation. And the 980 (non-Ti) came out in 2014 and used a chip comparable in size to the 3070, but with a price equivalent to over $600 now. Even 10 years ago, the GTX 580 launched with a price that would be around $600 today, and a graphics chip approximately in-between the 3070 and 3080 in size. So at least for US pricing, not all that much has actually changed for high-end parts over the last decade, even if there have undoubtedly been some fluctuations, with certain generations costing more or less than others. Of course, it's possible that the prices this generation might not be quite as attractive where you are, though I don't know how the pricing of these cards (and things like inflation) has compared in the UK during that time.

Still, it could be argued that the card manufacturers are making even higher-end parts available than they were before. Some might consider things like 4K resolution or 1440p 144Hz as "nice to have", but ultimately a card like a 3080 is not necessary for running today's games well at high settings and with decent resolutions and frame rates.
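For reference, here's the rough inflation math behind those comparisons (launch prices and CPI values here are approximate and assumed, so treat the outputs as ballpark figures):

# Rough CPI adjustment behind the launch-price comparisons above
CPI = {2003: 184.0, 2010: 218.1, 2014: 236.7, 2015: 237.0, 2020: 258.8}  # approximate CPI-U annual averages

def in_2020_dollars(price, year):
    return price * CPI[2020] / CPI[year]

for name, price, year in [("Radeon 9800 Pro", 400, 2003), ("GTX 580", 499, 2010),
                          ("GTX 980", 549, 2014), ("GTX 980 Ti", 649, 2015)]:
    print(f"{name}: ${price} in {year} is about ${in_2020_dollars(price, year):.0f} today")
# -> roughly $560, $590, $600, and $710 respectively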
 
Hmm, there is one part of this article that is actually ticking me off...
"nVidia's 3070 and 3080 are priced aggressively"

Erm, they really are NOT! It's absolute price gouging. Since when has it been ok for £700 to be the price of what's essentially a MID RANGE graphics card!? nVidia can go and suck a donkey. I'm gonna give my money to AMD out of principle.
I agree. The tech press is starting to drink the pricing kool-aid. They should watch some of good ol' Jimmy's videos to bring them back to Earth. You'll enjoy the vid linked at the bottom of this post.
To be fair, who decides what's considered "high-end" and what's considered "mid-range"?

Look at the ATI Radeon 9800 Pro, for instance, a graphics card that was considered "high-end" back in 2003. Despite being faster than anything else on the market at the time, it only had a 218mm² graphics chip and a 47 watt TDP. As far as the size of the graphics chip, power draw and heat output are concerned, a GTX 1650 would be the most comparable card among Nvidia's current lineup. But the 9800 Pro launched for $400, or roughly $575 in today's money, taking inflation into account.

The graphics chip in an RTX 3070, by comparison, is nearly twice as large, and it has multiple times the power draw and heat output, with a far more advanced cooling system required to accommodate that. A card with a chip that size would be considered extremely "high-end" by the standards of that time, though a market for even "higher-end" enthusiast hardware has since emerged. The graphics chip in a 3080 is roughly three times the size of what was used in a 9800 Pro, and there are additional costs for the more advanced power delivery and cooling systems compared to that card. The 9800 Pro looked like this, despite being priced roughly in-between the RTX 3070 and 3080 at launch...

https://images.hothardware.com/static/migratedcontent/reviews/images/atir9800pro/cardkit.htm

Even if we look back more recently, the GTX 980 Ti was released a little over five years ago with a slightly smaller graphics chip than the RTX 3080, at a price that works out to over $700 after adjusting for inflation. And the 980 (non-Ti) came out in 2014 and used a chip comparable in size to the 3070, but with a price equivalent to over $600 now. Even 10 years ago, the GTX 580 launched with a price that would be around $600 today, and a graphics chip approximately in-between the 3070 and 3080 in size. So at least for US pricing, not all that much has actually changed for high-end parts over the last decade, even if there have undoubtedly been some fluctuations, with certain generations costing more or less than others. Of course, it's possible that the prices this generation might not be quite as attractive where you are, though I don't know how the pricing of these cards (and things like inflation) has compared in the UK during that time.

Still, it could be argued that the card manufacturers are making even higher-end parts available than they were before. Some might consider things like 4K resolution or 1440p 144Hz as "nice to have", but ultimately a card like a 3080 is not necessary for running today's games well at high settings and with decent resolutions and frame rates.
I applaud you for using historical context, because it means that you've been around for a while and can see past your own nose. However, ol' Jimmy REALLY hits it out of the park with his historical analysis, not only on pricing but on performance, generation by generation, starting with the GTX 2xx series. It shows just how short (or how non-existent) the public's memory is:
https://www.youtube.com/watch?v=VjOnWT9U96g