> AMD has a steep, steep hill to climb right now and is, seemingly yet again, a generation behind.

On the plus side, the RX 5700 may become the new RX 580.
If AMD has the option to, they should blow away Nvidia's Ampere line in rasterization performance; then it becomes a battle of useful features after that.
> At that point, Big Navi will join the ranks of FX as the biggest flops in AMD's history.

FX was a failure because it sacrificed IPC and power for clock frequencies that failed to deliver performance, in a way very similar to Netburst. Worst case, Big Navi is more of the same Navi, which means sub-par performance per watt and nothing worthy of the enthusiast tag when compared against the RTX 3000 lineup. It's still salvageable as long as AMD can price it in a way that makes sense on a performance-per-dollar basis.
> A 50% higher performing space heater is greatly needed here in Oymyakon, Russia.
> Hopefully we will be able to grab a Full Navi in the Fall of 2020 when the blizzards subside.
> Our AMD Radeon R9 290X is starting to show its age and is not heating like it used to.

Comment of the year!! (proud ex-owner of 290 x 2) 😁
> PS5 has 36 CUs clocked at up to 2.23 GHz. Xbox Series X has 52 CUs clocked at up to 1.825 GHz. Notice how more CUs ended up with a much lower clockspeed? That's my assumption for a 72 CU RDNA2 GPU -- it will not clock as high as the PS5, and possibly not even as high as the XBSX. But we don't know for sure -- a higher TDP could certainly allow for higher clocks. 300W and 2.0-2.1GHz? Maybe!

I believe it was stated that the GPU in the XBSX was purposely locked at 1.825 GHz, not because it couldn't go higher. They likely wanted to maintain a certain TDP since it's an SFF and wanted to meet certain specifications; hence it's smaller than the PS5. They also said they didn't want a variable clock like what Sony is doing with the PS5.
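For reference, peak FP32 compute for RDNA-style GPUs is commonly estimated as CUs × 64 shaders × 2 FLOPs per clock × clock speed. A minimal sketch of that math (the 72-CU part and its clock speeds are hypothetical assumptions, not announced specs):

```python
# Rough FP32 throughput estimate for RDNA-style GPUs:
# TFLOPS ~= CUs * 64 shaders/CU * 2 FLOPs/clock (FMA) * clock (GHz) / 1000
def rdna_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(f"PS5 (36 CUs @ 2.23 GHz):   {rdna_tflops(36, 2.23):.1f} TFLOPS")   # ~10.3
print(f"XBSX (52 CUs @ 1.825 GHz): {rdna_tflops(52, 1.825):.1f} TFLOPS")  # ~12.1

# Hypothetical 72-CU RDNA2 card at a few assumed clocks:
for clock in (1.825, 2.0, 2.1):
    print(f"72 CUs @ {clock} GHz: {rdna_tflops(72, clock):.1f} TFLOPS")
```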
> Well I will probably get a RTX 3080 if I can get my hands on one. If RDNA2 (Big Navi) is any good I could always sell the RTX 3080 if need be. RDNA2 should be able to beat the 3080 with 80 CUs for rasterization, RT, features??? I do suspect that AMD would have liked higher prices from Nvidia.

No, not unless AMD dramatically reworks its architecture. The 3080 has 68 SMs, which was sort of the equivalent of 68 CUs in Turing. But with Ampere, each SM now has twice as many FP32 CUDA cores, so 68 SMs is effectively the same as 136 CUs. Based on current expectations, even the RTX 3070 might be faster than Big Navi. We'll find out for sure maybe next month, certainly by November, but Nvidia definitely threw down the gauntlet this round.
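A rough sketch of how those FP32 counts compare on paper, using Nvidia's published SM counts and boost clocks; the 80-CU RDNA2 card and its 2.2 GHz clock are purely hypothetical assumptions:

```python
# FP32 lanes per block: Turing SM = 64, Ampere SM = 128 (doubled), RDNA CU = 64.
def tflops(blocks: int, fp32_per_block: int, clock_ghz: float) -> float:
    return blocks * fp32_per_block * 2 * clock_ghz / 1000  # 2 FLOPs/clock via FMA

print(f"RTX 3080: 68 SMs x 128 FP32 @ ~1.71 GHz -> {tflops(68, 128, 1.71):.1f} TFLOPS")  # ~29.8
print(f"RTX 3070: 46 SMs x 128 FP32 @ ~1.73 GHz -> {tflops(46, 128, 1.73):.1f} TFLOPS")  # ~20.4

# Hypothetical 80-CU RDNA2 card at an assumed 2.2 GHz clock:
print(f"80 CUs x 64 FP32 @ 2.2 GHz -> {tflops(80, 64, 2.2):.1f} TFLOPS")                 # ~22.5
```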
> Well I will probably get a RTX 3080 if I can get my hands on one. If RDNA2 (Big Navi) is any good I could always sell the RTX 3080 if need be. RDNA2 should be able to beat the 3080 with 80 CUs for rasterization, RT, features??? I do suspect that AMD would have liked higher prices from Nvidia.

Assuming all that's happening is a perf/watt improvement, then optimistically RDNA2 can achieve close to 2080 Ti performance at the same power envelope as the RX 5700 XT. If we push that same perf/watt out to 300W, which I would assume is a reasonable target for a flagship, this puts it at potentially twice the performance of the 5700 XT, but likely slower than the 3080.
> Assuming all that's happening is a perf/watt improvement, then optimistically RDNA2 can achieve close to 2080 Ti performance at the same power envelope as the RX 5700 XT. If we push that same perf/watt out to 300W, which I would assume is a reasonable target for a flagship, this puts it at potentially twice the performance of the 5700 XT, but likely slower than the 3080.

In raw compute, the RTX 3080 is nearly three times faster than the 2080. (It's 30 TFLOPS vs. 10.1 TFLOPS.) Bandwidth might be holding it back, as it's nowhere near 3X the bandwidth. More likely CPU is also limiting performance in games. If AMD goes from 225W to 300W (33% more power) and is still 50% higher perf per watt, that would be twice the theoretical compute. But that would still only be basically equal to the RTX 3070's 20.4 TFLOPS.
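A quick check of that arithmetic, taking the roughly 9.75 TFLOPS RX 5700 XT as the baseline and assuming a 300W target (both the flagship power figure and the linear scaling are assumptions):

```python
# Scale theoretical compute by the power increase and the claimed perf/watt gain.
base_tflops, base_power = 9.75, 225    # RX 5700 XT (approx.)
target_power = 300                     # assumed flagship power budget
perf_per_watt_gain = 1.5               # AMD's +50% claim

scaled = base_tflops * (target_power / base_power) * perf_per_watt_gain
print(f"Power scaling: {target_power / base_power:.2f}x, combined: {scaled / base_tflops:.2f}x")
print(f"Estimated compute: {scaled:.1f} TFLOPS vs. the RTX 3070's 20.4 TFLOPS")
# -> 300/225 = 1.33x power * 1.5x perf/watt = 2.0x, i.e. ~19.5 TFLOPS
```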
> On the plus side, the RX 5700 may become the new RX 580.

I think I drooled slightly at that thought...
> That's just shy of the specs of the 2080 Ti if memory serves. I mean this would be nice if AMD dropped this two years ago... the problem is they are poised to compete against Nvidia's top-tier Turing, while Ampere was just announced lol. NV states the 3070 will match or outperform the 2080 Ti for only 500 bucks. AMD has a steep, steep hill to climb right now and is, seemingly yet again, a generation behind.

It was pointed out in the very next post that their math was indeed wrong. >_>
> Yea but AMD has a horrible track record with trying to compete with Nvidia in such space. 295X, Fury X, R7, Vega, all hyped and looked on paper to be the 'Nvidia killer', all ultimately fell flat for various reasons and ended up only competing with an x70 card in the end. See it over and over to the point I don't even get excited anymore. I'll wait till the benchmarks hit to see how everything shakes out.

Keep in mind, Nvidia has shifted product names to higher price tiers in recent years. The 2060 is what would have traditionally been considered a "70" card, while the 2070 would have been an "80" card in the past. So the 5700 XT's performance ended up above what would have previously been marketed as an "80" card, even if Nvidia countered it with some price adjustments in the form of the "SUPER" models.
> In raw compute, the RTX 3080 is nearly three times faster than the 2080. (It's 30 TFLOPS vs. 10.1 TFLOPS.) Bandwidth might be holding it back, as it's nowhere near 3X the bandwidth. More likely CPU is also limiting performance in games. If AMD goes from 225W to 300W (33% more power) and is still 50% higher perf per watt, that would be twice the theoretical compute. But that would still only be basically equal to the RTX 3070's 20.4 TFLOPS.

As I pointed out in the 3070 thread, the compute performance between these different architectures doesn't seem to directly equate to gaming performance. Nvidia claimed the 3080 is "up to" twice as fast as a 2080, not three times as fast. If it were down to CPU limitations, they would have claimed "up to" three times as fast to account for those games where performance is still primarily graphics-limited at high resolutions. The same goes for their performance claims about the 3090. And if we follow that trend of the claimed performance being about two-thirds of what the Tflops might otherwise imply, that should put the 3070 at roughly around the performance level of the 2080 Ti, a card that's only around 50% faster than a 5700 XT at graphics-limited resolutions, and I wouldn't expect an 80 CU RDNA2 card to have much trouble outperforming that in most cases.
> As I pointed out in the 3070 thread, the compute performance between these different architectures doesn't seem to directly equate to gaming performance. Nvidia claimed the 3080 is "up to" twice as fast as a 2080, not three times as fast. If it were down to CPU limitations, they would have claimed "up to" three times as fast to account for those games where performance is still primarily graphics-limited at high resolutions. The same goes for their performance claims about the 3090. And if we follow that trend of the claimed performance being about two-thirds of what the Tflops might otherwise imply, that should put the 3070 at roughly around the performance level of the 2080 Ti, a card that's only around 50% faster than a 5700 XT at graphics-limited resolutions, and I wouldn't expect an 80 CU RDNA2 card to have much trouble outperforming that in most cases.

If you can't get three times the performance at reasonable settings due to CPU / PC limitations, Nvidia would be smart to just say "over twice as fast" rather than "up to three times as fast." And of course GPU bandwidth didn't increase by anything close to 3X, which is probably holding back performance some.
Even in this article, you pointed out how the ~10 Tflop 5700 XT was almost as fast as the ~14 Tflop Radeon VII, a card with over 40% more compute performance. AMD increased gaming performance relative to compute performance with the 5000 series, but it's very possible Nvidia could have gone the other way with their new architecture, adding a disproportionate amount of compute performance relative to gaming performance.
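A tiny illustration of that point, using approximate boost-clock TFLOPS figures for the two cards (ballpark numbers, not measurements):

```python
# Compute-side comparison only; per the article, their gaming performance was nearly equal.
radeon_vii_tflops = 13.8   # approx. (60 CUs @ ~1.8 GHz)
rx_5700_xt_tflops = 9.75   # approx. (40 CUs @ ~1.9 GHz)
print(f"Radeon VII has ~{radeon_vii_tflops / rx_5700_xt_tflops - 1:.0%} more raw compute")
# -> ~42% more TFLOPS, yet roughly similar real-world gaming performance.
```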
> TFLOPS don't always translate directly to gaming performance, but Nvidia already had good efficiency on its shader cores, and it says Ampere is 1.9X the perf per watt of Turing.

Maybe at certain specific compute workloads, but I really doubt those performance per watt gains apply to typical gaming workloads, or the card as a whole.
> Maybe at certain specific compute workloads, but I really doubt those performance per watt gains apply to typical gaming workloads, or the card as a whole.

There's definitely fuzzy math going on, because we don't know exactly what Nvidia is reporting and how that truly impacts performance. For example:
Looking at the gaming performance data Nvidia has made available for these cards, including that 4K Doom Eternal video comparing the 3080 side-by-side with the 2080 Ti, and a chart showing percentage increases of the 3080 and 3070 over the 2070 SUPER in a handful of games, it looks like the 3070 typically gets somewhere around 40% more gaming performance than a 2070 SUPER, which should place its performance roughly on par with a 2080 Ti, or maybe slightly better in some titles. And the 3080 is shown to get around 35-40% more performance than the 3070 in the limited selection of games they showed data for, or around 45% more performance on average than a 2080 Ti in Doom Eternal at 4K.
So, if the 3070 performs roughly similar to a 2080 Ti, we can get at least some approximation of the efficiency gains based on the TDPs of those cards. They rate the 2080 Ti as a 250 watt card, while the 3070 is considered a 225 watt card and the 3080 a 320 watt card. Assuming these TDPs are also mostly representative of relative gaming power draw, that would make the 3070 not much more than 10% more efficient than a 2080 Ti. And the 3080 appears to typically be around 40% faster than a 2080 Ti (or 45% in Doom Eternal), but it has a 28% higher TDP. So again, if those TDP ratings are relatively comparable between generations, that would similarly work out to only around a 10-15% efficiency gain over Turing at typical gaming workloads.
And again, these relative performance gains that they have shown depict a huge difference in the ratio of actual gaming performance to the Teraflop numbers they have listed for the cards. The 3070 appears to perform more like a 13-14 Tflop Turing card, while the 3080 performs more like a 20 Tflop Turing card, both around 33% lower than what the compute numbers they listed would imply for the previous architecture. The performance gains seem rather good compared to what we got with the previous generation of cards, but not as good as what those looking just at those Tflop numbers might expect. Someone expecting a "20 Tflop" 3070 to be over twice as fast as a 2070 SUPER is bound to be a bit disappointed if they find it to only be around 40% faster.
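A rough sketch of both calculations in that post, using the quoted TDPs and Nvidia's claimed/shown performance ratios (marketing figures, not independent benchmarks):

```python
# 1) Efficiency estimate from relative performance and TDP (perf/watt ratio).
def efficiency_gain(perf_ratio: float, power_ratio: float) -> float:
    return perf_ratio / power_ratio - 1

# RTX 3070 (~2080 Ti performance) at 225W vs. the 2080 Ti's 250W:
print(f"3070 vs 2080 Ti: ~{efficiency_gain(1.00, 225 / 250):.0%} better perf/watt")  # ~11%
# RTX 3080 ~40% faster than a 2080 Ti, at 320W vs. 250W:
print(f"3080 vs 2080 Ti: ~{efficiency_gain(1.40, 320 / 250):.0%} better perf/watt")  # ~9%

# 2) "Turing-equivalent" TFLOPS if gaming performance is ~2/3 of the listed Ampere TFLOPS.
for card, ampere_tflops in (("RTX 3070", 20.4), ("RTX 3080", 30.0)):
    print(f"{card}: ~{ampere_tflops * 2 / 3:.1f} Turing-equivalent TFLOPS")          # ~13.6 / ~20.0
```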
> I agree that it's hard to say exactly how close the cards will come to those TDP ratings under typical gaming workloads, but it seems a stretch to assume those Tflop increases will materialize in actual games. Again, literally all the performance data Nvidia has showed so far shows the actual gaming performance per Tflop to be about 2/3 what it was with Turing. If there was some game where a 3070 performed twice as fast as a 2070 SUPER, or where a 3080 performed three times as fast, you think they would show that right?

Yes, which is exactly my point earlier. First, Nvidia doesn't want to oversell the performance upgrade. Second, there's clearly other factors besides TFLOPS that impact performance -- bandwidth being a big one. I think there will be cases where compute workloads will get a performance boost closer to what the TFLOPS shows, but will there be real-world scenarios where that happens? Almost certainly not at launch. Maybe a year or so later, though, we'll see even better performance out of Ampere.
They didn't show a single outlier coming anywhere remotely close to those performance levels though. Out of the six games they listed relative performance data for in that chart, only Minecraft RTX managed to reach a little over 50% more performance than a 2070 SUPER using a 3070, and more than double the performance of a 2070 SUPER using a 3080, and that's an increase of only half as much as the increase in TFLOPs might imply. All the other examples were notably lower than that. If there was some game showing greater performance increases, then I would expect their marketing department to have featured that instead of a poorly-received title like Wolfenstein: Youngblood. They wouldn't have even bothered including that game unless it's about as good as the performance gains get for current AAA games running on the 30-series cards.
Nvidia may have increased FP32 performance per core, but that doesn't seem to have much of an impact on performance, at least in today's games, so the higher Tflops are not going to be a meaningful way to compare efficiency. And that goes back to what I was saying about the article before. Assuming AMD hasn't done some similar changes to their architecture with RDNA 2, I wouldn't expect their FP32 Tflops to exceed those of the 3070. However, the actual gaming performance of such a card could still easily be closer to the level of the 3080.
> But don't fall into the "evil corporation" BS - all corporations exist for one single, solitary purpose - revenue/profits for the shareholders. AMD is not evil, Intel is not evil. Facebook IS evil. And Nvidia is not evil.

Actually, if you take the definition of evil into account, they're ALL evil. However, saying that AMD, Intel, Facebook and nVidia are all the same is a definite false equivalency. That's like saying that no criminal is any worse than any other. These four corporations do NOT have the same past records of deeds, not even close. It's like comparing a shoplifter, a mob boss, a crooked politician and a bank robber, respectively, and saying "They're all the same". It sounds ridiculous because it is.
> AMD wanted to charge $600 for the 16GB card and has decided to drop it to $550. If this card was as fast as or faster than the 3080 and had 60% more memory while already being $100 cheaper, why would the announcement of the 3080 persuade AMD to drop the price $50? It looks like Navi 2 will be slightly faster than a 2080 Ti, which is pretty much what many people already thought was the best-case scenario.

AMD dropped the price because they know where most gamers place them in their heads. Most gamers won't buy a Radeon over a GeForce unless AMD can sway them with a lower price, because most gamers with nVidia cards are comfortable with them and would rather not switch brands unless persuaded to do so. A lower price is that persuasion, and it can be very effective, especially if the gamer is experienced enough to have owned both and know that they're not really that different.
> Just like it has been for years, it's going to be Nvidia at the top by themselves and AMD a few years behind.

If that were the case, AMD would be dropping their prices FAR more than $50 and nVidia would have their prices through the roof. That's the way it's always been. The Xbox Series X has already shown 12 teraflops and that's a console! I agree that ATi's offerings had been pretty "meh" in the years between Fiji and Navi (Polaris, Vega, Radeon VII) but historically, that hasn't been the norm. Keep in mind that ATi has been hamstrung by AMD dedicating the lion's share of its R&D budget to Ryzen (which is the right thing for them to do) and was forced to keep using GCN long after it was a good idea (Fiji should have been the last GCN GPU).
> 5120 stream processors.... That's double that of the 5700 XT and 50% better performance per watt.

As I mention in the article, how AMD gets to 1.5X performance per watt matters. RDNA2 could theoretically do 60fps in game X using 150W, while RDNA1 might do 60fps in the same game using 225W. That's technically -- and provably -- 50% better performance per watt. However, AMD could then crank the power up to 300W (double the power use as before) and only see performance improve by 50%. Now performance per watt drops to only being 12.5% better!
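A minimal worked version of that example, using the illustrative fps and wattage numbers from the post (not real measurements):

```python
# Perf/watt ratios for the hypothetical "game X" scenario described above.
rdna1_fps, rdna1_watts = 60, 225
rdna2_fps, rdna2_watts = 60, 150            # same fps at 150W -> the +50% perf/watt claim
rdna2_oc_fps, rdna2_oc_watts = 90, 300      # power doubled, performance only +50%

def perf_per_watt(fps, watts): return fps / watts

base = perf_per_watt(rdna1_fps, rdna1_watts)
print(f"150W case: {perf_per_watt(rdna2_fps, rdna2_watts) / base - 1:.1%} better")       # 50.0%
print(f"300W case: {perf_per_watt(rdna2_oc_fps, rdna2_oc_watts) / base - 1:.1%} better") # 12.5%
```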
> 5120 stream processors.... That's double that of the 5700 XT and 50% better performance per watt.

At the very least, we will see something like ~2x the performance of the 5700 XT at similar power consumption. It should be faster than the 2080 Ti and perhaps on par with the RTX 3070.

However, 50% better performance per watt is unlikely to be 2x the performance at the same power, or the same performance at 1/2 the power. It's usually somewhere in the middle... so around 20-30% more performance and 20-30% less power.

So, with this I think it should come up closer to RTX 3080 performance @ 320W. Pretty much on par with my own prediction. It's not going to touch the RTX 3090, and I believe AMD never intended it to.

So, similar to RTX 3080 performance but @ $599.
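A quick sanity check on that middle-ground reading, assuming performance per watt scales as the performance ratio divided by the power ratio:

```python
# Combinations of extra performance and reduced power that a 1.5x perf/watt gain could cover.
for perf_gain, power_cut in ((0.50, 0.00), (0.00, 0.33), (0.20, 0.20), (0.30, 0.13)):
    ratio = (1 + perf_gain) / (1 - power_cut)
    print(f"+{perf_gain:.0%} perf, -{power_cut:.0%} power -> {ratio:.2f}x perf/watt")
# 1.5x perf/watt covers +50% performance at the same power, the same performance at ~33%
# less power, or a split such as +20% performance at -20% power (1.2 / 0.8 = 1.5x).
```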
> Since when has it been ok for £700 to be the price of what's essentially a MID RANGE graphics card!?

It is priced aggressively relative to the performance per dollar of previous products, and AMD won't be better by much, if at all, since AMD wants the most profit per wafer too.
> Hmm there is one part in this article that is actually pressing me off...
> "nVidia's 3070 and 3080 are priced aggressively"
> Erm, they really are NOT! It's absolute price gouging. Since when has it been ok for £700 to be the price of what's essentially a MID RANGE graphics card!? nVidia can go and suck a donkey. I'm gonna give my money to AMD out of principle.

To be fair, who decides what's considered "high-end" and what's considered "mid-range"?
> Hmm there is one part in this article that is actually pressing me off...
> "nVidia's 3070 and 3080 are priced aggressively"
> Erm, they really are NOT! It's absolute price gouging. Since when has it been ok for £700 to be the price of what's essentially a MID RANGE graphics card!? nVidia can go and suck a donkey. I'm gonna give my money to AMD out of principle.

I agree. The tech press is starting to drink the pricing kool-aid. They should watch some of good ol' Jimmy to bring them back to Earth. You'll enjoy the vid linked at the bottom of this post.
> To be fair, who decides what's considered "high-end" and what's considered "mid-range"?

I applaud you for using historical context because it means that you've been around for a while and can see past your own nose. However, ol' Jimmy REALLY hits it out of the park with his historical analysis, not only on pricing but on performance, generation by generation, starting with the GTX 2xx series. It shows just how short (or how non-existent) the public's memory is:
Look at the ATI Radeon 9800 Pro, for instance, a graphics card that was considered "high-end" back in 2003. Despite being faster than anything else on the market at the time, it only had a 218mm² graphics chip and a 47 watt TDP. As far as the size of the graphics chip, power draw and heat output are concerned, a GTX 1650 would be the most comparable card among Nvidia's current lineup. But the 9800 Pro launched for $400, or roughly around $575 in today's money, taking inflation into account.
The graphics chip in an RTX 3070, by comparison, is nearly twice as large, and it has multiple times the power draw and heat output, with a far more advanced cooling system required to accommodate that. A card with a chip that size would be considered extremely "high-end" by the standards of that time, though a market for even "higher-end" enthusiast hardware has since emerged. The graphics chip in a 3080 is roughly three times the size of what was used in a 9800 Pro, and there are additional costs for the more advanced power delivery and cooling systems compared to that card. The 9800 Pro looked like this, despite being priced roughly in-between the RTX 3070 and 3080 at launch...
https://images.hothardware.com/static/migratedcontent/reviews/images/atir9800pro/cardkit.htm
Even if we look back more recently, the GTX 980 Ti was released a little over five years ago with a slightly smaller graphics chip than the RTX 3080 with a price that works out to over $700 after adjusting for inflation. And the 980 (non-Ti) came out in 2014 and used a chip comparable in size to the 3070, but with a price equivalent to over $600 now. Even 10 years ago, the GTX 580 launched with a price that would be around $600, and a graphics chip approximately in-between the 3070 and 3080 in size. So at least for US pricing, actually not all that much has changed for high-end parts over the last decade, even if there have undoubtedly been some fluctuations with certain generations costing more or less than others. Of course, it's possible that the prices this generation might not be quite as attractive where you are, though I don't know how pricing of these cards (and things like inflation) have compared in the UK during that time.
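Roughly how those inflation adjustments work out; the launch prices and CPI multipliers below are approximate assumptions, so treat the outputs as ballpark figures:

```python
# Approximate US launch prices and rough CPI multipliers to 2020 dollars.
cards = [
    ("Radeon 9800 Pro", 2003, 400, 1.41),
    ("GTX 580",         2010, 499, 1.19),
    ("GTX 980",         2014, 549, 1.10),
    ("GTX 980 Ti",      2015, 649, 1.09),
]
for name, year, price, cpi_multiplier in cards:
    print(f"{name} ({year}): ${price} -> ~${price * cpi_multiplier:.0f} in 2020 dollars")
```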
Still, it could be argued that the card manufacturers are making even higher-end parts available than they were before. Some might consider things like 4K resolution or 1440p 144Hz as "nice to have", but ultimately a card like a 3080 is not necessary for running today's games well at high settings and with decent resolutions and frame rates.