News AMD Big Navi and RDNA 2 GPUs: Release Date, Specs, Everything We Know

I agree. The tech press is starting to drink the pricing kool-aid. They should watch some of good ol' Jimmy to bring them back to Earth.
I don't like pricing trends for CPUs and GPUs either. Unfortunately, the HPC, research, and datacenter markets are expanding while home and office PC markets are shrinking as more people and offices move to laptops and tablets, so what is left of the consumer PC market is increasingly at odds with institutions with much deeper pockets willing to pay several times as much for the same silicon slab.

It also does not help that practically all fabs capable of manufacturing stuff at 14nm or less have months of back-orders with no end in sight. We won't see a return to more affordable consumer parts until there is spare fab capacity to actually make them on. Until then, wafers go to the highest bidders.
 
I agree. The tech press is starting to drink the pricing kool-aid. They should watch some of good ol' Jimmy to bring them back to Earth. You'll enjoy the vid linked at the bottom of this post.
It's easy to claim companies are being greedy for the sake of being greedy because 15 years ago high-end hardware didn't cost as much. But until the actual numbers from R&D to general wide release are made public, I can only read such claims as "I don't like this company, so I'm going to make biased claims." From what I could tell a few years ago, the cost of moving to 7nm was getting prohibitively expensive for all but the biggest fabs.

Also, claiming that parts getting more expensive is a sign of greed feels like forgetting that inflation is a thing. $700 USD today was worth around $525 in 2005, and I remember paying that much, if not more, for a GeForce 7800 GTX.
 
And my first modern PC, a Pentium 133 with 16MB RAM, a 2.5GB hard drive, a 2MB S3 video card, SB16+AWE32 sound, a 28.8k modem, and a 17" CRT monitor, ran about $2000 in 1996, I think. Maybe it was $2500.

That would be $3317 (if originally $2000), or $4147 (if originally $2500) in today's dollars.
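For anyone who wants to sanity-check those conversions, here's a quick Python sketch using rough cumulative US CPI multipliers (ballpark figures I'm assuming, not official BLS numbers):

```python
# Rough inflation sanity check. The multipliers are approximate cumulative
# US CPI factors to 2020 dollars; swap in exact BLS numbers for precision.
CPI_TO_2020 = {
    1996: 1.66,  # assume $1 in 1996 ~= $1.66 in 2020
    2005: 1.33,  # assume $1 in 2005 ~= $1.33 in 2020
}

def to_2020_dollars(amount, year):
    """Convert a historical price to approximate 2020 dollars."""
    return amount * CPI_TO_2020[year]

for amount, year in [(2000, 1996), (2500, 1996), (525, 2005)]:
    print(f"${amount} in {year} is roughly ${to_2020_dollars(amount, year):,.0f} today")
```

That lands close to the $3317 and $4147 figures above.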
 
Ugh... I can't stand stuff like that linked video. "Nvidia conned customers into accepting worse performance gains on their flagship cards over time!" Sure, that's one, highly biased, view of what happened. A better view:

Making bigger and faster GPUs is becoming increasingly difficult. Moore's Law (Moore's Curves, really) isn't keeping up, lithography gains are slowing down, costs are skyrocketing. Result: to get a big boost in performance, something has to give.

RTX 20-series was expensive. The chips were also huge. The RTX 2080 Ti's TU102 die is 754mm^2! That's getting close to maximum reticle size! Gee, why did Nvidia charge $1200? Because it could! Why did Nvidia launch the RTX 30-series with insufficient supply to meet demand? Because actually meeting demand would have delayed the product by probably four months! It sucks to not be able to buy them, yes, but cards do exist and they are very fast. So, launching the 3080 at $700 (instead of $800 or $1000 or $1200), and doing the 3090 at $1500 (instead of doing a Titan at $2500 or $3000), is 'aggressive pricing' compared to the previous generation. Pricing didn't go up! Performance did! That's not a bad thing.
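To put rough numbers on why a 754mm^2 die is so painful, here's a back-of-the-envelope dies-per-wafer and yield sketch. The wafer cost and defect density are made-up illustrative values, not Nvidia's or TSMC's actual figures:

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Standard dies-per-wafer approximation (ignores scribe lines)."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Simple Poisson yield model: fraction of dies with zero killer defects."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

# Illustrative assumptions only -- real wafer prices and defect densities are
# closely guarded numbers.
WAFER_COST = 8000          # hypothetical $ per 12nm-class wafer
DEFECT_DENSITY = 0.1       # hypothetical defects per cm^2

for name, area in [("TU102 (RTX 2080 Ti)", 754), ("mid-size die", 400)]:
    gross = gross_dies_per_wafer(area)
    good = gross * poisson_yield(area, DEFECT_DENSITY)
    print(f"{name}: ~{gross} gross dies, ~{good:.0f} good dies, "
          f"~${WAFER_COST / good:.0f} per good die")
```

Even with generous assumptions, the huge die works out to roughly three times the cost per good chip of a mid-size one, before packaging, memory, or margins.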

Look at where AMD is going with Zen 3 for comparison, or at any of Intel's recent CPUs. Zen 3 will be a good jump in performance from Zen 2, but it's nowhere near the jump from the 2080 Super to the 3080 -- and Zen 3 will cost more at each level. I'm very interested in seeing how RX 6000 actually stacks up -- in performance, pricing, features, and supply. Traditionally, demand for AMD's GPUs is about one quarter of the demand for Nvidia's GPUs. So, if a million people wanted the RTX 30-series at launch, AMD would 'only' need 250K RX 6000 parts. Most likely, AMD will still sell out and not meet demand.

The real question: How many RTX 3080 and RTX 3090 cards have actually been sold, worldwide? The only hard numbers we have are from a Danish website, and sadly that's nowhere near representative of the global sales. It's entirely possible supply is being diverted to big OEMs and partners first (high profit margins), then bigger countries with larger gaming communities. But the only people who really know what the supply is -- and what the real cost is -- aren't saying. And they won't, because it would cost them their jobs.
 
And my first modern PC, a Pentium 133 with 16MB RAM, a 2.5GB hard drive, a 2MB S3 video card, SB16+AWE32 sound, a 28.8k modem, and a 17" CRT monitor, ran about $2000 in 1996, I think. Maybe it was $2500.
In a competitive market, progress and competition should be driving prices down much faster than inflation can drive prices up. My father paid ~$3500 for a 486DX33 back in 1992, I paid ~$2000 for a 90MHz Pentium in 1996, almost two orders of magnitude faster for almost half the price. I then upgraded to a 650MHz P3 in 2000 for ~$1500, almost another order of magnitude faster for 25% less.

It is a sad day when you attempt to use inflation to justify rising tech prices. The real problem is the 40+% gross margins companies are shooting for to make Wall Street and overpaid executives happy.
 
In a competitive market, progress and competition should be driving prices down much faster than inflation can drive prices up. My father paid ~$3500 for a 486DX33 back in 1992, I paid ~$2000 for a 90MHz Pentium in 1996, almost two orders of magnitude faster for almost half the price. I then upgraded to a 650MHz P3 in 2000 for ~$1500, almost another order of magnitude faster for 25% less.

It is a sad day when you attempt to use inflation to justify rising tech prices. The real problem is the 40+% gross margins companies are shooting for to make Wall Street and overpaid executives happy.
FYI, you're using orders of magnitude wrong. A Pentium 90 MHz wasn't anywhere close to 100 times faster than the 486DX33, and a 650MHz Pentium III wasn't ten times as fast as the Pentium 90MHz.
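For reference, an order of magnitude is a factor of ten, so even on raw clock speed (ignoring IPC, which favors the newer chips somewhat) those jumps were well under one order each:

```python
import math

# Clock-speed ratios only; this ignores IPC, so the real performance gaps are
# somewhat larger, but the jumps are still well under a factor of 10.
steps = [("486DX33 -> Pentium 90", 33, 90),
         ("Pentium 90 -> Pentium III 650", 90, 650)]

for label, old_mhz, new_mhz in steps:
    ratio = new_mhz / old_mhz
    print(f"{label}: {ratio:.1f}x clock, i.e. {math.log10(ratio):.2f} orders of magnitude")
```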

Like all good things, that eventually comes to an end. CPUs and GPUs getting faster, costing less, and using less power all at once is no longer really likely to happen. We can sometimes get faster and cheaper, or cheaper and lower power, or faster and lower power ... but rarely all three. Or at least, not all three with significant improvements in each area. 30% faster and 10% cheaper and 5% less power? Yeah, that's doable. 50% faster and 20% cheaper and 25% less power? Nope!

Note that where a fab back in 2000-2010 probably cost a billion dollars, modern single-digit-nanometer fabs are more likely in the 10 billion dollar range. So it's not inflation in general, which usually only goes up a few percent per year. But the costs of doing certain kinds of business can't keep going down forever, just like the ability to shrink transistors eventually hits a wall. We're not going to have single-atom transistors, I'm pretty confident. Maybe we'll get useful quantum computers, but even that isn't going to be 1:1 qubit-to-atom sizes (or anything close to that).
 
It is a sad day when you attempt to use inflation to justify rising tech prices. The real problem is the 40+% gross margins companies are shooting for to make Wall Street and overpaid executives happy.
I mean, I'm willing to believe you if you can provide me credible evidence that product lifecycle costs from start to finish have had a continuous downward trend even with inflation taken into account.
 
Note that where a fab back in 2000-2010 probably cost a billion dollars, modern single digit nanometer fabs are more likely to be in the 10 billion dollar range.
There is a thing called "productivity factor" and despite fabs costing 10X as much, the amount of raw compute performance per wafer has gone up by orders of magnitude more than that.
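As a crude illustration of that point, ideal area scaling alone would give something like the following per wafer; real density gains are smaller than this idealized square-law model (design rules, SRAM scaling, marketing node names), so treat it as an upper bound, not measured data:

```python
# Crude illustration of the "productivity factor" point: ideal area scaling
# says transistor density grows with the square of the node shrink.
def ideal_density_gain(old_node_nm, new_node_nm):
    return (old_node_nm / new_node_nm) ** 2

FAB_COST_GROWTH = 10  # "fabs cost roughly 10x as much", per the post above

for old, new in [(180, 7), (28, 7)]:
    gain = ideal_density_gain(old, new)
    print(f"{old}nm -> {new}nm: ~{gain:.0f}x ideal density per wafer, "
          f"~{gain / FAB_COST_GROWTH:.1f}x per fab dollar")
```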

The main reason chips cost more today isn't the cost of fabs, it's the lack of sufficient fab capacity to keep up with demand, which makes healthy competition even harder than the duopolies already do.
 
The main reason chips cost more today isn't the cost of fabs, it's the lack of sufficient fab capacity to keep up with demand, which makes healthy competition even harder than the duopolies already do.
The lack of fabs capable of producing sub-14nm parts was due to the cost of developing the technology to produce them in the first place. UMC and GloFo abandoned it for cost reasons: https://www.cdrinfo.com/d7/content/umc-and-globalfoundries-wont-develop-7nm-technology . And nodes from 10nm on were expected to have high costs: https://semiengineering.com/racing-to-107nm/ . It's simply getting harder to make things smaller or cram more things into a given area, especially since manufacturers are now using topologies that are difficult to work with. Practically everything up until 14nm was 2D. Now fabs have to deal with 3D.

Also, the other thing is that for GPUs, you can literally throw more transistors at the problem for better performance. As a result, there's no real incentive to make the final package smaller, because that means fewer transistors. NVIDIA's flagships have consistently hovered around the 500mm^2 die size since G80. ATI/AMD has ranged anywhere from 250mm^2 to almost 500mm^2 depending on the generation, though they seem to land around 350-400mm^2 when they aim for the really high end.
 
In a competitive market, progress and competition should be driving prices down much faster than inflation can drive prices up. My father paid ~$3500 for a 486DX33 back in 1992, I paid ~$2000 for a 90MHz Pentium in 1996, almost two orders of magnitude faster for almost half the price. I then upgraded to a 650MHz P3 in 2000 for ~$1500, almost another order of magnitude faster for 25% less.

It is a sad day when you attempt to use inflation to justify rising tech prices. The real problem is the 40+% gross margins companies are shooting for to make Wall Street and overpaid executives happy.

Not to justify it, but to ask whether or not we ARE paying less than we used to, in inflation-adjusted dollars vs. absolute dollars.

I cringe when I think about the actual relative costs of some of the hardware from back in the day...
 
I applaud you for using historical context because it means that you've been around for a while and can see past your own nose. However, ol' Jimmy REALLY hits it out of the park with his historical analysis, not only on pricing but on performance, generation by generation, starting with the GTX 2xx series. It shows just how short (or how non-existent) the public's memory is:
The video was an interesting watch, but there were some issues with his analysis. First, he insisted on comparing the 3090 with those older generations of "highest-end" cards, when logically it makes more sense to compare the 3080, a card with less than half the MSRP that still delivers around 90% of the performance. Realistically, relatively few people will even consider a 3090, with it being more of a Titan successor, with its huge amount of VRAM more targeted at professional use cases and not likely of much benefit to gaming any time soon. He is treating the 3090 as if it's representative of official 30-series pricing as a whole, which it's not.

I would rather see a price/performance/power analysis comparing cards at more similar price points (while adjusting for inflation), rather than comparing $500 cards against $1500 cards, just because they're the highest available models marketed within a certain series at the time of release. That would make the 20-series fare very poorly, as the price-to-performance gains were quite limited, especially considering how long it took that generation to come out, but the 30-series would fare much better, with the 3080 seeing very large performance per dollar gains over the 2080 from two years prior. Again, just because cards are now available at higher price points for those willing to pay a large premium for a little better performance doesn't mean those cards are targeting the same "high-end" that cards of the series might have been targeting in the past.
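One simple way to frame that kind of comparison is performance per inflation-adjusted dollar. A sketch of the metric follows; the MSRPs, CPI multipliers, and especially the relative-performance indices are placeholder assumptions, so plug in real benchmark aggregates before drawing conclusions:

```python
# Hypothetical relative-performance indices (100 = baseline card); these are
# illustrative placeholders, not benchmark results.
cards = {
    # name: (launch MSRP in USD, launch year, relative performance index)
    "GTX 1080 (2016)": (599, 2016, 100),
    "RTX 2080 (2018)": (699, 2018, 135),
    "RTX 3080 (2020)": (699, 2020, 230),
}

# Approximate CPI multipliers to 2020 dollars (illustrative).
TO_2020 = {2016: 1.08, 2018: 1.04, 2020: 1.00}

for name, (msrp, year, perf) in cards.items():
    real_price = msrp * TO_2020[year]
    print(f"{name}: {perf / real_price:.3f} perf index per 2020 dollar")
```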

Another thing to consider is the large architectural changes going from Pascal to Turing/Ampere. While raytracing performance in actual games was not even something reviewers could test at the 20-series launch, as no titles supported the feature yet, the dedicated hardware in these newer cards does increase raytracing performance substantially (even if it's still a bit questionable whether the performance hit is worth the visual improvements in many cases). Reviews are still not accounting for raytracing performance in their overall performance analysis, but that's likely to become a standard feature for ultra graphics settings relatively soon, and it will become more relevant to the overall performance of these cards as time goes on. With raytracing (and other new technologies) taken into account, the 20-series should actually fare a bit better against the 10-series in the long run, though that still arguably wasn't worth paying a premium for at launch.

Power draw of the 3080 definitely is high, though. That's likely a result of Nvidia pushing the card near its limits to stay competitive against AMD; it could have easily been a ~250 watt card had they been more modest with the clocks and voltages, while still delivering substantial performance gains over the previous generation.
 
I mean, I'm willing to believe you if you can provide me credible evidence that product lifecycle costs from start to finish have had a continuous downward trend even with inflation taken into account.
You only need to look at financial reports. Before companies got systematically taken over by filthy greedy pigs, net profit for a well-run company was around 12% and 8% was the threshold where companies had to start worrying about getting undercut by competition. Now that we have well-established monopolies, duopolies and oligopolies, their net profit is breaking 20% and there is no worry of new competition due to prohibitive startup costs.
 
For some reason, the silliest things make me proud of myself, like this meme:
(attached meme image)
 
Realistically, relatively few people will even consider a 3090, with it being more of a Titan successor,
A Titan successor? It is nothing of the sort. If it's the successor to anything, it's the successor to the RTX 2080 Ti. Here's an even more interesting video from Jim which proves that the RTX 3090 is NOT a Titan successor (featuring Linus Sebastian and Joe Scott):
View: https://www.youtube.com/watch?v=s23GvbQfyLA
That's right, not even Linus Sebastian was willing to call the RTX 3090 a Titan because it's not. The Titan series has a separate driver set and....nope, the RTX 3090 does NOT get them.
 
A Titan successor? It is nothing of the sort. If it's the successor to anything, it's the successor to the RTX 2080 Ti. Here's an even more interesting video from Jim which proves that the RTX 3090 is NOT a Titan successor (featuring Linus Sebastian and Joe Scott):
View: https://www.youtube.com/watch?v=s23GvbQfyLA
That's right, not even Linus Sebastian was willing to call the RTX 3090 a Titan because it's not. The Titan series has a separate driver set and....nope, the RTX 3090 does NOT get them.
It's a successor to the Titan price bracket. 2080 Ti was already in that zone, but then Nvidia did Titan RTX and doubled VRAM and added the drivers, while also (more than) doubling price. RTX 3090 lacks the Titan drivers, which only matter for a few specific professional use cases. If you need those professional features, there's still Quadro -- or Nvidia A6000, since I guess we've dropped the Quadro branding. Yes, they'll make even Titan cards look relatively affordable, but then Quadro has drivers that enable additional features / enhancements for professional workloads that not even Titan cards got. So, for the pro market that needs those features, Nvidia gets to milk them a bit more, while at the same time selling a slightly lower priced Titan that won't cannibalize the pro market sales. Well, outside of pros that don't need CATIA or SolidWorks or similar enhanced performance.
 
And they wouldn't try pushing the envelope if people weren't actually buying them.
The unfortunate reality is that as datacenter, HPC, and AI become an increasingly large chunk of Nvidia's revenue, chances are that consumers will be in increasingly fierce competition for wafers with institutional clients willing to pay $3,000-15,000 per card instead of $300-1,500.
 
Pretty good moment to get back to the topic.

In my experience the Radeon VII is very fast in the machine learning sector.
It's my favourite card for denoising, frame blending, and now also for Vulkan ncnn machine learning.

In my applications (DaVinci, Premiere, VR tools) the Radeon VII runs way faster than the Vega 64.
2-3 times.

It is my favourite card for production and the price was extremely low.

To imagine that this card is now replaced by the RX 6900 with even more power, for half of the 3090's price tag, gives me goose bumps.

I am also very curious about Apple's support for these new GPUs.
The new Mac Pro gets all its power out of the Radeon VII (x2) and lately the RX 5700 XT.

Finally game developers will optimise for AMD too, because of the presence of AMD GPUs in the PS5 and Xbox.

AMD is on quite a good run in chip designs so far.

Nvidia doesn't blow my mind lately. Maybe the A6000 will be the most impressive card to me, but for the price of 6x RX 6900.
In that case, I'll prefer to buy two AMD cards to get more power than one overpriced Nvidia card, and easily save 50%.
 
Jarred, how could you make so many obvious mistakes in one post? Did you think that you were replying to some teenage noob? You didn't watch the video, clearly, because some of your "points" are completely nonsensical as the video demonstrated quite clearly.
It's a successor to the Titan price bracket.
You say Titan price bracket and I say RTX 2080 Ti price bracket:
RTX Titan - $2,500
RTX 2080 Ti - $1,200
RTX 3090 - $1,500
So, you're off by $1,000 and I'm off by $300. I'm sorry Jarred but the theory to which I subscribe is $700 more accurate than yours. The RTX 3090 is not priced like a Titan, it's priced the way that an RTX 3080 Ti could easily be based on Turing's pricing structure if nVidia wanted to squeeze rich people a little more.
RTX 3090 lacks the Titan drivers, which only matter for a few specific professional use cases. If you need those professional features, there's still Quadro -- or Nvidia A6000,
Now you're denying the exact purpose of the RTX Titan's existence. Those drivers are what made a Titan a Titan instead of a gaming card. You were ALWAYS able to get a Quadro, hell, the Quadro is older than the Titan but the Titan was still made. Also, the Titan of today would have 24GB of RAM because with only one exception, Titans doubled their VRAM every generation. For the RTX 3090 to be a successor to the Titan, it would need 24GB of VRAM.
Yes, they'll make even Titan cards look relatively affordable
You've just contradicted yourself. You said it was in the Titan price bracket. Now you're saying otherwise.
Quadro has drivers that enable additional features / enhancements for professional workloads that not even Titan cards got.
That's not news, that has ALWAYS been true and it hasn't stopped nVidia from making Titans and selling them like mad. Therefore, as an argument, it doesn't hold water.
So, for the pro market that needs those features, Nvidia gets to milk them a bit more, while at the same time selling a slightly lower priced Titan that won't cannibalize the pro market sales.
Just how is $1,000 cheaper only slightly lower priced? That's a massive 40% reduction in price. I'm sorry Jarred but that argument doesn't hold water either. Slightly less expensive would be $2,250 (10% off) or maybe even $2,000 (20% off). Nobody in their right mind calls a price reduction of 40% a slight reduction.
Well, outside of pros that don't need CATIA or SolidWorks or similar enhanced performance.
So, outside of people who buy Titans.

Jarred, you have basically said the following:

  1. The RTX 3090 is in the Titan price bracket, except that it's $1,000 below that.
  2. You've said that the Quadro is the reason that the Titan wasn't released but the Quadro has always existed and it didn't stop nVidia from making Titans.
  3. You called a 40% reduction in price only slightly less expensive.
  4. You said that this card is for people who are not the people who have historically bought Titans.
Jarred, I don't know who you're trying to convince but it sounds like you've just bought into the nVidia BS to the point that you're parroting it without giving it its due analysis. It's like you're trying to make up excuses for the RTX 3090's existence and I don't understand why. Your post is a total mess and I would be embarrassed if it were mine, especially if I was in your position.

You did the same thing when you told me that there's NO WAY that AMD didn't tease their most powerful video card at their Ryzen 5000 launch. You dismissed my speculation that it was in fact the RX 6800 XT that was teased as impossible and unrealistic. Well, you were wrong, and AMD confirmed that it was, in fact, the 72CU model that was teased, at clock speeds that hadn't yet been finalised. The 72CU model will be the RX 6800 XT, which means that my intuition was 100% correct.

Maybe you should pay more attention to users who have been members for over a decade and are heavily-decorated instead of just tossing our logic aside like we're noobs. There's a reason why I have the stats I have and the fact that I've been here since 2009 should tell you that I'm not some noob teenager who started gaming when the GTX 1080 Ti first came out. Maybe my speculations could actually be valuable.
 
You say Titan price bracket and I say RTX 2080 Ti price bracket:
RTX Titan - $2,500
RTX 2080 Ti - $1,200
RTX 3090 - $1,500
So, you're off by $1,000 and I'm off by $300. I'm sorry Jarred but the theory to which I subscribe is $700 more accurate than yours. The RTX 3090 is not priced like a Titan, it's priced the way that an RTX 3080 Ti could easily be based on Turing's pricing structure if nVidia wanted to squeeze rich people a little more.
Nvidia shifted product names around with the launch of the 20-series to disguise limited performance gains at each price point, so the 2080 Ti was arguably more of a Titan successor for that generation, albeit one with higher-than-typical performance compared to the next tier down. Historically though...

Titan (2013): $1000
Titan Black (2014): $1000
Titan Z (2014): $3000 (Dual-GPU card, basically two Titan Blacks in SLI)
Titan X (2015): $1000
Titan X (2016): $1200
Titan Xp (2017): $1200
Titan V (2017): $3000
Titan RTX (2018): $2500

Adjusting for inflation and ignoring the dual-GPU Titan Z, Titans started at around $1100 in today's money and were priced in the $1100-$1300 range, up until Nvidia decided to start using the "Titan" name to market a budget version of a $9000 Quadro, since the primary market for Titans always seemed to be professionals more than anything. So, it's only within the last few years that "Titans" saw that kind of pricing, and only within the last few years that Nvidia unlocked Quadro-level features on some Titan cards. At $1500, the RTX 3090 actually costs more than Titans have traditionally been priced, and even Nvidia described it as a successor to the Titan. And as is typical for Titans, it only performs a little faster than the next card down costing half as much, making it a poor value for anyone wanting a card for gaming, but a reasonable value for some professionals not wanting to spend significantly more for a Quadro of similar performance.

Also, the Titan of today would have 24GB of RAM because with only one exception, Titans doubled their VRAM every generation. For the RTX 3090 to be a successor to the Titan, it would need 24GB of VRAM.
Uh... The 3090 does have 24GB of VRAM. <_<

That's the one thing it offers substantially more of than the 3080, and something that some professionals can make use of, but not so much today's games.

Product names are just names, and what really matters is what performance and features are available at each price point. The 3090 offers a feature-set and pricing that makes it more comparable to what Nvidia has traditionally marketed as "Titan" cards. With only 10-15% more gaming performance than the next card down, but double the price, it's not really comparable to a typical "80 Ti" card though.
 
I would love to see how their hardware-accelerated video encoder/decoder performs in the new cards. As streaming becomes more common among gamers, it would be nice if their video encoder were also able to deliver higher image quality at lower bitrates. Also, support for next-gen codecs.

EDIT: if they can enable SR-IOV on their consumer cards, that would also be great.
 
I don't like pricing trends for CPUs and GPUs either. Unfortunately, the HPC, research, and datacenter markets are expanding while home and office PC markets are shrinking as more people and offices move to laptops and tablets, so what is left of the consumer PC market is increasingly at odds with institutions with much deeper pockets willing to pay several times as much for the same silicon slab.

It also does not help that practically all fabs capable of manufacturing stuff at 14nm or less have months of back-orders with no end in sight. We won't see a return to more affordable consumer parts until there is spare fab capacity to actually make them on. Until then, wafers go to the highest bidders.

I used to sell computers (on commission) when I was in college. Those systems averaged about $1,500 to $3,000, and people financed them.

So $650 for a high-end GPU and $400 for a high-end CPU today is nothing. Throw in another $800 for a case, RAM, motherboard, and power supply. You're still well under what computers cost 15-20 years ago.
 
So $650 for a high-end GPU and $400 for a high-end CPU today is nothing. Throw in another $800 for a case, RAM, motherboard, and power supply. You're still well under what computers cost 15-20 years ago.
15-20 years ago, each upgrade typically cost less than the previous one if you stayed within the same performance tier and usually more than doubled performance on a three-year upgrade cycle, so the worthiness of upgrades was well beyond question - a heck of a lot more bang per buck at all price points.

If I tried building a new system today using the same ~$1,200 CAD total I have spent on my i5 over the last eight years (only counting the case and internal components), I'd end up with almost the same system due to how much more expensive most parts have become. Even the SSD I bought only a year ago costs 50% more today despite an unchanged exchange rate.
 
In 2009 I bought a Core i7-920 for about $320, and today you'll be able to get the Ryzen 5 5600 for $220, the 5600X for $300, and the 5800X for $450. Eventually I think AMD will release a SKU in between the 5800X and 5600X. Perhaps they'll call it a 5700, with 8 cores and lower clock speeds.