News: Watch AMD's Radeon RX 6000 'Big Navi' RDNA 2 GPU Launch Here at 12pm ET


alceryes

Distinguished
The 3090 does have 50% more VRAM, though, and that can be worth it for people who do VRAM-intensive work. Doubt Nvidia will drop its price until it has put its 24GB cards in the hands of everyone who wants one.
It's a niche card.
Even extreme examples of VRAM usage in current-gen AAA titles with maxed settings barely reach 11GB at 4K. Most titles sit in the 7 to 9GB range. None currently come close to 16GB. Even for content creation, 16GB will be more than enough VRAM 99.99% of the time.
 

spongiemaster

Commendable
Dec 12, 2019
It's a niche card.
Even extreme examples of VRAM usage in current-gen AAA titles with maxed settings barely reach 11GB at 4K. Most titles sit in the 7 to 9GB range. None currently come close to 16GB. Even for content creation, 16GB will be more than enough VRAM 99.99% of the time.
The 3090 was a terrible value as a gaming card the moment it was announced. On the flip side, it's a great value as a prosumer card. Nvidia is going to sell more of these to professionals than to gamers, so the price isn't going anywhere.
 
I saw the early benchmark leaks. Combined with DLSS it really isn't a comparison.
Ah yes, the good old benchmark leaks... always a great way to gauge performance. DLSS is a cool technology; however, it does require per-game support. While AMD doesn't have a DLSS equivalent right now, they are working on one. The hardware is probably there; it just needs the software implementation.

People who buy top tier expect it to be the best at everything. The 6900 does not deliver this. It's just a tad slower without DLSS, and again, RT isn't competitive.

AMD is giving you 8 more CUs and a small clock bump for $250 more. That's a 40% price increase.
Let me guess: the RTX 3090, though, is a great value? With the RTX 3090 you get the same performance bump over the 3080 as you do going from the 6800 XT to the 6900 XT; you just have to pay almost double for that bump.
 
Ah yes, the good old benchmark leaks... always a great way to gauge performance. DLSS is a cool technology; however, it does require per-game support. While AMD doesn't have a DLSS equivalent right now, they are working on one. The hardware is probably there; it just needs the software implementation.


Let me guess: the RTX 3090, though, is a great value? With the RTX 3090 you get the same performance bump over the 3080 as you do going from the 6800 XT to the 6900 XT; you just have to pay almost double for that bump.
Well, you're kind of making my point. People who blow this kind of money don't care about value. They only want the best. The 3090 doesn't deliver value or the best, so they lost on two fronts. Even at $800 it would be a hard sell, because the 3080's ray tracing would beat it down. That doesn't even include DLSS.

All the recent leaks by RGT, NAAF, and Igor's have proven reliable. They all said "just slightly better RT performance than the 2080 Ti." The 20 series has unusable RT performance.

BTW: the latest versions of DLSS get game support faster because they no longer need per-game training in the cloud.

The sad thing is, I hate Nvidia. I do. But the value is no longer there for me on the AMD side.
 

nofanneeded

Respectable
Sep 29, 2019
AMD is giving you 8 more CUs and a small clock bump for $250 more. That's a 40% price increase.
Those 8 extra CUs are about 11% more performance. Asking $250 more is justified when compared to Nvidia, who asked $800 more for 15% more performance and maybe $200 worth of extra VRAM, so effectively they asked $600 more.

$250 more is not like $600 more.
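To put those figures side by side, here is the arithmetic as a quick Python sketch. It uses the prices and performance deltas quoted in this thread (not measured benchmarks), so treat the outputs as an illustration of the argument, not data:

```python
# Price paid per extra percent of performance, using the figures quoted in
# this thread: 6900 XT = +$250 for ~11% over the 6800 XT; 3090 = +$800 for
# ~15% over the 3080, of which ~$200 is chalked up to the extra VRAM.
def dollars_per_perf_point(extra_price, extra_perf_pct):
    """Cost of each additional percent of performance."""
    return extra_price / extra_perf_pct

amd_premium = dollars_per_perf_point(250, 11)           # ~$22.7 per point
nvidia_premium = dollars_per_perf_point(800 - 200, 15)  # $40.0 per point

print(f"6900 XT premium: ${amd_premium:.1f} per extra % of performance")
print(f"3090 premium:    ${nvidia_premium:.1f} per extra % of performance")
```

By this thread's own numbers, the AMD premium costs roughly half as much per percent of performance as the Nvidia one.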
 

Joseph_138

Reputable
Nov 22, 2016
These are very strange things to say, particularly given that the RX 5600 XT outdid the RTX 2060 both in performance and power efficiency. And, for what you get per dollar, the only disappointment was the RX 5500 XT. The rest of the Navi cards blew away Nvidia in the price-to-performance ratio.

Further, why do you think the definition of "being competitive" absolutely MUST be matching Nvidia tier-for-tier, dollar-for-dollar? That's an extremely narrow definition.

Was grabbing the mid-range with Polaris a failure to compete? Was getting the consoles a failure to compete?

Seems like you selectively grabbed onto the "common wisdom" from a few years ago, and became dogmatic about it. Almost like you feel it's your sacred duty to be a naysayer.

Ultimately, we'll see what happens when the hardware gets reviewed.
If they can't compete at the high end, that shows a failure in their technology. It took them until the 5700 XT just to achieve parity with the GTX 1080 and by the time they released that, Nvidia already had the RTX 2070 and above killing it. If a company has to sell their cards at razor thin margins just to get people to buy them, that's another sign that they are failing to compete. The 5700XT should have been priced at the level of the 2070, not the 2060. It is a company's high end cards that drive sales from the top down and AMD hasn't had a decent high end offering for a long time. It's just a fact. If you take your team red hat off long enough for it to filter through, you'll see that I am right.
 
If they can't compete at the high end, that shows a failure in their technology. It took them until the 5700 XT just to achieve parity with the GTX 1080 and by the time they released that, Nvidia already had the RTX 2070 and above killing it. If a company has to sell their cards at razor thin margins just to get people to buy them, that's another sign that they are failing to compete. The 5700XT should have been priced at the level of the 2070, not the 2060. It is a company's high end cards that drive sales from the top down and AMD hasn't had a decent high end offering for a long time. It's just a fact. If you take your team red hat off long enough for it to filter through, you'll see that I am right.
Ummm, the 5700 XT competes with the RTX 2080/RTX 2070 Super, not the GTX 1080. It costs around what the RTX 2070 did when it was released, just with 10-15% higher performance.
 
If they can't compete at the high end, that shows a failure in their technology. It took them until the 5700 XT just to achieve parity with the GTX 1080 and by the time they released that, Nvidia already had the RTX 2070 and above killing it. If a company has to sell their cards at razor thin margins just to get people to buy them, that's another sign that they are failing to compete. The 5700XT should have been priced at the level of the 2070, not the 2060. It is a company's high end cards that drive sales from the top down and AMD hasn't had a decent high end offering for a long time. It's just a fact. If you take your team red hat off long enough for it to filter through, you'll see that I am right.
Yes, yes, once again you are insisting that YOU are the person who gets to decide what constitutes "able to compete," versus what the market actually treats as "able to compete."

Avro Arrow pretty much covered what needed to be said in response to your post. And @jeremyj_83 countered your more recent flub.

You come in with what appears to be an agenda and make multiple false claims. When you do that, you can't expect to be taken seriously. In the face of actual facts, and the data AMD presented today, do you still want to pursue this "AMD can't compete because I SAY they can't compete" agenda?
 
Those 8 extra CUs are about 11% more performance. Asking $250 more is justified when compared to Nvidia, who asked $800 more for 15% more performance and maybe $200 worth of extra VRAM, so effectively they asked $600 more.

$250 more is not like $600 more.
When you get into these absurd pricing categories, people want the best of the best and don't give a fudge about cost.

So you can price the card two ways: as a value play, or as the best of the best. Obviously the 6900 XT isn't the best of the best, because its RT will be slower than a 3080 and likely tied with a 3070. Then throw in DLSS.

So AMD needs to price on value. And that's just not there.
 
So here are my concerns:

No mention of RT performance. Early leaks suggest 2080 Ti levels.

The benchmarks AMD is showing are run on a Ryzen 5000 CPU with a 500-series chipset.

They claim these are chipset-specific. So what happens when you compare a 3080 to a 6800 XT on a B450 or an Intel platform?
 

InvalidError

Titan
Moderator
The benchmarks AMD is showing are run on a Ryzen 5000 CPU with a 500-series chipset.

They claim these are chipset-specific. So what happens when you compare a 3080 to a 6800 XT on a B450 or an Intel platform?
What is new with the Ryzen 5000 + RX 6000 combo is the extended BAR block size, which lets the CPU directly access the entire GPU memory space and which AMD credits for ~2% improved performance. So you can expect the RX 6000 series to perform ~2% worse when that feature isn't available. Drivers that rely more heavily on direct access to GPU memory may also see a greater impact from PCIe 3.0 vs. 4.0.
 

nofanneeded

Respectable
Sep 29, 2019
What is new with the Ryzen 5000 + RX 6000 combo is the extended BAR block size, which lets the CPU directly access the entire GPU memory space and which AMD credits for ~2% improved performance. So you can expect the RX 6000 series to perform ~2% worse when that feature isn't available. Drivers that rely more heavily on direct access to GPU memory may also see a greater impact from PCIe 3.0 vs. 4.0.
The CPU already has direct access to the card over the PCIe lanes... are you saying the CPU will bypass the GPU's memory controller to read the data from the GPU's GDDR6?
 

InvalidError

Titan
Moderator
The CPU already has direct access to the card over the PCIe lanes... are you saying the CPU will bypass the GPU's memory controller to read the data from the GPU's GDDR6?
The maximum BAR block size on PCIe 3.0 is 256MB, so older CPUs/GPUs have to partition VRAM into 256MB chunks, and the CPU has to tell the GPU to move the window around whenever it wants to directly read from or write to GPU memory. On PCIe 4.0 that limit was removed, which lets GPUs and CPUs that both support extended BAR block sizes expose the entire VRAM as a flat address space: the CPU can access any spot of VRAM without the extra step of pre-selecting the currently accessible region.

This is a bit like expanded memory (EMS) back in the Windows 3.x days before everything went native protected mode: you might have 4MB of total RAM, but only the 64KB page frame mapped into the base memory space is directly accessible at any given time, and every time you want to read from or write to a different block, you have to go through the memory manager to swap it in.
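The windowed-access mechanism described above can be sketched as a toy model in Python. This is not actual driver code; it just counts how often the CPU would have to ask the GPU to remap the BAR window, using the 256MB window and a 16GB VRAM size from the posts above:

```python
# Toy model of pre-extended-BAR VRAM access: with a 256MB BAR window, the CPU
# must have the GPU re-map the window every time it touches an address outside
# the currently mapped 256MB chunk. With an extended BAR covering all of VRAM,
# the whole memory space is one flat window and remaps drop to (at most) one.
WINDOW = 256 * 2**20   # 256MB legacy BAR window (PCIe 3.0 limit)
VRAM = 16 * 2**30      # 16GB of VRAM (RX 6800 XT / 6900 XT class)

def window_switches(addresses, window_size):
    """Count the window remaps needed to touch each address in order."""
    switches, current = 0, None
    for addr in addresses:
        win = addr // window_size  # which window this address falls in
        if win != current:
            switches += 1
            current = win
    return switches

# Scattered accesses across VRAM, one per 256MB chunk:
scattered = range(0, VRAM, WINDOW)
print(window_switches(scattered, WINDOW))  # 64 remaps with a 256MB window
print(window_switches(scattered, VRAM))    # 1 mapping with a flat 16GB BAR
```

The remap in the first case is the round trip the CPU has to take through the GPU before each out-of-window access, which is where the small performance cost comes from.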
 

InvalidError

Titan
Moderator
But AMD claimed we need Ryzen 5000 for this... while Ryzen 3000 already has PCIe 4.0, not to mention Threadripper 3000 as well.
AMD may not have implemented support for extended BAR block sizes in their first-gen PCIe 4.0 designs. Just because the spec raised the limits on some configuration registers does not mean you have to support all of them right out of the gate, if ever.
 

nofanneeded

Respectable
Sep 29, 2019
AMD may not have implemented support for extended BAR block sizes in their first-gen PCIe 4.0 designs. Just because the spec raised the limits on some configuration registers does not mean you have to support all of them right out of the gate, if ever.
So if they did not support it, what was the size of their Ryzen 3000 PCIe 4.0 BAR block? 256MB? 512MB? Your reply does not make any sense...
 

InvalidError

Titan
Moderator
So if they did not support it, what was the size of their Ryzen 3000 PCIe 4.0 BAR block? 256MB? 512MB? Your reply does not make any sense...
If AMD did not bother updating its PCIe BAR support for larger block sizes in Zen 2, then it likely only supports PCIe 3.0's 256MB limit. It wouldn't make much sense to go through the trouble of updating Zen 2 for a single-bit increment to the BAR block size selection when at least six extra bits are required to cover current-day consumer GPU VRAM sizes, more if you include high-end GPUs and some future-proofing.

Also, AMD has a handful of HPC projects eyeing Zen 3. If those customers want a rack-wide flat(ter) memory address space, they'll need the BAR to be large enough for a whole rack full of GPUs and CPUs. Add all the bits AMD can afford to add without incurring an additional lookup latency penalty there.
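The "six extra bits" figure above is easy to verify: each extra bit in a power-of-two size field doubles the maximum window, so the bits needed to grow from one window size to another is the ceiling of the log2 of the ratio. A quick sketch (the 16GB and 24GB VRAM sizes match the cards discussed in this thread):

```python
import math

# Each additional bit in a power-of-two BAR size field doubles the maximum
# window, so growing from `old_bytes` to `new_bytes` needs ceil(log2(ratio))
# extra bits. ceil handles non-power-of-two VRAM sizes like 24GB.
def extra_size_bits(old_bytes, new_bytes):
    return math.ceil(math.log2(new_bytes / old_bytes))

print(extra_size_bits(256 * 2**20, 16 * 2**30))  # 6: 256MB -> 16GB (RX 6000)
print(extra_size_bits(256 * 2**20, 24 * 2**30))  # 7: 256MB -> 24GB (RTX 3090)
```

So covering a 16GB consumer card from PCIe 3.0's 256MB baseline takes exactly six doublings, and anything past 16GB, or a rack full of GPUs, needs more.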
 
