The 6900XT is SLOWER than a 3080 in ray tracing. It also lacks the quality AI upsampling of DLSS 2.x, which is a huge performance boost. AMD's implementation is delayed until spring and rumored to be inferior.
It's rumored to be inferior by who? At the higher resolutions where upscaling makes the most sense, people are likely to be hard-pressed to pick out pixel-level differences between the two upscaling methods while playing a game anyway. If they improve on AMD's existing Contrast Adaptive Sharpening algorithm, great, but I doubt one solution will look massively better than the other in real-world scenarios at higher resolutions.
As for their RT implementation, I suspect it won't be quite on par with Nvidia's, or else they would have shown off some direct performance comparisons during the announcement. It could be faster at some effects than others, though, and we might not have a clear picture of how the two compare until games come out that are optimized with both implementations in mind.
Those numbers are from over three years ago, though, at a time when RAM prices in general were much higher, so I'm not sure how relevant they are today. And they didn't estimate $175 for 8GB, but rather "around $150", plus another estimated $25 for packaging the memory on an interposer. Had they gone with an HBM2 solution, though, those interposer costs probably wouldn't have scaled linearly for a higher-VRAM variant of the card. There's also the possibility that placing HBM2 close to the graphics processor would have allowed them to get similar or better performance out of a somewhat smaller chip, and to reduce power draw and heat output a bit, potentially saving costs in other areas. So, it's not exactly a direct comparison, but we can assume the cost savings of GDDR6X were enough to convince Nvidia to go with that instead.
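Just to put rough numbers on that non-linear scaling point, here's a quick back-of-envelope sketch in Python. The 8GB figures are the old estimates mentioned above; the 16GB interposer figure is purely my own guess to illustrate the shape of the argument, not something from the article.

```python
# Illustrative only: the 8GB memory/interposer numbers are the ~3-year-old
# estimates cited above; the 16GB interposer figure is an assumption, not sourced.
hbm2_cost_8gb = 150        # estimated USD for 8GB of HBM2 (per the old article)
interposer_8gb = 25        # estimated USD for interposer packaging

hbm2_cost_16gb = 2 * hbm2_cost_8gb  # memory cost roughly doubles with capacity
interposer_16gb = 35                # assumed: grows somewhat, but nowhere near 2x

print(f"8GB HBM2 config:  ~${hbm2_cost_8gb + interposer_8gb}")    # ~$175
print(f"16GB HBM2 config: ~${hbm2_cost_16gb + interposer_16gb}")  # ~$335, not ~$350
```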
That GDDR6 pricing article is also almost two years old, from not long after the RTX 20-series cards started using it. And as the article points out in the second paragraph...
"The mentioned prices correspond to a purchase quantity of 2,000 pieces. Manufacturers of video cards are likely to buy larger quantities, which means they could get the parts cheaper. 3dcenter.org estimates the possible discount at 20 to 40 percent, the prices in the table below would only be estimates."
So, according to that source, a company like Nvidia might only have been paying somewhere in the vicinity of $7-$9 per GB for the GDDR6 used in their cards a couple of years ago. From what I've heard, GDDR6X does in fact cost more, but how much more is pretty vague. More than $100 for 10GB seems like a safe bet, but I would be surprised if Nvidia were paying over $150 for it.
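To make that arithmetic explicit, here's a quick illustrative sketch in Python. The $11.70/GB starting point is my own rough assumption for what per-GB GDDR6 list pricing looked like around then, not a figure quoted above; the 20-40% discount is the one from the article.

```python
# Back-of-envelope sketch of the bulk-discount math. The list price per GB
# is an assumed ballpark for GDDR6 at the time, not a figure from the article.
list_price_per_gb = 11.70  # assumed USD per GB at 2,000-piece quantities

for discount in (0.20, 0.40):
    bulk = list_price_per_gb * (1 - discount)
    print(f"{discount:.0%} discount: ~${bulk:.2f}/GB -> ~${bulk * 10:.0f} for 10GB")
```

That works out to roughly $70-$94 for 10GB of plain GDDR6, so even with a healthy GDDR6X premium on top, landing somewhere between $100 and $150 seems plausible.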