Films and games at 30fps are not at all comparable, since they work differently. When recording a film, the camera is almost continuously capturing motion onto each frame before moving on to the next, resulting in natural motion blur that provides a smooth transition between frames. With games, each frame is rendered at a single instant in time, without those smooth transitions, as accurately simulating them would require rendering many sub-frames and merging them together. As a result, the transition from one frame to the next tends to appear quite choppy at 30fps, unlike films, where it can appear smooth.
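Just to illustrate what that brute-force "render many sub-frames and merge them" approach would look like, here's a toy sketch (not how any real engine does it; the render_frame function and frame counts are made up for the example):

```python
import numpy as np

def accumulate_motion_blur(render_frame, t_start, t_end, subframes=8):
    """Approximate camera-style motion blur by averaging many sub-frames.

    render_frame(t) is assumed to return an HxWx3 float array for time t.
    More subframes means smoother blur, but each one costs a full render,
    which is exactly why games don't do this in practice.
    """
    times = np.linspace(t_start, t_end, subframes)
    frames = [render_frame(t) for t in times]
    return np.mean(frames, axis=0)

# Hypothetical usage: blur a white block that crosses the frame in 1/30 s.
def render_frame(t):
    img = np.zeros((90, 160, 3))
    x = int(t * 30 * 160) % 160   # crosses the full width once per frame interval
    img[40:50, max(0, x - 5):x + 5] = 1.0
    return img

blurred = accumulate_motion_blur(render_frame, t_start=0.0, t_end=1.0 / 30.0)
```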
Some games provide the option to add a motion blur effect, but it's only roughly simulated and doesn't look nearly as good, nor does it completely remove the choppiness. Filmmakers will also specifically control the movement of the camera and the subjects it's recording to prevent anything important from getting blurred, while that's often not the case in games, where the player controls the camera. For anything with significant camera movement, like an FPS game, even a perfect motion blur implementation would leave things looking rather blurry when turning the camera at 30fps.
What about the reference cooler? It's been very well received in the couple of reviews I've read so far; one even suggested that the 6800 XT was quieter than the RTX 3080 at stock settings, while maintaining very reasonable temperatures and putting out less heat thanks to the card's lower power draw.
If much of the performance gain is a result of keeping frequently accessed framebuffer data in the cache, that's not likely to change much unless resolutions increase further. And if their new "Super Resolution" upscaling feature proves to be a good alternative to DLSS, which I suspect it will, then the amount of framebuffer data accessed during rendering might even shrink when it's active, since the scene would be rendered at a lower internal resolution before being upscaled. Games will likely require more VRAM in the future, but that could just as easily hurt performance more on Nvidia's current cards, as they are at a VRAM disadvantage and more likely to have to shuffle data out to system RAM.
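As a rough back-of-the-envelope check (the buffer formats here are my own assumptions, not AMD's numbers), here's why render resolution matters so much for a 128 MB cache:

```python
# Rough back-of-the-envelope: how big are common render targets relative
# to the 128 MB Infinity Cache? (Illustrative figures, not AMD's data.)
INFINITY_CACHE_MB = 128

def target_mb(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel / (1024 ** 2)

for name, (w, h) in {"1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    color = target_mb(w, h, 8)   # assuming an RGBA16F color buffer
    depth = target_mb(w, h, 4)   # assuming a 32-bit depth buffer
    print(f"{name}: color ~{color:.0f} MB + depth ~{depth:.0f} MB "
          f"of {INFINITY_CACHE_MB} MB cache")

# 1440p: color ~28 MB + depth ~14 MB -> plenty of room left over
# 4K:    color ~63 MB + depth ~32 MB -> a much tighter fit
```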
$280 for an RX 5700 was not exactly typical pricing, though, and chances are it was one of the models with questionable cooling, which is likely why they were trying to get rid of it. Going by more typical RX 5700 pricing, the suggested price of the 6800 is around 60-75% higher, which puts it roughly in line from a price-to-performance standpoint. Considering that cards in this higher-end bracket generally fare significantly worse on a performance-per-dollar basis, that's not too bad, especially since the 5000-series cards launched only a little over a year ago.
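Quick sanity check on that math, using a ballpark RX 5700 street price that's my own assumption rather than a hard figure:

```python
# Ballpark sanity check on the price gap (the 5700 street price is an assumption).
rx5700_typical = 340      # rough typical RX 5700 street price in USD
rx6800_msrp = 579         # AMD's suggested price for the 6800

increase = (rx6800_msrp - rx5700_typical) / rx5700_typical
print(f"~{increase:.0%} price increase")   # ~70%, within that 60-75% range
```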
That seems unlikely within a "couple years", especially since the newly released consoles provide lower RT performance than these cards. So, we're likely to see most games continue to be designed first and foremost for rasterized rendering, with some RT effects sprinkled on top, throughout this console generation. And for most effects, implementing both a rasterized and a raytraced version shouldn't be much extra work, as commonly used game engines take much of that burden off the developer.
Yeah, it will be interesting to see whether these cards gain any ground with raytracing as new games are optimized for them. Current RT games were optimized exclusively for Nvidia's raytracing hardware, since that's all that was available, and as such might not account for the particular strengths and weaknesses of AMD's implementation. AMD may also improve performance through driver updates, since there's a lot more to realtime raytracing than just casting rays, and many parts of the process, like denoising, can probably be optimized further on their end.
- I see nothing indicating that Nvidia's cooler design is "better". It might be more expensive to make, but it's also on a card that puts out more heat.
- You're basing this "better image quality" on the one newly-released game where RT reflections didn't seem to be rendering correctly? >_>
- Many people are not all that fond of how the 30-series cards look. Looks are subjective, so what looks good to one person might not to another.
- They have the exact same number of ports: four each. The USB-C port can be used as a DisplayPort output with a simple adapter cable, and it offers even more connectivity options for USB-C displays, adding USB data and 27 watts of power delivery, making it arguably better than a regular DisplayPort connection.
Something tells me hobbling consumer cards to impede mining would also hurt their gaming and application performance to at least some extent, so that's probably impractical. And miners are not going to buy a card that costs 4x the price and can't easily be resold, since they likely wouldn't get a worthwhile return on their investment, and when the mining market inevitably crashes, they'd be stuck with hardware they can't sell off to recoup some of their costs.