For the moment it seems that AMD did indeed cheat, in ways such as presenting benchmarks without ray tracing enabled.
One thing to consider is that the current RT-enabled games didn't come out until well after Nvidia launched the RTX 20-series. They've been optimized specifically for Nvidia's implementation of RT, since no RDNA2 cards were available for the developers to test with. It's very possible that AMD's implementation of RT could be faster at some things and slower at others, and existing games might not be optimized to take advantage of the areas where the cards perform well. So, even if they did provide RT performance numbers in some existing games, it wouldn't mean much, unless the developers had time to go through and optimize those existing games for the specific performance characteristics of AMD's new hardware.
It's also possible that at least some of these games implemented RT in a way that uses proprietary Nvidia libraries, since Nvidia worked closely with the developers to add these effects. In that case, it might not be possible to enable RT effects in those titles at all, at least until they get an update following the release of the RX 6000 series.
So, I can see why AMD didn't give too many details about RT, aside from mentioning some games in development being built with their hardware in mind. If I had to guess, RT performance in existing titles probably isn't going to be at the same level, at least at launch; otherwise they would have given some performance numbers. That could certainly improve in the future as developers get their hands on the new hardware, though. Even once independent reviews come out, it might still be unclear how AMD's RT implementation will compare in the long term, at least until we start seeing games optimized for it.
Taking advantage of being both a CPU and a GPU manufacturer is just good sense; it's surprising they've taken so long to do so. It's only logical that systems built exclusively around their hardware should provide some benefit. Maybe Intel and Nvidia will join forces against a common enemy and make similar attempts.
AMD did launch "Hybrid CrossFire" around 10 years ago, which allowed some of their graphics cards to be combined with the integrated graphics in some of their APUs to improve performance. In practice, I don't think it worked that well though. It was mostly just beneficial to some low-end cards, since the integrated graphics weren't fast enough to notably improve the performance of better cards, and in some cases it caused performance anomalies due to the asymmetrical nature of the multi-GPU setup.
As for Intel and Nvidia teaming up, why would they do that? Intel has their own line of dedicated graphics cards coming out sometime next year that will be competing directly with Nvidia, and Nvidia is in the process of buying ARM, a processor design company that competes with Intel in some markets, even if they don't make x86 processors.
No, SAM improves communication between the CPU and the GPU by giving the CPU access to the full frame buffer on the GPU. The purpose of RTX I/O is to bypass the CPU and allow the GPU to pull compressed data straight from a storage device and decompress it on the GPU, reducing work for the CPU. They're two different technologies addressing two different bottlenecks.
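To make the distinction concrete, here's a toy sketch of the two load paths (this is not any real API, just an illustration of where the decompression work lands; `gpu_decompress` is a hypothetical stand-in for hardware decompression):

```python
import zlib

def traditional_load(compressed: bytes) -> bytes:
    # Traditional path: the CPU decompresses the asset,
    # then the result gets uploaded to the GPU.
    data = zlib.decompress(compressed)  # CPU-side work
    return data                         # pretend this is the GPU upload

def gpu_decompress(compressed: bytes) -> bytes:
    # Hypothetical stand-in for the GPU's hardware decompression.
    return zlib.decompress(compressed)

def directstorage_style_load(compressed: bytes) -> bytes:
    # RTX I/O / DirectStorage-style path: the compressed payload is
    # handed to the GPU as-is, and decompression happens there, so
    # the CPU only issues the request.
    return gpu_decompress(compressed)

asset = b"texture data " * 1000
compressed = zlib.compress(asset)
assert traditional_load(compressed) == asset
assert directstorage_style_load(compressed) == asset
```

Both paths produce the same data, of course; the point is just which processor pays for the decompression step.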
And of course, RTX I/O is just Nvidia's implementation of Microsoft DirectStorage, something the RX 6000-series supports as well. It's anyone's guess how DirectStorage performance might compare between cards though, or if such performance differences even matter, since no games support it yet, and apparently won't until at least sometime next year.
If they really want to put some FUD into Nvidia's release, just lift the NDA early for cards sent to reviewers!
I doubt reviewers even have cards yet, seeing as the release is still a few weeks away.
Are these new benchmarks with smart access memory + rage mode on?
This is just with Smart Access Memory enabled, not the auto-overclocking feature.