Edit note: I retested today and had a very different experience.
I then remembered that I was copying terabytes of data from the network onto a hard drive in the background. While that isn't a really heavy load per se, and it used devices the game wasn't touching, it evidently still wreaked havoc on game performance.
So I'll scratch the parts that were clearly badly influenced by that background copying.
* * *
I finally got an RX 9070XT today, the PowerColor dual-slot design, the same one that was used for the initial review here. I don't like paying extra for fancy; my cases are solid, banged-up metal, and I'm rebuilding constantly.
The price went down €100 (from €900 to €800) in the meantime; after a few hours of testing I think I know why. What's really fascinating is that the RTX 5070 is available and selling at another €100 less, €700 in Germany.
Anyhow, testing is on a MinisForum BD790i with a 16-core Dragon Range Zen 4 7945HX, which runs at 100 Watts max, paired with 64GB of DDR5-5600-40-40-40-80.
Connection is via PCIe v5 x8, because I'm using the other 8 lanes for 10Gbit Ethernet and 6x SATA.
Was still running a B580 this morning, btw. An RTX 4070 and an RTX 4090 are within a few feet, also on 16-core Ryzens, a 5950X (128GB ECC DDR4-3200) and a 7950X3D (96GB ECC DDR5-5600) respectively; everyone runs WD X850 4TB NVMe drives at PCIe v4 speeds.
Installation was nice and easy; basically the software was already there, since there is a puny little iGPU already included in the IOD.
It's also nicely quiet. FurMark2, and then FurMark2 + Prime95, is typically the first round in any of my testing, to verify stability, power and cooling.
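If you want to script that first round, a minimal sketch (Python) could look like the following; the install paths and the Prime95 torture-test switch are assumptions you'd adjust for your own machine:

```python
# Minimal burn-in sketch: start a GPU load and a CPU load together, wait,
# then shut both down. Paths and flags are assumptions for illustration.
import subprocess
import time

FURMARK = r"C:\Program Files\Geeks3D\FurMark2\furmark.exe"  # assumed install path
PRIME95 = r"C:\prime95\prime95.exe"                         # assumed install path

def burn_in(minutes: int = 30) -> None:
    gpu_load = subprocess.Popen([FURMARK])         # FurMark2: GPU stress
    cpu_load = subprocess.Popen([PRIME95, "-t"])   # Prime95 torture test (switch assumed)
    try:
        time.sleep(minutes * 60)                   # watch temps and the wall meter meanwhile
    finally:
        for proc in (gpu_load, cpu_load):
            proc.terminate()                       # end both loads

if __name__ == "__main__":
    burn_in(30)
```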
I dislike the AMD control center: there is nothing useful on three out of four main tabs, it violates every style guide ever published, and it tries to push you into giving up all your data, now also to a local chatbot. Meanwhile, keeping your global configuration changes across restarts apparently requires an AMD login and their bot on your PC: it won't let you just store a global undervolt or power limit!
The vast majority of screen real estate is about upselling or installing something as useless as their CHAT, and you can't even permanently disable that.
Please fire that team or have them do something else; there is "contra value" in what they do today!
I hate Nvidia's "Experience" and "App" just as passionately, which is why I never install them and make do with a simple control panel: I don't want more from anyone, please! Intel has always been the worst, because on their iGPUs there wasn't even anything to adjust... And no, ARC isn't any better.
Even if people use GPUs for gaming, that doesn't mean that just getting it going should be "a wonderfully different adventure every day"...
No, I'm not a masochist asking for a StorCLI, just a normal control panel, please!
* * *
Yes, the 9070XT will beat the B580, pretty near everywhere. At three times the price (before the recent price changes), that's expected.
But it wasn't anywhere near the clear-cut home run that all those extra months invested in maturity should have delivered.
Nor does it reflect the difference in power, transistor and VRAM bandwidth budgets.
Currently I'd say that I see very little difference between Intel and AMD in terms of game compatibility (functional) and far too little difference even in performance, considering the budgets they work with.
That's not where I expected the RX 9070XT to be, quite honestly, and I love the underdog just like anyone with a heart.
Unfortunately, the title I most like playing with my youngest kid is ARK. Both of them, actually: ARK Survival Evolved (ASE) originally, ARK Survival Ascended (ASA) currently. Those games are wonderful inside, absolutely disastrous on any hardware. They are both absolute pioneers on Unreal, which means they picked up those engines while they were still basically in beta, Unreal 4 for ASE and Unreal 5 for ASA. And then they never upgraded the game, only the content.
Optimizations, support for new APIs or tricks: not that I ever noticed. They offer 3rd-party content creation kits and enjoy a huge following for those add-ons. Shifting the baseline just isn't an option, I guess. With "Evolution" at their heart, they understand that "disruption" typically implies extinction. One of the things Clayton Christensen, the original inventor of that theory, found himself most sorry about is using the wrong word: he really meant accelerated evolution, not breaking things to the point where they can no longer be fixed.
Long story somewhat shorter: ASE and ASA performance is the high bar to reach. Cyberpunk 2077 is a walk in the park when it comes to playable FPS. But ASA does support DLSS; without it, even my RTX 4090 won't cut it at 4k.
The other performance outlier is Monster Hunter Wilds (MHW), only the demo so far. My daughter and her friends just love the earlier game; I just get confused.
All I know is that if the demo isn't smooth, my daughter is unlikely to be happy.
But those two most critical titles, ASA and MHW, aren't doing well at all on anything not Nvidia.
Where ASA suffers from supporting nothing but DLSS 3, MHW, much like Cyberpunk 2077, will happily take every API known to the creators of eye candy; the results are still very discouraging.
MHW consists of two parts: a "cinematic intro", which basically uses the game engine to produce a CGI movie, and a game simulation run.
The cinematic intro has tons of cuts and very rapid scene changes that would never occur in a real game (e-sports participants excluded), yet it appears buttery smooth on both RTX cards, the 4070 and the 4090, at 4k, once you accept switching from "ultra" to "high" on the 4070. "Frame gen" will add 50% more frames with hardly any visual quality loss, and even ray tracing is fully acceptable, though I fail to see it offering huge visual improvements on a desert landscape with nearly no reflections.
It's quite ok on the B580 too. Yes, MHW knows how to set sensible defaults there, so the experience isn't quite ready for >60 FPS at 4k, but my target is delivering eye candy to my daughter's 27" 1440p@144Hz screen, currently driven by an RTX 3070, which fails terribly at ASA.
Edit: MHW was just as smooth on the AMD without the background copying going on.
But MHW on the RX 9070XT is a pure disaster: in the cinematic part entire cuts are missing; in the gaming sections the action stops for tens of seconds. Enabling frame gen makes the reported FPS go crazy, but the gaps only get larger: the reported FPS have nothing to do with what's happening on screen, effectively low single-digit FPS.
On the RTX 4090 at max everything, MHW is a pleasure to watch, only to expose the sheer nonsense of what's actually displayed on screen: the actors, the action, the physics of the world, bodies' interactions with sand and water are outrageous physical impossibilities... and the girls couldn't care less! It's Anime vs. digital twin, just in case you need a label.
The RTX 4070 isn't that far behind even at 4k; going from "ultra" to "high", or even to the target 1440p, has it beating the RX 9070XT into a hapless pulp, while the B580 actually does a lot better.
Now that's not the experience outside those two titles. Tons of stuff works just fine: Doom, Far Cry 3-6 or Primal, any Tomb Raider known to man, they all just work.
But those also work on a GTX 1070... not exactly a compelling argument for new hardware.
Most likely it's not really a reflection of what AMD (and Intel) designed in hardware.
It's a reflection of the fact that every publisher who wants their console titles to also run well on PCs uses an Nvidia GPU to port or design the game. GPU diversity and inclusion simply isn't happening.
And with the RTX 5070 now cheaper than the RX 9070XT, the only question is whether I want to shell out €300 extra for an RTX 5080, which the RX 9070XT matches in hardware specs, except that AMD unfortunately can't put that theoretical power on the street.
P.S.
Of course I did some LM Studio tests, too. The Vulkan runtimes on both Intel and AMD are surprisingly strong. I don't know if Nvidia sabotages Vulkan, but only CUDA will deliver there...
Perhaps somebody should dig a little deeper here, I smell a rat!
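For anyone who wants to repeat something similar, a rough sketch is to time a generation against LM Studio's local OpenAI-compatible server and divide tokens by seconds. The port, endpoint, model name and usage field below are assumptions about that server, so check what your own instance actually exposes:

```python
# Crude tokens-per-second measurement against a local LM Studio server.
# URL/port, model name and response fields are assumptions for illustration.
import time
import requests

URL = "http://localhost:1234/v1/chat/completions"  # assumed default local endpoint

def tokens_per_second(model: str = "local-model", max_tokens: int = 256) -> float:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": "Explain PCIe lane bifurcation."}],
        "max_tokens": max_tokens,
    }
    start = time.time()
    reply = requests.post(URL, json=payload, timeout=600).json()
    elapsed = time.time() - start
    generated = reply["usage"]["completion_tokens"]  # OpenAI-style usage accounting (assumed)
    return generated / elapsed

if __name__ == "__main__":
    print(f"{tokens_per_second():.1f} tokens/s")
```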
P.P.S.
One of the reasons I wanted the ability to limit power usage globally is that the card currently runs in a server optimized for low power... which means the PSU is normally a 400 Watt unit, upgraded to a 500 Watt unit for the duration of the testing. I'm fresh out of 1000W spares, evidently.
That should be enough, given 300 Watts of TDP for the GPU, 100 Watts for the (mobile) CPU and 100 Watts for everything else. But even with the GPU TDP set to 250 Watts, and peaks observed on the power meter at the wall at around 450 Watts (a sustained 490 Watts won't cause a reboot), some spikes are evidently much higher, because reboot it does before the power meter even registers the peak.
That's not ok; a configured limit must be respected.
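To put numbers on it, here is the back-of-the-envelope math; the steady-state figures are the ones quoted above, while the spike factors are assumptions meant to illustrate why millisecond transients can trip a 500 Watt PSU that the wall meter never sees:

```python
# Power budget sanity check. Steady-state numbers are from the text above;
# the transient spike factors are assumed for illustration only.
PSU_RATING_W = 500   # temporary upgrade from the usual 400 W unit
GPU_LIMIT_W  = 250   # RX 9070XT with the TDP dialled down from 300 W
CPU_MAX_W    = 100   # Ryzen 9 7945HX (mobile part)
REST_W       = 100   # board, RAM, NVMe, fans, 10Gbit NIC, SATA

steady = GPU_LIMIT_W + CPU_MAX_W + REST_W
print(f"steady state: {steady} W against a {PSU_RATING_W} W PSU")  # 450 W: fits

for spike_factor in (1.5, 2.0):   # assumed millisecond-scale GPU excursions
    transient = GPU_LIMIT_W * spike_factor + CPU_MAX_W + REST_W
    verdict = "likely trips the PSU" if transient > PSU_RATING_W else "still fine"
    print(f"GPU spike x{spike_factor}: {transient:.0f} W -> {verdict}")
```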