$400 for a card aimed at 1080p?
Having only a third of the Infinity Cache of the 6700 XT is undoubtedly going to hurt 1440p performance, which is likely why AMD focused on 1080p in their comparisons. Infinity Cache aside, the card has around 43% less raw memory bandwidth than the 3060 Ti, 2060 SUPER, or 5700/5700 XT. And while PCIe 4.0 may be becoming more common now, on a 3.0 setup the x8 connection will undoubtedly hurt performance further whenever the VRAM buffer is exceeded.
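For reference, here's a back-of-the-envelope sketch of where that ~43% figure comes from, assuming this card (presumably the 6600 XT) uses a 128-bit bus with 16 Gbps GDDR6, versus the 256-bit, 14 Gbps configurations on the 3060 Ti, 2060 SUPER, and 5700/5700 XT:

```python
# Peak memory bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (Gbps)
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

new_card = bandwidth_gb_s(128, 16.0)     # 256 GB/s (assumed 128-bit, 16 Gbps GDDR6)
rtx_3060_ti = bandwidth_gb_s(256, 14.0)  # 448 GB/s (same config on 2060 SUPER, 5700/5700 XT)

print(f"{1 - new_card / rtx_3060_ti:.1%} less bandwidth")  # 42.9% less
```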
It wouldn't be at all surprising if the 3060 (non-Ti) outperforms it at higher resolutions in demanding games. Throw in ray tracing, and this card is bound to fall even further behind, and it probably won't age as well once features like that become the norm. I didn't consider the 3060 (non-Ti) to be all that attractively priced even at its $330 MSRP, but this is even less attractive.
These kinds of specifications might have been decent at $100 less. Certainly, the graphics card market has been a mess for a number of months, but jacking up the MSRP of lower-end cards isn't going to help with that once prices subside. I would rather have seen them delay the launch by a few months and release the card at a more competitive price, at least no more than the 3060's MSRP. Of course, with much of AMD's 7nm manufacturing capacity directed toward the millions of console APUs they are contractually obligated to make, along with Ryzen CPUs that undoubtedly carry higher profit margins, they probably don't consider the graphics card market worth competing in right now.
Given that no graphics card is restricted by PCIe 3.0 x16 bandwidth, PCIe 4.0 support offers no advantage.
Except for the majority of people still on PCIe 3.0 systems. For anyone with a 3.0 motherboard or CPU, these cards will be limited to only 3.0 x8. And we already saw how badly that can affect performance with the 5500 XT 4GB. Sure, there's double the VRAM here compared to that, but that was a budget $170 card, while people buying a card in this price range will be expecting to run games at their highest settings and at resolutions above 1080p.
And even AMD's new 5600G and 5700G APUs lack PCIe 4.0 support, a tradeoff of incorporating their integrated graphics. So if someone builds a new system with one of those, then decides to upgrade to a dedicated card once prices subside, they'd have a 3.0 x8 connection limiting these cards' performance, and would likely be better off with one of Nvidia's options around this price range. And if AMD is trying to popularize x8 connections on $400 graphics cards, that's arguably a good reason not to pick one of their new APUs that can't run these cards at 4.0 speeds.
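To put rough numbers on those link widths (a sketch using nominal post-encoding throughput per lane; real transfers carry a bit more protocol overhead):

```python
# Nominal PCIe throughput per lane in GB/s, after 128b/130b line encoding
PER_LANE_GB_S = {"3.0": 0.985, "4.0": 1.969}

def link_bandwidth_gb_s(gen: str, lanes: int) -> float:
    return PER_LANE_GB_S[gen] * lanes

print(link_bandwidth_gb_s("3.0", 16))  # ~15.8 GB/s: a full x16 card on a 3.0 board
print(link_bandwidth_gb_s("4.0", 8))   # ~15.8 GB/s: an x8 card on a 4.0 board
print(link_bandwidth_gb_s("3.0", 8))   # ~7.9 GB/s: the same x8 card on a 3.0 board
```

In other words, an x8 card loses nothing on a 4.0 board, but gives up half its link bandwidth the moment it's dropped into a 3.0 system.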
One contrast I see between AMD's performance slides and NVIDIA's is that AMD always compares themselves against NVIDIA, while NVIDIA compares against their own previous cards. There are likely some outliers, but a casual five-minute search of the interwebs, at least for Ampere, bears this out.
If you have to keep using someone else to prop up your own product, I feel like you don't have much confidence that it can stand on its own.
That's because Nvidia still holds over 80% of the dedicated graphics card market. It makes sense to compare against the cards people are most likely to own: if 4 out of 5 people are currently running an Nvidia graphics card, those are the people AMD needs to convince to switch brands if they want to expand their market share. Aiming their marketing at their existing user base wouldn't be as effective. Likewise, while Nvidia may see some benefit in comparing their cards against AMD's in certain cases, for the most part they are trying to convince people to upgrade from their own cards. So it just makes sense from a marketing perspective.