What are you talking about? The model names are completely irrelevant to the point Nvidia was making. Even without exact performance numbers for the 4000 series, it should be pretty safe to assume what they have said is accurate. More performance with less power used.
No, it is relevant. The question in that FAQ asks about the power requirement compared to last gen, and Nvidia says it stayed the same or was reduced (1st point) while delivering more performance (2nd point).
There is no argument about the 2nd point, but the 1st point is 100% wrong and misleading.
Basically, if you are on a 3090 now, you probably want to upgrade to a 4090, not a 4080. If you followed Nvidia's PSU recommendation back then, your PSU will be 750W, which is 100W less than Nvidia's 850W recommendation for the 4090. Are you telling me that is accurate?
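To put numbers on that, here's a quick sketch. The 750W and 850W figures are Nvidia's published PSU recommendations for those cards; the snippet itself is just illustrative:

```python
# Nvidia's recommended PSU wattage for the flagship card, per generation.
# Figures are from Nvidia's published spec pages.
recommended_psu = {
    "RTX 3090": 750,  # W
    "RTX 4090": 850,  # W
}

shortfall = recommended_psu["RTX 4090"] - recommended_psu["RTX 3090"]
print(f"A 3090 owner's PSU is {shortfall}W short of the 4090 recommendation.")
# -> A 3090 owner's PSU is 100W short of the 4090 recommendation.
```

So "stayed the same or reduced" doesn't hold for the one upgrade path most 3090 owners would actually take.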
Let's move on to the not-so-4080 tier. Compared to the 3080, this 4080 tier is a joke because they use a smaller chip for the 16GB variant, and an even smaller chip for the 12GB variant. Placed in the 3000 series stack, the 4080 12GB's chip would be a 3060-class part. At the very least they should have called it a 4070, but no, they want more money for it, so they called it the 4080 12GB. Because of this, comparing them fairly is difficult.

If you just drink the Nvidia Kool-Aid, then yes, Nvidia is kinda right. Kinda, because they didn't compare the 4080 16GB to the 3080 but to the Ti variant. But since they have two 4080s, I guess that's okay by Nvidia's standards. If we want a real comparison, though: the 4080 16GB uses similar power to the 3080 with the same PSU requirement, so a 3080 owner can upgrade to the 4080 16GB without changing their PSU. The 4080 12GB should be called a 4070 at best (and I'm being generous here), which makes it a 285W card versus the 3070's 220W.
Chip-size-wise, the comparison should be the 4080 16GB (320W) to the 3070 (220W), and the 4080 12GB (285W) to the 3060 (170W).
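Run those pairings through the math and the "same or reduced" claim falls apart. A quick sketch using the TDP figures above:

```python
# Gen-over-gen power comparison using the die-size-based pairings above.
# TDP figures (watts) are the announced board powers, so treat as approximate.
pairings = [
    ("RTX 4080 16GB", 320, "RTX 3070", 220),
    ("RTX 4080 12GB", 285, "RTX 3060", 170),
]

for new_card, new_w, old_card, old_w in pairings:
    increase = 100 * (new_w - old_w) / old_w
    print(f"{new_card} ({new_w}W) vs {old_card} ({old_w}W): +{increase:.0f}% power")

# -> RTX 4080 16GB (320W) vs RTX 3070 (220W): +45% power
# -> RTX 4080 12GB (285W) vs RTX 3060 (170W): +68% power
```

A 45-68% power increase on the same die-size tier is the opposite of "stayed the same or reduced".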
Personally, I don't mind the new cards using more power, since they do pack more transistors per area, but Nvidia is definitely trying to make things look better than they are (from a power requirement perspective, not performance).