I think the real reason here is that Nvidia originally expected to sell these cards at significantly higher prices than they ended up charging. I wouldn't be surprised if they got their hands on RDNA2 hardware (or at least performance and pricing data) at some point and discovered it was better than expected, and that AMD was going to wipe the floor with them if the card that ultimately became the 3080 had been priced near the 2080 Ti, and the one that became the 3070 near the 2080. Marketing the 3080 as a "3080 Ti" for $1,000 and the 3070 as a "3080" for $700 would still have delivered a "relatively decent" ~25-30% performance gain over the hardware they launched two years prior. Instead, they priced them at a level where it should have been obvious that supply wasn't going to keep up. Even $100 higher, demand would still have outstripped supply, so the only reasonable explanation for leaving that money on the table is that they knew it was necessary to avoid looking bad next to the competition.
That could also explain why the 3080 and 3090 have much higher TDPs than has generally been the norm. They may originally have planned TDPs in the 250-300 watt range, then decided they needed to push the chips closer to their limits to stay as competitive with AMD's hardware as possible, even if that meant fitting them with abnormally large coolers. It's pretty clear they're worried about the competition, whether that's AMD or eventually Intel. Intel is likely still some time away from launching high-end consumer graphics hardware, though, and that threat could always have been handled with a "SUPER" refresh later on, so AMD is likely the one they're concerned about.