This is due more to memory architecture than anything else. The memory bus width dictates (at least via a multiplier) how much memory the GPU can support*.
The 3060 and 2060 would be laughed at if they only had 6GB of VRAM; the next step up is 12GB. NVIDIA counted on some buyers just looking at the amount of VRAM and thinking, "more equals better/faster."
*Not getting into non-uniform bus widths that use different-size VRAM modules or GDDR clamshell mode, as those are outliers and not the norm.
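The multiplier point above can be sketched with some quick arithmetic. This assumes uniform modules on standard 32-bit GDDR6 channels and the two common retail module densities (1GB and 2GB); the function name is just for illustration.

```python
# Sketch: VRAM capacities implied by a given bus width, assuming one
# GDDR6 module per 32-bit channel and the common 1GB/2GB module densities.
def vram_options(bus_width_bits, module_sizes_gb=(1, 2)):
    channels = bus_width_bits // 32          # one module per 32-bit channel
    return [channels * size for size in module_sizes_gb]

print(vram_options(192))  # 3060's 192-bit bus -> [6, 12]
print(vram_options(256))  # a 256-bit bus     -> [8, 16]
```

So with a 192-bit bus, the uniform options really are just 6GB or 12GB, which is why the jump skips 8GB entirely.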
The 2060 does offer 6GB of VRAM, at least in its original design, though it came out a few years earlier, and that capacity wouldn't make as much sense for the 3060. And yes, the bus design would limit memory bandwidth if Nvidia gave the 3060 only 8GB of VRAM, at least with the graphics processor as designed. We see them do that with the 3050, which uses the same chip with a third of the processor disabled, but it wouldn't really make sense for a card using nearly all of it.
What may have happened is that Nvidia decided they needed to change their plans for the lineup relatively late in development, after the graphics processors were already designed. Prior to the 30-series launch, leaks were suggesting the cards would have significantly more VRAM than their 20-series counterparts. That didn't happen.
It's possible that Nvidia caught wind that AMD was going to be a lot more competitive at the enthusiast level than they originally anticipated. So they had to offer more performance at a given price at the high end, and part of making that happen involved cutting VRAM so card designs could hit lower price points than they would have otherwise. Perhaps memory being more expensive than anticipated played into that as well. Of course, that was all disrupted by the crypto market, and few of the cards actually sold near those price points.
For example, the 3080 might not have originally been intended as a $700 card, but as something priced higher with more VRAM, possibly marketed as a "3080 Ti," while the card that became the 3070 might have originally been planned as the 3080, again with more VRAM. Or perhaps they would have utilized different numbers of enabled cores and memory channels, depending on how early in production the decision was made.
This left the 3060 in an odd place, though, since it was using a graphics chip designed to pair with either 6 or 12GB of VRAM for optimal performance. That would have fit in well if the higher-end cards were all equipped with 12-16GB, but not so much with them using less. Of course, had the chips been designed around the adjusted memory amounts from the start, these mid-range cards could have been built to utilize 8GB without a loss in performance.
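To make that last point concrete, here's a rough sketch (same assumptions as before: 32-bit channels, uniform 1GB or 2GB GDDR6 modules) of which hypothetical bus widths would land on 8GB exactly:

```python
# Sketch: bus widths that yield a target capacity with uniform modules,
# assuming 32-bit channels and 1GB/2GB GDDR6 module densities.
def buses_for_capacity(target_gb, module_sizes_gb=(1, 2), max_channels=12):
    configs = []
    for size in module_sizes_gb:
        if target_gb % size == 0 and target_gb // size <= max_channels:
            channels = target_gb // size
            configs.append((channels * 32, size))  # (bus width in bits, module GB)
    return configs

print(buses_for_capacity(8))  # [(256, 1), (128, 2)]
```

Of those two, only a 256-bit design keeps the bandwidth up; a 128-bit bus would halve it relative to 256-bit at the same memory clock. So an 8GB mid-range chip without a performance penalty would have needed to be laid out with eight memory channels from the start.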