News: GeForce RTX 3060 Ti GDDR6X May Only Cost $10 More Than The Original

dk382

Commendable
Aug 31, 2021
  1. This is not a store listing. The screenshot here is misleading because this is actually a capture of the shopping cart after you add a regular 3060 Ti FE to cart at Scan, and what we're seeing is a glitch in the checkout process that identifies the card as having GDDR6X. At every other step of the process, the card is identified as the regular 3060 Ti FE with normal GDDR6, including in the actual store listing (because that's what it is).
  2. As such, this price "leak" is actually just the price of the regular 3060 Ti FE and not any new model.
  3. Even if this WERE a price leak for an upcoming GDDR6X model, this is the price the 3060 Ti FE launched at in the UK, so it would represent no price movement up or down. I also really don't like doing a quick currency conversion and assuming that's what the US price would be, especially in this time of rapidly changing exchange rates.
Just trying to combat misinformation, though it'll probably fall on deaf ears... To reiterate, though: Scan has NOT listed a GDDR6X model like this article claims. That may be coming eventually, but it hasn't happened here. All we're seeing is a visual glitch in the Scan shopping cart that describes the regular GDDR6 version as having GDDR6X.
 
Last edited:

Deleted member 2731765

Guest
+1 for the above comment from dk382. Couldn't have said it better myself! It definitely appears to be a glitch in the checkout process and shopping cart.
 

thisisaname

Distinguished
Feb 6, 2009
@dk382 great post and a shame I can only give you one vote.

A quick search for "3060 Ti FE launch price in UK" turns up:
The RTX 3060 line, in either of its configurations, impresses based upon its price-to-performance ratio, especially when considering that the RTX 3060 carries a $329 / £300 price tag and the RTX 3060 Ti has an MSRP of $399 and an RRP of £369.

Even if this were true, I see no reason why the price has to rise. I'm betting that the price of GDDR6X is now lower than the price of GDDR6 was at launch, given that the launch was nearly two years ago.

Edit: spelling.
 
Last edited:

InvalidError

Titan
Moderator
Imagine the irony here if a new 3060 Ti variant ends up having more VRAM bandwidth than what Nvidia wants to push as the RTX 4080 12GB. It would be embarrassing if a 3060 Ti G6X ends up beating the 4080 12GB in bandwidth-intensive scenarios.
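For reference, a quick back-of-the-envelope sketch (the 19 Gbps GDDR6X speed on the 3060 Ti's existing 256-bit bus is an assumption based on the rumored refresh; the 4080 12GB figures are Nvidia's announced 192-bit / 21 Gbps configuration):
```python
# Quick peak-bandwidth comparison: GB/s = (bus width in bits / 8) * data rate in Gbps.
# The 19 Gbps GDDR6X speed for a 3060 Ti refresh is an assumption; the 4080 12GB
# numbers are the 192-bit / 21 Gbps configuration Nvidia announced.
def peak_bandwidth(bus_width_bits, data_rate_gbps):
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth(256, 14))  # original 3060 Ti (GDDR6):       448.0 GB/s
print(peak_bandwidth(256, 19))  # 3060 Ti GDDR6X (assumed speed): 608.0 GB/s
print(peak_bandwidth(192, 21))  # RTX 4080 12GB:                  504.0 GB/s
```
If those numbers hold, the refreshed 3060 Ti would have roughly 20% more raw memory bandwidth than the 4080 12GB.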
 
  • Like
Reactions: thisisaname
No more embarrassing than nVidia having two RTX 4080s with vastly different specs.
That's because the cheaper one was likely intended to be the 4060 Ti or 4070 until the marketing department decided that they could sell it for more if they called it a 4080. : P

Really though, the 4080 12GB's graphics chip is similar in size to the one used in the 3050 and 3060 (Non-Ti), and less than half the size of the one used in the 3080. Even if the process node is more expensive per wafer, the markup on those things has to be huge, and I can't really see much reason for the massive price increases over the 30-series, aside from Nvidia grasping at maintaining previous crypto-shortage price levels.

And the performance of the 4080 12GB isn't even that good. Judging by Nvidia's own charts, it provides maybe 10-15% more rasterized performance over the $700 3080 in most games, for a nearly 30% higher MSRP. Sure, raytracing performance appears to have improved significantly in games that utilize it, and there will likely be big gains in some other titles as well, but it's been 2 years since those cards came out, and one would expect larger performance gains across the board, especially considering the 30% higher price. Maybe if the 4080 16GB were $900, that might be considered a reasonable price-hike, but this is a completely different card utilizing a different graphics chip that was clearly meant to target a more "mid-range" price level.
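To put rough numbers on the value argument (the 15% uplift is an assumption taken from the top of the range above, based on Nvidia's own first-party charts, so treat it loosely):
```python
# Rough perf-per-dollar check using the figures above (Nvidia's own charts, so approximate).
rtx_3080      = {"msrp": 700, "raster_perf": 1.00}  # baseline
rtx_4080_12gb = {"msrp": 900, "raster_perf": 1.15}  # assumed 15% faster, the top of the range above

def perf_per_dollar(card):
    return card["raster_perf"] / card["msrp"]

ratio = perf_per_dollar(rtx_4080_12gb) / perf_per_dollar(rtx_3080)
print(f"4080 12GB perf per dollar vs 3080: {ratio:.2f}x")  # -> ~0.89x, i.e. worse value at MSRP
```
Even granting the generous end of Nvidia's numbers, the 4080 12GB delivers less rasterized performance per dollar than a two-year-old card at its original MSRP.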
 

InvalidError

Titan
Moderator
Really though, the 4080 12GB's graphics chip is similar in size to the one used in the 3050 and 3060 (Non-Ti), and less than half the size of the one used in the 3080. Even if the process node is more expensive per wafer, the markup on those things has to be huge, and I can't really see much reason for the massive price increases over the 30-series, aside from Nvidia grasping at maintaining previous crypto-shortage price levels.
The 30-series was fabbed at Samsung on 8nm, while the 40-series is fabbed at TSMC on a semi-custom 5nm process. Between Samsung being a discount fab and 8nm being an older process than TSMC's 5nm, Ada wafers should be far more expensive than Ampere wafers. IIRC, Samsung raised its fab prices only ~7% vs ~25% for TSMC over the last two years, which is another factor making 40-series chips more expensive per sqmm.

Between the TSMC premium, the ~20% delta in rate hikes between TSMC and Samsung, and 5nm vs 8nm, Ada wafers likely cost 60+% more than Ampere wafers.
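A minimal sketch of how that estimate could shake out (the ~1.4x starting premium for a TSMC 5nm wafer over a Samsung 8nm wafer is an assumption; the ~7% and ~25% hikes are the figures above):
```python
# Illustrative only: the ~1.4x base premium for a TSMC 5nm wafer over a Samsung 8nm
# wafer is an assumption; the ~7% and ~25% price hikes are the figures quoted above.
samsung_8nm_hike  = 1.07
tsmc_5nm_hike     = 1.25
tsmc_base_premium = 1.40  # assumed starting cost premium per wafer, TSMC 5nm vs Samsung 8nm

relative_wafer_cost = tsmc_base_premium * tsmc_5nm_hike / samsung_8nm_hike
print(f"Ada wafer vs Ampere wafer: ~{(relative_wafer_cost - 1) * 100:.0f}% more expensive")
# -> ~64% more expensive, in the same ballpark as the 60+% above
```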

As I said over a year ago, it is highly unusual for AMD and Nvidia to go full-size on a new process right off the bat. Usually, they "grow" into a new process over 2-3 generations that bump performance up 30-60% at a time. This time around, they designed their new stuff to cater to the crypto miners' infinite wallets and may get a sick burn out of it now that ETH has gone PoS.
 
  • Like
Reactions: thisisaname
Between the TSMC premium, the ~20% delta in rate hikes between TSMC and Samsung, and 5nm vs 8nm, Ada wafers likely cost 60+% more than Ampere wafers.
But again, the chip is less than half the size of the one used in the 3080, so they can likely get twice the number of GPUs out of a single wafer.

If they could put a similar-sized graphics chip in the 3050 and 3060, cards positioned at $250 and $330 MSRPs, then a ~60% increase in chip cost isn't going to drive the price up to $900. Even comparing it to the currently somewhat higher prices of those cards, I don't see much reason why the 4080 12GB couldn't have been positioned as a 4070 in the $500 to $600 range.

Likewise, the entirely different chip found in the 4080 16GB is smaller than the one used in the 3060 Ti and 3070, cards with $400 and $500 MSRPs, so a 60% increase in chip manufacturing cost wouldn't account for it being priced at $1200. Even accounting for it having double the VRAM, I don't see much reason why it couldn't have been positioned as the $900 card.
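A rough sketch of that per-die math (the die areas are approximate public figures, ~630 mm² for the 3080's GA102 and ~295 mm² for the 4080 12GB's AD104; the 1.6x wafer-cost factor is the estimate from the post above, and yield is ignored):
```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Common approximation for usable dies on a round wafer (yield ignored)."""
    radius = wafer_diameter_mm / 2
    return (math.pi * radius**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

ga102_mm2, ad104_mm2 = 630, 295     # approximate die areas: 3080's chip vs 4080 12GB's chip
ampere_wafer, ada_wafer = 1.0, 1.6  # relative wafer cost, per the ~60% estimate above

cost_ratio = (ada_wafer / dies_per_wafer(ad104_mm2)) / (ampere_wafer / dies_per_wafer(ga102_mm2))
print(f"AD104 cost per die vs GA102: ~{cost_ratio:.2f}x")
# -> ~0.68x: even with ~60% pricier wafers, the much smaller chip costs less per die
```
Even with substantially pricier wafers, the much smaller die should still come out cheaper per chip, which is why the bill of materials alone doesn't explain the $900 price tag.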
 

InvalidError

Titan
Moderator
Even accounting for it having double the VRAM, I don't see much reason why it couldn't have been positioned as the $900 card.
The reason is simple: duopoly. Since there is almost no meaningful competition (70+% of people still buy Nvidia GPUs despite them costing $100-200 more than AMD's for a given amount of raster performance), Nvidia gets to charge whatever it wants, up to the point where enough people quit buying to hurt its net income.
 

HWOC

Reputable
Jan 9, 2020
Imagine the irony here if a new 3060 Ti variant ends up having more VRAM bandwidth than what Nvidia wants to push as the RTX 4080 12GB. It would be embarrassing if a 3060 Ti G6X ends up beating the 4080 12GB in bandwidth-intensive scenarios.
In real terms the 4080 would probably have faster memory access because the cache architecture is very different on the new cards, and much larger.
 

InvalidError

Titan
Moderator
In real terms the 4080 would probably have faster memory access because the cache architecture is very different on the new cards, and much larger.
Caches only help you as long as the algorithms can break things down in chunks that can comfortably fit in them. Bandwidth-intensive algorithms tend to not benefit much from caches either because their data set is larger than the caches or they only need to go through it once so there is no bandwidth gain from cached data.
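A toy model of that point (the cache bandwidth and hit rates below are made-up illustrative numbers, not measured figures):
```python
# Toy model: average bandwidth seen by the shaders for a given cache hit rate.
# The cache bandwidth and hit rates are made-up illustrative numbers, not measured figures.
def effective_bandwidth(dram_gbs, cache_gbs, hit_rate):
    return hit_rate * cache_gbs + (1 - hit_rate) * dram_gbs

dram, cache = 504, 3000  # 504 GB/s is the 4080 12GB's DRAM bandwidth; 3000 is a placeholder

print(effective_bandwidth(dram, cache, 0.60))  # good locality: ~2002 GB/s, the cache helps a lot
print(effective_bandwidth(dram, cache, 0.05))  # streaming / one-pass data: ~629 GB/s, barely above DRAM
```
With a working set that blows past the cache, or data that only gets touched once, the hit rate collapses and you're right back to being limited by the 192-bit bus.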
 

HWOC

Reputable
Jan 9, 2020
Caches only help you as long as the algorithms can break things down in chunks that can comfortably fit in them. Bandwidth-intensive algorithms tend to not benefit much from caches either because their data set is larger than the caches or they only need to go through it once so there is no bandwidth gain from cached data.
Those are fair points. But AMD seems to have gained benefits from larger caches and whatever else they had to change to take advantage of it. I'm hopeful that nVidia has copied some of their thinking. 🆒
 

InvalidError

Titan
Moderator
Those are fair points. But AMD seems to have gained benefits from larger caches and whatever else they had to change to take advantage of it. I'm hopeful that nVidia has copied some of their thinking. 🆒
Well, GPUs are primarily intended for graphics, and 3D rendering tends to have pretty good data locality since there typically are multiple waves of 16+ shader units working on the same bunch of 8x8-16x16 texture tiles to pump multi-sampled output onto surfaces. Still doesn't change that nerfing the memory bus down to 192 bits on an $800 GPU looks awfully cheap. We used to have $200 GPUs with 256-bit memory.
 

HWOC

Reputable
Jan 9, 2020
Well, GPUs are primarily intended for graphics, and 3D rendering tends to have pretty good data locality since there typically are multiple waves of 16+ shader units working on the same bunch of 8x8-16x16 texture tiles to pump multi-sampled output onto surfaces. Still doesn't change that nerfing the memory bus down to 192 bits on an $800 GPU looks awfully cheap. We used to have $200 GPUs with 256-bit memory.

I wouldn't worry about numbers on a spreadsheet. Personally, I don't care if the memory bus is 64 bits wide, as long as the real-world performance numbers are good. AMD's R9 290 (and a good few other GPUs in the past) had a 512-bit bus...
 
So what I’m really reading, then, is that if I’m a candidate for a 4080 12GB, I should just snatch up a 6950 XT, 6900 XT, 3090 Ti or 3090, or even a 3080 Ti or non-Ti.

The 6900 XT seems interesting since I’ve seen it below $700 recently.