News: Nvidia Ampere Flagship GPU Reportedly Features Up to 24GB GDDR6X and 350W TDP

Hmm, this cooler design is pretty strange. It looks like some kind of push-pull arrangement, and the PCB would have to be shorter than usual for the rear fan to sit there. In fact, the PCB could follow the cooler's X design and end in an angled cut.
 
Whoa. So is the rear fan hanging off the PCB?

More info on it here:

https://www.tweaktown.com/news/7308...ler-rumored-to-cost-150-on-its-own/index.html

Leak says it is a shorter PCB with a cutout in it. This won't be the PCB reference design that AIBs get. If true, the Founders Edition will no longer be the go-to version for water cooling. I don't understand why Nvidia would develop such an exotic and reportedly expensive cooling solution for their own branded cards.
 
More info on it here:

https://www.tweaktown.com/news/7308...ler-rumored-to-cost-150-on-its-own/index.html

Leak says it is a shorter PCB with a cutout in it. This won't be the PCB reference design that AIBs get. If true, the Founders Edition will no longer be the go-to version for water cooling. I don't understand why Nvidia would develop such an exotic and reportedly expensive cooling solution for their own branded cards.
If you want to be a leader, you need to do "exotic cooling solutions" that push trends.

Remember, Nvidia was one of the first to put vapor chambers on GPUs.

Now, it's more common.
 
Meh, 320W is insane. I thought power consumption would go down, or at least stay the same, when moving to a smaller process technology.

Wasn't this series supposed to be on 7nm, but they ended up having to go with 12?

If that's true then maybe they were shooting for similar TDP with the expectation of using 7nm. Explains the cooler as well.

Those TDPs are pretty high for mainstream gaming cards ... it would seem odd to me if those were their original TDP targets.
 
We have to keep in mind the greater thermal densities of the die shrinks. Like how Pascal is 16nm and Turing is 12nm... Ampere is 7nm.

Increased thermal density doesn't necessarily increase TDP though when moving to a smaller node. Usually you still gain thermal efficiency as well.

Thanks for the clarification on the node ... must have been a bad rumour I read somewhere, or I'm confusing that with something else entirely.

So if the 2070 has a 225W TDP ... and we believe the headline that leaked a while back about Ampere being 50% faster and 50% more efficient, that means the 3xxx series should have maybe 4 times the performance of Turing ... ?😉
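Rough back-of-the-envelope math, naively stacking the rumored numbers above (the 225W figure is this post's assumption and gets corrected in the replies; none of this is confirmed):

```python
# Naive stacking of the rumored claims -- every number here is a rumor or an assumption
turing_tdp = 225        # 2070 TDP as claimed above (the replies below put it lower)
ampere_tdp = 320        # leaked TDP for the next x70 card
arch_speedup = 1.5      # "50% faster" headline
efficiency_gain = 1.5   # "50% more efficient" headline, read as extra perf per watt

# Multiply the architectural gain, the efficiency gain, and the bigger power budget
relative_perf = arch_speedup * efficiency_gain * (ampere_tdp / turing_tdp)
print(f"~{relative_perf:.1f}x Turing performance")  # ~3.2x, hence the winking "maybe 4 times"
```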
 
No, the 2070's TDP is 175W. The 2070 Super is 215W. That's how we know this leak is BS.
Those are defaults - they easily pull more than that.
RTX 2070 has a default of 185W, with a limit of 215W: https://www.techpowerup.com/vgabios/204549/nvidia-rtx2070-8192-180831
RTX 2070 Super, 215W default, 260W max: https://www.techpowerup.com/vgabios/212046/nvidia-rtx2070super-8192-190531
That doesn't account for the fancier aftermarket models, which tend to have higher limits, like this Gigabyte Aorus 2070 Super with a 314W limit: https://www.techpowerup.com/vgabios/212861/gigabyte-rtx2070super-8192-190625
My 1080Ti is rated for 250W, but pulls 300W quite often.
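If you want to check this on your own card, the NVML power-management queries expose the default limit, the enforced limit, and the board's min/max range. A minimal sketch, assuming the nvidia-ml-py (pynvml) bindings are installed; exact values depend on your vBIOS and driver:

```python
# Query default vs. enforced vs. min/max power limits via NVML (values are reported in milliwatts)
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetPowerManagementDefaultLimit,
    nvmlDeviceGetPowerManagementLimitConstraints,
    nvmlDeviceGetEnforcedPowerLimit, nvmlDeviceGetPowerUsage,
)

nvmlInit()
try:
    gpu = nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    default_w = nvmlDeviceGetPowerManagementDefaultLimit(gpu) / 1000
    min_w, max_w = (v / 1000 for v in nvmlDeviceGetPowerManagementLimitConstraints(gpu))
    enforced_w = nvmlDeviceGetEnforcedPowerLimit(gpu) / 1000
    draw_w = nvmlDeviceGetPowerUsage(gpu) / 1000
    print(f"default {default_w:.0f} W, enforced {enforced_w:.0f} W, "
          f"board range {min_w:.0f}-{max_w:.0f} W, current draw {draw_w:.0f} W")
finally:
    nvmlShutdown()
```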
 
Yes. 320W for the 3070? I'll believe it when Nvidia announces it. The 2070 is 175W. I don't believe for a second that Nvidia increased the TDP of the x70 by 145W, or 83%, from last generation.

You really think the 3070 and the 3080Ti, and thus the 3080 as well, are all going to have the same TDP? The leak is complete bull.

You ask why they would need such expensive and exotic cooling (which appears authentic), and the listed leaked TDPs directly answer that question. Just saying ...
 
Those are defaults - they easily pull more than that.
RTX 2070 has a default of 185W, with a limit of 215W: https://www.techpowerup.com/vgabios/204549/nvidia-rtx2070-8192-180831
RTX 2070 Super, 215W default, 260W max: https://www.techpowerup.com/vgabios/212046/nvidia-rtx2070super-8192-190531
That doesn't account for the fancier aftermarket models, which tend to have higher limits, like this Gigabyte Aorus 2070 Super with a 314W limit: https://www.techpowerup.com/vgabios/212861/gigabyte-rtx2070super-8192-190625
My 1080Ti is rated for 250W, but pulls 300W quite often.
That explanation is meaningless unless Nvidia changed how they determine the TDP of a card for the 30xx series from default draw to max draw. Pretty unlikely. The problem seems to be that THG somehow screwed up copying the chart from the original source.

Original source, Igor's Lab, linked from Tom's article:

Part  | PCB   | Chip  | Model    | Extension   | Memory                      | Interface | TBP   | Connectors
SKU10 | PG132 | GA102 | RTX 3090 | (Ti/Super)* | 24 GB GDDR6X (Double-Sided) | 384-bit   | 350 W | 3x DP, HDMI, NVLink
SKU20 | PG132 | GA102 | RTX 3080 | (Ti/Super)* | 11 GB GDDR6X*               | 352-bit*  | 320 W | 3x DP, HDMI
SKU30 | PG132 | GA102 | RTX 3080 | none        | 10 GB GDDR6X                | 320-bit   | 320 W | 3x DP, HDMI


Here's the chart THG posted:

Graphics Card                 | GPU   | Memory      | Memory Interface | TDP (W) | Outputs
GeForce RTX 3090 (Ti / Super) | GA102 | 24GB GDDR6X | 384-bit          | 350     | 3 DisplayPort, HDMI, NVLink
GeForce RTX 3080 (Ti / Super) | GA102 | 11GB GDDR6X | 352-bit          | 320     | 3 DisplayPort, HDMI
GeForce RTX 3070              | GA102 | 10GB GDDR6X | 320-bit          | 320     | 3 DisplayPort, HDMI

Why did THG replace the vanilla 3080 in the original chart with a 3070 in their chart?
 
You ask why they would need such expensive and exotic cooling (which appears authentic), and the listed leaked TDPs directly answer that question. Just saying ...
If these cards really require a $150 cooler to function properly, what are AIBs going to do? Think about how many different versions of each card AIBs like Asus and Gigabyte produce. They aren't going to develop 5 or 6 different coolers that cost that much to manufacture. Unless the 3080 FE is going to sell for $900, you can't put a $150 cooler on it. A $150 cooler on a $5,000 Quadro makes sense. A $150 cooler on a card we hope is in the $700-800 range makes no sense.
 
If these cards really require a $150 cooler to function properly, what are AIBs going to do? Think about how many different versions of each card AIBs like Asus and Gigabyte produce. They aren't going to develop 5 or 6 different coolers that cost that much to manufacture. Unless the 3080 FE is going to sell for $900, you can't put a $150 cooler on it. A $150 cooler on a $5,000 Quadro makes sense. A $150 cooler on a card we hope is in the $700-800 range makes no sense.

Misplaced hopes, perhaps.
 
That explanation is meaningless unless Nvidia changed how they determine the TDP of a card for the 30xx series from default draw to max draw. Pretty unlikely. The problem seems to be that THG somehow screwed up copying the chart from the original source.

Original source, Igor's Lab, linked from Tom's article:

Part  | PCB   | Chip  | Model    | Extension   | Memory                      | Interface | TBP   | Connectors
SKU10 | PG132 | GA102 | RTX 3090 | (Ti/Super)* | 24 GB GDDR6X (Double-Sided) | 384-bit   | 350 W | 3x DP, HDMI, NVLink
SKU20 | PG132 | GA102 | RTX 3080 | (Ti/Super)* | 11 GB GDDR6X*               | 352-bit*  | 320 W | 3x DP, HDMI
SKU30 | PG132 | GA102 | RTX 3080 | none        | 10 GB GDDR6X                | 320-bit   | 320 W | 3x DP, HDMI


Here's the chart THG posted:

Graphics Card                 | GPU   | Memory      | Memory Interface | TDP (W) | Outputs
GeForce RTX 3090 (Ti / Super) | GA102 | 24GB GDDR6X | 384-bit          | 350     | 3 DisplayPort, HDMI, NVLink
GeForce RTX 3080 (Ti / Super) | GA102 | 11GB GDDR6X | 352-bit          | 320     | 3 DisplayPort, HDMI
GeForce RTX 3070              | GA102 | 10GB GDDR6X | 320-bit          | 320     | 3 DisplayPort, HDMI

Why did THG replace the vanilla 3080 in the original chart with a 3070 in their chart?
Who knows? Could be a typo.

It's not really a problem; this is all still rumors until confirmed by Nvidia themselves.
Also, that 10GB model raises flags, given how they segment the products already - just like how the 1080Ti and 2080Ti don't come with 12GB of VRAM... basically Titan cards that didn't make the cut.

There haven't been any 10GB models before. Why is that?
https://pcpartpicker.com/products/video-card/#sort=-memory
^48, 32, 24, 16, 12, 11, 8...
 
I think if they perform like their TDPs indicate they might, then the TDPs are less of an issue. But anything over 300W starts to get to the point where many people need to seriously consider whether they need a new PSU as well. Then again, I suspect that people who are concerned about being able to afford a new PSU likely won't have enough money for a 3070 or 3080 anyway.

I think what may have happened is that Nvidia is trying to be as proactive as they can with regard to whatever AMD "might" be able to do, after just seeing what AMD has been doing in the CPU market.

Making the most powerful cards they can muster before RDNA2 and Big Navi hit might well be that response. To mount that level of response, you have to stretch TDPs and prices (due to costs) to their breaking point, to get the performance you need to ensure AMD doesn't resurge there as well.

So from that perspective, their high TDPs, weird cooler, expected high prices, etc. all make some sense.

While I don't like to believe any rumour outright, I'd say this is all within the realm of possibility.
 
Meh, 320W is insane. I thought power consumption would go down, or at least stay the same, when moving to a smaller process technology.

A smaller node DOES reduce power consumption indeed... but it usually translates into the following:
The top-end GPUs of the previous generation (with high wattage) become the mid-range GPUs of the new generation, with lower power envelopes.

The top end ends up with the same or increased power envelopes because it ends up with more hardware.
I'm actually VERY skeptical of the rumors that Ampere will be excessively powerful.
At best, I'm thinking top-end Ampere could get 30-50% more performance (probably on the lower end, with most improvements going to ray tracing)... and power consumption will also depend on the process node, voltages, frequencies, etc.
Usually, power consumption increases much faster than linearly with how much performance you try to squeeze out of a chip (rough math sketched below).

Yes, a new uArch can bring definite improvements in power efficiency on the same node, but we don't have proper details on Ampere at all to draw any conclusions.
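Rough math on that power-vs-performance point: dynamic power scales roughly with C·V²·f, and squeezing out more frequency usually needs more voltage too. A minimal sketch with made-up voltage/frequency numbers, not Ampere specifics:

```python
# Dynamic power ~ C * V^2 * f: higher clocks usually also need higher voltage,
# so power grows much faster than the performance you gain. Numbers below are illustrative only.
def dynamic_power(capacitance, voltage, frequency_ghz):
    return capacitance * voltage ** 2 * frequency_ghz

base = dynamic_power(capacitance=1.0, voltage=0.90, frequency_ghz=1.8)  # hypothetical stock point
oc   = dynamic_power(capacitance=1.0, voltage=1.05, frequency_ghz=2.1)  # ~17% higher clock, more voltage

print(f"perf gain ~{2.1 / 1.8 - 1:.0%}, power increase ~{oc / base - 1:.0%}")
# perf gain ~17%, power increase ~59% -- small performance wins get expensive fast
```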