News Nvidia’s GeForce RTX 3060 Ti Rumored to Launch Next Month

8GB of VRAM? My RTX 2060 Super, OC'd to base RTX 2070+ performance, already has 8GB of VRAM, so I guess I will have to hold out until Nvidia Hopper drops in 2021-2022. I've never upgraded my video card without an increase in VRAM; VRAM bottlenecks have always been what pushed me to buy a new card, all the way back to the Radeon 9100 128MB AGP and the Nvidia 7300GT 512MB AGP.
 
  • Like
Reactions: bigdragon
8GB of VRAM? My RTX 2060 Super, OC'd to base RTX 2070+ performance, already has 8GB of VRAM, so I guess I will have to hold out until Nvidia Hopper drops in 2021-2022. I've never upgraded my video card without an increase in VRAM; VRAM bottlenecks have always been what pushed me to buy a new card, all the way back to the Radeon 9100 128MB AGP and the Nvidia 7300GT 512MB AGP.

Things have changed this time around, though. Game engines don't actually need as much VRAM as we thought; some of what gets allocated is non-critical data, more of a buffer, that isn't strictly required to sit in VRAM.

Plus, Nvidia keeps improving its memory compression, so 8GB of VRAM on Ampere should effectively hold at least a few GB more than the same amount on Turing, and especially Pascal.
 
  • Like
Reactions: panathas
I don't think it's going to be the 3060 Ti, as Nvidia seems to be moving towards using the "Ti" moniker only for one stand-out card in a series. Titanium is an element with fixed properties; using the name to denote relative position on the performance chart is kinda stupid. "Super" is much clearer. There will be either a 3080 Ti or a 3090 Ti, and that's it.
 
1st paragraph-
"The product might offer ~80% of GeForce RTX 2070’s performance"

4th paragraph-
"Actual frequencies of the RTX 3060 Ti chip are yet to be determined, but if it runs at the same clocks as the RTX 3070, it will offer around 82% of the latter’s performance."

I would assume the latter is correct, given that the former would represent a very steep decline in performance.
 
At first I was excited by the "Ti". It more or less confirms they will be releasing "Ti" versions of all their cards (the 3090 was meant to replace the Titan, not the 1080 Ti; not sure why that was in there when it's already been covered in so many other articles on here).
But then reading that they are only putting NVLink on the 3090, and none of the others, was heartbreaking.

And I can already imagine the comments about how many people believe SLI is dead, not needed, etc. That's fine if you believe that, but it doesn't mean you get to dictate what everyone else would like. It is rare that a single 1080 Ti or 2080 Ti can push a 4K display at max settings and stay consistently above 60 fps. Now we are also talking about higher-refresh-rate displays, VRR or G-Sync, and even higher resolutions, all of which are becoming more and more popular. Can a single 3080 play Cyberpunk 2077 with ray tracing and everything else maxed out at 4K and stay above 60 fps? We have to wait and see, but I'm willing to bet the answer is no. And what about every other game that comes out over the next two years?
 
At first I was excited by the "Ti". It more or less confirms they will be releasing "Ti" versions of all their cards (the 3090 was meant to replace the Titan, not the 1080 Ti; not sure why that was in there when it's already been covered in so many other articles on here).
But then reading that they are only putting NVLink on the 3090, and none of the others, was heartbreaking.

And I can already imagine the comments about how many people believe SLI is dead, not needed, etc. That's fine if you believe that, but it doesn't mean you get to dictate what everyone else would like. It is rare that a single 1080 Ti or 2080 Ti can push a 4K display at max settings and stay consistently above 60 fps. Now we are also talking about higher-refresh-rate displays, VRR or G-Sync, and even higher resolutions, all of which are becoming more and more popular. Can a single 3080 play Cyberpunk 2077 with ray tracing and everything else maxed out at 4K and stay above 60 fps? We have to wait and see, but I'm willing to bet the answer is no. And what about every other game that comes out over the next two years?
Game developers had to do extra work to support it. They don't do that anymore. Are you going to blame them for not throwing money and time at adding SLI or CrossFire support for the tiny percentage of potential customers who will actually pay for an SLI setup? Why should they take the financial hit?


DX12, and I believe Vulkan as well, handle multi-GPU through the API; if I'm not mistaken, they can even share the load among cards of different performance/capability.

There's no need for CrossFire/SLI anymore, at least not in the way it used to be done.
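Roughly speaking, "through the API" means the game itself enumerates the GPUs and creates a device per adapter (explicit multi-adapter), then decides how to split the work. Just as an illustration of the D3D12/DXGI side of that (a minimal sketch of my own, not anything from the article):

```cpp
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    // Create a DXGI factory and walk every adapter (GPU) in the system.
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue;  // skip WARP/software adapters

        // Explicit multi-adapter: the application creates an independent D3D12
        // device per physical GPU and decides for itself how to divide the work
        // (alternate frames, offloading post-processing, etc.). The GPUs don't
        // even have to match.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            wprintf(L"Created D3D12 device on adapter %u: %ls\n", i, desc.Description);
            devices.push_back(device);
        }
    }
    return 0;
}
```

The catch, of course, is that dividing the work is now entirely the developer's job, which is exactly the cost being talked about above.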
 
Game developers had to do extra work to support it. They don't do that anymore. Are you going to blame them for not throwing money and time at adding SLI or CrossFire support for the tiny percentage of potential customers who will actually pay for an SLI setup? Why should they take the financial hit?


DX12, and I believe Vulkan as well, handle multi-GPU through the API; if I'm not mistaken, they can even share the load among cards of different performance/capability.

There's no need for CrossFire/SLI anymore, at least not in the way it used to be done.

For SLI support from game developers, yes and no. For one, it depends on the engine used. Many games run on a specific engine that either has SLI support built in or not, and even when it doesn't, 99% of the time you can still turn it on through Nvidia Inspector. I honestly can't remember the last time I was NOT able to get SLI to work. Even in Unigine's Heaven and Superposition, SLI isn't natively supported, but you can find SLI profiles online to get it working. And yes, it does work, and it does give a large speed increase. I can't remember the exact number, but something like 80%+, and that was on my older system, which was a Core i7-6850K. Newer CPUs handle the APIs much better now.
As for DX12, it's not nearly as great as it was meant to be. It brings in new eye candy, but I can't think of any games off the top of my head that scale well across multiple GPUs in non-SLI configurations.
Personally, I think Nvidia is a little foolish with some of their decisions, and it's only a matter of time before their hubris really bites them in the ass. They screwed themselves over with the TSMC deal, thinking they were all high and mighty and trying to strong-arm TSMC. And they keep pushing proprietary game tech (which, in the long run, only hurts the consumer), while abandoning other proprietary tech they already have that could easily have worked hand in hand with it AND sold more GPUs overall. They have PhysX, but aren't doing much with it anymore. Meanwhile, HairWorks, Turf Effects, WaveWorks, FleX, Flow, PhysX, etc. are all things that could easily be programmed to offload onto a dedicated PhysX card. These effects are already written into the games; it isn't that hard to add a couple of lines of code to direct them to spare CUDA cores. Maybe not many people can afford an SLI setup, but it's easy to see the big difference in game benchmarks when you turn HairWorks on and off. To buy a $120 card to run PhysX (or repurpose an older card when you upgrade) and get to turn on all of these extra effects? I think a lot of people would, especially with the right marketing. Not to mention the number of mid-level GPUs Nvidia had returned when the crypto bubble popped; they could easily have spun that around, pushed PhysX, slapped a sticker on the box, and put up some marketing hype.
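To give a rough idea of the kind of thing I mean, here's a sketch against the public PhysX 4.x SDK (my own illustration of how GPU rigid-body offload is wired up, not anything Nvidia ships in games today; exact names and setup vary by SDK version):

```cpp
// Rough sketch: pointing a PhysX scene at a CUDA context so rigid-body work
// runs on a GPU instead of the CPU. Based on the public PhysX 4.x SDK.
#include <PxPhysicsAPI.h>
#include <cudamanager/PxCudaContextManager.h>

using namespace physx;

static PxDefaultAllocator     gAllocator;
static PxDefaultErrorCallback gErrorCallback;

int main() {
    PxFoundation* foundation = PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics*    physics    = PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    // The "couple of lines" in question: create a CUDA context (which can live
    // on a secondary GPU), hand it to the scene, and enable GPU dynamics.
    PxCudaContextManagerDesc cudaDesc;
    PxCudaContextManager* cuda = PxCreateCudaContextManager(*foundation, cudaDesc);

    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity            = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.cpuDispatcher      = PxDefaultCpuDispatcherCreate(2);
    sceneDesc.filterShader       = PxDefaultSimulationFilterShader;
    sceneDesc.cudaContextManager = cuda;
    sceneDesc.flags             |= PxSceneFlag::eENABLE_GPU_DYNAMICS;
    sceneDesc.broadPhaseType     = PxBroadPhaseType::eGPU;

    PxScene* scene = physics->createScene(sceneDesc);
    // ... create actors, then scene->simulate(dt) / scene->fetchResults(true) each frame ...
    (void)scene;
    return 0;
}
```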
 
For SLI support from game developers, yes and no. For one, it depends on the engine used. Many games run on a specific engine that either has SLI support built in or not, and even when it doesn't, 99% of the time you can still turn it on through Nvidia Inspector. I honestly can't remember the last time I was NOT able to get SLI to work. Even in Unigine's Heaven and Superposition, SLI isn't natively supported, but you can find SLI profiles online to get it working. And yes, it does work, and it does give a large speed increase. I can't remember the exact number, but something like 80%+, and that was on my older system, which was a Core i7-6850K. Newer CPUs handle the APIs much better now.
As for DX12, it's not nearly as great as it was meant to be. It brings in new eye candy, but I can't think of any games off the top of my head that scale well across multiple GPUs in non-SLI configurations.

This sounds completely wrong, and that percentage of speed increase sounds far more like fantasy than reality.

Care to cite sources on this?
 
Is it that it's so awesome they simply skipped the 3060, or so severely under-performing (only 80% of the 2070) that the Ampere Ti cards are actually inferior versions of the regular GPUs?
 
Is it that it's so awesome they simply skipped the 3060, or so severely under-performing (only 80% of the 2070) that the Ampere Ti cards are actually inferior versions of the regular GPUs?
More likely they had a different part planned for the 3060 but ended up binning more defective 3070 GPUs than they expected, and now have to do something with them rather than throw them out; hence the 3060 Ti. And the "only 80% of the 2070" is a typo; it's supposed to read 3070. Other sources are saying it is faster than a vanilla 2080, so 80% of 2070 performance doesn't line up with that figure, but 80% of 3070 does.
 
This sounds completely wrong, and that percentage of speed increase sounds far more like fantasy than reality.

Care to cite sources on this?

My computer.

Core i9-10940X @ 5.0GHz all cores
32GB DDR4-3400, quad channel
Adata SX8200 Pro 1TB
Two Asus GTX 1080 Ti in SLI
Three Samsung RU8000 55" 4K TVs for Surround mode
Denon AVR-S750H Atmos receiver
SteelSeries Arctis Pro Wireless

What game would you like me to test? If I have it, I'll benchmark it for you. I've been running SLI since the 970s came out, and Surround mode shortly after that, though with three 46" 1080p Samsungs before I upgraded last Black Friday (really pissed that reviews said the RU8000 supported VRR with Nvidia cards, only to retract it a couple of months later, after I had already bought them; hopefully the 30-series will work). Since I've been running SLI for a long time, and still want to keep up with it, I'm not too bad at messing around with Nvidia Inspector. And Google works wonders when you know what to look for.
 
The problem with SLI is practicality. Setting aside issues like developer support or API features, the problems I see with multi-card setups in general are:
  • For lower-tier cards, the price/performance ratio may be the same as, or only slightly better than, that of a single higher-tier video card. So what's the advantage here?
  • VRAM doesn't combine. Stuttering is more likely to occur with higher-resolution textures or, ironically, when running games at higher resolutions.

    While I'm led to believe NVLink theoretically allows for VRAM pooling, this comes with the baggage of having a NUMA-based system. Remember: games are soft real-time applications. You want to minimize latency, not add things that could increase it.
    • This is why first-gen Threadripper had a Game Mode.
    • This isn't a problem in Quadro setups, where NVLink does pool VRAM, because the applications they're expected to run don't have a soft real-time requirement.
  • Two cards typically require more power and better cooling than a single card of similar performance, so you likely won't save anything in the end even if the price/performance ratio of an SLI setup is better, since you have to make up the cost elsewhere.
  • Even if you plan on adding another card later, you have to overbuild the PC initially, at least the motherboard and power supply. And if you don't plan on overdoing the cooling from the start, you'll have to account for that when you get the second card.
    • And if you wait too long, it may be cheaper to buy a new card anyway. Older cards tend to stay relatively expensive for their actual performance after storefronts clear out their initial inventory. The used market is usually better about pricing, but some people don't like buying used.
SLI, to me, has only ever made sense on high-end cards, which is still impractical for most people anyway.
 
  • Like
Reactions: King_V
My computer.

Core i9-10940X @ 5.0GHz all cores
32GB DDR4-3400, quad channel
Adata SX8200 Pro 1TB
Two Asus GTX 1080 Ti in SLI
Three Samsung RU8000 55" 4K TVs for Surround mode
Denon AVR-S750H Atmos receiver
SteelSeries Arctis Pro Wireless

What game would you like me to test? If I have it, I'll benchmark it for you. I've been running SLI since the 970s came out, and Surround mode shortly after that, though with three 46" 1080p Samsungs before I upgraded last Black Friday (really pissed that reviews said the RU8000 supported VRR with Nvidia cards, only to retract it a couple of months later, after I had already bought them; hopefully the 30-series will work). Since I've been running SLI for a long time, and still want to keep up with it, I'm not too bad at messing around with Nvidia Inspector. And Google works wonders when you know what to look for.
Oh, so your source is your own PC? Your own gut feeling as to how much faster something is?

And NOT from actual data that's already out there?

Also: "And Google works wonders when you know what to look for" is a cop-out. You're now saying that I have to do to your work for you. That I need to do the research to prove your claims.

That's sounding an awful lot like "I don't have any proof. Just trust me."
 
Oh, so your source is your own PC? Your own gut feeling as to how much faster something is?

And NOT from actual data that's already out there?

Also: "And Google works wonders when you know what to look for" is a cop-out. You're now saying that I have to do to your work for you. That I need to do the research to prove your claims.

That's sounding an awful lot like "I don't have any proof. Just trust me."

You seem to be rather angry, and I'm wondering if you were reading negative emotion into my posts when none was intended. I was trying to offer to run these benchmarks myself, for any game or benchmark you were curious about. If I have it, I'll do it. There are benchmarks out there that still show improvements for SLI. And before I got my 10940X this past spring, I was running a Core i7-6850K and was able to find benchmarks showing a difference in some games between running SLI at x16/x16 PCIe versus x8/x8.
When I was referring to using Google, I wasn't suggesting that you go look for the benchmarks. You can if you like, but my point was that if someone has an SLI setup and a game doesn't natively support it, it is rather easy to find out how to enable it anyway, usually using Nvidia Inspector. Unigine's Superposition does not have an SLI profile, but with a simple Google search I found an SLI profile and a tool (GeForce 3D Profile Manager instead of Nvidia Inspector this time). I just ran it a few times with SLI on and off. I don't remember doing that many benchmarks since I got the new CPU. Non-SLI it's ~9200-9300; with SLI turned on and the profile inserted through the tool, ~18300-18400, roughly double. I don't mind testing anything else, and I can take screenshots of the results if you have issues with trust.

One of the things I do wonder about is that almost all of the later SLI testing was done with mainstream CPUs that are limited to 16 PCIe lanes total. Everyone keeps saying that GPUs don't use the full bandwidth of an x16 link, but I still wonder about the overhead, or specifically the overhead and bandwidth when using SLI. When you turn on SLI with these higher-end cards, what becomes the limiting factor? There are some games that show a difference between running x16/x16 and x8/x8. Small, but it's there. And that's in games that were a couple of years old, or more.
Now, the really crappy thing is, it's not much of an option for most systems out there. Both Intel and AMD offer few PCIe lanes on their mainstream platforms, and Intel is STILL on PCIe Gen3. That means new AMD and Nvidia cards on Intel platforms will only get the bandwidth equivalent of x8 Gen4 (i.e. x16 Gen3), and SLI would mean only the equivalent of x4 Gen4 (x8 Gen3) per card, which will definitely be an issue with bandwidth. I am really confused about why Intel hasn't released any PCIe Gen4 parts, but they have been screwing up a LOT the past couple of years.
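For a rough back-of-envelope on the bandwidth side (my own numbers, just the raw link rate with 128b/130b encoding, ignoring protocol overhead):

```cpp
#include <cstdio>

int main() {
    // Per-direction PCIe bandwidth: GT/s per lane * 128/130 encoding / 8 bits per byte * lanes.
    struct Gen { const char* name; double gtps; };
    const Gen gens[]   = { {"PCIe 3.0", 8.0}, {"PCIe 4.0", 16.0} };
    const int widths[] = { 16, 8, 4 };

    for (const Gen& g : gens) {
        for (int w : widths) {
            double gbps = g.gtps * (128.0 / 130.0) / 8.0 * w;
            std::printf("%s x%-2d ~ %5.1f GB/s per direction\n", g.name, w, gbps);
        }
    }
    return 0;
}
```

That works out to roughly 15.8 GB/s for Gen3 x16 or Gen4 x8, and roughly 7.9 GB/s for Gen3 x8 or Gen4 x4, which is why the x8/x8 Gen3 split is the case people usually test.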
 
Do you remember saying this?

For SLI support from game developers, yes and no. For one, it depends on the engine used. Many games run on a specific engine that either has SLI support built in or not, and even when it doesn't, 99% of the time you can still turn it on through Nvidia Inspector. I honestly can't remember the last time I was NOT able to get SLI to work. Even in Unigine's Heaven and Superposition, SLI isn't natively supported, but you can find SLI profiles online to get it working. And yes, it does work, and it does give a large speed increase. I can't remember the exact number, but something like 80%+, and that was on my older system, which was a Core i7-6850K. Newer CPUs handle the APIs much better now.

80+% was the claim you made. That was what I objected to.

That said, when you made such a claim, you did so in the general case for gaming. There was no mention of "this is only in one or two specific games, but it's much less in most games," or anything of that nature.

You're claiming 80+% speed increase with SLI. Yet, you presented no sources for this number, other than replying with "my own system." You did not cite any sources, as I'd asked.

And, since you seem to be under the impression that your own personal anecdotes count as evidence, allow me to clarify: When I say cite sources, I mean people/sites that engage in rigorous, structured testing. This site, for example, among others.