[SOLVED] RTX 2070S as potential future-proofing through NVLink?

febisfebi

Distinguished
Jan 10, 2009
87
5
18,545
So, now that NVLink is available on the new-ish 2070S, and is no longer limited to just Quadros and $800+ RTX 2080+ cards, I have been considering it over the 2060S, despite the minimal performance increase for significantly more money, for the sole reason that I feel there is a chance NVLink may become more feasible in the near-ish future than SLI ever was, mainly due to the large increase in bandwidth between cards.
It's still nowhere near the speed of VRAM on today's cards, but it gets close to 25-30% of it (depending on the card, of course), which is not bad if you ask me.

I feel like the hardware tech should be viable now, so it mostly comes down to game developers incorporating efficient scaling in future titles.

So basically my question is: what do y'all think about spending the extra cash on a 2070S because of NVLink, leaving open the potential to pick up a second one, maybe second-hand, in a year or two for significantly less than it costs now?
Then I could use that two-card setup further into the future than a single 2060S or 2070S might take me.

I realize that in order for two cards to perform better than one, not worse (like SLI in some situations), it will still take a change of thinking in game development.
Keep in mind I mostly play mainstream titles that are either less than 1-2 years old or just released, e.g. DX11/DX12 titles.
I am hoping that developers might come up with something better than alternate frame rendering, although Nvidia claims that NVLink will still work the same way. Certain machine learning applications already scale well across multiple GPUs, and the extra bandwidth seems like it opens up a lot of possibilities. Game developers should be able to do the same sort of scaling if they put in the effort, right?
What do you all think? Is/will multi-GPU scaling only ever be useful for neural networks and crypto mining? Do you think we will ever be able to enjoy gaming with the same kind of performance increases?

Now that the RTX cards have been out for a while, their multi-GPU performance seems to be fairly well documented as far as current titles go, which seems good, but not awesome.

The first step is for developers to work out the issues that cause micro-stutters, namely the particularly low minimum fps despite very high 150+ fps averages in certain games, even ones that scale fairly well (by today's standards). This still seems to be an issue even with NVLink setups, not just SLI.
So, this is mostly a theoretical question at this point.
In my particular case I wouldn't be adopting it for at least a year or more. But a lot can happen in a year. And even a chance of not having to spend big bucks on a whole new GPU setup all over again in a couple of years seems like it might be worth the extra money. The extra performance over the 2060S doesn't hurt either. I just feel it's silly that they don't allow all the RTX cards to use NVLink; they all seem expensive enough, right?

So who actually uses RTX cards in NVLink? Especially if you have done SLI in the past? What do you think about the future of multi-GPU gaming? Most people will tell you it's dead, I know, but will it always be?
 
Solution
So you guys really think that multi-GPU support in future games will only get worse than it currently is, even though the hardware now exists to make it a reality?

Anyone care to comment on the other side of the argument, haha. I know there are some people out there who have had solid results with multi-GPU gaming, despite the general consensus among most people you ask that multi-GPU gaming is dead.

Again, this is really a theoretical thing at this point, and the extra cost is not astronomical between the two cards mentioned. Which is why I am wondering if this might be a good option to leave open for the future.
Thanks

Actually, SLI is kind of dead. Implementing SLI/CrossFire requires a lot of coding, as well as resources/time...
Nvidia seems to be phasing out multi-card setups, so increased developer support seems questionable. With the 10 series cards, they removed SLI support from their mid-range offerings, leaving it only for cards in the $400+ range. With the 20 series, they removed it from those higher-end cards as well, only offering it with their $700+ enthusiast-level offerings. The only reason the 2070 SUPER now gets it is because the card is actually a cut-down 2080, so that feature got carried over. And even that card is a relatively niche offering at $500+. Multi-card setups seem to be less common than ever, so I wouldn't expect developer support to improve for them any time soon.

Also, we're talking about cards that draw around 215 watts under load. With two of them, you will be looking at them drawing over 400 watts, pumping a lot of heat into your case and likely producing a fair amount of noise, which might be worth considering.
 
Agreed that a multi-GPU setup would not be a good idea, especially if you plan to add the second card later.
Support is lacking, and getting worse as time passes.

Overall, the better option is to get the card you need now, and upgrade to a new generation when you need to.
 

febisfebi

Distinguished
Jan 10, 2009
87
5
18,545
So you guys really think that multi-GPU support in future games will only get worse than it currently is, even though the hardware now exists to make it a reality?

Anyone care to comment on the other side of the argument, haha. I know there are some people out there who have had solid results with multi-GPU gaming, despite the general consensus among most people you ask that multi-GPU gaming is dead.

Again, this is really a theoretical thing at this point, and the extra cost is not astronomical between the two cards mentioned. Which is why I am wondering if this might be a good option to leave open for the future.
Thanks
 
So you guys really think that multi-GPU support in future games will only get worse than it currently is, even though the hardware now exists to make it a reality?

Anyone care to comment on the other side of the argument, haha. I know there are some people out there who have had solid results with multi-GPU gaming, despite the general consensus among most people you ask that multi-GPU gaming is dead.

Again, this is really a theoretical thing at this point, and the extra cost is not astronomical between the two cards mentioned. Which is why I am wondering if this might be a good option to leave open for the future.
Thanks

Actually, SLI is kind of dead. Implementing SLI/CrossFire requires a lot of coding, as well as resources/time. The game developers need to make sure that the game's engine is going to scale well. Apart from this, NVLink might take the place of SLI in the near future, mostly under the DX12 API. So I don't think NVLink is dead as of now.

I'm hoping NVLink brings something new to the table, especially in DX12 multi-GPU mode.

You must be aware that Nvidia has introduced a new interface called NVLink with the consumer Turing GPUs, replacing the old SLI connector. It serves the same purpose as a multi-GPU bridge for gaming, but the interface has many times the bandwidth of an SLI connection.

NVLink can be used for direct memory access between cards, rather than going through the PCIe slots (which created a huge bottleneck with SLI), so I think NVLink might be the future, if we go by Nvidia's theory...

But I could be wrong, because not many games may be able to reap the full benefits of NVLink; the same thing happened with SLI. SLI bridges mostly had a bandwidth of roughly 1 GB/s (standard bridge) or 2 GB/s (HB bridge). NVLink on the TU104-based Turing cards can do about 25 GB/s each way, or 50 GB/s total, while on the 2080 Ti Nvidia quotes 50 GB/s each way and 100 GB/s total. But all of this will only help if games actually take advantage of the new multi-GPU feature, i.e. if the game developers implement it.

IMO, the main advantage of NVLink is that it might help with the peer-to-peer interface and VRAM stacking, because essentially the GPUs are much closer together now, which also brings the latency of a GPU-to-GPU transfer way down. So unlike SLI, where a transfer had to go through PCIe as well as system memory, NVLink behaves in a different manner.

We can think of it as an app that looks at one GPU, and then looks at another GPU and does something else at the same time. So it seems NVLink will be the future when it comes to multi-GPU setups, but sadly only in the high-end market segment, as other Turing cards will lack NVLink support.
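To give a rough idea of what that peer-to-peer access looks like from the software side, here is a minimal CUDA C++ sketch (just an illustration of the general mechanism, not anything Nvidia ships for games; whether the copy actually travels over NVLink or PCIe depends on the hardware and driver):

```cpp
// Minimal sketch: one GPU directly reads/copies another GPU's memory.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int canAccess = 0;
    // Ask the driver whether GPU 0 can directly address GPU 1's memory.
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (!canAccess) { printf("No peer access between GPU 0 and GPU 1\n"); return 0; }

    const size_t bytes = 256 << 20;        // 256 MB test buffer
    void *bufOnGpu0 = nullptr, *bufOnGpu1 = nullptr;

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);      // let GPU 0 map GPU 1's memory
    cudaMalloc(&bufOnGpu0, bytes);

    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);
    cudaMalloc(&bufOnGpu1, bytes);

    // Direct GPU 1 -> GPU 0 copy, without bouncing through system RAM.
    cudaMemcpyPeer(bufOnGpu0, 0, bufOnGpu1, 1, bytes);
    cudaDeviceSynchronize();

    cudaFree(bufOnGpu1);
    cudaSetDevice(0);
    cudaFree(bufOnGpu0);
    printf("Peer copy of %zu bytes completed\n", bytes);
    return 0;
}
```

The same peer-access calls work over plain PCIe too; the point of NVLink is that the link they run over is much faster and lower latency.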


But again, like I said before, all of this will actually depend on how well the game's engine benefits from a future multi-GPU setup. Also, assuming NVLink will also help with VRAM stacking, the two GPUs should support split frame rendering (SFR), unlike the previous alternate frame rendering (AFR) mode mostly used with SLI, in which each GPU used its own frame buffer/VRAM and the memory never got added/stacked.

In theory (see the toy sketch below):

In AFR, each GPU renders every other frame (alternating odd and even frames).
In SFR, each GPU renders part of every frame (e.g. a top/bottom split).
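Purely to illustrate how the work gets divided (a toy sketch with stand-in renderFrame/renderHalf functions, not how a real engine or driver schedules anything):

```cpp
// Toy illustration of AFR vs SFR work division between two GPUs.
#include <cstdio>

void renderFrame(int gpu, int frame) {
    printf("GPU %d renders frame %d\n", gpu, frame);
}
void renderHalf(int gpu, int frame, const char* half) {
    printf("GPU %d renders %s of frame %d\n", gpu, half, frame);
}

int main() {
    const int frames = 4;

    // AFR: the GPUs alternate whole frames (odd/even).
    for (int f = 0; f < frames; ++f)
        renderFrame(f % 2, f);

    // SFR: both GPUs contribute to every frame, each taking part of the screen.
    for (int f = 0; f < frames; ++f) {
        renderHalf(0, f, "the top half");
        renderHalf(1, f, "the bottom half");
    }
    return 0;
}
```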


So I think NVLink should also help with VRAM stacking, though we need to see how this gets implemented in most games, either in DX12 or Vulkan API mode. Apart from this, even the price of an NVLink bridge is kind of high, so this can be a very expensive multi-GPU setup, and not many gamers might be able to afford it. Can't comment on NVLink's performance though.

Just my 2 cents!
 
Solution

febisfebi

Distinguished
Jan 10, 2009
87
5
18,545
Actually, SLI is kind of dead. Implementing SLI/CrossFire requires a lot of coding, as well as resources/time. The game developers need to make sure that the game's engine is going to scale well. Apart from this, NVLink might take the place of SLI in the near future, mostly under the DX12 API. So I don't think NVLink is dead as of now.

I'm hoping NVLink brings something new to the table, especially in DX12 multi-GPU mode.

You must be aware that Nvidia has introduced a new interface called NVLink with the consumer Turing GPUs, replacing the old SLI connector. It serves the same purpose as a multi-GPU bridge for gaming, but the interface has many times the bandwidth of an SLI connection.

NVLink can be used for direct memory access between cards, rather than going through the PCIe slots (which created a huge bottleneck with SLI), so I think NVLink might be the future, if we go by Nvidia's theory...

But I could be wrong, because not many games may be able to reap the full benefits of NVLink; the same thing happened with SLI. SLI bridges mostly had a bandwidth of roughly 1 GB/s (standard bridge) or 2 GB/s (HB bridge). NVLink on the TU104-based Turing cards can do about 25 GB/s each way, or 50 GB/s total, while on the 2080 Ti Nvidia quotes 50 GB/s each way and 100 GB/s total. But all of this will only help if games actually take advantage of the new multi-GPU feature, i.e. if the game developers implement it.

IMO, the main advantage of NVLink is that it might help with the peer-to-peer interface and VRAM stacking, because essentially the GPUs are much closer together now, which also brings the latency of a GPU-to-GPU transfer way down. So unlike SLI, where a transfer had to go through PCIe as well as system memory, NVLink behaves in a different manner.

We can think of it as an app that looks at one GPU, and then looks at another GPU and does something else at the same time. So it seems NVLink will be the future when it comes to multi-GPU setups, but sadly only in the high-end market segment, as other Turing cards will lack NVLink support.


But again, like I said before, all of this will actually depend on how well the game's engine benefits from a future multi-GPU setup. Also, assuming NVLink will also help with VRAM stacking, the two GPUs should support split frame rendering (SFR), unlike the previous alternate frame rendering (AFR) mode mostly used with SLI, in which each GPU used its own frame buffer/VRAM and the memory never got added/stacked.

In theory:

In AFR, each GPU renders every other frame (alternating odd and even frames).
In SFR, each GPU renders part of every frame (e.g. a top/bottom split).


So I think NVLink should also help with VRAM stacking, though we need to see how this gets implemented in most games, either in DX12 or Vulkan API mode. Apart from this, even the price of an NVLink bridge is kind of high, so this can be a very expensive multi-GPU setup, and not many gamers might be able to afford it. Can't comment on NVLink's performance though.

Just my 2 cents!
Thanks for the info! It's nice to hear something besides "multi-GPU is dead." I agree that things like VRAM stacking and SFR (rather than AFR) will let the tech be used better than was ever possible with SLI, given the wide gap between GPUs with SLI/Crossfire.
So to say that multi-GPU is dead just because SLI/Crossfire was such a failure seems preemptively dismissive of a new tech that should finally have the bandwidth/throughput to make multi-GPU a reality for things beyond crypto mining, neural networks, and the like.
Granted, those enterprise markets are the first to adopt things like GPU scaling, which they had been doing for years before NVLink existed. They also got NVLink support in the enterprise sector long before it came to the consumer market, albeit the high-end consumer market.
Because of that, I get that the consumer market for NVLink is limited, since Nvidia only includes it on TU104 and higher GPUs.
However, its inclusion on the 2070 Super is encouraging, as that card is a lot closer to being within reach of the average consumer than before, which may help sway game developers to start adding multi-GPU scaling, using things like SFR and peer-to-peer VRAM stacking, to their games in the future.
I ended up going with the 2070 Super for a couple of reasons: the performance gains over the 2060 are significant, and there is the future potential for scalability. I would hate to have gotten the 2060 Super just to save a measly $100 (which at this point is not a huge difference), only to find out that in a couple of years multi-GPU support and performance in games is better than ever, thanks to new possibilities opened up by closing the gap between GPUs to support the likes of SFR and VRAM stacking.
If that doesn't happen, I still end up with a better-performing card that should last further into the future than the 2060S counterpart, plus more RT capability and DLSS. The latest RT example, the game Control, calls for a 2080 to handle it. Since the 2070 Super is so close to the 2080 in performance, it should handle it just fine, but to think that a game out right now is already out of reach of the 2060 Super, and that the 2070 Super should really be the minimum for that game, is unnerving. Regardless, I still expect this card to support the newest games for many years.
 
  • Like
Reactions: Metal Messiah.
The latest RT example, the game Control, calls for a 2080 to handle it. Since the 2070 Super is so close to the 2080 in performance, it should handle it just fine, but to think that a game out right now is already out of reach of the 2060 Super, and that the 2070 Super should really be the minimum for that game, is unnerving. Regardless, I still expect this card to support the newest games for many years.

By the way, do you have the RTX 2070 SUPER or the non-Super SKU? How is the game, by the way? Ray tracing can be demanding on current hardware. It's still in its infancy; give it 2-3 more years to become mainstream.

IMHO, the performance drop that comes with ray tracing is not really worth the difference in the image quality of any scene. But still, something is better than nothing.

Ray tracing is actually still in its infancy right now, but it has long been an industry standard in CGI. Someone had to start bringing it to consumers, and Nvidia was the first to do so, releasing the Turing GPU lineup with hardware-level support for ray tracing via RT cores, plus tensor cores for DLSS.
 

febisfebi

Distinguished
Jan 10, 2009
87
5
18,545
By the way, do you have the RTX 2070 SUPER or the non-Super SKU? How is the game, by the way? Ray tracing can be demanding on current hardware. It's still in its infancy; give it 2-3 more years to become mainstream.

IMHO, the performance drop that comes with ray tracing is not really worth the difference in the image quality of any scene. But still, something is better than nothing.

Ray tracing is actually still in its infancy right now, but it has long been an industry standard in CGI. Someone had to start bringing it to consumers, and Nvidia was the first to do so, releasing the Turing GPU lineup with hardware-level support for ray tracing via RT cores, plus tensor cores for DLSS.
I put my order in for the 2070 Super SKU. It'll be at least a couple of weeks, maybe a month or more, before I get it. Right now I'm on a 1660 (non-Ti) SKU.
I have been playing with non-hardware RT in Metro Exodus and Control on the 1660. It's a huge strain on non-RTX cards, even though this has been a surprisingly strong card without RT. In Metro Exodus I was able to get playable framerates at 1080p with RT, but the effects were not that impressive, and while it was playable, it wasn't quite as snappy as I was used to, so I ended up playing the game without RT at 1440p ultra, and it looked and played better than with RT.
In Control, however, the effects are far more advanced than in other RT examples we have seen so far. The difference is obvious, as the game slows to a total blur the moment RT is on at any resolution, but it's enough to see a little of what it would look like, if you hold completely still.

After reading a few reviews specifically on the RT effects in Control, and seeing a little of it for myself, I decided to save the rest of the game for when I have real hardware-accelerated RT.
I am stopped at about 15 minutes in, so I can't comment much on the actual gameplay, other than that the third-person shooting is a little prohibitive, as I expected. But I've only used one weapon so far, a pistol. The melee attack is a sort of telekinetic blast, which is really cool, but I couldn't get close enough to anyone to use it for more than blasting coffee mugs off desks, etc. The AI are pretty smart. I am really excited to see the other types of weapons/abilities like this, which makes it hard to put the game aside. I don't expect the third-person setup will be prohibitive with the better weapons/abilities.
I can update you when the RTX card gets here if you like.

I just hope it works well enough on my card to be playable with RT. I expect it will be a treat if it does play well. They changed the game's official recommended GPU from an RTX 2060 to a 2080. Since I can't afford a 2080, Super or not, I am hoping the 2070 Super will be sufficient, as it scores so close to the 2080 (non-Super), and it's really a TU104 GPU, which is what the 2080 and 2080 Super are based on. That is actually the only reason it works with NVLink; they had originally limited it to just the 2080 cards in the consumer lineup. So if I had gotten the 2070 non-Super SKU, it would not support NVLink, as it is based on TU106.

It doesn't seem like any "clearance" sales are going to save anyone much money on discontinued SKUs. I tried; the market for GPUs is still quite aggressive, despite the fall in crypto mining.

I find it a bold move on Nvidia's part to even release another multi-GPU setup to consumers in this market. There is a widespread feeling that there is no future for multi-GPU gaming.
Getting any real support from game developers will be extra hard; I'm sure they were aware of this. Limiting multi-GPU to only the top cards further reduces the potential market.

I do agree with what you said, though: it's SLI/Crossfire that is dead. NVLink is in its infancy, along with RT and DLSS.
They use RT effects in professional 3D rendering and such, so it was only a matter of time for them to show up in games, like you said.

Speaking of DLSS, don't you think that the addition of tensor cores could be useful for so much more than just DLSS in games?
People are already using consumer RTX cards for machine learning work with things like TensorFlow. But we should be able to have dedicated AI hardware acceleration in our games soon!
This could be a total game changer, as those sorts of calculations are very GPU-intensive and take away from our visual performance as games get more advanced in that department.
 
Speaking of DLSS, don't you think that the addition of tensor cores could be useful for so much more than just DLSS in games?

No. Tensor cores and DLSS will only be used for upscaling the image quality, not for any other purpose. As you may already be aware, DLSS is a sort of deep learning anti-aliasing.

It uses a kind of neural network to find jagged edges and perform high-quality anti-aliasing by determining the best color for each pixel, then applying the proper color to smooth out the edges and improve the overall image quality.

As per Nvidia, this new DLSS feature offers the highest-quality AA with fewer artifacts than other forms of AA, though I'm not fully convinced yet.

Nvidia runs a game on their supercomputer at extremely high resolutions, and the AI compares that data to the standard resolution and tries to figure out what the image should look like using both sets of data.

Once the AI has figured this out, the result is shipped as a driver profile, so that the tensor cores on the Turing GPU can run that model and give you the same quality, but with slightly better performance (according to Nvidia), because, I think, the CUDA cores aren't calculating the anti-aliasing anymore. This is how it works.
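Just to make that runtime step concrete (this is only a toy stand-in, not DLSS itself and nothing that touches the tensor cores), the idea is "take the low-resolution frame, run it through a pre-trained model, output a higher-resolution frame." Here a plain bilinear filter plays the role of the learned network:

```cpp
// Toy stand-in for the runtime side of a learned upscaler.
// A simple bilinear filter acts as the "model"; real DLSS runs an actual
// neural network on the tensor cores instead of this fixed filter.
#include <cstdio>
#include <vector>

std::vector<float> upscale2x(const std::vector<float>& src, int w, int h) {
    int W = w * 2, H = h * 2;
    std::vector<float> dst(W * H);
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // Map the output pixel back to a fractional source position.
            float sx = x * 0.5f, sy = y * 0.5f;
            int x0 = (int)sx, y0 = (int)sy;
            int x1 = x0 + 1 < w ? x0 + 1 : x0;
            int y1 = y0 + 1 < h ? y0 + 1 : y0;
            float fx = sx - x0, fy = sy - y0;
            float top = src[y0 * w + x0] * (1 - fx) + src[y0 * w + x1] * fx;
            float bot = src[y1 * w + x0] * (1 - fx) + src[y1 * w + x1] * fx;
            dst[y * W + x] = top * (1 - fy) + bot * fy;   // blend the four neighbours
        }
    }
    return dst;
}

int main() {
    // 2x2 "frame" rendered at low resolution, upscaled to 4x4 for display.
    std::vector<float> lowRes = {0.0f, 1.0f, 1.0f, 0.0f};
    std::vector<float> highRes = upscale2x(lowRes, 2, 2);
    for (int y = 0; y < 4; ++y) {
        for (int x = 0; x < 4; ++x) printf("%.2f ", highRes[y * 4 + x]);
        printf("\n");
    }
    return 0;
}
```

The interesting part of DLSS is exactly what this sketch fakes: the trained network is supposed to reconstruct detail that a fixed filter like this one just blurs.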

It's a good feature, but it also blurs the image in some games. DLSS needs to be refined more; it requires more AI training.
 

febisfebi

Distinguished
Jan 10, 2009
87
5
18,545
No. Tensor cores and DLSS will only be used for upscaling the image quality, not for any other purpose. As you may already be aware, DLSS is a sort of deep learning anti-aliasing.

It's a good feature, but it also blurs the image in some games. DLSS needs to be refined more; it requires more AI training.
Personally, I have read the Nvidia paper on DLSS, so I am somewhat familiar with how it works. I like how you put it all into a few very informative descriptions of how DLSS works, for those who have not read the whole paper.
Both RT and DLSS are very new, hardware-wise. They will no doubt require further refinement on the software side (and maybe even the hardware side); software is always slow to catch up with new hardware. We have only seen a glimpse of actual adoption of these technologies in games, so we can expect that these may not be the best examples of the new tech. Just look at the difference between RT in Metro Exodus and Control: night and day, and that's just a slightly newer game. A lot of new games (those that haven't already been released) have already announced they will support RT, so I think the idea of RT is already catching on.
I hope that 7 giga-rays is enough to last as long as this card should.

I get that DLSS is the only official use for the tensor cores in consumer GPUs that Nvidia has planned, or at least announced, so far.
But I was thinking more along the lines of what they could be used for if game/software developers were ambitious enough.
Maybe it's not just games; my point is that the tensor cores can be used for more than just DLSS. They are perfectly capable of other deep learning AI applications besides DLSS; that's what they were created for before DLSS was a thing. It's just a matter of making use of them in games. I believe it is possible.
I was wondering if you thought any game developers might jump on that.
I guess we will just have to see.
 
But I was thinking more along the lines of what they could be used for if game/software developers were ambitious enough.
Maybe it's not just games; my point is that the tensor cores can be used for more than just DLSS. They are perfectly capable of other deep learning AI applications besides DLSS; that's what they were created for before DLSS was a thing. It's just a matter of making use of them in games. I believe it is possible.

Yeah, I can agree on this part. Tensor cores can be used for other purposes as well, mostly AI inferencing, neural networks and deep learning, and other AI-related tasks. It's only a matter of time before this gets adopted in both the consumer and the enterprise market.

BTW, is RT in Control far better than in Metro Exodus? Yeah, even I think RT is slowly catching on. Future games are going to have this feature, and Nvidia is always working hard with developers to implement it.

And like you said before, software needs to catch up with the hardware as well.
 
Last edited:

febisfebi

Distinguished
Jan 10, 2009
87
5
18,545
Yeah, I can agree on this part. Tensor cores can be used for other purposes as well, mostly AI inferencing, neural networks and deep learning, and other AI-related tasks. It's only a matter of time before this gets adopted in both the consumer and the enterprise market.

BTW, is RT in Control far better than in Metro Exodus? Yeah, even I think RT is slowly catching on. Future games are going to have this feature, and Nvidia is always working hard with developers to implement it.

And like you said before, software needs to catch up with the hardware as well.
The RT implementation in Control is far more advanced than in Metro Exodus. In Metro Exodus it seems almost like a contrast adjustment: everything just looks a little brighter and maybe a little shinier, but only slightly. Shadows are there, but similarly unimpressive. The performance overhead is taxing without dedicated RT cores, and while I was able to get it to a playable state by turning down the resolution and details, the effects were less impressive than what you gain by turning up the resolution and details without RT. I cannot say for sure, but I doubt it would be much more impressive with hardware-accelerated RT, although it would likely be far less taxing on overall performance with RT turned on. It is clearly an early implementation of the tech, so as far as I'm concerned this is to be expected.

Control, however, is probably the most advanced implementation of RT we have seen in a game to date (although I have not seen the RT effects in the new Wolfenstein game to compare). RT was clearly built into the game engine from the ground up, not something just added right before release. They make very good use of mirror and shadow effects. This works very well with the polished-marble theme of the "Federal Bureau of Control" building the story takes place inside. Everything from the floors to the walls, windows, glass, and even shiny metal is highly reflective. Reflections are not only present, they actually move around in real time depending on where you stand and look and where the light sources are coming from. The same goes for shadows.

The RT implementation is so advanced that it will require RT hardware acceleration to be playable. The game runs very well at 1440p ultra without RT on my 1660, but as soon as the RT effects are enabled it slows to a crawl, even with everything turned way down. Totally unplayable.

To put it simply, when they came out with the RT driver for GTX cards and I was able to see it in action for the first time in Metro Exodus, my reaction was "total waste of money."

But after seeing the effects in Control, the reaction was very different, more like "I want that," haha. Clearly the software implementation is crucial in making the best use of the tech.

On another note, from what I understand DLSS was originally meant to help take a load off the GPU without sacrificing image quality, thereby helping to mitigate the taxing effect RT has on hardware. However, in some of the reviews of Control's RT I read that there were issues with using RT and DLSS at the same time that made the combo undesirable.

This could be an issue localized to this game, which may be fixed with a patch, or the combination may just work better in a different implementation (a different game) that is properly optimized for using both RT and DLSS concurrently.

Do you know if RT and DLSS were meant to be used in combination?
 

febisfebi

Distinguished
Jan 10, 2009
87
5
18,545
If you still don't think RT will catch on, get a load of this: AMD's RDNA 2 and Intel's new Xe discrete graphics will both debut with hardware-accelerated ray tracing, along with the new PlayStation and Xbox models, which will both have RT. That is an incredible incentive for developers to make use of RT in their new games and to patch existing ones. In around a year, we will have gone from only one company supporting RT to nearly every new mainstream platform having it.

Also, there is already a new use for tensor cores besides DLSS:
https://www.tomshardware.com/news/nvidia-rtx-broadcast-engine-game-streaming-streamers,40483.html

It was only a matter of time for this kind of thing to happen. I think we will see a lot more things like this in the near future, due to the massive computing power that the tensor cores can provide for AI/ML-related tasks.

edit: @Metal Messiah, check this out for some good comparisons:
https://www.overclock3d.net/reviews/gpu_displays/control_rtx_raytracing_pc_analysis/2
 
Last edited:
  • Like
Reactions: Metal Messiah.