News Nvidia DLSS 3.5 Tested: AI-Powered Graphics Leaves Competitors Behind

They should make an accelerator card for ray tracing, like the old PhysX cards, and problem solved.
It wouldn't, because any accelerator card would add latency to the pipeline that you can't avoid. You'd actually make performance even worse unless the accelerator card spits out results in nanoseconds.

In addition, it'd be optional, which means it wouldn't sell very well. PhysX cards died out because they were optional and expensive. The only way PhysX could get any traction was for NVIDIA to port it to CUDA.
 
Reeeee non-AMD tech reeeeeee how dare people use it

Worship AMD like me
He's not wrong. Even if Nvidia is winning big... Tom's has always had some insanely ultra-pro-Nvidia views and a clear anti-AMD stance.
I still remember those <Mod Edit> days of "just buy it", which almost showcased most Tom's employees as being in Nvidia's pockets.
 
Last edited by a moderator:
PhysX cards died because Nvidia bought them... locked inside the Nvidia ecosystem, and that was it.
PhysX cards are from the same era as Virtu MVP... LOL, days lost trying to get that thing to work!
Nvidia also killed GPU-agnostic SLI/CrossFire.
Anyone remember that chipset that looked great and could join ANY chips together and get more performance?

Funny how fast it was killed by a few changes...

With 82% of the marketplace and a large technology lead, Nvidia sets the standard, and it is very difficult and probably foolish for others to fight that. The damage caused by single-actor market domination is already locked in.


Not always.
AMD has had many technology releases.
Most of them ended up being open.
I mean... wasn't Vulkan/Mantle an AMD thing?

Nvidia, on the other hand, tries to lock everything down as proprietary, like CUDA. Even PhysX, which is capable of running on AMD cards, was locked down by Nvidia.
 
They should make an accelerator card for ray tracing, like the old PhysX cards, and problem solved.
If at least two companies produced them, and they were compatible with DXR (not proprietary) and could work alongside a GPU, it would be a start.

Next step: some games could run exclusively on the RT card.

And the step after that: the end of rasterization and GPUs, all replaced by RT.
 
All I ask for is the same treatment when FSR 3 releases: an in-depth article on all the upscaling tech available, with and without RT.
DLSS 3.5 ≠ FSR 3
DLSS 3 = FSR 3

The naming Nvidia chose was plain dumb but I agree. As an Nvidia user I am interested in FSR 3 as well. Tons of benefits that Nvidia isn't extending. Especially compatibility.
 
Any idea where in the game that bottom screenshot (with the blinds) was taken? I've poked around so that I could capture something similar, but I'm not having any luck, sadly.

But I will say, playing the game more, the "OMG I just went from a dark room to the bright outdoors" effect feels very overdone. Maybe it's realistic, and games have done that without ray tracing as well, but there are some oddities from the effect at times.
I'm not sure where it is, sorry. Other than the obvious "must be part of a quest", I'm not sure. It could be part of the DLC though.

And one of the best implementations of the "coming out of a dark area into light" effect I've seen is in Guild Wars 2, which started as a DX9c game and is now DX11. I think GTFO would also qualify there, as it plays with light effects a lot; it's a tactical suspense game you play underground (and totally recommended, BTW).

TPU (and many others) have also put out their RR reviews and deep dives. My impression is that the tech is very promising, but the quality trade-offs on top of the lost performance (plus you have to use upscaling, as I understood it) are not worth it. Not to mention FG is only for the RTX 40 series, and that's where it gets slightly competitive with raster, which still looks good.

I also liked the discussion about "realistic" vs "good" in most threads I've seen; as always, people tend to mix the two, but oh welp...

A scene being realistic is something you can prove (not super easy, but you can), but "good" is absolutely subjective and up to the viewer. My take on this is, even if it does a better job at closing the gap between "fake light" and "realistic light", it loses a lot on other artistic effects that matter for the scene, so it loses "realism" and "artsy" points for me. Some details just look terrible, but I guess that can be improved over time.

As a personal conclusion, I think RR is a good addition, but it needs to mature. Much like DLSS1 needed to and DLSS2 (or the upscaler bit) still continues to mature.

Regards.
 
@JarredWaltonGPU, I fully agree with your take. I also share your trepidation around highly proprietary rendering technologies.

IMO, it's not as if there can be no alternative. What I'd like to see is an open source 3D engine independently implement something conceptually similar to Ray Reconstruction that can (theoretically, at least) run on any hardware - although it will probably require some kind of deep learning accelerator, in practice (which are found in both Alchemist and RDNA3).

I assume Nvidia has built a patent wall around this family of techniques, but perhaps there's enough room for innovation that someone can find a way around it.
You need specialized hardware to make effective and efficient neural networks. It's a hardware solution that can't be easily simulated with software. Take video encoding .... Sure you can do it in software with a CPU but it's considerably slower than a hardware encoder like the one built into all modern GPUs .....

If Nvidia owns the IP, the solution there is to license it from Nvidia at, say, $10 or $20 per GPU that uses it. Even a lot of the so-called Open Standards are licensed and bought by the hardware vendors. Most people don't realize it, but every manufacturer that made Compact Cassette players or Compact Cassette tapes had to pay a per-unit licensing fee. The same thing is true for CD-ROM, DVD, and Blu-ray. There are literally dozens if not hundreds of other examples of that. (For instance, all the Dolby and DBX technologies are licensed technologies.)

What no one talks about is the tens of billions of dollars Nvidia has sunk into developing these technologies since 2011. They took a huge risk with CUDA and again with AI neural networks that could have very well bankrupted them if they hadn't panned out as well as they have. Risk deserves reward, and without the reward there is no sense in even taking a risk in the first place; then everything stagnates to the least common denominator.
 
I dont buy into proprietary systems as that is just the beginning of the end.
The deeper you get into a proprietary ecosystem the more you become their slave as all your options are gone.

I own an Nvidia GPU but not because of any of its proprietary feature, only because when I went to purchase at the time Nvidia offered the best price/performance ratio. (1080ti).

Its why I dont own any Apple products. They are lord and savior, the dictator, and the god of their ecosystem.
They make a decision and you choke on it.
 
You need specialized hardware to make effective and efficient neural networks. It's a hardware solution that can't be easily simulated with software.
I said you'd probably need the equivalent of tensor cores. Given those, yes you can abstract it to the level of "software" that runs on a GPU.

Take video encoding .... Sure you can do it in software with a CPU but it's considerably slower than a hardware encoder like the one built into all modern GPUs .....
It's probably not as if Nvidia GPUs have a 'DLSS' instruction. They simply have these tensor cores, and DLSS makes use of them in a filter stage that runs at the end of the rendering pipeline. Don't mystify it.
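To make that concrete, here's a toy, runnable sketch (plain NumPy, not Nvidia's actual code; a crude nearest-neighbor resize stands in for the neural network that DLSS runs on its tensor cores). The point is only where such a stage sits: it's an ordinary pass applied to the frame buffer the engine already produced.

```python
# Toy sketch: upscaling as an ordinary post-process stage on the frame buffer.
# A nearest-neighbor resize stands in for the neural filter; the placement,
# not the quality of the filter, is the point being illustrated.
import numpy as np

def render_low_res(width, height):
    """Stand-in for the expensive part: produce a low-resolution RGB frame."""
    y, x = np.mgrid[0:height, 0:width]
    return np.stack([x / width, y / height, np.zeros_like(x, dtype=float)], axis=-1)

def upscale_pass(frame, scale=2):
    """The 'reconstruction' stage: nearest-neighbor here, a neural filter in DLSS."""
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

low = render_low_res(960, 540)     # render internally at 960x540...
high = upscale_pass(low, scale=2)  # ...then reconstruct a 1920x1080 output
print(low.shape, "->", high.shape) # (540, 960, 3) -> (1080, 1920, 3)
```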

If Nvidia owns the IP, the solution there is to license it from Nvidia at, say, $10 or $20 per GPU that uses it.
No, there's no mechanism forcing them to license it. That's entirely at their discretion.

Even a lot of the so-called Open Standards are licensed and bought by the hardware vendors.
Yes, there's a lot of confusion about this point. "Open" doesn't necessarily mean "free". It just means that it's open for all to see.

Most people don't realize it, but every manufacturer that made Compact Cassette players or Compact Cassette tapes had to pay a per-unit licensing fee.
My understanding is that HDMI has rather high royalties, while DisplayPort has either low or no royalties. It's really up to the standards organization how to fund itself and pay patent holders with IP in the pool. HDMI comes from the HDMI Forum, while DisplayPort comes from VESA. The point is that different organizations can have different licensing policies.

None of these is like DLSS, which is an Nvidia-proprietary technology - not an open standard!

The same thing is true for CD-ROM, DVD, and Blu-ray. There are literally dozens if not hundreds of other examples of that. (For instance, all the Dolby and DBX technologies are licensed technologies.)
Just about every single example you cited depends on an ecosystem of hardware and software in order to be successful. Nvidia dominates the GPU hardware market to such a degree that it doesn't need any other GPUs to implement DLSS in order to attract software developers to support it.

What no one talks about is the tens of billions of dollars Nvidia has sunk into developing these technologies since 2011.
Eh, because there's not a lot to be said about it?

They took a huge risk with CUDA and again with AI neural networks that could have very well bankrupted them if they hadn't panned out as well as they have.
They made a gamble and executed it well. I believe Jensen, when he says they had no sort of "5 year plan". They just picked a direction and went with it. It paid off. Don't make it into something it's not.

Risk deserves reward, and without the reward there is no sense in even taking a risk in the first place; then everything stagnates to the least common denominator.
I don't even know where this is coming from. Nobody said they aren't entitled to sell their DLSS and CUDA, for that matter. Nobody has to use it, though, regardless of how much Nvidia sunk into it.

On the flip side, if I'm a game developer, I still want my software to run well on all GPUs, and I don't want to waste effort on one implementation for Nvidia, one for AMD, and one for Intel. To say nothing of Qualcomm. So, that's why we have things like Direct3D and Vulkan, which were created to abstract away differences between different hardware, and why the software vendors naturally want a hardware-independent solution to upsampling.
 
Simply cannot agree with either DLSS or FSR. I say this as an nVidia owner. DLSS may do amazingly well for an upscale, but it's still an upscale. I've used it and it's still a much blurrier, fuzzier, and overall confusing experience compared to the razor sharp clarity of native resolution. DLSS is OK for consoles because they simply can't bring the firepower that modern games with PC-oriented specs demand (ironic that they're made for console first but with design goals oriented towards the raw power of PCs), and it helps a lot that most players aren't near enough to their screens to pick out individual pixels anyway, but even on them it would still be better if things were optimized to run at least closer to native.

I don't really understand this trend towards making modern gaming feel like you're sitting in a crowded theater while very sick with a high fever, half dreaming what you see, and the footage is an old film that was recorded on a cheap, low-grade camera and then left, forgotten, in a basement that frequently has moisture issues due to leaks. They force our systems to waste enormous resources to make everything blurrier and uglier and I just don't get why this is such a thing. Chromatic aberration, motion blurring, separate object blurring, bloom, film grain noise, and now rendering the game in a lower resolution and trying to use tricks just to make that at least less blurry than it otherwise would be. And as impressive as things like RTX may seem in still images, when you're actually playing a game and dodging shots from enemies or building buildings, etc. -- i.e., if you're actually playing the game instead of just looking at it -- you really just don't even particularly notice a difference between on and off. Lower a few settings just ever so slightly from ultra to just high or whatever and turn off all that crap like chromatic aberration (which nets you quite a few FPS on its own), and the games can look truly amazing while also being full resolution, sharp and clear without any tricks necessary to try to fool you.

I especially just don't understand the cheap low-grade camera effects. We're supposed to be imagining we're there in that alternate reality, not imagining we're watching a really bad film recorded on a really low quality camera from the 1970s. It just really slams that fourth wall shut in your face with a vengeance. I'm tired of gaming feeling like fever dreams of watching a bad recording on bad film.


How about instead of focusing so hard on DLSS, FSR, etc., they start just making games look good and enjoyable to play? If GPU manufacturers want to add something neat, how about having overrides that force-disable chromatic aberration and the like in the drivers themselves somehow (like blacklisting certain shader hashes or something)? That would be spending those resources we have to pay so much extra for in a useful manner, instead of making us have bad fever dreams.
 
They force our systems to waste enormous resources to make everything blurrier and uglier and I just don't get why this is such a thing.

A lot of that is due to sloppy programming, but other than that, modern graphics require more raw power than ever before.

The model of gaming that we've come to know is becoming increasingly difficult to sustain.

Extreme horsepower alone can only get you as far as playing Cyberpunk on a 4090 (which is by far the best GPU of our time) at just 22 FPS at 4K RT Ultra settings.

As much as I hate admitting it, upscalers are an inevitability.

And as impressive as things like RTX may seem in still images, when you're actually playing a game and dodging shots from enemies or building buildings, etc. -- i.e., if you're actually playing the game instead of just looking at it -- you really just don't even particularly notice a difference between on and off. Lower a few settings just ever so slightly from ultra to just high or whatever and turn off all that crap like chromatic aberration (which nets you quite a few FPS on its own), and the games can look truly amazing while also being full resolution, sharp and clear without any tricks necessary to try to fool you.

But still: will the FPS be enough to give you a smooth gaming experience? If the latest version of Cyberpunk is any indication of things to come, I don't think so.

That's exactly why an upscaler can be quite handy: it sacrifices a little bit of visual quality in favor of an enormous boost in performance - and in a more effective way than lowering the settings, I might add.



 
Last edited by a moderator:
I can't speak for 4K. I never fully understood the hate for 1080p when the DPI is very reasonable on a very reasonable sized monitor viewed at a very reasonable distance. I can tell you that Cyberpunk 2077 runs excellently at 1080p with 60 FPS most of the time when RTX is off on some very high settings otherwise. And again, I'm still not convinced that when you're actually playing the game instead of searching for still images to take screenshots the difference is all that visible. I tried both on and off and really didn't see that huge of a difference personally.

Actually, I think a 4090 should be delivering more than that even in 4K. You may have something wrong with your setup there.


EDIT: Oh, your CPU is Raptor Lake. You may want to verify that the game is definitely not using any of the economy cores...
 
EDIT: Oh, your CPU is Raptor Lake. You may want to verify that the game is definitely not using any of the economy cores...

Most games favor having E-cores enabled. And in the case of Cyberpunk 2077, there's no practical performance difference.

I feel like the community already had this argument with regard to SMT back when it was an Intel-only thing.
 
That makes zero logical sense. Economy cores perform worse and have been caught causing framerate limitations and such in games such as Starfield. They weren't meant to perform better. Remember, things like benchmarks scale linearly because they do very very simplistic operations, but games have operations of widely varying complexities and while some will do fine on economy cores, some will be gutted compared to a performance core on the same operation. Also, games can't really utilize that many cores at all. Most max out around four or so being truly utilized. A few maybe can do as many as six. I still see some modern games only truly utilizing two. (Turn off SMT/HT and then watch how much your cores are actually being used. You'll find most are basically just baseline OS stuff staying around like 4% or so and then only a handful are truly hitting 20+%.) Since throwing more and more cores at a game won't actually benefit it, performance cores are absolutely better for gaming by definition. (Unless your chip is running so close to its tolerance it gets too hot and downclocks too much when using the performance cores, but then the problem there is it shouldn't be running like that and what it really needs from the end-user perspective is a better cooler -- though truly what it needs is for better stock optimizations on the governor and such.)

I'd like to see more people doing more tests without economy cores enabled to see what happens in real life examples. Logically games MUST do worse on economy cores by definition.
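For anyone who wants to eyeball per-core usage themselves, here's a minimal sketch (Python with the third-party psutil package; my own illustration, not something referenced in the thread) that prints per-logical-core utilization while a game is running:

```python
# Minimal per-core utilization monitor (requires the third-party psutil package).
# Run it in a terminal while a game is running to see how many logical cores are
# doing meaningful work versus idling near baseline.
import psutil

SAMPLES = 10
for _ in range(SAMPLES):
    # cpu_percent(percpu=True) returns one utilization figure per logical core,
    # averaged over the given interval.
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    busy = sum(1 for pct in per_core if pct > 20)
    print(" ".join(f"{pct:5.1f}" for pct in per_core), f"| cores >20%: {busy}")
```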

No, trust me, there's nothing wrong.

That's just the way things are at the moment, as far as Cyberpunk is concerned:

Actually, it has been this way for a few months now:

If you require DLSS to get reasonable quality settings, then something is wrong. It's just that the problem may be on their end rather than yours.

I'm still not convinced RTX is even worth it. They're pushing way too hard for people to try to use effects that may be nice in stills, but don't make enough difference to be worth the FPS hit, and then trying to compensate by actually running at significantly reduced resolution then upscaling which, ironically, means a lot of the original detail actually is lost and it just tries to approximate it back in (so you're increasing quality settings, then basically losing some of those increases anyway). Quite frankly I'm convinced this is because nVidia is invested so heavily in stuff like LLMs which, while great for the things that specifically use them like the various so-called "AI" services, means they're investing less in the actual techs that games are truly using. They have to convince you that you need RTX since it's their thing, but can't have you realizing that it just isn't really there yet. Meanwhile AMD seems to be getting ahead at a lot of things, like VRAM for instance. Just look at the memory bus on the 4060 for example. In many cases it can actually perform even worse than a 3060... It should be an upgrade to go to the next generation, but outside of a lot of stuff like LLMs it actually is a downgrade...

Frankly, I'm convinced DLSS is a campaign to keep end users from noticing that nVidia just isn't really focused on normal consumers right now, so they don't lose the business.

And I say this as an nVidia user.
 
That makes zero logical sense. Economy cores perform worse and have been caught causing framerate limitations and such in games such as Starfield. They weren't meant to perform better. Remember, things like benchmarks scale linearly because they do very very simplistic operations, but games have operations of widely varying complexities and while some will do fine on economy cores, some will be gutted compared to a performance core on the same operation. Also, games can't really utilize that many cores at all. Most max out around four or so being truly utilized. A few maybe can do as many as six. I still see some modern games only truly utilizing two. (Turn off SMT/HT and then watch how much your cores are actually being used. You'll find most are basically just baseline OS stuff staying around like 4% or so and then only a handful are truly hitting 20+%.) Since throwing more and more cores at a game won't actually benefit it, performance cores are absolutely better for gaming by definition. (Unless your chip is running so close to its tolerance it gets too hot and downclocks too much when using the performance cores, but then the problem there is it shouldn't be running like that and what it really needs from the end-user perspective is a better cooler -- though truly what it needs is for better stock optimizations on the governor and such.)
Assuming the scheduler is doing its job in making sure that high-performance tasks get shoved onto P-cores, there's no harm in having E-cores. Also, having a complement of E-cores can keep housekeeping tasks that need to run in the background from stealing a P-core.

If there's a problem when turning on E-cores, then that's an issue with either the scheduler or the application, since the developer can give the scheduler hints about what kind of work is being done.

If it's possible to disable economy cores in the BIOS, it would make a great experiment. (Alternatively, I guess Task Manager could be used to set the game to use only specific cores, but I don't know how you would know which ones are which.)
That's what the article I linked is doing.
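For what it's worth, affinity can also be set programmatically rather than through Task Manager. A rough sketch (again Python with psutil; the process name and core list are placeholders, and which logical core numbers map to P-cores vs. E-cores depends on the specific CPU):

```python
# Rough sketch: pin a running process to specific logical cores via psutil.
# "Cyberpunk2077.exe" and the core list are placeholders; may require running
# with elevated privileges to modify another process.
import psutil

TARGET_NAME = "Cyberpunk2077.exe"    # placeholder process name
CHOSEN_CORES = list(range(0, 16))    # placeholder: first 16 logical CPUs

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == TARGET_NAME:
        print("before:", proc.cpu_affinity())
        proc.cpu_affinity(CHOSEN_CORES)   # restrict scheduling to these cores
        print("after: ", proc.cpu_affinity())
```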

I'm still not convinced RTX is even worth it. They're pushing way too hard for people to try to use effects that may be nice in stills, but don't make enough difference to be worth the FPS hit, and then trying to compensate by actually running at significantly reduced resolution then upscaling which, ironically, means a lot of the original detail actually is lost and it just tries to approximate it back in (so you're increasing quality settings, then basically losing some of those increases anyway). Quite frankly I'm convinced this is because nVidia is invested so heavily in stuff like LLMs which, while great for the things that specifically use them like the various so-called "AI" services, means they're investing less in the actual techs that games are truly using. They have to convince you that you need RTX since it's their thing, but can't have you realizing that it just isn't really there yet. Meanwhile AMD seems to be getting ahead at a lot of things, like VRAM for instance. Just look at the memory bus on the 4060 for example. In many cases it can actually perform even worse than a 3060... It should be an upgrade to go to the next generation, but outside of a lot of stuff like LLMs it actually is a downgrade...

Frankly, I'm convinced DLSS is a campaign to keep end users from noticing that nVidia just isn't really focused on normal consumers right now, so they don't lose the business.

And I say this as an nVidia user.
A problem I see with the whole ray tracing thing is consumers only care about what they see. And if they don't see amazeballs uplifts in image quality, they toss it aside as a gimmick. And I get that, consumers want to see actual progress.

But the problem is that ray tracing isn't just about making realistic lighting. It's about doing it easily. I don't really know any level designers for AAA games, but I'm sure they all are sticklers for detail in how something looks. With rasterization, if you want to get something to look fairly realistic, you have to do a bunch of work with regards to the lighting to get there. And then you have to take individual lights and tweak the parameters to make sure they aren't killing performance (like how rasterized shadows look awful unless you're within spitting distance of them).

If all you have to do to light a scene and get realistic shadows, bounce lighting, etc. is click a checkbox, then that would improve the level design process quite a bit. And I did find a scenario where rasterization performed worse than ray tracing: a scene with hundreds of dynamic lights casting high-resolution dynamic shadows. And this was with DLSS off.
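To illustrate the scaling argument, here's a toy cost model (the numbers are invented purely for illustration, not measured from any engine): rasterized dynamic shadows typically need one extra geometry pass per shadow-casting light, while a ray-traced pass can sample a fixed budget of lights per pixel, so its cost stays roughly flat as lights are added.

```python
# Toy back-of-the-envelope cost model, arbitrary units, invented for illustration.
# Rasterized shadows: one shadow-map pass over the scene per shadow-casting light.
# Ray traced shadows: a fixed number of light samples per pixel, so roughly flat cost.

GEOMETRY_PASS_COST = 1.0   # one full-scene geometry/depth pass
SHADOW_RAY_COST    = 3.0   # shadow rays for the whole frame at a fixed sample budget

def raster_cost(num_shadow_lights):
    main_pass = GEOMETRY_PASS_COST
    shadow_maps = num_shadow_lights * GEOMETRY_PASS_COST
    return main_pass + shadow_maps

def raytrace_cost(num_shadow_lights):
    # Cost barely depends on light count when lights are importance-sampled per pixel.
    return GEOMETRY_PASS_COST + SHADOW_RAY_COST

for n in (1, 8, 64, 256):
    print(f"{n:4d} lights  raster={raster_cost(n):7.1f}  rt={raytrace_cost(n):5.1f}")
```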

This sort of thing feels like when DX10 was starting to take off. It didn't vastly improve image quality and people wrote it off as a waste of time. But DX10 brought some things that helped streamline game development, like doing away with cap bits and introducing intuitive feature levels instead. So now game developers only had to target a feature level rather than hoping that the GPU the game ran on had some cap bit enabled.
 
DLSS may do amazingly well for an upscale, but it's still an upscale. I've used it and it's still a much blurrier, fuzzier, and overall confusing experience compared to the razor sharp clarity of native resolution.

  1. What's the latest version of DLSS you have tried? There was a massive difference between DLSS 1 and DLSS 2. Version 1 definitely wasn't very good.
  2. What size & resolution of monitor did you use? I think upsampling makes a lot of sense, when using a high-DPI monitor, where you have some trouble making out the smallest details, anyhow. If you're using it on like a 27" 1080p monitor, then you're definitely going to see all of its flaws and weaknesses in a way that someone using a 32" 4k monitor wouldn't.

I don't really understand this trend towards making modern gaming feel like you're sitting in a crowded theater while very sick with a high fever, half dreaming what you see, and the footage is an old film that was recorded on a cheap, low-grade camera and then left, forgotten, in a basement that frequently has moisture issues due to leaks.
Can you give some examples of games which do this? Just curious.

I especially just don't understand the cheap low-grade camera effects. We're supposed to be imagining we're there in that alternate reality, not imagining we're watching a really bad film recorded on a really low quality camera from the 1970s. It just really slams that fourth wall shut in your face with a vengeance. I'm tired of gaming feeling like fever dreams of watching a bad recording on bad film.
Yeah, I remember back when I played games, you'd sometimes see effects like this when you got a concussion or ate/drank something poisonous. I agree that it'd be annoying to experience all the time.

As for camera artifacts, those could make sense if you're playing some kind of Battle Tech game where the whole point is you're piloting a robot and watching the action through a camera.

The only way I can see "film grain" making any kind of sense for live gameplay is if you're using night-vision, where the image tends to look very grainy and noisy.

how about having overrides that force-disable chromatic aberration and the like in the drivers themselves somehow (like blacklisting certain shader hashes or something)?
They wouldn't want to give some players an unfair advantage.
 
PhysX cards died because Nvidia bought them... locked inside the Nvidia ecosystem, and that was it.
PhysX cards are from the same era as Virtu MVP... LOL, days lost trying to get that thing to work!
PhysX was already a hard sell back then. It was $200 for the card, only three games supported it at the time and none of them really spectacular blockbusters either. And even then, all it provided was more clutter (that disappeared) in certain effects, none of which had any real improvement to gameplay.

Considering you could get most of your interactive physics through Havok (if Half-Life 2 was any indication of this) and Crysis did all sorts of things with destructible environments without needing a PPU or GPU accelerated physics, the writing was kind of on the wall that having dedicated hardware just for physics wasn't going to take off.

I think even today games that have hardware accelerated physics use it only to show more clutter or better simulate fluttering cloth/clothing.
 
PhysX was already a hard sell back then. It was $200 for the card, only three games supported it at the time and none of them really spectacular blockbusters either. And even then, all it provided was more clutter (that disappeared) in certain effects, none of which had any real improvement to gameplay.

Considering you could get most of your interactive physics through Havok (if Half-Life 2 was any indication of this) and Crysis did all sorts of things with destructible environments without needing a PPU or GPU accelerated physics, the writing was kind of on the wall that having dedicated hardware just for physics wasn't going to take off.

I think even today games that have hardware accelerated physics use it only to show more clutter or better simulate fluttering cloth/clothing.
A long, long time ago, you could combine ATI graphics with Nvidia graphics and use the Nvidia card like a PhysX card.
You could slave an Nvidia card... and the FPS in games was insane... I paired a 4870 with a slaved 8600 GT... Good times, before Nvidia locked that out in the drivers...

There weren't many good games with that tech, but Nvidia surely didn't waste any time buying it.
 
I'm not sure where it is, sorry. Other than the obvious "must be part of a quest", I'm not sure. It could be part of the DLC though.

And one of the best implementations of the "coming out of a dark area into light" effect I've seen is in Guild Wars 2, which started as a DX9c game and is now DX11. I think GTFO would also qualify there, as it plays with light effects a lot; it's a tactical suspense game you play underground (and totally recommended, BTW).

TPU (and many others) have also put out their RR reviews and deep dives. My impression is that the tech is very promising, but the quality trade-offs on top of the lost performance (plus you have to use upscaling, as I understood it) are not worth it. Not to mention FG is only for the RTX 40 series, and that's where it gets slightly competitive with raster, which still looks good.

I also liked the discussion about "realistic" vs "good" in most threads I've seen; as always, people tend to mix the two, but oh welp...

A scene being realistic is something you can prove (not super easy, but you can), but "good" is absolutely subjective and up to the viewer. My take on this is, even if it does a better job at closing the gap between "fake light" and "realistic light", it loses a lot on other artistic effects that matter for the scene, so it loses "realism" and "artsy" points for me. Some details just look terrible, but I guess that can be improved over time.

As a personal conclusion, I think RR is a good addition, but it needs to mature. Much like DLSS1 needed to and DLSS2 (or the upscaler bit) still continues to mature.

Regards.
One of the most impressively lit games I've played was Mad Max - and it didn't even use RT. The transition from the sun-bleached open world to caves felt quite natural, and the model lighting under moving parts (like a fan) was eye-popping.
Too bad the game itself was merely good...
 
"But this effectively breaks competition, in the sense that AMD and Intel GPUs have no way of running DLSS 3.5."

Why do people keep saying "competition"? There IS NO competition with Nvidia. They command 90% market share, and that's competition? There are levels to this, and CLEARLY Nvidia is on a whole different level than AMD! AMD just doesn't have the technology or money to keep up with Nvidia. I will NEVER EVER buy an AMD GPU, why would I? Why would anybody? The vast majority of people prefer Nvidia because you're getting the best video cards money can buy, period. It simply is what it is.
 