News: Nvidia's RTX 50-series drivers feel half-baked, focus too much on MFG

Fake frames aren't real frames. Don't bench what isn't real.

The idea is sound in theory, but it falls apart in practice, and on lower-end GPUs it's going to show how bad it gets: the lower your frame rate is, the more the downsides of MFG are going to be front and center.

Yet you still gave the 5090 4 1/2 stars, an almost perfect score.
I mean, ignoring all the gimmicks, it's still the best GPU on the market.
It's not a great deal without the gimmicks, but that doesn't make it a bad product at the core.

I will say Nvidia needs to go to a longer release window for new GPUs, as this "bump" was lame. I'd have rather waited another year and gotten a meaningful improvement that wasn't just MFG, which only works in some titles and has serious downsides.
 
Yet you still gave the 5090 4 1/2 stars, an almost perfect score.
Weak-ish launch drivers don't change the fact that it's the fastest GPU available. There are things the 5090 can do that make it compelling, with up to 50% higher performance in some cases compared to the 4090. In the most demanding games at max settings, it's consistently around 30% faster. It's at least a clear improvement in features over the prior gen, with 33% more memory, 78% more bandwidth, and 33% more compute.
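
Those percentages fall straight out of the published spec sheets. As a quick sanity check, a minimal Python sketch (the dictionary values are the public memory-capacity, bandwidth, and CUDA-core numbers, with core count standing in for "compute"):

Code:
# Sanity-checking the gen-on-gen percentages from the published specs.
# Memory in GB, bandwidth in GB/s, CUDA cores as the "compute" proxy.
rtx_4090 = {"memory": 24, "bandwidth": 1008, "cuda_cores": 16384}
rtx_5090 = {"memory": 32, "bandwidth": 1792, "cuda_cores": 21760}

for key in rtx_4090:
    gain = (rtx_5090[key] / rtx_4090[key] - 1) * 100
    print(f"{key}: +{gain:.0f}%")
# memory: +33%, bandwidth: +78%, cuda_cores: +33%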

Given the end of Moore's Law scaling, such improvements are better than what we'll typically see going forward.

Fake frames aren't real frames. Don't bench what isn't real.

The idea is sound in theory, but it falls apart in practice, and on lower-end GPUs it's going to show how bad it gets: the lower your frame rate is, the more the downsides of MFG are going to be front and center.
None of the frames are "real" and ultimately it's important to see what the experience is actually like. I've tried framegen in a variety of games and with a variety of hardware. It's not good for taking sub-30 FPS rendering up to 60 or 100 FPS. It does improve the experience when going from 40+ FPS to ~70 or ~140 FPS. It's not the improvement that the numbers would suggest, but it is an improvement in most cases.

Does framegen at 4K ultra on an RTX 4060 work great? No, not in games where the GPU is already running out of VRAM. Does framegen at 1440p with quality or balanced mode upscaling enabled work well on the 4060? Generally yes. You can go from ~50 FPS in various games to ~80 FPS, as reported by the number of rendered and generated frames sent to the display. The feel of the ~80 FPS with framegen ends up being somewhat better than the non-FG result, even if the input rate is lower.
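
To put rough numbers on that, here's a toy model. The 15% render-rate penalty for running framegen is my own placeholder assumption, not a measured figure, and the "input every X ms" line is a crude latency proxy:

Code:
# Toy model of framegen: displayed FPS goes up by the FG factor, but the
# underlying render rate (and thus input cadence) drops slightly because
# the GPU also has to run the frame generation work.
FG_RENDER_PENALTY = 0.15  # assumed overhead, not a measured number

def framegen_estimate(base_fps, factor):
    rendered = base_fps * (1 - FG_RENDER_PENALTY) if factor > 1 else base_fps
    displayed = rendered * factor
    input_interval_ms = 1000 / rendered  # simplified: one rendered-frame interval
    return displayed, input_interval_ms

for factor in (1, 2, 4):
    fps, ms = framegen_estimate(45, factor)
    print(f"{factor}X: ~{fps:.0f} FPS displayed, input every ~{ms:.1f} ms")

With a 45 FPS base, that comes out to roughly 76 FPS at 2X and 153 FPS at 4X, in the same ballpark as the 40+ FPS to ~70 or ~140 numbers above.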
 
Well, that is to be expected. I'm sure that in the coming months there will be a ton of driver optimizations from Nvidia and improved support from the game devs themselves.

It would be interesting to see a retest half a year down the road, when the ecosystem has matured a bit. I bet there will be a decent ~7% average bump across the board, just from that.

And yeah, we can cry all we like about Nvidia and their partners skinning us from head to toe, but in the end the 5090 is the fastest consumer GPU ever produced on planet Earth so far, and by a decent margin at that.
 
None of the frames are "real"
Except they are real: ignoring any toggled settings, you are rendering frames by default. If my game doesn't support FG/MFG, then those are real frames (as in, what the card can actually do at all times).

If your GPU is actually rendering 60 frames, those are real.
If your GPU is rendering 60 frames and using "AI" to fake 180 more, that's not 200+ frames, it's 60 frames, and it's going to feel "off" to play with. Again, MFG's feel is entirely based on the actual frames you can hit without it, and the more frames you generate with it, the worse that feeling gets.

There is also the issue of input latency ballooning with it, which is a problem for competitive games that want as minimal latency as possible (while still having as many frames as possible).

Again, MFG in theory is a good idea, but it's not going to work like that in practice. There are downsides, especially given that game devs have stopped trying to optimize games (which would mean higher real frame rates, which benefit the fake frames) and just brute force them instead, which lowers both performance and the benefit of the technology.
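
To illustrate the 60-frames point with numbers, a quick sketch that assumes, as a simplification, that input is only sampled once per rendered frame:

Code:
# Displayed frame rate vs. how often the game actually samples input.
# Simplification: input is polled once per *rendered* frame, so MFG
# raises the display rate without touching the input cadence.
def feel_numbers(rendered_fps, mfg_factor):
    displayed = rendered_fps * mfg_factor
    print(f"{rendered_fps} rendered x{mfg_factor} = {displayed} displayed | "
          f"new frame every {1000 / displayed:.1f} ms, "
          f"input sampled every {1000 / rendered_fps:.1f} ms")

for factor in (1, 2, 4):
    feel_numbers(60, factor)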
 
If your GPU is actually rendering 60 frames, those are real.
If your GPU is rendering 60 frames and using "AI" to fake 180 more, that's not 200+ frames, it's 60 frames, and it's going to feel "off" to play with. Again, MFG's feel is entirely based on the actual frames you can hit without it, and the more frames you generate with it, the worse that feeling gets.
So, if you use DLSS, is that fake frames too? It is that icky "AI", after all.

I'm just trying to get up to speed with the purity test criteria here.
 
My issue with Frame Gen, DLSS, FSR, etc. is that they are inconsistently executed and can't be counted on to function correctly in many titles. Driver revisions break things as often as they fix them. Frame Gen has been particularly egregious on this front, and as far as I'm concerned, it deserves the scrutiny and derision it gets from the user base. DLSS has been more consistent but still has bleeding and moiré issues under motion in most titles. I'm led to believe the new transformer model solves most of the bleed issues, but I'll have to see it for myself on my own system.
 
What does it feel like, though? Until you can fake the responsiveness of 240 FPS as well, it's just nice wallpaper.

Games are more than just graphics, though game companies seem to have forgotten that anyway. Too much time spent making them look good, not enough on making them fun as well. But I digress.
That's what I'm getting at. 100 FPS with framegen feels better than 60 FPS non-FG. Not massively better, but maybe 10~20 percent better. So even though the latency increases slightly, if you go from 30 ms to 40 ms it's not a huge deal. If you go from 35 ms to 90 ms, that's a different story.

What I'm seeing and experiencing with MFG is that, for the most part, the base framerates are almost the same as regular framegen, so you're getting additional smoothing frames without really hurting input latency. So if you have a game that ran at 50 FPS without FG, and 80 FPS with FG, you will get close to 120 FPS at MFG 3X and 160 FPS at MFG 4X. Or to give specific examples (full writeup coming):

Code:
      AlanWake2FullRT RTX 5090 DLSSQT 4K - AVG:  48.26
AlanWake2FullRT RTX 5090 DLSSQT MFG2X 4K - AVG:  88.82 (84% increase)
AlanWake2FullRT RTX 5090 DLSSQT MFG3X 4K - AVG: 128.40 (166% increase)
AlanWake2FullRT RTX 5090 DLSSQT MFG4X 4K - AVG: 166.00 (244% increase)

      Cyberpunk2077FullRT RTX 5090 DLSSQT 4K - AVG:  59.03
Cyberpunk2077FullRT RTX 5090 DLSSQT MFG2X 4K - AVG: 107.52 (82% increase)
Cyberpunk2077FullRT RTX 5090 DLSSQT MFG3X 4K - AVG: 153.74 (160% increase)
Cyberpunk2077FullRT RTX 5090 DLSSQT MFG4X 4K - AVG: 194.89 (230% increase)

      HogwartsLegacyFullRT RTX 5090 DLSSQT 4K - AVG:  55.32
HogwartsLegacyFullRT RTX 5090 DLSSQT MFG2X 4K - AVG: 113.81 (106% increase)
HogwartsLegacyFullRT RTX 5090 DLSSQT MFG3X 4K - AVG: 168.80 (205% increase)
HogwartsLegacyFullRT RTX 5090 DLSSQT MFG4X 4K - AVG: 222.03 (301% increase)

Hogwarts Legacy is basically 100% CPU limited at the settings used, which is why the scaling is nearly perfect. (I have no idea why it ends up being more than 100%.) But does Hogwarts at 222 FPS with MFG 4X feel like a game running at 222 FPS? Not even close. It looks maybe closer to ~100 FPS in my opinion, because there's a lot of stuttering.

Alan Wake 2 and Cyberpunk 2077 both look and feel better than Hogwarts, as they don't stutter. The 166 FPS with MFG 4X probably feels and looks more like ~100 FPS, and for CP77 the 195 FPS probably feels and looks closer to ~110 FPS. But it gets fuzzy. There are diminishing gains going from 2X to 3X to 4X, but I haven't encountered anything yet where MFG 4X looks or feels worse than 2X, and they all look and feel better (to me) than the non-FG result.
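
For anyone who wants to check the math, here's a quick snippet that recomputes the gains from the averages above, plus how close each step gets to an ideal 2X/3X/4X multiple of the no-FG baseline:

Code:
# Recompute the percentage gains above and the scaling efficiency
# relative to a perfect 2X/3X/4X multiple of the no-FG baseline.
results = {
    "AlanWake2FullRT":      (48.26, 88.82, 128.40, 166.00),
    "Cyberpunk2077FullRT":  (59.03, 107.52, 153.74, 194.89),
    "HogwartsLegacyFullRT": (55.32, 113.81, 168.80, 222.03),
}

for game, (base, *mfg) in results.items():
    for factor, fps in zip((2, 3, 4), mfg):
        gain = (fps / base - 1) * 100
        efficiency = fps / (base * factor) * 100
        print(f"{game} MFG{factor}X: +{gain:.0f}% ({efficiency:.0f}% of ideal {factor}X)")

Hogwarts lands at or above 100% of the ideal multiple at every step, which is the over-100% oddity noted above; the other two games fall off gradually as the factor climbs.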
 
Show me where the new architecture is
GDDR7 memory
Tensor FP4 support
All CUDA cores are now full FP32/INT32 citizens
Tensor operations can be better executed within shaders (AI + graphics)
RT ray/triangle rates doubled
RT cores support "mega geometry" feature
Improved Shader Execution Reordering
AI Management Processor to help with scheduling of resources
Flip metering to pace MFG better
Improvements for power management and sleep -> resume
New NVENC and NVDEC features
DisplayPort 2.1b UHBR20 on all three DP ports
PCIe 5.0

That's the short summary, so yes, the architecture is definitely "new."
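
On the flip metering item: my simplified mental model (not Nvidia's actual implementation) is that each rendered frame yields a burst of generated frames that have to be presented at even sub-intervals to look smooth:

Code:
# Simplified picture of MFG frame pacing: at a 60 FPS render rate with
# MFG 4X, four frames become ready every ~16.7 ms; flip metering aims
# to present them at even ~4.2 ms intervals instead of in a burst.
RENDER_FPS = 60
MFG_FACTOR = 4
render_interval = 1000 / RENDER_FPS            # ~16.7 ms per rendered frame
flip_interval = render_interval / MFG_FACTOR   # ~4.2 ms target spacing

for n in range(2 * MFG_FACTOR):  # two rendered frames' worth of output
    print(f"frame {n}: present at {n * flip_interval:5.1f} ms")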
 
So, if I am understanding correctly, the drivers are not taking full advantage of the GPUs in rasterization, but the loss is insignificant once MFG is applied?
No. The drivers seem to have more CPU overhead with Blackwell right now, leading to worse 1% lows and worse 1080p performance. The latest drivers for 5080 may have improved things slightly relative to the 5090 drivers, or it may simply be that 5090 is faster and thus needs more CPU horsepower.

MFG is an enhancement of framegen, with better image quality in general and a smoother look. Even if it's 2X the number of frames, though, it doesn't really feel much better.
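
For context on the 1% lows, one common way to compute them (exact methodology varies by reviewer) is to average the slowest 1% of frame times and convert back to FPS. A hypothetical capture shows how occasional CPU spikes tank the 1% lows without moving the average much:

Code:
# One common 1% low calculation: average the slowest 1% of frame
# times, then convert back to FPS. Methodology varies by reviewer.
def one_percent_low(frame_times_ms):
    worst = sorted(frame_times_ms, reverse=True)
    count = max(1, len(worst) // 100)  # the slowest 1% of frames
    return 1000 / (sum(worst[:count]) / count)

# Hypothetical capture: mostly ~8 ms frames plus a handful of 20 ms CPU spikes.
sample = [8.0] * 990 + [20.0] * 10
print(f"Average FPS: {1000 * len(sample) / sum(sample):.0f}")  # ~123
print(f"1% low FPS:  {one_percent_low(sample):.0f}")           # 50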
 
Does seem like we are entering another slump on process nodes.

The 600, 700, and 900 series were all on 28nm. That is why the 10 series was such an improvement and the last time pricing made much sense; the 1080 Ti was too good for the price.
(me, doing old-codger voice) "Yep, thems was different times. Oh, Pascal was king o' the hill, but hot dang, if it weren't that 'bout everyone yuh jawed with had hold of a Polaris, tell yuh what."
 
GDDR7 memory
Tensor FP4 support
All CUDA cores are now full FP32/INT32 citizens
Tensor operations can be better executed within shaders (AI + graphics)
RT ray/triangle rates doubled
RT cores support "mega geometry" feature
Improved Shader Execution Reordering
AI Management Processor to help with scheduling of resources
Flip metering to pace MFG better
Improvements for power management and sleep -> resume
New NVENC and NVDEC features
DisplayPort 2.1b UHBR20 on all three DP ports
PCIe 5.0

That's the short summary, so yes, the architecture is definitely "new."
Given the RT improvements in the new architecture, why do you think the 5000-series RT performance is so similar (scaling linearly with raster) to the 4000 series?
 
Does seem like we are entering another slump on process nodes.

The 600, 700, and 900 series were all on 28nm. That is why the 10 series was such an improvement and the last time pricing made much sense; the 1080 Ti was too good for the price.
Well, I hope they will move to a proper 3nm node for Rubin or whatever the next consumer arch will be.

But yeah, it seems to me all the low-hanging fruit of process node shrinks has been picked, and now anything new on that front will be a lot of blood, tears, pain, and money.
 
Well, I hope they will move to a proper 3nm node for Rubin or whatever the next consumer arch will be.

But yeah, it seems to me all the low-hanging fruit of process node shrinks has been picked, and now anything new on that front will be a lot of blood, tears, pain, and money.
Pretty sure Rubin is data center only, like Volta and Hopper. AFAIK, we don't have any details on the next generation architecture name for the RTX 60-series. I vote for... Gauss or Euclid, maybe Euler. 😀
 
Except they are real: ignoring any toggled settings, you are rendering frames by default. If my game doesn't support FG/MFG, then those are real frames (as in, what the card can actually do at all times).

If your GPU is actually rendering 60 frames, those are real.
If your GPU is rendering 60 frames and using "AI" to fake 180 more, that's not 200+ frames, it's 60 frames, and it's going to feel "off" to play with. Again, MFG's feel is entirely based on the actual frames you can hit without it, and the more frames you generate with it, the worse that feeling gets.

There is also the issue of input latency ballooning with it, which is a problem for competitive games that want as minimal latency as possible (while still having as many frames as possible).

Again, MFG in theory is a good idea, but it's not going to work like that in practice. There are downsides, especially given that game devs have stopped trying to optimize games (which would mean higher real frame rates, which benefit the fake frames) and just brute force them instead, which lowers both performance and the benefit of the technology.
Just out of interest: do you have an RTX 40/50-series card? Have you tried FG/MFG? Curious to know.
 
The cynic in me says that Nvidia is getting lazy, like many game devs and studios, rushing out projects that clearly need more work. And with that rush come bad drivers or poorly optimized games.

Yes, I know that drivers can improve all manner of things for a GPU over time, but it kinda feels like reviewers and consumers are still doing the beta testing. It will be good to look back in a few months and see how the drivers are then; they do have to mature a little with each new GPU release, so this is somewhat expected.
 
No no no, Euler needs to be saved for some truly huge arch breakthrough. The guy saved my CS degree. I have a longstanding love-hate relationship with Euler.
I was trying to remember if he had already been used for something in the past, but I can't seem to come up with any specific GPU or CPU architecture. But he's a big name for sure. Maybe Euler will be for when Nvidia changes from the RTX branding to something new?