Nvidia GeForce RTX 5090 Founders Edition review: Blackwell commences its reign with a few stumbles

I was talking about the performance. Period! It actually doesn't seem like a new design at all, just renaming and tweaks. Period.
You don't get to dictate what something is or isn't because it's your opinion. The performance doesn't change the architecture. Period.

So it has to retain value in order to be a luxury item? LOL The mental gymnastics are incredible. I never said a 5090 was a revolutionary product.
I also never implied it was the same architecture.
I said that the person took issue with the comparison to a Rolex because it was bad (and it is in that particular context), but I didn't say the 5090 wasn't a luxury item. A person can take issue with a portion of a statement, especially a comparison, without being entirely in disagreement.

I also never claimed you said the 5090 was revolutionary, and the architecture comment was in response to another user (who was quoted). Please read what I said more carefully.
 
Wow. Your statement is wrong on many levels! AI is not junk. It may be to you (thank god opinions are just that: opinions!), but not to many others. FG aside, AI also provides options like DLSS, which can be very helpful for increasing performance without much hassle on an appropriate GPU, and which can keep those with lighter pockets in the game (pardon the pun) without having to upgrade.

As for 60Hz being perfectly fine for whatever use, that makes me wonder if you've ever had a high refresh rate monitor, or for that matter a GPU capable of upscaling like DLSS/FSR/XeSS. Because if you had, you wouldn't be saying this.

Yes, even on a basic level, a high refresh monitor gives extremely smooth mouse scrolling for example. Don't knock it till you try it!
Plenty of people have tried high refresh rate without noticing any real difference.
 
Plenty of people have tried high refresh rate without noticing any real difference.
I guess that could be so. However, whilst I'm not some kind of FPS/Hz snob, the difference is real. Have you tried a high refresh rate monitor? If so, how did you find it? If not, then maybe check one out in a local tech store for yourself. I'm sure you would then reply differently.

Edit: If they're not noticing a difference, then perhaps their setup is the issue. Anyone gaming in a stuttery 60Hz mess who then tries the same game, with the same components, on a high refresh monitor will notice the difference.
 
I guess that could be so. However, whilst I'm not some kind of FPS/Hz snob, the difference is real. Have you tried a high refresh rate monitor? If so, how did you find it? If not, then maybe check one out in a local tech store for yourself. I'm sure you would then reply differently.

Edit: If they're not noticing a difference, then perhaps their setup is the issue. Anyone gaming in a stuttery 60Hz mess who then tries the same game, with the same components, on a high refresh monitor will notice the difference.

The answer to this is very complicated.

Short answer: Your eyes and more importantly visual cortex do not work the way you think they do.

Long answer: Your eyes and visual cortex do not see "frames" or even complete images, but instead sense changes in intensity and color. Furthermore, different parts of your eyes are specialized for receiving different kinds of light. You have way more intensity-sensing rods than color-sensing cones, except at the very center of your vision, where the relationship is reversed.


https://askabiologist.asu.edu/rods-and-cones

The center of your vision has tons of color information streaming in while everything else is largely just intensity (monochrome) with a little color. Your visual cortex is then reconstructing the image based on what it previously detected and what it expects to be there. This is how optical illusions work. And yes, this means human brains have been doing AI "Frame Gen" for a couple of million years now.

Now how does this relate to "ultra omega refresh rates"? Intensity changes. Human eyes are very sensitive to intensity changes, and our visual cortex is deliberately looking for large changes as a sign of motion. The intensity change from one frame to another is far more important than the number of frames presented. Frames per second is really just how discrete the intensity signal can be. At 60fps each frame is on screen for 16.66ms, meaning you go from one intensity to another every 16.66ms. 120fps is 8.33ms between intensity switches, meaning you can have a smoother transition between value A and B. 240fps is 4.17ms, giving you even more intermediary steps for smoother shifting of intensity.

Taking this all together, if your intensity is going from high to low within 16.6ms, having more intermediary steps lets the change appear smoother. If the intensity shift is low, going from one shade of green to another similar shade of green over 16.6ms, then more frames won't do anything since the visual cortex just discards the extra information anyway. This is why some people are drooling over refresh rates in femtoseconds while others are happy with 60Hz and don't see any difference with higher values. It's not the person, it's the content they interact with.
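
For what it's worth, here's a minimal sketch of that frame-time arithmetic (nothing more than the 1000/fps math above):

Code:
# Rough sketch: how long each discrete intensity value sits on screen.
for fps in (60, 120, 240):
    frame_time_ms = 1000 / fps
    print(f"{fps} fps -> {frame_time_ms:.2f} ms per frame")
# 60 fps -> 16.67 ms, 120 fps -> 8.33 ms, 240 fps -> 4.17 ms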
 
I guess that could be so. However, whilst I'm not some kind of FPS/Hz snob, the difference is real. Have you tried a high refresh rate monitor? If so, how did you find it? If not, then maybe check one out in a local tech store for yourself. I'm sure you would then reply differently.

Edit: If they're not noticing a difference, then perhaps their setup is the issue. Anyone gaming in a stuttery 60Hz mess who then tries the same game, with the same components, on a high refresh monitor will notice the difference.
You cannot forget the input aspect as that's a big driver of higher refresh. Someone who games using a controller generally isn't going to get the input benefit that someone using a mouse will.

Personally speaking, so long as it's not going below 60Hz it's visually fine, but depending on the title I will absolutely notice the input difference.

Outside of gaming, it's things like moving windows around and scrolling that become obvious at lower refresh rates. Scrolling is a big driver behind phones getting higher refresh displays.
 
The answer to this is very complicated.

Short answer: Your eyes and more importantly visual cortex do not work the way you think they do.

Long answer: Your eyes and visual cortex do not see "frames" or even complete images, but instead sense changes in intensity and color. Furthermore, different parts of your eyes are specialized for receiving different kinds of light. You have way more intensity-sensing rods than color-sensing cones, except at the very center of your vision, where the relationship is reversed.


https://askabiologist.asu.edu/rods-and-cones

The center of your vision has tons of color information streaming in while everything else is largely just intensity (monochrome) with a little color. Your visual cortex is then reconstructing the image based on what it previously detected and what it expects to be there. This is how optical illusions work. And yes, this means human brains have been doing AI "Frame Gen" for a couple of million years now.

Now how does this relate to "ultra omega refresh rates"? Intensity changes. Human eyes are very sensitive to intensity changes, and our visual cortex is deliberately looking for large changes as a sign of motion. The intensity change from one frame to another is far more important than the number of frames presented. Frames per second is really just how discrete the intensity signal can be. At 60fps each frame is on screen for 16.66ms, meaning you go from one intensity to another every 16.66ms. 120fps is 8.33ms between intensity switches, meaning you can have a smoother transition between value A and B. 240fps is 4.17ms, giving you even more intermediary steps for smoother shifting of intensity.

Taking this all together, if your intensity is going from high to low within 16.6ms, having more intermediary steps lets the change appear smoother. If the intensity shift is low, going from one shade of green to another similar shade of green over 16.6ms, then more frames won't do anything since the visual cortex just discards the extra information anyway. This is why some people are drooling over refresh rates in femtoseconds while others are happy with 60Hz and don't see any difference with higher values. It's not the person, it's the content they interact with.
You didn't need to go so in-depth, but I appreciate the answer nonetheless. Even if it's directed at the wrong person.

Getting back to my point, which is still true: there is a noticeable difference even in something as basic as mouse cursor movement at 60fps/Hz vs 144fps/Hz. It's very simple. It doesn't need to be dissected or cross-examined 😉
 
You didn't need to go so in-depth, but I appreciate the answer nonetheless. Even if it's directed at the wrong person.

Getting back to my point, which is still true: there is a noticeable difference even in something as basic as mouse cursor movement at 60fps/Hz vs 144fps/Hz. It's very simple. It doesn't need to be dissected or cross-examined 😉

Ehh I don't think you bothered to read it ...


As for mouse cursor ... that is your mind seeing what it wants to see. It's like when you give someone a taste test of Coke vs Pepsi and they swear up and down they can tell which one tastes better, only to find out they were the same drink the entire time. I've done this test on refresh rate snobs before. Had them swearing they noticed the faster refresh and smoother mouse on the higher Hz monitor, only for me to show them I had locked it at 60Hz the entire time. Talk about some angry people.

Where refresh rate really matters is the reduced input latency when you are doing something like triple buffering, where your input is a full two to three frames behind. At 16.66ms per frame that is a full 33 to 50ms worth of latency, which is noticeable. Reducing that to 8.33ms makes the input latency 16.66 to 25ms. That is where your "feeling smoother" is coming from. Double buffering, on the other hand, is only one frame, two at most, behind. Also, frame time consistency is very important to how our brains process visual information. A game doing a consistent 16.66ms per frame (60fps) is going to appear "smoother" than a game bouncing between 14.28ms (70fps) and 8.33ms (120fps) per frame. Our visual cortex is designed to notice differences and patterns, then use those patterns to fill in future information.

After that, your visual cortex just blurs the details anyway, unless it's a very sharp contrast change over a short period of time, like black to white and back to black.
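
For what it's worth, a minimal sketch of that frames-behind latency model (assuming input lag is simply frames buffered times frame time, as described above):

Code:
# Sketch of the buffering latency estimate described above:
# input latency ~= frames_behind * frame_time.
def buffered_latency_ms(fps, frames_behind):
    return frames_behind * 1000 / fps

for fps in (60, 120):
    for frames in (2, 3):  # triple buffering: roughly 2-3 frames behind
        print(f"{fps} fps, {frames} frames behind -> "
              f"{buffered_latency_ms(fps, frames):.1f} ms")
# 60 fps -> 33.3-50.0 ms; 120 fps -> 16.7-25.0 ms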
 
>I remember when video games were fun and accessible to everyone.

Games are still accessible, even AAA games on $300 GPUs. You just have to manage your expectations and dial the details down.
Of course, but the price disparity between low, mid/mainstream, and high-end GPUs has widened to a ridiculous degree. And depending on the games one plays, e.g. online FPS titles, those who can afford the highest-end cards and setups now have degrees of unfair advantage in terms of frame rates, wider resolutions, etc. over those stuck in the low to mid range.

It’s just become kinda silly that for the cost of a top end GPU now, one can literally buy every current gen console, and then some.
 
What would be interesting here is to see what happens with DLSS at Balanced and Performance modes, specifically against the DLSSQ-Transformers baseline (the 58 FPS one).

I wonder how much FPS is gained by running the "Balanced" preset vs. the artifacts/quality degradation. Maybe the latter is near indistinguishable while you gain 10-15 more FPS, which would make framegen work much better.

It would be especially interesting to see for cards like the 5080/5070 Ti, because that might be all that is needed to make framegen usable. As you write, it needs some decent base FPS to begin with to work decently well.
So, I've done some additional testing. I still need to update page six or whatever of the review, but here's some additional data for you. This is still using Full RT (RT-Overdrive), but now I've got DLAA results tossed into the picture. What's key here is that DLAA at 4K gets just over 30 FPS average, with latency of 65 ms. So, first the numbers:

Code:
CP77 FullRT DLAA 4K           AVG:  30.99   99pMIN:  23.6   Latency:  65.4   CPUCLK: 4809.3   GPUCLK: 2584.7   GPUTemp:  77.0   GPUPWR: 588.8   GPU%:  97.5
CP77 FullRT DLSSQ 4K          AVG:  56.97   99pMIN:  38.0   Latency:  41.6   CPUCLK: 4525.7   GPUCLK: 2633.2   GPUTemp:  74.2   GPUPWR: 551.7   GPU%:  94.7
CP77 FullRT DLSSB 4K          AVG:  67.34   99pMIN:  46.5   Latency:  38.2   CPUCLK: 4533.9   GPUCLK: 2656.2   GPUTemp:  73.0   GPUPWR: 523.9   GPU%:  92.4
CP77 FullRT DLSSP 4K          AVG:  79.38   99pMIN:  47.9   Latency:  35.4   CPUCLK: 4530.6   GPUCLK: 2685.4   GPUTemp:  71.6   GPUPWR: 486.9   GPU%:  87.3
CP77 FullRT DLSSUP 4K         AVG:  97.54   99pMIN:  55.9   Latency:  31.4   CPUCLK: 4522.1   GPUCLK: 2758.1   GPUTemp:  64.3   GPUPWR: 371.6   GPU%:  71.7
CP77 FullRT DLAA MFG2X 4K     AVG:  57.15   99pMIN:  43.9   Latency:  77.4   CPUCLK: 4833.6   GPUCLK: 2598.5   GPUTemp:  75.6   GPUPWR: 593.1   GPU%:  96.7
CP77 FullRT DLAA MFG3X 4K     AVG:  84.48   99pMIN:  53.2   Latency:  82.6   CPUCLK: 4865.0   GPUCLK: 2598.2   GPUTemp:  75.9   GPUPWR: 589.1   GPU%:  97.2
CP77 FullRT DLAA MFG4X 4K     AVG: 110.30   99pMIN:  66.5   Latency:  86.3   CPUCLK: 4845.8   GPUCLK: 2596.5   GPUTemp:  76.0   GPUPWR: 583.3   GPU%:  96.7

Now, if you're only looking at performance, note how DLSS Quality upscaling is basically the same FPS (or frames to monitor) as DLAA + Framegen. Both give around an 83% boost to "performance" — in quotes because framegen isn't really performance as such. And look at the input latency. With DLSS Quality, latency dropped from 65 ms down to 42 ms. The game feels very playable at this point. With framegen, latency increases from 65 ms to 77 ms. It's not terrible, but it's also not better.
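
If anyone wants to redo that arithmetic, here's a rough sketch using the averages from the table above (the latency figures are the measured ones, not derived):

Code:
# Rough sketch: relative FPS uplift over the DLAA 4K baseline,
# using the averages from the table above.
baseline = 30.99  # DLAA 4K average FPS
results = {
    "DLSSQ": 56.97, "DLSSB": 67.34, "DLSSP": 79.38, "DLSSUP": 97.54,
    "DLAA+MFG2X": 57.15, "DLAA+MFG3X": 84.48, "DLAA+MFG4X": 110.30,
}
for name, fps in results.items():
    print(f"{name}: {fps / baseline - 1:+.1%}")
# DLSSQ (+83.8%) and DLAA+MFG2X (+84.4%) land in basically the same place,
# but only DLSS Quality also cuts latency (65 ms -> 42 ms).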

DLSS Balanced gives a modest boost to 67 FPS, and latency drops to 38 ms. It feels about the same, honestly, as DLSS Quality. Visually, the new transformers model also looks very good. You can absolutely play this way and I suspect most people wouldn't be able to guess whether they were running native or upscaled based on the visuals. (Performance is a dead giveaway, though.)

DLSS Performance mode upscaling gives 79 FPS with 35 ms latency, and DLAA + MFG3X gives 84 FPS with 83 ms latency. Those framerates are close enough to look equally smooth, but the feel of the game and the responsiveness absolutely favors DLSSP.

Finally, DLSS Ultra Performance 9X upscaling gives 98 FPS and 31 ms latency. DLAA + MFG4X gives 110 FPS and 86 ms latency. Visually, you would think Ultra Performance would look pretty awful, but it's not really that bad. Again, most might not even notice. If you're versed in graphics and upscaling and know what to look for, you can tell there's upscaling enabled, but I'd almost venture to say that DLSS Transformers in Ultra Performance mode at 4K probably looks pretty close to FSR3 Quality mode! And the feel is way better than MFG4X.

I think the DLAA numbers with MFG give a good idea of what we can expect from MFG and DLSS on something like a 5070. If your base FPS (before framegen) is only 30 or so, MFG does look better — whether at 2X, 3X, or 4X — and I'd even say it tends to feel more playable as well. But that's a bit nebulous. As someone else pointed out, our eyes and brains are complex, so even though the input latency is worse, the visual smoothness counteracts that to some degree.

I ran around in Cyberpunk for 10 minutes or so with DLAA + MFG4X, as that's basically the "worst-case" option for the 5090 right now. It was absolutely playable. A bit sluggish feeling, yes, but for me I'd say not as bad as playing with a controller on a console (shots fired!) And because our brains are complex, you do get used to the slightly higher input latency after a minute or so.

The net results of MFG are interesting as well. So we have DLAA with 31 FPS, sampling input every 32 ms. With one frame of latency, that works out to 64 ms and basically matches up perfectly with the benchmarks.

Turn on MFG2X and we get 57 FPS, with input sampling every 35 ms (i.e., rendering at 28.5 FPS). Now there's approximately four frames of latency (relative to the MFG rate), because the GPU renders frames 1 and 3 and generates frame 2, and frame 1 gets sent to the monitor probably around the time that the GPU is rendering frame 4. That seems to be how the math works out. Four frames of latency at 57 FPS would be 70 ms, and the measured value is 77 ms, so that's relatively close — it's 4.5 frames of latency.

MFG3X bumps the framerate to 84 FPS, but input sampling is only happening every third frame. So input sampling happens at 28.16 FPS, or every 35.5 ms. And I think now we end up with about six frames of latency relative to the 84 FPS. So 84.48 FPS would be 11.8 ms per frame, and times six gives 71 ms... which means it's actually seven frames of latency at 82.6 ms.

MFG4X gives 110 FPS, with input sampling at one fourth of that, or 27.6 FPS, which means it now happens every 36.3 ms. That's really not much worse than the MFG3X result. Except, because of the higher generated framerate, we're now getting about 9.5 frames of latency (86.3 ms) relative to the generated 110.3 FPS result.

What does that mean? If you want really low latency with MFG4X, you're probably going to want a generated framerate well above 240 FPS, and a 240Hz or even 360Hz / 480Hz monitor. I think any latency result below about 40 ms is going to be plenty smooth for most gamers. And if you're getting a 300 FPS generated rate, that's a 75 FPS base rate before MFG4X.
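
Here's a minimal sketch of that frames-of-latency arithmetic, purely restating the numbers above (the measured latencies come from the benchmark table):

Code:
# Sketch: MFG input sampling interval and latency expressed in
# generated frames, using the measured averages above.
results = [  # (mode, generated FPS, MFG factor, measured latency in ms)
    ("MFG2X", 57.15, 2, 77.4),
    ("MFG3X", 84.48, 3, 82.6),
    ("MFG4X", 110.30, 4, 86.3),
]
for mode, fps, factor, latency_ms in results:
    base_fps = fps / factor      # rate at which input is actually sampled
    sample_ms = 1000 / base_fps  # ms between input samples
    frame_ms = 1000 / fps        # time per generated frame
    print(f"{mode}: input every {sample_ms:.1f} ms, "
          f"~{latency_ms / frame_ms:.1f} generated frames of latency")
# MFG2X ~4.4 frames, MFG3X ~7.0, MFG4X ~9.5 -- matching the text above.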

But, as I've been saying since framegen came out, it's totally not right to pretend that generated framerates are the same as rendered framerates. They can look similar, but the feel can be very different, and the gap just increases. If you have MFG4X running at 110 FPS, as shown here, it looks way smoother than a game running at 30 FPS, but it feels just as laggy as a game running at something closer to 25 FPS.
 
DLSS Balanced gives a modest boost to 67 FPS, and latency drops to 38 ms. It feels about the same, honestly, as DLSS Quality. Visually, the new transformers model also looks very good. You can absolutely play this way and I suspect most people wouldn't be able to guess whether they were running native or upscaled based on the visuals. (Performance is a dead giveaway, though.)

DLSS Performance mode upscaling gives 79 FPS with 35 ms latency, and DLAA + MFG3X gives 84 FPS with 83 ms latency. Those framerates are close enough to look equally smooth, but the feel of the game and the responsiveness absolutely favors DLSSP.

While I refuse to treat Fake Frames as "FPS" for performance reasons, I'm completely behind upscaling, though it's largely wasted on the 5090. Any way we can get non-RT numbers to get an idea of what regular pure rasterization vs upscaled would look like? The ideal use case for this type of upscaling would be lower-powered cards rendering at higher resolutions than they could otherwise do, like a 5060 or 5070 connected to a 4K HDTV. How would it make "4K gaming" feel to the mid-tier market, who might otherwise not have access to it?
 
But, as I've been saying since framegen came out, it's totally not right to pretend that generated framerates are the same as rendered framerates. They can look similar, but the feel can be very different, and the gap just increases. If you have MFG4X running at 110 FPS, as shown here, it looks way smoother than a game running at 30 FPS, but it feels just as laggy as a game running at something closer to 25 FPS.
Tim from HUB posted an MFG overview video and his conclusion was basically "it's more frame generation". When it works it's great, but when it doesn't, it gets worse at each step you go above 1 additional frame. For a good experience his minimum frame rate advice was in the 70-90 range, which seems right in line with what you've said.

One thing he said stood out to me: that frame generation is like a less-bad motion blur, since the overall goal of each technology is the same. I'm curious about your take on that stance, since it had never even crossed my mind. I haven't used frame generation at all, but I can definitely see the positives for those >200Hz refresh rates on screens above 1080p.
 
Ehh I don't think you bothered to read it ...


As for mouse cursor ... that is your mind seeing what it wants to see. It's like when you give someone a taste test of Coke vs Pepsi and they swear up and down they can tell which one tastes better, only to find out they were the same drink the entire time. I've done this test on refresh rate snobs before. Had them swearing they noticed the faster refresh and smoother mouse on the higher Hz monitor, only for me to show them I had locked it at 60Hz the entire time. Talk about some angry people.

Where refresh rate really matters is the reduced input latency when you are doing something like triple buffering, where your input is a full two to three frames behind. At 16.66ms per frame that is a full 33 to 50ms worth of latency, which is noticeable. Reducing that to 8.33ms makes the input latency 16.66 to 25ms. That is where your "feeling smoother" is coming from. Double buffering, on the other hand, is only one frame, two at most, behind. Also, frame time consistency is very important to how our brains process visual information. A game doing a consistent 16.66ms per frame (60fps) is going to appear "smoother" than a game bouncing between 14.28ms (70fps) and 8.33ms (120fps) per frame. Our visual cortex is designed to notice differences and patterns, then use those patterns to fill in future information.

After that, your visual cortex just blurs the details anyway, unless it's a very sharp contrast change over a short period of time, like black to white and back to black.
👏
 
DLSS Balanced gives a modest boost to 67 FPS, and latency drops to 38 ms. It feels about the same, honestly, as DLSS Quality. Visually, the new transformers model also looks very good. You can absolutely play this way and I suspect most people wouldn't be able to guess whether they were running native or upscaled based on the visuals. (Performance is a dead giveaway, though.)
Thank you for your efforts; this is giving me the answer I was looking for. Without seeing it myself, it feels like the sweet-spot setting for DLSS, where it's almost indistinguishable visually from Quality but ~10% faster, minus a few percent due to the new model's penalty.

This can probably also give an additional year or two of life to 30-series cards; those 7-10% FPS, give or take, might just be enough to keep their heads above an acceptable 60 FPS average with higher detail settings in the more intense titles.

It's frankly a bit counter-productive of Nvidia, subjectively for me, as it makes me consider dragging my 3080 Ti along for another generation until the 60 series.
 

Thank you for your efforts; this is giving me the answer I was looking for. Without seeing it myself, it feels like the sweet-spot setting for DLSS, where it's almost indistinguishable visually from Quality but ~10% faster, minus a few percent due to the new model's penalty.

This can probably also give an additional year or two of life to 30-series cards; those 7-10% FPS, give or take, might just be enough to keep their heads above an acceptable 60 FPS average with higher detail settings in the more intense titles.

It's frankly a bit counter-productive of Nvidia, subjectively for me, as it makes me consider dragging my 3080 Ti along for another generation until the 60 series.
For the 30 series the main issue is VRAM: up through the 3070 Ti it's still 8GB, which even at 1080p is running out of steam. The 3080 and 3090 can definitely survive one more gen.
 
For the 30 series the main issue is VRAM: up through the 3070 Ti it's still 8GB, which even at 1080p is running out of steam. The 3080 and 3090 can definitely survive one more gen.
Well, I was mostly curious about my case with a 3080 Ti. I'm running 1440p ultrawide, so I could use those ~10% just to get over the hump without sacrificing visuals.
 
As well, the features aren't a constant to merit a good/bad designation. They're a moving target, and they will continue to improve. Upscaling is already at the "good enough" point. FG will get to "good enough" at some point, sooner than later.
That's the key point.

In my opinion, 4-6 years from now FrameGen will be as accepted as DLSS is. And it will happen because the FrameGen of 2031 won't be the FrameGen of 2025.

All these growing pains of FG seem to me entirely solvable. For example, I don't see why inputs have to be hard bound to "real" frames and I am sure that eventually there will be a solution to this, where the "fake" frames will be treated as "real" for input.

It may require changes and evolution in game engines, in frame generation and so on - but I don't see why it can't be a thing.

Heck, you have a version of Doom that is entirely AI generated, running without an actual engine, everything generated, and accepting inputs in real time. So it's not some impossible task.
 
>For example, I don't see why inputs have to be hard bound to "real" frames and I am sure that eventually there will be a solution to this, where the "fake" frames will be treated as "real" for input.

Yes, agree. This was the same point I made in an earlier debate re FG. But sadly I wasn't eloquent enough to carry it across--that, or people tend to ignore arguments that don't comport with their existing views.

Regardless, all this "is FG good enough" beeswaxing is more or less a tempest in a teapot. The opinions on these forums aren't representative of--and often don't align with--the mainstream. We're just shooting the breeze here.

One point that's been brought up is that GPU reviews should consider hardware gains only. Again, it's a moving target. A pure HW perf comparison still holds for this gen, but it will be less and less tenable to ignore SW/AI perf gains for future iterations, as they increasingly make up a larger percentage of the overall gain.

This, simply because HW gains are getting harder and harder to achieve, the 5090 being an excellent case in point. People who keep expecting 20-30% HW perf gains at the same power & price levels are only indulging in "back in the day" fantasy.
 
Regardless, all this "is FG good enough" beeswaxing is more or less a tempest in a teapot. The opinions on these forums aren't representative of--and often don't align with--the mainstream. We're just shooting the breeze here.
Yes, and that is also true.

FrameGen viability won't be decided by you or me, or anyone screaming "fake frames" at the top of their lungs.

It will be decided by a random Joe and Jane grabbing that controller, kicking back on the couch, and firing up some console game in 2028, when there will be a new gen of consoles using this. If Joe and Jane feel good about that experience, that is what will seal the deal.

This tech is relatively new and it will still take a few years for it to get better, but in the end there will be optimal use cases for it. All this arguing here doesn't really matter.

Heck, think about this: why do you have to generate a full frame? Maybe you can raster-draw like 20% of the frame - the more important parts - and let AI fill in the remaining 80% every frame. Like DLSS upscaling, except drawing only part of the lower resolution frame to begin with.

I bet that's what they will try next and this might be a solution to the latency.

---

And yes, people need to start getting used to this. The age of huge raster improvements will slowly come to an end; you can only stuff so many stream processors onto a GPU, and it will quickly hit diminishing returns.

Node shrinks become harder and harder to pull off, and costlier too, and as the 5090 shows, you can only stuff so many transistors in there before it becomes simply prohibitive in both price and requirements.

The low-hanging raster fruit of the previous 30 years has almost entirely been picked. A change in approach is required.
 
That's the key point.

In my opinion, 4-6 years from now FrameGen will be as accepted as DLSS is. And it will happen because the FrameGen of 2031 won't be the FrameGen of 2025.

All these growing pains of FG seem to me entirely solvable. For example, I don't see why inputs have to be hard bound to "real" frames and I am sure that eventually there will be a solution to this, where the "fake" frames will be treated as "real" for input.

It may require changes and evolution in game engines, in frame generation and so on - but I don't see why it can't be a thing.

Heck, you have a version of Doom that is entirely AI generated, running without an actual engine, everything generated, and accepting inputs in real time. So it's not some impossible task.

Have you seen videos of that in action? It's slow, to be expected, but it hallucinates. Badly. Objects change into other things if you look away and back to them. I remember an imp fireball coming out of nowhere. It can't count ammo correctly at times. We're a ways off.
 
Have you seen videos of that in action? It's slow, to be expected, but it hallucinates. Badly. Objects change into other things if you look away and back to them. I remember an imp fireball coming out of nowhere. It can't count ammo correctly at times. We're a ways off.
It was also made by a few people as a side tech demo.

But the point here is not that we're going to have fully AI-made games without an engine, but that player inputs do not really need "real" frames.

I imagine in a matter of a few years there will be engines and tech to allow input on AI-generated frames during MFG - it is only a matter of time.

Or take DLSS upscaling and move it forward - render only 50% of the original lower resolution image and let AI upscale and fill in the blanks. Or make it even better - render the parts of the image that usually are not handled well by AI upscaling/generation and fill in the rest with AI.

They can and will do a lot of mixed rendering like that. They mentioned that your good ol' shaders in Blackwell are now capable of being directly programmed with AI features - they called it Neural Shaders. You can bet they will explore all kinds of tricks and optimizations like the above. You can literally count on it.

It will take a few years, but it is where it's going.
 
It was also made by a few people as a side tech demo.

But the point here is not that we're going to have fully AI-made games without an engine, but that player inputs do not really need "real" frames.

I imagine in a matter of a few years there will be engines and tech to allow input on AI-generated frames during MFG - it is only a matter of time.

Or take DLSS upscaling and move it forward - render only 50% of the original lower resolution image and let AI upscale and fill in the blanks. Or make it even better - render the parts of the image that usually are not handled well by AI upscaling/generation and fill in the rest with AI.

They can and will do a lot of mixed rendering like that. They mentioned that your good ol' shaders in Blackwell are now capable of being directly programmed with AI features - they called it Neural Shaders. You can bet they will explore all kinds of tricks and optimizations like the above. You can literally count on it.

It will take a few years, but it is where it's going.
I doubt it can be nearly as good as a real frame. Things like ghosting on the rolling numbers of a flight display are still unsolved in DLSS4. Personally, my experience is that in single-player adventure games like Starfield or Cyberpunk the artefacts are less offensive or even unnoticeable, but when there are small details you focus on during gameplay, like the FPS aim dot or flight sim instruments, the ghosting or misinterpretation by the AI frame gen is annoying as heck. And to be fair, I don't believe AI could get to the point where it can predict that you aim left and then quickly aim back to the right; never mind AI, even a real teammate won't be able to predict what you will do a fraction of a second later.
 
I doubt it can be nearly as good as a real frame. Things like ghosting on the rolling numbers of a flight display are still unsolved in DLSS4. Personally, my experience is that in single-player adventure games like Starfield or Cyberpunk the artefacts are less offensive or even unnoticeable, but when there are small details you focus on during gameplay, like the FPS aim dot or flight sim instruments, the ghosting or misinterpretation by the AI frame gen is annoying as heck. And to be fair, I don't believe AI could get to the point where it can predict that you aim left and then quickly aim back to the right; never mind AI, even a real teammate won't be able to predict what you will do a fraction of a second later.
There is a critical mass at which it simply does not matter anymore. You literally say it yourself.

And this is hardly the end of the road; after DLSS4 there will be DLSS5, DLSS7, and DLSS11. Eventually it will get to the level where you need to be some sort of games "sommelier" to even detect, let alone care about, whatever artifacts there may be.

In the end it will come down to "I get 30 more average FPS and lower latency, but once an hour there is this small visual element that might shimmer at a specific angle" - and the answer to whether people will go for that trade will be yes for 99% of gamers.

Take the difference between DLSS1 and DLSS4: in 6 years you went from an obvious hack-job mess to "the artefacts are less offensive or even unnoticeable".

Give it another 6 years and you won't even know it's there - it is a process, and a very similar process to what we had with good ol' raster, which went through a lot of evolutions and optimizations before it got to where it is now.

The same will happen with FrameGen, or even partial FrameGen, which I bet will be the next thing we see, because it just makes sense. Do you really need to render that grass or those skyboxes when you can just let AI draw them for 1/10 of the effort, freeing up the shaders to work on the more important models?

Things that are less important will be offloaded to the AI within a mostly raster-rendered frame; you can quote me on that a few years from now.
 
There is a critical mass at which it simply does not matter anymore. You literally say it yourself.

And this is hardly the end of the road; after DLSS4 there will be DLSS5, DLSS7, and DLSS11. Eventually it will get to the level where you need to be some sort of games "sommelier" to even detect, let alone care about, whatever artifacts there may be.

In the end it will come down to "I get 30 more average FPS and lower latency, but once an hour there is this small visual element that might shimmer at a specific angle" - and the answer to whether people will go for that trade will be yes for 99% of gamers.

Take the difference between DLSS1 and DLSS4: in 6 years you went from an obvious hack-job mess to "the artefacts are less offensive or even unnoticeable".

Give it another 6 years and you won't even know it's there - it is a process, and a very similar process to what we had with good ol' raster, which went through a lot of evolutions and optimizations before it got to where it is now.

The same will happen with FrameGen, or even partial FrameGen, which I bet will be the next thing we see, because it just makes sense. Do you really need to render that grass or those skyboxes when you can just let AI draw them for 1/10 of the effort, freeing up the shaders to work on the more important models?

Things that are less important will be offloaded to the AI within a mostly raster-rendered frame; you can quote me on that a few years from now.
Don't agree on that, as:
1) There are things that you NEED to focus on in a game, e.g. the aim dot, or the critical flight displays in a flight sim, where the number ghosting makes the "smooth" frames completely meaningless.

2) For the "unimportant" stuff, even 20 years ago devs did it the same way: make the textures a ton smaller and use lower-quality textures for the grass and leaves, or just turn them into a blurred animation where the leaves swing.

DLSS isn't really making games look better at higher FPS; it essentially just does the "optimisation" in real time, which should be done by the devs in composing the game, not by making every texture 8K and then letting frame gen do the optimisation with whatever artefacts result. I still remember being blown away by Metal Gear Solid 3, and subsequently by IV's and V's graphics and how immersive the experience was, and those were on PS3 and PS4, where the hardware is nowhere near modern PCs. But do graphics blow me away more nowadays? A few games with UE5 did, but not all, and more importantly, the glitches even in DLSS4 just break the experience.

Most people will choose to use DLSS FG not because it gives more frames with minimal artefacts, but because in recent titles it's basically a must just to run without stuttering... unless you put a 4090 on 1080p. For pure upscaling I'm pretty accepting that it's as good as it can get now, but ask how many people would turn off FG, or even upscaling, if the raw performance could give them a 60 FPS minimum.