Nvidia GeForce RTX 5080 Founders Edition review: Incremental gains over the previous generation

In Greece, I have seen members of a certain tech forum placing several 5090 orders at a well-known retail store.

I'm very surprised: not only by how eagerly they pre-paid the €3,500 (roughly $3,500) scalper price, but also by their confidence that they'll actually receive those cards any time soon.

Seeing how many US stores have already run dry of 5090s, one must be very desperate (or naive) to think that an order placed in Greece will actually be delivered in time.
I mean, I'm not from Greece, but here's the input of someone who bought a $3,600 5090 Astral.

The base Asus price for that GPU is $2,800; add shipping costs and VAT on top of that and you're already at $3,400, so the margins are pretty thin if you ask me, unless Asus sells it to them at a discount. This comes with a proper local official Asus warranty.

And because of how scarce the 5090 is, the only other alternative available was the Gigabyte Aorus Master at $3,850. And these are prices from official retailers here, not scalpers. I guess this retailer wanted to make a killing off however few of them they have.

So yeah, I went for the Astral, because there is little other 5090 choice here. It's okay, I'm earning enough to afford myself a gift for surviving 40 years on this planet.


There is no miracle solution to this shortage; it will be what it is for months, and even after that, from my personal experience, cards like this don't really drop in value for at least two years or so.

I would not be surprised if we have a total of fewer than 10 units for the whole country.
 
Not just MSRP models. Every 5090 is out of stock, everywhere, as far as I can tell. Which was basically expected. I'm still surprised by how much people are willing to pay, but then again, paying up to $4000 for a 5090 that's significantly better in just about every way than an RTX 5000 Ada Generation professional GPU might make sense for some places.

It's about supply and demand. With such a lopsided curve, only those with quite a bit of disposable income can afford the 5090 and 5080s: people to whom that money really doesn't mean much.
 
  • Like
Reactions: TeamRed2024
Sure, go find that magical GPU that will render 60 FPS native in a situation where the 5080 renders 15 FPS.

I think a lot of people here are hella disingenuous. Yes, when you turn on Framegen with base 15FPS, nothing good will come of it, but you were fudged to begin with anyway.

Framegen is great when you want to bring 40-50FPS to 100+ FPS. Yes, the input would still mostly be as if it's 40-50FPS, but at the very least the visuals you will see will be smooth. Yes, it won't be "real" 100+ FPS, but it will be better than 40-50FPS.

And it is even better when you want to bring 100 FPS to 200+ and max out your monitor refresh rate. At that base FPS, input latency will no longer matter unless you're like legit top 1% CSGO champ or something.

I think it's shortsighted to dismiss this tech - it has use cases. Yes, its best use case is making something that's already good - better, but hey, that is also nice to have as an option, isn't it?
You completely missed the point and the real implication of what I've said. If the displayed "60fps" of the fake frames is observably different from the 60fps of real rendered frames, then it's obviously not 60fps, just inflated fake numbers in a counter. At that point, saying 50 goes to 100 is also a lie, and 100 to 200 is also a lie. How are you going to fill your monitor's capacity with lies?
 
You completely missed the point and the real implication of what I've said. If the displayed "60fps" of the fake frames is observably different from the 60fps of real rendered frames, then it's obviously not 60fps, just inflated fake numbers in a counter. At that point, saying 50 goes to 100 is also a lie, and 100 to 200 is also a lie. How are you going to fill your monitor's capacity with lies?
Again, the point is that in a reasonable use case for the tech - the total experience will be better.

As I said, Framegen won't help if your starting point is 15FPS base - that's a RIP. But if it's 40-50FPS base - it will increase the image fluidity, if not the responsiveness.

Yes, it will not pass the purity test of "real" frames, but the experience will be better as opposed to chugging it at 40-50FPS base.

I don't think it needs to be turned into this weird all or nothing zero-sum game. Yes, Nvidia bad, they just stole $3600 USD from me, their marketing is deceptive AF, but there are quite a few reasonable use cases and advantages to Framegen that make the end user experience better.

Now, we can go in circles about it for 5 more pages, but in the end, it is what it is - saying that Framegen does nothing good at all is about as much of a hot take, in my opinion, as saying that 5070 is equal to 4090.
 
  • Like
Reactions: JarredWaltonGPU
But if it's 40-50FPS base - it will increase the image fluidity, if not the responsiveness.

Yes, it will not pass the purity test of "real" frames, but the experience will be better as opposed to chugging it at 40-50FPS base.
Except it will make the input latency worse while it improves the visual frame rate. So once again it comes down to the individual and the sensitivity to latency.

In your hypothetical, someone who's okay with 40-50 would need closer to 50-60 to keep the same input latency.

I can tell pretty much immediately when frame rates drop below 60 on anything with real time movement. With the added input latency from frame generation that would now mean my minimum becomes around 72 fps instead of 60 for the same experience.

None of this is to say the technology is bad or unimportant just that there's so much nuance and subjectivity involved there is no simple recommendation. About the only thing one can say with certainty about frame generation is that it shouldn't be used for competitive multiplayer games.
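To put rough numbers on that trade-off, here's a back-of-the-envelope sketch. The exact pipeline costs aren't public, so the half-frame buffering delay and the small fixed generation cost in this toy model are assumptions for illustration, not measured values:

```python
# Toy latency model for 2x frame interpolation (illustrative assumptions, not measurements).
# Interpolation has to hold back the newest rendered frame until the in-between frame is
# shown, so assume roughly half a rendered-frame time of extra delay plus a small fixed
# generation cost per frame.

def framegen_estimate(base_fps: float, gen_cost_ms: float = 3.0) -> tuple[float, float]:
    """Return (displayed_fps, added_latency_ms) for 2x interpolation at a given base FPS."""
    frame_time_ms = 1000.0 / base_fps
    added_latency_ms = frame_time_ms / 2 + gen_cost_ms   # assumed buffering + generation cost
    displayed_fps = base_fps * 2                          # one generated frame per real frame
    return displayed_fps, added_latency_ms

for base in (15, 45, 60, 100):
    shown, extra = framegen_estimate(base)
    print(f"{base:>3} FPS base -> ~{shown:.0f} FPS shown, ~{extra:.0f} ms extra input latency")
```

The point of the toy model is just that the extra latency shrinks as the base frame rate rises, which is why the same feature feels very different at a 15 FPS base versus a 60+ FPS base.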
 
  • Like
Reactions: Peksha
Except it will make the input latency worse while it improves the visual frame rate. So once again it comes down to the individual and the sensitivity to latency.

In your hypothetical, someone who's okay with 40-50 would need closer to 50-60 to keep the same input latency.

I can tell pretty much immediately when frame rates drop below 60 on anything with real time movement. With the added input latency from frame generation that would now mean my minimum becomes around 72 fps instead of 60 for the same experience.

None of this is to say the technology is bad or unimportant just that there's so much nuance and subjectivity involved there is no simple recommendation. About the only thing one can say with certainty about frame generation is that it shouldn't be used for competitive multiplayer games.
The difference in input latency is negligible the higher the base FPS is. I think everyone here is in agreement that Framegen needs at least somewhat reasonable FPS as a starting point.

We can discuss how you or I "feel" about that extra ~10 ms of latency all day long, but in the end it's a basic what-you-get vs. what-you-give.

In the end, I personally would much rather have this option than not, because going from 60-70 FPS to 140+ FPS is nice, at least as far as image smoothness goes, as long as Framegen is working properly with the title in question.

Other than that, just like DLSS, this tech will get better. I don't think it's wild to assume that five years from now, Framegen in games will be as much of a baseline, no-brainer feature as DLSS is, and I bet there will be plenty of new tricks to make these frames count as "real" frames for input purposes.
 
  • Like
Reactions: JarredWaltonGPU
Another take on the 5090: you could buy a 5080 now and a 6080 in two years, and the likelihood is the 5090 would still be the better buy with a longer lifespan. So in essence, for your $2,000-plus you're still buying a better card; in four years, maybe even longer, the 7080 may just catch up to the 5090.

Had this convo with my GF: while I was against the excessive $2,000 price, it could save money in the long run given the terrible incremental uplifts of the 80 series of late.

I've spent 4,000 AUD in the last four years on the 6900 XT and the 7900 XTX, so by default, if I bought a 5080 this gen I'm over 6,000 AUD.
 
Sure, go find that magical GPU that will render 60 FPS native in a situation where the 5080 renders 15 FPS.

I think a lot of people here are hella disingenuous. Yes, when you turn on Framegen with base 15FPS, nothing good will come of it, but you were fudged to begin with anyway.

Framegen is great when you want to bring 40-50FPS to 100+ FPS. Yes, the input would still mostly be as if it's 40-50FPS, but at the very least the visuals you will see will be smooth. Yes, it won't be "real" 100+ FPS, but it will be better than 40-50FPS.

And it is even better when you want to bring 100 FPS to 200+ and max out your monitor refresh rate. At that base FPS, input latency will no longer matter unless you're like legit top 1% CSGO champ or something.

I think it's shortsighted to dismiss this tech - it has use cases. Yes, its best use case is making something that's already good - better, but hey, that is also nice to have as an option, isn't it?
The first FG wasn't received as poorly, though there was plenty of ghosting and artifacting to begin with. But now, 4x FG? It's not making the good better; it tries to make the unplayable look smooth, as long as you move linearly, which is almost never the case. Now slap a scalper-like price on that and add the real scalpers on top.
 
  • Like
Reactions: palladin9479
A couple of takeaways from the FrameGen discussion I've gathered:

1- If you just want more frames or a smoother experience, it's a valid alternative (I agree).
2- If you can't deal with artifacts, it's not a solution for you, and saying "it'll work better in 5 years" is not an answer; think of the RTX 20 series and how those cards perform now with RT. If the tech doesn't work for me today, I'm not going to buy it for "tomorrow" in case it works. Plus, it looks like nVidia is very keen on screwing previous-generation owners, since FG is being pushed to the latest cards and not being backported. There's also that. Zero incentive to buy into the tech today considering nVidia's MO here.
3- They are "fake" frames. The way we define (or at least I do) a frame is the result of the graphics engine calculating a scene, and since these are images interpolated from 2 calculated frames, they are fake for the intended purposes of the engine's calculation. More importantly: the generation does not take user input, which is the most important element. This is a similar discussion to "is bottled lemon juice actual lemon juice?".
4- The biggest problem everyone has with the tech is not so much how it works or what it is, but how nVidia is using it to justify "performance" increases. That's a big no-no which, I hope everyone agrees, can't be done until the tech works indistinguishably from a "properly" rendered frame.

If I'm missing something, just add it.

Regards.
 
A couple of takeaways from the FrameGen discussion I've gathered:

1- If you just want more frames or a smoother experience, it's a valid alternative (I agree).
2- If you can't deal with artifacts, it's not a solution for you, and saying "it'll work better in 5 years" is not an answer; think of the RTX 20 series and how those cards perform now with RT. If the tech doesn't work for me today, I'm not going to buy it for "tomorrow" in case it works. Plus, it looks like nVidia is very keen on screwing previous-generation owners, since FG is being pushed to the latest cards and not being backported. There's also that. Zero incentive to buy into the tech today considering nVidia's MO here.
3- They are "fake" frames. The way we define (or at least I do) a frame is the result of the graphics engine calculating a scene, and since these are images interpolated from 2 calculated frames, they are fake for the intended purposes of the engine's calculation. More importantly: the generation does not take user input, which is the most important element. This is a similar discussion to "is bottled lemon juice actual lemon juice?".
4- The biggest problem everyone has with the tech is not so much how it works or what it is, but how nVidia is using it to justify "performance" increases. That's a big no-no which, I hope everyone agrees, can't be done until the tech works indistinguishably from a "properly" rendered frame.

If I'm missing something, just add it.

Regards.
The 5 years remark was to put emphasis on where we're headed. I agree that as of now Framegen is limited in usefulness; it's not like DLSS, where you can just no-brain slam it to Quality mode and the result will almost always be free FPS with little to no perceived impact.

And that's why Blackwell feels like Turing to me, where the new tech is a good direction, but still in its infancy.

My issue with this whole "fake" frames meme is literally "what does this even mean", given that whatever pixels you see slammed into your eyeballs are already fake: a manipulative 2D image presented in a way that gaslights your brain into perceiving depth that does not exist.

There are plenty of rendering techniques and tricks that are already "faking" it for the sake of performance, whether it's various AA techniques, filtering levels, motion blur, LOD, or ambient occlusion: plenty of tricks that replace, degrade, or discard drawn objects in the image, used by default in modern game rendering. So, are these culled and modified frames now "fake"?

Are frames that are scaled up by DLSS "fake"? After all, there is literally new information being added to the lower-resolution rendered frame on the fly.

In my opinion, the only real issue with Framegen so far is that user input is not polled during that frame. That sounds to me like an entirely solvable problem the moment game engines and the tech are more tightly integrated on that front, which I suspect will happen when the next-gen consoles pop up in a few years.
 
The 5 years remark was to put emphasis on where we're headed. I agree that as of now Framegen is limited in usefulness; it's not like DLSS, where you can just no-brain slam it to Quality mode and the result will almost always be free FPS with little to no perceived impact.
We're being forced down a path I'm not happy with, but you're not wrong. As long as nVidia is calling the shots technology- and technique-wise, we're screwed.

And that's why Blackwell feels like Turing to me, where the new tech is a good direction, but still in its infancy.
Turing has aged like butt, if you ask me.

My issue with this whole "fake" frames meme is literally "what does this even mean", given that whatever pixels you see slammed into your eyeballs are already fake: a manipulative 2D image presented in a way that gaslights your brain into perceiving depth that does not exist.

There are plenty of rendering techniques and tricks that are already "faking" it for the sake of performance, whether it's various AA techniques, filtering levels, motion blur, LOD, or ambient occlusion: plenty of tricks that replace, degrade, or discard drawn objects in the image, used by default in modern game rendering. So, are these culled and modified frames now "fake"?
False dichotomy: faking portions of a scene within the engine's calculation phase, or as part of the overall generation of the frame itself, is not the same as interpolating 2 completed frames without any engine input (correct me if I'm wrong here, but the engine is doing nothing in the interpolation). It's the same as the difference between FXAA and "proper" antialiasing like SSAA. The visual differences are super noticeable and, at least to me, the artifacting generated by the cheaper but crappier techniques is a non-starter for many. That is the whole deal about the "fake" frames we all complain about: the reduction in graphical quality is just not worth it.

To go back to the FXAA argument: when it released, many were against it and, I'm pretty sure, anyone will agree that it just sucks as an AA technique, even if it's dirt cheap. MSAA, SSAA, and even TAA (when correctly implemented) are far superior in quality to FXAA, if that is what you want without "side effects". The pursuit of "moar frames" should not come at the expense of quality. That's all.

All in all, it'll be "fake" as long as the developers or artists have zero input on the generated frames. It boils down to that.

Are frames that are scaled up by DLSS "fake"? After all, there is literally new information being added to the lower-resolution rendered frame on the fly.
No, because you're just upscaling the frame and not adding or removing anything from the frame itself. Any artifacting would be tied to the algorithm used for the upscaling itself. More importantly: it keeps the input data of the frame intact, as well as all overlay information.

In my opinion, the only real issue with Framegen so far is that user input is not polled during that frame. That sounds to me like an entirely solvable problem the moment game engines and the tech are more tightly integrated on that front, which I suspect will happen when the next-gen consoles pop up in a few years.
By nature of the technique, it can't be solved: the engine does not do the interpolation, so UI information and/or user input is not part of the calculation for the in-between frames. If nVidia/AMD manages to put the interpolation closer to the engine and before the UI phase, then it would work closer to how the upscaling element does, and it would work better, for sure. Can it be done? Not sure. These are incredibly wide brush strokes, but it's not too far off the mark.

Regards.
 
False dichotomy: faking portions of a scene within the engine's calculation phase, or as part of the overall generation of the frame itself, is not the same as interpolating 2 completed frames without any engine input (correct me if I'm wrong here, but the engine is doing nothing in the interpolation).
Engine motion and velocity data is taken into account in the AI-generated frame.

For example, if your player character gets hit and starts flinching during an MFG frame, that will be reflected.

It's not just a simple "keep drawing in a straight line" extrapolation; whatever motion change happens in the engine during those frames is taken into account.

The limitations are, as we know, user input, and the fact that completely new objects that aren't present in either the current or the next rendered frame can't be generated, as far as I know.

By nature of the technique, it can't be solved: the engine does not do the interpolation, so UI information and/or user input is not part of the calculation for the in-between frames. If nVidia/AMD manages to put the interpolation closer to the engine and before the UI phase, then it would work closer to how the upscaling element does, and it would work better, for sure. Can it be done? Not sure. These are incredibly wide brush strokes, but it's not too far off the mark.
I mean, you sort of start with "it can't be solved" and then immediately propose a solution.

But what if the engine does take all that into account?

There is some strange assumption that this is something that won't ever be present in game engines, especially given that, as I said above, part of it already exists for frame generation's sake.

In other words, there is no reason why game engines won't be improved to take user input into account during frame generation, especially given that everyone and their mother will eventually be using this tech for the upcoming UDNA consoles.
 
  • Like
Reactions: JarredWaltonGPU
In my opinion, the only real issue with Framegen so far is that user input is not polled during that frame. That sounds to me like entirely solvable problem, the moment game engines and the tech will be more tightly integrated on that front, which I suspect will happen when next gen consoles will pop up in a few years.
That's pretty wishful thinking, and you don't get why people hate the fake frames.

For all the other techniques you list as examples, the frame is still, for the most part, actually rendered, just with selected details not drawn or blurred, or with interpolation techniques applied within the rendering. The engine is still "drawing" a frame and the flow of the work goes on as usual, so there is no extra latency penalty, user input stays responsive, and the gameplay is smooth; the resulting in-game artifacts due to rendering logic can be fixed over time, and thus upscaling techniques are generally better accepted.

While FG is completely interpolating what the AI thinks will happen next to put into a frame, it completely ignores the whole system and the other rendering cores and does its own guesswork. Since the fake frame involves 0% rendering, it can only smooth out the linear stuttering of low-FPS gaming to trick our brains into thinking it's smooth. While it's a great tool for taking an already-high base FPS (60+ at the 0.1% lows) up to the monitor's native refresh rate, it isn't, and never will be, able to predict anything dynamic, whether driven by the AI or by a human, even the player himself. Ever wonder why all those FG demos use slow panning scenes and not real-time FPS gameplay? Because that could never be well predicted, no matter how many years pass or how many billions of GPUs run the DLSS engine.

And don't get me wrong, the whole idea of FG (single FG at least) is a plus for the feature set, and given good basic rasterization, I would personally prefer a card with FG capability over one without. But now it is the sole tech Nvidia is leaning on to get away with selling the cards at the price of a used car or a great camera lens, and this is the issue that makes people upset.

And truth be told, when you go back to 2015-era games where FG and DLSS weren't a thing, more often than not those pre-RT rendered water puddles, waves, etc. look way better than real-time RT with FG or DLSS in modern titles. You will hardly notice the small changes in puddle reflection angles with RT, but the FG artifacts will always be more annoying to the eye than the raster-rendered, pre-ray-traced scenes.
 
While FG is completely interpolating what the AI thinks will happen next to put into a frame, it completely ignores the whole system and the other rendering cores and does its own guesswork. Since the fake frame involves 0% rendering, it can only smooth out the linear stuttering of low-FPS gaming to trick our brains into thinking it's smooth. While it's a great tool for taking an already-high base FPS (60+ at the 0.1% lows) up to the monitor's native refresh rate, it isn't, and never will be, able to predict anything dynamic, whether driven by the AI or by a human, even the player himself. Ever wonder why all those FG demos use slow panning scenes and not real-time FPS gameplay? Because that could never be well predicted, no matter how many years pass or how many billions of GPUs run the DLSS engine.
I mean that is literally incorrect.

Framegen literally takes into account engine motion data, as I explained in the previous post. It's not "thinking" about what will happen; it "knows" what will happen during these frames, because the engine "tells" it what will happen, just like it tells a good ol' raster-rendered frame.

What it "thinks" is how what it "knows" will appear. That's the weakness that leads to the artifacts.

The limitation is player input, but not what actually happens ingame.

That's pretty wishful thinking and don't get the point of why ppl hate the fake frames.
I mean, if you want me to be frank - a lot of it is the faux outrage over big bad AI and negativity being farmed.

Many of the Framegen issues are real, but many are also made up on the spot like that whole essay about how framegen does not take into account engine motion state.

In other words, much of this hate of "fake frames" is more subjective feelings than objective issues.
 
  • Like
Reactions: JarredWaltonGPU
I mean that is literally incorrect.

Framegen literally takes into account engine motion data, as I explained in the previous post. It's not "thinking" about what will happen; it "knows" what will happen during these frames, because the engine "tells" it what will happen, just like it tells a good ol' raster-rendered frame.

What it "thinks" is how what it "knows" will appear. That's the weakness that leads to the artifacts.

The limitation is player input, but not what actually happens ingame.


I mean, if you want me to be frank - a lot of it is the faux outrage over big bad AI and negativity being farmed.

Many of the Framegen issues are real, but many are also made up on the spot like that whole essay about how framegen does not take into account engine motion state.

In other words, much of this hate of "fake frames" is more subjective feelings than objective issues.
If we're talking projection or extrapolation (which framegen isn't doing), there's more potential to be wrong. But for current framegen, the engine literally knows everything about what changed from frame 1 to 2. It creates motion vectors and feeds those to the framegen engine, which then interpolates a frame, or two or three frames with MFG. It's pretty much "solved" now, and the only real issue is massive changes between frames (e.g. a camera angle switch).

But extrapolation with user input seems like it's not too hard to get 99% right in the near future. Disoccluded objects will require in-painting, but if you're running at 50+ FPS it should be relatively small amounts of in-painting required. And if there's a camera angle change? I'd just project that intermediate frame as if there wasn't a change yet, be wrong for that one frame, and let the next rendered frame correct things. If the hardware is extrapolating at 100 FPS or more, that's less than a 10 ms error, and the camera angle swap would still be "jarring" even with normal rendering.
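To make that concrete, here's a toy sketch of what motion-vector-driven interpolation looks like in principle. This is not Nvidia's actual frame generation (which uses hardware optical flow plus an AI model and in-painting); it just warps the previous frame halfway along engine-supplied per-pixel motion vectors, which is enough to show why the engine data matters and why disocclusion is the hard part:

```python
import numpy as np

def interpolate_midframe(prev_frame: np.ndarray, motion_px: np.ndarray) -> np.ndarray:
    """Toy 2x interpolation: warp prev_frame halfway along per-pixel motion vectors.

    prev_frame: (H, W, 3) image
    motion_px:  (H, W, 2) motion in pixels (x, y) that each pixel will have moved by
                the next frame, as a game engine might report it.
    Real framegen also blends the next rendered frame, runs an AI model, and
    in-paints disoccluded regions; none of that is modeled here.
    """
    h, w = prev_frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # For each output pixel, sample the previous frame half a motion vector "back".
    src_x = np.clip(xs - motion_px[..., 0] * 0.5, 0, w - 1).round().astype(int)
    src_y = np.clip(ys - motion_px[..., 1] * 0.5, 0, h - 1).round().astype(int)
    return prev_frame[src_y, src_x]

# Tiny demo: a bright square that the engine says moves 8 px right per frame
# shows up roughly 4 px along in the generated mid frame.
frame = np.zeros((64, 64, 3), dtype=np.uint8)
frame[28:36, 8:16] = 255
motion = np.zeros((64, 64, 2), dtype=np.float32)
motion[28:36, 8:16] = (8.0, 0.0)
mid = interpolate_midframe(frame, motion)
print(mid[..., 0].nonzero()[1].min())  # leftmost lit column shifts from 8 to 12
```

Note that the crude warp only moves pixels where the old frame had motion data, so part of the square goes missing at its new position: a simplified version of the hole-filling/disocclusion problem that real framegen has to handle with in-painting.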
 
  • Like
Reactions: Gaidax
I mean that is literally incorrect.

Framegen literally takes into account engine motion data, as I explained in the previous post. It's not "thinking" about what will happen; it "knows" what will happen during these frames, because the engine "tells" it what will happen, just like it tells a good ol' raster-rendered frame.

What it "thinks" is how what it "knows" will appear. That's the weakness that leads to the artifacts.

The limitation is player input, but not what actually happens ingame.


I mean, if you want me to be frank - a lot of it is the faux outrage over big bad AI and negativity being farmed.

Many of the Framegen issues are real, but many are also made up on the spot like that whole essay about how framegen does not take into account engine motion state.

In other words, much of this hate of "fake frames" is more subjective feelings than objective issues.

Well, I missed that post, but to be fair, the way FG is being positioned now isn't as a way to take 60+ FPS higher; they are literally taking the 15-20 FPS range and trying to interpolate it into something that looks smooth. The FPS argument still holds here: 15 FPS is 15 FPS, and it literally stutters without frame gen. Say you try to aim from left to right: you have to wait up to 1/15 of a second for the next frame to come out, then interpolate what's in between, and buffer that, adding a few more milliseconds before it's spit out as smooth-ish frames, so you're adding 50+ ms to the game. Only by then can you react, and once you do react to those first two frames and all the interpolated frames in between, it has to wait for the next two rendered frames before it can interpolate again. So it will essentially never work well with a low base FPS, and if the whole tech evolves around FG and similar AI tricks without the base raster frame rate being massively improved, it's kind of a dead end for any action game.
 
Well, I missed that post, but to be fair, the way FG is being positioned now isn't as a way to take 60+ FPS higher; they are literally taking the 15-20 FPS range and trying to interpolate it into something that looks smooth. The FPS argument still holds here: 15 FPS is 15 FPS, and it literally stutters without frame gen. Say you try to aim from left to right: you have to wait up to 1/15 of a second for the next frame to come out, then interpolate what's in between, and buffer that, adding a few more milliseconds before it's spit out as smooth-ish frames, so you're adding 50+ ms to the game. Only by then can you react, and once you do react to those first two frames and all the interpolated frames in between, it has to wait for the next two rendered frames before it can interpolate again. So it will essentially never work well with a low base FPS, and if the whole tech evolves around FG and similar AI tricks without the base raster frame rate being massively improved, it's kind of a dead end for any action game.
I mean, okay?

I think nobody ever argued here that Framegen is a solution to the inherently unplayable FPS.

We know what Framegen's weaknesses are; I just find it disingenuous when people dismiss the whole tech by constantly bringing up the absolute worst-case scenarios for it, while ignoring the genuinely decent or even good use cases.

In other words, yes, we know if you slap framegen on 15FPS base, you won't be going anywhere. The talk here is what happens when you use it with 40-50 FPS base, or with 80 FPS base and so on, at which point there can be a legitimate experience improvement, even if it won't be AS good as native triple-digit FPS.
 
  • Like
Reactions: JarredWaltonGPU
I mean, okay?

I think nobody ever argued here that Framegen is a solution to the inherently unplayable FPS.

We know what Framegen's weaknesses are; I just find it disingenuous when people dismiss the whole tech by constantly bringing up the absolute worst-case scenarios for it, while ignoring the genuinely decent or even good use cases.

In other words, yes, we know if you slap framegen on 15FPS base, you won't be going anywhere. The talk here is what happens when you use it with 40-50 FPS base, or with 80 FPS base and so on, at which point there can be a legitimate experience improvement, even if it won't be AS good as native triple-digit FPS.
I have no opposition to that; I brought it up because of how I see Nvidia positioning DLSS 4 MFG as the holy grail that gets you great FPS and lets a 5070 be a 4090.

As I said in part of the previous post, single frame gen taking 40 FPS up to at least 80 is good, but going from 40 to 120 or even 160 is BS and makes no sense at all. The only usefulness of FG is going from good to great, or from mediocre to acceptable, but not from horrible to okay, so 4x FG is meaningless and single FG is kinda okay as it matures. But all that aside, we will need raster to improve, and game devs to really optimize their games, not rely on the magic button of FG.
 
I have no opposition to that; I brought it up because of how I see Nvidia positioning DLSS 4 MFG as the holy grail that gets you great FPS and lets a 5070 be a 4090.

As I said in part of the previous post, single frame gen taking 40 FPS up to at least 80 is good, but going from 40 to 120 or even 160 is BS and makes no sense at all. The only usefulness of FG is going from good to great, or from mediocre to acceptable, but not from horrible to okay, so 4x FG is meaningless and single FG is kinda okay as it matures. But all that aside, we will need raster to improve, and game devs to really optimize their games, not rely on the magic button of FG.
I mean, if you ask me MFG is very meaningful, because 40 to 80 is obviously the biggest deal, but 80 to 120 or 160 on top of that is still yet another improvement to the image smoothness. Might not be as dramatic as 40 to 80, but it's still there - people do like them 240hz monitors after all, so why not get the output close to that if we can.

I do think that MFG needs to be better investigated, so we have an idea of the sweet spot setting for it at least in the higher profile titles. But in the end, it's a subjective choice for every person - it is good that there is an option to choose though.
 
  • Like
Reactions: JarredWaltonGPU
I mean, if you ask me MFG is very meaningful, because 40 to 80 is obviously the biggest deal, but 80 to 120 or 160 on top of that is still yet another improvement to the image smoothness. Might not be as dramatic as 40 to 80, but it's still there - people do like them 240hz monitors after all, so why not get the output close to that if we can.

I do think that MFG needs to be better investigated, so we have an idea of the sweet spot setting for it at least in the higher profile titles. But in the end, it's a subjective choice for every person - it is good that there is an option to choose though.
I'd argue it's all snake oil at that point. I've yet to see someone actually able to distinguish a steady 60+ FPS (I mean the dead lowest) from 120. I've run a few trials on competitive FPS shooter gamers who swear by their 240 Hz monitors: I run the game at low quality to ensure the lowest FPS stays above 60, while capping the refresh rate to 60 without telling them. In controlled tests where they thought the cap had been removed, none of them ever got a hit rate above 60%. It's much like a mind trick.
 
I'd argue it's all snake oil at that point. I've yet to see someone actually able to distinguish a steady 60+ FPS (I mean the dead lowest) from 120. I've run a few trials on competitive FPS shooter gamers who swear by their 240 Hz monitors: I run the game at low quality to ensure the lowest FPS stays above 60, while capping the refresh rate to 60 without telling them. In controlled tests where they thought the cap had been removed, none of them ever got a hit rate above 60%. It's much like a mind trick.
The irony is that Nvidia, prior to the 40-series, did a whole marketing campaign on "frames win games" with a focus on high refresh rate monitors. They had demos where you could try 60 Hz, 120 Hz, 240 Hz, and 360 Hz with a couple of different games. There was, unequivocally, a massive difference between 60 and 240+, and even between 60 and 120. The difference between 120 and 240, or 240 and 360, was far less noticeable.

Part of the campaign was about reducing input latency, and part of the way you do that is with extreme frame rates. So then framegen basically killed all of that and you don't hear about "frames win games" anymore. Or at least, it's portrayed in a very different manner.

But also note that the frames win games campaign was focused more around esports games where you could actually get 400+ FPS on a fast GPU and PC, without framegen or even upscaling. Visual fidelity in Fortnite, Valorant, PUBG, LoL, Dota 2, etc. is a tertiary concern to performance and casting as wide a net as possible for supported hardware.
 
Engine motion and velocity data is taken into account in the AI-generated frame.

For example, if your player character gets hit and starts flinching during an MFG frame, that will be reflected.

It's not just a simple "keep drawing in a straight line" extrapolation; whatever motion change happens in the engine during those frames is taken into account.

The limitations are, as we know, user input, and the fact that completely new objects that aren't present in either the current or the next rendered frame can't be generated, as far as I know.
You know, that's what I've always had doubts about. Some people say the "motion vectors" defined as inputs to nVidia's TAA and DLSS libraries are not "scene" motion, but image-based (a hint rather than a solid vector of movement for an object or element).

To me, they were "proper" motion vectors, but I remember people discussing this at length without agreeing on what they are, mixing encoding terminology and graphics engine terminology.

In the end, I'm still not 100% sure there is engine data passed into the interpolation of the frames.
I mean, you sort of start with "it can't be solved" and then immediately propose a solution.

But what if the engine does take all that into account?

There is some strange assumption that this is something that won't ever be present in game engines, especially given that, as I said above, part of it already exists for frame generation's sake.

In other words, there is no reason why game engines won't be improved to take user input into account during frame generation, especially given that everyone and their mother will eventually be using this tech for the upcoming UDNA consoles.
Careful with strawmen: "humans can't fly, but if we grow wings and lower our bone density we can". Yes, to my eyes, there are some things which, in the current state, can't be fixed unless the approach changes so that the engine takes on more of the "interpolation" than nVidia's or AMD's drivers do for it. Much like upscaling: graphics engines had been doing it for years before nVidia, AMD, and Intel came up with DLSS/FSR/XeSS. I just don't have enough in-depth knowledge of how the interpolation (FrameGen) works at the driver level, given it's not being included in engines the way upscalers are. Point is: upscalers can work outside of the UI, but so far, FrameGen can't work before the UI data is in the scene. That needs to change, and it means moving the "interpolation" technique closer to the game engines themselves.

Regards.
 
The irony is that Nvidia, prior to the 40-series, did a whole marketing campaign on "frames win games" with a focus on high refresh rate monitors. They had demos where you could try 60 Hz, 120 Hz, 240 Hz, and 360 Hz with a couple of different games. There was, unequivocally, a massive difference between 60 and 240+, and even between 60 and 120. The difference between 120 and 240, or 240 and 360, was far less noticeable.
That irony was partly down to the "trying new things" mental trick. When you aren't told what refresh rate you are using, those same people just can't reliably tell whether the setup is capped at 60 FPS. I've literally tried the test with a dozen friends who attended similar events and swear they can distinguish 60 Hz from higher-refresh-rate monitors: once you enable vertical sync and don't let the actual display latency drop, they just can't tell the difference between a 144 Hz monitor capped at 60 FPS and one running at native 144 Hz. It's similar to hi-res music beyond lossless: give them a fake label, like putting a shiny 144 Hz sticker on a 60 Hz-capped experience, and suddenly they feel much better.

Part of the campaign was about reducing input latency, and part of the way you do that is with extreme frame rates. So then framegen basically killed all of that and you don't hear about "frames win games" anymore. Or at least, it's portrayed in a very different manner.

But also note that the frames win games campaign was focused more around esports games where you could actually get 400+ FPS on a fast GPU and PC, without framegen or even upscaling. Visual fidelity in Fortnite, Valorant, PUBG, LoL, Dota 2, etc. is a tertiary concern to performance and casting as wide a net as possible for supported hardware.
That was exactly the issue: high FPS is for latency, and more importantly, when the 1% or 0.1% lows aren't reviewed as frequently, the high average FPS metric gives a rough estimate of what the 1% lows, and thus the stuttering, might be like in our own experience. For action, a 60 FPS base is more than enough, and for slow-changing motion like a flight sim, above 24 or 30 FPS is good enough to feel smooth.
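As a side note on the 1% / 0.1% low metric: it's usually derived from a capture of per-frame frame times rather than read off an FPS counter. Reviewers differ in the exact method, so treat this as one reasonable sketch of the idea, not anyone's official methodology:

```python
# Average FPS and 1% / 0.1% "low" FPS from per-frame frame times (in milliseconds).
# One common convention: take the slowest 1% (or 0.1%) of frames and report the FPS
# equivalent of their average frame time.

def fps_stats(frame_times_ms: list[float]) -> dict[str, float]:
    times = sorted(frame_times_ms, reverse=True)     # slowest frames first
    avg_fps = 1000.0 * len(times) / sum(times)

    def low(fraction: float) -> float:
        n = max(1, int(len(times) * fraction))       # how many of the slowest frames
        worst = times[:n]
        return 1000.0 * len(worst) / sum(worst)

    return {"avg": avg_fps, "1% low": low(0.01), "0.1% low": low(0.001)}

# Example: a run that holds ~60 FPS (16.7 ms) but hitches to 50 ms a handful of times.
sample = [16.7] * 2000 + [50.0] * 5
print(fps_stats(sample))   # average stays near 60 while the lows expose the stutter
```

Which is why a high average with poor lows can still feel like stutter, and why the average alone is only a rough proxy.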
 
That irony was partly down to the "trying new things" mental trick. When you aren't told what refresh rate you are using, those same people just can't reliably tell whether the setup is capped at 60 FPS. I've literally tried the test with a dozen friends who attended similar events and swear they can distinguish 60 Hz from higher-refresh-rate monitors: once you enable vertical sync and don't let the actual display latency drop, they just can't tell the difference between a 144 Hz monitor capped at 60 FPS and one running at native 144 Hz. It's similar to hi-res music beyond lossless: give them a fake label, like putting a shiny 144 Hz sticker on a 60 Hz-capped experience, and suddenly they feel much better.


That was exactly the issue: high FPS is for latency, and more importantly, when the 1% or 0.1% lows aren't reviewed as frequently, the high average FPS metric gives a rough estimate of what the 1% lows, and thus the stuttering, might be like in our own experience. For action, a 60 FPS base is more than enough, and for slow-changing motion like a flight sim, above 24 or 30 FPS is good enough to feel smooth.
I could absolutely tell the difference between 60, 120, and 240 in the demos. They could switch it and let you try. 240 and 360, and later 480? No, I didn't usually notice much gain there. But I could still spot the differences given time. The other hardware tech people I was with could also tell the difference.

Anyone who can't tell the difference between 60 and 120+ isn't really a PC gamer is my take, and anyone that thinks 24 or 30 fps is smooth has never played games at 60 or 120+ FPS. 24 FPS is garbage, 30 FPS is barely playable. Even in Flight Simulator, 24 FPS just isn't a good experience. My brothers probably couldn't tell the difference between 60 and 120, because they're not gamers, at all. I meanwhile grumble when we play four player Mario Kart on the Wii (yeah, the original Wii) because it drops from 60 FPS down to 30 and feels crappy.

It frankly feels like you're making stuff up to argue the point now. Because on the one hand you're complaining about "fake frames" with framegen, and on the other you're spouting nonsense about people not being able to tell the difference between 60 and 120 FPS, while 24 or 30 FPS is "smooth enough."

Have you used framegen on an RTX 4080 or faster, or even on an RTX 4070 at reasonable settings (i.e. something like 1440p with upscaling where you go from 50 to 80 FPS)? Or are you just spouting off what others have said online? And if you've only used FSR3 framegen, that doesn't count. It is provably inferior, in most games. The image artifacting with FSR3 is so much worse than DLSS3 framegen that they're not even comparable other than in terms of how many FPS you might be able to push out.

So which is the correct point? Because if 24 to 30 FPS is "smooth enough" then MFG 4X pulling 150+ FPS will definitely be smooth enough and framegen is fine. But if it's "fake frames" then higher rendered FPS and input matter a lot and 60 FPS is the bare minimum we should aim for.
 