>Aside from the "better feel" (aesthetic) aspect, FG has tangible benefit beyond just "smoothing," eg better target tracking for shooters.

Agreed. I'm certain it benefits your eyes' ability to track objects zipping across the screen.
>Saying IPS has to keep pace with FPS is ludicrous, because gamers can only react a few times per second. Here's a quick test of reaction time.

Who said this, now?
If an event occurs at time t, your reaction time is R, and the rendering pipeline + monitor add another term L, then your reaction actually occurs at time t + L + R. Reduce L and you react sooner. Not much, but it might be enough to make the difference when your skill is at its limit.
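A minimal sketch of that sum, with made-up values for L purely for illustration:

```python
# Illustration of t + L + R with assumed latency numbers (ms); not measurements.
R = 250.0        # human reaction time R, a typical click-test average
event_t = 0.0    # the event occurs at time t

for label, L in [("lower-latency pipeline", 35.0), ("higher-latency pipeline", 55.0)]:
    reaction_at = event_t + L + R   # t + L + R
    print(f"{label}: reaction occurs at t + {reaction_at - event_t:.0f} ms")
```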
>Secondly, in considering what "feels better," you need to consider acclimatization. Simply put, we feel better about a thing the more we're used to it. I watch YT vids on 1.5x or 1.75x; watching vids now on regular speed feels like slo-mo.

I disagree with what I think your gist is here. I don't want Jarred to get used to playing at high latency before rendering a verdict. I want his initial and most visceral impression. That's actually the best take, here.
>In the context of FG, people are used to FPS and IPS matching. FG's FPS/IPS mismatch will feel weird (read: worse). It's understandable that some people will bail at first try and proclaim it bad. But outside of these forums, FG will follow upscaling in getting wider adoption and, at some point, becoming a default setting. People will acclimate to it, and will prefer it vs. without.
>To see this (over time), make a note of people's sentiment about FG now, and do it again at the same time next year.

I disagree that anything conferring so much latency will ever become the default and widely accepted. It's not only a "feel" thing, but also the quantitative impact latency has (i.e. t + L + R). Even Reflex 2 probably can't save it, since it still doesn't help with events that are truly surprising (i.e. an enemy popping out from behind an obstacle).

>Saying IPS has to keep pace with FPS is ludicrous, because gamers can only react a few times per second. Here's a quick test of reaction time.
>https://humanbenchmark.com/tests/reactiontime
>I average about ~250ms, or about 4 IPS.
>https://humanbenchmark.com/tests/aim
>This aim/shoot test is more relevant, as you have to move mouse & click. I average ~600ms, or less than 2 IPS.

I don't think anyone is saying that you need an input latency improvement equal to the perceived frame rate increase to get a benefit.
>My second disagreement is with your use of "true boost to performance." Reaching for "true perf" is like reaching for "fake frames" as an argument, ie if I like something, it's "true;" if I dislike it, it's "fake." It's a mental shortcut. Don't reach for shortcuts.
>I take issue with this because "performance"--more precisely, responsiveness--can't be measured in "input samples per second" (IPS for short). Responsiveness is just latency.
>...
>Secondly, in considering what "feels better," you need to consider acclimatization. Simply put, we feel better about a thing the more we're used to it. I watch YT vids on 1.5x or 1.75x; watching vids now on regular speed feels like slo-mo.

Anything that negatively impacts input latency is going to make the experience worse whether or not someone notices it, though.

>I disagree with this statement, twice. Characterizing FG as "motion smoothing," while correct at face value, glosses over the point that FG provides more (temporal) visual content. Aside from the "better feel" (aesthetic) aspect, FG has tangible benefit beyond just "smoothing," eg better target tracking for shooters.

The benefit from frame generation here isn't going to offset the input latency cost of running it in the first place. The primary advantage of a high frame rate is how quickly you can react when an enemy becomes visible, not how smoothly they move across the screen.
You can measure latency, image quality, and framerate. Those are distinct values. But what they mean isn't always clear cut. All three blend together, and the whole isn't just a straight sum of the parts. It ends up being very subjective, which is what I hopefully managed to convey with this article.

@JarredWaltonGPU I think there are primarily two things to look at in AI rendering solutions atm:
Latency: This is easy, as you can measure from when you take an action to when the pixels change on screen. Higher latency means the tech is not suitable for quick action. It is straightforward to measure this.
Image quality: This is critical but harder to measure. The eye is not a good tool for this, as it is subjective. We can use computer graphics metrics that score the similarity between two images or videos, so we can compare native rendering vs. neural rendering. All the ghosting, shimmering, etc. will degrade the image and reduce the score. I believe super-resolution AI uses metrics like these in its loss function.
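As a rough sketch of what that scoring could look like, assuming both frames were captured at the same resolution (the filenames below are just placeholders):

```python
# Score an AI-generated/upscaled frame against a natively rendered reference frame.
from skimage.io import imread
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

native = imread("native_frame.png")      # reference: native rendering
neural = imread("generated_frame.png")   # test: FG / upscaled output

# SSIM: 1.0 means identical; ghosting and shimmering pull the score down.
ssim = structural_similarity(native, neural, channel_axis=-1)
# PSNR (dB): higher means closer to the reference.
psnr = peak_signal_noise_ratio(native, neural)

print(f"SSIM: {ssim:.4f}, PSNR: {psnr:.2f} dB")
```

Running this over a recorded sequence and plotting the per-frame scores would also show where ghosting or shimmering spikes.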
I think a single card is good enough for running all these tests. We may even have a way to automate all the tests with an AI agent that can use the computer.
>This is for DLSS 4. Many say that the image quality is so good that you can switch to the Performance profile instead of the Quality profile. But for many motherboards running PCIe 4.0 x8 - and that's three generations of Intel 600/700/800 series chipsets, and AMD 800 series in the case of installing an NVMe SSD in the CPU-attached M.2 slot - this will be a big problem, and you may not get much benefit.

Yes, it's possible. Will I do it soon? Meh. I'm 99% sure PCIe 4.0 won't matter. PCIe 3.0 x16 (same bandwidth as PCIe 4.0 x8) probably won't matter either, because all these frame generation things happen on the GPU, so PCIe isn't a factor.
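For reference, the bandwidth equivalence in that parenthetical works out roughly like this (per direction, ignoring protocol overhead):

```python
# Approximate per-direction PCIe bandwidth: lanes * per-lane rate in GB/s.
configs = {
    "PCIe 3.0 x16": (16, 0.985),  # 8 GT/s per lane, 128b/130b encoding
    "PCIe 4.0 x8":  (8, 1.969),   # 16 GT/s per lane
    "PCIe 4.0 x16": (16, 1.969),
}
for name, (lanes, gb_per_lane) in configs.items():
    print(f"{name}: ~{lanes * gb_per_lane:.1f} GB/s")
# PCIe 3.0 x16 and PCIe 4.0 x8 both land around ~15.8 GB/s.
```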
If you've ever played an electric guitar through amp and effects emulation on your PC, you will know 20 ms is a lifetime! Can't play guitar? Then here's an even more challenging test: speak into a mic connected to your PC and listen to your own voice live (at the same time) through headphones. Even most audio interfaces using ASIO will be too slow, and you will definitely recognize THE LAG, every millisecond of it.

I agree overall with the gist of your piece. But some quibbles follow.
>As with the original framegen, it's more about smoothing out the visuals than providing a true boost to performance.
I disagree with this statement, twice. Characterizing FG as "motion smoothing," while correct at face value, glosses over the point that FG provides more (temporal) visual content. Aside from the "better feel" (aesthetic) aspect, FG has tangible benefit beyond just "smoothing," eg better target tracking for shooters.
Granted, this is a bit of hair-splitting, but proper characterization is important for understanding the different facets of a concept. Calling FG "motion smoothing" only differs from calling it "fake frames" in degree, not in kind. You're reaching for a shortcut at the expense of nuance.
My second disagreement is with your use of "true boost to performance." Reaching for "true perf" is like reaching for "fake frames" as an argument, ie if I like something, it's "true;" if I dislike it, it's "fake." It's a mental shortcut. Don't reach for shortcuts.
I take issue with this because "performance"--more precisely, responsiveness--can't be measured in "input samples per second" (IPS for short). Responsiveness is just latency.
Saying IPS has to keep pace with FPS is ludicrous, because gamers can only react a few times per second. Here's a quick test of reaction time.
https://humanbenchmark.com/tests/reactiontime
I average about ~250ms, or about 4 IPS.
https://humanbenchmark.com/tests/aim
This aim/shoot test is more relevant, as you have to move mouse & click. I average ~600ms, or less than 2 IPS.
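Spelled out, the arithmetic behind those IPS figures is just:

```python
# Convert an average reaction time in ms into "input samples per second" (IPS).
for test, reaction_ms in [("reaction click test", 250), ("aim/shoot test", 600)]:
    ips = 1000 / reaction_ms
    print(f"{test}: {reaction_ms} ms -> ~{ips:.1f} IPS")
# Roughly 4 IPS and 1.7 IPS, versus render rates of 100+ FPS.
```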
So, saying that IPS needs to be faster with higher FPS is ridiculous. What you're saying is that latency needs to be lower. You equate this to performance, OK. But it can't be characterized as "true" or "fake." Responsiveness is a sweet spot.
Quibbles aside.
My take on your piece is that it's a personal exploration into FG use, which is good. I think more people should try it and make up their own minds, instead of taking their cue from talking heads. But your piece has fairly limited relevance for readers, as you're taking top-tier GPUs and purposely slowing them down with full RT, then applying FG. Better would be testing the median case, eg a 4060/7600 using DLSS/FSR Balanced (no RT), then applying FG. That would be the widest use of FG, and the most relevant to users.
IMO, there's no need to test different GPUs and compare them. Keep the focus on FG and its utility. A single GPU should suffice. KISS.
Secondly, in considering what "feels better," you need to consider acclimatization. Simply put, we feel better about a thing the more we're used to it. I watch YT vids on 1.5x or 1.75x; watching vids now on regular speed feels like slo-mo.
In the context of FG, people are used to FPS and IPS matching. FG's FPS/IPS mismatch will feel weird (read: worse). It's understandable that some people will bail at first try and proclaim it bad. But outside of these forums, FG will follow upscaling in getting wider adoption and, at some point, becoming a default setting. People will acclimate to it, and will prefer it vs. without.
To see this (over time), make a note of people's sentiment about FG now, and do it again at the same time next year.