Input latency is the all-too-frequently missing piece of framegen-enhanced gaming performance analysis

Over a decade on from Inside the Second, it seems the same lessons in GPU performance measurement have to keep being relearned time and time again. We've known for at least that long that measuring, or even plotting, FPS tells you only a fraction of the story on GPU performance.

Thou shalt plot frametimes, not framerates!
 
Yes, measure the time from a key press or mouse click to the in-game response showing up on screen. But that isn't only about the rendering pipeline; it's also about the game engine itself (which can definitely make things even worse, beyond just GPU/driver lag).
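
To make that concrete, here's a toy model of click-to-photon latency in Python. The stage names and numbers are my own illustrative assumptions, not measurements from any real tool, but they show why GPU render time is only one slice of what you actually feel:

```python
# Toy model of click-to-photon latency. Stage names and values are
# illustrative assumptions, not measurements from any real tool.

def end_to_end_latency_ms(
    input_sampling_ms: float,   # time until the engine actually polls the click
    simulation_ms: float,       # game engine / CPU frame (where engines can add a lot)
    render_queue_ms: float,     # frames buffered ahead of the GPU
    gpu_render_ms: float,       # the part a pure FPS number mostly reflects
    present_scanout_ms: float,  # compositor, present, and display scan-out
) -> float:
    # The felt latency is the sum of every stage, not just the GPU render time.
    return (input_sampling_ms + simulation_ms + render_queue_ms
            + gpu_render_ms + present_scanout_ms)

if __name__ == "__main__":
    # Even with a fast 10 ms GPU frame (~100 FPS), a slow engine and deep
    # queueing can push the felt latency far higher than 10 ms.
    print(end_to_end_latency_ms(4.0, 18.0, 16.0, 10.0, 8.0), "ms")
```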
 
It's the age-old problem of benchmarks. If sites make charts that only 5% of readers can understand, that's not a great solution. Putting in five charts instead of one doesn't work great either. People (well, most people...) want concise representations of data, not minutiae overload. If you only plot frametimes, you still end up with the same problems as only plotting framerates. They're just reciprocals of each other: frames per second versus seconds per frame. (FPS = 1000 / frametime in ms. Frametime in ms = 1000 / FPS.)
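
For what it's worth, here's a minimal Python sketch of that reciprocal relationship, just the two conversions from the parenthetical above:

```python
# FPS and frame time (in milliseconds) are reciprocals of each other:
# FPS = 1000 / frametime_ms and frametime_ms = 1000 / FPS.

def fps_to_frametime_ms(fps: float) -> float:
    """Convert a framerate in FPS to a per-frame time in milliseconds."""
    return 1000.0 / fps

def frametime_ms_to_fps(frametime_ms: float) -> float:
    """Convert a per-frame time in milliseconds to a framerate in FPS."""
    return 1000.0 / frametime_ms

if __name__ == "__main__":
    for fps in (30, 60, 90, 120, 240):
        print(f"{fps:>3} FPS -> {fps_to_frametime_ms(fps):6.2f} ms per frame")
```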

While we're on the subject, there's potential value in plotting FPS on a logarithmic scale, because going from 30 to 60 is a much bigger deal than going from 60 to 120, or from 60 to 90. But again, there's nuance to discuss, and more people will likely get confused by logarithmic charts. And what base do you use for these logarithmic charts? That can change the appearance of the graphs as well.
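
As a rough illustration, and assuming a reasonably recent matplotlib, here's a sketch that plots the same FPS values on a linear axis and on log2/log10 axes, so you can see where the gridlines and tick labels end up:

```python
# Plot the same FPS values on a linear axis and on log2 / log10 axes to see
# how the chosen base changes where the gridlines and tick labels land.
# Assumes matplotlib 3.3+ (older versions spell the keyword "basey").
import matplotlib.pyplot as plt

fps_points = [30, 45, 60, 90, 120, 240]

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, (title, base) in zip(axes, [("linear", None), ("log base 2", 2), ("log base 10", 10)]):
    ax.plot(range(len(fps_points)), fps_points, marker="o")
    if base is not None:
        # On a log axis, 30 -> 60 spans the same vertical distance as 60 -> 120.
        ax.set_yscale("log", base=base)
    ax.set_title(title)
    ax.set_ylabel("FPS")

plt.tight_layout()
plt.show()
```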

Measuring latency is helpful and good, but it still doesn't tell the full story. It's why all of my reviews for the past couple of years also included a table at the end showing average FPS, average 1% lows, price, power, and, where reported by FrameView, latency. Like the one on the 9070 / 9070 XT review, bottom of page 8:
https://www.tomshardware.com/pc-components/gpus/amd-radeon-rx-9070-xt-review/8

(I didn't include that on the 9060 XT review because, frankly, I was no longer a full-time TH editor and didn't want to spend a bunch more time researching and updating prices, generating more charts, etc. 🤷‍♂️ )
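
For anyone curious how a summary row like that might be assembled, here's a rough Python sketch. The frametime data is synthetic, the 1%-low definition used (average FPS of the slowest 1% of frames) is one common convention rather than necessarily the exact method in those reviews, and price, power, and latency are left as placeholder columns:

```python
# Build the average FPS and 1% low numbers for a summary row from a list of
# frametimes. The data here is synthetic, and the 1%-low definition (average
# FPS of the slowest 1% of frames) is one common convention, not necessarily
# the exact one used in the reviews mentioned above.
import random

def summarize(frametimes_ms: list[float]) -> dict:
    avg_fps = 1000.0 / (sum(frametimes_ms) / len(frametimes_ms))
    slowest_first = sorted(frametimes_ms, reverse=True)
    slow_1pct = slowest_first[: max(1, len(slowest_first) // 100)]
    low_1pct_fps = 1000.0 / (sum(slow_1pct) / len(slow_1pct))
    return {"avg_fps": round(avg_fps, 1), "1pct_low_fps": round(low_1pct_fps, 1)}

if __name__ == "__main__":
    random.seed(0)
    # Synthetic frametimes: mostly 10 ms frames with occasional 25 ms hitches.
    frames = [random.choice([10.0] * 49 + [25.0]) for _ in range(2000)]
    row = summarize(frames)
    # Price, power, and latency would come from pricing research, power logging,
    # and FrameView (where it reports latency); left as placeholders here.
    row.update({"price_usd": None, "power_w": None, "latency_ms": None})
    print(row)
```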

It all ends up as somewhat fuzzy math. Some people will want sub-40ms for a "great" experience and are okay with 100 FPS as long as it gets them there. Others will be fine with sub-60ms or even sub-100ms with modest framerates. I will say, as noted in many reviews and articles, that MFG and framegen end up being hard to fully quantify. Alan Wake 2 can feel just fine with a lower base framerate, with MFG smoothing things out, and relatively high latency. Shooters generally need much lower latency to feel decent. Other games can fall in between.
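
Purely as an illustration of those rule-of-thumb bands (sub-40ms, sub-60ms, sub-100ms), here's a toy Python helper. The thresholds are the rough figures above, not any kind of standard:

```python
# Toy encoding of the rule-of-thumb bands above: sub-40 ms feels "great",
# sub-60 ms is fine for many games, sub-100 ms can be tolerable in slower
# ones. The thresholds are rough figures from the post, not a standard.

def latency_verdict(pc_latency_ms: float, fast_paced: bool = False) -> str:
    if pc_latency_ms < 40:
        return "great"
    if pc_latency_ms < 60:
        return "borderline for shooters" if fast_paced else "fine for most games"
    if pc_latency_ms < 100 and not fast_paced:
        return "tolerable in slower-paced games"
    return "sluggish"

if __name__ == "__main__":
    print(latency_verdict(55, fast_paced=True))    # borderline for shooters
    print(latency_verdict(85, fast_paced=False))   # tolerable in slower-paced games
```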

And engines and other code really matter a lot. Unreal Engine 5 seems to have some really questionable stuff going on under the hood, and it's used in a lot of games. I know there was one game I was poking at (A Quiet Place: The Road Ahead) where even a "native" (non-MFG/non-FG, non-upscaled) 40~45 FPS actually felt very sluggish. Like, if I were to just play the game that way without looking at the performance or settings, I would have sworn that framegen was enabled and that the game was rendering at ~20 FPS and doubling that to 40. But it wasn't; the engine and game combination just ended up feeling very sluggish. Enabling framegen actually made the game feel much better! Weird stuff, but that's par for the course with UE5.
 
MFG "might" work on high end, but it shouldn't be used on any low end system.

Just like how DLSS was meant to let lower end systems run games better it was abused and is now expected for a min 60fps on those low systems.

FrameGen already shows us that devs will abuse any tech to cheap out on optimizing games and have "framegen on to get 60fps" as their min requirements.

This will be case with MFG if it gets adopted by both sides. It will make gaming an even more awful experience for many.
 
You don't have to dive into minutiae... just give RENDERED frame rates in reviews and don't use FAKE frames in them at all.
 
The problem with "AI Generated Frames" is that in moderation they're great, especially when used with adaptive refresh rates (Freesync/G-Sync), doubly so for lower refresh rate monitors (not as many frames needed to hit it), but in the levels that they push for DLSS4 marketing there's just too many to make sense. They should be used to "fill in the gaps" to make a game smooth, not as a crutch to "make the game playable".
 
I tried Lossless Scaling's frame generation to 60 fps, running it on a second dGPU alongside my A750, just to handle the occasional dips to 45 fps, and it looked really good.

But I was playing Expedition 33, and the variable input lag was killing my ability to dodge and parry.