That's nonsense. GPU motion vectors are computed analytically during the rendering phase. That's why games have to explicitly enable DLSS support: DLSS requires the game engine to feed it that extra information.
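To make that concrete, here's a rough sketch in plain Python/NumPy (not actual engine code, and every number in it is made up) of what "analytical" means here: the engine already knows where a vertex was last frame and where it is this frame, so it can project both positions and take the exact screen-space difference. No guessing from pixel data is involved.

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    """Standard right-handed perspective projection matrix."""
    f = 1.0 / np.tan(fov_y / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = (2 * far * near) / (near - far)
    m[3, 2] = -1.0
    return m

def to_screen(clip, width, height):
    """Perspective divide plus viewport transform to pixel coordinates."""
    ndc = clip[:3] / clip[3]
    return np.array([(ndc[0] * 0.5 + 0.5) * width,
                     (1.0 - (ndc[1] * 0.5 + 0.5)) * height])

# Hypothetical numbers: a vertex that moved 0.1 units in x between frames.
proj = perspective(np.radians(60), 16 / 9, 0.1, 100.0)
vertex_prev = np.array([0.0, 0.0, -5.0, 1.0])   # position last frame
vertex_curr = np.array([0.1, 0.0, -5.0, 1.0])   # position this frame

screen_prev = to_screen(proj @ vertex_prev, 1920, 1080)
screen_curr = to_screen(proj @ vertex_curr, 1920, 1080)

# The motion vector is exact, because the engine knows both transforms.
motion_vector = screen_curr - screen_prev
print(motion_vector)  # pixels of motion, computed without looking at any pixels
```

A TV interpolator only ever sees the finished pixels, so it has to estimate that same vector from image content.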
LOL, wut? I don't even know what "12ms of latency per second" would mean. Did you think the latency kept increasing linearly the longer the content played? I can't see how that would even work...
We're well aware of the quality tradeoff, but "screen tear"? That's something I've never seen a TV motion interpolator cause, nor do I see how it would. Tearing comes from the source swapping frame buffers without syncing to the display's refresh (i.e., V-sync off), and that's on the GPU side, which a TV would have no reason to mess up.
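If it helps, here's a toy NumPy sketch (purely illustrative, nothing to do with real display hardware) of what a tear actually is: the panel scans out line by line, and if the source swaps buffers mid-scanout, the top of the visible image comes from one frame and the bottom from another.

```python
import numpy as np

HEIGHT, WIDTH = 1080, 1920

# Two consecutive frames from the source (flat shades, purely illustrative).
frame_old = np.full((HEIGHT, WIDTH), 50, dtype=np.uint8)
frame_new = np.full((HEIGHT, WIDTH), 200, dtype=np.uint8)

# With V-sync the buffer swap waits for the next refresh, so every scanline
# the panel draws comes from the same frame.
synced = frame_new.copy()

# Without V-sync the source can swap buffers mid-scanout. Lines scanned out
# before the swap (here: the first 400) came from the old front buffer, the
# rest from the new one -> a visible tear line at the swap point.
SWAP_LINE = 400
torn = np.empty_like(frame_new)
torn[:SWAP_LINE] = frame_old[:SWAP_LINE]
torn[SWAP_LINE:] = frame_new[SWAP_LINE:]

print(np.unique(synced))  # [200]      -> one coherent frame on screen
print(np.unique(torn))    # [ 50 200]  -> pieces of two frames on screen
```

The swap timing in that scenario is entirely under the source's control, not the TV's.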
Okay, now you're just throwing everything at the wall to see what sticks. The "soap opera effect" simply refers to the perception of high-framerate content vs. native 24 fps content. Unless you're advocating we game at 24 fps, it's a rubbish point and shows you either don't know what you're talking about or don't respect us enough to think we have a clue.
I'm not sure about that. The TV will know when it's in VRR mode and can adjust its interpolation algorithm accordingly.
If consoles are using it, then why would TVs have an incentive to improve? I expect they'll focus mainly on higher-latency interpolation methods aimed at video content.
First, I never disputed that the GPU is the superior place to do motion interpolation. Nobody did.
Second, it's never error-free. I already cited the example of a moving shadow to highlight a scenario where it's difficult for the interpolator to guess correctly. The goal is always to provide improvements of greater value than whatever artifacts come along with it. As long as that holds, it's worthwhile.
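For anyone curious what that guessing looks like, here's a rough Python/NumPy sketch of the basic idea behind pixel-only interpolation: block matching between two frames to estimate motion. All the numbers are invented and real interpolators are far more sophisticated, but the failure mode is the same. When a shadow sweeps across a surface that isn't actually moving, no motion vector explains the brightness change, so the match is poor and the interpolated frame inherits whatever the algorithm guessed.

```python
import numpy as np

def estimate_motion(block, prev_frame, x, y, search=4):
    """Brute-force block matching: guess where a block of the current frame
    came from in the previous frame by minimizing sum of absolute differences."""
    bh, bw = block.shape
    best_err, best_offset = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + bh > prev_frame.shape[0] or xx + bw > prev_frame.shape[1]:
                continue
            candidate = prev_frame[yy:yy + bh, xx:xx + bw].astype(int)
            err = np.abs(candidate - block.astype(int)).sum()
            if err < best_err:
                best_err, best_offset = err, (dy, dx)
    return best_offset, best_err

# Invented scene: a textured surface that doesn't move at all, with a shadow
# edge sweeping across it between frames (darkening more columns).
rng = np.random.default_rng(0)
texture = rng.integers(100, 160, size=(32, 64)).astype(np.uint8)

prev_frame = texture.copy()
curr_frame = texture.copy()
prev_frame[:, :24] //= 2   # shadow covered columns 0-23 last frame
curr_frame[:, :40] //= 2   # shadow covers columns 0-39 this frame

# A block that just fell into shadow. The surface under it didn't move (true
# motion is (0, 0)), but its previous-frame counterpart was still lit, so no
# candidate in the search window matches well. The "best" offset is basically
# an arbitrary guess, and an interpolated frame built from it shows artifacts.
block = curr_frame[8:16, 32:40]
offset, err = estimate_motion(block, prev_frame, x=32, y=8)
print("estimated offset:", offset, "match error:", err)
```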
Not usually, in my case.
Okay, so you write this big wall of text and end it with "do your own research"? I specifically asked you for data.
@rluker5 provided data, where's yours? You made grave and sweeping claims, which deserve data of similar heft. Surely, you've got some?