Dikyashi:
The idea of what Enhanced Sync does might seem complicated at first, but it's easier to wrap your head around if you look at the system in two states: above the refresh rate of your panel and below it. When your game is running at 90 FPS on a 60 Hz screen, you traditionally had the choice to enable or disable standard Vsync, which gave you either a tear-free experience with longer latency from mouse/keyboard input to the next frame (because you were metering the game engine) or screen tearing at the lowest input latency. With Enhanced Sync, the game engine is allowed to render at its full speed (90 FPS in this example), but the driver only outputs the latest FULL frame when the display's refresh window resets. This gives you no on-screen tearing and faster input response, because the game engine keeps running unthrottled at 90 FPS. (Credits: PCPer)
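To make the "latest full frame wins" behavior concrete, here's a rough C++ sketch of the idea in the quoted description. Everything in it (the timings, the thread layout, the names) is an illustrative assumption, not AMD driver code: the render loop runs flat out at roughly 90 FPS, and the display side simply picks up whichever complete frame is newest at each 60 Hz refresh.

```cpp
// Rough sketch of "present the newest completed frame at each refresh".
// Timings and structure are illustrative assumptions, not driver internals.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

std::atomic<int> latest_complete{-1};   // id of the newest fully rendered frame
std::atomic<bool> running{true};

// Game/render loop: runs flat out (~90 FPS here), never throttled by the display.
void render_loop() {
    for (int id = 0; running; ++id) {
        std::this_thread::sleep_for(std::chrono::milliseconds(11)); // pretend to render
        latest_complete.store(id);      // publish; older unshown frames are simply dropped
    }
}

// Display side: at each 60 Hz refresh boundary, scan out whichever complete frame
// is newest. Always a whole frame (no tearing), and the engine was never blocked.
void present_loop() {
    while (running) {
        std::this_thread::sleep_for(std::chrono::microseconds(16667)); // stand-in for a vblank wait
        std::printf("refresh: presenting frame %d\n", latest_complete.load());
    }
}

int main() {
    std::thread r(render_loop), p(present_loop);
    std::this_thread::sleep_for(std::chrono::seconds(1));
    running = false;
    r.join(); p.join();
}
```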
In that case, there'd be no tearing at all. So why do AMD's own slides say "Minimizes Tearing" with Enhanced Sync on a FreeSync monitor?
In the case you describe, it would only help a game engine that polls input in the same thread that drives the renderer. You could just as well implement a game engine that reads input and simulates the game world asynchronously from the renderer. I don't know how common this is, but in highly lag-sensitive games I'd be surprised if at least some didn't work this way.
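For what it's worth, here's a minimal sketch of that decoupled layout, under an assumed (hypothetical) engine structure: one thread polls input and steps the simulation at a fixed tick, while the render thread just snapshots the latest state whenever it's ready to draw, so input latency is tied to the simulation tick rather than the frame rate.

```cpp
// Minimal sketch of decoupling input/simulation from rendering.
// The engine layout here is a hypothetical example; real engines vary widely.
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

struct GameState { double player_x = 0.0; };

std::mutex state_mutex;
GameState shared_state;            // last fully simulated state
std::atomic<bool> running{true};

// Simulation thread: polls input and steps the world at a fixed ~1000 Hz tick,
// regardless of how fast (or slow) the renderer is running.
void simulate() {
    while (running) {
        double input = 1.0;        // stand-in for reading the mouse/keyboard
        {
            std::lock_guard<std::mutex> lock(state_mutex);
            shared_state.player_x += input * 0.001;  // advance the world one tick
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}

// Render thread: grabs a snapshot of the latest simulated state whenever it is
// ready to draw a frame (~90 FPS here).
void render() {
    while (running) {
        GameState snapshot;
        {
            std::lock_guard<std::mutex> lock(state_mutex);
            snapshot = shared_state;
        }
        // draw(snapshot) ...
        std::this_thread::sleep_for(std::chrono::milliseconds(11));
    }
}

int main() {
    std::thread s(simulate), r(render);
    std::this_thread::sleep_for(std::chrono::seconds(1));
    running = false;
    s.join(); r.join();
}
```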
The downside of just naively rendering as fast as possible and displaying the latest frame is increased judder, meaning fast-moving objects appear not to move at a consistent velocity. What you really want is for the game engine to estimate the earliest time a new frame can be displayed, as well as the time needed to render it, and then not start rendering until a little before the difference between the two (i.e. display time minus render time). That minimizes both display lag and judder. Better yet, if the game engine knows roughly when the frame will be displayed, you can extrapolate everything to that point in time, minimizing judder even further.
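Here's a rough C++ sketch of that pacing scheme. The refresh interval, the render-time estimate, the safety margin, and the simple linear extrapolation are all assumptions for illustration; a real engine would measure its own render cost and get the actual scanout time from the display/driver.

```cpp
// Sketch of delayed-start frame pacing plus extrapolation to the display time.
// All numbers and the toy "world" are illustrative assumptions.
#include <chrono>
#include <thread>

using Clock = std::chrono::steady_clock;
using ms = std::chrono::duration<double, std::milli>;

struct World { double pos = 0.0, vel = 0.12; };   // velocity in units per millisecond

int main() {
    const ms refresh(16.667);        // 60 Hz panel
    const ms render_estimate(6.0);   // estimated cost of rendering one frame
    const ms safety_margin(1.0);     // slack so a slow frame still makes the deadline

    World world;
    auto next_vblank = Clock::now() + std::chrono::duration_cast<Clock::duration>(refresh);

    for (int frame = 0; frame < 120; ++frame) {
        // 1. Sleep until a little before (display time - render time), so the frame
        //    is as fresh as possible but still finishes before the refresh.
        auto start_render_at = next_vblank -
            std::chrono::duration_cast<Clock::duration>(render_estimate + safety_margin);
        std::this_thread::sleep_until(start_render_at);

        // 2. Extrapolate the world forward by the time remaining until the frame
        //    will actually be shown, to minimize judder.
        ms lead = std::chrono::duration_cast<ms>(next_vblank - Clock::now());
        double shown_pos = world.pos + world.vel * lead.count();

        // 3. Render using shown_pos (stand-in for real draw calls).
        (void)shown_pos;
        std::this_thread::sleep_for(render_estimate);

        // 4. Present at the refresh boundary, advance the simulation, and schedule
        //    the next refresh.
        world.pos += world.vel * refresh.count();
        next_vblank += std::chrono::duration_cast<Clock::duration>(refresh);
    }
}
```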