The trouble with realistic real-time engines is making them look realistic in actual gameplay. Something like micro-displacement mapping for lip movement would have to be authored on a scene-by-scene basis rather than with a modular approach, and it would eat up a lot of storage.
It's also deceiving. Look at our realistic car in the foreground while you can't really see the shitty trees in the background. In most games today you need to do everything at a higher quality, whereas in a simple scripted animation you can concentrate all the detail in the foreground and skip most of the game's calculations.
How it translates into gameplay will always be the final criterion. If it's just an animation with little player control, you might as well have packaged it as a movie.
RE: Agni's Philosophy: shaky-cam sucks when it's live action, and now I know it's just as bad in real-time rendered engines. Luckily it didn't last long as a tool for cinematographers, though you still see it used in low-budget TV occasionally.
I would say none of these demos look truly photorealistic, as in looking like a photo or a live-action video. Of these examples, the Unreal Loft demo probably comes closest, along with perhaps the Mizuchi Museum demo, though that one has a very narrow focal range, hiding any potential imperfections behind lots of lens blur. And of course, what these two demos have in common is that they are limited to a single room with little organic material: no humans or animals with unrealistic skin and chunky hair, no cloth or plants blowing in the wind, no complex character animations or collision detection between objects. They are not actually games. To be fair, though, even big-budget films tend to struggle to make CG look realistic, despite not having to render it in real time.