Most of you are missing the OP's point. To the OP: I seriously doubt this will happen for another 15 years. When it does, it will completely revolutionize the graphics and physics engine industry. I doubt DirectX or OpenGL as they exist would even begin to handle this; either something else would arise, or DirectX would undergo another major overhaul.
What you're talking about is something I've dreamed about for a long time. I've wanted to develop an engine that would do exactly this:
- Light rendered based on the material it strikes, with that light then traveling back to the "eye," i.e. the camera (see the first sketch after this list).
- Virtual worlds built not out of 2D planes floating in space, connected at the edges, with shader and texture layers applied on top, but out of actual 3D materials: surfaces, fluids, gaseous clouds. Worlds built out of "atoms," as it were. This would enable true fluid dynamics modeling (see the second sketch after this list).
- Physics simulation based on the material system above. For an FPS example, if you shot someone, they would die because your bullet or plasma bolt cut or seared into them, slicing a blood vessel built into their character model that feeds their body, causing them to bleed out and die. You could fire a tactical nuke at a cliff, and the explosion would blow a hole in it, possibly shearing off rock and biting into the lake bed above; the water would then pour out in a waterfall because its bed had been disrupted.
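To make the lighting bullet a bit more concrete, here's a minimal, hypothetical sketch in C++ of the core idea: the colour returned for a ray depends on the material it strikes, standing in for the light that travels back to the eye. The `Material` struct, `shade` function, and the reflectivity blend are all illustrative assumptions, not any existing engine's API; a real tracer would recursively follow bounce rays instead of the shortcut taken here.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};

double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Hypothetical material description: how a surface responds to incoming light.
struct Material {
    Vec3 albedo;         // base colour of the surface
    double reflectivity; // 0 = diffuse (matte), 1 = perfect mirror
};

// Shade one hit point: a ray left the camera, struck this material, and the
// colour returned is the light that would travel back along the ray to the eye.
Vec3 shade(const Material& m, const Vec3& normal, const Vec3& lightDir, const Vec3& lightColor) {
    double diffuse = std::fmax(0.0, dot(normal, lightDir));  // Lambertian term
    Vec3 matte = {m.albedo.x * lightColor.x * diffuse,
                  m.albedo.y * lightColor.y * diffuse,
                  m.albedo.z * lightColor.z * diffuse};
    // A real tracer would trace a reflected ray here; blending toward the light
    // colour stands in for that bounce in this toy version.
    return matte * (1.0 - m.reflectivity) + lightColor * m.reflectivity;
}

int main() {
    Material rock  = {{0.5, 0.4, 0.3}, 0.0};
    Material metal = {{0.9, 0.9, 0.9}, 0.8};
    Vec3 n = {0, 1, 0}, l = {0, 1, 0}, sun = {1.0, 0.95, 0.9};
    Vec3 a = shade(rock, n, l, sun), b = shade(metal, n, l, sun);
    std::printf("rock:  %.2f %.2f %.2f\n", a.x, a.y, a.z);
    std::printf("metal: %.2f %.2f %.2f\n", b.x, b.y, b.z);
}
```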
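And for the "atoms" and cliff/lake bullets, a toy sketch of a material grid: every cell is a material rather than a textured polygon, and one naive update rule is enough to make water drain through a hole blasted in its bed. The `Cell` enum, the 2D slice, and the fall-straight-down rule are simplifications I'm assuming purely for illustration; real fluid simulation would also spread sideways, conserve volume, and of course run in 3D.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical cell types for an "atom"-style world.
enum class Cell { Empty, Rock, Water };

struct World {
    int w, h;                 // a 2D slice for brevity; a real engine would be 3D
    std::vector<Cell> cells;
    Cell& at(int x, int y) { return cells[y * w + x]; }

    // One naive "physics" pass: water falls into empty space below it, so if an
    // explosion removes the rock under a lake bed, the water drains out on its own.
    // (A real solver would also flow sideways; this only handles straight drops.)
    void step() {
        for (int y = h - 2; y >= 0; --y)
            for (int x = 0; x < w; ++x)
                if (at(x, y) == Cell::Water && at(x, y + 1) == Cell::Empty) {
                    at(x, y + 1) = Cell::Water;
                    at(x, y) = Cell::Empty;
                }
    }

    void print() const {
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                Cell c = cells[y * w + x];
                std::putchar(c == Cell::Rock ? '#' : c == Cell::Water ? '~' : '.');
            }
            std::putchar('\n');
        }
        std::putchar('\n');
    }
};

int main() {
    World world{8, 5, std::vector<Cell>(8 * 5, Cell::Empty)};
    for (int x = 0; x < 8; ++x) world.at(x, 2) = Cell::Rock;   // a rock shelf
    for (int x = 2; x < 6; ++x) world.at(x, 1) = Cell::Water;  // a lake sitting on it
    world.at(3, 2) = Cell::Empty;                              // "blast" a hole in the shelf
    world.print();
    for (int i = 0; i < 3; ++i) { world.step(); world.print(); }
}
```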
The greatest benefit I see in all this is honestly fluid interaction, with material interaction on the order of the cliff example a close second. Gameplay would change with all the new non-preprogrammed interactions available in the world. In GTA, you really could level a building with enough explosives, instead of the developers scripting specific mission sequences.
In some ways, I think this model of world design and rendering would actually make things easier on game designers, since they would have to program far fewer custom effects and interactions; the world renderer would take care of them.
Another interesting note is that the rendering engine as it exists would shift focus completely: from rendering polygons and then layering light effects on them, to solely rendering how light interacts with particle surfaces. With a strict, true ray-tracing viewpoint, the camera would be immersed in a light field and would detect what arrives at the viewport's plane of vision, as opposed to projecting what is visible outward from that plane and culling everything that falls outside it, above, below, left, and right of the viewport.
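A rough sketch of that inverted loop, with made-up numbers: instead of pushing polygons toward the screen and culling, you walk the pixels of the viewport plane and fire a ray out through each one, asking what light arrives there. The resolution, field-of-view convention, and camera setup below are arbitrary assumptions for illustration.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// In a rasterizer the loop runs over geometry, projecting it onto the screen and
// culling what falls outside the frustum. In a backward ray tracer the loop
// inverts: for each pixel on the viewport plane, ask "what light arrives here?"
// by firing a ray out through that pixel and seeing what it hits.
int main() {
    const int width = 4, height = 3;  // tiny image just for illustration
    const double fov = 1.0;           // half-width of the image plane at z = 1

    for (int py = 0; py < height; ++py) {
        for (int px = 0; px < width; ++px) {
            // Map the pixel to a point on the image plane in camera space.
            double u = (2.0 * (px + 0.5) / width - 1.0) * fov;
            double v = (1.0 - 2.0 * (py + 0.5) / height) * fov * height / width;
            Vec3 dir = normalize({u, v, 1.0});  // ray from the eye through this pixel
            // A real renderer would now trace 'dir' into the scene and shade the hit.
            std::printf("pixel (%d,%d) -> ray dir (%.2f, %.2f, %.2f)\n",
                        px, py, dir.x, dir.y, dir.z);
        }
    }
}
```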
Anyway, that's my technophile rant; my two cents.