Cool, but why limit it to AR? I really want to see ray-traced VR. And if their algorithm truly scales (big "if" there), it should fly on desktop hardware. Then again, given what they said about doing away with conventional scene-graph traversal, it may well never match conventional rasterization in VR environments.
I think a key point is that they seem to be assuming the lighting and environment can be learned with sufficient precision and accuracy. IMO, this is possibly a harder problem to do well than the actual rendering part.
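For context, today's AR frameworks only expose a very coarse version of this. Here's a minimal sketch of what ARKit gives you per frame via ARLightEstimate (just a scalar intensity and a color temperature); the applyLighting hook is a placeholder I made up for whatever renderer you're driving, not part of ARKit. Anything richer than this, like full environment maps with correct occlusion and specular reflections, is where it gets hard:

    import ARKit

    final class LightingObserver: NSObject, ARSessionDelegate {
        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            // ARKit's coarse, per-frame estimate of real-world lighting.
            guard let estimate = frame.lightEstimate else { return }

            // ambientIntensity is in lumens (~1000 = neutral);
            // ambientColorTemperature is in kelvin (~6500 = neutral).
            let intensity = estimate.ambientIntensity
            let kelvin = estimate.ambientColorTemperature

            // Placeholder: feed the estimate to your own renderer.
            applyLighting(intensity: intensity, colorTemperature: kelvin)
        }

        private func applyLighting(intensity: CGFloat, colorTemperature: CGFloat) {
            // Adjust virtual lights so rendered objects roughly match
            // the camera feed's brightness and white balance.
        }
    }

Matching brightness and white balance is the easy part; the glitches people notice come from everything this estimate doesn't capture.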
I think AR needs to work pretty robustly to gain acceptance. There's no point in graphics that are more realistic in tightly constrained scenarios if they look glitchy and wrong in real-world use. So I expect most apps will opt for slightly less realism in exchange for fewer glitches and less flickering.