This article is written from a purely graphical point of view, but I've read elsewhere that part of the advantage of raytracing is apparent from a simulation and animation point of view.
As I understand it, in order to -do- raytracing at all, the entire physical scene has to be resident in memory. That means the processor "knows" the complete, distinct geometry of the scene it is dealing with. So if you want a character to walk, instead of making an animation sequence of the character walking on a generic flat surface and then constraining it to a crude approximation of the ground (resulting in a foot clipping through a hill, the horrible handling of stairs in games, etc.), the character can simply be simulated, since it can interact with the full-resolution, real surfaces of its environment.
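To make that concrete, here's a minimal sketch (my own illustration, not from any particular engine) of what "interacting with the real surfaces" could look like: instead of pinning the foot to an assumed flat ground plane, you cast a ray straight down and intersect it with the actual terrain triangles. The function and triangle names are hypothetical; the intersection routine is the standard Möller–Trumbore test.

```python
def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore ray/triangle intersection.
    Returns the distance t along the ray, or None if there is no hit."""
    def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])

    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, edge2)
    a = dot(edge1, h)
    if abs(a) < eps:            # ray parallel to triangle plane
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:      # outside the triangle (barycentric u)
        return None
    q = cross(s, edge1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:  # outside the triangle (barycentric v)
        return None
    t = f * dot(edge2, q)
    return t if t > eps else None

def snap_foot_to_ground(foot_xz, terrain_tris, cast_height=100.0):
    """Cast a ray straight down from above the foot and return the exact
    ground height under it, or None if the ray misses the terrain."""
    origin = (foot_xz[0], cast_height, foot_xz[1])
    down = (0.0, -1.0, 0.0)
    best_t = None
    for v0, v1, v2 in terrain_tris:
        t = ray_triangle(origin, down, v0, v1, v2)
        if t is not None and (best_t is None or t < best_t):
            best_t = t  # keep the nearest (highest) surface
    return None if best_t is None else cast_height - best_t

# A single sloped "hill" triangle: height falls off as y = 1 - x/2 - z/2.
hill = [((0.0, 1.0, 0.0), (2.0, 0.0, 0.0), (0.0, 0.0, 2.0))]
print(snap_foot_to_ground((0.5, 0.5), hill))  # → 0.5, the true slope height
```

The same query works against stairs, rocks, or any other mesh the renderer already holds, which is the point: the animation system gets the scene at full fidelity for free, rather than against a simplified collision proxy.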
If this is the case, then part of the advantage of raytracing lies outside the areas of reflection and refraction, and instead in the -other- advantages that the hardware architecture necessary for raytracing inherently provides.
Not that you couldn't make a system in which the CPU renders the full scene for simulation purposes, and then the GPU renders it with rasterization for display purposes, but that's a duplication of effort.
Or I could completely be missing something...