I've never understood why people even use the term 'realtime'
for raytracing. If a particular hardware solution achieves a framerate
X for a given scene, then double the complexity of that scene
and, bam, the framerate collapses. Scenes that benefit from
raytracing are all too often of a nature that makes the scene
complexity much worse very easily, e.g. extra reflection levels and
transparency, or adding another sphere or two. See my
SGI Alias benchmark
for an example of such a scene.
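The blow-up from extra reflection levels is easy to see with a bit of arithmetic. A hypothetical worst-case sketch (my numbers, not from any benchmark): if every hit spawns both a reflection ray and a refraction/transparency ray, the ray tree branches by 2 at each level, so per-pixel work grows geometrically with the recursion depth:

```python
# Hypothetical worst-case arithmetic: every hit spawns a reflection ray AND a
# refraction/transparency ray, so the ray tree branches by 2 per bounce.

def rays_per_pixel(depth, branching=2):
    """Worst-case rays traced for one pixel at the given recursion depth."""
    # Geometric series: 1 + b + b^2 + ... + b^depth
    return sum(branching ** d for d in range(depth + 1))

for depth in (1, 2, 4, 8):
    print(depth, rays_per_pixel(depth))
# Depth 1 costs 3 rays per pixel; depth 8 costs 511 -- bump one scene
# setting and the workload grows by two orders of magnitude.
```

Real scenes prune many of these branches, but the point stands: one slider in the scene description can multiply the workload, which is why a fixed 'realtime' label makes no sense.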
Increase the resolution, demand a greater level of reflections,
or push any other such parameter, and whatever was 'realtime' before
is reduced to a crawl. My
C-Ray benchmark is another good example (check the sphfract model). The only way
to bring the framerate back up to interactive levels is to add
more hardware, which is
how SGI did it, e.g. the Boeing 777
interactive Manta system: 350 million polygons in the model
(that's 2 orders of magnitude more complex than a typical PC
game today), 'real-time' ray-traced/rendered with a 128-CPU SGI
Prism (112 x Itanium2s used for the raytracing), working on a
66GB dataset (the system has 256GB RAM).
Also, raytracing is good at rendering certain types of surfaces
and scenes, but not others. Remember the overall look of the
original PS2 Mercenaries game? Raytracing would be bad for that
sort of thing. As I understand it, raycasting (if I recall the
name correctly) is a better method, offering a more general approach.
At the moment, though, I don't see how they can resolve the all-too-easy
potential for exponentially more complex processing that
raytracing involves without just adding more hardware. The
raster method offers so many ways of controlling the graphics load,
including methods that have not yet been added to consumer cards,
such as Dynamic Video Resizing (used in SGI's IR hardware).
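That kind of resolution-based load control works because raster cost scales roughly with pixel count, giving a direct, predictable knob that a recursive ray tree doesn't offer. A minimal sketch of such a feedback loop (hypothetical numbers and function, not SGI's actual IR implementation):

```python
# Hypothetical dynamic-resolution feedback loop (NOT SGI's actual IR design):
# raster work scales roughly with pixel count, so if the last frame ran over
# budget, shrink the render target by the overshoot ratio.

def next_resolution(width, height, frame_ms, budget_ms, min_scale=0.5):
    """Pick the render-target size for the next frame from the last frame's cost."""
    # Fragment work ~ width * height, so scale the AREA by budget/actual,
    # i.e. each linear dimension by sqrt(budget/actual).
    scale = (budget_ms / frame_ms) ** 0.5
    scale = max(min_scale, min(1.0, scale))  # clamp: never upscale past native
    return int(width * scale), int(height * scale)

# A 33 ms frame against a 16.6 ms budget roughly halves the pixel count.
print(next_resolution(1920, 1080, frame_ms=33.0, budget_ms=16.6))
```

Raytracing has no equivalent single knob: dropping resolution helps, but reflection depth, transparency, and scene content can still explode the per-pixel cost underneath you.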
Modelling the world with polygons is fast for good reasons. I would
be more impressed with the development of an entirely new API
and hardware for natively modelling volumetric things without using
polygons (thus ditching all the tradeoffs polygons involve),
such as fire, smoke, fog, water, mud, lava, sand, etc.
As mentioned above, SGI did this with IR4 graphics and the *Ray software
from the Univ. of Utah (demos showed effects being done at a
performance level equivalent to more than a thousand GF cards of
the day), but it went nowhere after that AFAIK. Here's
another reference
with more details, and the original
Utah press release page with
some example images from the real-time demos. For the lazy, here
are direct links to the screenshots:
http://www.sci.utah.edu/stories/2002/images/star-ray_livingroom.jpg
http://www.sci.utah.edu/stories/2002/images/star-ray_museum.jpg
http://www.sci.utah.edu/stories/2002/images/star-ray_science.jpg
http://www.sci.utah.edu/stories/2002/images/star-ray_galaxy.jpg
http://www.sci.utah.edu/stories/2002/images/star-ray_atlantis.jpg
and remember this was all done in 2002. The *Ray software was
shown to scale linearly, even on an Origin system with 1024 CPUs.
If they can come up with a decent product that has a sensible,
useful lifespan, great, but I'll believe it when I see it.
Now if we had quantum computing all sorted, that would be a
whole different ballgame...
Ian.