Various companies are gambling that future CPUs with large
numbers of cores may be able to rival GPUs (Intel is said to
be working on designs with anywhere from 80 to 256 cores),
though the key interest lies more in exploiting systems that
have multiple CPUs, especially large-scale HPC setups, eg.
SGI's new VUE system:
http://www.sgi.com/vue/
and their concept HPC system that uses thousands of Atom cores:
http://www.sgi.com/company_info/newsroom/press_releases/2008/november/project_kelvin.html
VUE is derived from their older VizServer product, which allowed
any remote device to exploit the processing power of a large
supercomputer as if one were using it locally, by means of pixel
data compression and other methods. I was in charge of the first
test setup using VizServer in the UK (16-CPU 5-pipe 3-rack
Onyx2 IR2E); it worked quite well, but from what I've read SGI
has vastly improved how the idea works for VUE, with only minor
amounts of code from VizServer carried forward.
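Just to illustrate the general principle (this is purely my own sketch,
not SGI's code; the real VizServer/VUE protocols and compressors were far
more sophisticated than plain zlib over a socket): the server renders a
frame, compresses the pixel data, and streams it to a thin client, which
decompresses it and displays it.

import socket
import struct
import zlib

WIDTH, HEIGHT = 640, 480

def render_frame(frame_no):
    # Stand-in for the real renderer on the big machine: just
    # produce a dummy buffer of RGB bytes for this frame.
    return bytes((frame_no + i) % 256 for i in range(WIDTH * HEIGHT * 3))

def serve(port=5900):
    # Server side: render, compress the framebuffer, stream it out.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", port))
    srv.listen(1)
    conn, _ = srv.accept()
    frame_no = 0
    while True:
        packed = zlib.compress(render_frame(frame_no))
        # Length-prefix each compressed frame so the client knows
        # how many bytes belong to it.
        conn.sendall(struct.pack("!I", len(packed)) + packed)
        frame_no += 1

def receive_frames(host, port=5900):
    # Client side: read length-prefixed frames, decompress, display.
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect((host, port))
    while True:
        header = cli.recv(4)
        if len(header) < 4:
            break
        size = struct.unpack("!I", header)[0]
        data = b""
        while len(data) < size:
            data += cli.recv(size - len(data))
        pixels = zlib.decompress(data)
        # A real client would hand 'pixels' to the display here.
        print("got frame,", len(pixels), "bytes of pixel data")

The point is simply that the heavy rendering happens on the remote
machine while the client only ever handles compressed pixels, which is
why a modest device can drive a big visualisation system.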
Personally, I'm not convinced by the using-main-CPUs-for-gfx
idea. Somehow I get the feeling there's a peculiar assumption
that GPU designs are going to stand still.
What NVIDIA/AMD have so far never done is release a genuinely
scalable gfx solution. I don't mean SLI here, but rather
a single board with multiple sockets, sold with (say) 1 or 2
sockets filled & one then adds further modules as one wishes to
scale performance. Ditto for the RAM: multiple sockets, buy
a card with 1 filled to give 2GB (assuming modern parts, etc.)
and 3 or 4 more slots left empty for future expansion. Just example
numbers of course.
SGI's gfx products worked along these lines, all the way from
the old 1990 Elan system to the final InfiniteReality4 which had
up to 10GB VRAM and 1GB TRAM. Strangely though, the GE board
had space for 8 GE processing ASICs, but they never released a
version with more than 4. I was told this was because it was
not expected that performance would scale all that well beyond
4 GEs (which never made sense to me given one could scale a
system up to 16 x IR4 pipes in parallel anyway), but with
hindsight (given what NVIDIA did with the technology for the
original GF) this was wrong. NVIDIA proved that multiple
processing units could easily be scaled, as modern GPUs now
show all too well. SLI scales things further, but it's not a
very direct way of spreading the processing load.
On the other hand, maybe the modern chips are just too complex
and/or fragile to package in such a way as to make it feasible
for one to buy a card with 1 or 2 fitted and then add more chips
later. Still, it would be cool, and it would not be like
SLI, ie. no driver issues, and the performance increase would
apply to all operations (unlike atm, where certain games behave
badly with CF/SLI depending on driver issues, while various
professional tasks don't scale either). Heh, maybe PCs just
aren't big enough, but surely in the professional/high-end
market there is scope for such a design possibility.
Anyway, I just figure the whole idea of rendering with main
CPUs comes from a viewpoint of assuming that somehow GPU design
is going to reach a plateau - unlikely IMO. In recent years
we've had steady improvements in speed and new general features
with respect to shaders, but nothing revolutionary; still, who's to
say there won't be (for example) a new idea for rendering
genuinely volumetric phenomena that does not use polygons and
is thus much faster, more accurate, etc.? (Actually there already
is, called *Ray, as demo'd by SGI, but it was a software system
that made use of IR4 in a special way and was never released as a
commercial product; there's a rough sketch of the general idea
below.) New algorithms, breakthroughs in electronics
(eg. applying spintronics and memristors to GPU design, the
effects of shrinking down to 22nm) could all give a sudden huge
leap in hardware gfx processing power. With the emergence of
quantum technology, anything is possible.
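For what it's worth, here's a rough ray-marching sketch of what I mean by
rendering volumetric phenomena directly, with no polygons at all: just
step along each eye ray through a density field and accumulate opacity.
It's purely illustrative and says nothing about how SGI's *Ray actually
worked (that was never published).

import math

def density(x, y, z):
    # Simple procedural volume: density falls off towards the edge
    # of a unit sphere centred at the origin.
    r = math.sqrt(x*x + y*y + z*z)
    return max(0.0, 1.0 - r)

def march_ray(ox, oy, oz, dx, dy, dz, steps=64, step_len=0.05):
    # Integrate emission/absorption along the ray, front to back.
    transmittance = 1.0
    brightness = 0.0
    for i in range(steps):
        t = i * step_len
        d = density(ox + dx*t, oy + dy*t, oz + dz*t)
        absorb = d * step_len
        brightness += transmittance * absorb   # volume "glows" where dense
        transmittance *= (1.0 - absorb)
        if transmittance < 0.01:               # ray is effectively opaque
            break
    return brightness

def render(width=40, height=20):
    # Orthographic camera looking down -z; print an ASCII image.
    for j in range(height):
        row = ""
        for i in range(width):
            x = (i / (width - 1)) * 2.0 - 1.0
            y = (j / (height - 1)) * 2.0 - 1.0
            b = march_ray(x, y, 1.5, 0.0, 0.0, -1.0)
            row += " .:-=+*#%@"[min(9, int(b * 9))]
        print(row)

if __name__ == "__main__":
    render()

No triangles anywhere, just sampling a density function, which is why
this sort of approach suits clouds, smoke, fog and the like.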
A cynic would say that what MS and SGI are doing is more a way of
boosting main CPU sales, which these days, let's face it, have
kinda run up against a wall for the general home/office user,
ie. performance beyond a reasonable dual-core is blatantly
unnecessary. SGI and Intel are apparently working closely
together on these ideas; I expect MS is working alongside them
as well.
Ian.