But isn't that a huge waste of cores? Surely using an £80 graphics card as a PPU is a much more sensible option, or am I the only one thinking this?
That's not much of an argument. Most available software doesn't use the CPU as intended; instead of actually calculating, the processor is juggling the stack like mad. So right now the GPU may seem like the better option, but if Intel really intends to go multicore like crazy, I bet $/core is going way down, and in two years four cores will be cheaper than even an entry-level GPU. Then again, two years from now there might be GPUs integrated into the CPU, or at least available for CPU sockets...
AMD open socket plans:
http://www.simmtester.com/page/news/shownews.asp?num=9596
Open socket designs are good for bringing innovation to the market; good for consumers. GPU, PPU, FPGA, dual proc, Intel or VIA CPUs running on an "AMD" socket: these are all possibilities. An FPGA "co-processor" is the one that excites me the most. When an application is launched it could program the FPGA, and you then have ~500 MHz of low-overhead specialized processing power to do exactly what you want, no matter how much load you put on the rest of your system (a rough sketch of that launch-time flow is below). I am currently testing ASIC code on FPGAs running at a mere 53 MHz, and they can respond to and route 2 GHz traffic on multiple channels simultaneously within a few nanoseconds. IMHO, specialized logic chips are the only way computers are going to get significantly faster from here on out. FPGAs capable of over 500 MHz are currently available. Specialized logic is extremely powerful, as you can see from a ~500 MHz GPU having significantly more processing power than a dual-core ~3 GHz CPU.
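Purely as illustration, here's a minimal sketch of what that launch-time flow could look like, assuming a hypothetical Linux-style configuration device and register window. The device paths, register layout and bitstream name are all made up for the example; real vendor toolchains each have their own interface.

```c
/* Hypothetical sketch: an application claiming an FPGA "co-processor"
 * at launch. Device paths and register offsets are invented for
 * illustration only.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define FPGA_CONFIG_DEV  "/dev/fpga_config"   /* assumed bitstream loader */
#define FPGA_REG_DEV     "/dev/fpga_regs"     /* assumed register window  */
#define REG_WINDOW_SIZE  4096

static int load_bitstream(const char *path)
{
    /* Stream the application's own bitstream into the config device. */
    FILE *src = fopen(path, "rb");
    FILE *dst = fopen(FPGA_CONFIG_DEV, "wb");
    char buf[8192];
    size_t n;

    if (!src || !dst)
        return -1;
    while ((n = fread(buf, 1, sizeof buf, src)) > 0)
        fwrite(buf, 1, n, dst);
    fclose(src);
    fclose(dst);
    return 0;
}

int main(void)
{
    /* 1. Program the fabric with logic specialized for this application. */
    if (load_bitstream("myapp_accel.bit") != 0) {
        perror("bitstream load");
        return 1;
    }

    /* 2. Map the accelerator's registers so the CPU can hand it work. */
    int fd = open(FPGA_REG_DEV, O_RDWR);
    volatile unsigned *regs = mmap(NULL, REG_WINDOW_SIZE,
                                   PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
    if (fd < 0 || regs == MAP_FAILED) {
        perror("register window");
        return 1;
    }

    /* 3. Kick off a job and poll for completion; the heavy lifting now
     *    runs in dedicated logic no matter how loaded the CPU is.      */
    regs[0] = 0x1;                 /* hypothetical "start" register    */
    while ((regs[1] & 0x1) == 0)   /* hypothetical "done" status bit   */
        ;
    printf("result: %u\n", regs[2]);

    munmap((void *)regs, REG_WINDOW_SIZE);
    close(fd);
    return 0;
}
```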
I don't think it's really feasible for high-end gaming GFX though, as those depend heavily on having tons of very fast RAM very close to the GPU with dedicated bandwidth, and that might not work well for a "socket" GFX solution if it relies on a single HyperTransport link and shares memory bandwidth with the rest of the system (though we're still talking much faster than current GFX cards that use system RAM). For mid-range GFX and many other applications it could work extremely well.
As far as it being a "waste" of cores: dual-cores are dropping in price and rapidly becoming the norm, and using a core the system already has instead of buying additional hardware is certainly a valid strategy. CPUs are very inefficient, though, because they're designed to be able to do anything and need a lot of software overhead to accomplish most tasks, so they may simply not be fast enough for some jobs no matter how many extra cores you have (real-time GFX being an obvious example of something you wouldn't want to do with an "extra" core).
Actually Flasher, nVIDIA GPUs ARE too slow for GPGPU calculations.
Mike Houston: Stanford University Folding@Home Project
nVIDIA GPUs have too many limitations. First of all, they're 256-bit internally (vs 512-bit for ATi X18/19K VPUs), which normally wouldn't be a problem, but nVIDIA 7x00 GPUs also have a 64K limit on shader length. So many passes have to be made to get the work Folding@Home (or physics) requires done, and the more passes, the slower it is (see the sketch below).
ATi X18/19K VPUs support unlimited shader lengths, so they can do all the work in a single pass.
Another aspect is the dedicated branching unit found on ATi X18/19K VPUs, which GPGPU applications use extensively. nVIDIA's 7x00 has no such feature, so branching means yet more passes.
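For illustration only, here's a minimal sketch of why a per-pass limit hurts, assuming the usual render-to-texture "ping-pong" trick GPGPU apps use. The helper names and numbers are hypothetical, not any real driver API; the point is just that a long kernel gets chopped into passes, each with its own setup and full round-trip through video memory.

```c
/* Hypothetical sketch: splitting one long GPGPU kernel into multiple
 * render passes because of a per-pass instruction limit. run_pass()
 * stands in for the real render-to-texture plumbing.
 */
#include <stdio.h>

#define MAX_SHADER_INSTRUCTIONS 65536L   /* the 64K per-pass limit claimed above */

typedef int texture_t;

static void run_pass(int chunk, texture_t in, texture_t out)
{
    /* Bind `in` as the source texture, `out` as the render target, draw
     * a full-screen quad running chunk `chunk` of the kernel. Every pass
     * costs driver setup plus a full write and re-read of video memory. */
    printf("pass %d: tex%d -> tex%d\n", chunk, in, out);
}

static void run_kernel(long kernel_instructions)
{
    texture_t ping = 0, pong = 1;
    long passes = (kernel_instructions + MAX_SHADER_INSTRUCTIONS - 1)
                  / MAX_SHADER_INSTRUCTIONS;

    for (long i = 0; i < passes; i++) {
        run_pass((int)i, ping, pong);
        /* Output of this pass becomes the input of the next. */
        texture_t tmp = ping; ping = pong; pong = tmp;
    }
    printf("%ld-instruction kernel needed %ld pass(es)\n",
           kernel_instructions, passes);
}

int main(void)
{
    run_kernel(300000);   /* long folding/physics kernel: 5 passes */
    run_kernel(60000);    /* fits under the limit: a single pass   */
    return 0;
}
```

A card with no shader-length limit would run either workload as one pass, which is exactly the advantage being claimed for the X18/19K parts above.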
And last but not least, pure shader power: the ATi X19K series currently has more shader power than ANY nVIDIA GPU (by 2x or more).
All this makes nVIDIA 7x00 too slow for hardcore GPGPU applications.
Read up on it.
And BTW, an ATi X19K dedicated to the task > AGEIA PhysX in GPGPU applications as well.