DXRick :
The next big jump is for ray tracing to replace vertex and pixel shaders. It cannot be done on the GPU, but will require major advancements in CPU power.
Doesn't it? The way Intel is trying to do it is by muscling their way through with a many-core architecture. We aren't talking about an optimized solution, we're talking about pure muscle (or loads of cores, which, btw, are quite poor on their own).
Note: do you really think they can make an 80-core CPU flawlessly? Hell no. They will ship several versions (40, 50, 70, 80 cores) with disabled (read: damaged) cores. And even so, it will be hard to produce that many cores flawlessly. I think it's nothing short of a utopia.
When NetBurst came out, they were talking about a 10 GHz CPU. NetBurst proved to be a bad architecture.
Now they say Nehalem's architecture will scale to as many as 80 cores. I think history repeats itself.
First off, I have yet to see a CPU do a decent job on the graphics side. The links I provided before (from ATI and NVIDIA) prove that it is easy to port most apps, so instead of flooding the CPU, they use the GPU for those other tasks.
I don't consider x86 and x64 dead yet, because honestly, there is already too much software written for them. But I think CUDA and ATI Stream may lead to a breakthrough in software development, which will happen first at the higher end (workstations/servers). They will adopt it first due to the sheer performance leap.
ATI/AMD will survive because it has a whole platform (CPU/GPU/northbridge/southbridge).
NVIDIA will survive because you just need to slap in a cheap x86 or x64 CPU to use their monster GPUs.
The question now is not about replacing vertex or pixel shaders for gaming, mate. After spending a good part of the afternoon reading the CUDA manual and reference papers, I believe a really big can of whoop-ass is already on its way.
It's headed at Intel from the GPU makers. We are talking about relieving the CPU of some of the functions it is doing atm.
So CPU performance will matter even less.
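To give an idea of what "relieving the CPU" looks like in practice, here's a minimal CUDA sketch (hypothetical kernel and variable names, assumes the CUDA toolkit is installed): a loop that would normally burn CPU cycles is offloaded to the GPU, with one GPU thread per element.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Work the CPU would otherwise do in a loop: y[i] = a*x[i] + y[i].
// Each GPU thread handles exactly one element.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers, filled by the CPU.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device buffers; copy the data over to the GPU.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Offload: launch enough 256-thread blocks to cover n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaDeviceSynchronize();

    // Copy the result back; the CPU never touched the math itself.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);  // 2*1 + 2 = 4

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

The CPU only sets up the buffers and launches the kernel; the million multiply-adds run on the GPU's cores, which is exactly the kind of function being taken away from Intel's side of the board.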