[citation][nom]dragonsqrrl[/nom]Actually in the context of what he's advocating Nvidia's latest high performance architecture is very power efficient, especially when comparing performance per watt to conventional processors and GPUs. I sure hope you're not basing your criticism about the power efficiency of the Fermi architecture on gaming benchmarks, especially when this article focuses entirely on parallel computing, because well... that would sound a little ignorant on your part. The people waiting for huge breakthroughs to back up the legitimacy and potential of parallel computing may not be able to appreciate the benefits until it's already upon them. Although Fermi may not be "real" or impressive enough to trigger understanding for some people, it's a genuine step in the right direction, and represents as big of a breakthrough as this area of computing has ever seen. The link below shows just a few examples of Fermi's performance potential in this area, parallel/GPGPU computing, an area that is still young and open to massive optimizations. Have a look... http://anandtech.com/show/2977/nvi [...] he-wait-/6[/citation]
The statement you quoted above was based on my experience as a developer.
All the talk NVIDIA is making around CUDA, Fermi, and parallel programming sounds very promising, but it's still very academic and theoretical. Meanwhile they're bashing multicore serial processing, which is still evolving; man, the first Core 2 Duo was released only four years ago, in 2006!
This discussion is much like the claim that Adobe Flash is obsolete and must be replaced: of course that will happen eventually, but not in the next few years.