OK, this is a GPU/CPU fan war, and it really isn't that simple; NVidia is both really right and really wrong.
GPU people, PCs are not consoles. People run other applications at the same time, and with high-end MMOs sometimes even multiple copies of the same game. That kind of multitasking is where the CPU IS IMPORTANT.
Up until now, games have been designed around one or two threads on the CPU for basic AI and processing, while the rest, thanks to the brilliance of OpenGL and DirectX, gets shoved through a multi-pipe GPU with amazing processing power.
The trouble with NVidia's statement is that it leaves no room for the future, where CPUs, and multi-core CPUs especially, can start taking some of the load off the GPU, or at least give the game more headroom when the user is running Outlook, burning a DVD, and listening to music at the same time.
The next generation of games is moving toward heavier CPU reliance, and with that you get better AI and the other things that tend to get left behind when developers focus on how many pixels they can throw on the screen at once and how 'pretty' it is.
Additionally, GPU people, the leap in video card performance is not as 'massive' as NVidia wants you to believe.
Go back to an NVidia GeForce 7900GTX; the card is five years old, yet on many tests it will outperform an 8600, a GeForce 9600, or even an 8800 on older games.
NVidia would be more correct if their examples used 'higher end' cards, and the GeForce 250 is not a high-end card compared to the 260 or 280, etc. (And again, on some tests of games from the 3DMark 2001 or 2003 era, a GeForce 7900GTX will outperform it at higher resolutions.)
If you really want to see how 'slow' GPU advances have been over the last five years, look at the notebook market. Go to a site like notebookcheck.com and look at the top-performing mobile GPUs: the 7950M is faster than most of the 8xxx- and 9xxx-series cards in every test, and only the 'exclusive' high-end 9800M or 8800M cards can compete with a four-year-old video card. (Sadly, the GeForce 7950M is also faster than most of the desktop GPUs NVidia is still producing, including a desktop 8600, etc.)
----
Now for the CPU people...
If someone wants a 'quick' bang for their buck, a $600 investment in an i7 is NOT GOING TO BE WORTH IT, at least not for a few years, until multi-core code in games is better and CPUs carry more of the burden.
Literally, if you have a P4 3.4GHz with H/T, moving to a newer 'upper range' GPU is the better investment. Spending $150 on a GeForce 260 would be a 5x improvement over the GeForce 6600 or 6800 your P4 probably came with.
This also buys you time until the i7s come down in price; then you can leap to one when you can afford it and get the best of both worlds.
CPUs are a weak spot, but game designers don't design around CPUs; they tend to write their games with one or two threads (which H/T can usually handle), and until they put better-threaded code in games, the i7 is not a great choice.
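To make the one-or-two-thread point concrete, here is a minimal, hypothetical Python sketch (none of these names come from any real engine) of the difference between the classic serial AI tick and the same tick fanned out over a worker pool. It only shows the structure; in a real engine the payoff depends on the per-entity AI work being heavy enough to amortize the threading overhead.

```python
# Hypothetical illustration: serial vs. pooled per-entity AI updates.
# Entity and ai_step are made up for this sketch, not a real engine API.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Entity:
    x: float      # current position
    goal: float   # where this entity's AI wants it to go

def ai_step(e: Entity) -> Entity:
    # Toy "AI": steer the entity toward its goal, at most 1 unit per tick.
    step = max(-1.0, min(1.0, e.goal - e.x))
    return Entity(e.x + step, e.goal)

def tick_serial(entities):
    # The one-thread design most games of this era actually ship.
    return [ai_step(e) for e in entities]

def tick_threaded(entities, workers=4):
    # The multi-core design: entities are updated by a pool of workers.
    # Each update is independent, so no locking is needed here.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(ai_step, entities))

entities = [Entity(float(i), 10.0) for i in range(8)]
# Both designs must produce identical results; only the scheduling differs.
assert tick_serial(entities) == tick_threaded(entities)
```

The point of the sketch is that the hard part isn't the pool, it's making each update independent; games built around one big mutable world state can't just drop this in, which is exactly why multi-core CPUs sit underused.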
However, moving to a Core 2 Duo is not a good choice either. If you look at single-thread or even dual-thread performance comparisons, a P4 3.4GHz is only about 20% slower than the upper-end Core 2 Duos, and that 20% increase costs a lot of money you could instead put into a good GPU (with a PSU to power it) while you wait for i7 prices to drop, or for a deal on a Core 2 Duo.
-----------
NVidia is right that the GPU is the better bang for the buck for 'single application' gaming right now.
NVidia is wrong that this statement has long-standing truth or is universal to all users, especially the uber-geeks already running several copies of LoTRO on screen at once, or playing a game while encoding a DVD in the background. The GPU is only going to help a bit with the multi-game example, and even that depends more on the OS architecture: under Vista or Win7 this is easy to do thanks to GPU RAM virtualization, but under XP, OS X, or Linux, running multiple games with a compositor fronting the actual drawing just doesn't work well. And Vista needs some CPU power and threading to handle this, in addition to its AGP/PCI GPU memory tricks.
So for now, yes, OK, NVidia, we get you: upgrade the video card and wait for CPUs to drop in price, unless the GPU requires a new mainboard and a new PSU to make it work.
However, the future is just around the corner, and for gaming that needs more advanced AI and CPU processing to make the 'pretty' things on screen come alive and feel real, a CPU is pretty damn important. (And no, GPU PhysX processing is not a complete answer either; right now many companies aren't even using DirectX 10 or other 'standards', and with NVidia locking PhysX support to their own cards, game developers are not going to spend a lot of time or money making it as grand as it could be.)
- NVidia would be smarter to work inside the DirectX 10 or DirectX 11 models that support non-visual and physics calculations than to keep running with their own approach and then try to shove it into the DirectX standard after gamers keep giving them the finger.
(PS: Tom's, do you think you could fix your site so that IE8 works properly? If I fake the IE8 user-agent header so you think it's Firefox, it works just fine. Just saying...)