Ok, I know there are lots of anti-Nvidia peeps on here, but please stand back and look at the bigger picture. As I've posted several times, true-3D is the next step in gaming / virtual environments. No matter how bad a$$ your X-fire setup is, you're still limited to viewing it through a 2D window. Current setups only attempt to render a 3D environment onto a 2D canvas. You're used to it, so you expect it and believe it's better. But it's missing depth; no matter how ~good~ the textures, animation, lighting, shadows, and designers are, they're still limited by the 2D canvas that everything gets stuck on.
Your brain was designed to see with depth, aka depth perception. Nothing will seem real unless it has depth to it, and it is impossible to create depth on a 2D surface. The most that can be done is to create a picture of something with depth, not the depth itself. So we ~need~ true-3D technology to proceed; it needs to be developed. Current technology doesn't allow us to beam two separate pictures into your brain. The best we can do is render two separate images, route them to their respective eyes, and hope your brain puts them back together properly. This poses a few problems, most of them on the end-user side rather than in the program itself.
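To make the two-image idea concrete, here's a minimal sketch (my own toy numbers and function names, not any real engine's API) of how the same point projects slightly differently for each eye; the horizontal gap between the two projections is the disparity your brain fuses back into depth:

```cpp
// Toy sketch, not any real engine's API: project the same scene point once
// per eye, with the two cameras offset horizontally by half the eye spacing.
// The horizontal gap (disparity) between the two projections is what the
// brain fuses back into a sense of depth.
#include <cstdio>

struct Vec3 { float x, y, z; };

// Simple perspective projection onto a screen plane 'screenDist' in front of
// the eye; returns the on-screen x coordinate (positive z = into the scene).
float projectX(Vec3 p, float eyeOffsetX, float screenDist) {
    float camX = p.x - eyeOffsetX;   // shift into this eye's camera space
    return camX * screenDist / p.z;  // perspective divide
}

int main() {
    const float eyeSeparation = 0.065f; // ~65 mm, a typical adult value
    const float screenDist    = 0.7f;   // viewer roughly 0.7 m from the screen

    Vec3 points[] = { { 0.0f, 0.0f, 1.0f },    // 1 m into the scene
                      { 0.0f, 0.0f, 10.0f } }; // 10 m into the scene

    for (Vec3 p : points) {
        float xLeft  = projectX(p, -eyeSeparation * 0.5f, screenDist);
        float xRight = projectX(p, +eyeSeparation * 0.5f, screenDist);
        // Nearby points produce a much bigger left/right disparity than
        // distant ones, which is exactly the cue the brain reads as depth.
        std::printf("z = %5.1f m  disparity = %+f m\n", p.z, xRight - xLeft);
    }
    return 0;
}
```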
First I will describe the program side's single greatest flaw: almost no game engine implements negative Z-buffering. The Z-buffer holds the Z-axis value of all vertices in the virtual world. A Z value of 0 indicates the vertex is at absolute screen level; higher values are objects farther away. Render engines won't bother rendering Z values that are higher than a lower-Z object in front of them; basically, they don't render objects that are behind other objects because you wouldn't see them anyway. For the same reason they don't render objects with negative Z values, because those would normally be behind the "camera / screen" and not visible. But with a true-3D setup, the viewer ~is~ behind the camera, not at the camera. The space between the camera (screen) and the viewer is represented as negative Z space that can be rendered to. This is how you make something "pop out" of a screen: by giving it a negative Z value. So if the program isn't bothering to draw negative-Z objects, the user will never really experience the full 3D effect. Instead objects appear to "pop into" the screen; the screen becomes a window into a 3D world with depth going away from you, but nothing ever seems to enter the user's immediate visual space. This is where application programmers need to get busy, and I fully expect this to happen in the next couple of years as nVidia pushes their 3D tech more.
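A rough sketch of why that matters (my own numbers and names, not any engine's actual code): with the standard stereo parallax relation, anything sitting between the viewer and the convergence plane (the screen) gets negative parallax and pops out, while anything at or beyond it only recedes. An engine that never places geometry in that in-front region can't produce the pop-out half of the effect:

```cpp
// Rough sketch of the idea above; the names and numbers are mine, not any
// engine's actual code.  Screen parallax depends on where a point sits
// relative to the convergence plane (the screen): behind it gives positive
// parallax (into the screen), in front of it gives negative parallax, which
// is what makes an object appear to pop out toward the viewer.
#include <cstdio>

// Classic stereo parallax relation: parallax = e * (1 - c / z)
//   e = eye separation, c = convergence (screen) distance,
//   z = distance of the object from the viewer.
float screenParallax(float eyeSep, float convergeDist, float objectDist) {
    return eyeSep * (1.0f - convergeDist / objectDist);
}

int main() {
    const float eyeSep       = 0.065f; // ~65 mm
    const float convergeDist = 2.0f;   // screen plane 2 m away (arbitrary)

    float depths[] = { 1.0f, 2.0f, 4.0f, 20.0f };
    for (float z : depths) {
        float p = screenParallax(eyeSep, convergeDist, z);
        std::printf("object at %5.1f m -> parallax %+f m (%s)\n", z, p,
                    p < 0.0f ? "pops out of the screen" :
                    p > 0.0f ? "recedes into the screen" : "at screen depth");
    }
    return 0;
}
```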
The next problem is actually on the user side. Everyone has a different spacing between their two eyeballs, and thus different alignment. Your brain attunes itself to the alignment between your eyeballs when you're an infant. This is how you walk and catch things thrown at you without having to think about it (your brain does it all at a deeper, subconscious level). And since it's not normal for someone's eyeball distance to change during their life, your brain never expects to recalibrate its depth perception. Now here comes 3D gaming, where each eyeball is receiving a different picture. The brain does its thing and puts those pictures together to form depth. Except for one little problem: the eyeball distance / alignment is different than what you've used your entire life. So your brain must relearn / attune itself to the new distance / depth field. This causes headaches for a while until your brain gets itself in order. It's the equivalent of wiring somebody else's eyeballs into your brain instead of your own. And frequently changing your viewpoint (going from 2D to 3D and back again) causes your brain to have to constantly shift and reconfigure itself; it's not kind to someone not used to it. The only way to solve this would be to find a way to create a "visual profile" of each individual gamer, then have the program (possibly the driver) load that profile and use it to align the virtual eyeballs, something like the sketch below. This won't be possible until games are programmed with true-3D in mind from the beginning and not as an afterthought.
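Purely as an illustration of what such a profile might look like (the names here are hypothetical, not any real driver or game API): a per-user record of eye spacing and preferred convergence distance that the game or driver uses to place the two virtual cameras, instead of one hard-coded separation for everybody:

```cpp
// Hypothetical "visual profile", just to illustrate the idea; none of these
// names reflect a real driver or game API.  The profile stores the user's
// measured eye spacing and preferred convergence distance, and the game (or
// driver) derives the virtual camera placement from it instead of assuming
// one fixed separation for everybody.
#include <cstdio>

struct StereoProfile {
    float interocularDist;  // metres, measured per user (roughly 0.060-0.070)
    float convergenceDist;  // metres, where the screen plane should sit
};

struct StereoCameras {
    float leftEyeOffsetX;   // horizontal offset of the left virtual camera
    float rightEyeOffsetX;  // horizontal offset of the right virtual camera
    float convergenceDist;  // shared convergence (screen) plane
};

// Place the two virtual eyeballs according to the user's own measurements.
StereoCameras camerasFor(const StereoProfile& p) {
    return { -p.interocularDist * 0.5f,
             +p.interocularDist * 0.5f,
              p.convergenceDist };
}

int main() {
    // Two users with different eye spacing end up with different camera
    // separations, so each brain gets disparities close to what it is
    // already tuned to, instead of having to painfully re-adapt.
    StereoProfile users[] = { { 0.061f, 1.5f }, { 0.069f, 1.5f } };

    for (const StereoProfile& u : users) {
        StereoCameras cams = camerasFor(u);
        std::printf("IPD %.3f m -> camera offsets %+.4f / %+.4f m\n",
                    u.interocularDist, cams.leftEyeOffsetX, cams.rightEyeOffsetX);
    }
    return 0;
}
```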
In short, true-3D is the next generation. It's not a gimmick, it's not a fad. We tried it years ago and there simply wasn't the technology around to properly render things. It was like trying to do wireless internet with AM/FM radios. Possible, but not enjoyable. This article sounds like it was written by someone who went in with a very negative view of 3D to start with. It's jaded and biased, hence terms like "slightly pop out" and all the negativity.
As someone who's been into true-3D since the days of the GF4 Ti4200, with a very nice 17-inch NEC CRT doing 800x600 x 2 with hardwired shutter glasses, I can attest that the effect is amazing once it's set up properly. Playing AvP2 with a 5.1 surround sound EAX setup, stereoscopic true-3D with shutter glasses, and the lights turned off was a total trip. Running around being chased by aliens in a true-3D environment was amazing; it wasn't just a video game, because my brain was pretty convinced it was real. It wasn't experiencing something through a flat screen. But this was back in 2002; modern setups should allow for much better resolutions and effects. The drawback is that it takes a while to set up and to get your brain used to it; it's not something you can "just try out" and write a review on. I wouldn't listen to anyone who hasn't done this for at least several weeks on a daily basis, because the human brain just doesn't work that way.