okay. about the features that nvidia does have.
they have:
real branching in vertex shaders.
that means you can have loops, branches and conditionals depending on per-vertex data. a vertex can for example store a true/false value for.. uhm.. "move me 2 units to the left" (stupid example😀), and then the shader can, depending on that value, actually move it 2 units to the left.
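just to illustrate, a rough cg-style sketch of that per-vertex branch (all the names like moveFlag are made up by me, it's not code from any real engine):

```
// hypothetical per-vertex flag driving a real branch (nv30-style vertex program)
struct appdata {
    float4 position : POSITION;
    float  moveFlag : TEXCOORD1;   // 1.0 = "move me 2 units to the left"
};

float4 main(appdata IN, uniform float4x4 modelViewProj) : POSITION
{
    float4 pos = IN.position;
    if (IN.moveFlag > 0.5)         // decision made per vertex, inside the shader
        pos.x -= 2.0;
    return mul(modelViewProj, pos);
}
```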
you can't do that on the ati. there you can only set constants for the vertex shader which control such a thing. that means per-object decisions: set a flag that determines whether you want to move your _object_ (the whole enemy, powerup, wall, whatever) 2 units to the left.
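the standard/ati way looks more like this: the same decision, but as a constant you set from the cpu once per object before drawing (again just a sketch, names made up):

```
// per-object version: the "flag" is a program constant, set once per draw call
float4 main(float4 position : POSITION,
            uniform float4x4 modelViewProj,
            uniform float    moveWholeObject) : POSITION   // 0.0 or 1.0, set on the cpu
{
    position.x += moveWholeObject * -2.0;   // no branch, but it applies to the whole object
    return mul(modelViewProj, position);
}
```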
now nvidia's thing _is_indeed_ more powerful. but it means you have to fully redesign what you do in a vertex program, what you do on the cpu, and everything around it. vertex programs are mainly there for animation, namely vertex skinning. and there, building the whole design around the nvidia features means you have to recode everything. look at ut2003 and you see skinning in action: the animations are generated in realtime and do indeed look great. but they could implement them the same way for all gpus! for nvidia they would have to redesign, and that for nvidia's card only. in the case of ut2003 that will happen, as nvidia sponsored them quite a bit to do so. but in normal situations, it depends.
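for reference, this is roughly what such a skinning vertex program looks like (a minimal sketch, assuming a 4-bone matrix palette; all the names and sizes are made up):

```
// minimal matrix-palette vertex skinning sketch (4 bone influences per vertex)
struct appdata {
    float4 position : POSITION;
    float4 weights  : TEXCOORD1;   // 4 bone weights
    float4 indices  : TEXCOORD2;   // 4 matching bone indices
};

float4 main(appdata IN,
            uniform float4x4 modelViewProj,
            uniform float3x4 bones[30]) : POSITION   // bone matrices, uploaded per object
{
    float3 skinned = float3(0, 0, 0);
    for (int i = 0; i < 4; i++)                      // fixed loop, the compiler unrolls it
        skinned += IN.weights[i] * mul(bones[(int)IN.indices[i]], IN.position);
    return mul(modelViewProj, float4(skinned, 1.0));
}
```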
the real next vertex shader design (which includes branching and all) has even more features (like per-vertex textures for stuff like programmable displacement mapping, though many other uses are possible), and those are _not_ implemented on the gfFX. those features, and not nvidia's thingy, will be the next standard vertex shader/program (vs3.0 in dx9). so why work for one gpu only? it's quite a bunch of work..
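just to show what that would give you: a vs3.0-style displacement sketch (purely hypothetical, neither card discussed here can do it; i'm borrowing an hlsl/cg-style tex2Dlod call, and the names are made up):

```
// hypothetical vs3.0-style displacement mapping: the vertex program reads a texture
float4 main(float4 position : POSITION,
            float3 normal   : NORMAL,
            float2 uv       : TEXCOORD0,
            uniform sampler2D heightMap,     // height values, sampled *in the vertex shader*
            uniform float     scale,
            uniform float4x4  modelViewProj) : POSITION
{
    float height = tex2Dlod(heightMap, float4(uv, 0.0, 0.0)).x;   // explicit lod, no mipmap selection in a vs
    return mul(modelViewProj, position + float4(normal * height * scale, 0.0));
}
```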
pixel shaders:
nvidia has 1024 instructions, ati has.. 96 or so (64 math and 32 texture instructions max). that means much bigger pixel shading programs. on the other hand, there is no branching or anything like it in there, which would, guess what? be the next standard after ati's standard. so again, you have to code completely new pixel shaders for nvidia's card, and only for this one. ask carmack how easy it is to work with different pixel shader versions and he will tell you (he develops doom3, for all who don't know). in short: it's crap. why? because you can't just say "look, my pixel shader has 200 instructions, so i have to split it into 3 parts on the ati". you can't just split; you have to redesign your data setup and everything to do those 3 passes. that leads to a different design on dx9 cards than on the gfFX. for a gain of what? supporting one lonely gpu. and the additional pixel shader instructions are often not even used (looking at the output of the cg compiler currently: it compiled a simple per-pixel lighting program to 61 instructions! i have the same in 32 instructions.. so what? 😀).
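for comparison, a simple per-pixel (dot3) lighting fragment program of the kind i mean looks roughly like this (just a sketch with made-up inputs; hand-written, this sort of thing stays well inside the ati limits):

```
// basic per-pixel diffuse lighting with a normal map (dot3), sketch only
struct v2f {
    float2 uv       : TEXCOORD0;
    float3 lightDir : TEXCOORD1;   // tangent-space light vector, computed in the vertex program
};

float4 main(v2f IN,
            uniform sampler2D diffuseMap,
            uniform sampler2D normalMap,
            uniform float3    lightColor) : COLOR
{
    float3 N = normalize(tex2D(normalMap, IN.uv).xyz * 2.0 - 1.0);   // unpack [0,1] -> [-1,1]
    float3 L = normalize(IN.lightDir);
    float  diffuse = saturate(dot(N, L));
    float4 albedo  = tex2D(diffuseMap, IN.uv);
    return float4(albedo.rgb * lightColor * diffuse, albedo.a);
}
```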
the additional features over standard opengl 1.5 and dx9 cards:
from 96 to 1024 pixel shader instructions.
real branching in vertex shaders.
what's missing compared to the next standard:
real branching in pixel shaders.
textures in vertex shaders.
the next real standard will be out with the next card _after_ the nv30, i guess. till then, dx9 is the standard (as there are many different dx9 cards coming, as we all know).
the feature set is ridiculous. it's fully off the standard.
about cg:
it's just a programming language (which i think is not that well designed) to generate vertex and pixel programs. you can run them (partially thanks to the "great" support of nvidia to make it a standard) on ati cards as well. cg was just put out to make their cards (gf3, gf4) look programmable in parts that aren't (gf3 and gf4 have, hardware-wise, no pixel shaders.. it's just an advanced multitexture unit, and most of those features had already been there on matrox and ati cards.. envbump, dot3 lighting, etc.. still, gf3 and gf4 were fun to work with😀). then they used cg to spit out a crappy emulator for the nv30, so that people "can code" for a gpu that doesn't exist yet, to keep them from thinking they would need to move to ati to do such stuff.
there aren't many features on the card, but they are chosen wisely: those features make the card look so much more advanced, while it isn't really. and with that, and names like "cinematic effects" and cg - "c for graphics" everywhere, they look so future-proof. they aren't. it's just another gpu. and, hardware- and software-wise, not even a well-done gpu imho.
as stated above: moving to .13 micron, boosting to 500mhz core and 1ghz ram. it should be faster, no? it should be over 100% faster, no? well.. it will not be. so it's not a good card.
btw, it's quite possible that microsoft will not expose nvidia's additional features in dx9 but rather in dx9.1 (which will take over half a year to come out). that means that for a long time you will not even be _able_ to use the non-standard features of the gfFX in dx. in gl you can, but there it's even more work.
hope that clarifies a bit.
btw, the ati card, on the other hand, has real 128bit floating point textures, cubemaps, 3d textures, whatever you want. the gfFX doesn't have that. it makes demos like the HDR natural-lighting one quite a bunch more work to code (meaning eating up many more pixel shader instructions to emulate the float cubemaps and all that), which in turn just leads to a) bad performance in comparison and b) fewer really usable pixel shader instructions..
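for example, with real float cubemaps the core of such an hdr demo is one lookup (sketch, names made up); without them you have to rebuild that lookup by hand in the pixel shader:

```
// with native floating point cubemaps, an hdr environment lookup is a single instruction
float4 main(float3 reflectDir : TEXCOORD0,            // reflection vector from the vertex program
            uniform samplerCUBE hdrEnvMap) : COLOR    // assumed 128bit float cubemap with hdr radiance
{
    return texCUBE(hdrEnvMap, reflectDir);
}
```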
well, i think you get it a bit better now😀
just believe me: i would never diss something if i did not have a reason. really not.
the only thing i can't diss for now is the real performance. but the theoretical performance, the numbers stated by nvidia and all that, that isn't "wow fast" for me. and the rest isn't either.
"take a look around" - limp bizkit
www.google.com