As for GPGPU and doing physics on GPUs, there are a lot of reasons why it will win out over AGEIA's hardware-specific solution. Two main issues frame my points:
(1) Faster does not matter; it just has to be fast enough. Physics is not an end-all, be-all task at this point. Devs are working within normal game models and have some limitations. While the fastest solution, in a perfect world, is best, the reality is each solution only has to be fast enough for the end product. PhysX has its own limits (e.g. you are not going to get an ocean of complete fluid dynamics), so the question is: can GPUs keep up with what devs are trying to do today?
(2) Game physics vs. effects physics. HL2 had some game physics. Yet so far games like UT2007 and City of Villains are talking about putting in ONLY effects physics. In the UT example: instead of 600 rocks falling off a cliff, you get 6,000. But it was noted this would NOT change gameplay. Ditto City of Villains: blow up a newsstand and paper goes everywhere. Here is the problem: GPUs can already do effects physics great. Go to ATI.com and look at the "ToyShop" demo -- there is a LOT of GPU-based physics there. The 360 has an example in Kameo where it had 1M interactive particle bodies running in realtime. So until someone makes a game with gameplay physics (like dynamically destructible buildings) that cannot be done on GPUs, devs are so far not doing anything that requires AGEIA. If the power goes untapped...
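To make the game-physics vs. effects-physics split concrete, here is a minimal sketch (plain C, every name made up for illustration) of what an effects-physics workload looks like: lots of identical, independent per-particle updates whose results are only ever drawn, never fed back into game logic. That independence is exactly why this kind of work maps so well onto GPU shaders, and it is roughly the kind of thing the ToyShop and Kameo particle scenes are showing off.

```c
/* Minimal sketch of "effects physics": each particle is updated
 * independently, with no feedback into gameplay state.  On a GPU the
 * loop disappears -- positions and velocities live in textures and the
 * loop body runs as one shader invocation per particle.  All names
 * here are illustrative, not from any real engine. */
#include <stddef.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 pos, vel; } Particle;

static const Vec3 GRAVITY = { 0.0f, -9.8f, 0.0f };

void update_effects_particles(Particle *p, size_t count, float dt)
{
    for (size_t i = 0; i < count; ++i) {
        /* Integrate velocity and position (simple Euler step). */
        p[i].vel.y += GRAVITY.y * dt;
        p[i].pos.x += p[i].vel.x * dt;
        p[i].pos.y += p[i].vel.y * dt;
        p[i].pos.z += p[i].vel.z * dt;

        /* Crude "collision": bounce off the ground plane.  Purely
         * visual -- the result never affects game logic. */
        if (p[i].pos.y < 0.0f) {
            p[i].pos.y = 0.0f;
            p[i].vel.y = -p[i].vel.y * 0.5f;
        }
    }
}
```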
On to my points of why AGEIA's PhysX won't survive.
- PR and Marketing. No contest. ATI/NV have a ready made fan base.
- Established developer support. It is all about the games, namely getting a game that screams, "You MUST have AGEIA". But no developer can risk building around AGEIA due to install base issues, which leads to...
- Distribution channels, board makers, and OEM support. This cannot be overlooked. NV/ATI have a decade or more of working supply lines and great connections with board makers. AGEIA has an under-supported product that was delayed, with only 2 board makers on board at ship.
- Install base. SLI/Crossfire > PhysX. Developers will develop where the money is, and they will code to the expectations of the least common denominator.
- Open platform. There is competition on GPUs (like MS's unannounced physics API, and Havok), which means more developer choice. Thus far AGEIA has NOT gone out of their way to make their platform open (they say after release anyone can work on it, but they are not making the runtimes and so forth open at this point).
- GPUs are more versatile. Every 3D game uses them. PhysX? Only games using Novodex will use it. GPUs are also used in Vista and can be used for GPGPU tasks. PhysX? Nope.
- GPUs have 100% overlap in market. PPUs, on the other hand, will only work with a small percentage of games.
- Cost. $250-$300 (Alienware has it at $275) for a chip that only works in a few games is nuts. Return on investment is low. Would you rather have 1 GPU + 1 PPU, or 2 GPUs like 2x 7900GT? Which leads us to...
- SLI/Crossfire. Does a game use physics? Use 1 GPU for physics, 1 GPU for 3D. No physics needed? Then use both GPUs for 3D! PPUs cannot do this; if the game does NOT use it, it sits there like a $250 paperweight.
- In the future, an old GPU may be left in your machine as a dedicated physics accelerator.
- ATI/NV may put smaller chips onboard as "dedicated" physics chips. Basically just an ALU engine.
- X1900XT(X). This chip has 300% the PS performance of an X1800XT, but in games it only gets a 0-20% lead in most cases. Why? All those ALUs are idle since most games currently don't use them. The X1900XTX has a TON of power just sitting there, unused... waiting to be used by something like physics!
- ALUs are CHEAP. It cost ATI less than 63M transistors to add 32 ALUs to R580, about 2M transistors per ALU. That is well over 200 billion flops (200 GFLOPs); see the back-of-envelope math after this list. Basically ATI/NV could add ALUs just to have a huge surplus for physics. Since DX10 GPUs are more robust and much better suited to general processing, a few tweaks and changes could make them VERY physics friendly. The power is there. The X1900XT totals roughly 500 GFLOPs. This will only increase. By fall 2007 we will break the 1 TFLOPs (tera-flops, as in TRILLION) range in PROGRAMMABLE shaders. AGEIA does not have the R&D, product range, or the know-how (they are still on the large, hot, and outdated 130nm node... I believe some of the NV35s were made on that!).
- Another level of latency. With PhysX you have a CPU, GPU, and PPU, ALL working over a bus fighting for traffic and coherency. With a GPU you have only a CPU and GPU, limiting some of the traffic and coherency issues.
- The chips are expensive for what they are. 125M transistors on the 130nm process, 25W for the chip alone. To compare, GPUs are on the 90nm process, moving to 80nm, and are 3x as big. What they lack in elegance they make up for in brute force.
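As a sanity check on the "ALUs are CHEAP" point above, here is the back-of-envelope math, using only the rough figures quoted in that bullet (they are estimates from this post, not datasheet values):

```c
/* Back-of-envelope numbers behind the "ALUs are cheap" argument,
 * using the figures quoted above (treat them as rough estimates). */
#include <stdio.h>

int main(void)
{
    const double added_transistors = 63e6;  /* R520 -> R580 ALU growth  */
    const double added_alus        = 32;    /* extra pixel shader ALUs  */
    const double added_gflops      = 200;   /* claimed throughput gained */

    printf("Transistors per ALU:      ~%.2f million\n",
           added_transistors / added_alus / 1e6);       /* ~1.97 */
    printf("GFLOPs per ALU:           ~%.2f\n",
           added_gflops / added_alus);                   /* ~6.25 */
    printf("GFLOPs per M transistors: ~%.2f\n",
           added_gflops / (added_transistors / 1e6));    /* ~3.17 */
    return 0;
}
```

By those numbers, every million transistors spent on extra ALUs buys roughly 3 GFLOPs of programmable throughput, which is the surplus AGEIA's 125M-transistor part has to compete against.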
GPUs are not perfect for physics. Advanced rigid body collision detection takes more work, and they may (or may not) be faster than AGEIA's solution. But pure speed is not always what determines the winner. VALUE, market share, and PR are. And in games, the most important factor is developer support.
Thus far AGEIA has over-promised and never shipped a product (well, until now). Delays have been common and the chip itself is waaaaay overpriced.
Yep, not a fan. More useless hardware. At least they woke up the GPU makers.
What is more interesting is how ATI and NV will fight it out. ATI is about 5x faster in GPGPU tasks than NV (that was comparing the 7800GTX with the X1800XT), and ATI has done a lot of work on the 360 with MEMEXPORT specifically to accelerate these kinds of tasks. 6 months ago one of the "bullet points" of the X1000 series was that it can be used to accelerate a number of tasks, and ATI has had working physics demos up on hardware as old as the 9700. With all the extra ALUs on the R580 plus the excellent dynamic branching, I get the feeling ATI could have a significant lead here.