mayankleoboy1 :
And Nvidia took that hardware, shoved it into the dustbin, and implemented a software version of PhysX on their CUDA architecture. I don't think Nvidia is ever going to open up or optimize the PhysX API for the PC.
What Nvidia should do is put dedicated PhysX hardware on the die itself. But really, why would they? Even they know that unless they open the API to all vendors, PhysX is completely gimped.
CUDA, by definition, is hardware accelerated.
PhysX is actually VERY good when run in a vacuum (e.g. a standalone app). It really shines when given really large datasets to work with [which makes sense: CUDA is optimized for large datasets, and loses performance when the dataset is too small because of how it's designed]. But when GPU resources are also being used by a game, all it does is leech FPS away without adding much to the game itself. The API, by itself, is fine.
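That large-vs-small dataset tradeoff can be sketched with a toy cost model. This is NOT real CUDA code or measured data; the overhead and per-element timings below are made-up illustrative numbers, chosen only to show how a fixed per-launch cost gets amortized as the batch grows:

```python
# Toy model: GPU offload pays off only once the dataset is big enough
# to amortize the fixed launch/transfer overhead.  All constants here
# are illustrative assumptions, not measured CUDA figures.

LAUNCH_OVERHEAD_US = 10.0   # assumed fixed cost per GPU kernel launch
GPU_US_PER_ELEM = 0.001     # assumed per-element GPU compute time
CPU_US_PER_ELEM = 0.05      # assumed per-element CPU compute time

def gpu_time_us(n: int) -> float:
    """Total GPU time: fixed overhead plus per-element work."""
    return LAUNCH_OVERHEAD_US + n * GPU_US_PER_ELEM

def cpu_time_us(n: int) -> float:
    """Total CPU time: no launch overhead, but slower per element."""
    return n * CPU_US_PER_ELEM

# Tiny batch: the fixed overhead dominates and the CPU wins.
# Huge batch: the overhead is amortized and the GPU wins easily.
for n in (100, 1_000_000):
    winner = "GPU" if gpu_time_us(n) < cpu_time_us(n) else "CPU"
    print(f"{n:>9} elements -> {winner} faster")
```

Under these assumed numbers the crossover sits wherever overhead equals the CPU/GPU per-element gap, which is exactly the "sucks on small datasets" behavior described above.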
I'm thinking DECADES ahead of everyone else in physics. Take a typical FPS: how often do you see games where you can move through a swamp without movement degradation? OK, maybe one or two games apply a -20% movement filter. No attempt to actually calculate the odds your player (or the AI) will be able to move forward (viscosity of liquids is REALLY hard to do). I'm thinking of programmatically calculating how grenades explode, tracking each fragment, rather than doing a simple damage radius. Hell, I'm thinking of calculating the path of every single bullet fired by a gun, the odds it passes through objects (think of the possibilities with different weapon calibers, or weapons with high muzzle velocity, in MP games!), whether it pierces a player's armor, and how much damage it does. That's where I'm thinking. I'm WAY ahead of everyone else on this.
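The grenade idea above can be sketched as code. This is a hypothetical toy, not any real engine's damage model: `radius_damage` is the usual shortcut (linear falloff inside a sphere), and `fragment_damage` is the per-fragment alternative the comment describes, spawning fragments in random directions and counting the ones whose ray would hit the target. It ignores gravity, drag, and occlusion, and works in 2D for brevity:

```python
import math
import random

def radius_damage(dist: float, max_damage: float = 100.0,
                  radius: float = 5.0) -> float:
    """The usual shortcut: damage falls off linearly to zero at the edge."""
    return max(0.0, max_damage * (1.0 - dist / radius))

def fragment_damage(target_dist: float, target_width: float = 0.5,
                    n_fragments: int = 300,
                    damage_per_fragment: float = 4.0,
                    seed: int = 0) -> float:
    """Hypothetical per-fragment model: scatter fragments uniformly in 2D
    and count those whose direction falls within the target's angular
    width as seen from the blast.  Damage is simply hits * per-fragment."""
    rng = random.Random(seed)  # seeded so the toy is deterministic
    half_angle = math.atan2(target_width / 2.0, target_dist)
    hits = sum(1 for _ in range(n_fragments)
               if abs(rng.uniform(-math.pi, math.pi)) < half_angle)
    return hits * damage_per_fragment
```

The key difference: with fragments, a target twice as far away subtends half the angle and so catches roughly half the fragments, and intervening cover could block individual rays, which a damage sphere can't express.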
Problem is, it's WAY too complicated to do right now, even with a good API. But the fact that so little progress is being made in this area irks me, especially since I don't expect much in the way of graphics until we get Ray Tracing/Casting (not much left that's easy/cheap to do).