Nvidia PhysX with ATI Card



Great .... in the future.
Right now they're doing what they always do with new DX graphics toys: "Oooh, look, shiny $h1t." PhysX tornados and ragdolls that are a tiny bit more realistic don't really impress me. I'll care when we get bullet drop and chain-reaction kinetics beyond what the CPU can currently emulate or estimate, even if they're not 100% realistic or accurate.

If we can all see that PhysX (proprietary, hardware-based) isn't taking off, do you think it will become a software (thus CPU-driven) item in the future?

Actually, I think the push towards a DX11 compute shader or OpenCL physics implementation is the way to go. Get beyond the proprietary crap and let ATi, intel, nVidia, S3/Via, etc. get in on the action without prejudice; that's what I want and hope for.

Sure, intel's massively-multi-core future may help the CPU side of things, but you have to think that the many hundreds of compute units in modern VPUs/GPUs would be handy for this kind of work.

However, it's still very immature, and if you were a developer, would you really base your entire game development from the ground up on any option right now? I think in about 2-3 years we'll see it become more integral; for now, both camps tend to do the "hey, look at that effect over there" thing, not the "hey, look at everything everywhere, it's vital to the engine" thing.

Things may change quicker or slower than I expect, but I wouldn't be making any purchases right now unless a specific game or effect makes it worthwhile for the individual.
Heck, if it were even 'shiny physics' in Oblivion, I might have changed my purchasing strategy, but really, even Fallout 3 or FC2 wouldn't be enough for me to care about swapping hardware of one kind or another (whether a different CPU, GPU or even platform).
 


Lol, it doesn't. Expect physics to run off the CPU for the foreseeable future. So long as the engine isn't specially rigged for extreme multithreading to utilize the GPU (like PhysX), physics runs very well off the CPU. Crysis still has the most destructible environment of any game so far, and its physics runs purely off the CPU. For any modern CPU, the load during gameplay isn't anywhere near full. There is no need for GPU physics; Nvidia is just trying to artificially expand its turf beyond graphics.
 
Does anyone really care about Folding@home? I'm not of the opinion that because I have a decent graphics card I'm going to cure cancer while my computer is idle.
 
Don't knock folding; it may not be for you, but why not for others if they have spare compute cycles?

Also, you can use the GPGPU-edness for things like video editing, modeling, and math programs.

And of course the basic multi-monitor thing.

Personally, I don't even worry about justification. How many old pieces of hardware do we have that are 99% great but just old, where if we tried to sell them, shipping might cost more than the card is worth? If you can use the extra stuff for something useful, hey, why not?

I'm not going to promote it as a 'go out and buy other hardware' type of thing until there's a killer app, but if it's a 'reuse otherwise useless hardware' solution, all the better.
 
Well, I like a good sniping game (though you can't snipe in games like COD4 without getting called a camper). What would be nice is having wind, angle, and bullet weight/momentum influence accuracy as well as stopping/dropping/penetrating power, where an AK-47 or Barrett 468 could penetrate a shed wall but an M-16 couldn't, and likewise a 50cal penetrating something more substantial.
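To put it in concrete terms, here's roughly the sort of thing I mean: a toy C++ sketch, not from any real engine, and every constant (mass, drag, penetration threshold) is invented purely for illustration.

[code]
// Toy ballistics sketch; all constants invented, nothing from a real engine.
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

struct Bullet {
    Vec3   pos, vel;   // metres, metres/second
    double mass;       // kg; an AK round is roughly 0.008 kg
    double drag;       // lumped drag constant, hand-tuned, not physical
};

// One physics tick: gravity pulls the round down, wind pushes it
// sideways, and a crude drag force bleeds off speed.
void step(Bullet& b, const Vec3& wind, double dt) {
    Vec3 rel = { b.vel.x - wind.x, b.vel.y - wind.y, b.vel.z - wind.z };
    double speed = std::sqrt(rel.x*rel.x + rel.y*rel.y + rel.z*rel.z);
    double k = b.drag * speed / b.mass;      // drag grows with speed
    b.vel.x -= k * rel.x * dt;
    b.vel.y -= k * rel.y * dt;
    b.vel.z -= (k * rel.z + 9.81) * dt;      // 9.81 m/s^2 gives you bullet drop
    b.pos.x += b.vel.x * dt;
    b.pos.y += b.vel.y * dt;
    b.pos.z += b.vel.z * dt;
}

// Penetration: does the remaining kinetic energy (E = 1/2 m v^2) beat the
// material threshold? A real engine would look thresholds up per material.
bool penetrates(const Bullet& b, double thresholdJoules) {
    double v2 = b.vel.x*b.vel.x + b.vel.y*b.vel.y + b.vel.z*b.vel.z;
    return 0.5 * b.mass * v2 > thresholdJoules;
}

int main() {
    Bullet round { {0, 0, 1.5}, {715, 0, 0}, 0.0079, 5e-6 }; // AK-ish muzzle velocity
    Vec3 crosswind { 0, 5, 0 };                              // 5 m/s from the side
    for (int i = 0; i < 100; ++i) step(round, crosswind, 0.01); // 1 second of flight
    std::printf("drop %.2f m, wind drift %.2f m, beats a shed wall: %s\n",
                1.5 - round.pos.z, round.pos.y,
                penetrates(round, 800.0) ? "yes" : "no");
}
[/code]

A real engine would use proper drag models and per-material data, but even this toy version gets you drop, drift and penetration out of the same handful of state variables.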

Anywhoo, it's stuff like that for specific games that would be nice, and that's the example I like. Along the same lines, things like the weight of blades and arrows in games like Oblivion would make combat a little more realistic (you might want to take higher ground for the natural physics advantage of gravity and leverage).

Anywhoo, those are the things I care about more than the tornados and such, and I suspect my desired physics bits won't arrive until a while after the whole shiny phase wears thin.
 


Part of the reason for that is that so few people have a PhysX-capable card; you can't program physics using NOTHING but PhysX. If an entire game engine were written around PhysX, then we could have a comparison between the two. It's for this same reason the API is dying: NVIDIA shot itself in the foot by not allowing ATI to use it.
 


PhysX has the power to do what you ask, but again, no game is fully coded exclusively with PhysX, which holds it back. I want the same things you do. I want every piece of shrapnel from an explosion to be its own object, with its own momentum, capable of killing people. Havok can not do this, due to the massive amounts of CPU time it would take to perform these calculations.

For full physics, the work must be taken off the CPU. Anyone who claims you can get realistic physics like I explained above using just CPU time needs to have their head examined.
 


The problem with that approach is that graphics also have to run off that GPU. Even with the minimal PhysX effects present in current PhysX titles, fps drop significantly. If you increased physics effects to the level where the CPU simply could not deal with the raw load without GPU acceleration, the GPU would already be so crippled that it wouldn't matter.

No one is saying that the CPU without GPU acceleration runs better than with the GPU helping (PhysX also increases CPU load, just by a lesser amount; the GPU only "accelerates," it doesn't do the full work). You can always increase the load to the point where the CPU can't handle it alone anymore. The point is that modern CPUs are mostly idle in games, since most games are GPU-limited. So CPU physics, within a reasonable level, is "free," while GPU PhysX will bleed fps.
 


Yes, you can do it; PhysX is the core engine for many other games, just not the ones they always highlight, which use a competitor's engine or a proprietary one. That developers chose only the shiny add-on and not the PhysX engine as their core shows how much they trusted PhysX from the start. That may change over time, but will it matter before there's a more open API? That's the question. Making a game entirely out of GPU PhysX is a huge task, which is why they don't do it, but it's not because it can't be done.

[quote]Havok can not do this, due to the massive amounts of CPU time it would take to perform these calculations.

For full physics, the work must be taken off the CPU. Anyone who claims you can get realistic physics like I explained above using just CPU time needs to have their head examined.[/quote]

It's no different whether it's a terascale CPU or a GPU, other than that communication is quicker on the CPU; the advantage of the GPU is cost per op.

No matter what, there needs to be co-processor-to-CPU communication, so the advantage depends a lot on the implementation. And while Havok doesn't do GPU physics right now, don't expect that to remain so once Larrabee is out. It's not a limitation of Havok; it's a limitation of the implementation that intel is providing for Havok. Moving to a more open platform lets people like Crytek and Epic continue to build their own underlying engines instead of being stuck with whatever this or that IHV lets them use.
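For what it's worth, the round trip I'm talking about looks something like the sketch below. The three helper functions are hypothetical stand-ins, not any real OpenCL or D3D API; the point is the communication pattern, not the names.

[code]
// Hypothetical offload loop. uploadToDevice/runPhysicsKernel/readBack are
// stand-ins, not a real API; the point is the round trip itself.
#include <cstdio>
#include <vector>

struct RigidBody { float pos[3]; float vel[3]; };

// Stubbed stand-ins: the real versions would be OpenCL/D3D11 compute calls,
// and nearly free if the co-processor shares the CPU's memory (Larrabee-style).
void uploadToDevice(const std::vector<RigidBody>&) { /* CPU -> device copy, PCIe for a GPU */ }
void runPhysicsKernel(float)                       { /* cheap per-op, massively parallel  */ }
void readBack(std::vector<RigidBody>&)             { /* device -> CPU copy                */ }

void physicsFrame(std::vector<RigidBody>& bodies, float dt) {
    uploadToDevice(bodies);   // communication cost #1
    runPhysicsKernel(dt);     // where the GPU's cost-per-op advantage lives
    readBack(bodies);         // communication cost #2
    // ...and the game code on the CPU still has to consume the results:
    // hit detection, damage/kill feedback, map alteration, AI reactions.
}

int main() {
    std::vector<RigidBody> world(10000);
    physicsFrame(world, 1.0f / 60.0f);   // one 60 Hz tick
    std::printf("simulated one frame for %zu bodies\n", world.size());
}
[/code]

Whoever shrinks those two communication costs, whether via on-die cores or a faster bus, changes where the break-even point between CPU and co-processor physics sits.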
 
The other thing to remember is that it's great to use the extra power, but what it's used for matters a lot, and just increasing particles and crap stresses both unnecessarily.

Offloading the kind of physics specific to bullet drop/windage/etc. and destructibles would be rather simple and easy, but it's hard to really display X vs Y as significantly better. If you understand what's happening, then something like a realistic interactive destruction of an object, and how it influences its surroundings, is impressive; if you don't understand the difference, you may just notice that it fell to the right or left more based on your shot. Is it worth the effort for the 90% of people who wouldn't notice?

It's kinda like arguing DX9 vs DX10/10.1: some people will notice and care, others will just say, "well, the water's shinier, or the lighting is a little fainter, but I don't really care, it's about gameplay..."

Anywhoo, everyone has their preference, but right now no one from ANY camp is providing what they promised years ago, let alone what I'm hoping for. I expect the next year of physics to simply be more junk, literally.
 
The reason you can run Havok on a CPU is that only a few things at any given time need a physics calculation performed; we're not at the point where every individual object has every external effect in the game calculated upon it.

It's like having a wooden box between three bombs that go off 0.05 seconds apart. A full physics API would do the following:

1: Calculate each bomb's effect on the box, throwing off pieces in the process
2: Calculate the physics for each piece that broke off the box (probably thousands)
3: Calculate the bombs' effects on each other (well beyond what you can do in labs at this point; even I won't be asking for this anytime soon)
4: Calculate the effect every broken-off piece of the box has on the environment (wooden shards are quite destructive at high speeds 😀)

Point 2 is why I believe a full physics API can not be done on the CPU. In terms of calculations per second, the amount of work would be well beyond what even a highly clocked i7 could do. There isn't enough processing power to calculate physics effects on thousands of objects that can fully interact, every single moment they exist, in a fully dynamic environment. Now, if those shards moved in a linear manner (like what most engines do with bullet drop), you can cheat a bit and store the necessary calculation for quick access and easy computation. But a dynamic physics system, where the forces acting on objects are not constant, requires significantly more work (look at point 3 as an example: the formula for how explosion 3 behaves while two other explosions are acting on it can take a lot of CPU time just to figure out).
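Here's a back-of-envelope sketch of what point 2 costs, assuming a made-up Fragment struct rather than anything from a real engine. Integrating thousands of shards is O(N) every tick before you even consider shard-vs-shard collisions, while the linear "cheat" collapses to a closed-form formula.

[code]
// Back-of-envelope for point 2; made-up Fragment struct, no real engine.
#include <cstdio>
#include <vector>

struct Fragment { float pos[3]; float vel[3]; };

// Dynamic case: every shard is its own object, integrated every tick.
// That's O(N) per tick just to move them; let shards hit each other and
// you're at O(N^2) pair tests unless you add a broadphase structure.
void tick(std::vector<Fragment>& frags, float dt) {
    for (Fragment& f : frags) {
        f.vel[2] -= 9.81f * dt;              // gravity
        for (int i = 0; i < 3; ++i)
            f.pos[i] += f.vel[i] * dt;
    }
}

// The "cheat" for linear motion: with constant forces there's a closed
// form, p(t) = p0 + v0*t + 0.5*a*t^2, so no per-tick work at all. It
// breaks the moment a second explosion starts pushing on the shard.
float heightAt(float z0, float vz0, float t) {
    return z0 + vz0 * t - 0.5f * 9.81f * t * t;
}

int main() {
    std::vector<Fragment> shards(5000);      // "probably thousands" per box
    for (int step = 0; step < 60; ++step)    // one second at 60 ticks/s
        tick(shards, 1.0f / 60.0f);
    std::printf("5000 shards x 60 ticks = %d integrations per second,\n"
                "before a single shard-vs-world or shard-vs-shard test\n",
                5000 * 60);
    std::printf("closed-form shard height at t=1s: %.2f m\n",
                heightAt(2.0f, 10.0f, 1.0f));
}
[/code]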

I don't even think a dedicated physics card can do what I'm asking for right now, but I do want to see some of this stuff start to get implemented. Right now, we are starting to be able to blow a hole in a wall (in a few games at least). Next is getting the falling debris to crush someone...
 
You're approaching this the wrong way.

Your examples are at the same time overly complicated and overly simplistic.

Why not calculate the physics at a molecular level? Heck, step 1 then has a bazillion if/thens, like whether the kinetic energy is sufficient to affect things like chemical bonds, friction, heat, etc.

That's a full-level physics implementation, but we don't need that level any more than we need thousands of particle physics calculations just yet. Also, regardless of the task, the more objects you have, the more interactions you have to keep track of, which has to be done by the CPU as long as that's where the game resides.

It all needs to be tied back to the game itself and accounted for there too, for things like damage/kill feedback, map alteration, etc. No matter what, the CPU is involved. But whether or not it can be implemented on current desktop CPUs doesn't mean it can't be done by the CPU, as you contend in point 2; the future of massively multi-core CPUs like the Terascale projects doesn't look like it would be limited to less than what PhysX currently accounts for, and more.
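To illustrate why I don't buy the "the CPU fundamentally can't" claim: the per-object integration is embarrassingly parallel, so it spreads across cores with plain threads. A hedged sketch, reusing the same made-up Fragment idea as above, nothing Terascale-specific:

[code]
// Multi-core angle (C++11 threads); made-up Fragment struct, not a real engine.
#include <cstdio>
#include <thread>
#include <vector>

struct Fragment { float pos[3]; float vel[3]; };

// Integrate a half-open slice [begin, end) of the shards; slices are
// independent, so each core works without touching the others' data.
void integrateSlice(std::vector<Fragment>& frags, size_t begin, size_t end, float dt) {
    for (size_t i = begin; i < end; ++i) {
        frags[i].vel[2] -= 9.81f * dt;
        for (int k = 0; k < 3; ++k)
            frags[i].pos[k] += frags[i].vel[k] * dt;
    }
}

int main() {
    std::vector<Fragment> shards(100000);
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 4;                       // fallback if unknown
    std::vector<std::thread> workers;
    size_t chunk = shards.size() / cores;
    for (unsigned c = 0; c < cores; ++c) {
        size_t begin = c * chunk;
        size_t end = (c + 1 == cores) ? shards.size() : begin + chunk;
        workers.emplace_back(integrateSlice, std::ref(shards), begin, end, 1.0f / 60.0f);
    }
    for (std::thread& t : workers) t.join();
    std::printf("%zu shards integrated across %u cores\n", shards.size(), cores);
}
[/code]

Double the cores and you roughly double the shard budget; that's the whole massively-multi-core argument in one loop.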

Regardless of what's next, these will be compartmentalized options, where it's not 100% of what's possible but what's doable with the processing and communication power available. However, it would be nice if the focus within those limitations were on something useful rather than tornadoes of debris.