Nvidia PhysX Software is Ancient, Slow for CPUs

Nvidia makes GPUs; to me it seems perfectly reasonable that they would spend their resources improving the performance of their proprietary software on their own hardware.

Even if Nvidia completely overhauled the PhysX code for better CPU compatibility, an old (GeForce 8 series) GPU would still run it much faster.

PhysX is implemented on CUDA, and CUDA is designed to run on Nvidia GPUs. It isn't rational to expect Nvidia to devote much time or effort to negating one of its competitive advantages.
 
I think it should be obvious that physics was a big deal a couple of years ago, but everyone is falling all over themselves right now to get 3D products to market. Nvidia is definitely on the 3D bandwagon now; PhysX isn't where development is happening currently.
 
[citation][nom]gamerk316[/nom]I was wondering when this biased stuff was going to show up on Tom's. Taking the two primary points of the investigation: 1) PhysX isn't multithreaded by default; 2) x87 is old and deprecated. My response: 1) DirectX, OpenGL, C++, Java, etc. are not multithreaded by default; 2) While Nvidia's implementation isn't the best, it should be relatively simple for developers to replace the offending code with SSE instructions. In short: implementation is up to the developer (the same design concept that DirectX follows). Nothing to see here.[/citation]

Bias, indeed. You wouldn't happen to be running an Nvidia card, now, would you?
 
So... when they do change the code for PhysX, does that mean today's games that use the obsolete code will not work on future machines? Will Just Cause 2, for example, become a KOTOR/Vietcong problem of the future?
 
How does Crysis 2 do its physics? Let their physics engine be open source and, boom, you've got the best-looking and best-running games on the PC. Keep in mind this is a "what if" where Crysis 2 is in a very optimized state.
 
laziness is not shadiness. Why would they bother to optimize for someone else's hardware? No upside. That's not shady, it's lazy.

It's not as if they took SSE code and made it x87. They bought old code and didn't ever improve it. Obviously, that part of the code was not where their interest lay...
 
So, they can optimize the PhysX code for console CPUs, but leave it to game developers to optimize the code themselves for the PC version to run efficiently on desktop CPUs? That is either laziness on their part or a very dirty technique for promoting their PC graphics. I think it is the latter. Intel has done similar things in the past to hold AMD back.
 
Do you even realize that plenty of the code in Windows 7 is out of date too? The problem lies in the effort: recoding everything takes a huge number of man-hours, with millions of lines of code to go through and rewrite.
Besides, you can't expect proprietary code to run on the competition's hardware, now can you?
Nobody said PhysX was designed to run on CPUs; it can, but very slowly, for the same reason CUDA does not run on CPUs.
 
screw physx
I support anything that is more open and works on nvidia and ati cards
physx has failed so badly so far
only a few games actually use it well, and for the most part it's usually just cloth blowing in the wind or glass shattering. It's all aesthetic. It would be more appealing if it affected gameplay, but buying a PhysX card or an Nvidia card to handle PhysX is only appealing about once a year, when a game actually uses it to the point where things look a lot nicer

and it just being better than consoles isn't enough
they should put the same amount of effort into each platform to make it as good as it can be. This is one of the few times I have heard a company say something is good enough rather than say they are trying to make it a better experience for all. You're supposed to strive to be the best, but Nvidia, with PhysX, settles for adequate
 
"The game content runs better on a PC than it does on a console, and that has been good enough."

This is another reason to hate console to pc ports.

But the main thing to remember here is that Nvidia is essentially using Ageia's CPU code (with small tweaks) to make PhysX run on a CPU without an Nvidia GPU, just at really crappy frame rates. Granted, they aren't making it run at crappy frame rates on purpose, and the devs could do the legwork themselves for any game, but that wouldn't showcase the "power" of the Nvidia CUDA fizzix.

After seeing Havok cloth, I think Nvidia is in for a LOT of work on the PhysX code base. Not to make it run well on the CPU, but to fix the problems with PhysX cloth sticking to random objects, stretching way out of proportion, and the bounciness of their "cloth" that makes people's clothes look like they're made out of latex. Compared to Havok cloth, PhysX cloth blows (no pun intended).
But the entire play area can't be Havok cloth, so PhysX does offer a nice array of destruction capabilities and better real-world physics interactions (no more saving yourself behind a corrugated steel building when I'm firing a rocket launcher at you).
 
[citation][nom]rohitbaran[/nom]So, they can optimize the PhysX code for console CPUs, but leave it to game developers to optimize the code themselves for the PC version to run efficiently on desktop CPUs? That is either laziness on their part or a very dirty technique for promoting their PC graphics. I think it is the latter. Intel has done similar things in the past to hold AMD back.[/citation]
When's the last time you upgraded the CPU in your console without throwing the entire thing out? They optimize it for consoles because you can't upgrade from the hardware it comes with. PC users don't have that problem. So if someone wants to port an already working game from console to PC (*cough* almost every game *cough*), then the game will run exactly like it does on the console, or better, on the PC natively. If the game devs want to improve performance beyond that, they are welcome to, but they don't, because porting (although not simple either) is easier and quicker than porting and improving.

Why do you think all of these games like Red Faction and Mass Effect are identical to the console versions? The game devs just ported them over to PC, added different control mechanisms and so on to make them run on PC hardware, and left the general codebase alone. Who wants to spend 6 months porting a game to PC, then spend another 6-12 months making it run better than a constant 30 fps? That's money out the window just so PC gamers can go "yeah, well, it runs better on PC"... yeah, let me waste a few million dollars on that!

[edit] Oh, correct me if I'm wrong, but doesn't the haxbox360 come with an AMD-derived GPU? And Gears of War ran just fine on there, didn't it? GoW was a PhysX title.

Also, it's about competition: if there's something you have that your competition doesn't, more people will buy your hardware. Well, usually... there is a point of diminishing returns on that, and Nvidia may have hit it. Rather than pushing PhysX and performance hard while keeping heat and cost down, they chose to go with an all-new design for their new card, and it ended up hotter, more power hungry, and probably more expensive than they sell them for. Now, had they kept the 285 design and made improvements to it, maybe heat and power wouldn't have been such an issue, and they could have pushed PhysX games more and gotten more sales over AMD from that, while working behind the scenes on the redesigned chip to make it extremely fast with limited increases in heat and power consumption. But alas, the past is the past and there's nothing we can do about that now.

If you like PhysX games, then buy an Nvidia card. If you don't care about PhysX, then buy either maker's card and play the 30 fps console-ported games, since that's about all we are getting from the game devs these days anyway. [/edit]
 
[citation][nom]Ddkshah[/nom]How does Crysis 2 do its physics?[/citation]

I believe Crysis 1 rolled a custom physics engine as part of CryEngine; I could be wrong. But back to PhysX: OpenCL and the others are umbrella GPGPU technologies covering the same ground as CUDA, and Nvidia would port its entire CUDA line over to them once they catch on en masse, if Nvidia has better sense than 3dfx did.

FYI, 3dfx was the first to mass-produce a 3D GPU line, called Voodoo (Banshee for mixed 2D). They cried "who gives an F about antialiasing and stuff unrelated to speed," fell to the sidelines, and got bought by a then relatively unknown company called Nvidia.
 
[citation][nom]gamerk316[/nom]DirectX, OpenGL, C++, Java, etc. are not multithreaded by default[/citation]Actually, DX11 is. Furthermore, the kind of work that PhysX does on the GPU is massively parallel. Why shouldn't it be parallel (at least capable of 4 threads) on the CPU as well? Oh, that's right: because with SSE and 4+ threads, that would be one less selling point for Nvidia GPUs.
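To make that concrete, here's a rough sketch (nothing from the actual PhysX SDK; the Body struct and function names are made up) of how the same independent per-body math could be spread over four CPU threads with nothing fancier than std::thread:

[code]
// Hypothetical sketch, not PhysX SDK code: the per-body integration step is
// independent per body, so it splits naturally across worker threads.
#include <thread>
#include <vector>

struct Body { float px, py, pz, vx, vy, vz; };

// Integrate positions for bodies[begin, end) over one timestep.
static void integrate_range(std::vector<Body>& bodies, size_t begin, size_t end, float dt) {
    for (size_t i = begin; i < end; ++i) {
        bodies[i].px += bodies[i].vx * dt;
        bodies[i].py += bodies[i].vy * dt;
        bodies[i].pz += bodies[i].vz * dt;
    }
}

// Split the body array into one chunk per thread and integrate the chunks in parallel.
static void integrate_parallel(std::vector<Body>& bodies, float dt, unsigned num_threads = 4) {
    std::vector<std::thread> workers;
    const size_t chunk = bodies.size() / num_threads;
    for (unsigned t = 0; t < num_threads; ++t) {
        size_t begin = t * chunk;
        size_t end = (t + 1 == num_threads) ? bodies.size() : begin + chunk;
        workers.emplace_back(integrate_range, std::ref(bodies), begin, end, dt);
    }
    for (auto& w : workers) w.join();
}
[/code]

The real solver obviously has dependencies between bodies during collision resolution, but a lot of the work is this kind of embarrassingly parallel loop, which is exactly why it maps so well to the GPU in the first place.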
[citation][nom]gamerk316[/nom]While Nvidia's implementation isn't the best, it should be relatively simple for developers to replace the offending code with SSE instructions. In short: implementation is up to the developer (the same design concept that DirectX follows). Nothing to see here.[/citation]If it's up to the developers to write their own implementation, what is the point of using PhysX at all? Why not write your own DirectCompute or OpenCL library, if you have to do all the work that Nvidia should have done already?

Besides, developers often don't have time to deal with that stuff, since PhysX is already an "extra". No game REQUIRES PhysX to be enabled to play it. So why bother wasting time and money optimizing an optional feature when Nvidia should have already done that? Heck, the only reason to implement PhysX in the first place is if Nvidia paid you to.
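And for anyone wondering what "just replace the offending code with SSE" actually involves, here's a rough illustration on a trivial dot product (again, not PhysX SDK code, just an assumed example). Note that modern compilers will also emit SSE from the plain scalar version with a flag (GCC -msse2 -mfpmath=sse, MSVC /arch:SSE2):

[code]
// Illustration only: the same dot product as plain scalar code (which old
// compiler settings turn into x87 instructions) and with SSE intrinsics.
#include <xmmintrin.h>

float dot_scalar(const float* a, const float* b, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        sum += a[i] * b[i];                  // one multiply-add per iteration
    return sum;
}

float dot_sse(const float* a, const float* b, int n) {
    __m128 acc = _mm_setzero_ps();
    int i = 0;
    for (; i + 4 <= n; i += 4) {             // four floats per iteration
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        acc = _mm_add_ps(acc, _mm_mul_ps(va, vb));
    }
    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    float sum = lanes[0] + lanes[1] + lanes[2] + lanes[3];
    for (; i < n; ++i)                       // scalar tail for leftover elements
        sum += a[i] * b[i];
    return sum;
}
[/code]

It's uglier, sure, but it's the sort of mechanical rewrite a middleware vendor does once in its math kernels, not something every licensee should have to redo per game.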
 
Nvidia defends its position that much of the optimization is up to the developer, and when a game is being ported from console to PC, most of the time the PC's CPU will already run the physics better than the console counterpart. The 'already-better' performance from the port could lead developers to leave the code as-is, without pursuing further optimizations.

that is the biggest load of...

nvidia... if YOU provide middleware, it is YOUR responsibility to optimize it!!! I doubt third parties even have access to the code TO optimize it! That is BS, Nvidia! It is YOUR responsibility to optimize it! Next you will be saying it's the game devs' job to debug your dodgy middleware too!

Oh, and one other thing: who cares that it runs AS GOOD as the consoles? IT SHOULD RUN BETTER!! WTF is this belief from everyone that consoles are "all you'll ever need"? Furthermore, running well enough is one thing, but that is NO excuse to be horribly inefficient; there is a little thing called POWER CONSUMPTION!

On the issue: Nvidia isn't going out of their way to make the CPU version crap, and there WOULD be quite a lot of effort needed to update it, and clearly, as you say, they have no reason to do this because they want to push their graphics cards. Sure, it would be NICE if they would do it - the CPU CAN handle most awesome physics - but it's not like they are going out of their way to be lame.

And another thing: WHERE is the competition?? There is nothing stopping anyone from making an awesome CPU version or, even better, an OpenCL version; then Nvidia would be forced to act.
 
In defense of Nvidia: the x87 code was written by Ageia. Nvidia did not update it, but it did not necessarily program it to be sluggish.
And, I am a programmer. SSE is 10 years old, and I have never used it. It is too complex.
Against Nvidia: if I were programming games, I would prefer Havok, since it gives superior base performance on the CPU. Gamers would still choose PhysX cards for 2x better performance; gamers already choose cards for 1.25x/1.5x improvements, so there is no need to boast about 10x.
 
Bah, it's obvious that Nvidia won't change that now; it's not good for them, and they are trying to keep the market as long as they can.
I would do the same thing, although it would be great for those who don't have an Nvidia card. That's exactly what they are looking for: making the differences between the two brands as big as they can.
Then we keep buying ATI for the lower prices, and they keep making their graphics cards as different as they can, because if PhysX becomes faster with CPU acceleration there won't be much fuss about Nvidia PhysX anymore.
 
Havok works impressively in Far Cry 2, and the effects were incredible: trees changing with the weather conditions, the wind changing the direction of the flames, etc. I will always like Havok because it gives far better performance than PhysX. Make everything with Havok, forget paying extra to play the same way with PhysX, and stop trying to sell us expensive Nvidia video cards to do the same thing Havok does on cheap video cards...
 
So, they can heavily multithread it for multiple small GPU cores, but more than 1 thread on a CPU is too much of a problem? Yeah, that 4x speedup for modern quad core processors plus 3-4x speedup from SSE would really put a dent in their "our GPU is superior to CPU for PhysX" story...
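Back-of-envelope, and ignoring memory bandwidth and synchronization overhead: 4 threads times a 3-4x gain from 4-wide SSE works out to roughly 12-16x over a single-threaded x87 path in the ideal case.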
 