Nvidia PhysX Software is Ancient, Slow for CPUs

Status
Not open for further replies.
This is why Agner's CPU blog is a good read.

We seriously need better standardization and a massive spring cleaning of the x86 code base and all the extensions since; it could bring a lot of performance improvement with very little expenditure of CPU resources, much like optimizing bad code.
 

waffle911

Distinguished
Dec 12, 2007
243
0
18,680
Why so lazy, Nvidia? "Better than it was" and "good enough" are not real excuses for neglecting a key technology you market to developers.

I really wouldn't be surprised if AMD eventually brought about and advocated the development of a new "Open-PL" physics standard or something.
 

kelemvor4

Distinguished
Oct 3, 2006
469
0
18,780
[citation][nom]dan117[/nom]DirectCompute / OpenCL >>> PhysX[/citation]
DirectCompute/OpenCL/CUDA != PhysX. They serve different purposes.

It's silly to say nvidia is deliberately hobbling PhysX on the CPU, though; given that ALL the PhysX code is old (by the author's admission), it's just old code. I'm an nvidia fan, but the truth is that PhysX is so poorly adopted among game developers that it wouldn't bother me in the least to just see it go away.
 
There is one thing to keep in mind, though: MMX is useless for this kind of computation, and SSE was absent from:
- Pentium Pro
- Pentium II
- early K7: Athlon (up to 1200 MHz) and Duron (up to 950 MHz)
So, at the time the software-only implementation of PhysX was written (around 2005, I'd say), there were still some SSE-less machines around.

Of course, Nvidia has no excuse for not having made an SSE-optimized build by now, with an x87 fallback.
 

rembo666

Distinguished
Apr 6, 2009
35
0
18,530
To be fair, it takes almost as much effort to make something run with SSE instructions as it does on the GPU. "Deprecated" is a strong word: x87 instructions are still very widely used and are much more efficient today than they were with the old 8087 co-processor. The only time programmers use SIMD (Single Instruction, Multiple Data) instructions is when they need to speed up processing on a massive amount of floating-point data.

I do agree with your premise, but not the language you use. Optimizing your code to use SSE instructions can give you about a 3x to 4x speedup. However, it takes effort. NVIDIA would rather spend the resources on the parts of PhysX that sell their GPUs than on maintaining the CPU compatibility code.

This strategy may backfire, since they still make money by licensing the PhysX technology to game developers. If they cripple their physics engine for non-NVIDIA setups, they will lose revenue and market share to Havok. Developers want their games to work on as many configurations as possible. If they can't have what they want from NVIDIA, they will go to someone who will provide it.
 

Syndil

Distinguished
Jul 10, 2003
261
0
18,780
PhysX has always been shady. This should come as no surprise to anyone. I'm not sure why it's still around. Well, other than the fact that Nvidia wants it to be around.
But with multi-core CPUs the norm now, why would I want to dedicate part of my graphics computing power to physics? Rhetorical question; I wouldn't. I'd rather have the game written to be optimized for multiple cores, and perhaps dedicate one core to physics, if it would help. It seems silly to have CPU cores idling while the GPU does double duty.
 
G

Guest

Guest
The VHS vs. Betamax format wars, HD DVD vs. Blu-ray, all over again.

When there is one open standard, it will become mainstream! Until then, it is still a sideline to the main game!

Nvidia could do better here by working to make PhysX the main standard, through proper code development and open licensing with incentives!

If they continue to keep it as closed as they do now, I see PhysX going away in the long run, replaced by a more open implementation.

 

dan117

Distinguished
Jan 22, 2010
122
0
18,680
[citation][nom]kelemvor4[/nom]DirectCompute/OpenCL/CUDA != PhysX. They serve different purposes.[/citation]
I was talking about their usefulness in game engines for physics simulations.
And DirectCompute/OpenCL/CUDA can do everything PhysX can, but not the other way around; that's why they are better than PhysX.
Also, CUDA is not as good as OpenCL and DirectCompute for games because it is exclusive to nVidia cards, which don't have the same performance/price ratio as ATI cards.
 

fausto

Distinguished
Jan 26, 2005
232
0
18,680
i wonder if enabling proper PhysX on the CPU, using all cores and SSE code, would lower the load on the PhysX-capable video card and/or boost CPUs to the point that a PhysX card is not required.
 

jednx01

Distinguished
Mar 31, 2008
448
0
18,810
In my experience, I don't like PhysX at all. I have had much smoother gameplay without it, and I don't feel like the physics are that much worse....
 

bastyn99

Distinguished
Jul 7, 2010
70
0
18,630
[citation][nom]waffle911[/nom]Why so lazy, Nvidia? "Better than it was" and "good enough" are not real excuses for neglecting a key technology you market to developers.I really wouldn't be surprised if AMD eventually brought about and advocated the development of a new "Open-PL" physics standard or something.[/citation]
That I'd like, but if AMD is gonna be as slow as in the past, and with the HD 6xxx series slated for the last half of 2011, I don't think it's gonna happen anytime soon
 
I was wondering when this biased stuff was going to show on Toms.

Taking the two primary points of the investigation:
1: PhysX isn't multithreaded by default
2: x87 is old and deprecated

My response:
1: DirectX, OpenGL, C++, Java, etc. are not multithreaded by default either
2: While NVIDIA's implementation isn't the best, it should be relatively simple for developers to replace the offending code with SSE instructions. In short: implementation is up to the developer [which is the same design concept that DirectX follows]

Nothing to see here.
 

Shin-san

Distinguished
Nov 11, 2006
618
0
18,980
[citation][nom]bastyn99[/nom]That Id like, but if AMD is gonna be as slow as in the past, and with the HD 6xxx series marked for last half of 2011, I dont think its gonna happen anytime soon[/citation]
AMD has been pushing an open physics API of some sort, but yeah, they also promised hardware-accelerated Havok. I haven't seen that shown off in a game. At least in AMD's case, optimizing physics on both CPU and GPU is important.
 
G

Guest

Guest
It is obvious. They need to sell their video hardware. The PhysX software implementation must run slower than their hardware equivalent.
 

f-14

Distinguished
moricon 07/12/2010 4:09 PM:
"VHS-Betamax format wars,HD/DVD or BD technology all over again."

don't bother referencing HD DVD or Blu-ray; it's like comparing stereo to Dolby, and that was blown out of the water the moment THX showed up. comparing HD DVD/Blu-ray is more like 8-track: they were both blown out of the water the moment you could download movies off the internet. their format doesn't matter, it could be LaserDisc for all anyone cares; it's obsolete the second something better comes along, which had already happened before HD DVD and Blu-ray even came to market.
the issue at hand is how much longer we are going to need a dedicated graphics card with the advent of multi-core CPUs. in hindsight, i see AMD scoring a huge win the moment they are able to take the ATI chip and put 1-2 cores dedicated to video onto the CPU itself, like intel has done with the new i5s. nvidia will be screwed if they don't find a way to work with CPU makers fast to incorporate their GPU chips into multi-core CPU technology. intel has been pushing their own in-house graphics chip for quite a while now; i seriously doubt they'd want to waste the money buying out a spendy company like Nvidia. they would be better off buying out Havok and screwing over Nvidia.
it's just a matter of time now, the clock is ticking, and Nvidia is going to be dust in the wind if they don't play some very serious catch-up immediately. i'm just sad that 3dfx went under after basically starting down the multi-core road 10 years ago, and Nvidia never bothered to continue that work until now, when it's almost too late.
very well played, AMD; a very well-played, forward-thinking strategy!
 