Running PhysX natively on the CPU with an ATI card


gameranew22

I'm curious about how to run PhysX on my Intel i5-2500K CPU if I have an ATI card with the latest drivers. I've read that ATI drivers will disable PhysX if they find it on your system, but how is that possible? I've SEEN people run PhysX-based games with full PhysX effects off their CPUs in contemporary major releases where PhysX is built into the engine, like Metro 2033 and StarCraft. I just want to know, if it is possible, how to do it with an ATI card.

I've also read that Nvidia deliberately hobbles a multi-threaded CPU's ability to run PhysX by writing the code as a single-threaded application, like the majority of games, and that CPU PhysX does not scale well across cores at all.

If someone could tell me precisely whether and how I could run PhysX on my i5-2500K, I would be really grateful, as it would really affect my decision about which GPU to purchase in the next couple of days, especially with the 560 Ti 448 Core supposedly coming out on Tuesday.
 
I don't think that can be done; at least, I haven't heard of anyone running PhysX off the CPU and getting good framerates. The typical solution for AMD cards and PhysX involves buying a lower-end Nvidia card and using it as a dedicated PhysX card while the AMD card handles the rendering. There are hacked drivers out there that allow the AMD card to work alongside the Nvidia PhysX card. The only way I can see PhysX working well on the CPU is if someone rewrote the code in PhysX games to let the CPU use its full potential when computing the PhysX effects, and there would probably have to be a separate fix for each game that uses GPU-accelerated PhysX; there's no one-size-fits-all fix to allow it.
 
Nvidia released a beta driver a year or more ago in which they left AMD unblocked. If you Google for it you'll find it, and then you can see if it installs on your system. From what I remember, AMD drivers don't disable PhysX; Nvidia's PhysX driver disables GPU PhysX if it detects that your primary video card isn't an Nvidia one. I just did a quick Google search and stumbled upon the Hybrid PhysX Mod; check that out and see if it helps.
 
Well I did my research and here's what I found:

Special thanks to Helltech; Tom's article on the issue was great and perfectly understandable to someone who doesn't really understand the inner workings of CUDA, parallel processing and stream units (like me!).

PhysX runs on the CPU. Period. It will work, but much less effectively than a GPU-based PhysX solution (a single Nvidia card, or a dedicated Nvidia PhysX card paired with either an ATI card or another Nvidia card as your rendering card), meaning you can run an ATI card in your system and still get some PhysX into the games that have it.
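For anyone curious what "running on the CPU" looks like from the code side, here's a rough, untested sketch of how a CPU-only scene gets set up with the PhysX 3.x SDK, as far as I understand it from the docs (the thread count and the 600-frame loop are just made-up numbers for illustration). The point is that the scene only gets a CPU dispatcher and no GPU/CUDA context, so all the simulation work stays on your processor:

```cpp
// Rough, untested sketch: CPU-only PhysX scene setup (PhysX 3.x SDK).
// No CUDA/GPU context is ever created, so simulation runs on CPU cores.
#include <PxPhysicsAPI.h>

using namespace physx;

static PxDefaultAllocator     gAllocator;      // default SDK allocator
static PxDefaultErrorCallback gErrorCallback;  // prints SDK errors to stderr

int main()
{
    // Core SDK objects.
    PxFoundation* foundation = PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics*    physics    = PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    // The important bit: a CPU dispatcher with a worker-thread count
    // (2 here, purely as an example). Nothing GPU-related is attached.
    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(2);
    sceneDesc.filterShader  = PxDefaultSimulationFilterShader;
    PxScene* scene = physics->createScene(sceneDesc);

    // Step the (empty) scene at 60 Hz for ten seconds; both calls run on the CPU.
    for (int i = 0; i < 600; ++i)
    {
        scene->simulate(1.0f / 60.0f);
        scene->fetchResults(true);
    }

    scene->release();
    physics->release();
    foundation->release();
    return 0;
}
```

PxDefaultCpuDispatcherCreate(2) is the only CPU-parallelism knob in that sketch; how well the solver actually scales across those worker threads is exactly the part people argue about.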

There are caveats, though. Basically, what it comes down to is that CPU PhysX was intentionally not optimized to work as efficiently as it could have been, clearly for business reasons: Nvidia could have optimized it for multi-threaded CPUs but chose not to. It's not a real gripe with Nvidia, because from a strictly business standpoint they wouldn't want to lose users to ATI if more people knew they didn't need an Nvidia card to run PhysX; it's a big enough selling point that some people go that route just to reduce the hassle and, supposedly, improve the performance of their PhysX-enabled games.

Is it an intentionally dishonest marketing/business decision on Nvidia's part? I don't think so, but it certainly isn't completely up-front about the situation either, and it clearly affects the marketplace and the development environment as it stands. Why ATI doesn't pony up the dough and make its own Havok/PhysX-style "enhanced physics" solution is something I can only guess at: legal (but unfair?) patents, CUDA, a focus on stream processors and raw power instead of other kinds of calculations... it would take someone much more in-the-know than me to say for sure.

Honestly though, looking at Nvidia's list of PhysX-enabled games, current and projected, the list is super underwhelming, especially considering that the ABSOLUTE BEST examples of in-game PhysX implementation are Dark Void and Mafia 2. I mean, just type "dark void physx", "mafia 2 physx" and "metro 2033 physx" into the youtubez. And those are supposed to be the best, most current examples of in-game PhysX? That is hardly worth a dedicated $75 GTS 250, let alone the beefed-up price of a 580... Makes you wonder why ATI cards are so expensive... I guess that is good marketing on Nvidia's part, because they've convinced everyone, formerly myself included, that people NEED an Nvidia card because of PhysX. Sure... if you're playing Dark Void and beating up hookers in Mafia 2, I guess.
 
I think both of the Batman games and Metro 2033 are much more popular than either of the games you mentioned, and they utilize PhysX more as well.

Not to say you don't have a point, just saying there are some games that utilize it a bit. The Batman games in particular seem to be very popular.
 
Not from what I've seen... Metro and Batman may be more popular, but Metro and Batman: AA certainly don't utilize PhysX more. Arkham City, Mafia 2 and Dark Void are definitely the best; have a look:

Dark Void PhysX: http://www.youtube.com/watch?v=BGCZtXg5LyA

vs.

Batman: AA PhysX: http://www.youtube.com/watch?v=vINH6Z9kqgI

vs.

Metro 2033 PhysX: http://www.youtube.com/watch?v=d47DbUOq028

vs.

Mafia 2 PhysX: http://www.youtube.com/watch?v=vcpEc6kC0HM

vs.

Arkham City PhysX: http://www.youtube.com/watch?v=9_UNRp7Wrog


Before Arkham City was released this year, Dark Void had by far the most PhysX-related resources dumped into it. I found a graph somewhere comparing it to Mafia 2, but I can't find it now 🙁
 
That video doesn't really show Metro's use of PhysX. But it's one of the few games I have played where I can actually tell a difference with it on and off, whereas with Mafia II I could turn it off and wouldn't notice while playing. With Mafia and Batman you kind of have to look for it to see it.

Haven't played Dark Void, so I can't say.
 
Really, you can't notice it in Arkham City? What about all of those papers flowing around Batman as he runs, or bullets riddling windows? I mean, in Dark Void the best of the PhysX stuff comes out in the weapon effects, and those happen EVERY time you use that weapon. As far as I've seen of Metro 2033 (I haven't played it yet, but I just bought it on Steam and am waiting to buy a new GPU tomorrow or Tuesday), it mostly produces better smoke effects and bullet decals. Oh, and knocking down icicles. I mean, I hope I'm wrong about that, because I do want to play that game and be blown away by the atmosphere (smoke, bullet casings dropping, reflected light sources, etc.), but I haven't really seen it in the videos I've found.

I'm totally all-in for better physics in games, especially fluid and environmental effects, but I think the marketing and presentation of PhysX has kind of misled a lot of people into thinking every game has PhysX or, on the other hand, that everyone needs GPU-powered PhysX support for games to look great. This is partly bias speaking, as I haven't played a PhysX-enabled game with a PhysX card in quite some time, but I've seen plenty of my friends playing them and videos of it online, and honestly, I've seen incredible displays of physics in standard games as well, like Deus Ex: Human Revolution or Mass Effect 2 on an ATI card.
 
Naw, I think I get too into Batman to worry about the graphics. 😛 Or I haven't played with it long enough with PhysX off.

Like I said, I haven't played Dark Void, so I can't say. But I have played Metro 2033 a lot with it off (when I thought it was causing me to lag) and a lot with it on, and I could tell a difference.

I also totally agree with you about PhysX and how it's marketed, no arguments there. It's not necessary whatsoever.
 
Yeah, I've heard that about Metro 2033 as well, that it suffers weird lag with advanced PhysX enabled. I'll take your word for it that there is a noticeable difference with it on or off; I personally hope there is, because I've wanted to play that game for so long and get really immersed in it.

Hopefully I can find a good deal tomorrow on Cyber Monday for the upcoming 560 Ti 448 Core, or a marked-down 560 Ti with Arkham City bundled with it; I'm really enjoying Arkham Asylum on the 360 right now. Fun little game :) It's too bad that tech companies exploit the lack of public knowledge about tech specifics, like the quality and applicability of things like PhysX, with less-than-true marketing campaigns, but I guess lots of big, important companies in other fields do that as well.

Isn't capitalism a wonderful thing sometimes?
 
If you do get a card with Arkham City for free, don't count on playing it anytime soon if you want to use the DX11 features. The PC port of Arkham City is an absolute mess right now, with DX11 not working properly for all users and lots of others running into crashing and disappearing save games. Apparently the only thing they did during the five-week delay was ensure that SecuROM worked.
 


Awesome-fun-time-zone!

PC gaming is so l33t but also so not l33t... I are sad.
 


I think you need to do a bit more research; timelines and incidents would be a good place to start.
 
I'll take that; I'm sure I could do more research, mousemonkey. What do you mean by "timelines and incidents", though?

From what I gathered from Tom's and SemiAccurate, CPU PhysX does its calculations through old x87 floating-point instructions, which work off a clunky register stack, rather than through SSE2, and it doesn't make full use of multiple cores either. Had Nvidia targeted SSE2 scalar code, which is mostly MUCH more efficient than x87, plus proper multi-core support, CPU PhysX would have come much closer to what parallel processing can actually deliver. From what I've read, at least; all of this tech talk is completely new to me.
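Just to make the x87-vs-SSE2 part concrete (this is my own toy example, nothing from the actual PhysX source): the exact same floating-point C++ can be compiled either way, and with GCC the choice comes down to the -mfpmath flag. Physics middleware does small per-body computations like this millions of times a frame, so which instruction set the compiler emits matters:

```cpp
// Toy example (my own, not from PhysX): the same scalar float math can be
// compiled to x87 or SSE2 code purely via compiler flags.
//
//   g++ -O2 -m32 -mfpmath=387 dot.cpp       -> x87 (stack-based FPU registers)
//   g++ -O2 -mfpmath=sse -msse2 dot.cpp     -> SSE2 scalar (flat xmm registers)
#include <cstdio>

float dot3(const float* a, const float* b)
{
    // Whether each multiply/add below becomes an x87 or an SSE2 instruction
    // is a code-generation choice made when the library is built, not
    // something a game can change afterwards.
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

int main()
{
    float a[3] = { 1.0f, 2.0f, 3.0f };
    float b[3] = { 4.0f, 5.0f, 6.0f };
    std::printf("dot = %f\n", dot3(a, b));
    return 0;
}
```

Compile it both ways and compare the disassembly: the first build juggles the x87 stack (fld/fmul/faddp), the second uses flat xmm registers (mulss/addss), and SSE also opens the door to 4-wide packed math, which is where the bigger wins come from.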

And it makes sense that Nvidia would do this, really, at least to a newbie techie like myself, but someone versed in the tech marketplace nonetheless. SSE2 has been standard in all of Intel's chips since 2001 and in AMD's since around 2004; it is a much better processing solution than x87, but Nvidia claims they had to code for x87 because of single-threaded optimization for consoles. My question, then, is: why did Nvidia only admit this when they were called on it? They never went out of their way to state that this is what they had done; they were more focused on console optimization and keeping their monopoly on PhysX for PC than on informing the public. Sounds like a hard-and-fast business decision to me, one that isn't best for the consumer but is best for Nvidia in the short term. Competition with AMD must be pretty stiff if they are using tactics like that, but of course there are two sides to every story, and I'm sure AMD has been less-than-totally up-front about all of its practices and business plans as well...
 

If you go back to the start and find out which company was the first to talk about doing physics computations on the GPU, then move on from there to see who actually got something up and running, how that came to end up in Nvidia's hands, and which AMD employee took exception to that, then you may get a different view of how things ended up the way they are.
 
You gotta help me out here. I know Havok was first, and I found the original press release about Nvidia and Havok partnering together, but I don't know why, and I don't know which AMD employee you are referring to. An executive? An engineer who tried to steal plans?

I'm genuinely interested and would like to know more, but I've already spent so much time searching for this stuff that I'd like to be lazy and have you direct me, to be honest 😛
 

Havok is CPU-based physics, not GPU; PhysX can be run on either the CPU or the GPU. When is that article dated, by the way?
 
Hmmm, interesting indeed, but where does that point me for figuring out the other stuff you mentioned about ATI employees, exceptions and agreements? Did Nvidia simply beat out ATI with a better physics calculation system on their GPUs? Why doesn't ATI have its own proprietary physics system that it touts around as much as team Green does?
 

You are starting to ask the right questions, but I would rather you found the answers yourself, as then I could not be accused of trying to make you see things from my point of view.
 


ATI chose to stick with open software support using OpenCL, which can do the same stuff that PhysX does. The only thing lacking is that there isn't a large development kit put together yet. If/when that happens, PhysX will likely die, as both vendors' cards support OpenCL.

Actually, there is probably already some support for OpenCL now, and if there isn't, you won't likely see any fanfare about it; since both companies support it with their DX11 hardware, there would be no need to advertise it outside of the normal DX11 support.
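Just to illustrate the "both vendors support it" point, here's a minimal sketch (my own, nothing vendor-specific) that asks the OpenCL runtime for GPU devices; AMD and Nvidia drivers both answer this same query, which is exactly why physics middleware built on OpenCL wouldn't care whose card is in the machine:

```cpp
// Minimal sketch: list every GPU the OpenCL runtime can see, regardless of vendor.
#include <CL/cl.h>
#include <cstdio>

int main()
{
    cl_platform_id platforms[8];
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(8, platforms, &numPlatforms);

    for (cl_uint p = 0; p < numPlatforms; ++p)
    {
        char platformName[256] = { 0 };
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof(platformName), platformName, NULL);

        cl_device_id devices[8];
        cl_uint numDevices = 0;
        // AMD and Nvidia drivers both answer this same query for their GPUs.
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                           8, devices, &numDevices) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < numDevices; ++d)
        {
            char deviceName[256] = { 0 };
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(deviceName), deviceName, NULL);
            std::printf("%s : %s\n", platformName, deviceName);
        }
    }
    return 0;
}
```

Of course, enumerating devices is the trivial part; someone still has to write and ship the actual physics kernels on top of it, which is the "large development kit" piece that's missing.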
 