Dedicated PhysX gpu

Feb 23, 2018
What is the best decent priced GPU to use as a dedicated PhysX card? I'd like to keep it under $500. I already have two EVGA 1080Ti FTW3 in SLI but don't want to spend that kind of $ for a third one just for PhysX. Thanks in advance.
 
Aside from that, isn't PhysX built into Nvidia GPUs? I thought getting a dedicated PhysX card used to be done by AMD card owners, getting a low end Nvidia GPU just to do the PhysX stuff.... though my memory is hazy on this.
 
Feb 23, 2018
It is built into Nvidia GPUs, and I want to use another one as a dedicated PhysX card. Some games still benefit from it. You would think that my current setup would be enough to power any game, but I still get lag. I did a test on Assassin's Creed: Origins and dedicated one of my 1080 Tis to PhysX and got a steady 60fps. In SLI it would dip to 17-23fps at times. Even Rise of the Tomb Raider ran more smoothly at ultra settings with a dedicated card, but every so often it would freeze for a second. So I don't want to lose the power of SLI, but I would like to free the cards up to push the graphics of the games and have a separate one doing the PhysX... I just don't want to spend more on it than needs be. My biggest dilemma is that the third PCIe slot I have requires a single-slot card, which limits what I can do. Not sure if I can use a GT 1030 for PhysX, but I don't really have many other options among the newer GPUs.
 

Karadjgne

Titan
Herald
You are going in circles. There are quite a few games where sli optimization is crap to the point where actually having sli or CF makes things worse.

That's got nothing to do with PhysX, which you can set to run on the GPU or CPU. PhysX itself is a proprietary Nvidia API that came out of acquisitions, including Ageia. Nvidia cards have dedicated chipsets just for PhysX as a result.

Adding a 3rd card of anything lower than what you have now is pretty much pointless as a single 1080ti has more than enough processing power to deal with anything physX related you could throw at it.

Part of your issue might be the bridge used. If you're using the older stock junk, you could be losing out on a lot of what the 2nd card has to offer; an HB (high-bandwidth) bridge would be a better idea with those cards.
 

DSzymborski

Illustrious
Moderator
Assassin's Creed Origins, last time I checked (it's been a few months), didn't even support SLI. Many people have reported getting better FPS after disabling SLI, so there's nothing new here. It may not be quite as dead as PhysX, but SLI/Crossfire is definitely on the terminally ill list. You still see occasional games that scale well, but the trend has been for multi-GPU solutions to be considered less worthwhile both by the software companies and Nvidia/AMD. It's just not profitable, in a lot of cases, to devote such a large percentage of time and effort to a very small percentage of customers.
 
Feb 23, 2018
Yeah, I found that out about ACO after doing some more research. Very big disappointment. Why make a game more demanding than what current top tech can handle and then not even build in SLI support?? I can't believe more companies/developers aren't making use of SLI... it opens up so many more possibilities. And it would make investing in SLI more appealing to the mass market. "If you program it, they will buy..." It would seem that I spent a lot of money on a very cool looking paperweight then... And from my research, a Titan V isn't even better for gaming. Guess I have to wait until the next gen of Nvidia cards launches in order to get ultra settings. Very disappointed.
 

Karadjgne

Titan
Herald
It's simple. DX12 is built around mGPU (explicit multi-GPU), not SLI/CF. With mainstream Windows 10 being DX12 native, games are shifting focus that way and not really investing heavily in DX11, as a 1080 Ti is well capable of 1080p-4K gaming.

Unfortunately, coding mGPU effectively for both Nvidia and AMD in a game is a massive undertaking and can make the game 2x-3x larger than a single-card-only build. That's a bummer for download storefronts like Steam or EA's Origin, who don't want to spend out on the massive infrastructure that would require. The technology is available, but gamers don't really want to spend $200 on a game in order to support the servers/bandwidth necessary.

So single cards are where it's going to be for the foreseeable future, with multiple cards for offshoots like mining or CAD work etc.

The vast majority of gamers around the world are gaming happily on a 24" 1080p monitor at 60Hz. Not really worth the investment for the 1% who use 1440p/4K with dual cards. Yet.
 
Feb 23, 2018
Well, that's a real bite in the rump for me... I'm trying to game on a 3440x1440 100Hz curved monitor, and it's pretty crazy the extra power it takes to push the 21:9 ratio... Back when I still had my 1080p 60Hz monitor, I could get high-to-ultra settings in every game with my 980 Ti. But once I hooked this new one up, I had to scale everything back to low-to-medium (if I was lucky)!! What's crazy is that they sell all this nice hardware and charge a premium, yet software renders it pointless... It wouldn't even be that bad if I could just leave SLI on so it could be utilized if/when a program supports it, but it has a very noticeable negative impact on the games that don't, ACO being one.

Guess I'm just not understanding why it's such an undertaking to build mGPU support into games... Seems like a little extra coding to me... But then, I don't know anything about coding... Why couldn't they make a game without it and then have an extra "update" file that could be downloaded to add the capability? At least that way it would only be the 1% using the extra bandwidth.
 

DSzymborski

Illustrious
Moderator
Remember, 3440x1440 is nearly a third more pixels than 2560x1440. That's a larger difference than adding a whole 1280x720 monitor would be.
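For anyone who wants to check that pixel math, a quick sketch (only the resolutions mentioned in the thread are used; everything else is plain arithmetic):

```python
# Pixel counts for the resolutions discussed in this thread.
ultrawide = 3440 * 1440   # 4,953,600 pixels
qhd = 2560 * 1440         # 3,686,400 pixels
hd720 = 1280 * 720        #   921,600 pixels

extra = ultrawide - qhd   # 1,267,200 extra pixels to render every frame
print(extra / qhd)        # ~0.344, i.e. roughly a third more than 2560x1440
print(extra > hd720)      # True: more than a whole 720p panel's worth
```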

Also, it's not like they can add a couple lines and boom, two GPUs can cooperate well in a game. It's a significant undertaking. Even when SLI/CF was an actual thing, it was a constant battle to keep the drivers functional with SLI/CF in new games.
 

Karadjgne

Titan
Herald
Under the best conditions back then, the highest scaling I ever heard of was ~70% out of the second card; worst case was negative scaling. Since there was no cooperation between the cards, you'd pretty much get the primary card putting up its frames and the second card putting up nothing, ending up with 50% of the possible fps. As seen in ACO. CF was worse.
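To put those scaling percentages into fps terms, here's a back-of-the-envelope sketch. The 60 fps baseline is a made-up example, not a number from the thread:

```python
def dual_gpu_fps(single_fps, second_card_scaling):
    """Effective fps when the second card contributes a fraction of its
    theoretical output (1.0 = perfect scaling, 0.0 = no contribution)."""
    return single_fps * (1 + second_card_scaling)

base = 60  # hypothetical single-card fps
print(dual_gpu_fps(base, 0.70))   # 102.0 fps: the ~70% best case mentioned
print(dual_gpu_fps(base, 0.0))    # 60.0 fps: second card contributes nothing
print(dual_gpu_fps(base, -0.5))   # 30.0 fps: negative scaling, worse than one card
```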

I believe you can set individual profiles in NVCP per game, and that's what I'd do: set ACO for a single card with a PhysX secondary, and in games that do take advantage of SLI, profile the SLI. It's not the best-case scenario, but it's going to be the best the actual game engines will allow.

And no, it's not a simple thing, just from the standpoint that VRAM is different (GDDR5, HBM, GDDR3, etc.), and the types all use different controllers that have to be accounted for, not to mention the differences between AMD and Nvidia in how they control memory. mGPU isn't like SLI/CF, where the cards work separately together using just a single pool of VRAM; mGPU works both cards together as a single card, adding VRAM, speeds, power, etc. mGPU also makes it possible to use any cards: no longer would they need to be identical models, so you could pair an R9 290 with a GTX 1060. That alone will be a driver nightmare, as it's well known that having AMD and Nvidia drivers present together is a major source of GPU crashing. So that issue would also have to be worked out and addressed, as you'll not get Nvidia and AMD to cooperate on driver compatibility otherwise.
 


With such a setup you don't need a dedicated PhysX card; just let the GPU handle it automatically. Even a single 1080 Ti should be more than capable of handling GPU PhysX effects without the need for a dedicated card.
 


If you mean GPU PhysX, then the answer is yes. But PhysX as a whole has been a very successful alternative to Havok. In fact, PhysX is much less proprietary than Havok.
 


That's not quite right. Before Ageia was acquired by Nvidia, PhysX processing did use specific hardware, called a PPU, to calculate PhysX effects. Nvidia did not build specific PhysX hardware into their GPUs. What they did was port the PhysX software to CUDA so the effects could be accelerated using GPU compute instead of needing specific hardware like a PPU. That is the whole point of the unified shader architecture in GPUs: to let any kind of software run on the same or similar hardware instead of needing specialized hardware. Because of this, GPU PhysX can actually be made to run on AMD hardware directly, without needing any kind of GPU from Nvidia, and there was a third-party developer actually working on this in the past (and it actually worked on certain AMD GPUs). But surprisingly, the biggest obstacle back then was not Nvidia. Nvidia had actually already given their green light and was willing to help make it happen; it was AMD that refused the idea of PhysX running natively on their GPUs, because there was no marketing benefit for them in supporting tech coming directly from a competitor.
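The point about unified shaders is easy to illustrate: a physics step is just data-parallel math, the same kind of work any compute-capable GPU can run, which is why PhysX could be ported to CUDA at all. A minimal, purely illustrative toy in NumPy (this is not actual PhysX code, just the shape of a data-parallel physics update):

```python
import numpy as np

# Toy particle system: the same per-particle update a GPU compute kernel
# (CUDA, OpenCL, ...) would run across thousands of threads in parallel.
n = 4
pos = np.zeros((n, 3))                    # particle positions (x, y, z)
vel = np.array([[1.0, 0.0, 0.0]] * n)     # initial velocities
gravity = np.array([0.0, -9.81, 0.0])     # constant acceleration
dt = 1.0 / 60.0                           # one 60 fps frame

def step(pos, vel):
    """Semi-implicit Euler integration, vectorized over all particles."""
    vel = vel + gravity * dt
    pos = pos + vel * dt
    return pos, vel

pos, vel = step(pos, vel)  # advance the whole system one frame
```

The same loop-free, array-at-a-time structure is what makes this kind of workload portable to any GPU's shader cores rather than tied to a fixed-function chip.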
 


Both AC: Origins and Rise of the Tomb Raider did not use PhysX. I'm not sure about Rise of the Tomb Raider, but Ubisoft has a partnership with Havok: almost all, if not all, of their game engines use Havok.
 


There is no dedicated PhysX hardware on Nvidia GPUs.
 
Oct 11, 2018
I made a video test using an older-model GPU, a GTX 650 OC edition by MSI, as a dedicated PhysX processor, along with a single-fan GTX 1050 Ti by ASUS as the main card. Check it out:

https://youtu.be/_fiqAzYPpUQ

It depends on the gap between the generations of the two cards whether you will gain some benefit or just give your main GPU a bottleneck. I tested it in Rise of the Tomb Raider on a 1080p monitor (1920x1080) at the game's highest settings, which include hair physics for Lara Croft. FPS without the PhysX processor was 10-15; after including the 650 it jumped to an average of 50-ish fps, which is awesome. As for power consumption, the 1050 Ti uses only 75 watts and the 650 only 64 watts.
 
