SLI/Xfire Physics Vs. Ageia PhysX

I'd say this topic is pretty much put to rest. It's been a fun ride, though. 😀

Oh, well. Goodnight, it's time to leave work now.

BTW, why is your finger blue?
 
Here is a link to AGEIA's PhysX card in action in Ghost Recon Advanced Warfighter, compared to a CPU handling the physics.
I like the way the doors and tires come flying off 😀
 
Here is a link to AGEIA's PhysX card in action in Ghost Recon Advanced Warfighter, compared to a CPU handling the physics.
I like the way the doors and tires come flying off 😀

It looks nice, but now let's see that with a multi-core CPU and a GPU doing physics 😉 There is no doubt that games that use AGEIA will look better than standard, non-accelerated games. The problem is that AGEIA is facing both the GPU makers and Intel/AMD. It is going to come down to cost, market penetration, and support.

One thing AGEIA has going for it: it got its middleware into both consoles and UE3, and it is affordable. In the past Havok has been much more robust, but just as with hardware, the more robust solution does not always win out.
 
Well, for me, I will definitely get an AGEIA card, since I'm a Ghost Recon fan and GRAW is especially made for it.
It is also going to be used in UT2007, which is a CPU-limited game and one of the most popular games out there, so I assume AGEIA's card will get off to a good start.

As for NVIDIA SLI physics, personally I wouldn't like my graphics card to render frames and do physics; it's bound to eat into the GPU. AGEIA's PhysX card has 128MB of GDDR3 dedicated to it, whereas a CPU would use system RAM, and your graphics card would use its VRAM, which could otherwise go toward AA, AF, and higher resolutions in upcoming games.
So it really does make sense to have a dedicated card to handle physics.
Also (last point), you don't have to keep upgrading an AGEIA card like a graphics card and CPU, so it is a good, safe long-term investment.
:)
 
Well, what I think it will allow for is fully deformable terrain and environments in multiplayer games (not necessarily MMOs, though, because it'd be bad if people blew a hole through the world).
 
As for NVIDIA SLI physics, personally I wouldn't like my graphics card to render frames and do physics

Interestingly, in today's news it was noted on the Alienware site that 7900GTX SLI is not available with PhysX. So it's one or the other.

Also (last point), you don't have to keep upgrading an AGEIA card like a graphics card and CPU, so it is a good, safe long-term investment.
:)

A little birdie (ok, the AGEIA press releases) already announced "value" (read: slower) SKUs and that they are working on future products.

What you are saying is akin to when the first GPUs came out: "I will never need to upgrade!" Sadly it does not work like that.

As the market expands, we will see how it plays out, I guess. I am curious whether Creative will get in on the action... which brings up an interesting point. Sound cards used to be all the rage. I remember my dad's first SB and when I got my SB Pro. I remember getting an MX300 with A3D in 1998. I have an Audigy 2 ZS currently. The problem, from a market perspective, is that even a $50 sound card--which sounds better than integrated audio and offers better performance--is dying off. Why? Because the standard parts in a PC work well enough.

Call me skeptical, but if $50-$100 sound cards, which had great market penetration at one point, were weeded out by slower but cheaper integrated sound... how is AGEIA going to fight an uphill battle with no market penetration, more competition (Creative had a virtual monopoly), and an expensive product?

EAX has a lot of dev support, but in the long run it is pretty much moot. You can get much the same experience elsewhere. The fact that AGEIA does have faster PPUs in the pipeline indicates that this is not a one-shot deal.

Well, what I think it will allow for is fully deformable terrain and environments in multiplayer games (not necessarily MMOs, though, because it'd be bad if people blew a hole through the world).

The problem with online play is that whatever you do in your world has to be transmitted to everyone else. You can do pre-canned destruction (see: Black), but once you talk about dynamic, real-time deformable terrain and destructible content, ALL of that state has to be passed over the internet. Servers lag with 64 people shooting guns; the thought of people blowing up buildings with tens of thousands of individual parts is asking too much (unless it is non-interactive and pre-canned a la Soldner).
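Just to put rough numbers on that, here is a quick back-of-envelope sketch. Every input is an assumption picked purely for illustration (10,000 debris pieces, a 24-byte state update per piece, a 20Hz tick rate, 64 clients):

```python
# Back-of-envelope estimate of the bandwidth cost of syncing dynamic
# debris to every client. All inputs below are illustrative assumptions.
pieces = 10_000        # physics objects from one collapsing building
bytes_per_update = 24  # e.g. position (3 floats) + orientation (3 floats)
updates_per_sec = 20   # a typical server tick rate
clients = 64           # players on the server

per_client = pieces * bytes_per_update * updates_per_sec  # bytes/sec
server_out = per_client * clients

print(f"Per client: {per_client / 1e6:.1f} MB/s")  # ~4.8 MB/s
print(f"Server out: {server_out / 1e6:.1f} MB/s")  # ~307 MB/s
```

Nearly 5 MB/s per client is orders of magnitude beyond a typical broadband connection, which is exactly why pre-canned destruction is the practical route.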

That is the problem with physics. Once it impacts gameplay (i.e., not just junk blowing up and disappearing with no effect on the world), you make online play a huge hurdle, and you also divide the userbase, since many users won't have the hardware to accelerate it.

Sticky problems. Publishers, in my estimation, will follow the money, which is where the install base is. It is important for AGEIA to sell millions of PPU cards.
 
I wanted to add something, but then I read those two posts and said... umm it's pretty much well covered.

Primarily, the biggest selling point for me is that EVERYONE needs a GPU, while not everyone needs a PhysX card. The fact that you can use something you may already have to increase playability a bit, without buying a dedicated card, is great.

And if it came down to buying a second card, I would buy a second graphics card, for the benefits when games didn't need that much physics punch but might need more rendering punch. I'd rather have the proven dual benefits than buy a physics card that might only be helpful in something like tree-sorting. 8O

In other words, I concur.

I think the great interview articles on this are the ones from FiringSquad:

http://www.firingsquad.com/features/havok_fx_interview/

http://www.firingsquad.com/features/ageia_physx_press_conference/

I'd say Havok, with HavokFX as the lead on this, is far more realistic and attractive than AGEIA's solution. And if Havok makes it a game-engine upgrade, then games like Oblivion could potentially add it to their current builds far more seamlessly.
 
Hey, this may have been answered already, but will any of the physics solutions do anything for my current games? Will current games need patches, or have to be totally reprogrammed? And can the game servers handle all the extra physics data? Increased bandwidth? Being a server admin who pays out of his own pocket, it will be hard to justify more bandwidth just to see a glass window break in UT2k4.
 
I'm a little skeptical about the performance and abilities of PhysX. Doing a little reading, I saw that NVIDIA stated the GPU doing the physics calculations would send the data to the DX9 driver without the intervention or use of the CPU... :?:

Next off, most GPUs, namely NVIDIA's that I can speak of, are able to process physics for lighting and not much else. The biggest problem: current memory controllers don't allow for the necessary (or minimum) caching to make physics calculations truly effective or possible. (Perhaps the following generations will, though.)

A dedicated PPU seems to be the best possible solution.
 
So if a PPU takes the load off the CPU, what else is there for a CPU to do in the game?

-AI
-loading
-that's all I can think of

What's next, an AI processing unit? And a loading unit?
 
The actual map location and centerpoint information.
Keeping relational track of all other aspects and the overall changes for an affected area of the map (whom you have talked to, where the bodies/bullet holes are).
Keeping time for the story and its linear progression (this is saved on the hard drive, but written and retrieved through the CPU).
Network communication.
And relaying how the tasks from the dedicated parts (audio card, graphics card/PhysX card) interact with the rest.

There's still a lot to do, and AI is a progressively larger task, especially in games like Oblivion, where Radiant AI affects characters not directly interacting with the current scene or main character.
 
ATI announced physics acceleration as a possibility 6 months ago, but here is their new response:

http://www.pcper.com/article.php?aid=226

Since I have seen some of the GPGPU tests, I can say that in general ATI can at times have a 5-6x lead. How that translates to physics, who knows. But there is good info in here--namely, confirmation that MS is working on a universal API so ATI/NV/AGEIA can all compete on level ground and let consumers choose the best product; also confirmation that ATI plans to allow customers to use their old GPUs as dedicated PPUs, i.e., recycle your old GPU. Also a nice tidbit about how physics can run either on a standalone GPU or a dedicated one. I know some cringe at having a GPU do both, but let's say your new X1900XTX does 375 GFLOPS. 10% of that is nearly 4x the floating-point performance of a CPU. So it is like going from 66fps to 60fps (a 10% hit) while getting 4x the physics performance of a CPU. Ok, not quite that simple or clean, but you get the point.
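Spelling out that arithmetic (the 375 GFLOPS figure is the quoted X1900XTX peak from above; the ~10 GFLOPS CPU figure is my rough assumption for a current desktop chip):

```python
gpu_gflops = 375.0    # quoted peak for the X1900XTX
cpu_gflops = 10.0     # rough assumption for a current desktop CPU
physics_share = 0.10  # fraction of the GPU given over to physics

physics_budget = gpu_gflops * physics_share  # 37.5 GFLOPS
print(f"Physics budget: {physics_budget:.1f} GFLOPS, "
      f"~{physics_budget / cpu_gflops:.1f}x a CPU")  # ~3.8x, i.e. "nearly 4x"

# The frame-rate cost of giving up that 10% of the GPU:
fps_before = 66
fps_after = fps_before * (1 - physics_share)
print(f"{fps_before} fps -> {fps_after:.0f} fps")  # ~59 fps, roughly the 66 -> 60 cited
```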

But it is not unimaginable that if you are at 60fps at 1600x1200, then by moving to 1280x1024 (roughly a 30% reduction in pixel shader and bandwidth load) you have freed up a ton of resources for physics-type tasks. We are talking multiple orders of magnitude. And DX10 should open up more doors and efficiency.
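The pixel math behind that resolution drop checks out:

```python
hi_res = 1600 * 1200  # 1,920,000 pixels per frame
lo_res = 1280 * 1024  # 1,310,720 pixels per frame
saved = 1 - lo_res / hi_res
print(f"Pixel reduction: {saved:.0%}")  # ~32%, roughly the 30% cited
```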

Obviously GPUs are not the "ideal" solution in terms of efficiency, but they are a proven design with great market penetration and 100% overlap with the intended physics audience.

And most important is economy of scale. PhysX is a small chip (125M transistors) with a 128-bit bus to 128MB of memory, yet it is going for $275. I am looking at a 7900GT right now that, while only in a similar price ballpark, has a chip 2.25x bigger, 2x as much memory, and 2x the memory bandwidth. With a PPU you are paying a LOT for what you are getting. Even if GPUs were only 60% efficient, they would still come out ahead on dollar-to-performance.
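A minimal sketch of that dollar-for-dollar comparison, using the figures above plus an assumed ~$300 street price for the 7900GT:

```python
cards = {
    # name: (price_usd, transistors_millions, memory_mb)
    "PhysX PPU": (275, 125, 128),
    "7900GT":    (300, 125 * 2.25, 256),  # price is my assumption
}

for name, (price, trans, mem) in cards.items():
    print(f"{name}: {trans / price:.2f}M transistors/$, "
          f"{mem / price:.2f} MB/$")
# PhysX PPU: 0.45M transistors/$, 0.47 MB/$
# 7900GT:    0.94M transistors/$, 0.85 MB/$
```

Even discounted to 60% efficiency, the GPU's ~0.94M transistors per dollar still beats the PPU's ~0.45M, which is the dollar-to-performance point.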

Who knows how efficient they are and how they compare at this point, but we do know that GPUs, like the PPU (at least from the little we know from AGEIA's patents), are streaming processors with a lot of brute force and a high-bandwidth connection to dedicated memory. PPUs are a specialized solution and will be more efficient; the question is whether that efficiency can overcome the brute force of the GPUs, the market conditions, and the cost factors (not to mention versatility/usefulness).

Ultimately it comes down to cost and a killer app that drives consumer interest. If we get to a point where games have to juggle multiple options (PPU | GPU | SLI | Dual Core | Quad Core | None) before a clear winner emerges, well...
 
also confirmation that ATI plans to allow customers to use their old GPUs as dedicated PPUs, i.e., recycle your old GPU. Also a nice tidbit about how physics can run either on a standalone GPU or a dedicated one. I know some cringe at having a GPU do both, but let's say your new X1900XTX does 375 GFLOPS. 10% of that is nearly 4x the floating-point performance of a CPU. So it is like going from 66fps to 60fps (a 10% hit) while getting 4x the physics performance of a CPU. Ok, not quite that simple or clean, but you get the point.

Yeah, and you know what, all the old VPUs I've owned I've GIVEN away, and some things I couldn't give away if I tried (actually, I'm glad to have the PCI AIW back, because it is still a solid card for Win2K/98SE). So if I KNOW this is an option, I might get an X1900 AIW very, VERY secure in the knowledge that it will still be useful in the DX10 era as a PPU. Better than the hassle of selling it off to a friend (I'm not much of an eBayer, unlike guys like Cleeve and Pauldh, who seem to be able to spin their old hardware into almost the entire replacement cost of sweet fresh gear [new lamps for old?]). So for me this holds a lot of additional promise, at no additional cost, that I didn't have before.

But it is not unimaginable that if you are at 60fps at 1600x1200, then by moving to 1280x1024 (roughly a 30% reduction in pixel shader and bandwidth load) you have freed up a ton of resources for physics-type tasks. We are talking multiple orders of magnitude. And DX10 should open up more doors and efficiency.

Definitely, and if you think about a 'cheating' way of implementing the most efficient use: you overbright the screen (dropping the visual/pixel load), calculate the initial trajectories and close interactions, and then continue with the rendering of the explosion. This would give you the flash-bang/stun effect of an explosion while also giving the card time to compute trajectories, helping to ease that instantaneous spike in load. Of course, self-interacting particles will multiply in a confined space or under a heavy number of collisions (like a falling wall), but for most 'explo-zee-uhns' this should give some initial breathing room until efficiencies improve.
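A toy sketch of that staging idea, assuming a simple frame loop (all names here are hypothetical, nothing is a real engine API):

```python
# Hypothetical illustration of "flash-bang staging": spend the frames
# where the screen is washed out on physics instead of pixels.
FLASH_FRAMES = 10  # frames the overbright flash covers the screen

def explosion_frames(total_frames=20):
    """Yields (frame, render job, physics job) to show the budget shift."""
    for frame in range(total_frames):
        if frame < FLASH_FRAMES:
            # Screen is mostly white: rendering is nearly free, so the
            # budget goes to settling the initial debris trajectories.
            yield frame, "fullscreen flash", "coarse trajectory pass"
        else:
            # Flash fades: back to full rendering plus small physics steps.
            yield frame, "full scene render", "incremental physics step"

for frame, render_job, physics_job in explosion_frames():
    print(f"frame {frame:2d}: render={render_job!r}, physics={physics_job!r}")
```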

Obviously GPUs are not the "ideal" solution in terms of efficiency, but they are a proven design with great market penetration and 100% overlap with the intended physics audience.

I wouldn't say they are ideal, but for me they are a 'better' solution than a dedicated physics card, simply because they have other uses, and in some situations adding a second VPU will increase frame rates in games that aren't PPU-accelerated, since it can then push pixels. When SLI and Crossfire allow unbalanced pairings (like X1800+X1900, or an X1600), then you'll truly be able to put your previous card to good use and increase its utility on both the physics side and the graphics side.

And most important is economy of scale. PhysX is a small chip (125M transistors) with a 128-bit bus to 128MB of memory, yet it is going for $275. I am looking at a 7900GT right now that, while only in a similar price ballpark, has a chip 2.25x bigger, 2x as much memory, and 2x the memory bandwidth. With a PPU you are paying a LOT for what you are getting. Even if GPUs were only 60% efficient, they would still come out ahead on dollar-to-performance.

Being an economist, I appreciate that, and it's a winner for ATI and nV. For the consumer, though, the key is that the PhysX card doesn't have as quick a product cycle. Graphics has a faster production refresh rate, so that GF7900GT/GTX will likely cost as much or far less in only a few months, since there will be a better part out to knock it out of contention, and it will still be able to do more overall. That will keep happening, to the point where the PPU is competing against a $50-100 graphics card on eBay. Because while most people will NEED a new VPU every year or two, most won't consider a new PPU, and eventually, IMO, the VPU will far surpass the PPU, at least until AGEIA makes enough money off its current line to refresh the product to next-generation performance. And the whole time you can add to a GPU setup and swap cards in and out. The fact that a VPU can do both is also better for cheap people who don't want to pay for two things. Of course, if AGEIA added functionality quickly the PPU might stay competitive, but it would remain a costly way of competing for the manufacturer.

Ultimately it comes down to cost and a killer app that drives consumer interest. If we get to a point where games have to juggle multiple options (PPU | GPU | SLI | Dual Core | Quad Core | None) before a clear winner emerges, well...

Exactly, and I can easily see multi-VPU + multi-core handling all the elements well enough that a separate third card just becomes another 'variable' for people.
 
