Is Ageia's PhysX Failing?

Who cares about destructible terrain? It's not going to help you win.

Oh really? So if I can bring the ceiling down on top of you in a game like Battlefield 2, thereby killing you, that doesn't help me win?

Yes, Ageia's solution is expensive at the moment. But I know as well as everyone else that if, say, Half-Life 3 were released with support for Ageia's card and offered tangible benefits, everyone would go buy one.

Yes, Ageia is going to have a hard time selling their product at the moment because of the lack of support. But that doesn't mean it's a piece of crap. It just means we're conscious consumers who don't run out and buy something that, at the moment, isn't going to do anything.

So get off the bandwagon of "Oh, it doesn't do anything now, so it sucks." The same has been said about every new piece of technology that's ever come out. Another fact is that people buy far more useless things than an Ageia physics card in this day and age. Do you need $4000 spinners on your car when all they do is make you look like an idiot? Do you need a Ford Excursion or a Hummer H2 to take you to work when you are a single person with no kids to haul and no boat to tow? Do you need a 65" plasma TV when you sit 3 feet away from it? The answer is no, but people buy those things every day.
 
I also don't plan to buy the first-generation cards. But I do see a benefit in the future - maybe when PPUs are on a video card or the motherboard. I don't see developers writing two sets of code. I think there will be a standard physics engine that reads the software commands and then decides whether to use dedicated hardware or software emulation. Just like video cards: you have good, sucky, or no 3D, but you always get some kind of 3D.
If Ageia is a leading physics expert, I don't see them dying so soon - even when the hardware moves onto the video cards. They will just license some technology.
Again, I will wait and see what happens. I was not impressed with the old 3D cards until I saw real eyeball-popping 3D high-res full-color Quake 1. Then I was hooked. When I can play something like Oblivion with more realistic trees, grass, water, and objects that don't sink into the rocks, while getting 10x the framerate, then I might think differently about hardware physics.
Even if you don't care about looks when trying to win, you should care about how the physics behaves. Then you can expect that things like bushes or a 1-meter fence won't behave like an impassable wall.
 
The question I have is this:
Is PhysX actually just a pretty video effect, or can it actually affect (and therefore unbalance) the gameplay itself in a multiplayer environment?

e.g. imagine you're playing a multiplayer FPS where only half the players have PhysX.

PhysX players could kill each other with a PhysX-only side effect, such as flying shrapnel from shooting something nearby. Non-PhysX players could try exactly the same thing, but software-only physics would produce less shrapnel with a different flight path, and it would just be eye candy with no damaging behaviour. So now there's an unfair difference in actual gameplay depending on whether you have a PhysX card or not.

So in order for multiplayer games to be fair, developers will have to cater to players with the lowest-spec machines, so they'll purposely disable PhysX for everyone else. Now you've just paid $350 for purposely disabled hardware.

Exactly. I'm so glad somebody pointed this out. Yet another difficulty in adding physics to games. This could be dealt with by having "Advanced Physics" servers and games that require the settings to be at a particular level, but it would divide gamers somewhat. Unlike graphics, you can't just decrease the number of particles and expect everyone to have the same experience.



Guys, this is a moot point. PhysX could in no way adjust the outcome of a player's ability to deal with his environment enough to damage playability for one player over another.

Seriously, how could it? At least any more than one player playing at 640x480 against another playing at 1600x1200, both at the same frame rate? Of course the higher resolution could be an advantage. Adding a physics processor would add no more advantage than that same scenario. If you have less hardware, you are obviously at a disadvantage no matter what mix is thrown into the machine. A physics processor neither adds to nor diminishes any problem that already exists.

To understand why, you'd have to understand collision detection. Currently, collision detection in games is a VERY INEXACT science to begin with. You know what I'm talking about, too: how for some reason bullets can go through walls, or you fall through objects, etc. There's a lot more to fix or improve before physics processing has enough of an effect to obscure the playing field. A LOT more.
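To make the "bullets go through walls" point concrete, here's a toy sketch (my own illustration, not code from any shipping engine) of why discrete-step collision tests miss thin objects:

// Why "bullets go through walls": with discrete time steps, a fast object
// can jump clean over a thin obstacle between two frames. Toy 1-D example.
#include <cstdio>

int main() {
    const float wall = 10.0f;       // wall at x = 10, a few centimeters thick
    const float thickness = 0.05f;
    float x = 0.0f, v = 900.0f;     // bullet moving at 900 m/s
    const float dt = 1.0f / 60.0f;  // 60 Hz physics tick -> 15 m per step

    for (int step = 0; step < 10; ++step) {
        float next = x + v * dt;
        // A naive overlap test only checks the end position, so the bullet
        // "teleports" from one side of the wall to the other without ever
        // registering a hit. A swept (continuous) test would catch it.
        bool hit = (next >= wall && next <= wall + thickness);
        std::printf("step %d: x=%.2f hit=%d\n", step, next, hit);
        x = next;
    }
}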

Besides, if it's a real problem, then a developer can take this out by allowing the server to turn the cards' effect on or off. No games do this currently, but it can be done (like some games allow only matching versions of the game to join a server). Don't worry about problems that haven't affected you. This one will not.
 
First off, I'll note an odd typo in the article. In listing the video cards used, they present an "ATI Radeon X1900XTX, 256 MB GDDR3", though the clock rates suggest an X850XT. Indeed, later, it's labelled, and performs as, an X850XT, but it confused me at first.

Oh, and yeah, I think it's pretty clear that the new version of Ageia's demos is a complete sham. If they actually did perform that way without the card, it would've been pointed out on this site, in the very threads that mentioned how to run it in "software mode" in the first place. I wonder if it's possible to get one of the older releases, perhaps through BitTorrent...

Based on personal experience using the Ageia PhysX libraries (code-wise), I think Ageia actually has a very strong foundation in the industry. Their code is freely available (in my case, used in a student project) and is INCREDIBLY easy to get up and running in no time, which is an advantage over a few other physics libraries that I had experimented with (Newton, ODE).
As previously mentioned, Newton and ODE are not good choices to compare Ageia to, at least on the basis of the commercial applications you might be coding for. By far, Havok is the developer's product of choice and the engine to beat; it's in most major-money titles, from Half-Life 2 to Oblivion.

Yeah, but what if the cloth were on a tent in a game like EQ? Suddenly going from a game like that, where you essentially had no interaction with the environment (besides possibly falling off of something), to one where everything can be destructible would be a BIG plus going for it.

I contend that the game that will let this technology shine will be a racing sim. Realistic handling on different terrain, real aerodynamics, weather effects, deformable bodies for realistic collision effects, etc.

I also don't think quad-cores are going to be sufficient for these calculations either. To me, that's like saying you can give up your GPU and run all the graphics calculations off your CPU. Has anyone run the CPU tests in 3DMark and thought, "Gee, this looks pretty good, wtf do I need a GPU for?" Ridiculous.
Actually, it's apparent that at the current level of physics calculations, even a SINGLE-CORE processor is enough; just look at, say, Oblivion. I'm personally unimpressed at how stingy they were with the cloth physics (some tapestries move, others don't), but all non-attached objects can and do move about. Arguably, my favorite example is to blast a "Storm Atronach" monster with a powerful explosive fire spell; this causes the monster, which is composed of hundreds of swirling rocks, particles, and vapor wisps, to burst apart at great speed, spreading those hundreds of objects all about as they realistically bounce, roll, and otherwise move just as you would expect. Even on my PC, which sports an (ancient) Athlon 64 2800+, there's no drop in framerate from the ~30 fps I was getting before combat started.

You don't seem to understand this. A PPU is not going to raise your base framerate, either with Ageia's card or with a GPU implementation. By implementing real-world physics in a game, there are far more objects to keep track of, and that translates into more objects that need to be processed and rendered.

As with any high-end effect like AA or AF, FPS goes down as the settings are turned up. Physics is no different. You will see a decrease in framerate no matter what physics implementation a game uses. The only thing that matters is: is it playable? If you go from 100 fps to 60 fps, who cares; the game is still more than playable. Now, if you go from 50 fps to 10 fps, that matters, because you can no longer enjoy the game.

Stop thinking along the lines of hardware adding performance. Instead, physics hardware will decrease performance but increase the game experience. Half-Life 2 was an awesome game with software-based physics, but just imagine it with real physics. Like when you're trying to get into the Combine base and you're in that courtyard fighting the tripods, with all those holes in the ground. With real-world physics and hardware to drive it, they could blow those holes in the ground realistically and in real time, instead of the pre-canned event they programmed into the game that just caused a pretty explosion and, voilà, holes miraculously appear in the ground. That's what I want to see in games.

Real physics in games has the potential to bring games even closer to the level of graphics seen in movies. Imagine a space shooter when a ship explodes: once its hitpoints reach 0, the entire ship explodes and disappears. In the movies (well-done ones, at least) and in real life, a whole ship doesn't just explode because you launched a bunch of missiles at its front; only hitting something like the reactor might cause that. Normally, just the parts of the ship that were hit explode, and if enough damage was done to key systems, the ship is taken out of the fight and drifts around in space, with pieces of it drifting too. Imagine games being like that, with tons of debris floating around and secondary explosions popping up.

I remember playing TIE Fighter back in the day and thinking it was pretty ridiculous that if I took out a few turbolasers so that I couldn't get hit, I could sit there with just my two pathetic laser cannons and eventually make a Star Destroyer explode. And in games today, that's still what happens. It can be better. With real physics and the ability to track millions of objects in real time, it will be. Not just the ridiculousness of blowing up a capital ship with a starfighter, but also the explosion, if it does happen, would be a hell of a lot cooler. No more boom-and-the-thing-disappears; instead, it explodes and persistent pieces of it go flying off and floating around in the space you're still flying in.
Indeed, I would have to agree with you on that point: physics could actually sell itself if only games included destructible materials.

In all of those cases, software can easily handle solid objects (oil drums, vehicles, player ragdolls, etc.), even on an older single-core processor. Even for fluid physics, such as vapor and cloth, it's not all that demanding to provide a convincing simulation, something modern dual-core processors can keep up with.

The real killer app here is clearly the prospect of destructible terrain. A popular subject of war FPS titles, for instance, is blowing up bridges or other buildings; every title I've seen has an instance of it. However, as you mentioned, just like the tripod-created craters in the "Follow Freeman" chapter, they're all, to put it that way, "pre-canned" (perhaps a better example would be the first campaign of Call of Duty 2, which used scripted destruction every few moments).

What would really enhance the believability of games is if such destruction could truly be handled in real-time, and be completely variable. In a word, what would really convince people would be a small-scale scientific-grade physics simulation on a card.

Until then, though, people aren't going to be impressed enough. Hence, they look at the cost/performance equation; since Ageia's demos offer little to no proof that the card can improve performance over what would be gotten without their card, people are not going to be convinced.

You know, I'm pretty sure the idea of using the GPU for other things is a LOT older than Ageia the company. I'm also pretty sure, though not positive, that there were efforts to use a GPU for physics before the Ageia chip was even announced. I may be wrong, but I think it's true.
Although not quite that old, I know that with the release of the Radeon X850 series, ATi also unveiled a demo that could be run on any Radeon X-series card and that used the GPU to simulate fluid physics; it's a screensaver that just spreads fading, colored fluid across the screen (or however many screens you have). It does look nice, believable as either water or air, and runs super-smooth, all while using only a small portion of a high-end X800 or X850's processing power.

Yes, that's completely possible. But it takes a company like Microsoft to define an API for such a thing, because the rest of them (Nvidia, ATI, Ageia) are in it to make money, so obviously they each want their own way of doing things so they don't have to pay royalties to another company.

I am hoping a physics API comes out in the near future, because that would allow fair competition between all the players. DX10 will probably eventually include an API for physics - maybe not at the start, but in a future revision. DX10.0b.
Indeed, that's one of the things I'm hoping for. It should probably just include a more generalized API that could be used for low-bandwidth tasks like physics processing, and perhaps replace DirectInput while they're at it. I'm still bugged that an Xbox 360 controller is only partly functional on Windows.

The question I have is this:
Is PhysX actually just a pretty video effect, or can it actually affect (and therefore unbalance) the gameplay itself in a multiplayer environment?

e.g. imagine you're playing a multiplayer FPS where only half the players have PhysX.

PhysX players could kill each other with a PhysX-only side effect, such as flying shrapnel from shooting something nearby. Non-PhysX players could try exactly the same thing, but software-only physics would produce less shrapnel with a different flight path, and it would just be eye candy with no damaging behaviour. So now there's an unfair difference in actual gameplay depending on whether you have a PhysX card or not.

So in order for multiplayer games to be fair, developers will have to cater to players with the lowest-spec machines, so they'll purposely disable PhysX for everyone else. Now you've just paid $350 for purposely disabled hardware.
You've just hit on one of the primary reasons why Ageia has found that getting their card to saturate the market isn't as easy as the introduction of video cards was.

As it happens, developers know better than this, and thus, given the poor market saturation of PhysX cards (i.e., the number of people who have them), they don't let the card handle any gameplay-affecting processes. Such physics calculations can NOT be optional; that would be like an option to remove clipping from the game, which would improve performance while also letting you fly and shoot through walls. So either the PhysX card has to be REQUIRED, or it can't really handle anything all that spectacular.

Using the GPU for other tasks is a very old idea, but why waste GPU power on something else when you're trying to get 100% of it used for graphics at the same time? Did that ever occur to you? If it were not for discrete 3D graphics chips, then 3D graphics would be up to the CPU. Sounds stupid in that context, doesn't it? Now consider physics... get the idea?
This isn't quite the case; in modern 3D games, where the physics would be offloaded to the shaders, the shaders are not in use 100% of the time. In fact, early in the stages of rendering each frame, the pixel shaders often wind up doing absolutely nothing for thousands of clock cycles. Since physics would also be done early in the frame-rendering process, this would be a key point at which to calculate it. Given that the PhysX chip isn't all that impressive compared to even a cheap modern GPU (it's made on the 130nm process, for crying out loud!), it would be quite possible to find ample power in there.

And once cards move to a unified shader architecture, it's likely that they will NEVER lack for shader power again, and hence will always have some to spare, which could be used for physics, just as in a unified architecture the shaders can be split between pixels and geometry.

Yes, that could be the case. However, I think it will be a while before these new effects actually influence the game in a multiplayer environment. Until physics hardware is more commonplace, they will just be pretty effects in multiplayer games. In a singleplayer environment, it's fully possible to do that now.

Now, I could see MMOs going this way faster than single-player games with online components. You want to do a really cool thing in your MMO, but everyone has to have the hardware for you to implement it. So you have to make your audience adopt the technology or not be able to play the game.

But in a way, that's akin to the days of the N64 with the memory Expansion Pak: Perfect Dark required it to play through the game.
Yes, I'd say that MMOs have the most leverage to get this sort of thing going; they have no problem getting subscribers, and price isn't much of an object; none of their subscribers complain about $20US per month to play.

As for the expansion pack for the N64, the only thing that puzzled me is that they bothered to keep the Combat Simulator so that it could be run on the default 4MB RDRAM. It might've just been easier to require the expansion pack for the whole game, like it was for a few other titles.

Actually, for me it's "Why the hell can't America's best run up a damn hill that's on a 60-degree incline? I can take bullets to the face, but this hill is my arch-nemesis."
Indeed, perhaps the best quote of the thread. However, even in well-endowed games like Oblivion, the hill is STILL your arch-nemesis, though perhaps that's there because they need to keep the player from leaving the province of Cyrodiil.
 
The way I see it, there are a few ways a game developer could choose to implement extensive physics stuff with multiplayer:

1) They could allow it to work without hardware acceleration, giving the user the option of turning down the physics effects, or turning them up and having crap performance, or turning them up and putting in a physics card. In this case, the physics card would be like a graphics card today versus integrated graphics: if you get a nice graphics card, you can turn up the eye candy without hurting performance much or at all. Playing multiplayer would require synchronized levels of physics detail, however, so the physics detail level would be controlled by the server. People unwilling to play on a certain server because they lack a physics card (and thus would have bad game performance) would have to play on a lower-detail server.

2) They could offer hardware acceleration or software acceleration, because the level of detail for the hardware acceleration is so high that it would be completely impractical to expect any PC to do it in software. In this case, people with hardware acceleration could join any game with good performance, but software acceleration folk could only join software acceleration games.

3) They could support one and only one version, software OR hardware. If you have the card you can play, else you cannot.

The first one offers the most flexibility but requires many levels of physics performance, like current graphics solutions. The second one is a tiered "you have it or you don't" solution, like the N64, where sticking in the extra 4 MB of RAM gave you better performance, period, not degrees of improvement. The third one is like Donkey Kong 64: if you've got the extra stuff, you can play; otherwise, you're out of luck.

In response to StrangeStranger:

...I'm glad you can admit you have "no clue" what you're talking about. Because obviously, you don't. If my graphics card has processing power to spare, I'd like to see my resolution hit 1600x1200, AA at 6x Temporal, AF at 16x, details and view distances set to max, and the framerate consistently at 60 frames per second while playing Oblivion, F.E.A.R., and other games. MY GRAPHICS CARD IS FOR GRAPHICS FIRST AND FOREMOST. I don't want to bog it down with additional calculations for every gust of wind, grain of sand, and droplet of water in the game. If I can get another card that will do that without a performance drop, then I'd like that very much.
 
You've just hit on one of the primary reasons why Ageia has found that getting their card to saturate the market isn't as easy as the introduction of video cards was.

As it happens, developers know better than this, and thus, given the poor market saturation of PhysX cards (i.e., the number of people who have them), they don't let the card handle any gameplay-affecting processes. Such physics calculations can NOT be optional; that would be like an option to remove clipping from the game, which would improve performance while also letting you fly and shoot through walls. So either the PhysX card has to be REQUIRED, or it can't really handle anything all that spectacular.

Actually, even if everybody has the card, you then have two other issues. First, you have all these particles that need to be synchronized and updated among the users; this will take up a hefty amount of bandwidth. Second, you have each computer doing redundant computations, when only a single computer needs to do such calculations. Part of physics, especially fluids, involves randomness; you can't expect all the cards to produce the same random effects unless you eat up even more bandwidth synchronizing them. Multiplayer games with these physics cards are best for effects only.
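To put rough numbers on that bandwidth concern (my own back-of-the-envelope estimate, not from the article): 1,000 physics-driven debris particles at 12 bytes each (three floats for position), sent 20 times per second, is already about 240 KB/s per client, before orientations, velocities, or packet overhead, which is far beyond what a typical 2006-era game server budgets per player.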

Video cards are in a whole different ballgame when it comes to multi-player games. They just render the view from the server, which is unique for each player. They are an output-only device.

I agree with the previous statement that a server physics processor may be the best choice for online gaming. Personal physics cards are better left for single-player games and effects only in multi-player games.
 
> Stop thinking along the lines of hardware adding performance. Instead, physics hardware will decrease performance, but increase the
> game experience. Half Life 2 was an awesome game with software based physics. But just imagine it if it had real physics.

What do you actually think PhysX IS?

The miracle box that contains the "real" physics otherwise inaccessible to mainstream CPUs due to its arcane powers?


This thing is - as far as you can tell by giving their website a quick look-over - basically a preprogrammed FPGA with its own dedicated fast RAM to hold a couple of different logical-cell configurations and the data to be processed (I certainly don't know for sure, but maybe THG could simply ask them what hardware they are based on?)

Ageia's API code (like Havok's, Newton's, and others') implements physical interactions in optimized C++ that can be compiled and processed on a standard CPU. All the physics, the "realism", is coded in there.
Now, it turns out that very few lines of this code can consume 80-90% of the processing time. Furthermore, some of these calculations are highly repetitive (for example, if particle interactions have to be evaluated). In this case, these very few lines of code lend themselves to being translated into vectorized form and processed on dedicated hardware. This is done by first translating them into programming languages like "Handel-C" or VHDL, which allow the code to be cast into the physical configuration of a huge network of 10 or 20 or 100,000 logical cells, allowing the calculation to be executed in parallel.
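To make that concrete, here's a minimal sketch (my own illustration, not Ageia's actual code) of the kind of pairwise particle loop that tends to dominate the runtime; every iteration of the inner loop is independent, which is exactly what makes it a candidate for vectorized or FPGA-style parallel execution:

// Minimal sketch: for N particles, the pairwise interaction is O(N^2) and
// completely data-parallel, so it maps well onto vector units, shaders,
// or an FPGA-style array of logic cells.
#include <cmath>
#include <cstddef>
#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

void accumulate_forces(std::vector<Particle>& p, float k, float dt) {
    for (std::size_t i = 0; i < p.size(); ++i) {
        float fx = 0.f, fy = 0.f, fz = 0.f;
        for (std::size_t j = 0; j < p.size(); ++j) {   // the 80-90% hot spot
            if (i == j) continue;
            float dx = p[j].x - p[i].x;
            float dy = p[j].y - p[i].y;
            float dz = p[j].z - p[i].z;
            float r2 = dx * dx + dy * dy + dz * dz + 1e-6f;
            float inv = k / (r2 * std::sqrt(r2));       // simple 1/r^2 law
            fx += dx * inv; fy += dy * inv; fz += dz * inv;
        }
        p[i].vx += fx * dt; p[i].vy += fy * dt; p[i].vz += fz * dt;
    }
}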

Or, expressed a bit more simply: you build an adapted coprocessor for a particular problem.

In case the coprocessor is not reprogrammable on the fly, it's called an ASIC; in case it is, it's an FPGA.
A couple of FPGAs are even reprogrammable in real time, and thus allow the configuration for one problem to be swapped for another (however, the FPGA - or at least the part concerned - has to be stopped to do this, so a continuous change of tasks is not an option).

What is the benefit of the exercise?

1) Freeing up the CPU from a tedious task to do other things.
2) Speeding up the execution of the task.
3) Being able to scale up the problem to a size which is simply not feasible with the CPU alone.

Therefore: START thinking along the lines of hardware adding performance.

In case you don't, you do not need dedicated hardware for this: your PPU could be fast as hell, but if the libs of the API don't contain good code, things don't get a tad more realistic. On the other hand, a good API - take Havok - introduces very realistic physics effects without necessarily needing a dedicated hardware accelerator.

For years, solutions of this type have been too specialized and too expensive to make it to the mainstream market, and were more often found in the domain of HPC, like in a Cray (for example http://www.xilinx.com/prs_rls/design_win/0591cray05.htm), where a bunch of CPUs crunches most of the code, and the few calculations which can be vectorized and drag down the execution speed are paired with a bunch of FPGAs to offload these critical spots. These spots typically execute 5-20 times faster than on a CPU - in some cases more, in many cases much less.

This, however, is not a magic wand:

As soon as very complex calculations are required, the available real estate of freely programmable "logical cells" on an FPGA gets eaten up very quickly by implementing them, and parallelism has to be sacrificed. The same goes for code containing too many conditional decisions during execution, and so on.
And last but not least: not all "physics" leads to problems which can be done in parallel:

Calculating the ballistic trajectory of ONE object with PhysX is pointless (example: Half-Life 2 and the grav gun). It will take more time to push the data from the CPU over the slow PCI bus to the offloader and get the result back than to do the calculation within the CPU itself. In addition, the SSE instructions already offer a limited form of vectorized calculation within the CPU.

In case ten/twenty/thirty(?) trajectories have to be calculated, things can start to look different, and in case a thousand trajectories are to be calculated, dedicated hardware might accelerate the calculations considerably.
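As a rough illustration of that trade-off (my own sketch, assuming simple Euler integration and the classic ~133 MB/s PCI bus): a single ballistic update is only a few multiply-adds, so a round trip to a PCI card costs more than the work itself; only with many bodies per frame does the arithmetic outweigh the transfer.

// One ballistic step is a handful of multiply-adds, far cheaper than
// shipping ~24 bytes of state to a PCI card and reading the result back.
struct Body { float x, y, z, vx, vy, vz; };

inline void ballistic_step(Body& b, float dt, float g = -9.81f) {
    b.vz += g * dt;      // gravity only
    b.x  += b.vx * dt;   // three more multiply-adds for position
    b.y  += b.vy * dt;
    b.z  += b.vz * dt;
}
// With one body, the PCI transfer dominates; with thousands of bodies per
// frame, the arithmetic finally outweighs the transfer and a coprocessor
// can start to pay for itself.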

Either way, whether there are actually gains that justify the investment depends firstly on the type of problem at hand and secondly on the scale of the problem.

Now Ageia claims physics is so complicated it REQUIRES hardware acceleration in order to be simulated in real-time.

Well, FINALLY THG looked at the performance of a quite lively demo with certain effects accelerated AND only calculated with the CPU.

And it turns out that even in a demo MADE to show off the advantages of PhysX, 90% of all physics effects are scaled in such a way that fast CPUs can handle them without a significant drop in framerate.
Only the cloth simulation appears convincing.
But that does not necessarily mean much: scale the number of independently moving points of the flag up by a factor of ten, and the problem would also be too complex for their PPU to handle without the framerate breaking down. Scale it down by ten, and the CPU could swing it without a PPU. Since no side-by-side comparison is provided, how are we supposed to judge how many points ARE required to make the flag look realistic?

That does not make a strong case.

Hopefully the rather lukewarm outcome of the second THG review will force Ageia to disclose more details about the hardware and the effects it accelerates, and to draw up a demo that clearly demonstrates the advantages of their card by showing side-by-side comparisons of accelerated versus software-calculated effects at different scales.
 
how many times do fools like you need to be told that GPUs do not have cores? they are already in a sense multi-core, in that they have multiple pipelines for parallel computing, which is what multi-core is. FFS, gfx and OSes are completely different and require different processors to work.

Multiple independent pipelines are not analogous to multiple physical cores, and there are different types of parallel processing. But what I'm saying is that instead of two physical video cards, put them on the same board like the old Voodoo 5 5500 or Volari Duo. Back then, the Voodoo beat the mighty GeForce 256. Why should you pay for multiple video cards, then pay more for a certified power supply to handle them, then pay even more on your electric bill?

There was a time when the G400 MAX ruled the world. Performance and quality were conjoined in a blissful silicon-packed AGP wonder.
 
...I'm glad you can admit you have "no clue" what you're talking about. Because obviously, you don't. If my graphics card has processing power to spare, I'd like to see my resolution hit 1600x1200, AA at 6x Temporal, AF at 16x, details and view distances set to max, and the framerate consistently at 60 frames per second while playing Oblivion, F.E.A.R., and other games. MY GRAPHICS CARD IS FOR GRAPHICS FIRST AND FOREMOST. I don't want to bog it down with additional calculations for every gust of wind, grain of sand, and droplet of water in the game. If I can get another card that will do that without a performance drop, then I'd like that very much.

I'm not glad that you have no clue what you are talking about. Just read a little about current GPUs, or actually read the article, and you will see that they are composed of vertex shaders and fragment shaders. These two stages are used in producing the image. In most cases, while the fragment shader is active, the vertex shader is dormant, and vice versa. Currently, you can't use the dormant stage to improve rendering speed. That is why they are discussing the possibility of using the dormant stages for physics calculations. Now go read.
 
Absolutely nothing about Ageia impresses me. I certainly won't go out and spend $300.00 on a card just to make a few bits fly around; it looks pretty enough with the 7900 GTX. Sound cards and video cards work with more than just games, so their cost is acceptable. Not so with a card dedicated to one thing and one thing only, which needs compatible titles to work. I don't see developers writing two sets of code, or forcing customers to purchase hardware to run the game when most already have a huge investment in hardware, then asking some people to remove a card if their PCI slots are all used - and that's not hard to do with only 2 PCI slots on new motherboards. As far as I'm concerned, it's a step backwards to the old 3dfx pass-through days; it didn't make sense then and it doesn't now. If history has its way again, Ageia will be gone or consumed by ATI or Nvidia, and the chip will be put where it should be, on the graphics card, maybe adding $50 to the cost of the card since it's just a chip :)
I'll never buy one, nor will I ever install one on any computer I build or service. If a customer wants one, they'll have to shoot themselves in the foot on their own; I won't help. Has anyone thought about how much of a second card for SLI $300.00 would buy? That's a better use of $300.00 for graphics than anything Ageia is promoting.
When I play a game I'm not walking around looking at bits and pieces flying around or waving in the wind; I'm trying to win, paying attention to the objective. Who cares about destructible terrain? It's not going to help you win, not going to do anything but ooh and awe the crowd like fireworks on the 4th, and that's not worth even $100.00 and a PCI-E 1x slot to me. I'd rather have, oh say, a new CPU, more RAM, a larger monitor - the list goes on and on, but oddly it doesn't include $300.00 for a card that does only one thing, does it only in optimized games, and uses a useful slot.
I totally agree with you. With high-end video cards from ATI and Nvidia plus dual-core CPUs from AMD and Intel, who's going to care about a few special effects that you have to watch really carefully on the monitor to be able to tell any difference? Game programmers aren't going to waste their time reprogramming code.
 
It's obvious that this thread is starting to become crazy. Technology is meant to serve humanity, not to turn it against itself with polemics.
 
...I'm glad you can admit you have "no clue" what you're talking about. Because obviously, you don't. If my graphics card has processing power to spare, I'd like to see my resolution hit 1600x1200, AA at 6x Temporal, AF at 16x, details and view distances set to max, and the framerate consistently at 60 frames per second while playing Oblivion, F.E.A.R., and other games. MY GRAPHICS CARD IS FOR GRAPHICS FIRST AND FOREMOST. I don't want to bog it down with additional calculations for every gust of wind, grain of sand, and droplet of water in the game. If I can get another card that will do that without a performance drop, then I'd like that very much.

I'm not glad that you have no clue what you are talking about. Just read a little about current GPUs, or actually read the article, and you will see that they are composed of vertex shaders and fragment shaders. These two stages are used in producing the image. In most cases, while the fragment shader is active, the vertex shader is dormant, and vice versa. Currently, you can't use the dormant stage to improve rendering speed. That is why they are discussing the possibility of using the dormant stages for physics calculations. Now go read.

I don't think you understood me. My point is, before my graphics card learns to do new tricks, I'd prefer for it to do graphics to the max. If there's a way to unlock more processing power on the graphics card, I'd prefer for it to go to graphics. Don't talk trash about me; I did read the article. Apparently you and some other people just don't realize that I'm talking in theory, being forward-looking. If ATi or nVidia decide to make use of this dormant processing power, I'd prefer for it to go towards giving me the best graphics performance I can get.
 
Therefore: START thinking along the lines of hardware adding performance.

No matter how fast the physics calculations are processed, the graphics card still has to draw the results. More polygons to draw means lower FPS. Yes, faster, more powerful video cards allow the balance to be maintained, but the fact is that a scene without real-time physics is always going to render faster than one with it, everything else being equal. That's what I was getting at with my statement.
 
hmm, do you want a JCB digger, or are you fine with that shovel for digging a deeper hole?

you don't get it, do you? that extra power can't be used for gfx unless the code for using it is there. therefore it is NEVER going to be used for gfx.

I don't think I'm digging a deeper hole. I see it as rather simple, really. If it becomes completely impossible for them to put the graphics card's extra processing power toward graphics, then fine, offload other calculations onto it. But I don't want to lose graphics performance to put physics on my graphics card. Similarly, if it were possible to squeeze all of the graphics functions onto my CPU without any performance or feature decrease, then fine by me - one less thing to worry about buying.
 
I'd like to see benchies done with older systems.. say early P4/AthlonXP cpus... and see what effect that would have on the same game =/
 
I don't think you understood me. My point is, before my graphics card learns to do new tricks, I'd prefer for it to do graphics to the max. If there's a way to unlock more processing power on the graphics card, I'd prefer for it to go to graphics. Don't talk trash about me; I did read the article. Apparently you and some other people just don't realize that I'm talking in theory, being forward-looking. If ATi or nVidia decide to make use of this dormant processing power, I'd prefer for it to go towards giving me the best graphics performance I can get.

How do you expect a vertex shader to function as a pixel shader and vice versa? They can't with today's cards. This is not some "trick" that your graphics card can "learn". Each shader can only access certain portions of memory and the frame buffer, and the dormant one can't contribute to better rendering performance. Hence, using this spare power for physics computations is a great idea.
 
Firstly, I would like to say the title is a bit misleading and unfair to Ageia. The article (or demo) was actually showing the effectiveness of the PhysX card (although it may be a setup). The negative title can be misinterpreted, especially by those who do not read the article or just scan through it.
I am not a fan of the PhysX, and in fact posted before that maybe AI co-processors would be better than physics as the next-gen hardware, so don't get me wrong.

Obviously each new hardware should be given a fair hearing, so that we consumers can make an educated decision on the usefulness of the hardware.
On the subject of the fluid spill, it would have been interesting if enemies could slip on the spilt liquid, and hence change the tactics used in a game.

On the topic of PhysX vs. no PhysX in multiplayer: there is a simple solution; the server determines whether PhysX is used or not. The server does the calculations and determines the positions of objects and where they land. E.g., blow up a truck: the server calculates and sends the position and orientation back to the players, so they can see the effect even without the cards. This does not have to be the whole model's vertices, just the base position and 3D orientation of the object. As for damage from splinters, the server can calculate the health change from the damage and send it back to the client. If the client does not have the card, it can display toned-down splinters, whilst card owners enjoy the full eye candy; both will "feel" the effects of the splinters. Of course, this may mean more powerful servers with more powerful PhysX cards, but it is a possible solution for Ageia and their cards regarding multiplayer.
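Purely as a sketch of what that could look like on the wire (hypothetical structures of my own, not anything from Ageia's SDK or a real engine): the server runs the full simulation and broadcasts only each gameplay-relevant object's position and orientation, plus any resulting damage.

// Hypothetical snapshot format for server-authoritative physics.
#include <cstdint>
#include <utility>
#include <vector>

struct ObjectState {
    std::uint32_t id;     // which object
    float pos[3];         // world position
    float quat[4];        // orientation as a quaternion
};

struct PhysicsSnapshot {
    std::uint32_t tick;                                    // server simulation tick
    std::vector<ObjectState> objects;                      // gameplay-relevant bodies only
    std::vector<std::pair<std::uint32_t, float>> damage;   // playerId -> health delta
};
// Each ObjectState is 32 bytes, so even a few hundred moving objects per
// snapshot stay manageable; clients without the card render toned-down
// debris but still receive the same positions and damage.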
 
So is there hardware support from Ageia for physics offloading in 3ds Max / Maya with this card?

E.g. fluid / smoke calculations in rendering?
 
you don't get it, do you? that extra power can't be used for gfx unless the code for using it is there. therefore it is NEVER going to be used for gfx.

Actually, SS, I'd love to know WHY there is this 'extra power' available, and why the drivers don't make use of it.

Also, you say games don't make full use of C/F and SLI configurations, which currently is about right. But what happens when games start to push more and more physics operations onto these setups? There has to be a balancing point here, and I doubt that a GPU will be able to perform as many calculations as either a CPU or a PPU while also processing all the data and calculations needed for fast and clear visuals.

The final point is that GPU physics is only really viable on multi-GPU setups, and even then only to a limited extent. And in all fairness, which is cheaper to buy: a decent C/F or SLI setup, or a decent single gfx card and a PhysX PPU?

Granted, the technology is basic, and needs to be fully utilised to show its real applications. PCI-E would help here, for certain.

Just a thought... maybe the PhysX chip could be made as an Opteron co-processor? 😀
 
Although I'm not sure whether Ageia's solution is going to fail or not, the PPU opens up a new area for developers to step into. Without deconstructing the demo and knowing Ageia's SDK, it's hard to say if the demo is rigged or not.

PC gaming is a very expensive market to keep up with. I'm sure one person spends 2k on cards just to get a game running at 200 frames on medium quality at 1024x768, just to ensure there are no slowdowns during a match, while another person is more than willing to spend the same 2k to get the game running at max res, max AA, and max quality, not really caring if his framerate occasionally dips below 60. Meanwhile, there are still people out there trying to run the same game on an HP, Dell, or Gateway machine they picked up at the local store a year ago.

A physics engine can add to the game environment fairly easily.
Consider shooting down a brick wall. Yes, it would be a poor choice to model each individual brick unless you are running a SIMULATION. However, if the engine is designed with physics simulation in mind, it can easily interpolate bricks from a solid object. The manipulation, interaction, and calculation of the bricks can be handled by the physics engine, which can then hand the specifications for the debris back to the game engine to materialize into models (think longer term, i.e. at least 2 years out). If I can write scripts to do that for me in a Maya/3ds environment, I don't see why it can't be incorporated into a game engine, resources aside...
Whether the card has enough power to do this is another matter.
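Purely as an illustration of that fracture-on-demand idea (my own hypothetical sketch, not from Ageia's SDK or any real engine): the wall stays a single solid object until it's hit, and only then is the damaged region subdivided into brick-sized rigid bodies for the physics engine to take over.

// The wall is one solid object; on impact, carve the damaged region into
// brick-sized rigid bodies that the physics engine (and renderer) pick up.
#include <vector>

struct Brick { float center[3]; float halfExtent[3]; };

std::vector<Brick> fracture_region(const float hit[3], float radius,
                                   float brickSize) {
    std::vector<Brick> debris;
    int n = static_cast<int>(radius / brickSize);
    for (int ix = -n; ix <= n; ++ix)
        for (int iy = -n; iy <= n; ++iy)
            for (int iz = -n; iz <= n; ++iz) {
                Brick b;
                b.center[0] = hit[0] + ix * brickSize;
                b.center[1] = hit[1] + iy * brickSize;
                b.center[2] = hit[2] + iz * brickSize;
                b.halfExtent[0] = b.halfExtent[1] = b.halfExtent[2] = brickSize * 0.5f;
                debris.push_back(b);   // becomes a rigid body in the physics engine
            }
    return debris;
}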

I'm still unsure whether I will get the card or not. If I do, it will not be just for gaming (assuming the PPU can be allocated to other purposes, like 3D simulations).
 
The answer to that really depends on the game.
Something along the lines of Deus Ex or Thief could benefit from this greatly, along with many RTSes.
If it's integrated well into the engine, I'd much rather take a physics engine over a 200 framerate, if that's your question. If it's just slapped on in the last month, i.e. what's out now, then I see it as pointless.

I don't think I'd want an already-existing game to adopt a physics engine just to sell the card, for things like "now all the flags are affected by wind". However, if it's integrated well and they go back through and redesign everyone's clothing to be affected as well, then it becomes a different matter. But that's all still just a visual kick, no different from more particles. It makes things a bit more immersive, maybe, but I'd prefer to wait for the real impact.

In reference to destructible levels, a console/PC game comes to mind, but its name evades me. It was a nice mechanic, multiplayer included. It was also a different environment. In a WW2 sim, I'd think twice before tossing a grenade inside a building if it had the ability to damage the floors.

Also, destructible environments are all anyone seems to talk about in relation to physics and gaming generally. A use that can easily be implemented with this card is weather. E.g., running up the same hill in three weather conditions: the player slides down in the rain, stays put in the hot/dry, and can't even run up if it's iced over. A mechanic like this is generally applied through level design now. However, imagine if it could be applied globally, even in multiplayer.
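A toy sketch of how that weather-traction idea could work (my own illustration, with made-up friction values, not from any engine): pick a friction coefficient per weather state and let the usual static-friction test decide whether the player holds the slope.

// Static friction on an incline: the player holds the slope only if
// tan(theta) <= friction coefficient; otherwise they slide.
#include <cmath>

enum class Weather { Dry, Rain, Ice };

float friction_for(Weather w) {
    switch (w) {
        case Weather::Dry:  return 0.9f;   // illustrative values only
        case Weather::Rain: return 0.5f;
        case Weather::Ice:  return 0.1f;
    }
    return 0.9f;
}

bool can_hold_slope(float thetaRad, Weather w) {
    return std::tan(thetaRad) <= friction_for(w);
}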


edit:
on a note to anyone who thinks you can replace a PPU with a second CPU or a second core: you are horribly wrong. by that logic, dual-proc systems shouldn't have needed a GPU, say, 2 years ago....
You are missing 2 important aspects:
1) it's optimized with specific instruction sets (your CPU may need to do 10/20x the work to do what the PPU can do in a simple calc; the same is true in reverse as well)
2) it's standardized (this is critical for releasing something to a mass audience); then again, so is Havok ~.^
 
Hmm... I don't really get these reviews. They're not testing the physics...

However, when we switched off the PPU and ran the demo in software mode, we could not get near any of the simulated cloth. The second we turned the camera in the direction of the cloth and moved physically close to the hanging cloth strips near the tunnel entrances or the flag hanging on the upper deck, the frame rate would plummet and the game would slow to a crawl.

I would guess that's because the cloth is what the PhysX card works with, not the game graphics. When you're not looking at the cloth, you're only working with graphics, so the game will run without the card and you will get good framerates; however, when you put the cloth in the picture, your CPU/GPU can't handle the load that the PhysX card would, so it's not surprising if the computer hangs or the framerate goes down the drain.

I've been trying to explain this before, but no one knew what I meant...

The demo itself is just a normal game; the fluids and the cloth are the physics. The cloth textures are graphics, the cloth movements are physics. Running the game without looking at the cloth is graphics; running the game and playing with the cloth is physics, and that is what the PhysX card is meant to calculate - the cloth/fluid movements. In the review they couldn't even look at the cloth before the fps crashed; that indicates to me that yes, they might be able to run the game without a PhysX card, but they could not get the same performance as with the PhysX card.

The fps scores they put up are graphics; they should have measured fps while playing with the cloth if they wanted to see the difference between having a PhysX card and not having one. They said in the review that the frames plummeted just from looking at the cloth as it loaded, so clearly the PhysX card is a powerful card that outperforms CPU/GPU power at this task. That's how they would have put it in the review if they liked PhysX, anyway...

This review is only based on the graphics of a normal game with normal code; when they tried to look at the cloth and load the system with calculations that the PhysX card would normally handle, the system got overloaded.

These guys need to learn the difference between graphics and physics. The FPS comparisons are all graphics; if they want to measure the real performance with or without a PhysX card, they should play with the cloth to load up some calculations (I'm no pro, but I get that). The fps plummeted when they did, so clearly a PhysX card can greatly improve massive physics calculations that would otherwise be too much for even the fastest CPUs and GPUs, since they weren't designed to handle those kinds of calculations and their physics performance generally sucks as much as a P133 trying to calculate Doom 3 graphics...

Show us the fps difference when playing with the cloth and loading the CPU/GPU with physics calculations. That's what the PhysX card does: it handles those calculations, and trying to do it without one made their fps crash, since the CPU and GPU can't handle it...

last time i ever try to explain things.. :cry:

edit: sorry for my English..
This review is more proof of how BADLY a CPU/GPU handles physics calculations than of how bad a PhysX card, which can handle them fine, is. If you're still convinced a CPU can calculate the "real" physics that the PhysX card is designed for, read up. I'm not a PhysX fan and don't plan to buy one, but I really like that they will bring more real physics to games, not faked physics, because no CPU/GPU can handle that huge amount of calculation...
 
Read my post again, please!...

They said themselves their system almost crashed when they looked at the cloth. I explained why that happens. The rest of the game is graphics; the PhysX card is not a graphics accelerator or anything like that, as most here seem to think...
 
I thought I'd bring up an article on HardOCP -->here<--
where they had reps from Nvidia, ATI, Havok, and Ageia answer the same set of questions. I think an interesting point was made by the ATI rep, who stated that their implementation on an X1900 would be able to manage around 20,000 objects. That sounds fairly impressive until you put it into perspective against the PhysX processor, which can manage 500,000 objects. And, if I'm not mistaken, a CPU can manage somewhere around 2,000-3,000 objects.

Now, personally, I like the idea of the PhysX card, because it's something with a product lifetime like a sound card's. Realistically, it's probably better; it's most likely something you'd only replace once it broke.

One other thing I thought I'd bring up: from the information I've seen, it looks like ATI, Nvidia, and even the Microsoft physics API in the works are currently only interested in doing 'effects' physics (i.e., mostly particle effects and other non-gameplay-affecting effects).
I think most of the games coming out now will most likely only use the PhysX engine for particle effects, and that will really undermine Ageia's position for quite some time. Right now, I think the only real shot Ageia has is with the Unreal 3 engine and the modding community. Software makers won't want to 'break' their consumer base by requiring PhysX chips, so they'll most likely stick with physics-lite effects, but the modding community might not hold back, and if they can produce something incredible, it'll definitely help push adoption of the technology.