Is Ageia's PhysX Failing?

It is identical to the scenario when dedicated 3D cards first came out versus software rendering over 10 years ago.

It's just harder to quantify with the test methods currently in place on most hardware sites.

I think you will have to play it to feel the difference better physics makes; screenshots and videos don't really cut it. The extra accuracy may not actually be any more 'fun'.

Yes it will probably fail.
 
Well, as Ageia didn't actually produce Cell Factor, I'd like to know how Ageia is being dishonest.
Cell Factor is produced by Immersion Games. Immersion Games is owned by Parquesoft, a Colombian company. The other name you'll see is Artificial Studios; they produce the Reality Engine that the game is based upon.
If you can provide some evidence that Ageia owns part of these other companies, or is owned by them, then I'd like to see that, and then I might buy into your conspiracy theory regarding them fixing the demo.
Ageia most likely wasn't even aware that the game would run as well as it does, or even at all, without the chip. They aren't writing the game; they're only providing tech support to the developer, Immersion Games.
 
StrangeStranger - I agree it's hard at this early stage to tell the wheat from the chaff.

I'm gonna skip PC gaming for a while and get one of those Wiis; they look fun.

Good luck to everybody.
 
It may be difficult to provide a solid link. That does not, however, mean that they are only aware of the development. Although I do not believe Ageia is working with them to degrade performance in software mode, this is their prime time to take whatever development support they have and send an army's worth of it to anyone willing to support their product, from integration to implementation of the SDK. So to say that all they know is that the game added support is a far cry from reality. With this in mind it would not be surprising if there were monetary or other "incentives" on the table.
This is also a fairly small company, and it would not be surprising if a few board members from larger publishers at one point or another gave money to a small startup. Obviously those same people would be looking to plug that company into their areas to see a larger personal return.

But it's neither here nor there. I didn't research it; I just brought up a possible scenario to consider. ^_^
 
> No matter how fast the physics calculations are processed, the graphics card still has to draw them. More polygons to draw means lower FPS.

I don't see the link between realistic physics calculated in real time and more polygons.

Example:
Case 1: A couple of years ago, if you shot at a window in a first-person shooter, game developers simply patched a texture with a hole onto the window, and that was the end of the story.

Case 2: Until recently, the breaking and falling of the glass was not calculated/simulated in real time but followed a precalculated pattern done in the studio.
Although the effect can look spectacularly realistic in the game, one thing does not change: you can kick/shoot/hammer different corners of the window, and the glass
will break and fall in the same way each time. Because of this, even the most spectacular effect rapidly gets repetitive. And yes, for this you need many more polygons than in Case 1; however, this has been state of the art since roughly the days of Max Payne I.

Case 3: With a PPU you might be able to calculate the trajectory of the breaking glass shards in real time.
Then it actually starts making a difference where and with what you shoot into the window, and the glass will break and fall
differently each time according to the situation. Seen only once in a game, such an effect will not appear so different compared to Case 2,
but as soon as you start manipulating and repeating the effect, it will turn out differently each time. This can be extremely immersive.
However, for this you don't necessarily need a single polygon more than for Case 2.
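To make the Case 2 / Case 3 distinction concrete, here's a rough sketch of what "calculated in real time" means: every shard carries a velocity derived from where and how hard the window was hit, and gets integrated each physics step instead of playing back a canned animation. All names and numbers below are made up for illustration; this isn't code from any actual game or the PhysX SDK.

```cpp
// Minimal sketch of Case 3: shard trajectories computed per physics step.
// Names and numbers are illustrative only, not from any real engine or SDK.
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

struct Shard {
    Vec3 position;
    Vec3 velocity;   // depends on where and how hard the window was hit
};

// Case 2 would play back a pre-authored animation here instead.
// Case 3: spawn shards whose initial velocities depend on the impact.
std::vector<Shard> shatter(Vec3 impactPoint, Vec3 impactImpulse, int count) {
    std::vector<Shard> shards;
    for (int i = 0; i < count; ++i) {
        // Spread shards out a little around the impact direction.
        float spread = 0.1f * static_cast<float>(i - count / 2);
        Vec3 v = add(impactImpulse, Vec3{spread, 0.0f, spread});
        shards.push_back({impactPoint, v});
    }
    return shards;
}

// Called once per physics step: integrate gravity and motion for every shard.
void stepShards(std::vector<Shard>& shards, float dt) {
    const Vec3 gravity{0.0f, -9.81f, 0.0f};
    for (Shard& s : shards) {
        s.velocity = add(s.velocity, scale(gravity, dt));
        s.position = add(s.position, scale(s.velocity, dt));
    }
}

int main() {
    // Two different shots produce two different break patterns.
    auto a = shatter({0, 2, 0}, {1.0f, 0.5f, 0.0f}, 8);
    auto b = shatter({0.3f, 1.7f, 0}, {4.0f, 1.0f, 0.2f}, 8);
    for (int step = 0; step < 60; ++step) {
        stepShards(a, 1.0f / 60.0f);
        stepShards(b, 1.0f / 60.0f);
    }
    std::printf("shard 0: (%.2f, %.2f) vs (%.2f, %.2f)\n",
                a[0].position.x, a[0].position.y, b[0].position.x, b[0].position.y);
}
```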
 
StrangeStranger - can you post the link to the info where Ageia was proven to rig the first demo? I'd like to see this 'fact', since you posted it.

Also, in the article, the ATI rep basically said that the number of objects didn't really have much of an impact on the GPU, as that was basically limited by the CPU. The whole reason to go the PPU route is to take that load off the CPU so you can push more objects through your GPU.

As far as the numbers go, BFG's website lists the specs on their card as 500 million sphere-sphere collisions per second, or 533,000 convex-convex (complex) collisions per second, running 20 billion instructions per second.
The ATI rep made the claim of 20,000 boulders at 100 fps (whatever that means). He could have made an apples-to-apples comparison and used the same nomenclature as Ageia, as their specs have been out for some time, but he didn't. He was the one hiding behind marketing speak. The 100 fps he claims is irrelevant because the context of how many actual interactions are made per second is never stated.
 
OK, but in example 2 the GPU doesn't have to wait to find out where the glass will be in the next scene as it does in example 3; it just executes a prescripted event. With example 3, the location of each shard of glass has to be calculated for the next scene, as does every other dynamic object in the environment.

That's what I'm going for here: every object is fully interactable with the environment.

Take, for instance, tall grass in a game. Many games today have tall grass in outdoor environments, but in almost every one of them you just run straight through it like it's not even there. Why? Because it would be a lot of work to calculate how the grass is going to bend and twist when you go through it. It doesn't really take any extra polygons to draw the grass differently, but there is a latency in calculating how to draw the grass. That slows down frame rates.
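A rough sketch of why the grass costs CPU time rather than polygons: for every blade near the player you solve a little bend/spring update each frame before the draw call can go out. The spring model and all names and constants here are just illustrative assumptions, not taken from any real engine.

```cpp
// Illustrative sketch: bending each grass blade is extra CPU math per frame,
// not extra geometry. All names and constants are hypothetical.
#include <cmath>
#include <vector>

struct Blade {
    float x, z;        // blade position on the ground plane
    float bendAngle;   // current bend, in radians
    float bendVel;     // angular velocity of the bend
};

// Per-frame update: blades near the player get pushed over, then spring back.
// The vertex count of each blade never changes; only this math adds latency.
void updateGrass(std::vector<Blade>& blades, float px, float pz, float dt) {
    const float pushRadius = 1.0f;   // how close the player must be to disturb a blade
    const float stiffness  = 20.0f;  // spring constant pulling blades upright
    const float damping    = 4.0f;
    for (Blade& b : blades) {
        float dx = b.x - px, dz = b.z - pz;
        float dist = std::sqrt(dx * dx + dz * dz);
        if (dist < pushRadius) {
            b.bendAngle = (1.0f - dist / pushRadius) * 1.2f;  // push over, up to ~70 degrees
        }
        // Damped spring back toward upright.
        float accel = -stiffness * b.bendAngle - damping * b.bendVel;
        b.bendVel   += accel * dt;
        b.bendAngle += b.bendVel * dt;
    }
    // The renderer then rotates each blade by bendAngle; same polygons either way.
}

int main() {
    std::vector<Blade> field(1000, Blade{0.0f, 0.0f, 0.0f, 0.0f});
    // Player walks through the field; every frame costs CPU time, not polygons.
    for (int frame = 0; frame < 300; ++frame) {
        updateGrass(field, 0.01f * frame, 0.0f, 1.0f / 60.0f);
    }
}
```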
 
I get drawn back to this topic over and over...

I have the perception that the trouble is that we are so often debating the value of physics in games that we incorrectly associate the Ageia products with what we want, because we're somewhat short on others to compare them to.

Yes, more physics in games is a good thing. But it has to be ubiquitous or it's going to be practically worthless. Physics used to produce eye candy is just that, eye candy. It won't improve the inherent value of game play. When devs can only expect some users to have physics processing, they must write to the lowest common denominator or else cut their potential user base, and that's bad business. For example, no matter how much work goes into Cell Factor, it's not going to top sales of Quake/Half-Life/Doom/etc. It can't, because the universe of customers who can play it is too narrow.

The winner in this scenario will be the company or alliance that targets making physics processing accessible to the masses. That means that $250 single-purpose cards won't be the answer that wins. This isn't the GPU days where everyone wants better visuals; many users could care less about dedicated PPUs. And if someone gives them a $50 solution for physics versus $250, then the $50 solution will be king, even if it only does 50% of what the more expensive solution does.

The early GPU days were different. Everyone NEEDED video cards of some type, and there were dozens and dozens to choose from, ranging from cheap VGA cards for $20 to high-end cards for $400 or more. Users all HAD to have some card, but they had choice in what to get. Then over time, the market matured and grew and advanced into what we have today.

At this point no one -NEEDS- a PPU, and the only option on the market is too expensive. Ageia is trying to birth a fully formed physics market overnight and it is not going to work.

It's only after the ability to distribute physics processing to the masses that we can see games that -truly- use it to enhance game play become successful.

The more expensive PPUs will remain the arena of elitist gamers who want additional eye candy physics, not gameplay physics. The rest of the world will buy the minimum necessary to enjoy game-enhancing physics.

The big trouble now is that on top of this, Ageia has made the mistake of letting us prove that in most cases, their hardware isn't really making a night-and-day difference.

In marketing the phrase is "First you create the need, then you fill it."

For a company working so hard and spending so much to create the "need" for physics, Ageia has slipped up and shown us that it's 80% hype.

It's the old 80/20 rule...

80% of the hype about physics is discussion of eye candy, not game play enhancement, and 80% of what Ageia's card does for us is utter B.S.
 
I don't think you understood me. My point is, before my graphics card learns to do new tricks, I'd prefer for it to do graphics to the max. If there's a way to unlock more processing power on the graphics card, I'd prefer for it to do graphics. Don't talk trash about me, I did read the article. Apparently you and some other people just don't realize that I'm talking in theory, being forward-looking. If ATi or nVidia decide to make use of this dormant processing power, I'd prefer for it to go towards giving me the best graphics performance I can get.

How do you expect a vertex shader to function as a pixel shader and vice versa? They can't with today's cards. This is not some "trick" that your graphics card can "learn". Each shader type can only access certain portions of memory and the frame buffer, so idle shader units can't simply contribute to better graphics performance. Hence, using this power for physics computations is a great idea.

Congratulations, you and all other people who tell me to read the article. Maybe you should learn to read first. What part of " Apparently you and some other people just don't realize that I'm talking in theory, being forward looking" do you not understand? I don't care about today's cards. I KNOW they can't just switch operations around like that on today's cards. That's why I said I was "talking in theory, being forward looking" which generally implies I'm talking about future products. I don't know how much clearer I can make myself. If on FUTURE CARDS there is absolutely no way to harness the processing power for graphics, then fine, do other calculations on there while I'm gaming away. But if there is a way to use that processing power on graphics which is, oh, the reason I bought the graphics card, then I'd like them to do that.

In my experience, the graphics card companies have shown minimal interest in bringing new features to the previous generation of cards (my X800 XT, for example, doesn't have any of the new features of the X1 series), so I find it hard to believe they'd bother making the extra processing power on the X series and earlier available for physics processing.

A message to the charming StrangeStranger: just because people don't instantly agree with you doesn't mean they didn't read the article. Maybe they didn't "read between the lines" (i.e. make pessimistic and conspiracy-theory extrapolations) like you did. Instead of being a clever little twit about it, try and explain it to them. Or better yet, provide a quote from the article which you are oh-so-familiar with.
 
Everything I have said has been explained countless times. Oh, and about that whole "they're not going to support physics on older cards" :roll:

You obviously don't keep up with the news. ATI's whole physics stance is on using your current and older cards to do physics. They also foresee people maybe mixing an R600 with an old X1600 to do the physics. So yes, old cards will be able to do physics, as ATI do not expect someone to buy two new cards for god knows how much. They are allowing people to recycle their old card or buy a cheap old one solely for physics.

I should not have to explain things to people if those things are widely available for them to check themselves. I also don't like people like yourself making assumptions and then stating them as facts, or at least to that effect.

I do keep up with the news. They say that they're going to let you use your old card for physics; they don't say HOW old. By the time they have their physics solution out, they'll also probably have the R600 out; if that's the case, then the old X1600 would be just that, your old card. I'm just going off what I've seen recently. My X800 XT, which I think is a fair bit more powerful than a low-end X1 series, doesn't have the ability to do anti-aliasing on alpha-enabled textures (such as chain-link fences in Half-Life 2), nor does it have any hardware encoding or decoding. I can't even use the tool they provide to X1 owners which does video re-encoding on your CPU, because I don't have an X1 series. Was it you or someone else who said ATI and nVidia have proven themselves to be deceitful or unreliable? Either way, it doesn't matter to me and shouldn't for anyone else; we still have to wait and see what their final implementation is.

I realize you don't have to explain everything to everyone, but if you know specifically what you're trying to point out, you probably know where it is too; some people miss a line or don't fully understand something, so there's really no reason to tell them to read the $&%*@! article every time. If you don't like people such as me making assumptions and stating them as fact, then stop making the assumption that nobody read the article and stating it like a fact.
 
If that's the direction you approach it from, then there really isn't going to be any progress made. Just because many games don't NEED SLI, or a sound card with more than left and right output, doesn't mean there isn't going to be demand. I agree with you that their price point is definitely not something I would buy into... especially for a first-gen product.

All they really need to do is get support from two major companies. If they can even make it into one engine, say the next-gen UT, then every game based on the UT engine will have that capability. In my opinion it happens to be too early to tell.

PS: You do not need an Audigy card by any means, nor does it have a visible/tangible impact on your gaming experience (any more so than Ageia's card would have). Yet there are still plenty of people spending over $100 on them.

A first-generation demo is also a poor analysis tool for any product. The only near comparison happens to be a software engine, and/or what Nvidia/ATI are possibly claiming.
In my opinion, anything coming from either ATI/Nvidia will only be a visible effect. That seems to be just about as much as Ageia's card is capable of showing off at the moment. In posts above I gave specific examples of how a PPU can leave a lasting effect as well.

I really could care less what company brings this forward; I'm just welcoming the change. No matter how powerful the CPU or how many cores it has, unless it's given a specific instruction set it will never be as effective.

There is always the possibility they screw this up. It's a bit too early in the game to tell.

BTW, no matter how much I can see a $50 price point creating mass adoption, I can't see that happening unless there is really fierce competition or a complete lack of support.
 
First, read the ****** article we are discussing. This is the last time I will say this.


Who died and made you God's supervisor?

So some of us think Ageia's stuff is crap; get over it.
 
> I really could care less what company brings this forward; I'm just welcoming the change. No matter how powerful the CPU or how many cores it has, unless it's given a specific instruction set it will never be as effective.

> There is always the possibility they screw this up. It's a bit too early in the game to tell.

> BTW, no matter how much I can see a $50 price point creating mass adoption, I can't see that happening unless there is really fierce competition or a complete lack of support.

Hear, hear. In the end I could care less about who adds the features, so long as I get them.

The problem with getting to low costs like $50 for the card is that, first of all, Ageia gives away their software for free, so all their profits are made from the hardware. The software does cost money to develop, though, as does the hardware, and producing the hardware costs money as well. So basically, the profits from the hardware have to support a company's entire list of expenses, which is hard to do when you sell products for $50 each... especially when you consider that they're supposed to last longer than graphics cards before you have to update them.

I suppose it might be possible to make various levels of the cards, such as older $50 ones, and some all the way through $200 and beyond, but something tells me this would make things more frustrating for consumers. "Ah crap, this game requires hardware acceleration for 300,000 objects at once; mine can only handle 200,000..."
 
Okay, this is the last time I will say this: Cell Factor is not Ageia's game/demo. Whether a switch was included to run the game in software mode or not was not Ageia's doing. It's a command-line switch that has to be programmed in by the game's designer, Immersion Games.

AFAIK, no one from Immersion Games nor Ageia claimed the hardware was required to run the demo, but without the switch, the demo does look for the PhysX hardware. Therefore, anyone attempting to run the demo without knowledge of the switch would be led to believe that it was required.
More than likely the switch was included by the development team without any knowledge at the marketing/sales/corporate level, let alone on Ageia's side of the fence.

Yes, pretty much everything goes through the CPU, but the whole point of this is that the CPU no longer has to make the calculations for the interactions and movement of thousands of objects.

I think it's you that isn't understanding. The GPU just draws what it's told to draw. The CPU tells the GPU what to draw. Without a PPU or a GPU-made-PPU, the CPU also has to figure out how everything moves and interacts. The more complex that interaction, the more time it takes for the CPU to figure out the answer and let the GPU know what to draw next (FPS).

Here's a bad analogy to help you understand: let's say the CPU is a clothing sweatshop. Finished product is what you send to market (the GPU) to sell. Up until now the little factory workers received their little bits of cloth and a pattern, and had to sew it together by hand. If it's a complicated pattern, it takes longer for the poor little factory workers to assemble it and get it out to market (the GPU). The PPU is like a sewing machine: it allows the CPU to process things many times faster than it could on its own.
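To put the analogy in rough code terms: without the extra hardware, the CPU has to finish the whole physics step before it can tell the GPU what to draw; with a co-processor, the expensive step can run elsewhere while the CPU keeps issuing draw calls. The sketch below uses a plain worker thread to stand in for the PPU, purely as an illustration; it is not Ageia's API or any real engine code.

```cpp
// Toy illustration of the offload idea: a worker (standing in for a PPU) chews
// through the physics step while the CPU keeps preparing draw calls for the GPU.
#include <future>
#include <vector>

struct Object { float pos, vel; };

// The expensive part: move every object (a real step would also resolve collisions).
void physicsStep(std::vector<Object>& world, float dt) {
    for (Object& o : world) {
        o.vel += -9.81f * dt;
        o.pos += o.vel * dt;
    }
}

void submitDrawCalls(const std::vector<Object>& state) {
    // In a real game this is where the CPU tells the GPU what to draw.
    (void)state;
}

int main() {
    std::vector<Object> world(100000, Object{0.0f, 0.0f});
    for (int frame = 0; frame < 600; ++frame) {
        // Snapshot the state the renderer will draw this frame, then hand the
        // expensive update to the "sewing machine" while the CPU keeps working.
        std::vector<Object> snapshot = world;
        auto pending = std::async(std::launch::async, physicsStep,
                                  std::ref(world), 1.0f / 60.0f);
        submitDrawCalls(snapshot);  // CPU feeds the GPU without waiting on physics
        pending.wait();             // updated positions get drawn next frame
    }
}
```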

The real scammers here are ATI and Nvidia, because they want you to invest in another GPU for more eye candy (Havok-FX), without actually delivering anything to compete with Ageia in gameplay. Havok is the only direct competitor to Ageia, and they offer a software-run SDK limited by your CPU for six figures. Ageia's is free and can be offloaded to dedicated hardware.
 
> Take, for instance, tall grass in a game. Many games today have tall grass in outdoor environments, but in almost every one of them you just run straight
> through it like it's not even there. Why? Because it would be a lot of work to calculate how the grass is going to bend and twist when you go through it. It
> doesn't really take any extra polygons to draw the grass differently, but there is a latency in calculating how to draw the grass. That slows down frame rates.

Not necessarily. And you say yourself why:

You have a latency issue, so yes: you introduce lag. But this does not necessarily bring your throughput, i.e. your frame rate, down.

And it is this way not only for physics but also (for example) for any I/O operation:
if you move the mouse in a first-person shooter to look left, the system obviously does not stop the rendering of the current scene,
grab your input, calculate the new view angle, set up the scene and finally re-render the scene WITH THE NEW INPUT taken into account.
No, it kicks off the rendering at a set deadline, and anything (I/O, new data from another player on a different computer in a multiplayer game, physics)
that comes in later is collected, processed and queued to be taken into account for the next rendering cycle.

And this is exactly what you observe in games when you move the mouse like a madman at the limit of the refresh rate: the view on the screen starts lagging behind the actual position, or players in multiplayer games start jumping around because they're not updated with each frame.

That's why the hardcore gamers sacrifice resolution and crank up the frame rates: not one human eye on this planet can resolve image changes at 200Hz.
But although to your eye the difference between a 75Hz and a 200Hz refresh hardly matters, the lag for I/O (i.e. the player ripping the mouse to the left to get the perfect kill)
goes down from 13ms to 5ms. And THAT does make the difference. It's not for the eye, it's for the hand with the mouse...

The same will be the case with this physics stuff: the "clock" for physics effects and the "clock" for redrawing a scene by no means need to be synchronized.

For your grass example this would mean: if a high-end SLI setup can pump out 200 frames/second, why would you want to recalculate the grass bending under your feet any faster than with every step your "game figure" takes?

Or, if a suspension bridge swings slowly in the wind, it's the duration of the bridge's oscillation that sets the time constraint for a smooth physics simulation:
if the bridge swings at, let's say, 0.5Hz and you want a halfway decent temporal resolution of the simulated bridge motion, you might have to
calculate, well, let's say ten to twenty intermediate positions per cycle. Therefore, for this example you would recalculate the position of the bridge segments at a rate of five to ten times per second.
If your SLI graphics pumps out 200 frames/s, you just display the bridge in the same position for 20-40 frames. However, the faster your graphics, the shorter the lag between the moment a physics effect is calculated and the moment it actually appears on the screen.
Obviously rapid physics effects like falling objects will require a much more rapid recalculation of object positions and interactions, but again, you will perceive them with your eyes, so why should you do it faster than 60Hz-70Hz as the maximum limit?
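What's being described here is essentially the classic fixed-timestep game loop: each physics "clock" advances at its own rate while rendering runs as fast as the hardware allows. Below is a generic sketch using the roughly 10 Hz bridge and 60 Hz debris rates from the examples above; none of it is tied to any particular engine or SDK.

```cpp
// Generic sketch of the "two clocks" idea: rendering runs as fast as the GPU
// allows, while each simulation advances on its own fixed timestep.
#include <chrono>

void updateBridge(double /*dt*/) { /* slow oscillation: 5-10 updates/s is plenty */ }
void updateDebris(double /*dt*/) { /* fast falling objects: ~60 updates/s */ }
void renderFrame()               { /* draw whatever state the simulations last produced */ }

int main() {
    using clock = std::chrono::steady_clock;
    const double bridgeStep = 1.0 / 10.0;  // 10 Hz is ample for a 0.5 Hz swing
    const double debrisStep = 1.0 / 60.0;  // rapid effects still only need ~60 Hz
    double bridgeAcc = 0.0, debrisAcc = 0.0;

    auto previous = clock::now();
    for (int frame = 0; frame < 2000; ++frame) {  // stand-in for "while the game runs"
        auto now = clock::now();
        double elapsed = std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Each physics "clock" catches up independently of the render rate.
        bridgeAcc += elapsed;
        debrisAcc += elapsed;
        while (bridgeAcc >= bridgeStep) { updateBridge(bridgeStep); bridgeAcc -= bridgeStep; }
        while (debrisAcc >= debrisStep) { updateDebris(debrisStep); debrisAcc -= debrisStep; }

        renderFrame();  // may run at 200 fps; most frames redraw the same bridge pose
    }
}
```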
 
> AFAIK, no one from Immersion Games nor Ageia claimed the hardware was required to run the demo, but without the switch, the demo does look for the PhysX hardware. Therefore, anyone attempting to run the demo without knowledge of the switch would be led to believe that it was required.

Nah, in the System Requirements section of the Ageia website, they say that a PhysX card is required to play the game. Just to let you know...
 
1. Worth $300? No, but that's the initial price. Is a Blu-ray player worth $1000? No, but that's the initial price. The simple reality is that until demand increases, and Ageia can order say 1 million cards from TSMC or whoever manufactures the chips, they are too small to get a reasonable rate from the fab. Secondly, they have to budget how much the cards cost, plus pay for R&D, and hopefully make some sort of profit, otherwise investors will drop them like a rock. Finally, a competing card needs to step up to drive competition; then the prices will likely fall.

2. Demand, well there's a big one. You know, I hate to compare them to the defunct PCX1/PCX2 generation. PowerVR did, however, market the cards better, because they had working demos that for the time produced some excellent visuals. It gave developers and hardware enthusiasts something to think about, and it was tangible. Ageia needs to get some real and practical demos out, with the ability to disable and enable the hardware within the demos in real time. That will stir demand.

3. Positive competition is what we're waiting for. We all know that competition is good for the market, especially the end user. It drives manufacturers to strive to be the best. However, at this point, the worst thing that can happen is getting into a Havok/Ageia/generic dispute. What that'll drive is some people having an extra graphics card or two, others having a PPU, and still others deciding to forgo hardware completely and design a software, CPU-driven physics engine. Any choice but the generic CPU-driven one would alienate users, similar to 1997 when there was PowerVR/3DFX/Rendition, all with their own 3D language, and some games only supporting one chipset. What's the solution? Microsoft needs to introduce a physics API for DirectX that Ageia, Havok, etc. have to comply with. Then, similar to graphics cards, everyone is pursuing a common goal.

4. Performance, now here's the kicker. Many seem to have the attitude, "If it's not going to make it faster, then I don't want it." Let's be realistic though: would you rather play a game at 1024x768 and medium detail at 150 fps, or at 1600x1200 and high detail at 60 fps? So if you're like the majority of users and opt for the high resolution and high detail at lower frame rates, then perhaps you need to look closer at the card.

Again, let's go back in time to 1997. Quake 2 is out and probably the hottest game around. It features hardware support out of the box for PowerVR, 3DFX, generic OpenGL, and software mode. At the time, I was running a Pentium 200 and a PowerVR PCX2 card. I could run it in software mode just as fast or faster than in the PowerVR-accelerated mode. Yet why did I run it in accelerated mode? Of course, because it smoothed out the textures and provided numerous other benefits.

Now the Ageia card is out, and it's a first generation chip. That means it will get much better with age over numerous hardware and driver revisions. You could compare it to the XBOX, as a new machine, the first games available were much less cool in content and graphics compared to what was released 2 or 3 years later. In a similar way, the first few games aren't going to fully utilize the card the way we would like to see, but time will change all preconceptions.

Now let's look at today's 'new' games that really push the graphics hardware. Take Oblivion, for instance. Download the 2048x2048 texture mod, then max out the other settings (if you run an ATI card, turn on NPatches in the INI file). Then turn on 8x AA and 16x AF. At that point, even today's top-of-the-line systems will grind to a halt with a few characters on screen at anything over 1280x1024. If you don't believe me, try starting a sword fight in the Imperial City, where you will have eight or nine highly detailed NPCs running around. Then, if you're like me, you run the Matrox TripleHead2Go adapter. At a resolution of 3840x1024, or approx. 4 million pixels on the screen (210-degree field of view), that will nullify every system out now, and likely any hardware to come for a year or two. Thus I don't want any physics off-loaded to my GPU.

5. GPU physics is the worst thing ever. Let's be serious. When you play Half-Life 2, one of the great things is watching the physics. Whether you have a slower system or a hot-off-the-presses new PC, you get the same level of physics. On the old system you may lose some graphics quality, that's for sure. However, if you were to suddenly start running physics off, say, a Radeon X1600 Pro, you might run physics at an arbitrary level like 4, where software mode is 1. How does a developer produce a game where you could have a 10 or a 1?

Graphics is simple: you turn down the detail. In physics, however, turning down the detail isn't an option. It may be somewhat of a limited option in Oblivion because it's a single-player game. But say in WoW, with everyone having different hardware specs, you couldn't bank on anything. You couldn't bank on everyone's PC being able to handle 150,000 particles on screen at once. How then could you have realistic fragments from, say, windows getting blown out of an apartment? Thus you'd have to turn down the total effects to, say, 5,000, where everyone who meets the spec for the game could play it, guaranteed. However, if you have an OpenPL (Open Physics Language) or DirectX physics API where the minimum spec is 150,000 particles and your hardware fully supports it, then the game developer will write the game to support that number of effects. Remember, physics is supposed to enhance the game, like making it more realistic (like a better flight model in Battlefield 2). Graphics is just eye candy.
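A rough sketch of that "guaranteed minimum" argument: gameplay-relevant physics gets pinned to whatever the common spec promises every compliant card, and only cosmetic extras scale with local hardware. The 150,000 figure comes from the discussion above; the capability query and everything else below is hypothetical.

```cpp
// Sketch of the "guaranteed minimum spec" argument: gameplay physics is fixed at
// the level every certified card must support; only cosmetic extras scale locally.
// queryLocalParticleBudget() is a made-up stand-in for a real capability query.
#include <algorithm>
#include <cstdio>

// What a standardized physics API/spec would guarantee on any compliant card.
constexpr int kGuaranteedParticles = 150000;

// Pretend capability query for this machine's hardware (illustrative only).
int queryLocalParticleBudget() { return 200000; }

struct PhysicsBudget {
    int gameplayParticles;  // identical for every player, so simulations stay in sync
    int cosmeticParticles;  // extra debris/sparks, free to differ per machine
};

PhysicsBudget planBudget() {
    int local = queryLocalParticleBudget();
    PhysicsBudget b;
    // Gameplay-affecting effects never exceed the guaranteed floor...
    b.gameplayParticles = kGuaranteedParticles;
    // ...while anything left over on faster hardware goes to eye candy only.
    b.cosmeticParticles = std::max(0, local - kGuaranteedParticles);
    return b;
}

int main() {
    PhysicsBudget b = planBudget();
    std::printf("gameplay: %d particles, cosmetic extra: %d\n",
                b.gameplayParticles, b.cosmeticParticles);
}
```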

Thus I imagine that within the next few years, you'll pick up a game in a retail box and it will have the words "PPU required", just like some games showed "3D Card Needed" back in 2001.

------------------------------------------------------------------------------------

In closing, I'd say that although the Ageia card appears expensive and seems to have limited support, it's likely a grossly underestimated card and is waiting for a developer to unlock its full potential. As an industry, developers and users need to back this new market and push for a unified hardware-accelerated physics language. Graphics cards should be left doing what they do best: graphics.

So have I bought one? Yes, 2 weeks ago.
 
Some graphics cards have sufficient spare GPU capacity in some games to enable hardware physics acceleration.

"
their are spare gfx GPU cycles where whole parts of it are dormant. so why not use them.
"

However, like I said, if you're already pushing your graphics hardware to the limit, as in my example with Oblivion, then there's no room for physics calculations. Game developers need to be able to count on a guaranteed amount of GPU or PPU time when processing physics.
 
Have to agree with you, I seriously doubt using GPUs as physics processors... not only is the amount of processing insane, it's also extremely different from graphics processing. Graphics cards aren't made for it and will be as bad at processing physics as CPUs, IMO =/
One card to rule them all is the best way, and it can be made far more powerful when dedicated to that type of processing. If they're going to make drivers for all the different graphics cards, and all the different graphics cards will have different performance, it will be another jungle just as with GPUs today. And the old graphics cards won't be near as powerful as a PhysX, not even the newest and tomorrow's GPUs =/
So we would have to upgrade two GPUs often instead of one to have the best gaming rig. No thanks, I would prefer one powerful card that won't need an upgrade for at least 5 years, like the PhysX card, instead.
Whether it's worth it or not right now is simple: there are no games for it, so why buy it? And the first round of games will not be so awesome, the second round might do some cool stuff, the third round might do amazing stuff never seen in games before, etc. It takes time.

I hope Ageia gets support from game devs, because it's really up to them whether Ageia makes it or not. And ATI/Nvidia trying to make their own gimped versions of a PhysX by using GPUs is a bad idea... it will lead to constant upgrade needs for another card. Why bother to "emulate" (sorry, but I would call it that) and do something half as good when you can make dedicated hardware that only uses its power for the task, and does the task way better? I wish Nvidia and ATI backed out and let Ageia in; the game devs are the ones that will show us what the card can or can't do, and there will be bad examples for a long time to come. With a dedicated card there are no constant upgrades; I think a PhysX card would last a minimum of 5 years before you might need to upgrade. That depends on game developers, of course... physics might become so popular in games that it might take 2-3 years before you need to upgrade. A PhysX card is a long-term card, though, and it's very powerful. How long-term GPUs are and how powerful they are is something I'd like to see...

I remember when games used to say "requires a 3D acceleration card": you could play them without one, but did anyone whine about that? I didn't see any whining about it. Almost every game at the time had that line on it. Didn't the Nvidia demos "require" an Nvidia card? Why aren't Nvidia liars and the meanest people in the world, lol... New motherboards "require" a 2.2 PSU; you don't have to have one, you can use your old one with minimal modification, but they "require" a 2.2 PSU... A lot of things "require" something but can be run/used without it. I think it's because the general population is getting more spoiled; we complain about every small fault we can find today. 20 years ago defects and such were plenty, and you didn't complain if it was still functional. A single scratch today is the end of the world, but I guess it must be so... A lot of software "requires" things but can be made to run without them, and not once have I seen a company get blamed for it like Ageia.

It's general hate because there might be another card in your gaming rig, a card you won't have to upgrade for long periods of time, a card you won't be required to have to run games. Havok is doing well with software; they would rock with hardware, and it would have been nice to see them and Ageia working together, but Nvidia and/or ATI are already working with them, I think? Ageia is left out with only game devs to help them. Nvidia/ATI have funds and can get their brands in any product; Ageia are newbies, but I have no doubt a PhysX will seriously outperform a GPU-based physics processor... it will be an interesting read once that time comes :wink:
 
You ask me for proof, so I'll ask you for proof. I was stating my opinions and what I think about it. You seem to find something to complain about with every poster here, so show me the proof where GPUs outperform a dedicated physics processor, as you say we have seen them do.

I know that physics calculation is the hardest and most demanding kind of calculation a computer can do. I know a CPU is extremely bad at doing it, I know a GPU isn't made for it, and I know nothing can compete with hardware dedicated to the task. So GPUs will do a much poorer job, not to mention the other things I said.

You take a lot of things personally; you shouldn't do that... and you're talking like we're all a bunch of kids who know nothing.

You're saying a piece of hardware is better at doing stuff it wasn't meant to do, while I'm saying hardware doing something it was dedicated to do beats that hardware. How am I discrediting myself? I didn't even say it IS so; I said we will see, and it will be interesting.
 
Not quite sure what Ageia has done to piss you off this much...
You're asking first-generation hardware to provide proof that it's actually worth something.
Then you may as well go back a few years and ask 3dfx and ATI for proof their cards actually did something when they were first released. You are also comparing a well-established product against a startup. The only reason any of the graphics companies have demos now is because they need to show off their power (otherwise Joe Consumer has no reason to spend $500 on a card vs $200). Demos are not something the hardware companies are responsible for.

Even so, judging by your tone and responses to everyone else, you would not be satisfied if Ageia did release demos for their product, as by your consideration they would be "rigged".

And yes, the GPU is a powerful chip. It's specialized towards a very broad range of mathematical functions, more than enough for the number crunching that distributed-processing programs need. However, the calculations required for physics tend to be quite different from the capabilities of a GPU.

Ageia's SDK is also free for developers. The demo in question is slapping the functionality onto a product mid-stream, if not worse. The fact that someone found a way around it does not mean it's rigged; it just means the developers programmed in the ability to do so (possibly because the final product may have physics as a toggle and not a requirement?).

I can't read the future and tell you whether Ageia is full of BS or not. But you do not have sufficient proof yourself to state that they are.

I happen to agree that using a GPU to calculate physics is a poor decision, unless the graphics companies expand the operations of the GPU itself. This also means developers need to pay money for the privilege of using Havok (in my opinion a half-assed solution) versus a free, untested alternative.

If you have access to an Ageia card and it's really supported by any of the 3D packages out there (Maya, Max, XSI), or the SDK can be adapted to one, then your best bet is to go test its effectiveness there. Until then, your argument holds up no better than stating that a video compression card has a powerful chip... it's obviously better than Ageia's... here's the proof: with the card you can compress your MPEG videos at 4x the speed, so obviously it can do physics.
 
I think we need to remember that we are talking about first-generation hardware here. Yes, the GPU is capable of performing physics, perhaps at a suitable rate for current levels of physics calculations, but as already mentioned, software rendering used to be just as quick as the first batch of 3D acceleration; now that we are on 7th-generation GeForce cards, the CPU is not even capable of doing 2 fps (3DMark CPU testing). Now, as StrangeStranger says, 'Ageia needs to put up or shut up'. I read this as saying they either need to give up now or produce second-generation technology that shows real, significant gains (not in terms of FPS but in terms of game experience) that will make people think the card is a good option.
As far as GPU physics processing goes, I'm not a fan, as I see it as a feature along the lines of AA that, whilst improving appearance, reduces performance. If this is wrong then I'll hold my hands up, but surely the part of the GPU doing the calculations should be used for the graphics process. However, if there is only a 10% decrease, this point is made redundant, as 60 fps to 54 fps is no real issue. But this is where you may start seeing FPS gains from having a dedicated PPU: as games, physics and graphics become even more advanced than they already are, enthusiasts will want to unload some of the strain from the GPU and place it on the PPU (of course this is just an educated guess).
I would love to see Ageia doing well, as it gives buyers another option when designing a system. And when I come to build again, I will, as with graphics cards, weigh the performance gains of a certain model against the extra price of getting it; if the second generation does give significant gains then I will get one, and if not I won't, simple as that.
 
OMFG, you were the one who posted that article, and the irony is it rips your arguments to shreds. It mentions that both Havok and GPUs can do gameplay physics, but you say they can't. WTF.

Actually, if you'd read my comment:
> The real scammers here are ATI and Nvidia, because they want you to invest in another GPU for more eye candy (Havok-FX), without actually delivering anything to compete with Ageia in gameplay. Havok is the only direct competitor to Ageia, and they offer a software-run SDK limited by your CPU for six figures. Ageia's is free and can be offloaded to dedicated hardware.


You'd see that I said Havok COULD do gameplay physics, and I DIDN'T say that the GPU couldn't do gameplay physics, but that they WON'T (at least not yet). The fact that they won't leads me to believe that the current-generation hardware can't support the read-back to the CPU well enough, and that the adoption of effects-only physics is a stalling tactic. But go ahead and buy your third GPU - I'm sure it'll be pretty.
 
No one, including ATI, has produced any benchmark software to show how many shaders are being utilized at one time during a game (the slide doesn't say what system specs, what game, or what detail everything is set to, either). Give me that as proof. Let me run Oblivion/FEAR/3DMark06 at the max and see whether or not the shaders are maxed out. That's the proof I want.

So what if the shaders are maxed out 95% of the time on a particular title? Is the 5% enough to keep physics happening? What if the game developer sets a minimum hardware spec of, say, an X1800 XT, and based on that spec plans to make the physics take up a constant 50% of the available shaders? Are you then going to cut graphics detail from Medium to Low or Ultra Low because the other effects would peak higher than 50%, like that slide ATI showed? And if you think the graphics chip could just offload the physics to a quad-core CPU when it's low on resources, you'd likely experience the same kind of problem as when RAM starts paging to the hard drive.

Right now physics is very rudimentary in any game to date. Virtually every title I've played has glaring misses. I remember in Doom 3, when I shined a flashlight on the mirror, it didn't reflect back and light up the room. Why not, I wondered, isn't this a next-gen game? Oblivion I laugh at, where the falling rate is constant (you don't accelerate as you fall) and rain falls through solid objects like floors and roofs.

With a dedicated PPU card (Ageia or otherwise), the developer can bank on X number of physics in a game.

If you believe the GPU is the be-all in the industry, then why do we still have CPUs and APUs? Use those extra shader cycles to replace the CPU and sound card. Drop the stupid Core 2 or Athlon, and plug your Nvidia/ATI GPU into the CPU socket. Wait, someone recently thought they could do everything with just one chip: SONY. Guess what, even with the superchip they're putting in the PS3, they still needed a GPU to handle the graphics.
 
If you're really interested in what physics can do in simulations, I would suggest creating some realistic AI representations in games.

You might start with something relatively (relatively) easy, such as landing on the moon.

I can guarantee you that shooting radioactive barrels will surely make you dead, within any sense of the physics world.

Try the moon. Or Venus. Or Mars... or Mercury.

It angers me to think: wow, I got to blow something up, and then comes the educated comment, "look at my physics engine." Phooey. Apply physics engines to all sorts of simulations that may require persistent physics calculations to display.

Cloth tearing? Blowing up radioactive barrels? What market segment are you kidding?

Sure, some teenager's dad will get a respective kick in the A for this type of idealism in simulation/physics engines. Of course this type of presentation is the neutral zone for public and market. Posh overkill, making a large market very narrow. Hardly respectful of the microcosm creating the simulations, with the strengths it has to do so.