Is Ageia's PhysX Failing?

Page 5
- If developers didn't want gameplay physics they wouldn't be using Havok or PhysX engines.
- Havok FX is effects only, and that's what the GPU makers are supporting.
- Yes, the calculations are the same, but if they only do the work on the GPU they probably can do it very quickly. But if there is no problem sending it back to the CPU to be used in gameplay, why don't they just do it and be done with it? Do you think all these developers are using Havok and PhysX because they just want fancy particle effects? The reason developers are using PhysX at all is because they ARE excited about the possibilities of interactive physics effects. The fact that ATI and Nvidia supposedly can support this and aren't, when it's painfully obvious that they do want it, says to me at least that their technology can't support it in that capacity with the current generation of hardware. It's a stalling tactic, and I'm betting that when they do roll out gameplay physics, you won't be doing it on your old card. It'll be a case of "if you want physics effects you can use your old X1600 card, but if you want the new gameplay physics enabled, you'll need to upgrade."
 
I didn't read all six pages, so apologies if this has been stated before.

Imo the Ageia card is a really good idea and deserves some success. However, I think it is bound to fail, mainly because it is too early. By this I mean that the gaming market is moving more and more towards online play. No matter how effective the PPU, the problem lies with data transfer speeds across the internet.

Imagine a physics explosion in a multiplayer game. First the server will need to transmit the information about each piece of debris to everyone playing who is likely to be affected in some way by it. The clients will then need to report collisions back to the server, which will then need to check that these collisions can be considered valid. Due to the nature of physics sims (i.e. they are chaotic) the explosion will be different on each client. Then the server will finally send the information back to the clients telling them that the player has been hit or missed.

All of this will make a huge impact on the amount of data that the servers/clients will need to transmit back and forth, certainly far more than your average gamer has access to. The same problem already limits the number of players in a game: 200 players equals slow gameplay.
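
To put rough numbers on that, here is a back-of-the-envelope sketch; the per-object payload, debris count, tick rate and player count are assumptions for illustration, not measurements:

[code]
#include <cstdio>

int main() {
    // Assumed per-debris snapshot: position (3 floats), velocity (3 floats),
    // orientation quaternion (4 floats), plus a 4-byte object id.
    const int bytesPerObject = (3 + 3 + 4) * 4 + 4;   // 44 bytes
    const int debrisPieces   = 2000;                   // one big explosion (assumed)
    const int ticksPerSecond = 20;                     // assumed server update rate
    const int players        = 32;

    // Upstream bandwidth the server would need just to keep this one
    // explosion synchronized for every connected client.
    long long bytesPerSecond =
        (long long)bytesPerObject * debrisPieces * ticksPerSecond * players;

    printf("%.1f Mbit/s upstream for %d pieces of debris\n",
           bytesPerSecond * 8.0 / 1e6, debrisPieces);
    return 0;
}
[/code]

That works out to hundreds of megabits per second for a single explosion, which is exactly why syncing full physics state over a typical connection looks hopeless.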


On the issue of the GPU doing the job of a PPU just as well as the PPU could :lol: :lol: :lol:

People complain that the Ageia card does not improve performance. Do you really think that a GPU that struggles to play Oblivion will be unaffected by the extra overhead of calculating and then rendering thousands or tens of thousands of physics objects on top of what it is already doing?
 
Ageia released a card. What people chose to do with that hardware and SDK is the direct result. Although I completely agree with you: if you want to show off physics, there are 100 better things that could demonstrate its power and effectiveness, or lack thereof. However, the developers decided to demo it by oozing slime out of a barrel and a cloth sim.

Part of this fault falls on Ageia, part falls on the developer. One could have targeted a different market, the other could have thought of better applications or demos.
VVV Example VVV
Had it been a demo of a flag wavering in the wind, or a coat over a complex character model, it would have had a different effect. Similarly with a contraption with a fluid flowing through it: give the player the ability to control the ?density? (sorry, a bit brain dead) of the water to see how it acts under different properties.
The same with my earlier example of adding a physics interaction between the player and the level, changing the properties between dry and wet/slippery.
Someone else's example of altering foliage sprites.
And so on...
 
Here is a Google Video link.

It is shot off a screen and the quality is not fantastic, but at least there is no signup.

That looks pretty nice, but it really does not show anything useful. By this I mean that the GPU is pretty much dedicated to running the physics and not running a game world at the same time. I would have liked to see it chucking large amounts of geometry around, with collisions as well.
 
Well, the presentation is to the community of programmers, actually, as a statement to the public saying that the programmers doing this 'really don't get out a lot'. Laughing as they walk or run to the bank.

My analogous statement is to take the bag off of their heads, or place it over the lips to get some air.

If simulation of physics changes physics, or simply by mention of programming it to do so, then there is not much of any intention of reacting to the rules of physics.

So if so, then the intention of the simulation is not that of a representation of physics.

I'm not telling this as a mention of the means of AI. The means of representing physics within it is what I am dealing with. So I am completely sure that amassing the ceiling to simply game playing and the representation of this article is a complete contrast.

Failure? You bet. Of course I, like anybody, do not have the perfect condition for any headline. <- to article writer. In hopeful expectation.
 
I am not trying to dispute that Ageia doesn't have any good examples as proof; I agree on that front. I also happen to agree with several other people above that the demo is a pretty bad way of demonstrating the power of the effects.

However, you have to remember this is a small startup. They do not have the resources to hire people to write a series of demos to show off their capabilities. I'm pretty sure all of their developers are either pulling 80-hour weeks on SDK development and expansion, or being sent over to actual developers to smooth out integration.

As I said, though, it is waaay too early to tell if Ageia's card will make a major impact. You never know, they could always be another 3dfx or Aureal: develop and refine the need, and get eaten up by the giants for integration.
 
My main concern about GPU physics acceleration is the implementation of a unified architecture. Both Nvidia's and ATI's solution to physics processing is the utilization of dormant clock cycles to do the physics calculations, or the dedication of a separate GPU. It is my understanding that one of the advantages of the Direct3D 10 API is to unify the various shader pipelines in order to minimize dormant clock cycles. In my opinion this would in essence nullify part of the current advantage of GPU physics. When the physics processing requirements will surpass the power available in today's GPUs is anyone's guess, but the fact that a dedicated card will eventually be required seems to be a logical scenario to me.

The other advantage of GPU physics is the use of an older GPU to do physics and a next-generation GPU to perform graphics operations. Unfortunately this results in a multi-card solution to physics processing, which is the stumbling block firmly planted in front of AGEIA's solution.
The proposed ATI and Nvidia solution is purported to be complementary to the upgrade path, which should help them (and AGEIA) convince developers to implement more physics in their games.

Currently the best option for physics processing is AGEIA's, by default, due to the fact they have an actual product supported by actual titles. At the same time it is unfair to judge AGEIA's PPU capabilities based on the currently released games, since it was a tacked-on extra. It is equivalent to using the x64 Far Cry patch to judge how 64-bit games will look and perform. Until ATI and Nvidia actually release their solution and it is implemented, there will be no unbiased figures to base a comparison between the two on. All we have is theoretical data to debate, and arguments based on theoretical data only hold "theoretical" water.
 
A lot of pages to read.
Is Ageia PhysX failing? It sounds like a very biased title, but the article reads differently, so I agree with what is put in there. Mostly it's old news, but it's more wait and see. Everything is still open.

What 3dfx did is a track record built over many years, and it took other firms a lot of years to take over.
Was 3dfx burnt to the ground three months after releasing their PCI Voodoo?
3dfx kickstarted the 3D gaming card market, with the latest result of that market being the X1900 and 7900 series as a full product line, from cheap onboard and budget parts to high-end and extreme.

Failing? That's like starting a 50-mile race and calling the one who runs a bit behind in the first mile a failure.
For Ageia, the heat of the battle spans one to three years, and we have just a few months behind us.
The outcome, fail or success, depends on games which push the hardware: the very much needed killer app. The most favored pick would be UT2007?

Developing games takes a lot of time, many years. A lot of those 65+ Ageia-supporting games are coming out within a year; they have already been in development for some time. The time Havok and ATI/Nvidia need to react to Ageia with released games will take somewhat longer.

So the first games don't decide if Ageia fails.
GRAW? Who cares, it's the first. What was 3dfx's first leading Glide game?
If UT2007 fails to push the PPU, or Joint Task Force, or the retail version of CellFactor fails, and almost every one of those 65+ games doesn't make good use of PhysX and isn't popular, then Ageia will fail. But we'll know how this battle unfolds in 2007/2008, when a lot of Havok FX and PhysX games are available, or one camp is out of business.

My experience: I do like GRAW, I'm a fan of the Ghost Recon series, mostly single player. So I already liked it with Havok (CPU). PPU PhysX adds just a tiny bit to it.

Oh, I do like CF. It's actually a bad example of a map to demonstrate more PhysX, but it's just one map. It's some UT-like bot mayhem with a PhysX twist.

But CF R36 comes with mod tutorials, tutorial vids and the Reality Builder "mod" level design tool. You can use Visual C# Express as the script editor; I do. I've also played CF in Reality Builder without bots.

Maybe some modder can make more instances of those flags in the playing field, or make a new PhysX-oriented map.

I managed to adjust the amount of Nadeshels from 6 to 12, and clipAmmo from 90 to 250.

Judging Ageia PhysX's success/failure purely on CF combat training on limbostation makes no sense; it's just one single map.

It will take some time until those 65+ games come out, and also some time till the first Havok FX games come out. Effects physics is a safe, low-risk adoption, so success is secured. But Ageia is pushing the cool gameplay physics stuff now. If a few of those 65-plus games make good use of it, Ageia has an opportunity to be successful.

The first wave of hardware-accelerated physics games will be Ageia P1-supporting games; Havok FX games come a bit later. This also gives Ageia a bit more opportunity to succeed.

So it's a long wait. "A long race."
 
Are you sure about the read-back situation? Because that kinda shoots your argument down in flames.

If they can do it, why won't they? Why the hesitation? Saying developers haven't asked for it yet is ridiculous. Havok is already utilized in over 150 games - wouldn't they benefit from having whatever gameplay physics exist in those games enhanced?

The only reason that makes any sense to me is that they are suffering a massive performance hit once that information is sent back to the cpu. They say that the info is read back to Havok-FX, but is it residing on the GPU in one of the shaders (I'm guessing this is the case) or is it on the system?

If it is on the system and not the GPU, then the question still remains: why not just implement full physics acceleration now, if it's so easy to do?
If it is on the GPU, then that explains perfectly why they say they CAN do it, but won't (because they have poor data throughput back to the game engine).
 
There are gameplay physics already integrated into the Havok engine. The Havok engine is doing calculations on the CPU for those gameplay effects. If it's as they say, then they could simply shift all those existing calculations to the GPU, probably through something as simple as a change to a library (DLL).
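
Purely to illustrate the idea (this is not Havok's actual API; the interface, class and function names below are invented for the sketch), this is the kind of abstraction that would let a physics backend be swapped by shipping a different library:

[code]
#include <cstdio>
#include <memory>
#include <vector>

// Hypothetical rigid-body state; a real engine carries far more data.
struct Body { float pos[3]; float vel[3]; };

// If gameplay physics sat behind an interface like this, the backend could
// be replaced (CPU today, GPU or PPU tomorrow) without touching game code.
class PhysicsBackend {
public:
    virtual ~PhysicsBackend() = default;
    virtual void step(std::vector<Body>& bodies, float dt) = 0;
};

class CpuBackend : public PhysicsBackend {
public:
    void step(std::vector<Body>& bodies, float dt) override {
        // Trivial integration, done on the CPU.
        for (auto& b : bodies)
            for (int i = 0; i < 3; ++i) b.pos[i] += b.vel[i] * dt;
    }
};

int main() {
    std::unique_ptr<PhysicsBackend> physics(new CpuBackend());
    std::vector<Body> bodies(1, Body{{0, 0, 0}, {1, 0, 0}});
    physics->step(bodies, 1.0f / 60.0f);
    printf("x after one step: %f\n", bodies[0].pos[0]);
    return 0;
}
[/code]

Whether a GPU path could fill in that backend efficiently is exactly the read-back question being argued here.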

Instead, they are choosing to only do half. Why? Implementing a feature on their end doesn't alienate anyone. That's like saying they can't implement DX10 cards because that might alienate their customer base. It's up to the developers to decide whether to use those new features or not.

You can report me for trolling all you like, I'm not the one calling people stupid. All I'm doing is speculating on why the GPU manufacturers have made the decisions/announcements they have, and trying to explain why I think the dedicated PPU is a better solution.
 
On the PlayStation 3, there are the SPUs to accelerate physics. On the PC there are the GPU and multi-core processors to accelerate physics. Both SPU and GPU are not limited to 'FX' physics; the programmability gets better with each shader model release.

It will not be easy for Ageia to fight all those battles against the 800-pound gorillas (Havok, Nvidia, ATI, Intel/AMD, Sony).

They'd better join one of those gorillas ;-)
 
I don't really know anything about the PS3, or how the SPU might handle physics calculations. From what I have read, it sounds like the performance of the SPU is largely determined by whether it is doing single or double precision calculations. If it's single precision, then it's probably very effective, however if it's double precision then apparently the performance "drops by an order of magnitude" down to that of a desktop.
From what we know already, the CPU does a fairly poor job of physics calculations, so I'm not counting on multi-core processors to provide an adequate solution.
This one --> snippet <-- here seems to indicate that the cell processor will be about 50% as effective as the dedicated PPU, but I could be misreading that.
 
Physics simulation is usually performed in single precision, on the PhysX PPU, GPU, CPU and CELL SPU. The PS3 CELL handles double precision in hardware rather than emulated (PS2), so it is not that much slower. The SPUs are specifically designed to perform physics simulation very well.

This one --> snippet <-- here seems to indicate that the cell processor will be about 50% as effective as the dedicated PPU, but I could be misreading that.
It is misread; PPU means PowerPC Unit in the PS3 context.
In the 2.4 release, several components of the AGEIA PhysX pipeline have been offloaded from the PPU of the PLAYSTATION®3 to the SPUs
These were early results of the port from Ageia, offloading work from the CELL PPU to one CELL SPU. Typically a single CELL SPU can run many times faster than a CELL PPU for tasks like physics simulation.

I am not guessing, because I've been one of the members of the Sony team doing the PhysX port (and former Havok employee too), see google cache:

Refactoring AGEIA for SPUs (Erwin Coumans, SCEA PhysX development team, Tom Lassanke, AGEIA)
 
A couple things I've been meaning to say.

First of all, StrangeStranger, you keep on saying that GPUs are really good for calculations, and that's all physics is, so GPUs should be really good at physics (although ask yourself, what processing part of the computer ISN'T good at some form of calculations?). Calculations are calculations after all, right? Fine, then explain to me the OTHER thing you keep saying (no, not "ur stupid read the article"): this picture here:
[image: gpuloads-work.jpg]


You keep on saying (not an actual quote, but I don't want the stupidity to look like it's my idea) "Look see if you read the article you'd know that the GPU has lots of extra processing power, see how the pixel and vertex shaders aren't always being used?" Here's what I want you to explain: If calculations is calculations is calculations, then WHY can't they simply balance the load between the vertex shader and pixel shader? Obviously you can't just stick calculations anywhere and expect them to be done, there are optimizations to be taken into consideration (my spoon is good for soup and my fork is good for salad, switching their roles doesn't work so well if it works at all). Think about it, if a GPU is at least 10 times as powerful as a CPU in terms of raw processing power, why don't they just stick 4 GB of DDR/DDR2 on a graphics card and offload all calculations onto there like a CPU? Maybe it's not so simple, maybe there are software/hardware/firmware things to take into consideration.

Speaking about this free processing power, due to the pixel shaders or vertex shaders sometimes being fully utilized while the other is twiddling its virtual thumbs: if I understand the Unified Shader concept, it combines them, which would essentially nullify that point. It's funny, now that I think about it, this relates back to me being an "idiot" and digging myself into a deep hole of stupidity. All I asked was that graphics card companies work on exploiting that "hidden power" for graphics first. And you know what? If I'm given the choice between the Unified Shader architecture and what we have now, where I'm forced to lose potential processing power due to VS and PS being separate, I'm going to choose the Unified Shader stuff (which I believe ATi is using in the R600 chip and used in the Xbox 360 chip). Maybe I'm crazy for wanting my graphics card's power to be used for graphics, but it seems ATi at least is crazy in the same way, so I don't really mind.

So what have I tried to establish thus far? First, in current processor architectures, not every piece of processed silicon can do everything as well as the next one; if this were the case we wouldn't have multiple processors with specialized uses in our computers. Second, the processing power that's sitting dormant in current graphics cards will hopefully be used in the next generation of GPUs.

One thing I realized the other day but didn't get around to posting in regards to random numbers in physics... first of all, one point of realistic physics would be to eliminate a great deal of randomness (in a current game, you might program 3 different ways that a window or vase might shatter, so as to add a bit of variety without complex calculations) by simulating, on a simplified level, "exactly" what happens. If all 30 computers of players on a map get informed that a grenade was thrown under a car, assuming the physics are done properly, the explosion should be the same each and every time, because each computer can calculate the trajectory of the grenade, its position and speed when it explodes, the resulting concussive force on the car above it, and if that is enough to blow off any parts.

However, if you want random numbers this shouldn't be hard at all to do. You see, when a computer makes a random number, it's not really random. A computer can't just invent a number; random numbers can generally only be done by doing physical simulations. However, say we have a game and we want to have some random stuff going on. The game could have a pre-programmed table of 10,000 random digits, 0 through 9. When a multiplayer level starts up, it would tell every computer to start at a certain position in the table, and every time a random number needed to be used, the clients could all get it from that table. Maybe they pick every 7th number, or move forward x spaces (where x is the sum of the random number and the one before it on the table). The end result is that the computers all have a list of random numbers from which additional "random-enough" numbers can be picked, yet there's still a formula so every computer will always end up with the same result.
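
A minimal sketch of that shared-table idea; the table size, the generator used to fill it, and the stepping rule are all arbitrary assumptions, the only requirement being that every client does exactly the same thing:

[code]
#include <cstdint>
#include <cstdio>

// Every client builds the same table from the seed the server announces at
// level start, so every "random" draw is identical on all machines.
struct SharedRandom {
    enum { kTableSize = 10000 };
    uint8_t table[kTableSize];   // digits 0-9
    int     cursor;

    explicit SharedRandom(uint32_t seed) : cursor(seed % kTableSize) {
        // Simple linear congruential generator to fill the table; any
        // deterministic generator works as long as everyone uses the same one.
        uint32_t state = seed;
        for (int i = 0; i < kTableSize; ++i) {
            state = state * 1664525u + 1013904223u;
            table[i] = static_cast<uint8_t>(state % 10);
        }
    }

    int next() {
        int value = table[cursor];
        cursor = (cursor + 7) % kTableSize;   // "pick every 7th number"
        return value;
    }
};

int main() {
    SharedRandom clientA(42), clientB(42);    // same seed from the server
    for (int i = 0; i < 5; ++i)
        printf("%d %d\n", clientA.next(), clientB.next());  // identical pairs
    return 0;
}
[/code]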

A final point, to people who keep insisting that Cell Factor is currently just a demo of "cloth tearing" and "liquid spilling". Did you not find destructible objects interesting in the slightest bit? Somebody tries to take cover behind something, and you blow it to pieces; not only does this add to the gameplay by forcing players to make intelligent choices about where to hide (YES behind the 2' thick concrete wall, NO behind the plywood) but it also populates the game with massive amounts of truly physics-driven rubble. It appears and behaves in the ways you would expect it to, as opposed to current games where a grenade or gunshot will make sparks and sprays of dust which promptly disappear. Destructible objects are very cool in my opinion, as is the magnitude of objects in general. Sure, it's neat to be able to hide behind a single box, but not anywhere near as neat as hiding behind a stack of boxes. In earlier times, the stack of boxes would be treated as a single immobile object, probably to reduce physics computations. With the ability to calculate huge amounts of objects (throw a gravity grenade to see what I mean) it's feasible to make any and all objects physics-driven, adding to the dynamics of gameplay. This also goes hand-in-hand with destructible objects, because if you can use a grenade to turn a stack of crates and a car into thousands of independently animated fragments, you had better be able to calculate them all. Granted, those could be done in software-only mode (as I did myself) but they're taking processing power from somewhere, and I have a feeling I could add more bots if my CPU wasn't bogged down with that.

In the end I'd have to say I hope it gets standardized like DirectX, which should make everyone happy: The pro-Ageia people would get their cards, the pro-GPU people would get their cards, and everybody would benefit from the competitive drive to crank up performance on the cards, as well as a standard way to compare them.
 
What part of " Apparently you and some other people just don't realize that I'm talking in theory, being forward looking" do you not understand? ..
I don't care about today's cards. ..
I don't know how much clearer I can make myself.

You call that clear? Let's take a look at your posts…

First you say:
If there's a way to unlock more processing power on the graphics card, I'd prefer for it to do graphics.

Then you say:
I don't care about today's cards. I KNOW they can't just switch operations around like that on today's cards.

The first quote is talking about unlocking potential within current cards, while the second says you don't care about today's cards and know that there is no potential to be unlocked. Hmm… maybe this is what would make it unclear? You can count two contradictions in your posts here. There are several others in your posts, but that would make this post too long.


Also, look at this sentence. Does this imply you are talking about FUTURE video cards?

If my graphics card has processing power to spare, I'd like to see my resolution hit 1600x1200, AA get up to 6x Temporal, AF up at 16x, details and view distances set to max, and framerate consistently at 60 frames per second while playing Oblivion, F.E.A.R. and other games.
MY GRAPHICS CARD IS FOR GRAPHICS FIRST AND FOREMOST. I don't want to bog it down with additional calculations.

Hmmmm… You refer to your graphics card, which you care about. Yet with the previous quote you DO NOT CARE about today’s cards. You also state that:
As for people saying that CPUs and GPUs could be used to do physics, that's already done now

Current GPUs are not used for physics calculations within games, so it is not, as you say, “done now”.

This leads to two conclusions for your drivel:


A) You have developed a time machine and you currently own a future video card because you want the best graphics outcome for your current video card, but you don’t care about today’s video cards because you own the video card of the FUTURE!!!!! In this case, can you please use your time machine to go back in time and clarify your posts so that they would not cause this confusion?

or

B) You are confused and not sure what you previously wrote, and why people do not understand you. In that case, that effective communications class at your local community college seems very tempting now!
 
I know that physics calculation is the hardest and most demanding calculation a computer can do.
Hardly; ever heard of ray tracing? :?

Do you mean raytracing as an alternative render method?
Because you can also do raytraced collision with simple straight-line gunfire physics.
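
For what it's worth, that kind of hitscan check is just a ray-sphere intersection; here is a toy sketch (the scene and numbers are made up for illustration):

[code]
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  sub(Vec3 a, Vec3 b) { return Vec3{a.x - b.x, a.y - b.y, a.z - b.z}; }

// Classic ray-sphere test: does a straight shot from 'origin' along the
// normalized direction 'dir' hit a sphere of 'radius' centered at 'center'?
// Returns the distance to the hit, or a negative value on a miss.
float raycastSphere(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
    Vec3  oc   = sub(origin, center);
    float b    = dot(oc, dir);
    float c    = dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;        // ray misses the sphere entirely
    return -b - std::sqrt(disc);          // nearest intersection distance
}

int main() {
    Vec3 muzzle = {0, 0, 0}, aim = {0, 0, 1};   // shooting straight down +Z
    Vec3 target = {0, 0, 10};                   // target 10 m away
    float t = raycastSphere(muzzle, aim, target, 0.5f);
    if (t >= 0.0f) printf("hit at %.2f m\n", t);
    else           printf("miss\n");
    return 0;
}
[/code]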

Current hardware is aimed at the geometry/texture/pixel-shading method.

What if there was a specific RT-VPU or VVPU:
a Raytrace Visual Processing Unit, or
a Voxel Visual Processing Unit?

The problem with raytracing and voxels is that they only exist in software, so they are a CPU load,
whereas the geometry/texture/pixel-shading method has the option of both software and hardware support.

Knowing the difference between a CPU and the latest GPU in graphics processing, hardware acceleration could mean a lot for raytracing and voxels.

It's maybe possible with those GPGPUs of today and the unified next gen; those unified shaders could be (mis)used for voxel computation,
as there were voxel/geometry hybrid Novalogic games.

Raytracing I don't know, but hardware-accelerated with specialized hardware
it could make a big deal.

Just like the PPU for PhysX,
and the AIS1 for AI.

The GPU is acting more as a general-purpose parallel coprocessor.
 
Alright, further clarification, because you don't seem to get it. I'll go in order of what I posted:

I did in fact say
As for people saying that CPUs and GPUs could be used to do physics, that's already done now, and as you can see the effects are generally not very great.
This was a bit of a tongue slip on my part. It was a careless combination of two true statements: 1) some people are advocating using CPUs and GPUs to do physics calculations, and 2) CPUs currently do the physics calculations. The point of the sentence was to say "People seem to think that CPUs and GPUs can handle complex physics, but if we look at modern game physics as they're implemented [using CPUs] there isn't nearly the level of physics detail we could hope for." Fortunately nobody else was tripped up by my unintentional statement (that GPUs currently do physics calculations) in the next 4 pages of posting, otherwise they would've shouted "OMG I want your graphics card, how well does it do physics??" Gimmie a break. It was a decently sized post, and I'm sorry if I didn't have an English professor proof-read every sentence to make sure it couldn't be picked apart by overly hostile persons such as yourself.

The next time I posted, I said
...I'm glad you can admit you have "no clue" what you're talking about. Because obviously, you don't. If my graphics card has processing power to spare, I'd like to see my resolution hit 1600x1200, AA get up to 6x Temporal, AF up at 16x, details and view distances set to max, and framerate consistently at 60 frames per second while playing Oblivion, F.E.A.R. and other games. MY GRAPHICS CARD IS FOR GRAPHICS FIRST AND FOREMOST. I don't want to bog it down with additional calculations for every gust of wind, grain of sand, and droplet of water in the game. If I can get another card that will do that without a performance drop, then I'd like that very much.
You ask if this could talk about future cards, and it absolutely could. When I said "my graphics card" I didn't just mean my current card, I meant cards I might buy in the future as well. To demonstrate this, when I say "If my (current) graphics card has processing power to spare, I'd like to see my resolution hit 1600x1200 [etc]" and "If my (next) graphics card has processing power to spare, I'd like to see my resolution hit 1600x1200 [etc]" it still makes sense. Whether it's now or next year, I'd like my graphics card to do the graphics. The only scenarios I now envision in which I would approve of graphics cards doing physics are if I'm running at max graphics detail and resolution at fluid framerates and there's still processing power to spare, OR if some design issue with graphics cards means that they cannot and never will be able to use some of their power for graphics, but they CAN do other things on it.

Obviously, however, other people DID understand me, just not you, and just not until now it seems. With later posts I've tried to clear up what I'm saying, but you seem completely unable to deal with it. Due to your deliberate misinterpretation of what I was saying, negative and sarcastic attitude and conclusions, completely pointless personal attacks and utter refusal to let go of any way you might have to demean my valid opinion, I have to conclude that you're one of those people who enjoys getting a reaction by provoking somebody to no end rather than having a useful exchange of ideas. Instead of trying to come to the conclusion that I'm either from the future or that your inability to understand what everyone else can stems from my lack of communication skills, how about you try to address the pros and cons of hardware vs software physics, using past and present products, as well as possible future scenarios? I'm sure it would be much more useful and relevant to the article this thread was based off of.
 
how about you try to address the pros and cons of hardware vs software physics, using past and present products, as well as possible future scenarios? I'm sure it would be much more useful and relevant to the article this thread was based off of.

I already did. Please read the thread before making conclusions.
 
*Sigh* Here we go again, more flaming. Well, transistor count does not determine a chip's processing power or how many instructions it can do per second. Take a 7800 GTX and an X1800 XT, for example: the X1800 has a special instruction feature that enables every pixel and vertex engine to work almost every clock, so the performance matches the 7800 GTX. Not so strange.

Heh, personally I don't really want my GPU to do effects physics and simple collision detection like ragdolls (it can do ragdolls, but the ragdoll won't be able to hit you even though you can hit it, get it?). I would like physics to be processed on something like a PPU. I mean, on paper it would be nice, see? Like, I've got a bunch of ragdolls and other objects in my current viewing area and my FPS is 58; I put a nice little 'nade in the middle, then kaboom, FPS drops to only 50, get it? Nothing is added except for the explosion effects and the grenade model, since all the objects are already there. Try spawning 30 ragdolls and other breakable stuff in Half-Life 2 Garry's Mod, then put a grenade in the middle, and you will know what I really mean.
 
I think the limiting factor on physics simply lies in updating thousands of objects, both across the system bus and, more importantly, over the net. Most games deal with a few dozen objects and crap out with more.

I'm guessing the biggest growing market in PC games is MMOs or multiplayer, and I can't fathom keeping thousands of objects synchronized over the net. Physics in games is good for a few interactive objects at a time and for graphical effects. I really hope physics improves, but I think the bottleneck is everything but the math itself.
 
HERE
Getting Quake 3 to work required 20 AMD 1800s. Notice the superior lighting, shadows, reflections, and refraction.
I believe some special ray-tracing cards already exist.
I won't be surprised if real-time ray tracing becomes common this decade (these examples are already over two years old).

[image: http://graphics.cs.uni-sb.de/~sidapohl/egoshooter/screenshots/mutlipleReflectiveSpheres.JPG]
An old game, but seeing it raytraced is one of the coolest things I've ever seen.
[image: massive2.JPG]
 
Yes, nice old stuff, if you compare it with what is possible now: Crysis.

Now imagine if there were no GPU, only a simple rasterizer chip. You'd need a 40-Conroe render farm to do Crysis on the software reference device, emulating the shaders and pipelines.

RT & voxels need hardware acceleration, but the market has chosen the other method.

So I like the PPU solution; it means a lot more physics is possible in games.

And games will get yet a lot more complex.

//edit

Link please? It's a fictional example.
The point is, with a 90nm 300-million-transistor RT-VPU you would need one Conroe, and a way more advanced game would be possible than the old Q3, more like Q4 or Crysis, with a lot of extra units to push the eye candy up.
20 AMD CPUs for Q3 is not a commercial success.
 
I didn't want this post to be a mile long, but something you've said really knocked me off: "The real killer app, here, is clearly the prospect of destructible terrain. A popular subject of war FPS titles, for instance, is blowing up bridges or other buildings; every title I've seen has an instance of it. However, as you mentioned, just like the tripod-created craters in the "Follow Freeman" chapter, they're all, to put it that way, "pre-canned." (perhaps a better example would be the first campaign of Call of Duty 2, which used scripted destruction every few moments)

What would really enhance the believability of games is if such destruction could truly be handled in real-time, and be completely variable. In a word, what would really convince people would be a small-scale scientific-grade physics simulation on a card. "

No offense, but this is as stupid as it gets. In order not to script events in the game, you have to make an advanced AI. Now, in COD 2, for example, you are running through the trench while the (scripted) destruction takes place around you. Imagine writing the code that would allow a great number of your opponents to interact, change opinions and experiences, and THINK!!! They would then be able to spot you and tell the tank commander about it so he could fire a shell directly at you. This is something we've seen on a small scale in F.E.A.R. and other games, but I guess you wouldn't expect it to be possible with 100 (or more) artificially intelligent objects (enemies). Or, say you have a gigantic underground worm that's "chasing you". He's supposed to jump out at a certain place and kick something off (usually, to devastate something completely). You would NOT be able to destroy it yourself before that, with or without the physics card!!! If it weren't for the trigger zones in the games, they would not be possible at all. You just cannot build a perfect universe inside your computer and simply join in and interact.
Imagine the machine you'd have to have for this. Also, this opens the issue of enormous media capable of storing and transferring this data, and, of course, the sheer time and resources required to make such a game.

This post is more likely to be in Games section,but I couldn't help it. Sorry.