A couple of things I've been meaning to say.
First of all, StrangeStranger, you keep saying that GPUs are really good at calculations, and that physics is just calculations, so GPUs should be really good at physics (although ask yourself: what processing part of a computer ISN'T good at some form of calculation?). Calculations are calculations, after all, right? Fine, then explain to me the OTHER thing you keep saying (no, not "ur stupid read the article"): this picture here:
You keep saying (not an actual quote, but I don't want the stupidity to look like it's my idea), "Look, see, if you read the article you'd know that the GPU has lots of extra processing power; see how the pixel and vertex shaders aren't always being used?" Here's what I want you to explain: if calculations are calculations are calculations, then WHY can't they simply balance the load between the vertex shaders and the pixel shaders? Obviously you can't just stick calculations anywhere and expect them to get done; there are optimizations to take into consideration (my spoon is good for soup and my fork is good for salad; switching their roles doesn't work so well, if it works at all). Think about it: if a GPU is at least 10 times as powerful as a CPU in terms of raw processing power, why don't they just stick 4 GB of DDR/DDR2 on a graphics card and offload all calculations onto it like a CPU? Maybe it's not so simple; maybe there are software/hardware/firmware things to take into consideration.
Speaking of this free processing power, which comes from one of the pixel or vertex shaders sometimes being fully utilized while the other twiddles its virtual thumbs: if I understand the Unified Shader concept, it combines the two, which would essentially nullify that point. It's funny, now that I think about it, this relates back to me being an "idiot" and digging myself into a deep hole of stupidity. All I asked was that graphics card companies work on exploiting that "hidden power" for graphics first. And you know what? If I'm given the choice between a Unified Shader architecture and what we have now, where I'm forced to lose potential processing power because the VS and PS are separate, I'm going to choose the Unified Shader stuff (which I believe ATi is using in the R600 chip and used in the Xbox 360 chip). Maybe I'm crazy for wanting my graphics card's power to be used for graphics, but it seems ATi at least is crazy in the same way, so I don't really mind.
So what have I tried to establish thus far? First, in current processor architectures, not every piece of silicon can do everything as well as the next one; if it could, we wouldn't have multiple processors with specialized uses in our computers. Second, the physics processing power that's sitting dormant in current graphics cards will hopefully be put to use in the next generation of GPUs.
One thing I realized the other day but didn't get around to posting, regarding random numbers in physics: first of all, one point of realistic physics would be to eliminate a great deal of randomness by simulating, on a simplified level, "exactly" what happens (in a current game, you might program 3 different ways a window or vase can shatter, just to add a bit of variety without complex calculations). If all 30 players' computers on a map get informed that a grenade was thrown under a car, then, assuming the physics are done properly, the explosion should be the same each and every time, because each computer can calculate the trajectory of the grenade, its position and speed when it explodes, the resulting concussive force on the car above it, and whether that force is enough to blow off any parts.
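To make that concrete, here's a minimal sketch (hypothetical names, grossly simplified, in C++ just to have something to point at) of why identical inputs give identical results: every client runs the same fixed-timestep update on the same starting numbers, so every client lands on the same answer.

    // Hypothetical sketch: every client runs the same deterministic update,
    // so identical inputs always produce identical results.
    struct Grenade {
        float x, y, z;    // position
        float vx, vy, vz; // velocity
    };

    // One fixed-timestep update: no randomness, no machine-dependent state.
    void step(Grenade &g, float dt) {
        const float gravity = -9.81f;
        g.vy += gravity * dt;
        g.x  += g.vx * dt;
        g.y  += g.vy * dt;
        g.z  += g.vz * dt;
    }

    // If all 30 clients call simulate() with the same throw, they all compute
    // the same landing spot, so the same car parts get blown off everywhere.
    Grenade simulate(Grenade g, int steps, float dt) {
        for (int i = 0; i < steps; ++i)
            step(g, dt);
        return g;
    }

(In reality you'd also have to worry about every machine doing its floating-point math identically, but the principle stands: same inputs, same rules, same explosion.)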
However, if you want random numbers, this shouldn't be hard at all to do. You see, when a computer makes a random number, it's not really random. A computer can't just invent a number; it either samples some physical source of noise or, far more commonly, runs a deterministic formula on a starting value to produce numbers that merely look random. So say we have a game and we want some random stuff going on. The game could ship with a pre-programmed table of 10,000 random digits, 0 through 9. When a multiplayer level starts up, every computer would be told to start at a certain position in the table, and every time a random number was needed, the clients could all pull it from that table. Maybe they pick every 7th number, or move forward x spaces (where x is the sum of the current random number and the one before it in the table). The end result is that the computers all share a list from which "random-enough" numbers can be picked, yet there's still a formula behind it, so every computer will always end up with the same result.
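Here's a rough sketch of that table idea (hypothetical and simplified, again in C++): every client holds the same table and steps through it with the same rule, so everyone draws the identical sequence of "random" numbers.

    #include <array>

    // Hypothetical sketch of a shared random table: every client has the same
    // digits and the same walking rule, so the sequence is identical everywhere.
    struct SharedRandom {
        std::array<int, 10000> table; // same pre-programmed digits on every client
        int pos;                      // starting position handed out when the level loads

        int next() {
            const int n = static_cast<int>(table.size());
            int value = table[pos];
            int prev  = table[(pos - 1 + n) % n];
            // Move forward by the sum of this number and the one before it,
            // as described above; any rule everyone agrees on works.
            pos = (pos + value + prev) % n;
            return value;
        }
    };

In practice you'd probably just hand every client the same seed for a pseudo-random number generator instead of shipping a whole table, but the effect is the same: one shared starting point, one deterministic rule, and every computer ends up with the same "random" result.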
A final point, to the people who keep insisting that Cell Factor is currently just a demo of "cloth tearing" and "liquid spilling": did you not find destructible objects interesting in the slightest? Somebody tries to take cover behind something, and you blow it to pieces; not only does this add to the gameplay by forcing players to make intelligent choices about where to hide (YES behind the 2' thick concrete wall, NO behind the plywood), but it also populates the game with massive amounts of truly physics-driven rubble. It appears and behaves the way you would expect it to, as opposed to current games where a grenade or gunshot makes sparks and sprays of dust that promptly disappear. Destructible objects are very cool in my opinion, as is the sheer number of objects in general. Sure, it's neat to be able to hide behind a single box, but not nearly as neat as hiding behind a stack of boxes. In earlier times, the stack of boxes would be treated as a single immobile object, probably to reduce physics computations. With the ability to simulate huge numbers of objects (throw a gravity grenade to see what I mean), it's feasible to make any and all objects physics-driven, adding to the dynamics of gameplay. This also goes hand-in-hand with destructible objects, because if you can use a grenade to turn a stack of crates and a car into thousands of independently animated fragments, you had better be able to calculate them all. Granted, those could be done in software-only mode (as I did myself), but they're taking processing power from somewhere, and I have a feeling I could add more bots if my CPU weren't bogged down with that.
In the end, I'd have to say I hope it gets standardized like DirectX, which should make everyone happy: the pro-Ageia people would get their cards, the pro-GPU people would get theirs, and everybody would benefit from the competitive drive to crank up performance on the cards, as well as a standard way to compare them.