Nvidia puts their money on PhysX

Julian33

Distinguished
Jun 23, 2006
214
0
18,680
Hmm, well, I'd be tempted to say it's just the Inquirer, but there is photographic evidence.

This is just wild speculation, but perhaps it wouldn't be crazy to conclude that Nvidia is planning to buy out Ageia? It would make sense to include the PhysX chip on a graphics card, just as the 3D accelerator was eventually integrated onto 2D cards.

Of course it could just be another sensationalist Inquirer story!
 

shabodah

Distinguished
Apr 10, 2006
747
0
18,980
If I were Nvidia, even before the ATI/AMD merger stuff, I would have jumped on Ageia. It would give them an advantage that ATI's architecture would otherwise have. I can't understand why they've waited so long.
 

IcY18

Distinguished
May 1, 2006
1,277
0
19,280
I doubt it, since they've been working with Havok to develop their own physics processing... if anything, the Dell just came with the Ageia card...
 

kinneer

Distinguished
Sep 11, 2006
56
0
18,630
I personally think the concept of a dedicated physics card is dead. It makes more sense to offload the physics to a separate core in a multi-core CPU. More people are likely to have a multi-core CPU than a dedicated physics card.
 

Eurasianman

Distinguished
Jul 20, 2006
883
0
19,010
Agreed! Why not just pay $250 for a dual core that can do more than physics calculations? Let's see, I pay $250 for a PhysX card, and how many games support it? Price of a PhysX card > price of the games that support it.

Conclusion: PhysX is a waste of money until more games support it!
 

quantumsheep

Distinguished
Dec 10, 2005
2,341
0
19,790
Agreed! Why not just pay $250 for a dual core that can do more than physics calculations? Let's see, I pay $250 for a PhysX card, and how many games support it? Price of a PhysX card > price of the games that support it.

Conclusion: PhysX is a waste of money until more games support it!

Well, there are numerous differences in the architecture of CPUs and GPUs/PPUs that make it theoretically implausible to run physics calculations on a CPU. Someone with more knowledge of the subject could explain it better than I can, but I believe it has something to do with floating-point operations. I'm not entirely sure; I just know it's not really practical on a CPU.
 

quantumsheep

Distinguished
Dec 10, 2005
2,341
0
19,790
You obviously never saw the tornado demo of Alan Wake running on a Kentsfield. Basically, it's been designed to utilize multi-core CPUs, dedicating one or more cores (depending on how many you have) to the physics processing. Then again, they could have had a PPU card hiding somewhere inside the machine...

http://www.youtube.com/watch?v=DetnKgOxrSI

The reason a CPU cannot do physics as well as a dedicated PPU is that it cannot do as many floating-point calculations per second, which is basically what you need to calculate physics. If someone more knowledgeable than I comes along, they can explain the intricacies better.
 

kamel5547

Distinguished
Jan 4, 2006
585
0
18,990
You obviously never saw the tornado demo of Alan Wake running on a Kentsfield. Basically, it's been designed to utilize multi-core CPUs, dedicating one or more cores (depending on how many you have) to the physics processing. Then again, they could have had a PPU card hiding somewhere inside the machine...

http://www.youtube.com/watch?v=DetnKgOxrSI

The reason a CPU cannot do physics as well as a dedicated PPU is that it cannot do as many floating-point calculations per second, which is basically what you need to calculate physics. If someone more knowledgeable than I comes along, they can explain the intricacies better.

Right. Folding@Home said that folding on GPUs was much faster due to the huge gain in FLOPS over CPUs. However, they also said they cannot get that power out of nVidia cards due to architectural differences (I think they use the shaders for the calculations, of which the 19xx series has many more than the 79xx; I may be wrong on the exact specifics). This may mean it would make sense for nVidia to ally with Ageia if they do not want to add a huge number of shaders.

As far as running it on CPUs goes, if the calculations can be spread among cores, it may soon be possible. While a GPU can do more FLOPS than a single core (I think it's 2-3 times the number at the moment), a quad core should be able to match that performance by spreading the calculations across the other three cores (one core for the main game thread, three for physics).
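
For illustration, here is a minimal sketch of the split described above (one game thread handing the per-frame physics work to a few worker threads). Everything in it, including the Body struct, the integration step and the thread count, is a made-up assumption rather than code from any real engine:

```cpp
// Completely hypothetical sketch of "one core for the game thread, three for
// physics". Not taken from any real engine; Body and the integration step are
// invented for illustration.
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Body { float pos[3]; float vel[3]; };

// Integrate one slice of the bodies for a single timestep.
static void integrate_slice(std::vector<Body>& bodies, std::size_t begin,
                            std::size_t end, float dt)
{
    for (std::size_t i = begin; i < end; ++i)
        for (int k = 0; k < 3; ++k)
            bodies[i].pos[k] += bodies[i].vel[k] * dt;
}

// The game thread farms the physics work out to the spare cores, then waits.
static void physics_step(std::vector<Body>& bodies, float dt, unsigned workers)
{
    std::vector<std::thread> pool;
    const std::size_t chunk = bodies.size() / workers;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end   = (w + 1 == workers) ? bodies.size() : begin + chunk;
        pool.emplace_back(integrate_slice, std::ref(bodies), begin, end, dt);
    }
    for (std::thread& t : pool) t.join();   // sync before rendering the frame
}

int main()
{
    std::vector<Body> bodies(10000);         // zero-initialized test bodies
    physics_step(bodies, 1.0f / 60.0f, 3);   // e.g. three spare cores on a quad
    return 0;
}
```

Whether three general-purpose cores can actually match a dedicated PPU is a separate question; the sketch only shows that the per-frame work divides cleanly across cores.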
 

quantumsheep

Distinguished
Dec 10, 2005
2,341
0
19,790
You obviously never saw the tornado demo of Alan Wake running on a Kentsfield. Basically, it's been designed to utilize multi-core CPUs, dedicating one or more cores (depending on how many you have) to the physics processing. Then again, they could have had a PPU card hiding somewhere inside the machine...

http://www.youtube.com/watch?v=DetnKgOxrSI

The reason a CPU cannot do physics as well as a dedicated PPU is that it cannot do as many floating-point calculations per second, which is basically what you need to calculate physics. If someone more knowledgeable than I comes along, they can explain the intricacies better.

Right. Folding@Home said that folding on GPUs was much faster due to the huge gain in FLOPS over CPUs. However, they also said they cannot get that power out of nVidia cards due to architectural differences (I think they use the shaders for the calculations, of which the 19xx series has many more than the 79xx; I may be wrong on the exact specifics). This may mean it would make sense for nVidia to ally with Ageia if they do not want to add a huge number of shaders.

As far as running it on CPUs goes, if the calculations can be spread among cores, it may soon be possible. While a GPU can do more FLOPS than a single core (I think it's 2-3 times the number at the moment), a quad core should be able to match that performance by spreading the calculations across the other three cores (one core for the main game thread, three for physics).

But isn't that a huge waste of cores? Surely using an £80 graphics card as a PPU is a much more sensible option, or am I the only one thinking this?
 

flasher702

Distinguished
Jul 7, 2006
661
0
18,980
Right. Folding@Home said that folding on GPUs was much faster due to the huge gain in FLOPS over CPUs. However, they also said they cannot get that power out of nVidia cards due to architectural differences...
When and where did Folding@home say that? They said they haven't gotten it to work with nVidia GPUs yet: http://www.tgdaily.com/2006/09/29/folding_at_home_to_use_gpus/
There have been a number of posts by fanboys saying that, but I don't see anywhere that F@H has said it. So many "enthusiasts" seem to be completely clueless about how these projects are run. If you read all the information, you can see that they chose the ATI GPU over the nVidia GPU because it was easier to make it work (and I assume more people have them right now, which likely also influenced the decision). They could probably have coded it to run on a PhysX card with even more ease, and it would likely have kicked the crap out of the ATI card and the nVidia card combined, but almost no one has a PhysX card, so that would be pointless. There are also a number of scientific co-processing add-on boards that would put an entire "enthusiast" gaming machine to shame as far as floating-point power goes, but in a project like F@H, ease of programming and how many people will actually run it matter far more than raw speed. The SETI@home team admitted that the "screensaver" mode of their distributed computing client slowed processing down to as little as a quarter of the speed, but they continued to support it because it increased the number of people working on the project, thus improving the number of work units processed. There are many factors that go into programming projects like this, not just what is fastest for the job, because ATI GPUs are most certainly NOT the fastest thing by a long shot.


As for the OP: there was a PhysX card in it... so what? Was it demoing any programs that use it? Like another poster said, it probably just came with the Dell desktop. The INQ runs really bad stories: they got a picture of an nVidia machine that happened to have a PhysX card in it and seem to have made up the rest. Unless they were demoing a game that can actually use a PhysX card, this is a completely made-up story, pure rumor, no basis in reality. nVidia has its own physics processing projects already underway; unless those projects are suffering a serious lack of brain power, there is no reason for them to buy another company's technology. Until there is evidence of one of those two things, you may safely disregard this story.
 

kinneer

Distinguished
Sep 11, 2006
56
0
18,630
Agreed! Why not just pay $250 for a dual core that can do more than physics calculations? Let's see, I pay $250 for a PhysX card, and how many games support it? Price of a PhysX card > price of the games that support it.

Conclusion: PhysX is a waste of money until more games support it!

Well, there are numerous differences in the architecture of CPUs and GPUs/PPUs that make it theoretically implausible to run physics calculations on a CPU. Someone with more knowledge of the subject could explain it better than I can, but I believe it has something to do with floating-point operations. I'm not entirely sure; I just know it's not really practical on a CPU.

Well, can you give me an example of these theoretically implausible physics calculations? What differences make it impossible?

There are problems with using a dedicated physics card beyond raw number crunching. One is moving the data. Physics affects the interactivity of a game more than graphics does. Take a falling box: the way the box falls has a greater effect on the game than a change in the box's appearance. This means the CPU and the physics hardware need to communicate a lot more, which would mean a lot of data being transferred in both directions. There was a major problem like this with the old AGP port: upstream to the GPU was very fast, but downstream back to the CPU was very slow.

I am no expert, so I do not know what the bus transfer rates are like, but if PCIe transfers are optimised for one direction, then that is a potential bottleneck.

This would not be the case with a multi-core CPU.

Another possible reason: as more cores become available (Intel has announced quad-core), it becomes possible to dedicate more than one core to the physics.

Unless Ageia is bought out, they will find it hard to compete with Intel in processor development.
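
To put a rough number on that two-way traffic (all figures below are illustrative guesses, not measurements): if the CPU has to read back the state of every simulated object each frame, the required readback bandwidth is simply objects × bytes per object × frame rate.

```cpp
// Back-of-envelope readback traffic for a card-based physics engine.
// Every figure here is an illustrative guess, not a measurement.
#include <cstdio>

int main()
{
    const double objects      = 20000;  // rigid bodies the game simulates
    const double bytes_each   = 64;     // say: position + orientation + velocity
    const double frames_per_s = 60;

    const double bytes_per_s = objects * bytes_each * frames_per_s;
    std::printf("card-to-CPU readback: about %.0f MB/s\n",
                bytes_per_s / (1024.0 * 1024.0));
    // Roughly 73 MB/s with these numbers: trivial for a modern bidirectional
    // bus, but readback over AGP was notoriously slow, which is the worry above.
    return 0;
}
```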
 

Slobogob

Distinguished
Aug 10, 2006
1,431
0
19,280
But isn't that a huge waste of cores? Surely using an £80 graphics card as a PPU is a much more sensible option, or am I the only one thinking this?

That's no argument. Most software available doesn't use a CPU as intended; instead of actually calculating, the processor is juggling the stack like mad. So right now it may seem as if the GPU is better, but if Intel intends to go multi-core like crazy, I bet $/core is going way down, and in two years four cores will be cheaper than even an entry-level GPU. Then again, two years from now there might be GPUs integrated into the CPU, or at least available for CPU sockets...
 

chuckshissle

Splendid
Feb 2, 2006
4,579
0
22,780
All I see is that the Nvidia PC is equipped with an Ageia card. Whatever impresses the masses, I guess. So they're using a physics card, big deal. I don't think they're fooling people about it; maybe they just ran out of space, and that's why they placed the desktop behind the monitor.
 

hcforde

Distinguished
Feb 9, 2006
313
0
18,790
There are many factors that go into programming projects like this, not just what is fastest for the job, because ATI GPUs are most certainly NOT the fastest thing by a long shot.

However, the X1xxx chips from ATI are programmable, and some time ago ATI released a program that lets you encode video on the GPU, and it does it much faster than the CPU does.

Thanks
 

SuperG

Distinguished
Jul 21, 2006
28
0
18,530
All I see is that the Nvidia PC is equipped with an Ageia card. Whatever impresses the masses, I guess. So they're using a physics card, big deal. I don't think they're fooling people about it; maybe they just ran out of space, and that's why they placed the desktop behind the monitor.
I see a Dell PC with nV and Ageia hardware.
It would be really amusing if Dell had given them an ATI + Ageia rig to show off nV.
So funny, but it's a Dell. So what's the news?
 

flasher702

Distinguished
Jul 7, 2006
661
0
18,980
But isn't that a huge waste of cores? Surely using an £80 graphics card as a PPU is a much more sensible option, or am I the only one thinking this?

That's no argument. Most software available doesn't use a CPU as intended; instead of actually calculating, the processor is juggling the stack like mad. So right now it may seem as if the GPU is better, but if Intel intends to go multi-core like crazy, I bet $/core is going way down, and in two years four cores will be cheaper than even an entry-level GPU. Then again, two years from now there might be GPUs integrated into the CPU, or at least available for CPU sockets...

AMD open socket plans: http://www.simmtester.com/page/news/shownews.asp?num=9596

Open socket designs are good for bringing innovation to the market and good for consumers. GPU, PPU, FPGA, dual proc, Intel or VIA CPUs running on an "AMD" socket: these are all possibilities. An FPGA "co-processor" is the one that excites me the most. When an application is launched, it could program the FPGA, and you'd then have ~500MHz of low-overhead specialized processing power doing exactly what you want, no matter how much load you put on the rest of your system. I am currently testing ASIC code on FPGAs running at a mere 53MHz, and they can respond to and route 2GHz traffic on multiple channels simultaneously within a few nanoseconds. IMHO, specialized logic chips are the only way computers are going to get significantly faster from here on out. FPGAs capable of over 500MHz are already available. Specialized logic chips are extremely powerful, as you can see from a ~500MHz GPU having significantly more processing power than a dual-core ~3GHz CPU.
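
Purely as a sketch of how software might use such a socketed FPGA (every type, name and call below is invented; no real FPGA toolchain or API is being referenced): the application would load its bitstream at startup and fall back to the CPU if no reconfigurable part is found.

```cpp
// Invented sketch of "program the FPGA at application launch, otherwise fall
// back to the CPU". No real FPGA toolchain or API is being referenced here.
#include <cstdio>
#include <vector>

struct CoProcessor {
    bool present = false;                        // imagine probing the socket here
    bool load_bitstream(const char* /*path*/) { return present; }   // stub
    void run_kernel(std::vector<float>&) { /* offloaded work would go here */ }
};

static void cpu_fallback(std::vector<float>& data)
{
    for (float& x : data) x = x * x;             // same kernel on a general core
}

int main()
{
    std::vector<float> data(1 << 20, 2.0f);
    CoProcessor fpga;
    if (fpga.load_bitstream("physics_kernel.bit"))   // hypothetical bitstream name
        fpga.run_kernel(data);
    else
        cpu_fallback(data);
    std::printf("first element after the kernel: %f\n", data[0]);
    return 0;
}
```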

I don't think it's really feasible for high-end gaming GFX though, as those depend heavily on having tons of very fast RAM very close to the GPU with dedicated bandwidth, and that might not work well for a "socket" GFX solution if it relies on a single HyperTransport link and shares memory bandwidth with the rest of the system (but we're still talking much faster than current GFX cards that use system RAM). For mid-range GFX and many other applications it could work extremely well.

As far as it being a "waste" of cores: dual-cores are dropping in price and rapidly becoming the norm, and using a core the system already has instead of buying additional hardware is certainly a valid strategy. CPUs are very inefficient though, as they are designed to be able to do anything and require a lot of software overhead to accomplish most tasks, so they may simply not be fast enough, no matter how many extra cores you have, for some tasks (real-time GFX being an obvious example of something you wouldn't want to do with an "extra" core).
 

flasher702

Distinguished
Jul 7, 2006
661
0
18,980
This may be one reason why people believe nVidia is too slow: http://www.anandtech.com/video/showdoc.aspx?i=2849&p=3

"The situation for NVIDIA users however isn't as rosy, as while the research group would like to expand this to use the latest GeForce cards, their current attempts at implementing GPU-accelerated processing on those cards has shown that NVIDIA's cards are too slow compared to ATI's to be used." The article says that... but they don't quote anyone on the folding@home team as saying that, or any folding@home documents that say that.

The article I linked a few posts up references Stanford Associate Professor Vijay Pande as saying (roughly) that "the group has not been able to get the software to work on Nvidia chips." The two articles were posted within a day of each other. If you're completely out of touch with reality and ****-retentive, I suppose you could say that not working at all is much, much slower, but any sane person would quickly realize that the reason ATI is supported now and nVidia isn't is that it was easier to code for ATI. If it had been the other way around, I'm sure it would be nVidia cards running Folding@home, regardless of the number of shader units. If the Folding@home team had gotten the code to work on nVidia cards they would have released it; it's the only thing that makes sense, no matter how slow it was. They have Pentium II machines working on the project; they're not picky about where the GFLOPS come from. They haven't released it because they never got it to work. Maybe they had some trouble and "it's slower anyway" was one of the reasons they gave up (if they even did give up; F@H didn't say they had), but if that was the reason, I'm sure "and the G80 is coming out soon and should be faster and easier to code for" was the next.

More on the subject: a CPU can be used for any number-crunching task, but there is a lot of overhead involved, as you not only have to give the CPU the data, you have to tell it what to do with the data. With a dedicated logic chip, you give the chip the data and it does the same thing with it that it always does; it has transistors hard-wired to do complex equations without having to be told how. A CPU has to do the first step, write the result to cache, read the instructions for the next step, read back the data and do the next step, over and over (a very vague description, but largely accurate). Some algorithms are more inefficient in this scenario than others. A dedicated logic chip can do its algorithm in one pass (but there is a limit to how many algorithms it knows how to do, whereas a CPU can do anything, since the software feeds it the algorithm each time). That not only significantly increases the amount of work done per cycle but also greatly decreases latency, which is why such chips are often used in real-time applications like gaming GFX and real-time physics. That's why a 500MHz GPU can kick the crap out of a 3GHz dual-core CPU and a 53MHz FPGA can do things a general computer system could never dream of.
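
A crude way to see that clock-speed comparison in numbers (all figures below are invented for illustration, not real chip specs): peak arithmetic rate is roughly clock × number of parallel units × operations per unit per cycle, and a dedicated chip wins on the last two factors even at a fraction of the clock.

```cpp
// Illustrative peak-rate arithmetic only; none of these figures are real specs.
#include <cstdio>

static double peak_gflops(double clock_ghz, int units, int ops_per_unit_per_cycle)
{
    return clock_ghz * units * ops_per_unit_per_cycle;
}

int main()
{
    // A general-purpose dual-core: two cores, a handful of FP ops per cycle each.
    const double cpu = peak_gflops(3.0, 2, 8);
    // A dedicated chip: a sixth of the clock, but dozens of units each doing
    // wide multiply-adds every cycle, with no per-step instruction decode.
    const double dedicated = peak_gflops(0.5, 48, 8);

    std::printf("CPU peak: ~%.0f GFLOPS, dedicated chip peak: ~%.0f GFLOPS\n",
                cpu, dedicated);   // ~48 vs ~192 with these made-up numbers
    return 0;
}
```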
 
IMO, the reason nV would look at Ageia is to bolster their physics knowledge base and R&D versus ATi, to apply to their GPU-based solution, not to promote, adopt or continue the PPU.

Of course, people are making a lot of noise about a PC that had the PPU in it, yet we don't know if it was actually used in any way; it may have been in the test system for other stuff prior to the expo/demo.
 
That's why a 500MHz GPU can kick the crap out of a 3GHz dual-core CPU and a 53MHz FPGA can do things a general computer system could never dream of.

Yeah, FPGAs are great, but they wouldn't be as fast for the mundane stuff, they'd be more expensive than an ASIC once the requirements were finalised, and they'd likely draw more power.

What would be great, IMO, is adding a good-sized FPGA alongside the ASIC (CPU/VPU) in order to be much more open-ended with options/features.
 

quantumsheep

Distinguished
Dec 10, 2005
2,341
0
19,790
IMO, the reason nV would look at Ageia is to bolster their physics knowledge base and R&D versus ATi, to apply to their GPU-based solution, not to promote, adopt or continue the PPU.

Of course, people are making a lot of noise about a PC that had the PPU in it, yet we don't know if it was actually used in any way; it may have been in the test system for other stuff prior to the expo/demo.

I see that ATI's physics engine is much more advanced than nVidia's at the moment, so this is probably why.
 

IcY18

Distinguished
May 1, 2006
1,277
0
19,280
IMO, the reason nV would look at Ageia is to bolster their physics knowledge base and R&D versus ATi, to apply to their GPU-based solution, not to promote, adopt or continue the PPU.

Of course, people are making a lot of noise about a PC that had the PPU in it, yet we don't know if it was actually used in any way; it may have been in the test system for other stuff prior to the expo/demo.

I see that ATI's physics engine is much more advanced than nVidia's at the moment, so this is probably why.

Uh? Where did you see this?
 

ElMoIsEviL

Distinguished
But isn't that a huge waste of cores? Surely using an £80 graphics card as a PPU is a much more sensible option, or am I the only one thinking this?

That's no argument. Most software available doesn't use a CPU as intended; instead of actually calculating, the processor is juggling the stack like mad. So right now it may seem as if the GPU is better, but if Intel intends to go multi-core like crazy, I bet $/core is going way down, and in two years four cores will be cheaper than even an entry-level GPU. Then again, two years from now there might be GPUs integrated into the CPU, or at least available for CPU sockets...

AMD open socket plans: http://www.simmtester.com/page/news/shownews.asp?num=9596

Open socket designs are good for bringing innovation to the market and good for consumers. GPU, PPU, FPGA, dual proc, Intel or VIA CPUs running on an "AMD" socket: these are all possibilities. An FPGA "co-processor" is the one that excites me the most. When an application is launched, it could program the FPGA, and you'd then have ~500MHz of low-overhead specialized processing power doing exactly what you want, no matter how much load you put on the rest of your system. I am currently testing ASIC code on FPGAs running at a mere 53MHz, and they can respond to and route 2GHz traffic on multiple channels simultaneously within a few nanoseconds. IMHO, specialized logic chips are the only way computers are going to get significantly faster from here on out. FPGAs capable of over 500MHz are already available. Specialized logic chips are extremely powerful, as you can see from a ~500MHz GPU having significantly more processing power than a dual-core ~3GHz CPU.

I don't think it's really feasible for high-end gaming GFX though, as those depend heavily on having tons of very fast RAM very close to the GPU with dedicated bandwidth, and that might not work well for a "socket" GFX solution if it relies on a single HyperTransport link and shares memory bandwidth with the rest of the system (but we're still talking much faster than current GFX cards that use system RAM). For mid-range GFX and many other applications it could work extremely well.

As far as it being a "waste" of cores: dual-cores are dropping in price and rapidly becoming the norm, and using a core the system already has instead of buying additional hardware is certainly a valid strategy. CPUs are very inefficient though, as they are designed to be able to do anything and require a lot of software overhead to accomplish most tasks, so they may simply not be fast enough, no matter how many extra cores you have, for some tasks (real-time GFX being an obvious example of something you wouldn't want to do with an "extra" core).

Actually, Flasher, nVIDIA GPUs ARE too slow for GPGPU calculations.

Mike Houston: Stanford University Folding@Home Project

nVIDIA GPUs have too many limitations. First of all, they're 256-bit internally (vs. 512-bit for ATi X18/19K VPUs), which normally would not be a problem, but nVIDIA 7x00 GPUs have a 64K limit on shader length. Thus many passes need to be made in order to get the work required by Folding@Home (or physics) done. The more passes, the slower it is.

ATi X18/19K VPUs support unlimited shader lengths (they can do all the work in a single pass).

Another aspect is the dedicated branching unit found on ATi X18/19K VPUs; these units are used extensively by GPGPU applications. nVIDIA's 7x00 parts do not support such a feature, so once more, extra passes are required for branching.

And last but not least, pure shader power: the ATi X19K series has more shader power than ANY current nVIDIA GPU (by 2x or more).

All of this makes the nVIDIA 7x00 too slow for hardcore GPGPU applications.

Read up on it.

BTW, an ATi X19K > the AGEIA PhysX in GPGPU applications as well, if dedicated to such a task.
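
To get a feel for why the pass count matters (the numbers below are made up purely to show the shape of the penalty, not benchmarks): every extra pass adds fixed setup and intermediate read/write cost on top of the same arithmetic, so effective throughput drops as the passes pile up.

```cpp
// Made-up model of the multi-pass penalty: the same arithmetic, plus a fixed
// per-pass cost for setup and for writing/reading intermediate results.
#include <cstdio>

int main()
{
    const double raw_gflops = 200.0;   // illustrative peak arithmetic rate
    const double pass_cost  = 0.20;    // per-pass overhead, as a fraction of the
                                       // single-pass arithmetic time
    const int pass_counts[] = { 1, 2, 4, 8 };

    for (int passes : pass_counts) {
        const double effective = raw_gflops / (1.0 + passes * pass_cost);
        std::printf("%d pass(es): ~%.0f GFLOPS effective\n", passes, effective);
    }
    // With these numbers: roughly 167, 143, 111 and 77 GFLOPS as passes pile up.
    return 0;
}
```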