Carmack: Hardware Physics A Bad Idea

I love all these people who can't code thinking they know more than Carmack, or that he is somehow over the hill. He is a legend and still designs good game engines. What have you done lately? That's what I thought...
 
So dedicated graphics cards were a step backwards too?

Yeah, screw dedicated hardware! I mean, what have those discrete graphics acceleration cards really gotten us anyway? Right? And why do we really need a separate south bridge anyway? Just another part that can break, in my book! And all hail Carmack!

You guys are missing the point, and obviously don't know much about hardware. Having a graphics card and specifically targeted APIs like DirectX and OpenGL was necessary for a while because they were the best option available at the time. But now that hardware is getting very, very fast and we have the option to put 2, 4, 8, 16, etc. cores onto one chip, you have to take a different approach to maximizing how you use the hardware.
With our best performance gains in the future coming from expanding horizontally, making the computer comprise thousands or hundreds of thousands of processing units or more, it doesn't make sense to have a CPU in one place connected over a relatively slow PCI Express bus to a graphics processor (which also happens to be a vector coprocessor) in another place, with a physics processor in yet another location in the same situation. You want to integrate it all as best you can and make it all general purpose, so that the bus speeds between the different areas are fast and you can assign resources with more control to maximize usage (see the sketch below). That's how the industry is moving with new CPUs and GPUs, and even AMD announced quite a while ago its Fusion project to try to house the GPU under the same roof as the CPU.
So whether or not graphics cards and such were good in the past is irrelevant. The hardware has changed, our capabilities have changed, and we have to adapt to continue to make the most out of what we can physically produce.
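As a rough illustration of the post above: a minimal sketch, in plain C++, of a physics step fanned out across whatever CPU cores are available instead of being shipped to a dedicated card over a bus. The Particle layout, the chunking scheme, and the semi-implicit Euler step are assumptions made for illustration, not any engine's actual code.

[code]
// Minimal sketch: splitting a simple physics step across CPU worker threads.
// Particle layout, chunk sizes, and the integration scheme are illustrative only.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

struct Particle {
    float x, y, z;    // position
    float vx, vy, vz; // velocity
};

// Integrate one chunk of particles under gravity (semi-implicit Euler).
void integrate_chunk(std::vector<Particle>& particles,
                     std::size_t begin, std::size_t end, float dt) {
    const float gravity = -9.81f;
    for (std::size_t i = begin; i < end; ++i) {
        Particle& p = particles[i];
        p.vy += gravity * dt; // apply gravity to vertical velocity
        p.x  += p.vx * dt;
        p.y  += p.vy * dt;
        p.z  += p.vz * dt;
    }
}

// Fan the work out over however many hardware threads are available,
// rather than shipping it across a bus to a dedicated physics card.
void physics_step(std::vector<Particle>& particles, float dt) {
    unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (particles.size() + n_threads - 1) / n_threads;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n_threads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end   = std::min(particles.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(integrate_chunk, std::ref(particles), begin, end, dt);
    }
    for (std::thread& w : workers) w.join();
}

int main() {
    std::vector<Particle> particles(100000, Particle{0, 100, 0, 1, 0, 0});
    for (int frame = 0; frame < 60; ++frame) {
        physics_step(particles, 1.0f / 60.0f); // one simulated second at 60 Hz
    }
    std::printf("particle 0 ended at y = %.2f\n", particles[0].y);
    return 0;
}
[/code]

With this layout the data never leaves system memory; adding cores just shrinks each chunk, which is the "expand horizontally" argument in a nutshell.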
 
[citation][nom]VioMeTriX[/nom]I think Carmack is right to a degree, a separate PPU unit does suck, but what Nvidia did by integrating it into the video card was the way it should have always been, and maybe with a combo of using an available CPU core that isn't being used by a game or application it would maximise what we can do with physics without the need to spend extra money, use an available slot and consume more power[/citation]

Yeah, ten bucks says he uses Nvidia cards with PhysX.
 
PhysX is still non-essential for PC gaming. Most modern games can do without it and can easily saturate any high-end card with 3D rendering alone. Hopefully the next generation of mid-range GeForce cards will be able to run games like Crysis at 1920x1200 @ 30 fps on max settings. Until then I'm still holding off on playing it. I've got the original and Warhead but am waiting for my hardware to catch up.
 
Ahem, some of us like to play simulation games (flight sims, combat sims [Arma 2, BE]), and I can tell you that physics has two "parts":

- Decision making. If you have to calculate the trajectory of a bullet and its ricochets in a faithful way, you just can't send that to a place outside the processor (see the sketch below).

- Cosmetic. The sun breaks through the clouds and you can see the dust particles shining and slowly falling to the ground. This part needs physics to look really right, but since it doesn't involve decision making, it can be left to a discrete solution, like sound is.
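To make the "decision making" half above concrete, here is a minimal sketch (plain C++, same style as the one earlier in the thread) of that kind of calculation: a bullet path under gravity with rough air drag and a ricochet check against flat ground. The timestep, drag factor, and restitution values are made-up assumptions; the point is that the result decides gameplay outcomes, so it has to run deterministically next to the game logic rather than be treated as an optional offloaded effect.

[code]
// Minimal sketch of "decision-making" physics: a bullet trajectory with gravity,
// simple air drag, and a ricochet off flat ground. The constants (drag factor,
// restitution, timestep) are assumptions for illustration, not tuned values.
#include <cmath>
#include <cstdio>

struct Vec3 {
    double x, y, z;
};

int main() {
    Vec3 pos{0.0, 1.5, 0.0};        // muzzle position, metres
    Vec3 vel{800.0, 10.0, 0.0};     // muzzle velocity, m/s
    const double dt = 0.0005;       // small fixed timestep for stability
    const double gravity = -9.81;
    const double drag = 0.0004;     // assumed drag factor
    const double restitution = 0.4; // assumed energy kept on ricochet

    int ricochets = 0;
    double t = 0.0;
    while (t < 3.0 && ricochets < 2) {
        // Drag opposes velocity; gravity pulls down.
        double speed = std::sqrt(vel.x * vel.x + vel.y * vel.y + vel.z * vel.z);
        vel.x -= drag * speed * vel.x * dt;
        vel.y -= drag * speed * vel.y * dt;
        vel.z -= drag * speed * vel.z * dt;
        vel.y += gravity * dt;

        pos.x += vel.x * dt;
        pos.y += vel.y * dt;
        pos.z += vel.z * dt;

        // Ricochet off the ground plane y = 0: reflect and lose some energy.
        if (pos.y <= 0.0 && vel.y < 0.0) {
            pos.y = 0.0;
            vel.y = -vel.y * restitution;
            vel.x *= restitution;
            vel.z *= restitution;
            ++ricochets;
            std::printf("ricochet %d at x = %.1f m, t = %.3f s\n",
                        ricochets, pos.x, t);
        }
        t += dt;
    }
    std::printf("final position: x = %.1f m, y = %.2f m\n", pos.x, pos.y);
    return 0;
}
[/code]

The cosmetic half (dust, debris sparkle) can tolerate being computed elsewhere and even dropped under load, which is why it is the part that suits a discrete accelerator.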
 
PPUs are dead. Physics processing isn't. As Carmack says, multicore CPUs can more than handle physics processing, but GPGPUs will soon take the crown on that front.
 
I got sucked into all the hype of the Ageia (or however the hell you spell it) PPU when it first came out in late 2006 or 2007. There was one game that had just come out, made specifically to showcase the difference in environmental effects using the PPU, and I'll admit it looked promising, but I was put off at having to run it on XP, since 64-bit support was (and, from what I read in one of the posts, still is) lacking. Using an AMD CPU you get performance gains of 17-25% running 64-bit over 32-bit, so that was enough to make me want to return the POS, but the fact that you actually saw some serious performance losses in other games was just... offensive.


I run 4870 X2s in CrossFire and happened across a refurbished Nvidia 9000-series card, which is halfway decent, for under $100, and got it just to see what it could do in Win7, since Win7 isn't limited to a single GPU driver for the OS like Vista. Right off the bat, having to pull one of the 4870 X2s was a major downside, and the only real performance gains I saw were in benchmarks and Crysis. I'm an obsessive overclocker at times, I'll admit, but big meaningless benchmark scores hold little interest for me, and Crysis lost its allure once I could run it maxed out at 1920x1200, beyond using it as a stability test on new overclocks.

ATI cards at some point in the past 6-12 months finally began supporting the Havok physics engine, and with each driver release frame rates have only improved for me. I don't think hardware-based physics is a bad idea so much as poorly implemented and even more poorly supported by software.

As for arrays of CPUs replacing the GPU (Larrabee), their 16-core effort has only been able to match the performance of a 9800 GTX while sucking up 300 watts of juice. Not exactly a step forward so much as a step to the far left.
 
@DeadlyPredator: NO, we don't like "quick and dirty." Speak for yourself! WE like elegant solutions.
 