Carmack: Hardware Physics A Bad Idea

Wow, this is so stupid. First, he is a programmer, and as programmers we are very lazy and prefer quick and dirty solutions, like emulating everything. Yes, Havok is very nice because you can compute all physics on your CPU, but at a very big cost: no matter how well optimized it is and how many cores you have, a PPU will pwn Havok on both cost and performance. Intel owns Havok, so they want you to buy oversized and overpriced CPUs from their monopoly even if a small, cheap PPU could do the job. That's why it's very bad that Intel acquired Havok. So why not just throw all our GPUs in the garbage and replace them with crazy 10 GHz CPUs to compensate? We must focus on the GPU and GPGPU because they are the key to achieving computing power at low cost for the masses. OpenCL is very promising.
 
Hardware physics acceleration sounds great on paper, but I don't know. I would actually say that Carmack has a lot of expertise in this field, as he is one of the most highly regarded game programmers. I do believe him, since CPUs still have a lot of life left in them, and with the advent of multicore processors I think physics can still be calculated on the CPU just fine. I think NVIDIA may be overhyping physics on the GPU ... then again, maybe NVIDIA wants to move more units, so they keep pushing for physics on GPUs? Both AMD and Intel back Havok; the only company that really backs AGEIA here is NVIDIA. Sure, Havok is owned by Intel and NVIDIA owns AGEIA, but I think there's a reason why both AMD and Intel, two powerhouses in the CPU industry, back Havok, which does perfectly fine for physics on the CPU. Most games out there are Havok-enabled rather than PhysX-enabled, so I think NVIDIA is just trying to overhype things.
 
I'm going to get thumbed down to hell here but this just has to be said.

What year does Carmack think we're in? 2007? Hardware physics never took off and never will, and the one company that tried it got bought out and its IP was integrated into mainstream video cards.

It's really sad to see a person like Carmack saying these things; it makes him look foolish and out of touch, which is a far cry from the groundbreaking and innovative games this guy once made.
 
In an era where netbooks sell for $300, we still see developers obsessing over the extra cost to the consumer of one more piece of hardware (i.e. a $60 physics card).

You know, honestly, I like the idea that you can do some quick collision physics with Ageia. Granted, the lack of x64 support is a real problem (I hear this could be fixed?). However, to think the GPU will replace the PPU in the immediate future is to ignore the fact that customers are always clamoring for better-looking graphics. The upside of putting physics on the GPU is that you get higher bandwidth, since you wouldn't be using the PCI bus as often. However, given how cheap physics cards are at the moment, I would argue the PPU is still needed, since on the PPU/CPU side you can have more branching.

Branching is really the key here, since to do anything interesting you need random access. Random access allows a hybrid of collision and custom physics, with the emphasis chosen for the specific genre, which is a good thing since we are nowhere near real-world physics (try doing a real-world black hole simulation).

Now, on the flip side, the ray tracing folks can argue that you want ray tracing on the CPU (with its branching) and simple physics on the GPU. To that I say: good luck sending all that data across the PCI Express bus along with the HD frames that already have to cross it (1920x1080 x 32+ bits x 30 fps eats away at the bus very quickly).
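For the record, here's the back-of-the-envelope math on those numbers (a minimal sketch; the 32 bits per pixel and 30 fps are just the figures quoted above):

[code]
#include <stdio.h>

int main(void)
{
    /* Raw HD frame traffic using the figures quoted above:
       1920x1080 pixels, 32 bits per pixel, 30 frames per second. */
    const double bits_per_frame    = 1920.0 * 1080.0 * 32.0;        /* ~66.4 Mbit   */
    const double bits_per_second   = bits_per_frame * 30.0;         /* ~1.99 Gbit/s */
    const double mbytes_per_second = bits_per_second / 8.0 / 1e6;   /* ~249 MB/s    */

    printf("per frame : %.1f Mbit\n", bits_per_frame / 1e6);
    printf("per second: %.2f Gbit/s (~%.0f MB/s)\n",
           bits_per_second / 1e9, mbytes_per_second);
    return 0;
}
[/code]

That's roughly 250 MB/s of frame data alone, before any physics state gets shipped back and forth.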

In sum, it would be nice if the implementations of all these game-related SDKs were cleaner, but hey, I'll take what I can get.
 
Carmack is OK, but have you guys noticed the developers talking up CPUs over GPUs? I think it's just easier for them to code for, and saving money is the bottom line. I think it would be taking a step backwards to go CPU-only some day.
 
Geez, people, somebody ASKED CARMACK A QUESTION. He didn't bring it up. He didn't have any slides prepared. He spent like one minute talking about it. What was he supposed to say? "I can't answer that because the doofuses on THG are gonna hate me for it"?

And if you'd watch the video that was linked for your convenience, or just read the corresponding snippet in the article, you'd know that JC didn't bash physics, but Ageia. He didn't believe - rightfully so, I think - that separate hardware was the way to go, nor did he believe that Ageia's PhysX card was a good product or that the company had a solid game plan other than trying to get acquired.
 
I don't understand why he even bothered to state this; that dedicated physics cards are a bad idea should be pretty obvious to anyone who knows hardware. That's not to say that having hardware to run physics is bad, but the current trend is to increase the number of cores in our CPUs and the amount of general-purpose processing in our graphics cards. As the advent of CUDA, OpenCL, and DirectCompute shows, AMD, NVIDIA, and now even Intel with the Larrabee hardware will eventually integrate all processing (general purpose, graphics, sound, physics, etc.) onto one chip, where you can utilize all of the computer's processing resources more freely and alleviate some of the bandwidth issues that currently occur over the PCI Express bus when using the video card as a vector coprocessor.

Eventually it will all be in the same place, either as a bunch of configurable hardware or as a series of general-purpose pipelines with vector coprocessors and some configurable hardware, all in one package. So the idea of adding extra hardware for the specific purpose of physics processing, further separating hardware instead of working toward the ultimate solution of integration, doesn't make much sense. Carmack is right.
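To make the "video card as a vector coprocessor" point concrete, here's a minimal OpenCL host-side sketch in C (the kernel name, array size, and stripped-down error handling are just made up for illustration). The buffer copies are exactly the PCI Express round trips that cause the bandwidth issues mentioned above.

[code]
/* Minimal sketch: using the GPU as a vector coprocessor through OpenCL.
   The data has to cross the PCI Express bus in both directions. */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void vadd(__global const float *a,\n"
    "                   __global const float *b,\n"
    "                   __global float *c) {\n"
    "    int i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    enum { N = 1 << 20 };
    size_t bytes = N * sizeof(float);
    float *a = malloc(bytes), *b = malloc(bytes), *c = malloc(bytes);
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    cl_platform_id plat; cl_device_id dev; cl_int err;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vadd", &err);

    /* Host -> device copies: this traffic crosses the PCIe bus. */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, a, &err);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, b, &err);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, bytes, NULL, &err);

    clSetKernelArg(k, 0, sizeof(cl_mem), &da);
    clSetKernelArg(k, 1, sizeof(cl_mem), &db);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dc);

    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);

    /* Device -> host copy: back across the bus again. */
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, bytes, c, 0, NULL, NULL);

    printf("c[42] = %f\n", c[42]);  /* expect 42 + 84 = 126 */

    clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dc);
    clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    free(a); free(b); free(c);
    return 0;
}
[/code]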
 
Why did he bother to say it? BECAUSE HE WAS ASKED. Is it really that hard to figure out? He is not years behind; the person asking the question was. Read th_at's reply three posts up - that sums it up nicely. I wish people would learn to read the article.
 
[citation][nom]JeanLuc[/nom]What year does Carmack think we're in? 2007? Hardware physics never took off and never will, and the one company that tried it got bought out and its IP was integrated into mainstream video cards. It's really sad to see a person like Carmack saying these things; it makes him look foolish and out of touch, which is a far cry from the groundbreaking and innovative games this guy once made.[/citation]

I don't understand your point. In essence, you're agreeing with everything he stated, yet you bash him for saying it?
 
Most people posting here don't seem to realize that Havok and other CPU-based physics engines can't calculate two-dimensional continuous non-rigid-body movement. The Ageia PPU can, and so can PhysX-enabled NVIDIA GPUs. The same goes for dynamic liquid simulation.

However, there are two questions left. The first is WHY and HOW would you want those physics features in the game? The second is how do you fit the PPU into the current business model?

A separate PPU can cut development costs dramatically. You no longer need to pay animation artists hefty sums for effects. Just do it in real time and it will look spectacular and blow up differently each time.

But that only sounds good if most people have a PPU, and that's not the case. For companies focusing on consoles and the iPhone, it doesn't make sense at all. NVIDIA needs to port PhysX to practically all OpenCL-enabled hardware and persuade game developers to adopt its model.

One example is Mirror's Edge. The PC version with PhysX looks great, while the console version looks banal at best. However, it doesn't influence the gameplay at all.
 
He's right, and who the hell wants to learn a different API for each type of hardware? To be honest, I think the idea of 'hardware graphics' will soon come to an end too. And an AI card?? THEY ARE ALL THE SAME THING: highly parallelized vector floating-point processors! I think the ideal future solution will be one CPU, and then any number of "parallelized vector floating-point processors", basically GPGPUs. (Wow, sounds a lot like the Cell processor, doesn't it... maybe they were onto something? Sure, they didn't get it right, but they were on the right track.)
 
Mirror's Edge's use of PhysX was... anemic, to be precise. Honestly, the only things I noticed were the flags and the... intense lag bugs caused by PhysX, even on a high-end machine.

After playing the game on an ATI and an NVIDIA system, I can honestly tell you that it was the exact same experience.

You can brag all you want, but PhysX is nothing more than a marketing tool with absolutely no use. If NVIDIA and ATI can't offer a solution compatible with both platforms, the new console war will be a mess, and devs will simply find an alternative if their port to the PlayStation or Xbox is too difficult.
 
Carmack's just a software *itch ... so he's got real problems with the real world. Of course hardware is ALWAYS the better solution, and that fact scrapes Carmack's pale, thin, reedy hide.
 
[citation][nom]Airborne11b[/nom]And to answer Upendra, no one buys PPUs because all GeForce cards from the 9 series and up come with built-in PhysX support. And those have been out well over a year now.[/citation]

Get your facts right, Airborne11b. PhysX has been a major addition to NVIDIA's video card lineup. Go here and see for yourself before you state something. You don't need to buy a PPU to have PhysX work; today you can get a second-hand 8800 GTS/GT for anywhere from $70 to $130 and run it as a second card with a Radeon as your main GPU. It works.
 
No doubt Carmack is probably a game programming genius; however, it seems to me he has been resting on the laurels of past innovations and basking in the now-old glory of Doom and Quake. I have not seen anything interesting come out of the id labs for years, and they have been overshadowed by games like Far Cry 2 and even Russian developers like those behind the S.T.A.L.K.E.R. series. All I see them do recently is develop game ports for the iPhone, whoopee. I think Carmack has gotten too rich and fat, lost perspective on the industry as a whole, and is in no position to make blanket statements about any one game development technology. I think Carmack is more likely to speak against any tech that is not Apple-friendly, since he now appears to be deep in their pockets.
 
I think Carmack is right, for once. Going with separate physics hardware is a huge step backward. You can't just take part of the graphics engine and make it run on separate hardware. What's next? A Lighting Processing Unit? A Shading Processing Unit? A Texture Processing Unit? That's just bad engineering. The future is in multi-core general-purpose CPUs.

On a side note, I pity the Average Joes out there who think Carmack has no clue what he's talking about. You might not like the way he thinks or what he says, but he definitely knows more about the subject than you and I do.
 
Yeah, screw dedicated hardware! I mean, what have those discrete cards for graphics acceleration really gotten us anyway? Right? And why do we really need a separate south bridge anyway? Just another part that can break, in my book! And all hail Carmack! I mean, maybe dedicated graphics adapters made Quake II everything that it is today, but that doesn't mean you shouldn't piss all over any hardware solution you haven't been able to capitalize on yet.
 
Who cares what John Carmack says... he's been riding on the same games he designed when gaming was just starting up.

The new game might be good, though. I would really love to see Doom finally die. Then we might actually be able to take a step forward and innovate for a change.
 
I would love to see an x86 or x64 processor designed with physics in mind: either an extension like SSE, or a revamp of the APU(s) to process physics-based calculations more efficiently. This would knock out the overhead of transferring extra data to the GPU (NVIDIA), or take some work off the other regular cores on the chip. If it's just an APU revamp, you'd see improvements in all games that do their physics on the CPU.
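For what it's worth, you can already press SSE into service for physics-style math without any new extension. Here is a minimal sketch (the particle layout, counts, and time step are made up for illustration) that integrates positions four at a time:

[code]
/* Minimal sketch: using the SSE the CPU already has for physics-style math.
   Particle layout, counts, and time step are made up for illustration. */
#include <xmmintrin.h>  /* SSE intrinsics */
#include <stdio.h>

#define N 8  /* number of particles; a multiple of 4 keeps this example simple */

int main(void)
{
    float pos_x[N] = {0, 1, 2, 3, 4, 5, 6, 7};   /* positions (structure of arrays) */
    float vel_x[N] = {1, 1, 1, 1, 2, 2, 2, 2};   /* velocities */

    const float dt   = 1.0f / 60.0f;             /* one 60 Hz frame */
    const __m128 vdt = _mm_set1_ps(dt);          /* broadcast dt into all 4 lanes */

    for (int i = 0; i < N; i += 4) {
        __m128 p = _mm_loadu_ps(&pos_x[i]);       /* load 4 positions  */
        __m128 v = _mm_loadu_ps(&vel_x[i]);       /* load 4 velocities */
        p = _mm_add_ps(p, _mm_mul_ps(v, vdt));    /* p += v * dt, 4 particles at once */
        _mm_storeu_ps(&pos_x[i], p);
    }

    for (int i = 0; i < N; i++)
        printf("particle %d: x = %f\n", i, pos_x[i]);
    return 0;
}
[/code]

A physics-oriented extension or a revamped APU would presumably just be a wider, smarter version of this kind of loop.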
 
Whatever. CPUs and GPUs are WAY too overloaded with today's games, and pulling that work off the CPU onto a PPU is a great idea. But Carmack is probably tired of learning all this new crap.
 
Forget about physics. They need to invent a card that adds an in-depth storyline, diverse gameplay, and the 'fun' factor to any game made in the last several years 😛
 
[citation][nom]zak_mckraken[/nom]I think Carmack is right, for once. Going with separate physics hardware is a huge step backward. You can't just take part of the graphics engine and make it run on separate hardware. What's next? A Lighting Processing Unit? A Shading Processing Unit? A Texture Processing Unit? That's just bad engineering. The future is in multi-core general-purpose CPUs. On a side note, I pity the Average Joes out there who think Carmack has no clue what he's talking about. You might not like the way he thinks or what he says, but he definitely knows more about the subject than you and I do.[/citation]

So dedicated graphics cards were a step backwards too?
 