Analysis: PhysX On Systems With AMD Graphics Cards

The point of clarifying SSE2 efficiency is not to tell Nvidia how to do their job. Their job is making money for shareholders, and there's nothing wrong with a gimmick if it pays the bills.

This article is asking whether the GPU offload story is really so useful, or whether games are being played with the numbers so Nvidia can sell more cards. The issue matters because game programmers need to adopt this API, and they won't if it doesn't help them. A crippled CPU implementation is probably not what they signed up for.

From where I sit, it looks like the benefit advertised for PhysX is measured not against a reasonable CPU implementation, but against a crippled one.

If the same work can be done in a third of the cycles (and distributed across multiple cores to cut the time even further), then the GPU PhysX numbers start to look suspect.

Seems like Nvidia made a bet that this gimmick would give them an advantage, and they had to cheat to make it look good.
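For what it's worth, here's roughly what a "reasonable cpu implementation" means -- a toy sketch of my own in C, nothing from the actual PhysX SDK, with made-up names: the same position update written once with plain scalar math and once with SSE intrinsics that handle four particles per instruction (these float intrinsics are technically SSE, which every SSE2-capable chip also supports). Spread the vectorized loop across a few worker threads and the advertised GPU advantage starts shrinking fast. Build with something like gcc -O2 -msse2.

[code]
/* Illustrative sketch only -- not PhysX code. Same Euler position update,
   scalar vs. SSE, four particles per instruction in the vector version. */
#include <emmintrin.h>   /* SSE/SSE2 intrinsics */
#include <stdio.h>

#define N 1024           /* particle count, divisible by 4 */

static float pos[N], vel[N];

/* scalar version: one particle per loop iteration */
static void integrate_scalar(float dt)
{
    for (int i = 0; i < N; ++i)
        pos[i] += vel[i] * dt;
}

/* vectorized version: four particles per loop iteration */
static void integrate_sse(float dt)
{
    __m128 vdt = _mm_set1_ps(dt);
    for (int i = 0; i < N; i += 4) {
        __m128 p = _mm_loadu_ps(&pos[i]);
        __m128 v = _mm_loadu_ps(&vel[i]);
        p = _mm_add_ps(p, _mm_mul_ps(v, vdt));
        _mm_storeu_ps(&pos[i], p);
    }
}

int main(void)
{
    for (int i = 0; i < N; ++i) { pos[i] = 0.0f; vel[i] = 1.0f; }
    integrate_scalar(1.0f / 60.0f);   /* one step, scalar */
    integrate_sse(1.0f / 60.0f);      /* one step, vector */
    printf("pos[0] after two steps: %f\n", pos[0]);
    return 0;
}
[/code]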
 
Guys, this review opened my eyes... unbelievable. Nvidia deserves the finger!
Anyway, the multicore trend for CPUs is a good thing for PhysX, and for Bullet physics too.

"Nvidia is holding back progress"... that's sick behavior from a corporation that has the money, engineers, and resources to create something genuinely competitive, not lies and deception.
 
[citation][nom]clonazepam[/nom]iirc, wasnt/isnt nintendo a big fan of this strategy? i could be mistaken but i think i had read that game developers for the original nintendo couldn't utilize the hardware's full potential, only what nintendo allowed at that time. Later on, they'd allow developers access to more of the hardware/features/capabilities as time went on, in phases.[/citation]
The whole idea that "most games don't use all of a console's full potential" is largely a myth; there's some improvement after the first wave of games, but that's because the first games began development before the console was finalized, and hence had to be open to the console being tuned down a bit.

With older consoles (3rd/4th-gen stuff like the NES, Super NES, and Genesis), pretty much all the power was used from the start; there wasn't much headroom left in the CPU, graphics, or audio. Most of the limitations people point to, especially for the SNES, were in add-on hardware and accessories, which were limited primarily by cost and practicality. I mean, sure, an 8-player game sounds cool today, but back then even getting 4 people crowded around a TV was a stretch... and that'd assume the player owned two splitters. (And remember, early-90s TVs were NOT the huge ones that are standard today.)

Similarly, there were also cost concerns; making a copy of a game wasn't a matter of just paying 5 cents to stamp another DVD; games had to be burned onto ROM cartridges built on assembled PCBs. If you wanted a large game, that meant paying for more ROM chips. Sure, Star Ocean or Tales of Phantasia were a then-mind-blowing 6MB... but they also cost three times as much to make as a more standard 2MB game; I'm talking $50-60 US in costs per cartridge, vs. $15-20 US. And if you wanted to add in a snazzy SuperFX or SA-1 chip to get the improved visuals seen in Yoshi's Island or Super Mario RPG (respectively), that's another $10-20 US.

The original NES was even more limited; you had a whopping 2 KILOBYTES of program RAM for the CPU. Special tricks were limited to screen effects pulled off by altering registers on a per-scanline basis (Kirby's Adventure does this a lot) and using bank-switching to make the cartridge as large as you were willing to spend money on. (Similarly, Kirby's Adventure packed a whole 0.75MB.)

You might be thinking of some tech demos, such as a raycaster on the NES, but those don't represent actual GAMES... Adding the ability to do anything other than walk around would likely have been too much to handle. Similarly, one must remember that back in the 1980s the FPS genre outright didn't exist; it didn't start becoming POPULAR until Wolfenstein 3D hit shelves in 1992, and by then the NES was old hat. So it's not that any maker held back their console's capabilities; it's that it was either impractical, or simply no one had thought of it at the time. The same goes for all consoles, regardless of who made 'em, be it Sega, Sony, Microsoft, Nintendo, or whoever.
 
I'm no fanboi of ANY company, and lately I've been meaning to pull the trigger on a 460 1GB (since it's clearly the best bang for the buck right now).

But the things they do... BIG turn-off, and I will not support that type of business.

ATI it is
 
Why won't Nvidia just make a dedicated PhysX card? I'm sure they'd make a profit on it, since their decisions are business-driven anyway.
 
Must be a slow news week. I remember the hype around this back in late 2009 and early 2010, when AMD and Nvidia execs were mudslinging back and forth (the AA disabling in Batman: Arkham Asylum comes to mind).

To me, the issue revolves around being 'competitive' versus being 'anti-competitive'. It's perfectly fine to make a feature that works well with (or even ONLY with) your hardware. That's just being 'competitive', as it's up to us, the users, to choose whether to adopt it by buying it or not.

It's NOT OK to implement a feature that intentionally looks for competing hardware and cripples it or disables features. This is 'anti-competitive', because users get penalized for exercising their choice to use a competitor's hardware.

Nvidia is walking the line between savvy business and stifling competition, and the evidence so far points to the latter (i.e., hack around the vendor ID check and voila, Radeons work alongside GeForces).
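To make that concrete, the check people are hacking around boils down to something like the sketch below. This is my own hypothetical illustration in C, NOT Nvidia's driver code; the PCI vendor IDs are real, but every type and function name here is invented.

[code]
/* Hypothetical illustration -- not actual driver code. Shows what "looks for
   competing hardware and disables the feature" amounts to: a vendor-ID scan
   over the installed display adapters. */
#include <stdbool.h>
#include <stdio.h>

#define VENDOR_NVIDIA 0x10DE   /* real PCI vendor ID for Nvidia  */
#define VENDOR_AMD    0x1002   /* real PCI vendor ID for ATI/AMD */

typedef struct { unsigned vendor_id; bool is_primary; } Adapter;

/* GPU PhysX gets refused as soon as a non-Nvidia primary adapter shows up */
static bool gpu_physx_allowed(const Adapter *adapters, int count)
{
    for (int i = 0; i < count; ++i)
        if (adapters[i].is_primary && adapters[i].vendor_id != VENDOR_NVIDIA)
            return false;
    return true;
}

int main(void)
{
    Adapter rig[] = {
        { VENDOR_AMD,    true  },   /* Radeon as the primary display card */
        { VENDOR_NVIDIA, false },   /* GeForce installed only for PhysX   */
    };
    printf("GPU PhysX enabled: %s\n",
           gpu_physx_allowed(rig, 2) ? "yes" : "no");
    return 0;
}
[/code]

The hacked drivers people pass around essentially make a check like this always answer "yes".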

The real tragedy here is blaming developers. They have neither the time nor the resources to optimize someone else's SDK. And even if one did invest in a way to seamlessly offload PhysX to the CPU (no incentive to, but just saying), Nvidia would probably sue their pants off.

Just my $.02

 
I've been running two 5850s in CrossFireX with a GTS 250 as a dedicated PhysX card for a while now, all on my Asus Rampage III Gene, using the x4 slot for the GTS 250.
 
I am sick of this partisan crap; it is, simply put, completely unnecessary. It looks completely unprofessional and just plain low for Nvidia to cripple a competitor's game performance by leveraging developers.
I understand that Nvidia needs to turn a profit, and yes, I know the economic climate for computer graphics is not the greatest; they have to stay competitive in order to stay profitable. But screwing over customers in order to post better numbers than their competitor is just not cool.
I congratulate Nvidia for coming up with a solid GPU physics engine. I would love to see it turn into something truly remarkable; one might say it already has. But instead of bringing the best out of it, they have leveraged it (and their TWIMTBP campaign) to hobble competing GPUs, sacrificing their own customers' performance to boot.

Good job, Nvidia; with a tragedy of the commons on your hands, you should have a fun time earning my respect again.
 
To the people defending Nvidia: there is a difference between passively not helping your competition and actively reducing performance. What would you say if AMD and Intel made their CPUs slow down when paired with Nvidia cards? A company should keep customers by making good products, not through artificial incompatibility and anti-competitive practices.
 
I believe that PhysX should be improved
And who exactly do you think should improve it? Nvidia did, got a huge improvement in performance, and now the sore losers are crying. TH must have gotten some $ from ATI or Intel to talk this crap.
 
Then why is the performance picture so dreary right now?

• With CPU-based PhysX, the game developers are largely responsible for thread allocation and management, while GPU-based PhysX handles this automatically.

Automatically. Why is that? Because Nvidia makes it happen on its own hardware. Should they now do that work for the developers too? Can Nvidia foresee every game on the horizon? Shouldn't developers use the 3.0 kit if they really care about gamers (because Nvidia apparently doesn't)? No; developers will just release the game faster and at lower expense, and quietly blame the results on anybody else.
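To put the bullet point in plainer terms: with the CPU path, it's the game that has to carve the simulation up across cores every frame, roughly like the toy sketch below. This is plain pthreads with entirely made-up names, NOT the PhysX SDK API; it's just the pattern the "developers are responsible for thread allocation" sentence describes. On the GPU path that scheduling happens inside Nvidia's runtime, which is the "automatically" part.

[code]
/* Toy sketch, not SDK code: split the bodies into slices and hand each slice
   to a worker thread for one simulation step. Build with -pthread. */
#include <pthread.h>
#include <stdio.h>

#define NUM_BODIES  4096
#define NUM_WORKERS 4

typedef struct { float pos, vel; } Body;
typedef struct { int first, count; } Slice;

static Body  bodies[NUM_BODIES];
static float dt = 1.0f / 60.0f;

/* each worker integrates its own contiguous slice of bodies */
static void *step_slice(void *arg)
{
    const Slice *s = (const Slice *)arg;
    for (int i = s->first; i < s->first + s->count; ++i)
        bodies[i].pos += bodies[i].vel * dt;
    return NULL;
}

int main(void)
{
    for (int i = 0; i < NUM_BODIES; ++i) { bodies[i].pos = 0.0f; bodies[i].vel = 1.0f; }

    pthread_t workers[NUM_WORKERS];
    Slice     slices[NUM_WORKERS];
    int per = NUM_BODIES / NUM_WORKERS;

    /* the game, not the middleware, decides how the work maps onto cores */
    for (int w = 0; w < NUM_WORKERS; ++w) {
        slices[w] = (Slice){ .first = w * per, .count = per };
        pthread_create(&workers[w], NULL, step_slice, &slices[w]);
    }
    for (int w = 0; w < NUM_WORKERS; ++w)
        pthread_join(workers[w], NULL);

    printf("pos[0] after one step: %f\n", bodies[0].pos);
    return 0;
}
[/code]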

Let's assume that people go to work for companies like Nvidia because they have a passion for gaming, or graphics, or computers in general. After that, everyone is alike: stay competitive, keep an edge, keep a keen eye, and earn as much as you can. I am tired of hearing about "not caring about gamers." People generally care about their own performance and their personal lives. Does anyone make YOU do anything extra? I can already see the major objections: this is communism, where is the freedom, this is free enterprise...
 
I love the comments. Had Nvidia and ATI (back when it was ATI) taken this route instead of adopting DirectX, I think 3D gaming would be in a different place than it is now.

The reason so many people cry foul is that, chances are, a Radeon card could run PhysX as well as, if not better than, an Nvidia card. Nvidia is purposely excluding AMD because they fear competition, which is a bad thing for consumers and developers in the long term; for this technology to be effectively exploited and reach maximum efficiency, there has to be competition.
 
I am sick of this partisan crap; it is, simply put, completely unnecessary. It looks completely unprofessional and just plain low for Nvidia to cripple a competitor's game performance by leveraging developers.

You are off base. The standard CPU + GPU performance is a given; the GPU PhysX gain is an extra, due to Nvidia's technology. They are not taking anything from you. Limiting it to their own GPU boards is exactly what ATI would have done, in a heartbeat. You want the extra, you buy Nvidia.
 
@ NoCompetetion.

So you think a competitive attitude for ATI would be either to develop a competitive product, or to be granted use of Nvidia's technology and just split the cost that went into its development? And if the second, how do you plan to split the cost? 'Cause just being given something is not competition, it's charity.
 
This is quite unfair to competitors - Nvidia incorporates strategic barriers in its drivers to prevent these combinations and performance gains if non-Nvidia cards are installed as primary graphics solutions.
 
[citation][nom]hiph[/nom]You are off. The standard CPU + GPU performance is a given. The GPU PhysX gains is an extra, due to Nvidia's technology. They are not taking anything from you. Limiting it to their gpu boards is the same as ATI would have done, in a heartbeat. You want extra, you buy Nvidia.[/citation]

You really don't understand what's going on here. People are complaining, yes, because PhysX is one-sided. But you need to realize that the games Nvidia has a heavy hand in, their TWIMTBP titles, are optimized only for Nvidia graphics cards and thus artificially make AMD's products look inferior. For example, look at the newest Metro 2033 charts in the 580 article: in some cases the 470 performs better than the 5970. That sure as hell seems like the developers were paid off by Nvidia. You need to wake up and realize that the issue now goes beyond the scope of "extra" features. When Nvidia helps release games, the competition's performance is artificially lowered just so Nvidia can look better.
 
[citation][nom]Ninjawithagun[/nom]It is NOT the fault of NVidia for aggressively pursuing Physx technology (and 3D Vision technlogy as well). AMD has only themselves to blame for not developing their own Physx technology. So what if NVidia's Physx doesn't work with AMD cards. Too bad! If you AMD fanboys like NVidia's Physx technology so much, then buy a damn NVidia card and stop your whining. NUFF SAID![/citation]
I simply could not just thumb you down, as you would take it as an ATI conspiracy.

Your logic is flawed, because this whole discussion is about opening up proprietary technology for the benefit of the masses, not keeping it limited to one supplier. If ATI (AMD) were to develop a competing physics engine and use the same anti-competitive practices as Nvidia, consumers would lose even more.

I would buy an Nvidia card if they supported combining their cards with ATI's. As long as they do not, my money goes to ATI.
 
What if Microsoft offered the physics solution instead of the graphics vendors, kind of like a DirectX implementation? They would act as a nice mediator, and it could not only allow for open physics development but also distribute the workload optimally between GPU and CPU. It would also pave the way for cross-platform convergence (the Xbox/console world with the PC gaming world) in future generations. Would be nice...
 
Seriously, I don't get why people think that Nvidia should improve CPU performance and/or enable PhysX on ATI cards. They paid for this tech, and they're developing it. If they did that, there would be no reason to buy Nvidia cards for customers who want the additional effects in games.

Why doesn't ATI share its Eyefinity tech with Nvidia? Why doesn't Intel share its fabs with AMD so they could make better CPUs, or share its HT technology?

Quit crying.
 
[citation][nom]eyefinity[/nom]The article could barely spell it out more clearly. Everyone could be enjoying cpu based Physics, making use of their otherwise idle cores. The problem is, nVidia doesn't want that. They have a proprietary solution which slows down their own cards, and AMD cards even more, making theirs seem better. On top of that, they throw money at games devs so they don't include better cpu physics. Everybody loses except nVidia. This is not unusual behaviour for them, they are doing it with Tesellation now too - slowing down their own cards because it slows down AMD cards even more, when there is a better solution that doesn't hurt anybody. They are a pure scumbag company.[/citation]

Nvidia is a business; they developed an asset that can be used for gaming alongside their video cards, and they have a right to keep it proprietary to try to improve their revenue streams. If PhysX were an open-source API and Nvidia did something to make it work best on their cards and no one else's, then yeah, I could understand where you're coming from. But PhysX belongs to Nvidia; it's like shouting at MS or Apple for not open-sourcing their OSes to the public (they leave that to Linux). It's just my opinion, but morality aside, it's business, and Nvidia has the right to do what it can to earn money. Maybe a competitive, dedicated open-source physics API needs to be created to start competing with PhysX, much like OpenGL works to compete with DX.
 
[citation][nom]old_newbie[/nom]Must be a slow news week. I remember the hype of this back in late 2009 and the beginning of 2010, when AMD and Nvidia execs were mudslinging back and forth (AA disabling in Batman:Arkam Asylum comes to mind). To me, the issue revolves around being 'competitive' or being 'anti-competitive'. Its perfectly fine to make a feature which works well with (or even ONLY) your hardware. Thats just being 'competitive', as its up to the users (us) to choose to adopt it by purchasing it or not. Its NOT ok to implement a feature that intentionally looks for competing hardware and cripples it or disables features. This is 'anti-competitive' because users get penalized for exercising their choice of using a competitor's hardware. Nvidia are walking this line between savvy business and stifling competition. Evidence so far points to the latter (IE. hacking to block NV vendor ID recognition and voila! radeons work with geforces). The real tragedy here is blaming developers. They have neither the time nor resources to optimize an SDK. Problem is, even if one did invest in a way to seamlessly offload Physx to a CPU (no incentive to, but just saying), Nvidia would probably sue their pants off. Just my $.02[/citation]


I agree, to a point... If you're putting a competitor's product in your system alongside my product, I have the right to say you can't use my product's features while their product is in there... which is what Nvidia does. You don't get GPU PhysX when Nvidia detects an AMD card in the system. That's good business sense. Period. It says: oh, you don't like my card enough to make it your only card? Well then, screw you, buddy, I'm not giving you its performance features... since I can't go so far as to completely disable the card, because you did buy it (I assume), after all.

If you were selling Ford cars and someone bought a new Ford Focus off the lot, went home, took the engine out, put in a Chevy engine, and then ran into a problem, would you fix their car under warranty? Hell no, because they chose to add a competitor's product to the mix and voided the warranty by doing so.

It's the same thing here, only Nvidia can't stop you from using their card completely; they can just turn off PhysX so you can't use your old 285 with a shiny new AMD 6870 or whatever.

And to the article... you made one rather glaring mistake. The PhysX processing on the 480 is 4x faster than on the 285 you were using as the dedicated GPU PhysX solution. The 480 by itself would have given you faster PhysX. If you wanted a true comparison, you should have used a 480 as the dedicated PhysX card and a 580 as the primary GPU. Just saying.
 
I have an 860GT lying around somewhere, but after what I have seen, Nvidia is just a pile of crap... holding back performance when a non-Nvidia card is detected...

Out of all the hacks and cracks in software and code writing... this is one piece of code that NEEDS to be cracked wide open.
 