CUDA, PhysX Are Doomed Says AMD's Roy Taylor



It can be done... if Crytek wants to:

http://physxinfo.com/news/11055/gdc-2013-demo-apex-turbulence-in-cryengine/
 


And now Nvidia has already gotten its hands on PGI:

http://blogs.nvidia.com/blog/2013/07/29/portland/
 

Keep repeating the mantra....
CUDA is doomed
PhysX is a dead end gimmick
AMD is a company on the rise

Say it enough times, get your fans to repeat it, people will start to believe it.
 
Does anyone remember when ATI launched "Truform" (a kind of tessellation) which only ran on ATI hardware?
I guarantee that if AMD had bought Ageia, they would have done the same thing as Nvidia.
Depends on what you mean by "the same thing."

If you mean make it only work on ATI/AMD hardware, then of course they would. They bought it, after all.

If you mean lock it out when there is a NVidia card, I sincerely doubt it.
 


Well... I heard Bullet Physics will arrive with the Catalyst driver release that really, really fixes the multi-GPU micro-stutter issue (since the Aug 1 release only kind of really fixed it... or is it really kind of fixed it?).
 

PhysX's main problem is not that it doesn't play nice with the other kids. It's all the new kids who have recently popped up around the block, offering a couple of new games to which everyone is invited.

With software developers starting to look at OpenMP and OpenCL for their GPGPU needs and Nvidia crippling (arbitrarily capping) GPGPU performance on desktop chips, developers have fairly strong incentive to move to a multi-platform, multi-vendor GPGPU API that works non-crippled on everything else instead.
 
I'm sorry AMD but Kepler kicked your ass (without CUDA and PhysX).

As far as your CPUs are concerned - I think they offer great performance for a good price (but you've got some catching up to do to beat Intel).
 


LOL...Like I said...GROWING, not doomed. People don't like proprietary stuff, but when it blows away the competition (or in this case there isn't another viable option with such a great ecosystem), you use it because it makes you faster, which makes you more money. You don't really complain and attempt a switch until you DON'T make money.
 
Also, like I was saying in my first post, Nvidia didn't invent this tech; they bought it from Ageia, who bought it from NovodeX. So if you're upset that Bullet isn't popular yet (and I feel for you on that one), just remember that the multi-threaded PhysX engine was developed, brought to market, and then bought in 2004 by Ageia... who was then bought out by Nvidia in 2008. A few months later, Nvidia released the PhysX engine software with their drivers for any CUDA-cored GPU (at the time, only the 8xxx and 9xxx series). So Bullet might not be as polished as PhysX, but it also hasn't been in development since pre-2004 either.

Lastly, here's a list of games that use Bullet so far:
https://en.wikipedia.org/wiki/Bullet_%28software%29#Commercial_games
 
To those of you saying that Nvidia's physics are better than AMD/ATI's: it is to be expected that a company will produce better quality than another when it gets away with dirty business tactics, because those tactics going unpunished make it richer, more powerful, and more able to produce great things.
 

Which of the following options has the potential to make you the MOST money in the long run:
1- relying on proprietary APIs running on proprietary chips from a single vendor, who cripples performance on mainstream chips to force people to buy its GPGPU-specific variants for non-gaming GPGPU loads, or
2- relying on open standards supported by nearly every architecture, every vendor, and every OS, where the proprietary vendor from #1 shoots itself in the foot by artificially crippling performance on those other APIs?

For people who port their applications to multiple OSes, running on multiple architectures from equally diverse vendors, on hardware configurations that are themselves mix-and-match combinations of IP cores from multiple vendors (as is very common in phones, tablets, PC-on-a-stick, etc.), investing in a proprietary API that works optimally on only a handful of configurations (a PC running Windows with an Nvidia GPU, or a phone/tablet with a Tegra SoC) out of thousands of possible permutations is silly.

While CUDA/PhysX may maintain niche status on PCs, the broad selection of non-Nvidia hardware out there with untapped GPGPU potential is going to motivate developers to look into and possibly adopt more open standards for cross-platform convenience.
 
"Unlike our competitor, who’s obsessed with launching consoles in the mobile market, we still love PC gamers and we’re absolutely committed to them. That’s never going to go away. Nobody should have any doubt that we’re committed to GPUs."

While many people prefer consoles and tablets over the PC, I'm very glad to see that AMD still cares about what is most important. Never let the PC disappear!
 
That's really nice and all, but I'm going to stick with nVidia until AMD finally does something about their terrible Crossfire(X) support. He can fire as many shots as he wants -- nVidia's product is still better.
 

They are working on it. A beta driver improvement is already out but it still has a fair bit to go.
 


SLI has its own problems as well. The reason SLI has more consistent frame times is that Nvidia added a hardware fix on Kepler to make SLI much more stable; the SLI problems probably still exist on older Fermi and pre-Fermi architectures. AMD's approach is a software one, which is a much harder road than fixing it in hardware. What SLI doesn't have, and Crossfire does, is the ability to run multi-GPU over a 4x lane, and to pair cards within the same family as well as dual-GPU cards (e.g. a 690 cannot SLI with a 680, but a 7970 can Crossfire with a 7990 or a 7950).
 


The 13.8 beta driver actually pretty much solved the micro-stutter problems... And as far as the 7000 vs. 600 series goes, AMD had the better high-end card (not counting the $800+ cards).
 


I've had SLI setups since the 500 series: 580s, then 680s, and now 780s. I haven't had any issues with SLI.

And why would you want to Crossfire a 7970 with a 7990 if they're only beginning to fix Crossfire issues? So you can fork out a lot of bucks only to view a flickery, juttery screen with a "promise" of future functionality? I'm not sure I care about that kind of flexibility when the product doesn't even work as advertised. I wouldn't have a problem considering a single-GPU setup with AMD cards, but I have absolutely no appreciation for a multi-GPU solution from AMD at this time.

Hopefully AMD gets it worked out, but I was having the very issue the tech sites finally exposed this past March with my 5850 Crossfire setup three years ago. That's more than enough years of complacency toward Crossfire gamers. I say as consumers we need to demand more for our money.

I don't see value in a "value" offering from AMD if they're selling a product that doesn't even function properly.
 
Moron said: "Taylor said that Nvidia's CUDA is doomed, and PhysX is an utter failure. Why? Because the industry doesn't like proprietary standards."

So, let me get this straight: you are saying Xbox, PlayStation, and all the game platforms are doomed? That Windows is doomed? Wow... nice fantasy. [I use Nvidia on PC for games, not consoles; well, when I used to buy games... DRM put a stop to that.] I also use Windows and Linux... Seems like both coexist, and both get away with things because there are so few choices...

 

How is stating the truth being a douche bag? CUDA is losing industry support to OpenCL. This is a fact. Even Folding@Home is moving from CUDA to OpenCL for NVidia cards. NVidia doomed PhysX by keeping it proprietary. Developers can't use PhysX in any meaningful way without risking lost sales. With Havok properly supporting CPU physics, and Bullet being an open API that devs can modify for their own needs (unlike PhysX) while utilizing OpenCL... why should devs waste their time on a proprietary API like PhysX?


Roy Taylor simply stated facts. PhysX and CUDA are on the way out. CUDA is proprietary, lacks the industry support that OpenCL enjoys, and is losing support regularly. Folding@Home is even being shifted to run OpenCL exclusively. PhysX is nothing but a gimmick restricted to NVidia cards, while Bullet is an open API that runs on both NVidia and AMD GPUs, and Havok is optimized to run on CPUs. Devs don't make money by using proprietary garbage that limits sales.
 

What part of his statements in the VR interview were "facts"? Fact: "I think CUDA is doomed". "Think" does not make it a fact. If he had said "CUDA is doomed, and here is why," then he would be substantiating his argument with (hopefully) logical facts. Instead, he followed it up with "I don't want it... you don't want it"; those are not factual statements. He is playing on some wish-fulfilling dislike of "proprietary" tech... until, of course, AMD comes up with something of its own, and then it will be the best thing since sliced bread. Wishful optics and perceptual observations based on his understanding of the competition do not magically grant his statements some almighty fact-based finality.

Edit: Of course devs will go where the money is... so why don't we let the free market determine where the tech goes, instead of the biased opinions and observations of a single salesman?
 