CUDA, PhysX Are Doomed Says AMD's Roy Taylor

Status
Not open for further replies.
I finally signed up for an account just to respond to this. I expect this sort of trash talk from a pasty-faced dungeon dweller living in mommy's basement - possessed of all the social acumen that entails. What I don't expect is for professionals to act this way. George Bush got up and declared the war over... and we see how that turned out. AMD is just trying to cash in on the weakness that Nvidia has shown; whether that's good or bad, it still feels like an immature gesture - like a monkey shaking his flaming ass cheeks at the opposition. And it's just about as attractive, too.
It's business - get over it.
As for the claim that Physx is unfair to AMD users, uhm.... if you buy a Ford, you don't get to tell the Chevy drivers what their manufacturer can do for their customers. If you want Physx, buy Nvidia. If you want AMD, buy that. Take this fanboy BS somewhere else. It has no place among grown men.
I never said Physx was unfair to AMD users - after all, NVidia is the one that bought it, so it makes sense that they are the only manufacturer to include it. But things like the PhysX hybrid lockout only hurt customers. It really doesn't help NVidia at all.

And you should take your own advice as far as fanboy BS goes.

 

I wonder why I have not had any major problems with the so-called "horrible" drivers?
So, will PhysX endure over time? Probably not. But not because I have a bone to pick with NVidia or because I don't like the color green. The plain fact of the matter, which no one has really captured here in the comments, is that PhysX won't last as long as NVidia keeps it proprietary to their GPU architecture. It could survive if NVidia were willing to share their PhysX code or allow for a unified OpenCL implementation. Currently, OpenCL is NOT optimized for physics processing, but it could be shaped to do so with a bit of creative coding. After all, OpenCL is supposed to be a unified coding language. But until 3rd-party developers are willing to join together and use the same exact code, all there is for physics processing is NVidia's PhysX technology. Like it or hate it, it's here now and it works.
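To make the physics-processing point concrete, here is a toy sketch of the kind of per-particle update that sits at the heart of a physics engine. Plain Python is used purely for illustration; the constants and particle values are invented. The point is that each particle's update is independent, which is exactly why this sort of work maps well onto a GPU kernel, whether written in CUDA or OpenCL.

```python
# Toy illustration: the per-particle update at the heart of a particle
# physics step. Each particle is independent of the others, which is
# why this style of computation suits a GPU kernel (CUDA or OpenCL alike).

GRAVITY = -9.81  # m/s^2, acting on the y axis
DT = 1.0 / 60.0  # one 60 Hz frame

def step(positions, velocities):
    """Semi-implicit Euler: update velocity first, then position."""
    new_pos, new_vel = [], []
    for (x, y), (vx, vy) in zip(positions, velocities):
        vy += GRAVITY * DT                      # apply gravity to velocity
        new_vel.append((vx, vy))
        new_pos.append((x + vx * DT, y + vy * DT))  # advance position
    return new_pos, new_vel

pos = [(0.0, 10.0), (1.0, 5.0)]  # made-up starting positions
vel = [(0.0, 0.0), (2.0, 0.0)]   # made-up starting velocities
pos, vel = step(pos, vel)
print(pos[0])  # the first particle has started to fall
```

On a GPU, the loop body would become the kernel and each particle would get its own thread; the algorithm itself is identical in either API.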
I will actually have to agree with this. PhysX will remain an eye-candy niche for a while. But it will be replaced eventually.

 


CPUs aren't powerful enough? Then why does nVidia have an option for setting the CPU as the PhysX processor? Sure, it's not great, but it's doable.

Never had trouble with AMD drivers, but Intel and Optimus are both a PITA at times. Then again, I haven't used anything with AMD's version of switchable graphics.
 

legokangpalla
This - "Because the industry doesn't like proprietary standards" - is why Nvidia is going to suffer, and not just with PhysX or CUDA.
Linux and the Chinese Dept. of Education pretty much told Nvidia to get off their lawn. These proprietary standards and designs have got to go; looking inside an Nvidia GPU is like a nightmare. A salute to those reverse engineers working on custom drivers.
 

KF Mush
The way I understand it is that you can buy an nVidia card with CUDA, but that card will also support OpenCL. Even if CUDA is dying, that's not a reason to avoid an nVidia card, because it supports either technology. Correct me if I'm wrong.
 


However, OpenCL is a lot faster on AMD cards than nVidia ones. Therefore, if you know you're going to be using it for OpenCL only, you go for AMD.
 

KF Mush


Ah, okay. I did a little research and it looks like you're right. I might have to make some different considerations for an upcoming build. Though I mostly need it for Adobe CS applications, which support CUDA, it doesn't seem like CUDA will last as it is proprietary and Adobe could potentially drop support in the future.
 


Honestly, I don't think Adobe will drop CUDA easily.
 

InvalidError
While they might not drop it altogether - at least not overnight - they could end up putting it on the back burner.

You can run OpenCL code on CPUs, IGPs/APUs, Xeon Phi, non-x86 platforms, variety of embedded GPUs, etc.
You can run CUDA on Nvidia hardware only. (And isn't GPGPU performance massively crippled on 6xx/7xx chips? Or is that only for OpenCL?)

OpenCL looks like a much safer long-term investment than a single-vendor proprietary thing.
 




F@H runs OpenCL now, and my Nvidia cards rip through a WU a lot quicker than my AMD card does. Although they are different classes of card, the disparity suggests to me that AMD cards are not a lot faster, if indeed they are faster at all. And don't pull out some canned benchmarks to try to prove your point - download and run the F@H client yourself.
 
I'm running 660Ti's and a 7790, and there is a 40k+ PPD difference between a single 660Ti and the 7790. With GCN being so wondrous, shouldn't it be less than that? If someone were to post their numbers for a 7850 or 7870, that would be useful, as they are closer to the 660Ti; numbers from a 7950 or 7970, while good to know, may be less useful in this instance.
 
The 7790 is a lot slower and cheaper, though - just over half the price at the moment.

Double the performance and you'll be comparing performance per dollar. It's not really linear, though.

I'm not too sure on the typical context - what are the actual values? Is that a 150% difference, or a 5% one?
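To make that question concrete, here is a minimal sketch using made-up PPD and price figures (the thread only mentions a "40k+ PPD difference", not the actual values), showing the difference between a raw percentage gap and performance per dollar:

```python
# Hypothetical numbers for illustration only - not measured results.
# Suppose a 660Ti folds at ~70,000 PPD and a 7790 at ~30,000 PPD
# (consistent with the "40k+ PPD difference" mentioned above),
# and the 7790 costs just over half as much.

ppd_660ti = 70_000   # points per day, assumed
ppd_7790 = 30_000    # points per day, assumed
price_660ti = 280.0  # USD, assumed
price_7790 = 150.0   # USD, assumed

# Relative difference, expressed against the slower card:
rel_diff = (ppd_660ti - ppd_7790) / ppd_7790 * 100
print(f"660Ti is {rel_diff:.0f}% faster than the 7790")  # ~133%

# Performance per dollar tells a different story than raw PPD:
ppd_per_dollar_660ti = ppd_660ti / price_660ti  # 250 PPD per dollar
ppd_per_dollar_7790 = ppd_7790 / price_7790     # 200 PPD per dollar
print(f"660Ti: {ppd_per_dollar_660ti:.0f} PPD/$, 7790: {ppd_per_dollar_7790:.0f} PPD/$")
```

With these assumed figures, a 40k PPD gap is a ~133% lead in raw throughput, but only a ~25% lead in PPD per dollar - which is why quoting the absolute difference alone answers neither question.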
 

InvalidError
There aren't many recent comprehensive OpenCL benchmarks out there, but the few I do see show the HD7970 20-100% ahead of the GTX690.

The closest thing to a decent roundup of OpenCL benches I could find is:
http://www.tomshardware.com/reviews/radeon-hd-7990-review-benchmark,3486-16.html

The problem with things like Folding as a "benchmark" is that you never get the same WU (or whatever Folding calls them) twice, so you cannot guarantee repeatability - each run may exercise completely different proportions of different (sub)sets of algorithms.
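Averaging over many work units is the usual workaround for that variability. A small sketch, with invented per-WU PPD values, of why a single run can mislead while a mean over several runs is more stable:

```python
import statistics

# Invented per-WU PPD results for one card - each work unit exercises
# different algorithms, so single-run numbers scatter widely.
ppd_samples = [52_000, 88_000, 61_000, 79_000, 70_000, 66_000]

# Any single WU could be quoted as "the" score:
print(f"single-run spread: {min(ppd_samples)} to {max(ppd_samples)}")

# A mean over several WUs (roughly what a monitoring tool reports
# over time) is a far more stable estimate of true throughput:
mean_ppd = statistics.mean(ppd_samples)
stdev_ppd = statistics.stdev(ppd_samples)
print(f"mean: {mean_ppd:.0f} PPD, stdev: {stdev_ppd:.0f}")
```

With these invented samples, quoting a single WU could claim anything from 52k to 88k PPD, while the mean sits near 69k - the same card, very different headlines.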
 


It is possible to get an average using HFM, but you do need to complete several WUs for that to pan out.
 

somebodyspecial


Adobe won't drop support for the 90% of the professional market that is based on NV. When people need to make money, they buy PRO cards, not home cards (for ECC, driver support, etc.). That is most of the market that can afford a $1200-2000 Adobe suite license. I don't know many home users who have a license for the Adobe CS6 suite. Er, I know ZERO. The people I know running that software at home are just emulating their work environment and have $1000+ cards. CUDA support is already in, there is no work involved, and I doubt AMD could even PAY them to drop it. It just doesn't make sense to remove something that has worked so well for years.
 

somebodyspecial


You seem to forget that both CUDA and PhysX will be in SoCs in 6 months with Kepler. Also, unless someone else is footing the bill for moving to OpenCL, they'll stick with what already works, which is CUDA for pro apps. That work is done, and 90% of the pro market uses NV cards with CUDA. You don't magically wake up one day and toss 90% of your market aside for slower options. So you'll have CUDA on Linux, Android, Windows. What else is there? With NV licensing Kepler out to anyone, I suspect the number of devices will grow :) Anyone will be able to slap on a great SoC GPU next year.

Don't forget devs are already experienced with Kepler (and AMD) hardware, so I think you're going to see other GPUs start to dwindle as more will want to standardize on a single vendor (or two, when AMD's SoC hits) that is FAR more suited to gaming, where the company you're working with is the same one you've had for 20 years (AMD/NV both). The rest have ZERO experience here. People take the path of least resistance - especially when any other path means more cost in a bad economy. :)

Game devs etc. don't program for tech that may be better YEARS down the road. They make a game for next year, and that's about it. Professional stuff will stick with what is already there without heavy investment from AMD (like with Adobe), as there is nobody else who really needs developers off CUDA. It's up to AMD, and they're broke. CUDA, to date, really only serves the pro market. Now that NV is licensing Kepler IP out, that may change and go into gaming, but NV will be just fine keeping the 90% pro pie they already have and adding whatever they can in games as a bonus.

Also note, NV can change when desired and make an OpenCL driver worth something. They only keep it down because they don't have to support it yet, if ever. Like that or not, it's the truth, and as a business I'd do the exact same thing. It's NV's job to put AMD out of business, not to help open standards survive, right? You don't see Intel giving x86 out to everyone either, for the same reasons.

Your argument only works if AMD can take a larger portion of the discrete pie. With NV at 65% and AMD at 30%, nothing will change. It's worse in workstations, at 90% to 10%. The only question I see here is whether licensing Kepler IP can make this even more lopsided going forward. Will CUDA become sort of like the next DirectX? If Logan really is 3x+ faster than the iPad 4, we could see this happen (I'd expect the iPad 5 to double it, but that would still come up short, right?). Everyone was impressed with mobile Kepler's power use, and that was at an assumed 28nm according to most. The Ira demo ran under 3W, and the water/island demo was very impressive too. Let me know when you see someone with an OpenCL TegraZone-type site. Until then NV will just keep growing their base with Shield (and all the other consoles/handhelds coming with T3/T4 etc.), Kepler licensing, etc. This isn't about what we'd like to happen; it's about reality and how much it costs to change it ;)
 

somebodyspecial


What inspiration is there to do this? Pros need ECC and driver support. These are for people with expensive apps; there is a reason pro cards cost $1000-5000 - they can avoid multimillion-dollar mistakes. These people won't go to NON-pro cards any time soon for work that depends on accuracy. So why would someone take the time and expense to add support for something that won't be used by many (again, NV owns 90% of these people's cards) without it being funded by AMD (or someone else with tons to gain - and I can't think of anyone else at the moment)?

Wishful thinking IMHO. Let me know when it happens. Until then, it's like saying "but the sky may turn green one day"...Yeah, so what. Hell may freeze over too...But I don't see it happening ;)
 
I am sure the professional cards support OpenCL as well.
Wouldn't it make sense to support as much hardware as possible to maximize sales? I think so.

You do realize CS6 already has preliminary OpenCL support. I can only see it improving from there.
 