AMD Radeon R9 300 Series MegaThread: FAQ and Resources

Page 37 - Tom's Hardware community forums
Isn't this 380X the same GPU that was only available in the Apple 5K iMac? From what I remember it was a good GPU, it just had thermal throttling due to Apple's poor cooling solution.
 


No idea tbh but seeing how lacklustre the 380 is in what I am using it for... meh.
 


I'm pretty sure it was the full Tonga core made specially for Apple's 5K iMac, meaning I'd expect a 3DMark11 score slightly faster than the 7970. It should be able to play all games on high to ultra at 1080p no problem. For $250 that's pretty good. (The best option is just to get a used 7970 for $130, but some people need to buy new...)

You should have known what you were buying when you got the 380; benchmarks are everywhere. It's all about price to performance: if you got the 380 for $200 and it's as fast as the 960 that costs $230, then don't complain. Yeah, it's slower than a 970 and a 980 and can't play Assassin's Creed Unity on ultra with GameWorks on, but you get what you paid for.
 


It certainly is, and the 960 gets 115k PPD and the 380 only gets 90k PPD. Find that in your benchmarks.
 
So it would seem the 960 is a better deal. I didn't research the price of parts in my previous post; it was just relative.

Still, you should have looked at that when you bought the 380.

There is no such thing as bad hardware, just bad prices.
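The price-to-performance argument above is easy to make concrete. A quick sketch using the figures quoted in this thread ($200 and ~90k PPD for the 380, $230 and ~115k PPD for the 960); note these are the posters' estimates, not measured values:

```python
# Points-per-day earned per dollar spent, using the thread's quoted figures.
def ppd_per_dollar(ppd, price_usd):
    return ppd / price_usd

r9_380  = ppd_per_dollar(90_000, 200)    # 450.0 PPD per dollar
gtx_960 = ppd_per_dollar(115_000, 230)   # 500.0 PPD per dollar
```

By this metric the 960 comes out ahead even after its higher sticker price.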
 
The two sides seem to take slightly different approaches: whilst both use the GPU cores to the max, Nvidia cards also employ the CPU but AMD cards don't. I have no idea if that's what gives them such a lower PPD, or if they just have weaker cores to begin with.
 


Well, they do have weaker cores, but more of them: the Fury X has 4096 cores and is about as fast as (or slower than) the 3072-core Titan X.

CPU acceleration may be a large factor in the PPD.

what are you computing with your card?
 


That's already been mentioned; are you not reading the previous posts?
 


Sorry, but I thought F@H was a game title I was unfamiliar with.

Upon googling F@H I found this:
https://en.wikipedia.org/wiki/Folding@home
Is this what you were mentioning?
 


Yep, that's Folding or F@H.
 
Traditionally, folding has always been better on Nvidia hardware. But since they updated the client to OpenCL, some would probably think AMD can now do better than Nvidia, since AMD seems much more proactive about supporting the latest OpenCL spec.

But in my opinion, just because they were much faster than Nvidia at supporting the latest spec doesn't mean they're also better at it. I've always believed Nvidia is perfectly capable of supporting OpenCL properly but deliberately holds back to push its CUDA ecosystem.

Anyway, it seems Vulkan is also going to draw on OpenCL quite extensively, so I think this will give Nvidia a reason to fully support the OpenCL spec ASAP.
 


They may be "holding back", I have no info on that, but the R9 380/285/7950 gets its arse handed to it by the GTX 960 when it comes to F@H! :lol:
 


Specific benchmarks favor specific hardware.

F@H has never been a strong point for GCN. Still, GCN generally offers more OpenCL throughput than Nvidia in many applications.

The *best* compute processor in the GCN lineup is Hawaii I believe as it offers AMD's highest DP rate...

I also seem to remember that Tahiti is stronger than Tonga at compute tasks (so the HD 7950/7970 are probably a better option than the 380/380X for PPD).
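The DP pecking order described above can be sanity-checked with back-of-the-envelope math: theoretical FP32 throughput is shaders × 2 FLOPs/clock × clock, and FP64 is that times the chip's double-precision ratio. A rough Python sketch; the shader counts, reference clocks, and DP ratios are the commonly cited figures for these chips, so treat the outputs as approximations:

```python
# Back-of-the-envelope GPU throughput:
#   FP32 GFLOPS = shaders * 2 FLOPs/clock * clock (GHz)
#   FP64 GFLOPS = FP32 * the chip's double-precision ratio
# Shader counts, clocks, and ratios are commonly cited reference figures (approximate).
def gflops(shaders, clock_ghz, dp_ratio):
    fp32 = shaders * 2 * clock_ghz
    return fp32, fp32 * dp_ratio

tahiti_7970 = gflops(2048, 0.925, 1 / 4)   # ~3789 FP32, ~947 FP64 GFLOPS
tonga_380   = gflops(1792, 0.970, 1 / 16)  # ~3476 FP32, ~217 FP64 GFLOPS
hawaii_290x = gflops(2816, 1.000, 1 / 8)   # ~5632 FP32, ~704 FP64 GFLOPS
```

On these consumer numbers Tahiti actually beats consumer Hawaii in FP64, matching the Tahiti-over-Tonga point above; Hawaii's "highest DP rate" claim refers to its silicon, which runs 1/2-rate FP64 in the FirePro parts but is capped at 1/8 on the 290X.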
 


Other folders have confirmed that the PPD for a 7950 and a 380 are pretty much the same, and memory seems to have no bearing on the outcome. F@H switched from a CUDA-based client to an OpenCL-based client to even the playing field, IIRC, so GCN being pants at it speaks volumes about AMD's claim to support open standards and software, IMHO.
 
I'm not really sure I see your point. I've run F@H in the past; what precision are you working to? From memory it wasn't as black and white as you portray, but even if Nvidia is outright better, different tasks suit different GPU configurations...
 
Too many games are factored into the overall performance graph on TechPowerUp. Who cares if World of Warcraft gets 200 fps on a 980 Ti and 150 fps on a Fury X? Find another irrelevant game and the tables turn; averaged out, it throws off the relative performance graph. And they have no min/max or 99th-percentile frame times on top of that. I guess I shouldn't say I don't trust them; their individual game benches are OK, I suppose, although they lack more precise measurements.
 
Looking at these compute benchmarks: http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/20

So in F@H the GTX 980 is way ahead in *single precision*; interesting, though, that the old 7970 spanks it in DP in the same test. I'm not sure a 380 would match that, as I think it's got a lower DP rate than the 7970.

As always, the best option is to pick the tool based on the task.
 


Doesn't the version of OpenCL the program actually uses matter just as much as what version is supported? AMD may support a newer OpenCL spec, but if F@H only targets an older one then Nvidia isn't behind; only if the software uses the latest version, and Nvidia has to fall back to an older one to be compatible, will you see a loss based on version.
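The version point above boils down to this: the effective OpenCL level is the lower of what the app targets and what the driver reports, so a newer driver spec buys nothing if the client doesn't use it. A toy illustration in Python; the helper names are made up, and only the `CL_PLATFORM_VERSION` string format ("OpenCL <major>.<minor> <vendor info>") comes from the OpenCL spec:

```python
# The effective OpenCL level is min(app target, driver support): a driver
# exposing a newer spec gains nothing if the app targets an older one.
def parse_cl_version(platform_version):
    """Parse a CL_PLATFORM_VERSION string like 'OpenCL 1.2 CUDA' into (major, minor)."""
    major, minor = platform_version.split()[1].split(".")
    return int(major), int(minor)

def effective_version(app_target, driver_reports):
    return min(app_target, parse_cl_version(driver_reports))

effective_version((2, 0), "OpenCL 1.2 CUDA")     # -> (1, 2): the app can't use 2.0 here
effective_version((1, 2), "OpenCL 2.0 AMD-APP")  # -> (1, 2): the driver's 2.0 goes unused
```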

Regardless, I get back to my original question for you, mouse: every aspect of every card gets benchmarked today. There is no way you couldn't have found exactly what the 270X was capable of before making the purchasing decision. Do not complain that a piece of hardware isn't what you expected when it got hundreds of reviews comparing it to every other piece of hardware made in the last five years. You should have known before you bought it that it was slower than a 960 at F@H, and therefore you should have bought the Nvidia card, since you knew you were only going to run F@H and the 960 was faster for the money spent.

I am an AMD fan, I will not lie, but I would never claim that an Nvidia card I bought was a bad card because a similarly priced AMD card performed faster at one specific task. I may say I got a bad deal, or that I should have gotten the AMD card since I mostly only play this game, but I'd never say the Nvidia card is bad. This is low-key fanboy bashing.

We're here to discuss future cards and GPU rumors, essentially, and this IMO is outside that scope; not to mention it's the R9 thread...
 


F@H "benchmarks" mean absolutely nothing compared to actually running F@H for real; if you had ever done it, you would know that.