AMD Radeon R9 300 Series MegaThread: FAQ and Resources

Page 53 - Tom's Hardware community forums
[Image: Mercury Research chart of Nvidia notebook GPU market share]


A lot of it was due to the Bitcoin mining craze. And man, HD 7970s/R9 290s got super expensive toward the end of that craze...
 
AMD's stock just jumped more than it has in a while, coinciding with last night's announcement of their new APU - their fastest so far.

http://cnafinance.com/advanced-micro-devices-amd-stock-heres-why-its-climbing-today/8450


Here's a link to AMD's official press release:

http://www.amd.com/en-us/press-releases/Pages/amd-enhances-its-2016mar01.aspx


Also, here's a new FirePro-related announcement from today:
http://www.amd.com/en-us/press-releases/Pages/amd-gpuopen-fuels-2016mar02.aspx
 


All true, but it's AMD's first upswing in three months. Hopefully they'll pull it together. They went so low recently that even climbing back to a quarter of their last big spike, in 2006 (about $40), would make some people quite rich indeed.
 

I think FirePro successfully converting CUDA programs into OpenCL is the more important news for AMD. It could bring in a lot of cash. The present APU lineup, on the other hand, meh.
 
Still, it won't be easy for them to steal market share from Nvidia. CUDA development is still faster than OpenCL development. And from what I've heard, some developers prefer CUDA because of its high-level nature, whereas OpenCL is the opposite. So for some developers, ease of use will still pull them toward Nvidia's solutions. Plus, Nvidia is no longer slower than AMD when it comes to OpenCL.
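To give a feel for the "ease of use" point being made here (a minimal illustrative sketch, not something from the thread): a CUDA vector add is single-source, with the kernel and a one-line launch in the same file, whereas the equivalent OpenCL host code has to explicitly obtain a platform and device, then create a context, command queue, program, and kernel before any work runs.

```cuda
#include <cstdio>

// CUDA: device kernel and host code live in one source file.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified (managed) memory keeps the host side short:
    // no separate host/device buffers or explicit copies needed here.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // One-line kernel launch: grid of (n+255)/256 blocks, 256 threads each.
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // 1.0 + 2.0 = 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The OpenCL version of the same program needs `clGetPlatformIDs`, `clCreateContext`, `clCreateCommandQueue`, `clCreateProgramWithSource`, `clBuildProgram`, `clCreateKernel`, `clSetKernelArg`, and `clEnqueueNDRangeKernel` (plus runtime compilation of the kernel string) before the add even runs - that boilerplate is the verbosity developers in this thread are comparing against. The trade-off is that the OpenCL binary runs on AMD, Intel, and Nvidia hardware, while the CUDA one is Nvidia-only.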
 


Not sure that's entirely true. The gap might have closed a bit, but OpenCL compute benchmarks still appear to favor AMD significantly (see the chart from the original article below).




You can see that the older 7970 matches Titan X in OpenCL, and newer AMD cards outperform Nvidia's fastest option by a decent amount.
 


If you read the entire page, there is more than one OpenCL benchmark there, and AMD and Nvidia seem to go back and forth depending on the program, rather than it being just about OpenCL.
 


It does. And that shows the gap closing, but I'm not sure it's gone. Either way, it's good to see Nvidia focusing on compute in non-proprietary code.
 


True, but I can see why they prefer to get people to use CUDA. With CUDA, Nvidia can guarantee the performance; with OpenCL they cannot, since it is not controlled by them and will normally be implemented to target more than just their hardware. CUDA, on the other hand, is designed for and optimized for their hardware alone.

It is much like, say, QuickSync, which is optimized for Intel's hardware and runs faster on it, while the open-source implementations do not run nearly as fast on Intel's CPUs as QuickSync does.
 


Yeah, I've always been curious to see tests of applications that offer better comparisons. I'm not sure anything like what I'm imagining exists; I suspect it doesn't. I'd be interested to see the same application coded for both OpenCL and CUDA, able to run on AMD or Nvidia hardware. That would give four sets of results on the same workload: (1) Nvidia running CUDA, (2) Nvidia running OpenCL, (3) AMD running CUDA, and (4) AMD running OpenCL. So far, I don't think (3) is possible, but it would give a better point of comparison if it were.
 

In the real world those benchmarks don't mean squat, IMHO. F@H uses OpenCL IIRC, and Nvidia GPUs are streets ahead of AMD.
 
@jimmy
+1

Nvidia did not outright win every OpenCL bench, but if you look at the 980 review it is very clear that, depending on the application, Nvidia can be ahead of AMD. See the benchmarks on AnandTech; to me it seems Nvidia tuned Maxwell itself to be more OpenCL-friendly. But still, in the HPC market AMD is the lesser threat to Nvidia. The main threat is actually Intel with their Xeon Phi. Nvidia (with IBM) already won one supercomputer contract aiming to be the world's fastest in 2018 (I think it was Summit), and there will be a competing system also aiming for the top spot, called Aurora, which will use Intel's Xeon Phi.
 


Lol, I think you are a rather 'unusual' consumer, given you only really do F@H with your GPU and nothing else. Agreed, AMD are (now) behind Nvidia on that specific workload, so you're going to get more performance for your money with NV. I remember AMD were ahead not so long ago, and they've been bundling F@H with their drivers for years (I used to run it on an HD 4670 as it was bundled with the driver; that card, incidentally, and despite being ancient, is still going strong. Gave it to a friend so he can play Minecraft at double-digit frame rates lol).

There are OpenCL-accelerated workloads in real software, however, that are faster on AMD (mainly video encoding/processing stuff). One thing I find frustrating, though, is all these 'rendering' benchmarks. What rendering software *actually uses the bloody GPU*? I do this kind of stuff for work, and *all* the actual rendering plugins I use are pure CPU-based solutions. The one I mainly use even supports render farming over the local network to speed things up, but the GPU (whichever brand) basically sits idle. I think this is the real issue: the number of accelerated workloads is actually pretty thin on the ground. I guess GPUs are still a bit 'new' (I mean, they've only been around since the mid 90s).
 


While it is not the same thing, a good comparison is the Apple iPhone 6S, which uses both, and in most cases they perform about the same and use about the same amount of power.
 

I read a rumor a while back that all the GPUs were on GF and TSMC was making Zen Opterons. It'll be an interesting second half of the year; that's the only certainty.
 


I don't think it's that unusual as many folk fold, it's not just me. :lol: And it seems to me to be more relevant than the benchmarks you find so "frustrating". 😉
 


Lol, I wasn't saying it's unusual for someone to run F@H on a GPU (I did for a while, set to run when the machine was idle). It's helping a project out, after all. What I think is perhaps a bit unusual is F@H being the primary reason to buy a GPU, although maybe that's just me?

I mean, unless you yourself are actually involved directly with the project somehow, what benefit does spending lots of money on a new GPU gain you other than a higher score? I wouldn't personally invest money purely for higher benchmark performance. Usually my GPU purchases are prompted by a game or a bit of 3D software running poorly enough for me to notice, without being able to tweak the settings to an acceptable frame rate... I guess F@H doesn't affect me directly enough for it to be a *primary* decision factor.

Now, a fully GPU-accelerated rendering plugin, that would be a serious factor for me, as it could potentially shave significant time off rendering animation sequences, which can (on high settings) take weeks to render on the CPU alone.
 


Awesome, I want one already.