One day, maybe they'll remember that NV doesn't care about OpenCL at all, because nobody makes money with it. You run CUDA in Adobe CS6 (video, photo, etc.), Blender, 3ds Max, V-Ray, and so on. Pro apps use CUDA (with perhaps an option for OpenGL or OpenCL).
When you test GPGPU in apps, run CUDA vs. whatever AMD runs fastest with (OpenGL or OpenCL). Nobody buys an NV card to run OpenCL stuff. NV doesn't care about an open-source driver when it can already do the same things faster with CUDA or OpenGL in EVERY pro app that makes you money. Testing GPGPU without showing CUDA is pointless. A real pro-app user wouldn't pretend seven years of CUDA groundwork doesn't exist. It DOES exist, and it makes a HUGE difference in some things, e.g. your own Photoshop CS5 tests shown here:
http://www.tomshardware.com/reviews/adobe-cs5-cuda-64-bit,2770-8.html
3 minutes vs. 12 minutes (against a 6-core CPU)... WOW.
http://www.blenderguru.com/4-easy-ways-to-speed-up-cycles/
Only 12x faster just for ticking the CUDA box... 12x faster sucks, right?
I would only turn on OpenCL if there were NO other app that did the same job with CUDA. But that situation doesn't exist, which explains why NV pushes CUDA and OpenGL and basically ignores OpenCL. I mean, why would they want to foster AMD optimizations in apps? Helping OpenCL means AMD gets into the game as apps add support, and they don't want that. Not to mention that, in the same app, CUDA should never be beaten by OpenCL, since OpenCL sits a layer of abstraction above talking directly to the hardware the way CUDA does.
http://www.heatonresearch.com/node/2487
"If you want to use nVidia, ATI and Intel, then you will use OpenCL. Sounds like a slam-dunk to use OpenCL? Right? CUDA is much more advanced that OpenCL. In OpenCL you are programming your graphics kernels in C (actually C99, but still C). In CUDA, you are using C/C++. In OpenCL, you are dealing with a higher-level abstraction. If you are on an nVidia card, OpenCL is essentially compiling to CUDA. Because of these reasons, I decided to go directly with CUDA for Encog's GPU implementation. This limits me to nVidia cards, but that is something I am willing to accept. At least for now. I may add an OpenCL version at some point in the future, but that is not planned at this point."
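To make the quoted point concrete, here's a minimal sketch (my own illustration, not from that article) of the same vector-add kernel written both ways. The CUDA version is ordinary C++ compiled offline by nvcc; the OpenCL version is a C99 source string that the host program hands to the driver and compiles at runtime (on NV hardware, through the same PTX back end CUDA targets):

```cuda
// CUDA version: plain C++, compiled ahead of time by nvcc,
// launched from host code with the <<<grid, block>>> syntax.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

// OpenCL version of the same kernel: C99 source carried as a string
// and built at runtime via clCreateProgramWithSource/clBuildProgram.
// Note the extra address-space qualifiers (__global) the C dialect needs.
static const char* vecAddCL =
    "__kernel void vecAdd(__global const float* a,\n"
    "                     __global const float* b,\n"
    "                     __global float* c, int n) {\n"
    "    int i = get_global_id(0);\n"
    "    if (i < n) c[i] = a[i] + b[i];\n"
    "}\n";
```

The kernels are nearly identical, but the surrounding host code is not: a CUDA launch is one line (`vecAdd<<<blocks, threads>>>(a, b, c, n);`), while the OpenCL path first has to enumerate platforms and devices, create a context, queue, program, and kernel object, and set arguments one by one. This fragment only compares the two kernel dialects and needs a CUDA-capable toolchain and GPU to actually run, so no output is shown.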
There's an example of what many devs think, especially when NV funds help (as it does for all the big apps) to code in CUDA. Besides that, why do I care how fast I can help someone cure cancer, etc.? Bitcoin mining is over for everyone but large botnets now that the easy blocks are gone.
https://community.rapid7.com/community/infosec/blog/2012/12/06/skynet-a-tor-powered-botnet-straight-from-reddit
Just an example... oh look, it uses OpenCL... LOL. Well, a botnet has to be cross-hardware compatible, right?
So what is it with testing open-source JUNK? I get that a lot of it is free and runs on a lot of hardware, but does anyone use this stuff to make money? Based on the 163,000 registrations on the Folding@home site, Folding@home is pointless to most people, and so is Bitcoin mining. Some 352 MILLION computers are sold each year, and you're benchmarking for 163,000 people (and why would I run up my electric bill for that? Will they pay me when it's solved?). How about testing some meaningful, REAL money-making apps to show off GPGPU? You can use LuxRender plugins with 3ds Max, Blender 2.6, Cinema 4D, etc., but again, as soon as you do that, I turn on CUDA. OpenCL should always be the last resort, used only when there's no CUDA way to do the same thing on NV. You keep forcing NV cards into the worst GPGPU situation they could possibly be in, which nobody would actually do, and then acting like that's normal instead of absurd.
It all amounts to this lame excuse:
"Previously, when the GPGPU universe was divided into CUDA (Nvidia) and Stream (AMD), we faced the problem that most applications supported only one of the two environments, and could thus not be directly compared to each other."
http://www.tomshardware.com/reviews/graphics-card-benchmarks-charts-review,3154-8.html
It's AMD's tough luck that most apps skip them; they haven't invested for seven years the way NV has, right? If that's the excuse you're going to give, then take the best app from each side and compare them rendering the same scene. Then again, everyone uses Adobe CS, so it's hard to imagine not using it as a benchmark while instead showing a bunch of stuff nobody uses in real life. Odd... If Stream has fallen so far behind CUDA that it's useless, that isn't NV's problem, nor should you act as though the situation doesn't exist. 500 universities in 26 countries teach CUDA; how many teach Stream or OpenCL? You can enable OpenCL in Adobe, but anyone with NV hardware will immediately turn on CUDA instead. The number of people using Adobe CS versus everything you've shown combined is ridiculous; Adobe sells $4 billion worth of it yearly. Surely you can do better than this platform-agnostic crap.