
Nvidia Announces CUDA 4.1 with LLVM Compiler

I thought the APU/GPU world was going towards OpenCL? Sure, nVidia can use its CUDA experience as a competitive advantage, but I don't see any long-term gains from investing heavily in CUDA specifically. What am I missing?
 
There are many functions and levels where OpenCL may not be the proper method to get the best results. OpenCL still lacks a vast array of functionality for GPU-based 3D rendering, for example. Until it catches up, which may never happen, CUDA is the only viable, cost-effective solution. Not that there's a lot of GPU rendering going on in the industry yet, but it's certainly expanding as Nvidia's solutions take hold in various pipelines.

Thus, AMD is sometimes not even an option. Nvidia's own iray renderer from mental images (yes, I also hate the name) is CUDA-based, and there aren't many alternatives in the industry. And in some cases, there really don't need to be. AMD makes great cards too, but it would be impossible to honestly recommend one for 3D content creation (Maya, 3ds Max, every CAD application).

Don't get me wrong, I'm not a brand fanatic. I do tend towards AMD CPUs, but Nvidia GPUs are the cornerstone of any nutritious CG artist.
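
For anyone curious what the programming model actually looks like, here's a minimal sketch (a made-up example; the buffer size and scale factor are arbitrary, just for illustration) of a CUDA kernel that scales a buffer on the GPU:

[code]
#include <cuda_runtime.h>
#include <cstdio>

// Trivial kernel: multiply every element of a device buffer by a constant.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;                    // 1M floats, arbitrary size
    float *d_data = NULL;
    cudaMalloc(&d_data, n * sizeof(float));   // allocate on the GPU
    cudaMemset(d_data, 0, n * sizeof(float));

    // One thread per element, 256 threads per block.
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();

    printf("kernel finished: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(d_data);
    return 0;
}
[/code]

The OpenCL version of the same thing needs a fair bit more host-side setup (platform, context, command queue, runtime program compilation) before the kernel even runs, which is part of why so many tools just went with CUDA. And as far as I understand it, the new LLVM-based compiler in 4.1 takes this same source unchanged; the claimed speedups come from the backend, not from anything the programmer has to do.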
 
Hi, I'm a bit lost on this topic. Aren't these cards mainly meant for playing video games rather than these GPGPU applications? I ask because I bought the HD 6850 instead of the GTX 460 768MB for exactly that reason (the two cards cost the same where I live); the AMD card doesn't have this feature, but I've never needed it...
 
[citation][nom]julianbautista87[/nom]Hi, I'm a bit lost on this topic. Aren't these cards mainly meant for playing video games rather than these GPGPU applications? I ask because I bought the HD 6850 instead of the GTX 460 768MB for exactly that reason (the two cards cost the same where I live); the AMD card doesn't have this feature, but I've never needed it...[/citation]

You're under the misguided assumption that consumer video cards are just for gamers. If that were the case, there would be no need for the wide variety of discrete cards out there (one could even argue there'd be no need for a gaming PC at all, since gaming could all be done on a console).
 
[citation][nom]blazorthon[/nom]10% performance increase? Nothing to complain about there.[/citation]
and yet nVidia is complaining about the 30% gains of AMD's GCN architecture
 
[citation][nom]lordstormdragon[/nom]...CUDA is the only viable cost-effective solution. [/citation]Yes, free is certainly cost effective. 😀

In my opinion, AMD/ATI shot themselves in the foot when they initially put a price on their GPGPU development package. NVidia was the smarter one in making CUDA free when these GPGPU packages were first introduced several years back, and I suspect that's why so many developers adopted CUDA.

NVidia realized that they would sell more GPUs if CUDA was free, and selling hardware is what the "game" is all about. M$ has been doing this sort of thing for years by giving away educational and other versions of things like Visual Studio and Word; it has been very successful for them, and it seems like one of the few smart things M$ has done - from a business standpoint.

The last I heard, though, AMD is now giving away their equivalent as well; I believe their package morphed into OpenCL, which is free.
 
[citation][nom]jtt283[/nom]Now they need to make the drivers play nice so CUDA will work if there is also an AMD card in the system.[/citation]

The drivers already allow other graphics devices to be present in the system and everything works fine. You're confusing CUDA with PhysX.
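
Easy to verify, too: the CUDA runtime only enumerates the CUDA-capable devices it can see, and whatever other card is in the box simply never shows up in that list. A quick sketch using the standard runtime API (plain device enumeration, nothing exotic):

[code]
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("No CUDA-capable device found: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // List every CUDA device the driver exposes; a non-NVIDIA card
    // installed alongside simply doesn't appear here.
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s (compute capability %d.%d)\n",
               i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
[/code]

PhysX is a different story: that's where Nvidia's driver historically disabled GPU acceleration when it detected a non-Nvidia display adapter.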
 
This is proof that nVIDIA can effectively do something beyond GPUs and boards, unlike AMD. This is proof that nVIDIA is really ahead of its competitor.

What makes nVIDIA the greatest graphics company is certainly the overall quality of its architecture (Fermi 2.0 is not bad at all, and Kepler will be awesome), but also everything they build AROUND the GPU, like CUDA, PhysX, and all the things that make their cards true GPGPUs. AMD doesn't do this, and it leads to less efficient boards that are almost useless for computing. OpenCL can't rival CUDA, since CUDA is so optimized and so efficient. Open source can't compete with the level of engineering present at nVIDIA.

Today, a GPU isn't just a simple GPU like it was yesterday. Today, when you have an nVIDIA board, you have nothing less than a GPGPU and the ability to help science. Just look at all the BOINC projects that run under the CUDA environment compared to AMD with OpenCL!

AMD should really build something around their boards other than GPU drivers to get the full power out of them. Unfortunately, time goes by and they don't realize they have to make that move... or they simply don't have the money for it... which would not surprise me at all.
 