Nvidia Takes Cautious First Step Into Graphics-Accelerated Mainstream Programming

Status: Not open for further replies.

christian summer · Distinguished · May 8, 2008
If you compare the power efficiency of GPGPU processing to that of a normal Intel-based chip, you will also see plenty of disadvantages. Sure, we all have a miniature supercomputer inside each graphics card; sure, we have high system bandwidth over the PCIe 2.0 bus and extremely fast, high-capacity video RAM. But GPUs eat a hell of a lot more power under load than a general-purpose processor.

While it would be great to take advantage of the GPU's horsepower, especially for FPU-intensive processing, I don't see the GPU completely replacing the processor anytime soon. I'm an artist who does a lot of music and video work, and it would be great to offload a lot of that processing, but when I'm running Word or surfing the internet I don't need my computer eating as many watts as it does playing COD4.

-c
 

mr roboto · Distinguished · Oct 22, 2007
Nvidia needs to get their asses in gear and bring Folding@Home to their GPUs. ATI has had their GPUs ready for a while, yet Nvidia refuses to simply optimize their drivers for this. I guess they want people to buy supercomputers to accomplish this task. I love Nvidia's cards, but this really pisses me off.
 

Horhe · Distinguished · Jan 26, 2008
There is a lot of potential in multi-core processors that isn't being used, and they want to use the GPU, which is the most power-hungry component in a system. That's retarded. I hope Larrabee will be a success so we can get rid of graphics cards. (I'm not an Intel fanboy; I just think their approach is the most efficient.)
 

fransizzle · Distinguished · May 16, 2008
Although I don't see the end of the CPU anytime in the near future, there are certain tasks that a GPU could, at least in theory, do much, much faster, and I personally can't wait for it to happen. Anything that can make my computer substantially faster with the hardware I already have is awesome in my book. Nvidia needs to hurry up and get this out and working already.
 

dogman-x · Distinguished · Nov 23, 2006
I think Nvidia's approach is perfect. Certain things work better on CPUs, and certain things work better on GPUs. In particular, the hardware structures in GPUs and other accelerators vastly outperform multi-core CPUs for many math-intensive tasks, particularly imaging, video, financial, and geological work, while CPUs are still quite necessary for decision-based logic and control. So you need both types of processors to be effective. CUDA is a perfect development tool to enable this, and LAME is a perfect mainstream application that can benefit from acceleration.

We're past the days when we could just raise the clock speed. New programming models are necessary. Homogeneous multi-core designs (e.g. Larrabee) will fall short; heterogeneous multi-core (many different types of cores) will dominate in the future. Although the bandwidth of the PCIe 2.0 bus is very capable, the latency of that bus will be an issue. The best designs will have all the different types of cores on the same chip. So while Nvidia has a great development tool with CUDA, hardware designs along the lines of AMD's Fusion may be the way of the future.
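
As a rough sketch of what the CUDA model looks like in practice (the names and sizes here are just for illustration, not from any real application), a SAXPY kernel runs one thread per array element, and the host still has to shuttle the data over PCIe before and after the launch, which is exactly where the bus latency bites:

[code]
// Illustrative CUDA sketch: y = a*x + y, one thread per element.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                 // 1M floats
    const size_t bytes = n * sizeof(float);

    // Host-side data.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device buffers, plus the PCIe copies the CPU/GPU split forces on you.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);

    // Copy the result back across the bus again.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expect 4.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
[/code]

The math itself is trivial; the point is that every byte crosses the bus twice, so anything latency-sensitive really wants the cores on the same die, which is why Fusion-style designs look attractive.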
 
Since the beginning we've had CPUs. Almost all programming has been aimed at CPUs for as long as we've had transistors; that's our history. Given the opportunity, I believe we will see huge benefits from GPU processing. You read about a lot of these supercomputers with thousands of CPUs being replaced by handfuls of GPUs and still tripling their output. Running something like this costs less up front, costs less to operate, and has higher potential than any CPU-based system. I think there's going to be more and more of a trend in this direction for supercomputing; the CPU's function is slowly being replaced there. Soon we will see it more and more in servers, and someday on the desktop. The GPU isn't dead. Intel says it is, while they invest billions in them. What a joke. They know what's going on here, but I'm not buying "the GPU is dead" while they (Intel) pour all that money into them.
 
Guest
I like it. And my ass hurts. And it's hot outside. And ... why are you reading it, dork?
 

cryogenic · Distinguished · Jul 10, 2006
[citation][nom]lujooo[/nom]Wintel will never allow this happen.. [/citation]

They can't do a damn thing about it if Nvidia's programming model gets adopted and becomes a de facto standard before Intel has a chance to unveil its own model with Larrabee. It's just like what happened with AMD64: by the time Intel wanted to implement its own 64-bit instruction set, the AMD one was already supported by Windows, Linux, Unix, Solaris, and many more, and none of the software companies wanted to support yet another standard that was different but offered basically the same thing.

Nvidia is wise to this; it knows it must push GPU computing into the mainstream before Intel has a chance to do it with Larrabee. Unfortunately, to succeed it will need support from software giants like Microsoft, Sun, Oracle, the Linux crowd, and the like. I don't think just providing a CUDA development environment will be enough; they might need OS support at the core (something Intel will likely manage to obtain shortly after they release Larrabee).


 

wild9 · Distinguished · May 20, 2007
I think that from a design viewpoint such hardware would really show the advantages of AMD's architecture (HyperTransport especially). I just can't help feeling this technology is being held back for the same reason it would be if someone managed to get cars to run on water: all current technology would be dead in the water, with severe losses.

[citation][nom]Horhe[/nom]There is a lot of potential in multi-core processors that isn't being used, and they want to use the GPU, which is the most power-hungry component in a system. That's retarded.[/citation]

Some tasks require more number-crunching capability than those CPUs can muster, and it would take tens or even hundreds of them to begin to match the capability of a few GPUs. Imagine the power consumption, not to mention the footprint.

I don't think everything can, or should, be ported; it's too complex and in some cases completely needless. I think you'll still have powerful CPUs, just that they'll act as bridges/interfaces rather than as the sole number-crunching device. The closest I ever saw to this 'transputer'-type hardware was the Amiga range of computers, which had multitasking built into the hardware, and those systems were a joy to use. I'd like to see a similar thing happen on the PC, and I would not mind buying a GPGPU chip (or several) to speed up my applications, but I don't think we'll see it just yet, not in mainstream use anyway. Too many conflicting interests here, most of which are of a commercial nature.
 