Nvidia's CUDA: The End of the CPU?

Intel is making a processor to make GPUs obsolete, and Nvidia is making a GPU to make CPUs obsolete... This doesn't entirely make sense to me. Even if one could do everything the other could do, wouldn't they be even more powerful together?
 
EXCELLENT article. It clearly explained some differences between the CPU and GPU. I hope to see more of this progress, such as GPU physics acceleration as part of DX11, or in any non-gaming app!! But for now I don't expect the CPU to be completely replaced anytime soon.
Still, this is big news and may completely change how CPU -> GPU offloading is designed after Sandy Bridge/Bulldozer.
 
I'd like to see Google release an API that ATI and Nvidia can use just to piss off M$. lol
In fact, such an API could make it possible for other graphics companies to emerge and compete with ATI and Nvidia. Now that'd be something I'd love to see!
 
I had a hard time understanding anything at all in this article, and I've been a PC enthusiast for a while.
Should I take a college class?
 
Fedy Abi-Chahla, COOL! Excellent! That's what I have been talking about too. CPU vs. GPU wars are coming very soon! Unless Intel makes a big jump, it's game over. Perhaps WARP9, because it's the only choice.
So far so good, Fedy Abi-Chahla, you're the only one who realises that besides me. hehe...
 
[citation][nom]Shadow703793[/nom]Agreed. Also I predict in a few years we will have a Linux distro that will run mostly on a GPU. [/citation]

A Linux distro that runs on a GPU: absolutely phenomenal potential.
 
First, let me say that this is an excellent article.

The author (Fedy Abi-Chahla) states:
The idea of using graphics accelerators for mathematical calculation is not recent. The first traces of it go back to the 1990s.

Actually, this idea goes back even earlier, to the late '70s and early '80s. The Evans & Sutherland PS2 MPS products were calligraphic line-drawing CAD stations that used custom hardware to perform very fast matrix arithmetic (geometry calculations) on line end points. This system was connected to DEC PDP-11 (and later VAX-11) computers via a proprietary Unibus card. One of the features of the graphics device was the ability to read back data (for hit testing, etc.). Programmers, especially at universities, realized quite quickly that they could use the MPS as a fast matrix multiplier.

I personally saw this done by a graduate student named Frank Ford Little at the University of Utah Math department circa 1980.
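
The modern equivalent of that trick is exactly what CUDA exposes directly. As a rough, hypothetical sketch (not from the article; the names matMul and multiplyOnGpu are just illustrative), this is more or less what "using the graphics hardware as a fast matrix multiplier" looks like in CUDA C today:

[code]
#include <cuda_runtime.h>

// Naive matrix multiply on the GPU: each thread computes one element of
// C = A * B for square N x N matrices stored in row-major order.
__global__ void matMul(const float *A, const float *B, float *C, int N)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N) {
        float sum = 0.0f;
        for (int k = 0; k < N; ++k)
            sum += A[row * N + k] * B[k * N + col];
        C[row * N + col] = sum;
    }
}

// Host side: upload the matrices, launch a grid of threads, read the result
// back -- the same upload/compute/read-back pattern the old MPS hack relied on.
void multiplyOnGpu(const float *hA, const float *hB, float *hC, int N)
{
    size_t bytes = (size_t)N * N * sizeof(float);
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
    matMul<<<grid, block>>>(dA, dB, dC, N);

    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dA);
    cudaFree(dB);
    cudaFree(dC);
}
[/code]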

 
CUDA might be a nice optimization of resources for gaming, lowering the CPU bottleneck or freeing up resources for better physics, but Nvidia will not get the chance to eliminate the need for a fast CPU. Intel and AMD won't let them. The CPU will always stay important unless you integrate both the CPU and GPU cores in one chip, like Fusion is meant to be. Both have totally different optimizations, and I don't see a GPU doing x86 or x64 tasks yet. First, Nvidia would need a very expensive license to do such a thing; then the whole process would still be very complicated, it would require them to build a complete platform, and their competition would boycott them very hard.
So it's not likely to happen. They'll just have some improved physics, but not much more.
 
When I read about 700 processes per CPU, I doubt it. Few processors will go beyond 70 processes, and it has been that way forever, it seems.
The increase in speed to complete a task is good, from 500 ns down to 20 ns in some instances.
Signed: PHYSICIAN THOMAS STEWART VON DRASHEK M.D.
 
I have been reading people talking about using OpenCL instead of CUDA. I would not be surprised if CUDA becomes to OpenCL, for Nvidia, what Glide was to OpenGL for 3DFX.

For those that do not know, 3DFX Voodoo cards were able to run both OpenGL and 3DFX's very fast Glide technology. Glide was supported only by 3DFX cards (though Nvidia did buy the technology later on). If you had a Voodoo card (I had a Voodoo 3 3000) and games that supported Glide or D3D, you were better off using Glide, because it gave amazing visuals (for the time) and was faster than the alternatives. Glide was a standard in its own right. I still remember changing from 3DFX to a "better" card only to be disappointed with D3D's performance and capability. Before then I had never really used OpenGL either, on account of Glide working with everything I had.

I would not be surprised if Nvidia can come up with a better way than OpenCL to handle this task. CUDA seems to be progressing nicely, after all. And since Apple (shudder) is using Nvidia hardware for their demo, it would seem that OpenCL runs fine on CUDA-enabled cards.
 
Well, soon the guys who always pretend to be sooo serious ("no, I never play games, I just bought this 8800 for nice DVD decoding") can get a real reason to buy a hefty card.
It's like the Linux lovers I know who sit and rant endlessly about how crap Windows is, and when you see their computer... it's dual-booted and running XP ("just for the moment!")...
 
Great article content on the GPU and Nvidia's CUDA. The "end of the CPU" question raised in the title was a poor choice that distracts from conveying the extraordinary benefits that GPU processing and the CUDA platform can deliver.

Instead of debating CPU-versus-GPU technobabble, it would be more constructive to analyze what new functionality is possible when the best of multi-core CPU technology is combined with and augmented by some of the best available GPU technology.

Tom's Hardware does an outstanding job of analyzing new hardware developments. New hardware tests commonly result in incremental performance improvements of a few percentage points over previous models, and that is often big news.

What makes GPU use and the CUDA platform so extraordinary is that early adopters are building applications that have achieved dramatic (50% to several hundred percent) increases in the performance of their applications.

Some evidence in support of those performance claims is available at
http://www.nvidia.com/object/tesla_testimonials.html

Instead of exotic supercomputing hardware or large clusters that are cost-prohibitive for widespread access, a suggested testbed setup is to take an off-the-shelf workstation like the Lenovo ThinkStation D10, add the quad-core Xeon processors, and add the Nvidia Tesla GPU board. With that minimal testbed setup, it becomes possible to more accurately benchmark and assess the performance benefits possible from the combined strengths of Intel multi-core and Nvidia GPU/CUDA technology.
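
As a rough, hypothetical sketch of how such benchmarking is typically done (the kernel name scaleKernel and the workload are just illustrative, not a real application benchmark), the CUDA runtime's event timers let you measure kernel time directly and compare it against the same work done on the CPU:

[code]
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative benchmark harness: time a trivial GPU kernel with CUDA events,
// then do the same element-wise work on the CPU for comparison.
__global__ void scaleKernel(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 24;                  // roughly 16 million floats
    size_t bytes = n * sizeof(float);
    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev;
    cudaMalloc(&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    scaleKernel<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("GPU kernel time: %.3f ms\n", ms);

    // CPU reference doing the same work, timed with whatever host timer you prefer.
    for (int i = 0; i < n; ++i) host[i] *= 2.0f;

    cudaFree(dev);
    free(host);
    return 0;
}
[/code]

Real application speedups depend heavily on how much of the workload maps onto the GPU and on transfer overhead, which is exactly what a testbed like the one above would help quantify.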

Beta CUDA plugins for mainstream applications like Photoshop are beginning to appear and are available for download. Other engineering apps like Matlab and Wolfram's Mathematica are also adapting to take advantage of some of the new multi-core and GPU possibilities.

What we would like to see in a future Tom's Hardware article on GPU technology are some benchmarks and assessments of how much of a performance improvement can be made on mainstream applications.

Next-generation Adobe technology (Photoshop CS4 or later) is often rumored to more fully utilize GPU technology. There are similar claims of benefits for AutoCAD 2009 and other engineering apps.

I think we will see some more dramatic headlines as the combination of multi-core CPU and GPU technology shatters some of the existing performance benchmarks and enables new possibilities on a desktop platform.