Nvidia's CUDA: The End of the CPU?

Status
Not open for further replies.

mr roboto

Distinguished
Oct 22, 2007
151
0
18,680
Very interesting. I'm anxiously awaiting the RapiHD video encoder. Everyone knows how long it takes to encode a standard-definition video, let alone an HD or multiple HD videos. If a 10x speedup can materialize from the CUDA API, let's just say it's more than welcome.

I understand from the launch of the GTX 280 and GTX 260 that Nvidia has a broader outlook for the use of these GPUs. However, I don't buy it fully, especially when they cost so much to manufacture and use so much power. The GTX 280 has been reported as using upwards of 300W. That doesn't translate to that much money in electricity bills over a span of a year, but nevertheless it's still moving backwards. Also, don't expect the GTX series to come down in price anytime soon. The 8800 GTX and its 384-bit bus are a prime example of how much these devices cost to make. Unless CUDA becomes standardized, it's just another niche product fighting against other niche products from ATI and Intel.

On the other hand, I was reading on AnandTech that Nvidia is sticking four of these cards (each with 4GB of RAM) in a 1U form factor, using CUDA to create ultra-cheap supercomputers. For the scientific community this may be just what they're looking for. Maybe I was misled into believing that these cards were for gaming and anything else would be an added benefit. With the price and power consumption, this makes much more sense now.
 
G

Guest

Guest
Well, if the technology were used just to play games, then yes, it would be crap tech; spending billions just so we can play Quake doesn't make much sense ;)
 

dariushro

Distinguished
Nov 22, 2007
53
0
18,630
The best thing that could happen is for M$ to release an API similar to DirectX for developers. That way both ATI and NVidia can support the API.
 

dmuir

Distinguished
Dec 19, 2006
31
0
18,530
And no mention of OpenCL? I guess there aren't a lot of details about it yet, but I find it surprising that you look to M$ for a unified API (they have no plans to do so that we know of), when Apple has already announced that they'll be releasing one next year. (Unless I've totally misunderstood things...)
 

neodude007

Distinguished
May 25, 2008
125
0
18,680
I'm not gonna bother reading this article; I just thought the title was funny, seeing as how Nvidia claims CUDA in NO way replaces the CPU and that is simply not their goal.
 
G

Guest

Guest
I think the best way to go for MS is to announce support for OpenCL, like Apple. That way it will make things a lot easier for developers, and it makes MS look good to support the open standard.
 
[citation][nom]Mr Roboto[/nom]Very interesting. I'm anxiously awaiting the RapiHD video encoder. Everyone knows how long it takes to encode a standard-definition video, let alone an HD or multiple HD videos. If a 10x speedup can materialize from the CUDA API, let's just say it's more than welcome. I understand from the launch of the GTX 280 and GTX 260 that Nvidia has a broader outlook for the use of these GPUs. However, I don't buy it fully, especially when they cost so much to manufacture and use so much power. The GTX 280 has been reported as using upwards of 300W. That doesn't translate to that much money in electricity bills over a span of a year, but nevertheless it's still moving backwards. Also, don't expect the GTX series to come down in price anytime soon. The 8800 GTX and its 384-bit bus are a prime example of how much these devices cost to make. Unless CUDA becomes standardized, it's just another niche product fighting against other niche products from ATI and Intel. On the other hand, I was reading on AnandTech that Nvidia is sticking four of these cards (each with 4GB of RAM) in a 1U form factor, using CUDA to create ultra-cheap supercomputers. For the scientific community this may be just what they're looking for. Maybe I was misled into believing that these cards were for gaming and anything else would be an added benefit. With the price and power consumption, this makes much more sense now.[/citation]
Agreed. Also, I predict that in a few years we will have a Linux distro that runs mostly on a GPU.
 

LogicalError

Distinguished
Jun 18, 2008
1
0
18,510
FYI: Apple has been working with the Khronos Group (the people behind OpenGL at the moment) to make an API called OpenCL, which should do all the things that CUDA et al. can do. Since it's not just Apple that's behind it, but also the Khronos Group, it should be cross-platform. So who knows... maybe this is going to be the unifying API for this. Well, until Microsoft comes up with 'DirectC', of course.
 
G

Guest

Guest
I know that this is not too close to the article, but I hope that it is still not too OFF topic.
I just have a question, and someone might answer it (TH is full of smart guys). My problem is that there are too many misconceptions floating around on the net regarding CUDA and the whole GPGPU business in general.
I have seen somewhere that these GPUs are able to do double-precision floating-point calculations, but personally I find this unlikely.
Others say that you can take your parallel code written in C or Fortran 90 directly and adapt it to CUDA, because the standard stuff can run serially on the CPU and the most computationally expensive part in parallel on the GPU. On top of that, you can 'address' or communicate with your GPU directly from Fortran code with a sort of system call (I think this is BS).
Quite frankly, I have not found a site I can really rely on that shows an example (source code and explanation) of how something like this could be done.
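For what it's worth, the usual CUDA pattern looks roughly like the sketch below: ordinary serial C on the host, with only the hot loop moved into a kernel that the host launches and copies data to and from. This is a minimal, hypothetical example; the kernel name, the sizes and the vector-scaling work are made up purely for illustration.

// Minimal sketch: serial C on the host, the expensive loop as a CUDA kernel.
// Hypothetical example (scaling a vector); names and sizes are illustrative only.
#include <stdio.h>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one element per thread
    if (i < n)
        data[i] *= factor;                          // the "expensive" work
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *h = (float *)malloc(bytes);              // ordinary host code
    for (int i = 0; i < n; i++) h[i] = (float)i;

    float *d;
    cudaMalloc((void **)&d, bytes);                 // allocate on the GPU
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);    // launch the parallel part

    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);
    printf("h[100] = %f\n", h[100]);                // back on the CPU as usual

    cudaFree(d);
    free(h);
    return 0;
}

On the two specific points: the new GT200 parts (GTX 260/280) do support double precision in hardware, though at a fraction of the single-precision rate, and there is no magic Fortran 'system call'; the usual route is a small C wrapper like the one above that the Fortran code links against and calls.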
 

bf2gameplaya

Distinguished
Mar 19, 2008
262
0
18,780
I wish Intel and NVidia would get over themselves and co-operate and finally give total system performance that big ass boost it needs.

Intel is wasting time ray-tracing on a CPU and NVidia is wasting frames by folding proteins on their GPU.

"You're doing it wrong!"
 
G

Guest

Guest
No, the best would be if we got an open API, like OpenGL. I seriously do not want another DirectX locking me to MS >_
 

thr3ddy

Distinguished
Feb 8, 2007
26
0
18,530
@dariushro: That would quite possibly be the worst thing that could happen to GPGPU. Microsoft equals Windows, and GPGPU and supercomputing are not Windows' strongest points (understatement).

It would be better for a neutral party composed of GPGPU experts from different IHVs to initiate something like what you propose, more like what the OpenGL ARB creates, a specification.

IHVs and other companies could then implement this standard on their own hardware, thus decentralizing development from the ISV. If you leave development of this type of technology up to Microsoft (or any other single developer) you'll end up with vendor lock-in, which is a Bad Thing, for all of us.

Anyway, CUDA is great but not cross-platform compatible (Intel, AMD/ATI, etc.) which makes it impossible to implement in commercial software, unless a CPU-bound alternative is provided, which would defeat the purpose of the architecture.

On a similar note: think of the choice between the PhysX SDK and Havok Physics. Do you want partial GPU accelerated physics supported by one brand (PhysX, NVIDIA G80+) or do you want to stay CPU-bound but have the same feature set regardless of the hardware (Havok)?
 

magnesious

Distinguished
Jun 7, 2008
3
0
18,510
If you had the patience to read this entire thing, I'd recommend you look at the CUDA programming guide (link). It's the same information, but less terse.

Tom's also forgot to point out that development is possible via emulation (the emuDebug build setting, I think, with the .vcproj they give you), so anyone can get their hands dirty with the API. You don't get the satisfaction of seeing cool speedups, but it's just as educational, and easier to debug. No screen flickers :)
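For anyone who wants to try that route from the command line rather than the .vcproj: as I understand the CUDA 1.x/2.x toolkits, nvcc's device-emulation switch builds the kernels as ordinary host threads, so you can printf from inside them and step through with a normal debugger. A tiny hypothetical example:

// Toy kernel purely to show the emulation workflow (hypothetical example).
// In a device-emulation build the kernel runs as host threads, so the printf
// below works and a regular debugger can step into it; on the real GPUs of
// this generation, printf inside a kernel isn't available.
#include <stdio.h>
#include <cuda_runtime.h>

__global__ void hello(int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        printf("hello from thread %d\n", i);
}

int main(void)
{
    hello<<<1, 8>>>(8);          // launch 8 (emulated) threads
    cudaThreadSynchronize();     // wait for the kernel to finish
    return 0;
}

// Emulation build, no CUDA-capable GPU required (CUDA 1.x/2.x toolkits):
//   nvcc -deviceemu hello.cu -o hello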
 

MxM

Distinguished
May 23, 2005
464
0
18,790
I wonder if a PC could be built today without a processor at all. It would probably require a different BIOS for the mobo and some kind of x86 emulator for the NVIDIA card, but is it possible in principle, without any modifications to the hardware?
 

godmodder

Distinguished
Dec 22, 2005
35
0
18,530
The end of the CPU is nowhere near. To think the GPU could be used for every task is just absurd. The GPU is only good for tasks that can be massively parallelized. Unfortunately, not that many tasks, apart from graphics processing, can be divided into smaller, completely independent parts.
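A concrete way to see the distinction is two small C loops (illustrative code only, with hypothetical arrays): the first splits cleanly into independent pieces and maps naturally onto thousands of GPU threads, the second does not.

#include <stdio.h>

#define N 1000000

static float x[N], y[N];   /* hypothetical data, zero-initialized globals */

int main(void)
{
    const float a = 2.0f;

    /* Parallelizes well: every iteration is independent, so each GPU thread
       (or CPU core) can simply take its own i. */
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    /* Parallelizes poorly as written: each iteration needs the result of the
       previous one, so it cannot simply be split into independent chunks. */
    for (int i = 1; i < N; i++)
        y[i] = a * y[i - 1] + x[i];

    printf("%f\n", y[N - 1]);
    return 0;
}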
 

JonathanDeane

Distinguished
Mar 28, 2006
1,469
0
19,310
Sounds interesting. I think the whole branching-out-into-CPU-territory thing is a response to Intel working on an integrated CPU/GPU; maybe Nvidia is a little nervous about that. I think Intel's built-in GPU would probably suck, although it's tempting to think they could clock their GPUs at 2-3 GHz, manufacture on a 45nm node, and toss in some of that built-in DRAM at 204.8 GB/s... hmm, maybe not so bad...
 
G

Guest

Guest
To see why 4 threads can be faster than 2 threads on 2 cores, consider this example:
Let's split our task into an equal number of chunks and run each in a different thread.
One thread performs our task in 2000 ms.
If two chunks (threads) run on two idle cores, they will finish together after 1000 ms.
If two chunks (threads) run on two cores, one idle and one 50% busy, the chunk on the 50% busy core will only be 50% done by the time the other chunk finishes on the idle core. It will then likely be reassigned to the other (now idle again) core to finish its remaining 50%. In total: 1000 + 500 = 1500 ms.
If THREE chunks run on the 2 cores, one of them 50% busy, two chunks will be done on the idle core (666.7 + 666.7 ≈ 1333 ms). The third chunk will run on the other core at 50% speed, also taking 1333 ms. We finish at 1333 ms!
Bottom line: splitting a task into more chunks than you have cores can make sense if there are also OTHER tasks running on your system.
Roman
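The arithmetic above is easy to sanity-check with a toy scheduler in C. This is just a sketch of that model, assuming one fully idle core, one core that is 50% busy with other work, and an OS that hands a freed core the next queued chunk and migrates a leftover chunk to the faster free core once the queue is empty:

/* Toy model: a 2000 ms job split into equal chunks, run on one fully idle
 * core (speed 1.0) and one core that is 50% busy with something else
 * (speed 0.5 for our job).  A freed core takes the next queued chunk; once
 * the queue is empty, a chunk still running on the slower core migrates to
 * the faster free core. */
#include <stdio.h>

#define NCORES 2

static double run(int nchunks)
{
    const double speed[NCORES] = {1.0, 0.5};
    double remaining[16];                 /* remaining work per chunk (ms at full speed) */
    int running[NCORES] = {-1, -1};       /* chunk each core is running, -1 = free */
    int next = 0, done = 0;
    double t = 0.0;

    for (int i = 0; i < nchunks; i++)
        remaining[i] = 2000.0 / nchunks;

    while (done < nchunks) {
        /* free cores pick up the next queued chunk, fastest core first */
        for (int c = 0; c < NCORES; c++)
            if (running[c] < 0 && next < nchunks)
                running[c] = next++;

        /* queue empty: migrate a chunk from the slower core to the faster free core */
        if (running[0] < 0 && running[1] >= 0) {
            running[0] = running[1];
            running[1] = -1;
        }

        /* advance time to the next chunk completion */
        double dt = 1e30;
        for (int c = 0; c < NCORES; c++)
            if (running[c] >= 0) {
                double d = remaining[running[c]] / speed[c];
                if (d < dt) dt = d;
            }
        t += dt;

        /* do dt milliseconds of (scaled) work on every busy core */
        for (int c = 0; c < NCORES; c++)
            if (running[c] >= 0) {
                remaining[running[c]] -= speed[c] * dt;
                if (remaining[running[c]] <= 1e-9) {
                    running[c] = -1;
                    done++;
                }
            }
    }
    return t;
}

int main(void)
{
    for (int n = 1; n <= 6; n++)
        printf("%d chunk(s): finished after %4.0f ms\n", n, run(n));
    return 0;
}

For 1 to 6 chunks this prints roughly 2000, 1500, 1333, 1500, 1400 and 1333 ms, matching the 2- and 3-chunk arithmetic above; the best chunk count clearly depends on what else the machine is doing.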
 
G

Guest

Guest
The beta release only works on the 2 new Nvidia cards. The official release is supposed to work on all G92 cards, so the 8800 GT and newer (a much more power-efficient option).

Programs like these won't be using all the parts of a GPU. I expect they will probably prove to be dependent on something like clock speeds, shader counts/speeds, or pure bandwidth, but doubtfully all of them.

Depending on which is the primary factor, the 8800 GT could prove to be almost as fast as these new cards in this one instance.

Long story short, I think this will end up running much like how Folding@home performance varies among cards.
 