Exclusive Interview: Nvidia's Ian Buck Talks GPGPU

Guest
How about some GPU acceleration for Linux! I'd love Blu-ray and HD content to be GPU-accelerated by VLC or Totem. Nvidia?
 

matt87_50

Distinguished
Mar 23, 2009
I ported my simple sphere-and-plane raytracer to the GPU (DX9) using RenderMonkey. It was so simple and easy, only took a few hours, and it was nearly EXACTLY the same code (using HLSL, which is basically C). And it was so beautiful: what took seconds per frame on the CPU was now running at over 30 fps at 1680x1050.

A monumental speed increase with hardly any effort (even without a GPGPU language).

It's going to be nothing short of a revolution when GPGPU goes mainstream (hopefully soon, with DX11).
Computer down on power? Don't replace the CPU, just drop another GPU into one of the many spare PCIe x16 slots and away you go, with no fussing around with SLI or CrossFire and the compatibility issues they bring. It will just be seen as another pile of cores that can be used!

Even for tasks that don't map easily onto the GPU architecture, most will probably still run faster than they would on the CPU, because the GPU's brute-force advantage is so immense. And, as he kind of said, most of the tasks that aren't suited to GPGPU don't need speeding up anyway.
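
For anyone curious what that kind of port looks like in CUDA terms rather than HLSL, here's a toy sketch of the same idea: one thread per pixel, an orthographic camera, and hit-or-miss shading only. It's a rough illustration, not my actual shader:

[code]
#include <cuda_runtime.h>

// One thread per pixel: each thread fires a ray and writes hit/miss shading.
__global__ void traceSphere(float *out, int width, int height,
                            float3 center, float radius)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Toy orthographic camera: ray origin on the image plane, direction +Z.
    float3 o = make_float3((float)x / width - 0.5f,
                           (float)y / height - 0.5f, -1.0f);
    float3 d = make_float3(0.0f, 0.0f, 1.0f);

    // Ray/sphere test: solve |o + t*d - c|^2 = r^2 and check the discriminant.
    float3 oc = make_float3(o.x - center.x, o.y - center.y, o.z - center.z);
    float b = oc.x * d.x + oc.y * d.y + oc.z * d.z;
    float c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - radius * radius;
    out[y * width + x] = (b * b - c > 0.0f) ? 1.0f : 0.0f;
}

int main()
{
    const int w = 1680, h = 1050;
    float *d_img;
    cudaMalloc(&d_img, w * h * sizeof(float));

    dim3 block(16, 16);
    dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    traceSphere<<<grid, block>>>(d_img, w, h, make_float3(0.0f, 0.0f, 2.0f), 0.5f);
    cudaDeviceSynchronize();

    cudaFree(d_img);
    return 0;
}
[/code]

Every pixel is independent, which is exactly why the GPU chews through it.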
 
Guest
NVIDIA, saying that "the spreadsheet is already fast enough" may be misleading. Business users have the money. Spreadsheets are already installed (a huge existing user base). Many financial spreadsheets are very complicated: 24 layers, 4,000 lines, with built-in Monte Carlo simulations.

Making all these users instantly benefit from faster computing may be the road to success for NVIDIA.
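
To make the point concrete, the inner loop of such a model maps to CUDA almost directly. Here is a toy sketch (the "payoff" below is a made-up placeholder, not any real financial model):

[code]
#include <cuda_runtime.h>
#include <curand_kernel.h>
#include <cstdio>
#include <vector>

// Each thread runs its own batch of trials with its own random stream.
__global__ void monteCarlo(float *partialSums, int trialsPerThread,
                           unsigned long long seed)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    curandState state;
    curand_init(seed, tid, 0, &state);

    float sum = 0.0f;
    for (int i = 0; i < trialsPerThread; ++i) {
        float x = curand_normal(&state);   // one random draw per trial
        sum += fmaxf(x, 0.0f);             // placeholder "payoff"; a real model goes here
    }
    partialSums[tid] = sum;
}

int main()
{
    const int blocks = 64, threads = 256, trials = 10000;
    const int total = blocks * threads;

    float *d_sums;
    cudaMalloc(&d_sums, total * sizeof(float));
    monteCarlo<<<blocks, threads>>>(d_sums, trials, 1234ULL);

    // Reduce the per-thread sums on the CPU for simplicity.
    std::vector<float> h_sums(total);
    cudaMemcpy(&h_sums[0], d_sums, total * sizeof(float), cudaMemcpyDeviceToHost);
    double mean = 0.0;
    for (int i = 0; i < total; ++i) mean += h_sums[i];
    mean /= (double)total * trials;

    printf("mean payoff: %f\n", mean);
    cudaFree(d_sums);
    return 0;
}
[/code]

Thousands of scenarios run side by side, which is exactly the shape of those spreadsheet simulations.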

Dr. Drey
Bloomington, IN
 

techpops

Distinguished
Jul 3, 2009
While I can't get enough of GPGPU articles, it really saddens me that Nvidia is completely ignoring Linux, and not because I'm a Linux user. Ignoring Linux stops the GPU from becoming the main rendering engine in 3D software that is also available under Linux. So in my case, where I use Cinema 4D under Windows, I'll never see the massive speedups possible, because Maxon would never develop for a Windows- and Mac-only platform.

It's worth pointing out here that I saw video of CUDA-accelerated global illumination from a single Nvidia graphics card going up against an 8-core CPU beast. Beautiful globally illuminated images were taking 2-3 minutes to render, just for a single image, on the 8-core PC. The CUDA version, rendering to the same quality, was running at up to 10 frames per second! That speedup is astonishing and really makes an upgrade to a massive 8-core PC system seem pathetic in the face of that kind of performance.

One can only imagine what would be possible with multiple graphics cards.

I also think the killer app for the GPU is ultimately not going to be graphics at all. In the early days it will be, but further down the line I think augmented reality will take over as the main use for the GPU. Right now, using a smartphone for augmented reality applications is pretty shoddy: everything depends on GPS, and that's totally unreliable and will remain so. What's needed for silky-smooth AR apps is a lot of processing power to recognize shapes and interpret all the visual data coming through the camera, and to combine that with the GPS. So if you're standing in front of a building, an arrow can be drawn on the floor leading to the building's entrance, because the GPS has located the building and the GPU has worked out where the windows and doors are and overlaid graphics that are motion-locked to the video.

I think AR is going to change everything in portable computing, but only when there's enough compute power in a device to make it a smooth experience, rather than the jerky, unreliable, experimental toy it is on today's smartphones.
 

pinkzeppelin97

Distinguished
Jul 15, 2009
[citation][nom]zipzoomflyhigh[/nom]If my forehead was that big due to a retreating hairline, I would shave my head.[/citation]

amen to that
 

jibbo

Distinguished
Sep 4, 2009
[citation][nom]shuffman37[/nom]How about some GPU acceleration for Linux! I'd love Blu-ray and HD content to be GPU-accelerated by VLC or Totem. Nvidia?[/citation]

There is GPU acceleration for Linux. I believe NVIDIA has provided a CUDA driver, compiler, toolkit, etc. for Linux since day one.
http://www.nvidia.com/object/cuda_get.html
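
With the Linux toolkit installed, a minimal CUDA program builds with nothing more than nvcc. A throwaway example, just for illustration (not one of NVIDIA's samples):

[code]
#include <cuda_runtime.h>
#include <cstdio>

// Minimal CUDA vector add. Build on Linux with: nvcc -o vecadd vecadd.cu
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float h_a[n], h_b[n], h_c[n];
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", h_c[0]);   // expect 3.0
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}
[/code]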
 

jibbo

Distinguished
Sep 4, 2009
2
0
18,510
[citation][nom]Matt87_50[/nom]Surely there is nothing stopping OpenCL from coming to Linux?[/citation]

NVIDIA released Windows and Linux OpenCL drivers in June.
 

techpops

Distinguished
Jul 3, 2009
@jibbo It's my understanding that you need DirectX 11 to really make use of CUDA, and that if you wanted a cross-platform app to work on Windows and Linux you'd have major hurdles adapting CUDA to work with both. Enough of a hassle that at least one huge 3D company has turned its nose up at CUDA and is just waiting, for however many years it takes, until OpenCL is something worth looking at.
 
Guest
[citation]linux is almost as gay as its users, stfu noobs[/citation]
How does using something besides Windows dictate my sexual preference? If you wouldn't mind, I'd like to know your logic behind that statement.
 

sumitg

Distinguished
Sep 5, 2009
There is a GPU-accelerated (CUDA) plug-in for Excel from SciComp:
http://www.scicomp.com/parallel_computing/GPU_OpenMP/
 
Guest
I wish he would have confirmed that "570 times faster" comment from the big honcho. 570x faster in 6 years sounds like a load of marketing garbage to me. And then he said CPUs will only be 3 times faster in 6 years.

I agree parallel computing looks like the wave of the future. But claiming that in 6 years you will be producing a processor that is nearly 600 times faster than the competition is ridiculous. The fact that the head of this company said that makes anything coming out of these guys' mouths sound like marketing instead of solid science.

The fact that you had this interview and didn't feel the need to put that question to him suggests either that this interview is old or that you just didn't feel like asking a very pertinent question.

Guess that's why I don't read this site much anymore. Amateur hour.
 

techpops

Distinguished
Jul 3, 2009
@Davinchy 6 years is a long time in tech. At the nub of the CPU vs. GPU debate, the CPU was designed as a single-core chip that's now trying to become multi-core. The GPU has always been this way, so it has a huge head start given that the future is massively parallel. If we were magically blessed with a 256-core CPU today, hardly anything could make use of it. Not so with the GPU: making faster GPUs really just boils down to adding more cores and knowing that the code will naturally take advantage of them.

I'm still not sure anyone can predict what will happen in tech in 6 years' time, but I'd place a bet that the GPU will still be surprising us all with just how powerful it continues to get in a small, cheap package.
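
To illustrate the "naturally take advantage" bit, here's a toy grid-stride kernel (my own sketch, nothing official): the same code fills however many multiprocessors the card reports, so a bigger GPU simply finishes sooner, with no rewrite.

[code]
#include <cuda_runtime.h>

// A grid-stride loop: each thread strides across the array, so the same
// kernel saturates whatever number of cores the hardware provides.
__global__ void scaleArray(float *data, int n, float factor)
{
    int stride = blockDim.x * gridDim.x;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    // Size the grid from the device itself; a bigger GPU simply gets more blocks.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    int blocks = prop.multiProcessorCount * 4;   // rough heuristic, not a tuned value
    scaleArray<<<blocks, 256>>>(d_data, n, 2.0f);
    cudaDeviceSynchronize();

    cudaFree(d_data);
    return 0;
}
[/code]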
 
Guest
I remember the thrilling Brook/TenDRA pos. How do these bullshitters stand themselves?
 