Tim Sweeney: GPGPU Too Costly to Develop


i8cookiemonster

Distinguished
Aug 7, 2009
22
0
18,510
Wow, the level of ignorance I'm seeing here is really astonishing. I'm glad to see someone posted a Wikipedia link to Tim Sweeney... I hope a few people actually read it.

My belief is that he is correct in saying that we'll soon see an end to the separation of GPUs and CPUs. It's already been happening, and it's the course that was set in motion ever since DirectX 8 and OpenGL 2.0 introduced programmable pixel and vertex shaders. Over the last several generations the GPU has become more and more robust, coming closer and closer to being able to run 'general purpose' code. Larrabee is essentially the convergence of the GPU and CPU (highly parallel like today's GPUs, yet compatible with everyday x86 instructions), and it IS the natural progression of things.

The benefits of this are numerous. It frees the programmer from the restrictions of the hardware: instead of having to know which functions a piece of hardware can execute, he only needs to know how FAST it can execute them. It's truly a game changer. As an example of what this means, take a look at Crysis. It's a great-looking game, but it's an engine created for, and restricted by, the capabilities of the hardware (DirectX 10 specifically). Now imagine a hypothetical future game engine written entirely with a custom software renderer. The implication is that the renderer is limited by the imagination of the programmer more than by the capabilities of the hardware. DirectX 11 may be the last version before this happens (it introduces a vast amount of general-purpose capability and lifts many restrictions, bringing it closer to the flexibility of an ordinary CPU). AMD and Intel are both in a great position for this, and Nvidia would like you to think it's not happening, but it is:
http://www.theinquirer.net/inquirer/news/1051248/nvidia-announces-x86-chip
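To make the "custom software renderer" idea above concrete, here is a minimal sketch (my own illustration, not code from Sweeney or Epic): every pixel of a frame is produced by an ordinary C++ function writing into a framebuffer in system memory, with no graphics API in the loop. The shadePixel function and the PPM output are hypothetical choices for the example.

```cpp
// Minimal software-rendering sketch: every pixel comes from plain C++ writing
// into a framebuffer in system memory -- no graphics API, no fixed pipeline.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Color { std::uint8_t r, g, b; };

// Hypothetical per-pixel "shader": any C++ you like, not just what a given
// DirectX/OpenGL shader model happens to expose.
Color shadePixel(int x, int y, int width, int height) {
    float u = static_cast<float>(x) / width;
    float v = static_cast<float>(y) / height;
    return Color{ static_cast<std::uint8_t>(u * 255),
                  static_cast<std::uint8_t>(v * 255),
                  static_cast<std::uint8_t>((1.0f - u * v) * 255) };
}

int main() {
    const int width = 256, height = 256;
    std::vector<Color> framebuffer(width * height);

    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            framebuffer[y * width + x] = shadePixel(x, y, width, height);

    // Dump the frame as a PPM image so the result can be inspected without
    // any windowing or graphics dependency.
    if (std::FILE* f = std::fopen("frame.ppm", "wb")) {
        std::fprintf(f, "P6\n%d %d\n255\n", width, height);
        std::fwrite(framebuffer.data(), sizeof(Color), framebuffer.size(), f);
        std::fclose(f);
    }
    return 0;
}
```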
 

techguy911

Distinguished
Jun 8, 2007
1,075
0
19,460
I use GPU-based video conversion software, and it's 500x faster than the CPU with my current rig setup. The next DirectX version should have built-in support for GPU offloading, which would eliminate the need for programming in CUDA.

Windows 7 makes use of the GPU for things like encoding sound, video and imaging, so GPU usage will be part of Windows 7; developers should be getting ready for that.
 

eodeo

Distinguished
May 29, 2007
717
0
19,010
CPUs trying to replace GPUs and GPUs trying to replace CPUs. Interesting.

So far GPUs have well over 10x the raw parallel throughput under their hoods. It will be interesting to see how CPUs overcome this deficit.

When you look at a Core i7 with 4 physical and 8 logical cores next to an ATI Radeon 4850 with 800 stream processors, it's easy to see why it would be a pain to parallelize all the code across that many cores. But let's not forget that the physical barrier for single-core speed has been reached; that's why you don't see commercial CPUs at 4 GHz or higher, only more cores, ever since the P4 era.

Parallelization is the way to go, be it on the GPU or the CPU.
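As an illustration of the CPU side of that comparison, here is a minimal sketch (my own example, with a made-up workload and invented names like `partial` and `workers`): one loop is chunked across however many hardware threads the machine reports, which is roughly the decomposition an 8-logical-core i7 of that era would get. Spreading the same work over ~800 stream processors needs a far finer-grained split, which is part of the pain being described.

```cpp
// Chunk one loop across the hardware threads the machine reports.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<float> data(1 << 22, 1.0f);          // made-up workload
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());

    std::vector<double> partial(workers, 0.0);       // one slot per thread, so no locking
    std::vector<std::thread> threads;
    std::size_t chunk = data.size() / workers;

    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = (w + 1 == workers) ? data.size() : begin + chunk;
        threads.emplace_back([&, w, begin, end] {
            partial[w] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& t : threads) t.join();

    double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::cout << "sum over " << workers << " threads: " << total << "\n";
    return 0;
}
```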
 

matt87_50

Distinguished
Mar 23, 2009
1,150
0
19,280
Yes! I couldn't agree with him more! This is why I'm a lot more excited about the Intel x86 Larrabee than any GPGPU!

I think what he's saying about 10 TIMES THE COST!! is a little exaggerated, just a scare tactic to tempt people into the safety of the Unreal Engine.
But graphics coding these days is really just the task of arguing with the various graphics APIs, calling functions in different orders with different arguments until it does what you want, and the added frustration of having to learn new languages for each type of GPU and GPGPU implementation is even more painful. I just want Larrabee and 48 cores at my command with C++, where I can say, "OK, you are going to do this, and you are going to do it exactly this way!"

The biggest example of how the current architecture is too complex is the PS3: you have to hand-code bits of the rendering pipeline in C++ to run on the SPUs, then switch between the graphics API and that manual code multiple times just to render one frame, copying data to and from the GPU... it's just way too complicated.
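As a rough illustration of the "48 cores at my command with C++" model (my own hypothetical sketch, not how Larrabee or any shipping engine actually works): the frame is cut into tiles and each tile is shaded by an ordinary C++ task, with no separate shader language and no API round trips. The name `shadeTile`, the tile size, and the flat-color shading are invented for the example.

```cpp
// Hypothetical tile-based dispatch: the whole frame is produced by C++ tasks,
// with no copying to and from a separate device.
#include <cstdint>
#include <future>
#include <vector>

struct Pixel { std::uint8_t r, g, b, a; };

// Any C++ at all can run per tile: a rasterizer, a ray marcher, whatever.
void shadeTile(std::vector<Pixel>& fb, int fbWidth,
               int x0, int y0, int x1, int y1) {
    for (int y = y0; y < y1; ++y)
        for (int x = x0; x < x1; ++x)
            fb[y * fbWidth + x] =
                Pixel{ std::uint8_t(x & 255), std::uint8_t(y & 255), 64, 255 };
}

int main() {
    const int width = 1024, height = 1024, tile = 128;
    std::vector<Pixel> framebuffer(width * height);

    std::vector<std::future<void>> tasks;
    for (int y = 0; y < height; y += tile)
        for (int x = 0; x < width; x += tile)
            tasks.push_back(std::async(std::launch::async, shadeTile,
                                       std::ref(framebuffer), width,
                                       x, y, x + tile, y + tile));

    for (auto& t : tasks) t.get();   // one frame finished, entirely in C++
    return 0;
}
```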
 

xrodney

Distinguished
Jul 14, 2006
588
0
19,010
He thus provides an example, saying that it costs "X" amount of money to develop an efficient single-threaded algorithm for CPUs. To develop a multithreaded version, it will cost double the amount; three times the amount to develop for the Cell/PlayStation 3, and a whopping ten times the amount for a current GPGPU version.
This is b*****t.
The difference in cost between a single-threaded and a multithreaded application is definitely not a factor of two. That might be true for simple applications, but for bigger, more expensive software the extra cost of going multithreaded has diminishing returns: the more costly the application, the less it costs (relatively) to make it multithreaded.
I'm not sure about GPGPU programming, but I seriously doubt that if a single-threaded app costs 10 million, the GPGPU version would cost 10x that.

Personally, I would not pay any more for an app that can't take advantage of multiple CPUs, as it's not that hard to implement, and I don't want to support lazy programmers (of course, only for apps that can actually gain something from more CPUs).
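For the easy end of the spectrum xrodney is talking about, here is a minimal sketch (my own example; the function names are made up): for an embarrassingly parallel loop, the multithreaded version differs from the single-threaded one by roughly one OpenMP pragma, so the cost delta is nowhere near 2x for code like this. The expensive cases are the ones with shared mutable state, ordering constraints and debugging, which is where the real disagreement lies.

```cpp
// Serial vs. multithreaded version of the same loop; build the parallel one
// with -fopenmp (GCC/Clang) or /openmp (MSVC).
#include <cmath>
#include <cstddef>
#include <vector>

void transformSerial(std::vector<double>& v) {
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] = std::sqrt(v[i]) * 2.0;
}

void transformParallel(std::vector<double>& v) {
    // The only difference from transformSerial is the pragma below.
    #pragma omp parallel for
    for (long long i = 0; i < static_cast<long long>(v.size()); ++i)
        v[i] = std::sqrt(v[i]) * 2.0;
}
```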
 
Guest
xrodney: I'd have to disagree. As a .NET developer, every time I have to add a new thread to an application, my fee doubles... True story :D

Although I'd say the OS is the biggest hindrance: most developers have a difficult time getting real-time performance when they have to synchronize threads. Microsoft ought to open-source their thread scheduler so that developers can understand how it works. Their scheduler sucks compared to Linux's anyway, so it's not like they'd be losing valuable trade secrets.
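To make "synchronizing threads" concrete, here is a minimal sketch (in C++ rather than .NET, to keep one language for the examples in this thread; the queue-of-ints workload is invented): a worker blocks on a condition variable and only runs when the OS scheduler decides to wake it, which is exactly where the latency and frustration described above come from.

```cpp
// Producer/consumer handoff with a mutex and condition variable.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

std::mutex m;
std::condition_variable cv;
std::queue<int> work;
bool done = false;

void worker() {
    std::unique_lock<std::mutex> lock(m);
    for (;;) {
        // Sleep until there is work or we are told to shut down.
        cv.wait(lock, [] { return !work.empty() || done; });
        if (work.empty() && done) return;
        int item = work.front();
        work.pop();
        lock.unlock();            // do the actual work outside the lock
        std::cout << "processed " << item << "\n";
        lock.lock();
    }
}

int main() {
    std::thread t(worker);
    for (int i = 0; i < 5; ++i) {
        { std::lock_guard<std::mutex> lk(m); work.push(i); }
        cv.notify_one();
    }
    { std::lock_guard<std::mutex> lk(m); done = true; }
    cv.notify_one();
    t.join();
    return 0;
}
```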
 

ebattleon

Distinguished
Sep 19, 2007
43
0
18,530
About 10 years ago IBM was working on a CPU that would handle CPU, GPU, and audio processing functions. What is really being discussed here is not really new; it's just the natural progression of the technology. After all, less is more: fewer things to fail, less power use, less copper, plastic, and pollution. So I am all for dreamphantom's UMPU (universal mega processor unit).
 
Guest
This is a very interesting topic. Over the years, a lot of functionality that used to be separate from the motherboard has been integrated into it. I don't think the idea of video processing being merged into the CPU is too far-fetched. However, if that were the case, we might see a lot more variety in CPU/GPU chips: there would have to be a wide range of options to serve both the average consumer and the hardcore gamer. Instead of buying a high-end video card, gamers would just opt for the high-end CPU/GPU. It does make sense to avoid the PCI-E bus and do it all within a single chip, or even multiple chips on the motherboard. I guess only time will tell.
 

bounty

Distinguished
Mar 23, 2006
389
0
18,780
Tim "chicken little" Sweeney

"2006-7: CPU's become so fast and powerful that 3D hardware will be only marginally benfical for rendering relative to the limits of the human visual system, therefore 3D chips will likely be deemed a waste of silicon (and more expensive bus plumbing), so the world will transition back to software-driven rendering."

http://archive.gamespy.com/legacy/interviews/sweeney.shtm
 