Tim Sweeney: GPGPU Too Costly to Develop

Status
Not open for further replies.

ravewulf

Distinguished
Oct 20, 2008
973
33
19,010
I'm not going to comment on the economics as I don't know enough about it (although I would guess the figures are inflated a bit), but the benefits of multithreading must be weighed to determine whether it's a good fit for the application. Video compression needs it; a simple text editor, less so.

As for the "death" of GPUs, I doubt that will happen anytime soon. Far off in the future, probably.
 
G

Guest

Guest
C++ or CUDA? Is Nvidia sponsoring this guy? If CUDA were so freaking wonderful in its present state, there'd be more applications that use it. The fact of the matter is that 99.999% of applications run fast enough on a modern CPU without any good reason to move them to GPGPU.

What's more absurd is him making that ridiculous rant without giving a nod to OpenCL, which aims to do everything he talks about...
 

eyemaster

Distinguished
Apr 28, 2009
750
0
18,980
Well, he has a valid point: writing code in a simple way for a CPU is much easier than writing against an API like DirectX or OpenGL. Those APIs do provide a good way of hiding the video card hardware and presenting a common interface. So, a major con and a major pro for video cards and their APIs.

Until processors are fast enough to do everything today's video cards can do, at the same speed, I don't see video cards going anywhere anytime soon. And by the time CPUs are fast enough, GPUs will have advanced enough to still make a big difference; they progress together. Where games are concerned, I can sooner see the CPU fading away or becoming less significant than the video card.
 

DXRick

Distinguished
Jun 9, 2006
1,320
0
19,360
Reminds me of the CEO of my last company. He tells us that he has no clue what we do every day and then goes on to tell us that it must be faster and cheaper.

Sweeney obviously has no clue what DirectX and OpenGL are, but is convinced there are better ways to do graphics processing. I know how the programmers at Epic feel.
 

Blessedman

Distinguished
May 29, 2001
583
1
18,980
I think Tim is just wrong. I mean, maybe(!) when CPUs have 32 cores (2016?) you could afford to gobble up 20 or so for rasterization and vertex setup, but doesn't that kind of push into the territory of programming for a GPGPU anyway? I thought that's what DirectX and OpenGL were for, so developers didn't need to keep reinventing the wheel... This is the perfect time for a small team of highly motivated young programmers to spend a few summers in their basement and bang out the next-generation engine for GPGPUs.
 

ptroen

Distinguished
Apr 1, 2009
90
0
18,630
Well, for starters you have the PCI bus, which is just plain slow. The significance of this is that a developer needs to write code that takes the bus into account and then call the API to do the work (shader code, etc.). To complicate matters further, you have PPU code (physics code, which is really just glorified collision detection) that may sit on the graphics card, but not necessarily. There is also the sound card, which handles positional acoustic events in 3D space. In effect, all of the game's code has to be load balanced, with the PCI bus working overtime.

Going back to the GPU compiler topic, what would be nice is to just use C++ templates or the .NET CLR, drop in a few templates, and have load-balanced CPU/GPU code churned out quickly. However, regardless of the language at hand, the developer still has to construct a good object design, and that takes time. The worst case is a bit of code duplication because of the different languages, which is what we have right now, but honestly it's not that bad unless you don't understand the architecture; then creep sets in. For example, within DirectX you have constant buffers and vertex types where you set up the structures that carry information back and forth between CPU/motherboard land and GPU land. Since the primitive types are standardized (IEEE 32-bit floats), it's pretty trivial for a programmer to know what's going where, though I must agree it's quite annoying to try to integrate a physics API with the GPU.
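To make the constant buffer point concrete, here's a minimal sketch (the struct name and fields are invented for illustration, not taken from any particular engine) of how the CPU-side mirror of a Direct3D-style cbuffer is usually kept in sync, relying on those standardized 32-bit floats. HLSL packs cbuffer members into 16-byte registers, so the C++ struct has to pad to match:

// HLSL side (for reference):
//   cbuffer PerObject : register(b0) {
//       float4x4 worldViewProj;   // 64 bytes
//       float3   lightDir;        // 12 bytes
//       float    glowStrength;    // fills out the last 16-byte register
//   };

// C++ side: hypothetical mirror of the cbuffer above.
struct alignas(16) PerObjectConstants {
    float worldViewProj[16]; // row-major 4x4 matrix, 64 bytes
    float lightDir[3];       // 12 bytes
    float glowStrength;      // 4 bytes, completes the 16-byte register
};

// If this drifts out of sync with the HLSL declaration, the GPU silently
// reads garbage, so pin the layout down at compile time.
static_assert(sizeof(PerObjectConstants) == 80,
              "CPU mirror must match HLSL cbuffer packing");

The same discipline applies to vertex types: the input layout declared on the CPU has to agree byte for byte with what the vertex shader expects, which is exactly the kind of boilerplate that makes CPU/GPU integration tedious.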
 

hellwig

Distinguished
May 29, 2008
1,743
0
19,860
This all comes down to a lack of understanding of the underlying architecture. I worked at a company that enforced what they called 3-View design. The problem with that design system was that determining what the system should do and determining what the system should be made of (i.e., hardware) were independent processes. That meant you developed a system without knowing the limitations of the hardware it would run on. I pointed this out to the instructor teaching the class they offered at work, and he couldn't even respond.

A good example of this is Crysis. How much money did the producers put into giving that game cutting-edge graphics and effects, only to find out consumers needed a multi-thousand-dollar computer to benefit from all that hard work, so most people would never see it?
 
G

Guest

Guest
CEOs tend to be business people with degrees in Business Administration who rarely know the details of what they manage. This guy clearly doesn't understand code; he's taking "recommendations" and "data, in an executive format" that have been regurgitated up the chain of command a few times, combined with some arrogance and self-importance.

We used to be a nation where inventors founded a company to create and sell their invention, now we have a bunch of spoiled, rich-kid schmucks running "established, brand name" companies. It's nearly impossible to start a new company now, and any person with brilliant ideas has to find a job at an established company, and then have their ideas "managed" by a bunch of ignorant MBAs. Then we wonder what happened to America...
 
Frozenlead: Developers too lazy to learn to multithread/GPGPU optimize code.

Since when in the tech field do people complain about moving forward? If you can't keep up with the train, you lose.
 

falchard

Distinguished
Jun 13, 2008
2,360
0
19,790
For a company that develops the most-used engine in video games, that's a poor ideology. Two cores is too much money? If a competitor develops a GPGPU version, they will definitely face a backlash in engine sales.
 

Wayoffbase

Distinguished
Apr 28, 2009
148
0
18,690
[citation][nom]deltatux[/nom]I would rather listen to John Carmack talk the state of gaming technology than to listen to Tim Sweeney's baseless talks.[/citation]
That's a tough call. I'll choose option 3 if I can: ignore both.
 

omnimodis78

Distinguished
Oct 7, 2008
886
0
19,010
[citation][nom]_barraCUDA[/nom]C++ or CUDA? Is Nvidia sponsoring this guy? If CUDA was so freaking wonderful in it's present state, there'd be more applications that use it. The fact of the matter is that 99.999% of applications run fast enough on a modern CPU without any good reason to run it on GPGPU. What's more absurd is him making that ridiculous rant without giving a nod to OpenCL, which aims to do everything he talks about...[/citation]
Well, I can tell you first hand that when I enable CUDA in CoreAVC for my HD movies, CPU utilization drops from about 10-20% to mostly 1%. Yes, same speed, but why not utilize the power of the GPU for tasks it can very easily perform?
 

LORD_ORION

Distinguished
Sep 12, 2007
814
0
18,980
Carmack has a unique position: he surrounds himself with the most elite programmers and is one of the most elite programmers himself. I get the impression that Sweeney has more business acumen, and thus approaches the situation from that perspective.

In the end, I agree with Sweeney... having a unified programming architecture is more cost effective... and I see Larrabee's architecture ultimately dominating mainstream PC gaming.
 
G

Guest

Guest
Having done some GPGPU work myself, I can agree that it's a significant amount of work to port general purpose code to the GPU. The amount of effort depends on exactly what you're trying to do, and sometimes the whole exercise can end up producing a much smaller speedup than expected.

The biggest hurdle is the fact that GPGPU is still not standardized (DirectX 11/compute & OpenCL are just starting out), so there are several standards to work with. The algorithm still needs to be written for the CPU, as there are still users out there without proper GPGPU hardware support. All of that adds up to a lot of risk, which the financial guys don't like much - so the 10x figure doesn't seem too crazy.

Of course, when a GPGPU'd algorithm works well, it's pretty incredible.
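To put a rough shape on that "write it twice" overhead, here is a minimal CUDA sketch (the saxpy example and all names are mine, purely illustrative): the GPU path, plus the scalar CPU fallback you still have to ship for machines without a capable device, plus the host/device copies in between.

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// GPU path: one thread per element.
__global__ void saxpy_kernel(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

// CPU fallback: the same algorithm, kept alive for hardware without GPGPU support.
void saxpy_cpu(int n, float a, const float* x, float* y) {
    for (int i = 0; i < n; ++i) y[i] = a * x[i] + y[i];
}

void saxpy(int n, float a, const float* x, float* y) {
    int devices = 0;
    if (cudaGetDeviceCount(&devices) != cudaSuccess || devices == 0) {
        saxpy_cpu(n, a, x, y);          // no usable GPU: take the CPU path
        return;
    }
    float *dx = nullptr, *dy = nullptr;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, x, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y, n * sizeof(float), cudaMemcpyHostToDevice);
    saxpy_kernel<<<(n + 255) / 256, 256>>>(n, a, dx, dy);
    cudaMemcpy(y, dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dx);
    cudaFree(dy);
}

int main() {
    std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
    saxpy(static_cast<int>(x.size()), 3.0f, x.data(), y.data());
    std::printf("y[0] = %f\n", y[0]);   // expect 5.0
    return 0;
}

Even in this toy case the algorithm exists in two places and the memory traffic is managed by hand; scale that up to a real engine and the extra engineering cost the article complains about is easy to believe.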
 

ash9

Distinguished
Aug 15, 2009
17
0
18,510
Sweeney can only be talking about Larrabee, which is conceptually a bunch of CPUs strapped together... if i7s are $500 and up, that one will cost $3,000 in lots of 1,000. Reduced wafer size or not, it's the hyperthreading and all that overhead that's costly, and I don't see signs of Intel spending on R&D lately.
asH
 

MamiyaOtaru

Distinguished
Jun 19, 2008
23
0
18,510
Right, because Larrabee will be made of a bunch of their most expensive processors? I'm pretty sure it's actually a bunch of shrunk P1s or P2s, isn't it?
 

hannibal

Distinguished
This is one way of explaining why so few games support "new" technologies like multi-core, etc. They are too costly to make... Hmmm... This is great news for companies like AMD and Intel, who will bring GPGPU products into their lineups sooner or later. Nice new technology that nobody wants to support for many years, just like AMD's 64-bit support, which is only now starting to see some light after, what, 10 years of waiting...
 
G

Guest

Guest
mamiya0taru: Larrabee will be a bunch of Atom CPUs on a single die. It's funny how they come up with their Tflop/Gflop numbers for Larrabee: if you factor in the slight increase in clock speed and multiply the Gflops of an Atom by the number of cores, they're claiming that what are essentially Atom cores somehow wind up as fast as individual Nehalem cores, and at a lower clock speed. Synergy, brilliant design, or outright lies?
 
G

Guest

Guest
Today's CPUs are pretty fast. They may be geared toward more general-purpose tasks, but that is a response to the market. If and when they see that dedicated graphics processors are overkill for most users, in both power consumption and processing power, I think we will start to see more SSE3/SIMD/3DNow/OpenCL-type decoder units in CPUs. Considering that NVIDIA is the only real stand-alone GPU maker left, you can see that the technologies to integrate the CPU and GPU into a single unit, and the shift to in-CPU graphics processing, have already begun.

I would go so far as to say that modern CPUs could dump a lot of the logic they currently carry, absorb some GPU tech that does the same thing better, and then expose the new instructions as an extension like SSE or 3DNow to give OpenCL libraries direct access to them.
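For a sense of what exposing that logic "as an extension like SSE" already looks like, here's a tiny illustrative sketch using the existing SSE intrinsics (the function name is invented; OpenCL would sit a layer above something like this), processing four floats per instruction instead of one:

#include <cstdio>
#include <xmmintrin.h>   // SSE intrinsics

// Scale four floats at once with one SSE multiply: the kind of
// data-parallel unit the post imagines CPUs exposing more of.
void scale4(const float* in, float* out, float factor) {
    __m128 v = _mm_loadu_ps(in);              // load 4 floats (unaligned ok)
    __m128 f = _mm_set1_ps(factor);           // broadcast the scale factor
    _mm_storeu_ps(out, _mm_mul_ps(v, f));     // multiply and store 4 results
}

int main() {
    float in[4]  = {1.0f, 2.0f, 3.0f, 4.0f};
    float out[4] = {};
    scale4(in, out, 0.5f);
    std::printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
    return 0;
}

Wider versions of the same idea (AVX, and whatever GPU-derived units come after it) are basically the direction being described here.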

In theory, if you could get as much compute power in the CPU as sits in modern GPUs, you could eliminate a lot of the bottleneck/latency of the PCIe bus: something like HyperTransport between the general-purpose execution units and the GPU-type execution units.

Companies are already looking at this, and the GPU's day will come, but it will be a Borg-like assimilation, not an extinction.
 

amnotanoobie

Distinguished
Aug 27, 2006
1,493
0
19,360
[citation][nom]syadnom[/nom]I would go so far as to say that modern CPUs could dump a lot of the logic they currently carry and absorb some GPU tech that does the same but better and then release the new instructions as an extention like sse or 3dnow to give direct access to it from opencl libraries.In theory, if you could get equal CPU power as is in modern GPUs you could eliminate a lot of the bottleneck/latency that is the PCIe bus. [/citation]
The hard part about what you are suggesting is that we would take two completely different architectures, mash them up, and hope for the best. It's like saying take a CISC processor and combine it with the properties of a RISC processor.

[citation][nom]dreamphantom_1977[/nom]If it cost's so much then why are so many companies developing code for it???1. GpGPU is 2-100 times faster then the cpu, so I suppose it depends on what you are using it for and how long u plan to run it.
[/citation]
Other companies are using it for applications that never originally used the GPU; examples include video transcoders, CFD simulators, and other math applications. Since the GPU is a number-crunching monster (ADD, MADD, etc.), it is a good fit for these workloads.

[citation][nom]syadnom[/nom]
Folding@home for example has been around for years, and hundreds of thousands of computers run that code..
[/citation]
Folding@Home is not a general-purpose application; it is a highly mathematical one and would benefit from any number-crunching hardware. FAH is a perfect app for the GPU since it knows all the data beforehand (the data sets are probably small): you just feed it the data and your formula and wait for the result. GPU performance drops if you need to take a look at some derived data sitting in VRAM, RAM, or on the HDD of your PC.
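A small sketch of that last point (kernel and names invented, the pattern is what matters): the GPU win depends on keeping the working set resident in VRAM across iterations, and it shrinks fast if every step has to round-trip through host memory to inspect derived data.

#include <cuda_runtime.h>

// One simulation step, updating the state in place on the device.
__global__ void step(float* state, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) state[i] = state[i] * 0.99f + 0.01f; // placeholder update rule
}

// Good case: one copy in, many iterations on the device, one copy out.
void simulate_resident(float* host, int n, int iters) {
    float* d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, host, n * sizeof(float), cudaMemcpyHostToDevice);
    for (int k = 0; k < iters; ++k)
        step<<<(n + 255) / 256, 256>>>(d, n);
    // Copying back inside the loop to "take a look" at derived data each
    // step would stall the pipeline and erase most of the speedup.
    cudaMemcpy(host, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);
}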

[citation][nom]syadnom[/nom]
2. For games, if the code is developed mainly on the cpu, like GTAIV, then obviously it's gonna bottleneck the game and people are gonna be pissed off and avoid the game because they don't have the cpu to run it. But if they coded it better to take advantage of the gpu more in my opinion the game would have sold much better and the developing costs would have payed off in the long run.[/citation]
I'm not really sure about applying GPGPU to games, because if the GPU is already busy drawing everything, does it really have enough free time for other tasks? Let's face it: if the GPU were an underused resource during gaming, then why is a 9400GT not good enough to play recent titles at high settings?

[citation][nom]syadnom[/nom]
The gpu is here, it's in it's prime, and is not going anywhere. If anything the cpu and gpu are gonna combine into one big mega chip, probably with ray tracer, dsp, all built in, and actually i can forsee a futer will it will take on all the fuctions of the motherboard and the psu and will become modular and "completely" wireless including being powered by wireless means. This will be a superchip that will do anything and everything. I'm going to name it,( remember you heard it here first lol), I'll call it the "UMPU"- for Universal Mega processing Unit
[/citation]
I think Intel is going to try to do that before the guys in green and maybe before AMD. Remember the 80-core thing Intel did before (and what it was actually meant to show)?
 