[citation][nom]syadnom[/nom]I would go so far as to say that modern CPUs could dump a lot of the logic they currently carry, absorb some GPU tech that does the same but better, and then expose the new instructions as an extension like SSE or 3DNow! to give direct access to it from OpenCL libraries. In theory, if you could get CPU power equal to what's in modern GPUs, you could eliminate a lot of the bottleneck/latency that is the PCIe bus. [/citation]
The hard part about what you are suggesting is that we would have to take two completely different architectures, mash them up, and hope for the best. This is like saying, take a CISC processor and combine it with the properties of a RISC processor.
[citation][nom]dreamphantom_1977[/nom]If it costs so much, then why are so many companies developing code for it??? 1. GPGPU is 2-100 times faster than the CPU, so I suppose it depends on what you are using it for and how long you plan to run it.
[/citation]
Other companies are using it for applications that never originally used the GPU; examples include video transcoders, CFD simulators, and other math-heavy applications. Since the GPU is a number-crunching monster (ADD, MADD, etc.), it is a good fit for these workloads.
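Just to put that "number-crunching monster" point in concrete terms, here is a minimal CUDA sketch of my own (it is not taken from any of the apps mentioned above): the same multiply-add applied to a huge array of independent elements, which is exactly the kind of work a transcoder or CFD solver hands the GPU.
[code]
// Illustrative CUDA kernel: y[i] = a * x[i] + y[i] (SAXPY).
// Each thread handles one element, and the core arithmetic maps
// straight onto the GPU's multiply-add (MADD/FMA) units.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        y[i] = a * x[i] + y[i];
}
[/code]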
[citation][nom]syadnom[/nom]
Folding@home, for example, has been around for years, and hundreds of thousands of computers run that code.
[/citation]
Folding@Home is not a general-purpose application; it is a highly mathematical application and would benefit from any number-crunching hardware. FAH is a perfect app for the GPU because it knows all the data beforehand (and the data sets are probably small): you feed in the data and your formula and wait for the result. GPU performance drops the moment you need to look at derived data mid-run, because it has to be shuffled between VRAM and your PC's RAM or HDD.
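As a rough illustration of that "feed the data and the formula, then wait for the result" pattern (again my own sketch, not actual F@H code), the host side looks something like this. Note that the only device-to-host copy happens at the very end; any extra readback in the middle of the run is exactly the kind of stall I'm describing.
[code]
// Illustrative host-side flow (CUDA runtime API): upload the inputs once,
// let the GPU crunch, and read the result back once at the end.
#include <cuda_runtime.h>
#include <stdlib.h>

// Same SAXPY kernel as in the sketch above, repeated so this compiles on its own.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main(void)
{
    int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host-side input data (a stand-in for a real work unit).
    float *h_x = (float *)malloc(bytes);
    float *h_y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_x[i] = 1.0f; h_y[i] = 2.0f; }

    float *d_x, *d_y;
    cudaMalloc(&d_x, bytes);
    cudaMalloc(&d_y, bytes);

    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);   // feed the data...
    cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);    // ...run the formula...

    cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);   // ...and collect the result

    cudaFree(d_x); cudaFree(d_y);
    free(h_x); free(h_y);
    return 0;
}
[/code]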
[citation][nom]syadnom[/nom]
2. For games, if the code is developed mainly on the cpu, like GTAIV, then obviously it's gonna bottleneck the game and people are gonna be pissed off and avoid the game because they don't have the cpu to run it. But if they had coded it to take better advantage of the gpu, in my opinion the game would have sold much better and the development costs would have paid off in the long run.[/citation]
I'm not really sure about the application of GPGPU to games, because if the GPU today is already busy drawing everything, does it really have enough free time to do other tasks? Let's face it, if the GPU were an underused resource during gaming, then why is a 9400GT not good enough to play recent titles at high settings?
[citation][nom]syadnom[/nom]
The gpu is here, it's in its prime, and it is not going anywhere. If anything, the cpu and gpu are gonna combine into one big mega chip, probably with a ray tracer, DSP, and everything else built in, and I can actually foresee a future where it will take on all the functions of the motherboard and the psu and will become modular and "completely" wireless, including being powered by wireless means. This will be a superchip that will do anything and everything. I'm going to name it (remember you heard it here first lol): I'll call it the "UMPU" - for Universal Mega Processing Unit
[/citation]
I think Intel is going to try to do that before the guys in green and maybe before AMD. Remember the 80-core thing Intel did before (and what it was actually meant to show)?