Nvidia: Moore's Law is Dead, Multi-core Not Future


liquidsnake718

Distinguished
Jul 8, 2009
1,379
0
19,310
So what does Nvidia propose to do? Train more programmers and fund research and development for parallel programming, or build a CPU, or integrate this type of processing into their future GPU-CPU hybrid or GPGPU?
 
Guest
"oberonqa
You would still end up with people in the group sitting around waiting for everyone else in the group to finish reading so that they could discuss what was read and then formulate a one sentence summary of the paragraph written."

The summarizing analogy is not really a good example as it requires a subjective view, but let's run with it anyway... a programmer who is not used to parallel tasks would have written it like that; one who really wants to leverage the power of parallel computing would have done something like this:

Quantify what the summary should be about and break the book up into sections. When a group of readers has finished one section, have them prepare a summary of it based on the quantified characteristics. Then have them prepare summaries for the next section in advance, based on positive and negative outcomes against those characteristics. When the group reading the second section is done, sum up the quantified characteristics for both sections, have them select whichever of the previously prepared positive or negative summaries matches, and again have them prepare scenarios for negative and positive outcomes for the section after that... repeat.

Based on this setup you could well have the full summary written up before the people even finish reading the last chapter. Yes, you would have created a lot of wasted effort, but hey, if the people aren't doing anything but waiting you might as well get them to work. The real problem with such a setup is that it requires a lot of creative thinking and planning to work right.
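To make the non-speculative part of that concrete, here is a minimal sketch (mine, not anything from the article; the section text and summarize_section function are made-up placeholders) of the map-then-combine step in C++ with std::async. The speculative "prepare summaries for both outcomes in advance" part is left out for brevity.

[code]
// Sketch: summarize sections in parallel, then combine the short results.
// summarize_section and the sample inputs are hypothetical stand-ins.
#include <future>
#include <iostream>
#include <string>
#include <vector>

// Per-section work: expensive on its own, independent of the other sections.
std::string summarize_section(const std::string& section_text) {
    return "summary of [" + section_text.substr(0, 12) + "...] ";
}

int main() {
    std::vector<std::string> sections = {
        "Chapter 1 text ...", "Chapter 2 text ...", "Chapter 3 text ..."};

    // Map phase: hand each section to its own "reader" (async task).
    std::vector<std::future<std::string>> readers;
    for (const auto& s : sections)
        readers.push_back(std::async(std::launch::async, summarize_section, s));

    // Combine phase: still serial, but it only sees short summaries,
    // not whole chapters.
    std::string book_summary;
    for (auto& r : readers)
        book_summary += r.get();

    std::cout << book_summary << "\n";
}
[/code]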
 
Guest
I agree with most of what he said, but not with everything. He blames lazy/stupid developers, but as a developer I must object.
The proposed alternative is not really usable at this point. CUDA is still in its infancy and is not really usable for application development. I can accelerate some specific functions of my program with CUDA, but I can't really build an entire application on this platform.
Good luck programming an alternative app on a platform that requires manual memory management. If (a big if) you ever ship the product, it will be faster than the reference app, but feature-wise it will be inferior.
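For readers who haven't touched it, this is roughly the bookkeeping being complained about: a minimal host-side sketch against the CUDA runtime API, with the kernel launch itself omitted and the buffer name and size made up.

[code]
// Sketch: the manual allocate/copy/free cycle a CUDA port imposes for every
// device buffer. The actual kernel launch is omitted.
#include <cuda_runtime.h>
#include <vector>

int main() {
    std::vector<float> host_data(1 << 20, 1.0f);            // made-up input
    const size_t bytes = host_data.size() * sizeof(float);

    float* device_data = nullptr;
    cudaMalloc((void**)&device_data, bytes);                 // allocate on the GPU
    cudaMemcpy(device_data, host_data.data(), bytes,
               cudaMemcpyHostToDevice);                      // copy host -> device

    // ... launch a kernel that works on device_data here ...

    cudaMemcpy(host_data.data(), device_data, bytes,
               cudaMemcpyDeviceToHost);                      // copy results back
    cudaFree(device_data);                                   // and clean up by hand
    return 0;
}
[/code]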
 

kronos_cornelius

Distinguished
Nov 4, 2009
365
1
18,780
[citation][nom]jenesuispasbavard[/nom]I've considered parallelising numerical integration, and I for one think it is impossible. You NEED the results of the previous step in order to process the next step. Parallel execution at different time steps is impossible. Serial processors still have their uses, and in applications like this, they're so much faster than one CUDA core, say (which is all I'll be able to use).[/citation]

You may be able to do it with a pipeline, if you have enough input to take advantage of the parallelization.
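As a rough illustration of the "enough input" point (assuming the input is many independent integrations, e.g. many initial conditions, which is my assumption rather than anything stated above): each integration stays strictly serial, but different integrations can run side by side.

[code]
// Sketch: each integration is serial (step N needs step N-1), but many
// independent integrations can be run in parallel across threads.
#include <future>
#include <iostream>
#include <vector>

// Hypothetical example: forward-Euler integration of dy/dt = -y from y(0) = y0.
double integrate(double y0, double dt, int steps) {
    double y = y0;
    for (int i = 0; i < steps; ++i)   // inherently serial: needs the previous y
        y += dt * (-y);
    return y;
}

int main() {
    std::vector<double> initial_conditions = {1.0, 2.0, 3.0, 4.0};

    // The parallelism is across independent problems, not within one.
    std::vector<std::future<double>> jobs;
    for (double y0 : initial_conditions)
        jobs.push_back(std::async(std::launch::async, integrate, y0, 0.001, 10000));

    for (auto& j : jobs)
        std::cout << j.get() << "\n";
}
[/code]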
 


But then you run into the problem the Pentium 4s had, where the branch prediction failed and you had to flush the pipeline, which costs a lot of CPU cycles, and that's not really parallel. The way to eke out more performance is with a better architecture giving better IPC (i.e. more cores doesn't really help, and you can't really push the clock speed a whole lot).

Keep in mind that the more you try to make the code parallel, the more locking issues arise when there are shared resources between threads (or processes).
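A minimal illustration of that (the shared counter and mutex here are generic placeholders, not something from the thread): every thread that touches the shared resource has to serialize on the lock, so that part of the work doesn't speed up no matter how many threads you add.

[code]
// Sketch: a shared counter protected by a mutex. The critical section runs
// one thread at a time, so the shared part of the work stays effectively serial.
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    long long shared_counter = 0;   // shared resource
    std::mutex counter_lock;        // guards shared_counter

    auto worker = [&]() {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> guard(counter_lock);  // contention point
            ++shared_counter;
        }
    };

    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t)
        threads.emplace_back(worker);
    for (auto& t : threads)
        t.join();

    std::cout << shared_counter << "\n";   // 400000, but under heavy lock contention
}
[/code]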
 