Nvidia: Moore's Law is Dead, Multi-core Not Future


figgus

Translation:

"Our tech is the future, everyone else has no idea what they are doing. Please buy our GPGPU crap, even though it is inferior to what our competitors are making right now for everyday use."
 

2zao

!!
Maybe this will open some eyes... but I doubt many for now.

Too many are in a stupor, doing things the way that is the norm and easiest for them instead of the way they should be done. How long will it take for people to wake up to the direction things need to go?
 

yay

That's all well and good, until you need to do one thing BEFORE another, like when rendering a scene. Or maybe he forgot that.
 

eyemaster

What are you waiting for then, Bill Dally? Go ahead and create that chip... ha, that's what I thought; even you can't do it.
 
Except without serial optimizations, general apps (not compute apps) will suffer, since serial optimizations allow for fast comparisons, whereas the compute cores on the GPU are very inefficient at this. Yes, it will help compute workloads, but general apps will suffer.

And really, some programs (algorithms) can never be turned into a parallel app.
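As an illustration of that last point (a toy sketch of my own, not anything from the article): when every loop iteration depends on the result of the one before it, there is simply nothing left over for extra cores to chew on.

[code]
#include <cstdio>

int main() {
    // Loop-carried dependency: x_{n+1} = 4 * x_n * (1 - x_n) (logistic map).
    // Each iteration needs the previous result, so the loop cannot be split
    // across cores -- a second core would just sit idle.
    double x = 0.3;
    for (int n = 0; n < 1000000; ++n) {
        x = 4.0 * x * (1.0 - x);
    }
    std::printf("final value: %f\n", x);
    return 0;
}
[/code]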
 

matt_b

and that programmers trained in parallel programming are scarce.
I totally agree with this statement. However, if this were to change and more programmers were trained in how to properly program for parallel computing, then the same could be said about the need to train more people in how to properly program for serial computing, which is where we currently are in processor design. I think it's fairer to say the shortage exists on both sides.

On another note, am I the only one finding it amusing that the chief scientist of R&D at Nvidia is stating the CPU consumes too much energy??? Did he forget about the monster they just released, or does he still consider it to be within acceptable power requirements or efficient enough?
 

ravewulf

Dally said that the long-standing, 40-year-old serial programming practices are ones that will be hard to change, and that programmers trained in parallel programming are scarce.

This. Extremely few programs today even properly use the limited number of cores we have now. Look at all the programs that are still single-threaded but could easily benefit from parallelism (QuickTime and iTunes, for two). There are also algorithms that simply CAN'T be made parallel (some parts of video encoding depend on previous results for the next task).
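For the "could easily benefit" side, the work is often just an independent per-element loop that can be handed to a couple of threads. A rough sketch with made-up data (not code from any of the apps mentioned):

[code]
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    std::vector<float> samples(1000000, 0.5f);

    // Each element is updated independently of the others, so the loop can
    // be split into slices and run on separate threads.
    auto scale = [&samples](std::size_t begin, std::size_t end) {
        for (std::size_t i = begin; i < end; ++i)
            samples[i] *= 0.8f;
    };

    const std::size_t half = samples.size() / 2;
    std::thread t1(scale, std::size_t{0}, half);
    std::thread t2(scale, half, samples.size());
    t1.join();
    t2.join();

    std::printf("first sample: %f\n", samples[0]);
    return 0;
}
[/code]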
 

triculious

While I agree that there are instances where parallel processing works way better than serial, you can't switch from one to the other altogether.
Then there's parallelized code, which is hell for programmers.

And then there's what we could call "Dally's law": your graphics card must run twice as hot every 12 months.
 

rhino13

Wait, so then you'd have a bunch of people who only understood one paragraph and nothing else? It's all gotta go back to serial at some point! This is a bad example.

But I do agree with what he's saying. We need to put more effort into parallel speed than serial.
 

dman3k

What he says is true. It's the programmers' fault for not using more parallel programming. But unfortunately, there are only so many things you can parallelize.

His reading-an-essay analogy is the perfect example of that. People have to read one word at a time. You can't just get a bunch of people to each read a few words and be done, because that would make no sense at all.
 

MxM

Moore's Law is only about the number of transistors. It is irrelevant how many processors are built with those transistors. It does not say that the number of transistors is proportional to performance or clock speed or anything like that. I find the reference to Moore's Law in the Nvidia paper just a marketing trick to promote their architecture and CUDA. What they are discussing has nothing to do with Moore's Law; quite the contrary, it is how to get better performance from the same number of elements.
 

nevertell

The problem now is x86 and the userbase that depends on it. What we need is a new mainstream architecture that emulates x86 and does parallel work really fast.
 

peanutsrevenge

This touches on something I've been saying for a while: a MASSIVE change is required, but it needs hardware and software developers to work together and change together; one cannot change without the other!
 

daworstplaya

[citation][nom]rhino13[/nom]Wait, so then you'd have a bunch of people who only understood one paragraph and nothing else? It's all gotta go back to serial at some point! This is a bad example.But I do agree with what he's saying. We need to put more effort into parallel speed than serial.[/citation]

That's a bad example; think of it as doing video encoding instead. If you have one core doing all the work, that core has to process every frame line by line before it can move to the next frame. But if you had 4 cores, you could divide each frame into 4 parts and each core could work on its part before moving on to the next frame. Obviously there would need to be a controller that kept all the cores in sync and combined the parts of the frame back into a whole so that it would make sense to the end user, but even with that overhead it would still be much faster (see the sketch at the end of this post).

[citation][nom]dman3k[/nom]What he says is true. It's the programmers' fault for not using more parallel programming. But unfortunately, there are only so many things you can parallelize. His reading-an-essay analogy is the perfect example of that. People have to read one word at a time. You can't just get a bunch of people to each read a few words and be done, because that would make no sense at all.[/citation]

Human understanding and core thread execution are two different things, and I don't think you can use that analogy when trying to contrast parallel and serial processing. Even if the text doesn't make sense to each individual core that is only reading a small portion, it will make sense to the end user who ultimately reads the document once the controller puts it back together at the end.
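Here is a rough sketch of that frame-splitting idea with plain C++ threads (an invented frame size and a trivial stand-in for the actual encoding work, so treat it as an illustration rather than a real encoder):

[code]
#include <array>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    constexpr int kWidth = 640, kHeight = 480, kThreads = 4;
    std::vector<std::uint8_t> frame(kWidth * kHeight, 128);

    // Each worker "encodes" one horizontal strip of the frame.
    auto encode_strip = [&frame](int first_row, int last_row) {
        for (int y = first_row; y < last_row; ++y)
            for (int x = 0; x < kWidth; ++x)
                frame[y * kWidth + x] /= 2;   // stand-in for real encoding work
    };

    std::array<std::thread, kThreads> workers;
    const int rows_per_thread = kHeight / kThreads;
    for (int i = 0; i < kThreads; ++i)
        workers[i] = std::thread(encode_strip,
                                 i * rows_per_thread,
                                 (i + 1) * rows_per_thread);

    // The main thread plays the "controller": it waits for every strip to
    // finish (the sync point) before the frame is considered done.
    for (auto& w : workers)
        w.join();

    std::printf("strips reassembled, frame done\n");
    return 0;
}
[/code]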
 

Trueno07

This change isn't something that consumers will clamor for, or something that software companies will push. A revolution in technology, whatever it may be, will be needed to bring this type of computing to the mainstream. Whether it's a parallel programming language or some type of chip, something will have to be released so that everyone will look at parallel processing and say, "That's the future, right there."
 

etrom

Dally said that the long-standing, 40-year-old serial programming practices are ones that will be hard to change, and that programmers trained in parallel programming are scarce.

OK, how about turning the billions of lines of working COBOL code running in the mainframes of huge companies into parallel code? You, Mr. Dally, do you accept this challenge?

Nvidia is becoming a hugely biased company, lobbying everyone for parallel computing. We are still waiting for something real (and useful) to be impressed by.
 

jenesuispasbavard

I've considered parallelising numerical integration, and I for one think it is impossible. You NEED the results of the previous step in order to process the next step. Parallel execution at different time steps is impossible.

Serial processors still have their uses, and in applications like this, they're so much faster than one CUDA core, say (which is all I'll be able to use).
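A minimal sketch of that time-stepping dependency, assuming a simple forward-Euler integrator on dy/dt = -y with a made-up step size: each step needs the previous result, so the steps themselves can't be spread across cores.

[code]
#include <cstdio>

int main() {
    double y = 1.0;          // initial condition y(0)
    const double dt = 0.001; // time step (arbitrary choice)

    // y_{n+1} = y_n + dt * f(y_n): every step reads the result of the
    // previous one, so the time loop is inherently serial.
    for (int n = 0; n < 10000; ++n) {
        y += dt * (-y);
    }

    std::printf("y at t = 10: %f\n", y);
    return 0;
}
[/code]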
 
It's not that a lot of programmers are not trained in parallel apps; it's just not easy to do. The easiest apps to make parallel are mathematical algorithms, though any shared data has to have a lock so multiple threads can't access it at the same time, and this really hurts performance.
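A quick sketch of that locking problem (toy numbers, my own example): two threads sharing one accumulator behind a mutex spend much of their time waiting on each other instead of computing.

[code]
#include <cstdio>
#include <mutex>
#include <thread>

int main() {
    long long total = 0;
    std::mutex m;

    // Both threads update the same shared total, so every single addition
    // has to take the lock -- the threads end up running almost serially.
    auto add_range = [&](long long begin, long long end) {
        for (long long i = begin; i < end; ++i) {
            std::lock_guard<std::mutex> lock(m);
            total += i;
        }
    };

    std::thread t1(add_range, 0LL, 500000LL);
    std::thread t2(add_range, 500000LL, 1000000LL);
    t1.join();
    t2.join();

    // A common fix is to give each thread a private partial sum and lock
    // only once at the end to combine them.
    std::printf("total = %lld\n", total);
    return 0;
}
[/code]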
 
Meh. Like always, this is going to be one of several paths the industry can go down at once, with the same overall goal of faster performance.

Just like CPUs didn't JUST go from single-core to dual-core to quad-core, or JUST go from 90nm to 65nm to 45nm, or JUST go from one socket and chipset to another ... or RAM didn't JUST improve in clock speed, or JUST improve in capacity, or JUST improve in latency ... what Dally's talking about is not the only way to improve efficiency. Like always, you've got several improvements going on at the same time, and if this guy's theory comes to pass, it'll be one piece of the puzzle among a sea of others.

It's not going to require some massive upheaval in the software and hardware industries at once. Technologies do not have to be introduced in lock-step with one another. How many programs today even make the most efficient use of existing CPUs and GPUs? Yet they're already coming out with hardware that has even more cores or more bandwidth. As always, there's going to be steady improvement and no "clean break."
 

cloakster

I think the biggest problem if this happens will be the average consumer. Even if the focus shifts to parallel efficiency rather than core counts, the average consumer is still gonna think the more cores the better :p
 

husker

[citation][nom]MxM[/nom]Moore's Law is only about the number of transistors. It is irrelevant how many processors are built with those transistors. It does not say that the number of transistors is proportional to performance or clock speed or anything like that. I find the reference to Moore's Law in the Nvidia paper just a marketing trick to promote their architecture and CUDA. What they are discussing has nothing to do with Moore's Law; quite the contrary, it is how to get better performance from the same number of elements.[/citation]
Exactly what I was going to say, so bumping it here. Moore did not attempt to predict or limit specific architectures. Nvidia either doesn't understand Moore's law or is deliberately misusing it.
 