Nvidia: Moore's Law is Dead, Multi-core Not Future


etrom

Distinguished
Apr 26, 2010
44
0
18,530
[citation][nom]dragonsqrrl[/nom]Actually in the context of what he's advocating Nvidia's latest high performance architecture is very power efficient, especially when comparing performance per watt to conventional processors and GPUs. I sure hope you're not basing your criticism about the power efficiency of the Fermi architecture on gaming benchmarks, especially when this article focuses entirely on parallel computing, because well... that would sound a little ignorant on your part. The people waiting for huge breakthroughs to back up the legitimacy and potential of parallel computing may not be able to appreciate the benefits until it's already upon them. Although Fermi may not be "real" or impressive enough to trigger understanding for some people, it's a genuine step in the right direction, and represents as big of a breakthrough as this area of computing has ever seen. The link below shows just a few examples of Fermi's performance potential in this area, parallel/GPGPU computing, an area that is still young and open to massive optimizations. Have a look... http://anandtech.com/show/2977/nvi [...] he-wait-/6[/citation]

The statement of mine that you quoted above was based on my experience as a developer.

All the talk NV is making around CUDA, Fermi, and parallel programming sounds very promising, but it's still very academic and theoretical. Meanwhile they're bashing multi-core serial processing, which is still evolving; man, the first Core 2 Duo was released only four years ago, in 2006!

This discussion is much like the claim that Adobe Flash is obsolete and must be replaced: of course that will happen, but not in the next few years.
 

nebun

Distinguished
Oct 20, 2008
2,840
0
20,810
This is the type of tech we need. I agree, more cores don't do crap if the software is not optimized to take advantage of them. Parallel programming is the future, and I'd love to see Nvidia bring out a processor with this kind of capability.
 


Actually, it's not. Do you realize how many of the applications you use can't be turned into a parallel app?
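
One common way to quantify that limit (my framing, not the original poster's) is Amdahl's law: if only a fraction p of a program can be parallelized, the speedup on n cores is bounded by 1 / ((1 - p) + p/n), no matter how many cores you throw at it. A minimal C++ sketch with purely illustrative fractions:

#include <cstdio>

// Amdahl's law: upper bound on speedup when a fraction p of the work
// is parallelizable and the remaining (1 - p) stays serial.
double amdahl_speedup(double p, int cores)
{
    return 1.0 / ((1.0 - p) + p / cores);
}

int main()
{
    const double fractions[] = {0.50, 0.90, 0.99};  // hypothetical parallel fractions
    for (double p : fractions)
        std::printf("p = %.2f: 16 cores -> %.1fx, 480 cores -> %.1fx\n",
                    p, amdahl_speedup(p, 16), amdahl_speedup(p, 480));
    // Even with 480 cores, a program that is only 50% parallel tops out near 2x,
    // which is the point about apps that can't be made parallel.
    return 0;
}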

Also, creating parallel apps is not by any means trivial. In any app that gets parallelized, data has to be shared between the threads, which means locks so that only one thread can access it at a time, and locking and unlocking is expensive (not to mention that whatever thread is waiting for the lock sits idle until it is released).
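
A minimal C++11 sketch of the cost being described, using std::thread and std::mutex (thread and iteration counts are arbitrary, picked only for illustration): four threads all increment one shared counter, so every increment takes the lock and the threads spend most of their time waiting on each other instead of running in parallel.

#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

std::mutex counter_mutex;   // protects the shared counter
long long counter = 0;      // shared data that forces the threads to serialize

void worker(int iterations)
{
    for (int i = 0; i < iterations; ++i) {
        std::lock_guard<std::mutex> guard(counter_mutex); // only one thread at a time gets past here
        ++counter;                                        // the "parallel" work is effectively serial
    }
}

int main()
{
    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t)
        threads.emplace_back(worker, 1000000);  // four threads all fighting over one lock
    for (auto &th : threads)
        th.join();
    std::printf("counter = %lld\n", counter);   // correct result, but the locking dominates the runtime
    return 0;
}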

And saying that devs need to make more apps parallel isn't going to help the situation. If you want to help, pick up a C++ primer and a book on pthreads or Windows threading, and sit down and program.

EDIT: And really, GPUs are already parallel compute devices. The only problem is that general apps wouldn't run well on them, because many apps can't be made parallel and GPUs are bad at conditional statements (if/then/else costs a few clock cycles).
 

dragonsqrrl

Distinguished
Nov 19, 2009
1,280
0
19,290
[citation][nom]etrom[/nom]The statement of mine that you quoted above was based on my experience as a developer. All the talk NV is making around CUDA, Fermi, and parallel programming sounds very promising, but it's still very academic and theoretical. Meanwhile they're bashing multi-core serial processing, which is still evolving; man, the first Core 2 Duo was released only four years ago, in 2006! This discussion is much like the claim that Adobe Flash is obsolete and must be replaced: of course that will happen, but not in the next few years.[/citation]
Thanks for the clarification.
 

milktea

Distinguished
Dec 16, 2009
599
0
18,980
He gives the example of reading an essay, where a single reader can only read one word at a time – but having a group of readers assigned to a paragraph each would greatly accelerate the process.

What if you have to write an essay instead? Would a single writer complete an essay faster or would a group of writers be able to accelerate the process?
 

matt_b

Distinguished
Jan 8, 2009
653
0
19,010
[citation][nom]webbwbb[/nom]I was thinking the same thing, then I realized something. He is probably thinking of it in terms of the power requirements per core. With the GTX480 containing 480 cores and having a 250W TDP it is effectively using 0.5w per core (that includes reductions from that TDP for RAM). Current CPUs are using 32.5w per core. If you are going strictly on a rating of power per core then the Fermi architecture is actually 65 times as efficient as Intel's Bloomfield.[/citation]
From a watts-per-core standpoint, no doubt about it. My question is: how powerful is one core from the GTX480 compared to one CPU core of the Bloomfield architecture in your example? As a whole it may be 65 times more efficient, but is it 480 times more powerful/faster than a single core from a modern CPU? Core for core, no, it is not, so the analogy is flawed. We all know how much more powerful and complex today's GPU is compared to a CPU of the same era, but one CPU core does not equal the power of one GPU core in the way we can compare 32.5 W versus 0.5 W.

Furthermore, if more cores is not the answer, then why in the world did Nvidia shove "480" of them into one package, if that is their train of thought? Their logic doesn't follow through on their own beliefs; they're being hypocritical here. The ultimate goal would be to reduce core count, increase throughput, and reduce power consumption as a unit. As I see it, the industry is on the path of the complete opposite of all the above, especially power as a whole. Powerful or not, the GTX480 pulls a LOT of energy to function.
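
For reference, the arithmetic behind those per-core figures, assuming the GTX480's stock 250 W board TDP and the 130 W TDP of a quad-core Bloomfield (a rough sketch; the quoted 65x comes from rounding the GPU figure down to 0.5 W after deducting RAM power):

#include <cstdio>

int main()
{
    const double gpu_tdp_w = 250.0;   // GTX480 board TDP, before the RAM deduction mentioned above
    const int    gpu_cores = 480;     // CUDA cores
    const double cpu_tdp_w = 130.0;   // quad-core Bloomfield (Core i7-900 series) TDP
    const int    cpu_cores = 4;

    double gpu_w_per_core = gpu_tdp_w / gpu_cores;   // ~0.52 W; ~0.5 W once RAM power is subtracted
    double cpu_w_per_core = cpu_tdp_w / cpu_cores;   // 32.5 W
    std::printf("GPU: %.2f W/core, CPU: %.1f W/core, ratio ~%.0fx\n",
                gpu_w_per_core, cpu_w_per_core, cpu_w_per_core / gpu_w_per_core);
    // Watts per core is exactly the comparison being questioned here: a CUDA core
    // and a Bloomfield core are nowhere near equivalent in per-core performance.
    return 0;
}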
 

ncr7002

Distinguished
Jun 17, 2008
34
0
18,530
I disagree; I can't wait for my uber mega awesome 1024-core CPU, which will require at least an entire power plant to work... /sarcasm
 

tleavit

Distinguished
Jun 28, 2006
145
0
18,680
Is this an article from 10 years ago? This info isn't anything different from what I read from most people 10+ years ago.
 
Guest

Guest
Well, the reason Moore's law exists is that the integrated circuit shrinks every 18 months (the original version of Moore's law actually predicted every 12 months), so for the same amount of energy expended you get more performance. He should have said that performance doesn't scale linearly with the number of transistors, not that there is a "tremendous" expense in energy when you pack more transistors into a smaller process.
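
A quick sketch of what an 18-month doubling cadence implies, with purely illustrative numbers starting from a nominal 1x in year zero:

#include <cmath>
#include <cstdio>

int main()
{
    const double doubling_period_months = 18.0;  // the commonly cited Moore's law cadence
    for (int years = 0; years <= 10; years += 2) {
        double factor = std::pow(2.0, (years * 12.0) / doubling_period_months);
        std::printf("after %2d years: ~%.1fx the transistors\n", years, factor);
    }
    // Transistor count compounds geometrically, but as the post says, performance
    // (and performance per watt) does not scale linearly with that count.
    return 0;
}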

Also, Moore's law is more of a principle generalized from observation than a "theory"; it's wrong to call it a theory. In fact, you can argue that it's because Moore's law has become an industry goal that we observe the doubling of transistors every 18 months, since everyone in the industry is trying to keep up with it.
 

knowom

Distinguished
Jan 28, 2006
782
0
18,990
[citation][nom]matt_b[/nom]I totally agree with this statement here. However, if this were to change, and more were trained in how to properly program for parallel computing, then the same could be said about the need to train more on how to properly program for serial/series computing - which is where we are currently in processor design. I think it's more fair to say the insufficiency lies on both sides. On another note, am I the only one finding it amusing that the chief scientist of R&D at Nvidia is stating the CPU consumes too much energy??? Did he forget about the monster they just released, or does he still consider it to be within acceptable power requirements or efficient enough?[/citation] It's still a fair remark: CPUs do consume too much energy, AMD's more so than Intel's though, and they usually consume quite a bit more than they're rated for, because CPUs are often overclocked more than GPUs, and overclocked harder.
 
Good statement. We have to change the ways we program to get the most out of our processors these days.

Ironically, for those bashing the statement: if it were made by Intel or AMD or whoever your God is, you would follow it in a heartbeat. Don't forget that irony, people.
 

descendency

Distinguished
Jul 20, 2008
582
0
18,990
His point is irrelevant when you consider that the bulk of computers today are CISC based. Powering a chip that does a lot for you is only worthwhile if you actually use those capabilities.

When you consider how much more power efficient a good RISC processor is, you start to wonder why anyone would talk about efficiency (especially power efficiency) without talking about this.
 

spoofedpacket

Distinguished
Jun 13, 2009
201
0
18,690
[citation][nom]etrom[/nom]Ok, how about turning the billions of working COBOL lines of code running in mainframes of huge companies into parallel computing? You, Mr. Dally, do you accept this challenge? Nvidia is becoming a huge biased company, spreading the lobby of parallel computing to everyone. We are still waiting for something real (and useful) to get impressed.[/citation]

Those mainframes running COBOL apps don't need to change. I know it feels all internet-smart-guy to post that, and it will probably get thumbs up from a few ignorant people who have never worked on an aging mainframe or dealt with the preservation mindset around them; but it seems pathetically useless to 'challenge' someone who is stating a lot of facts to get into a new line of work instead of continuing to focus on more parallel computing for current and future generations of systems.
 

intelliclint

Distinguished
Oct 20, 2006
69
0
18,640
[citation][nom]nevertell[/nom]The problem now is x86 and the user base that depends on it. What we need is a new mainstream architecture that emulates x86 and does parallel stuff really fast.[/citation]

Every x86 chip from the Pentium Pro onward, along with AMD's Athlon and Transmeta's designs, is actually an emulator of the x86 CISC instruction set. They break the old x86 instructions down into micro-ops, which are executed in a RISC-like fashion; that is one of the reasons the multi-pipelined (superscalar) architecture inside each core works. Advances in performance have come from improvements in the decoder and from caching the already-decoded instructions.
I guess he actually missed that transition in the industry.
 
Parallel processing is the future; you can only keep increasing clock speeds so much before quantum physics kicks in. The current x86 (and by extension x64) instruction sets are heavily optimized for serial execution, with each processor acting like its own world.
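
For context (my addition, not the poster's): the usual first-order reason clock scaling stalled is dynamic power, which grows roughly as P = C * V^2 * f, and higher clocks generally demand higher voltage. A rough sketch with made-up capacitance and voltage values, for illustration only:

#include <cstdio>

// First-order dynamic power model: P = C * V^2 * f.
double dynamic_power_w(double switched_capacitance_f, double voltage_v, double frequency_hz)
{
    return switched_capacitance_f * voltage_v * voltage_v * frequency_hz;
}

int main()
{
    const double cap_f = 2.0e-8;  // hypothetical effective switched capacitance (20 nF)
    // Pushing frequency typically means pushing voltage too, so power rises much
    // faster than clock speed does.
    std::printf("2 GHz @ 1.0 V: %.1f W\n", dynamic_power_w(cap_f, 1.0, 2.0e9));
    std::printf("4 GHz @ 1.3 V: %.1f W\n", dynamic_power_w(cap_f, 1.3, 4.0e9));
    return 0;
}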

What we need is a fundamental shift from treating every program + processor as a closed system (back to v86 mode, which is still being used) to treating them as an open-ended system. I work mostly with SPARCs, and those things are amazingly powerful at parallel processing, but they fall behind x86 chips at serial processing. That makes them ideal for databases and number-crunching applications.
 
I've been envisioning a slightly different design change. Rather than the CPU becoming parallel in the way Nvidia envisions, I foresee the CPU remaining, with a parallel card to go along with it. Both have advantages. Why must we choose between them? We might start to focus more on parallel computing, but serial computing can't die either.
 

etrom

Distinguished
Apr 26, 2010
44
0
18,530
[citation][nom]spoofedpacket[/nom]Those mainframes running COBOL apps don't need to change. I know it feels all internet-smart-guy to post that, and it will probably get thumbs up from a few ignorant people who have never worked on an aging mainframe or dealt with the preservation mindset around them; but it seems pathetically useless to 'challenge' someone who is stating a lot of facts to get into a new line of work instead of continuing to focus on more parallel computing for current and future generations of systems.[/citation]

Totally agreed! Besides that, if everyone decided to downsize their systems, I would lose my job, hehehe. I'm only 23 years old, but I've been in the COBOL business for 4 years so far.
 

mahmoodadeel

Distinguished
Mar 7, 2010
2
0
18,510
You can't make computing much faster because hard drives are too slow compared to the CPU or GPU. First find a way to increase hard drive speed by at least 5 times, and then you'll all suddenly realize how fast your computer has become.
 