Nvidia: Moore's Law is Dead, Multi-core Not Future

The whole idea that parallel computing can miraculously solve all efficiency problems and that serial execution can be eliminated completely is unrealistic.

Like others here have already pointed out, at some point threads need to interact and exchange data that has been processed, and if one thread has not yet finished, the other thread is gonna have to wait. Parallel processing can't be perfect because different threads will have different run times.

Some tasks can be broken into efficient parallel threads, for example in graphics where some identical computations have to be repeated for a large array of data - this can be done and is being done with parallel processing units.

But some tasks can't be broken further into smaller identical parallel tasks, and then you fall back to the same situation we have now. If a thread is waiting, at best you can have the core it's using run another thread.

I do think that there's still room for improvement in the way programmers break applications and tasks into multiple threads. But serial programming is not going away.
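
Just to put a rough number on how hard that wall is, here's a back-of-the-envelope sketch of Amdahl's Law in Python (the 10% serial fraction and the core counts are made-up illustration values, not anything from the article):

[code]
# Amdahl's Law: if a fraction "serial" of the work cannot be parallelized,
# the best possible speedup on n cores is 1 / (serial + (1 - serial) / n).
def amdahl_speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Illustration values only: even with just 10% serial work,
# throwing 1024 cores at the problem never gets you past 10x.
for cores in (2, 4, 16, 64, 1024):
    print(f"{cores:5d} cores -> {amdahl_speedup(0.10, cores):.2f}x")
# 2 -> 1.82x, 4 -> 3.08x, 16 -> 6.40x, 64 -> 8.77x, 1024 -> 9.91x
[/code]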

And I agree, nVidia needs to talk less, and do more (or maybe this time, do Moore...)
 
Oh, while I am thinking about it: the ultimate serial process - pregnancy. No matter how many women you put to work on the task, it still takes 9 months (give or take) to produce a baby.
 
Well, I just got back from the future and what I saw was not what he said.

What I saw was something that looked like an SD card ... And you buy this sort of "blade-like" applications manager/multiplexer that has however many of these SD slots as you can afford, linked by fiber-bundles.

The way it worked was that, when you purchase an application, like Photoshop, it comes on this SD card that has a multicore serial proc for each module of code and for each major subroutine ... fractal distribution of embedded code, as flashable firmware, with its own optimized read and write-thru massive DRAM caching and dedicated SSD render-zones for one-way (read-to-write) rendering. The computer circuitry and optimized pipelining are actually compiled as part of the code ... the code and the hardware are inseparably integrated ... each sw module generates its own hardware architecture at compile time, and ... ultimately ... this will all be re-flashable ... But will be masked, in the early stages.

Very low power ... no piracy ... everything is based on "Super-Local" architecture.

Been there.

= Alvin =
 
Since he seems to know a lot about this and is chief engineer @ Nvidia, I'd like to see these thoughts and theories put to use on a video card that doesn't need a small power station to run, and maybe they can get it back down to a size that actually fits inside a PC and can still be called a "card".
 
Does NVidia build CPUs?

I have two words for those rebels without cause: Scrap x86. This would require ALL software to be re-written, but I'm sure that we could achieve much better performance with a modern architecture while having the same number of transistors.
 
[citation][nom]teodoreh[/nom]Does NVidia build CPUs? I have two words for those rebels without cause: Scrap x86. This would require ALL software to be re-written, but I'm sure that we could achieve much better performance with a modern architecture while having the same amount of transistors..[/citation]

...And while we are at it, scrap all wheels, screws and levers. I am sure that we could easily replace all of those technologies in a short time if we only think outside of the box...
 
Simple: as in some popular movies, someone needs to say "We're gonna need a bigger boat!" So if we go parallel we need new software, and for most of us a new OS, and no, M$ won't be happy with that.
 
Nvidia: Moore's Law is Dead, Multi-core Not Future. Are they SURE about that? Don't make people laugh at your comment by showing you know nothing just because you are Nvidia. LMAO. Who says Moore's Law is DEAD? What a laughing stock. Well, I say Moore's Law is not dead at all.
 
I wish NVIDIA would just shut up and put out their own processor already. They talk so much crap and act as if they're so very right in everything they say. Yet all they do is just whine.
 


Not rewritten, but recompiled, except for any asm code; that would need to be rewritten.
 
He is right; the future lies in architectures like CELL. The problem is that most "programmers" today expect easy systems.
Remember Fortran 66? Or punch cards? Remember when the program HAD to fit in only a few bytes of memory?
Cell is to programmers today what a computer system was in the '60s.
 
"jsc
Oh, while I am thinking about it: the ultimate serial process - pregnancy. No matter how many women you put to work on the task, it still takes 9 months (give or take) to produce a baby"

There's a very easy solution to that: get more than one girl pregnant at a time...
 
For the most part, I have to agree with Nvidia. I do believe there needs to be a fundamental change in processors and programming as a whole. The use of x86 is inefficient no matter how fast you run it. It's like comparing a top-fuel dragster to my Honda Civic. Sure, I'd get pwned on a single straight course, but on a multi-turn road track I'd be the one sitting in the winner's circle.
 
He didn't take the essay analogy far enough. Yes, a group of people splitting up a paragraph would indeed get through it faster than a single person reading it alone. But what if that group were also asked to come up with a one-sentence summary of the paragraph?

You would still end up with people in the group sitting around waiting for everyone else to finish reading so that they could discuss what was read and then formulate the one-sentence summary of the paragraph.

That is the problem with parallel processing. As others have pointed out, it's great for tasks that don't depend on the completion (or failure) of other tasks, but any task that depends on the results of a previous task can't simply be run in parallel with it.
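
Here's a tiny Python sketch of that reading-group analogy (the paragraph and the word-count "reading" are invented for illustration): the sentence reading parallelizes fine, but the one-sentence summary is a serial step that has to wait for the slowest reader.

[code]
from concurrent.futures import ThreadPoolExecutor

paragraph = ["First sentence here.", "Second sentence here.",
             "Third sentence here.", "Fourth sentence here."]

def read_sentence(sentence):
    # Each "reader" handles one sentence independently; this part runs in parallel.
    return len(sentence.split())

with ThreadPoolExecutor() as pool:
    # map() blocks until every reader has finished, so anyone who finished
    # early just sits there waiting for the slowest reader.
    word_counts = list(pool.map(read_sentence, paragraph))

# The summary is the dependent, serial step: it needs all of the results.
print(f"The paragraph has {len(paragraph)} sentences and {sum(word_counts)} words.")
[/code]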
 
Well, I'm so far not impressed by how much Intel has held back technology. I mean, sure, I'd still buy Intel products, but it's crummy. The i7 and the whole i-series are ones I'm skipping this time around; a Core 2 Duo still runs all the games perfectly fine. The i7 has an integrated memory controller in the CPU, but the RAM is still outside the CPU (which is fine, I guess); the problem is that the RAM is shared by all 8 threads. On a Core 2 Duo, of course, the RAM is shared by both threads too. We aren't improving a whole lot here. It can compute a lot faster than a Core 2, but they need to do a lot more to change the architecture of the modern PC.

There is a lot of room for improvement. Just talking about Moore's Law and this and that doesn't cut it for me. There are several different angles from which they could approach building a computer from the ground up that they haven't even scratched the surface of yet.

But anyway, it looks like the stock market is crashing, so by not buying computers and such I'm hanging onto a bunch of money that I will need to survive on.
 
While we are trying to make programmers program via parallel processing, let's create Mr. Fusion and have hoverboards... because that's what we need, declaring everything in the past dead...
 
[citation][nom]tester24[/nom]Changing to parallel computing is going to be the equivalent of changing the auto industry from using gasoline to hydrogen. It won't be a fast change, and software companies are ill-equipped to deal with this change as of right now.[/citation]

That critical first step is getting programmers to code and optimize for 64-bit chips instead of 32-bit. Probably only about 25% of mainstream consumers' computers still have non-64-bit chips, and those people need to throw away their machines (salvaging whatever updated parts they put in them) and build anew with 64-bit chips! Most consumers don't care too much about the technology; it's we tech folk, with our fondness for cutting-edge innovations, who have been on a fast track to having hardware more capable than the software, with no apps to fully utilize it. Eventually you will be able to overclock your machine to encode/decode Blu-ray 1080p discs faster than real time; what will be the next killer consumer app then? Huh?

Sometimes a car design, just like a processor design, is what it is; changing the nature of its power is a fundamental paradigm shift. It's not as simple as going from 4 bits to 8 bits to 16 bits to 32 and now 64 bits. More often than not, consumers are hanging on to their computing devices until they die (primarily for economic reasons more than anything).
 
I'm having a tough time taking this Forbes column seriously when the author misquotes Moore's Law and confuses it with House's Law. David House, Moore's colleague at Intel, was the one who predicted computing power would double every 18 months, not Moore. So Bill Dally's beef is with House.

Bill Dally is a respected mind, and I don't wish to disparage the man, but the way I see it, Dally either 1) didn't do the proper research before writing his column and declared Moore's Law dead before even reading Moore's 1965 essay, or 2) knew Moore's Law quite well but decided, since his beef was with House's Law, that he'd take on Moore because it's sexier and nobody knows who David House is.
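
For what it's worth, the two "laws" really do give different numbers. A quick Python comparison using the commonly cited figures (Moore's 1975 revision: transistor counts double roughly every two years; House's version: performance doubles every 18 months; the 10-year horizon is just an illustration):

[code]
# Anything doubling every "period" years grows by 2 ** (years / period).
def growth(years, doubling_period_years):
    return 2 ** (years / doubling_period_years)

years = 10  # illustration only
print(f"Moore (transistors, ~2-year doubling):  {growth(years, 2.0):.0f}x")  # 32x
print(f"House (performance, 18-month doubling): {growth(years, 1.5):.0f}x")  # ~102x
[/code]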

Either way? FAIL.
 