Nvidia Predicts 570X GPU Performance Increase

[citation][nom]ubernoobie[/nom]that bugatti veyron looks awfully realistic[/citation]
To me it looks like something you would see in NFS, nothing impressive.
 
Now that I'd like to see :) Really though, that's a bit outlandish. NVIDIA had better have something big up its sleeve to back up claims like that, because in my experience, nothing in this industry advances in leaps like that.
 
Yeah, right... Nvidia has some magical breakthrough up their sleeves: 570x the performance in the same power envelope, since there's no way they can dissipate any more heat than they already are...

Intel already tried and failed at this kind of breakthrough. They whipped up a frenzy with their sham terascale demonstration, then later realized that Larrabee was going to suck and quietly watered down expectations. It wouldn't even surprise me now if they quietly cancelled its release altogether.
 
Here's my 'translation' of this article: the amount of GPU computing (using the GPU for non-graphics work) will increase by 570x. Considering the nearly non-existent uses for GPU-based computing today, it's definitely conceivable that there could be 570x more uses for GPU-based computing. This just seems like whoever picked up the story took the figure out of context. It's technically true, but not in the way we're reading it.
 
"a mere 570 times that of today's capabilities in fact, while CPU performance will only increase a staggering 3x in the same timeframe"

I think someone got "mere" and "staggering" backwards...

On topic: I think that this sounds more like marketing hyperbole. I agree with intesx81's assessment.

-mcg
 
570x might not be that unrealistic. NVidia already have a roadmap and probably some rough designs for the stuff they'll be releasing in 2015. The thing is, they've kind of got an easier job than CPU manufacturers, purely because they only deal in highly parallelizable code. When Intel/AMD release a new CPU, they can't just double the number of cores; they have to make those individual cores faster than before, because a lot of tasks simply can't be parallelized well. GPU makers don't have these problems. They can literally double the number of number-crunching units without making those units any faster and end up with a GPU that can do twice the number-crunching work.
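
A rough illustration of that point (a minimal sketch assuming a simple Amdahl's-law model; the 5% serial fraction for the CPU-style workload is just an illustrative number I picked, not anything from Nvidia's claim):

[code]
# Amdahl's law: speedup = 1 / (serial_fraction + parallel_fraction / n_units)
# A fully parallel (GPU-style) workload scales with the unit count; even a small
# serial fraction (CPU-style) flattens the curve quickly.

def amdahl_speedup(n_units, parallel_fraction):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_units)

for n in (2, 8, 64, 512):
    gpu_style = amdahl_speedup(n, 1.00)   # embarrassingly parallel shader-type work
    cpu_style = amdahl_speedup(n, 0.95)   # 5% of the code stays serial
    print(f"{n:4d} units: GPU-style {gpu_style:6.1f}x, CPU-style {cpu_style:5.1f}x")
[/code]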

Think of it this way: right now nVidia's top-spec GPU is the G200b, found in the GTX 285, made on a 55nm fabrication process. Current expectations are that 22nm fabrication will take over in about 2011-2012, with 16nm coming in about 2018 at the latest. On a 22nm process, a die the same size as the G200b's could fit 8 copies of the G200b; 16nm would give 16. You could say that GPU makers get that kind of scaling 'for free'. It's then up to them to come up with advances in design etc. A 16nm process would also run a fair bit faster anyway, without the use of extra cores.

16nm die fab = 16x as many processing units + faster processing units

Put together 16nm die fab + 6 years of refinements and R&D + larger dies if necessary, and a 570x increase isn't that far-fetched a claim.
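
For what it's worth, a back-of-the-envelope check of that 'free' scaling, assuming transistor density goes as the inverse square of the feature size (a big simplification; real process nodes don't shrink every dimension equally). From a 55nm baseline the numbers come out a bit lower than 8x/16x; from a 65nm baseline they roughly match:

[code]
# How many copies of a die's worth of logic fit after a shrink, if density
# scales with the inverse square of the (linear) feature size.
def copies(base_nm, target_nm):
    return (base_nm / target_nm) ** 2

for base in (65, 55):
    for target in (22, 16):
        print(f"{base}nm -> {target}nm: ~{copies(base, target):.1f}x the units in the same die area")
[/code]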
 
Spanky_Deluxe: That is some of the most epic fail math ever. 22nm will accommodate 2.5x as many transistors as 55nm, not 8x. Currently they don't believe they'll get past 22nm due to quantum effects, so 16nm is a moot point until they tell us otherwise. Even if your horrible math were right, how do you explain the 570x increase with your theoretical 16x the transistors per die size? Quadruple the die size (not realistic, and yields would be horrible) to get 64x, and you still need to somehow wring 9x the performance per transistor out of an already mature tech. When pigs fly.
 
[citation][nom]MrCommunistGen[/nom]"a mere 570 times that of today's capabilities in fact, while CPU performance will only increase a staggering 3x in the same timeframe"I think someone got "mere" and "staggering" backwards...On topic: I think that this sounds more like marketing hyperbole. I agree with intesx81's assessment.-mcg[/citation]

I think it was sarcasm. But you know how that doesn't always translate well through text.
 
I suspect they meant 570%, not 570x. 570% is only 5.7x, which is much more realistic.
 
[citation][nom]schuffer[/nom]I suspect they meant 570%, not 570x. 570% is only 5.7x, which is much more realistic.[/citation]

While I doubt the 570x prediction myself, it is actually what he said, or at least what his slide said. Check the pictures.

http://blogs.nvidia.com/nTersect/
 
[citation][nom]Spanky_McMonkey[/nom]Spanky_Deluxe: That is some of the most epic fail math ever. 22nm will accomodate 2.5x as many transistors as 55nm, not 8x. Currently they don't believe that they'll get past 22nm due to quantum effects, so 16nm is a moot point until they tell us otherwise. Even if your horrible math was right, how do you explain the 570x increase with your theoretical 16x the transistors per die size? Quadruple the die size(not realistic, and yields would be horrible) to get 64x, then you still need to somehow wring 9x the performance per transistor of an already mature tech. When pigs fly.[/citation]

It was a very rough calculation and, besides, I was doing it on an area basis. The xx nm refers to the minimum size of features that can be drawn, but it's a length. It's like resolution: if you can fit twice as many of something in widthwise and twice as many lengthwise, then you can fit a total of 4x as many things in overall.

So, if the feature length you're using is 55nm (apologies, my original calculations were actually with 65nm), then going to 22nm could fit 6.25x the amount of stuff into the same space, and 16nm could fit 11.8x. Of course, if you also increase the layer count proportionately (a little unrealistic IMO, since layers are pretty thick due to all the extra stuff used), then the same volume as a 55nm chip could hold 15.6x as much processing power with the switch to 22nm, or 40.6x as much with a jump to 16nm.

40x could easily become 80x with a doubling of the core frequency, 80x could become 320x by making the dies four times as large (or simply putting four GPU chips on one board), and 320x could become 570x with improvements in design. Yes, in some respects transistors are mature tech; however, when you get down to the 16nm level it's not quite so 'mature', since you have all kinds of quantum problems/advantages to deal with.
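
Multiplying that chain out (every factor here is an assumption from the post above, not a roadmap figure; the 'design improvements' multiplier is simply whatever is left over to reach 570):

[code]
# Back-of-the-envelope chain: 55nm -> 16nm volume scaling, doubled clocks,
# 4x die area (or four GPUs per board), and the remainder chalked up to design.
volume_scaling = (55 / 16) ** 3          # ~40.6x if layer count scales too (optimistic)
clock_doubling = 2.0                     # assumed frequency gain by 2015
bigger_dies    = 4.0                     # 4x die area, or four GPU chips on one board
subtotal = volume_scaling * clock_doubling * bigger_dies   # ~325x
design_factor = 570 / subtotal           # ~1.75x left to come from design improvements
print(f"{volume_scaling:.1f} x {clock_doubling:.0f} x {bigger_dies:.0f} = {subtotal:.0f}x; "
      f"~{design_factor:.2f}x more needed from design to reach 570x")
[/code]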
 
Unless something comes along that completely changes the GPU, like a new design or some kind of new material to replace silicon, or they make the GPU even bigger (let's say the size of a motherboard 😛), there's no chance it's going to be 570x. Not to say I don't want it to happen.
 