Nvidia Updates Its GPU Roadmap at GTC 2013

Wasn't it that Intel made an agreement with Nvidia that permanently shuts Nvidia out from getting an x86 license, as part of the $1.5 billion settlement paid to Nvidia? Also, fighting Intel head to head on x86 would most likely be a waste of resources and R&D for Nvidia.
 

downhill911

Honorable
Jan 15, 2013
Nice to see how many engineers and scientists are reading Tom's Hardware and are able to comment on, judge, and criticize something about which we have almost absolutely zero information, since it is from the future.
Or do you guys have your personal time travel machines?
 

f-14

Distinguished
[citation][nom]downhill911[/nom]Nice to see how many engineers and scientist are reading Tomshardware and are being able to comment, judge and criticize on something which we have almost absolute 0 information since it is from future.Or do you guys have your personal time travel machine?[/citation]

One thing I learned about Best Buy and their logistics: it's in stock six months before they sell it.

So as far as time travel machines go, with only three months left before it's in stock at distribution centers, that means it's actually in production RIGHT NOW. So while you can throw your pennies at a wishing well, some people actually have their hands on them. I'm pretty sure people had their hands on them in 2012, you know, the ones called evaluation versions, so people can test them for defects and code/calculation problems.

Yeah... maybe if you had read those TPS reports... why do you think they are given out in triplicate!

I'm sorry, but you're going to have to meet with the Bobs.
 

downhill911

Honorable
Jan 15, 2013
[citation][nom]f-14[/nom]one thing i learned about best buy and their logistics, it's in stock 6 months before they sell it.so as far as time travel machines go with only 3 months left before it becomes in stock at distribution centers means it's actually in production RIGHT NOW. so while you can throw your pennies at a wishing well some people actually have their hands on them. i'm pretty sure people had their hands on them in 2012 , you know, the ones called evaluation versions so people can test them for defects and code/calculation problems.ya..maybe if you would have read those TPS reports..... why do you think they are given out in triplicate!i'm sorry but your going to have to meet with the BoBs.[/citation]
So you're saying that these people with their hands on Maxwell and Tegra 4 chips posted those comments on Tom's Hardware?
 

knowom

Distinguished
Jan 28, 2006
The unified virtual memory on Maxwell sounds good, but it's still not the VRAM RAM disk I've been hoping and waiting for someone to make for Windows for the last five years or so. At least VRAM for virtual memory is a step closer.
 


The licenses are not transferable; this has been agreed among the x86 licensees. It would be a different story if VIA were the one to acquire Nvidia. This might also be the very reason why nobody is going to buy AMD, even though AMD is cheap to acquire.
 

evga_fan

Distinguished
Aug 22, 2010
Lol, am I the only one who thinks the GPU roadmap looks like a 'fail' and (for once) contradicts what Nvidia means?
We (and certainly they) want every generational leap to bring at least an equal or exponentially greater performance increase compared to the last gen. If you look at the graph, every generational leap seems to yield less and less of a boost in performance. That is, until you look more closely at the graph, and the Y-axis in particular, where they've decided to scale the numbers exponentially the further up you move. This gives the curve the look of 'deteriorating' performance, when really they should flip the curve to convey an exponential increase in performance.
Not that brilliant from a marketing perspective, to say the least! Maybe they're just making up for past times when they've, for instance, chopped off the lower parts of a bar chart, focusing only on the differences between the bars and making it look like Nvidia is several times better than the competition.
 

kartu

Distinguished
Mar 3, 2009
[citation][nom]nukemaster[/nom]I do not think Intel will let that happen.[/citation]
Yeah, if not for that licensing thing, mighty Nvidia would have created x86 CPUs that are twice as fast and consume half the power.
Oh, give me a break...
 

ojas

Distinguished
Feb 25, 2011
I don't understand how the unified memory interface will work with Maxwell cards; it's not like regular CPU memory interfaces are designed for GDDR, nor will Maxwell suddenly start using DDR. It could work with Parker, but I'm unable to understand the unified address space thing for regular PCs.

I mean, sure, the CPU would benefit, and the iGP of the CPU could benefit, but how does Nvidia's own GPU benefit?
[citation][nom]nukemaster[/nom]I do not think Intel will let that happen.[/citation]
Even if they did, I'm not sure I see the point. They'd be too far behind Intel/AMD to make the effort worthwhile. VIA exists, sure, but barely.

And it's not like they'd suddenly come up with the best x86 CPU EVAR! I mean, look at Tegra/2/3. They'd need to dump tons of money into R&D and start from the 486 designs, or something else that isn't a protected design anymore. AMD doesn't have much money, but at least they don't have to start from scratch.

Only possible if they buy AMD/VIA or something.
 

ojas

Distinguished
Feb 25, 2011
[citation][nom]ojas[/nom]Only possible if they buy AMD/VIA or something.[/citation]
[citation][nom]renz496[/nom]the license are not transferable. this has been agreed between the x86 licensee. it would be different story if via are the ones that acquire nvidia. this might be also the very reason why nobody going to buy amd even if amd are cheap to acquire.[/citation]
Oops, forgot that.
 


I too don't know for sure what it would accomplish, but whether the memory is DDR or GDDR shouldn't matter. The CPU shouldn't care what type of memory is on the graphics card, because it's shared memory, and the same is true for the GPU accessing the CPU's memory, just like the CPU doesn't care what kind of storage the paging file or swap partition is on.
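The paging-file analogy above can be illustrated without any GPU at all: code reading a memory-mapped file uses plain slicing and neither knows nor cares what storage backs the mapping, which is the same kind of transparency a unified address space would give CPU code touching card memory. A minimal sketch in Python (the temp file and its contents are just stand-ins):

```python
import mmap
import os
import tempfile

# Create a small backing file; the storage behind it could be an HDD,
# SSD, or RAM disk. The reading code below cannot tell the difference.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"texture data lives wherever the OS put it")

with open(path, "rb") as f:
    # Map the file into this process's address space.
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as view:
        # Plain slicing, identical to reading an in-RAM bytes object.
        print(view[:7].decode())  # prints: texture

os.remove(path)
```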

One thing that I'd like to say to people mentioning the PCIe bottleneck here: since graphics cards at worst are generally not bottlenecked even by PCIe 2.0 x4, using even most of the PCIe bandwidth for the CPU to access a little GPU memory shouldn't hurt much. Since that would also be only a small fraction of the GPU's memory bandwidth, I don't think it would hurt even gaming performance much to use up to around 10GB/s of the GPU's memory bandwidth and PCIe bandwidth (for PCIe 3.0 x16) for the CPU.
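The link-speed figures behind that estimate can be checked with quick arithmetic: PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding, so an x16 link peaks near 15.75 GB/s per direction, comfortably above the ~10 GB/s mentioned. A quick sketch (the function name is just illustrative):

```python
# Theoretical per-direction bandwidth of a PCIe link in GB/s.
def pcie_bandwidth_gbs(gt_per_s, encoding_efficiency, lanes):
    # GT/s * encoding efficiency = usable gigabits/s per lane;
    # multiply by lane count and divide by 8 bits/byte.
    return gt_per_s * encoding_efficiency * lanes / 8

# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding.
gen3_x16 = pcie_bandwidth_gbs(8, 128 / 130, 16)
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding.
gen2_x4 = pcie_bandwidth_gbs(5, 8 / 10, 4)

print(f"PCIe 3.0 x16 ~ {gen3_x16:.2f} GB/s")  # ~15.75 GB/s
print(f"PCIe 2.0 x4  ~ {gen2_x4:.2f} GB/s")   # ~2.00 GB/s
```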

As for copying textures over to the graphics card, or providing a more coherent way to transfer data for CPU PhysX, it might actually help a little.
 