Rumor: Nvidia Prepping to Launch Kepler in February

[citation][nom]ukulele97[/nom]In February you'll see Nvidia CEO waving another mockup on a stage and telling you how great the card will be some day in a very distant future...You remember this?http://semiaccurate.com/2009/10/01 [...] oards-gtc/[/citation]
The mockup was just to show what Fermi would look like; that's no problem at all.
Check out the pcwatch link and look at the demonstration. Do you think it's possible they used a mockup there too?
 
[citation][nom]ukulele97[/nom]In February you'll see Nvidia CEO waving another mockup on a stage and telling you how great the card will be some day in a very distant future...You remember this?http://semiaccurate.com/2009/10/01 [...] oards-gtc/[/citation]

Holy F that blew my mind. Some basement dwellers have far too much time on their hands.

OMGSS!!~11!!!!! The card is a fake!! The Cake is a lie, the world is ending.!!~!!!!
 
[citation][nom]phate[/nom]Holy F that blew my mind. Some basement dwellers have far too much time on their hands. OMGSS!!~11!!!!! The card is a fake!! The Cake is a lie, the world is ending.!!~!!!![/citation]

But... uh... the cake IS a lie.
 
From the rumors and leaks I've seen, Nvidia's first Kepler GPU will be positioned in the upper mid-range segment of their lineup, probably the equivalent of the GTX 560 Ti/GF114. It won't be their high-end 512-bit GDDR5 card, but unfortunately it'll probably be priced like a high-end card if the rumors of its performance being competitive with the HD7970 turn out to be true.

The Kepler architecture is based on Fermi, so making predictions about its specs and performance probably isn't quite as pointless as it was for the transition from G70 to G80, or GT200 to GF100. This is pure speculation, but based on the rumored memory capacity, performance, and release schedule, here's my best guess for the specs of Kepler's initial release card:

768 CUDA Cores
2GB 256-bit GDDR5 (high-clocked, at least 160+ GB/s bandwidth)... this seems far more likely to me given the rumored memory capacity, especially if this isn't their high-end GPU.
or 2GB 512-bit GDDR5 (lower-clocked, potentially much higher bandwidth... but really, Nvidia?)

Assuming the rumors are true, this seems reasonable and logical, and, if clocked right, potentially very competitive with the HD7970 on performance. The big unknowns are transistor count/die size, power consumption, physical dimensions, and availability.
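Just to make the bandwidth arithmetic behind those two options concrete, here's a rough sketch (the clock figures are assumptions for illustration, not leaked Kepler specs):

[code]
# Rough sketch: memory bandwidth from bus width and effective data rate.
# The example clocks below are assumptions for illustration, not Kepler specs.

def bandwidth_gbps(bus_width_bits, effective_rate_gtps):
    # GB/s = (bus width in bytes) * (effective transfer rate in GT/s)
    return (bus_width_bits / 8) * effective_rate_gtps

print(bandwidth_gbps(256, 5.0))  # 160.0 GB/s -- 256-bit with high-clocked GDDR5
print(bandwidth_gbps(512, 4.0))  # 256.0 GB/s -- 512-bit with lower-clocked GDDR5
[/code]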
 
I've been kind of worried about all this secrecy surrounding Kepler. I've been looking forward to seeing nVidia's answer to Southern Islands. The lack of any talk may mean that nVidia isn't all that confident about what their answer is. I'm not thinking that "Kepler will fail," but rather, as others have suggested, that nVidia may pull a bait-and-switch and give us simply yet another refreshed Fermi. Given how much of a leap SI is over the best Fermi has to offer, this would be a lose-lose proposition for all of us: nVidia's cards would fall way behind, and AMD's would have no reason to drop from their already-unprecedented (and exorbitant) prices; $549US is a steep price for a single-GPU card.

I'm hoping that this sudden move of the release date means that nVidia actually has something good on their hands. At the very least, it should mean that we'll get our answers on just what they have in store soon enough.
[citation][nom]lord captivus[/nom]Nvidia, please dont rush it...I own a GTX 285, I allready have 512 bus![/citation]
In all honesty, it doesn't DIRECTLY matter whether it has a 512-bit, 256-bit, or 384-bit interface. A 512-bit interface is no more advanced than a narrower one, just of a greater scale: kind of like how two GPUs aren't necessarily more advanced than one.

This is the same deal as the question about the Radeon 7970's memory type and interface width: both are just means to an end. Making the interface wider and using more advanced (faster) memory chips are both ways of increasing a video card's memory bandwidth, and both increase the cost of production.

The reason not everyone just makes arbitrarily wide memory interfaces is twofold. For one, as I've often said, it makes the PCB more complex, as each extra bit requires additional pins from the GPU and the appropriate traces on the board. This doesn't just make it harder to design a (larger, more expensive) board to fit it all; it also further raises the price by requiring more RAM chips to be installed: remember that every 32 bits of interface means another chip.
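To put rough numbers on that "every 32 bits means another chip" point, here's a quick sketch (it assumes the typical one 32-bit GDDR5 chip per 32 bits of interface, which is how these cards are normally laid out):

[code]
# Sketch: how bus width dictates chip count and required per-chip density.
# Assumes one 32-bit GDDR5 chip per 32 bits of interface (typical layout).

def chips_and_density(bus_width_bits, total_capacity_gbit):
    chips = bus_width_bits // 32
    density_per_chip_gbit = total_capacity_gbit / chips
    return chips, density_per_chip_gbit

# 2 GB total = 16 Gbit
print(chips_and_density(256, 16))  # (8, 2.0)  -> eight 2-gigabit chips
print(chips_and_density(512, 16))  # (16, 1.0) -> sixteen 1-gigabit chips
[/code]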

The other reason is more technical: because of those extra pins, a memory interface also requires a certain amount of space along the edge of a GPU's die for all the leads to connect to. That means that to get a memory interface of a certain width, the GPU's die area must pass a certain threshold. The relationship is pretty easy to see: across nVidia's GPUs, there's been a fairly strict boundary in die area where the width of the interface changes:

- less than 110 mm²: 64-bit
- 115-196 mm²: 128-bit
- 196-368 mm²: 256-bit
- 484-529 mm²: 384-bit (G80, GF100, GF110)
- 470 mm² and 576 mm²: 512-bit (GT200b and GT200, respectively)

Granted, the GT200b was a die-shrink of an existing chip, so the takeaway here is that a 512-bit memory interface is unlikely unless the die is perhaps at least 550 mm² in size.
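Treating that correlation as a rough lookup (and to be clear, it's just an observed pattern from past GPUs, not a hard rule), it might look something like this:

[code]
# Sketch: rough die-area -> likely memory interface width, based on the
# historical boundaries listed above. An observed correlation, not a rule.

def likely_bus_width(die_area_mm2):
    if die_area_mm2 < 110:
        return 64
    elif die_area_mm2 < 196:
        return 128
    elif die_area_mm2 < 370:
        return 256
    elif die_area_mm2 < 550:
        return 384
    else:
        return 512

print(likely_bus_width(300))  # 256
print(likely_bus_width(500))  # 384
print(likely_bus_width(570))  # 512
[/code]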

[citation][nom]sanadanosa[/nom]The mockup was just to show what Fermi would look like; that's no problem at all. Check out the pcwatch link and look at the demonstration. Do you think it's possible they used a mockup there too?[/citation]
As I recall that event, the problem with the Fermi "fake" was that nVidia's own CEO declared that it was the real thing. Not once did nVidia back away from their obviously-incorrect claims. Showing a mock-up card as a demonstration of what the thing will look like is one thing, but it becomes problematic when the maker claims it's a REAL card.


I do agree that 2GB on a 512-bit interface is highly unlikely; while 1-gigabit GDDR5 chips exist, I don't think they'd see much use on such a high-end card in place of the 2-gigabit ones used on the 7970.

As for that range of bandwidth on a 512-bit interface... the 2.0-3.0 GHz clock range for VRAM is kind of a wasteland: GDDR5 doesn't go that slow, and DDR3 doesn't go that high for video cards (at least not yet). GDDR3 has gotten as high as 2.48 GHz for the GTX 285, but I'd wager such chips cost considerably more than any DDR3 or slower GDDR3. Similarly, GDDR4 is the only thing that's filled the rest of that range, and the speed with which it was dropped indicates that it likely costs nearly as much as GDDR5.


Well, given the (assumed) switch to the latest 28nm fabrication process and a mere 50% increase in processing elements, I'd estimate a die size of around 400 mm², give or take (probably closer to 380 mm² for a 256-bit interface, and up to 420-450 mm² for a 384-bit one).

As for performance, if nVidia goes with lower memory bandwidth, I would honestly question how competitive it could be with the 7970; at that point the design, I think, may start becoming bottlenecked by its own memory bandwidth. nVidia's top-end cards have sat in the range of 160 GB/sec since the GTX 280. Given that 5.5 GHz appears to be the current ceiling for GDDR5, that means 176 GB/sec would be the ceiling for a 256-bit interface. This would likely be insufficient for that level of horsepower; AMD's unprecedented adoption of a 384-bit interface for Tahiti hints at the same thing.
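To spell out where that 176 GB/sec figure comes from, and what the wider buses would top out at under the same assumed 5.5 GHz GDDR5 ceiling, here's a quick sketch:

[code]
# Sketch: bandwidth ceiling per bus width at an assumed 5.5 GT/s GDDR5 limit.
GDDR5_CEILING_GTPS = 5.5

for width_bits in (256, 384, 512):
    gbps = (width_bits / 8) * GDDR5_CEILING_GTPS
    print(f"{width_bits}-bit: {gbps:.0f} GB/s")
# 256-bit: 176 GB/s, 384-bit: 264 GB/s, 512-bit: 352 GB/s
[/code]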
 
[citation][nom]nottheking[/nom]Well, given the (assumed) switch to the latest 28nm fabrication process and a mere 50% increase in processing elements, I'd estimate a die size of around 400 mm², give or take (probably closer to 380 mm² for a 256-bit interface, and up to 420-450 mm² for a 384-bit one). As for performance, if nVidia goes with lower memory bandwidth, I would honestly question how competitive it could be with the 7970; at that point the design, I think, may start becoming bottlenecked by its own memory bandwidth. nVidia's top-end cards have sat in the range of 160 GB/sec since the GTX 280. Given that 5.5 GHz appears to be the current ceiling for GDDR5, that means 176 GB/sec would be the ceiling for a 256-bit interface. This would likely be insufficient for that level of horsepower; AMD's unprecedented adoption of a 384-bit interface for Tahiti hints at the same thing.[/citation]
A good assessment, and I basically agree with the concerns you raised about the specs I've posted. Something doesn't quite add up. The question is: is ~160-180 GB/s enough bandwidth to feed a 768 CUDA Core GPU? It certainly seems like a potential bottleneck. If Nvidia goes with a 384-bit interface, that would clear up any concerns over memory bottlenecks, but it would also severely limit their options for a mobile version of the GPU. Nvidia traditionally uses desktop GPUs from this performance segment (upper mid-range) in their high-end mobile cards (GF114, GF104, G92...). They could always narrow the interface for the mobile high-end (like the GTX 480M, for example), but that's less than ideal. It would also mean the rumors regarding the 2GB memory capacity are completely wrong, but that wouldn't really surprise me.
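One crude way to frame the "is that enough bandwidth?" question is bandwidth per shader core relative to existing cards. The figures for the shipping cards below are their usual published specs; the Kepler line is pure speculation, and comparing per-core numbers across architectures is only a rough sanity check since per-core throughput changes between generations:

[code]
# Very crude balance check: memory bandwidth per shader core.
# Published specs for shipping cards; the "rumored Kepler" entry is speculation.
cards = {
    "GTX 560 Ti (GF114)": (384, 128.3),  # (CUDA cores, GB/s)
    "GTX 580 (GF110)":    (512, 192.4),
    "Rumored Kepler":     (768, 176.0),  # 256-bit @ 5.5 GT/s, speculative
}

for name, (cores, gbps) in cards.items():
    print(f"{name}: {gbps / cores * 1000:.0f} MB/s per core")
# GTX 560 Ti: ~334, GTX 580: ~376, rumored Kepler: ~229 MB/s per core
[/code]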

Who knows... The only alternative I've thought of is a possible 512 CUDA Core GPU with 2GB of GDDR5 on a 256-bit interface. With the architectural enhancements in Kepler, it could potentially perform competitively with the HD7970; the core would just have to be clocked significantly higher than 780 MHz. Ultimately this approach does seem a little more balanced, given the bandwidth a high-clocked 256-bit memory interface could provide.
 
I don't care about any PC graphics cards until a new generation of consoles is released. In case you haven't noticed, basically all PC games have been built to easily fit PS360 hardware specifications.
 
Well well well:
http://semiaccurate.com/2012/01/19/nvidia-kepler-vs-amd-gcn-has-a-clear-winner/
 
"and AMD's would have no reason to drop from their already-unprecedented (and exorbitant) prices; $549US is a steep price for a single-GPU card."

You didn't live through the days of the GeForce 2 Ultra, did you? ^^
 
So the 680 will be about 15-20% faster than a 580, just as the 7970 is. So much for all the bollocks about it being 80% faster. Yet another bunch of people are going to throw all their cash down the toilet again, and then again when the Nvidia 700 and ATI 8000 series come out in 2013. Lmao, makes me laugh!
 
No point upgrading now; I'm going to wait until the new Xbox 720 and PS4 are out. Nearly all games are ported from those anyway, so I'll see what sort of performance they have and then build a PC that matches those specs. Or you could be a total moron and throw your cash down the toilet for no gain.
 