Details on Nvidia Kepler GK104-based GTX 670 Ti Surface

Status
Not open for further replies.
[citation][nom]ojas[/nom]No, this is most likely not speculation. When Kepler went live, the press was given access to their corporate ftp server for the launch goodies (pics, white-papers, reviewers guides, etc). Anyway, you know files have tags, right? So at least two of the supposed GTX 680 images had 670 Ti as a tag. I mean, I have the files with me, so this is first-hand info. Anyway, this is believable, because they do the same with the 560 Ti 448 Core edition.[/citation]

You should try reading the source link. All these 3DCenter rumors are based on their forum speculation, and the news post says so right there. Even the specs have "possibly" and the like right there in parentheses.

This is how baseless rumors are born: take a foreign source, leave out the part about it being pure speculation, and release it as "fact".
 
[citation][nom]Pherule[/nom]Haven't you heard the stories? These ARE the mainstream gpu's. (Although factually, calling them gpu's is incorrect) Nvidia hasn't released their high end yet because the 7000 cards can't compete... for now.[/citation]
I don't know why this opinion keeps getting voted down when it's so likely to be true. This architecture is obviously much more closely related to the 560 than the 580, including the crippled FP64 performance. What probably happened is that Nvidia anticipated AMD having a much more potent product than what the 7970 turned out to be. Judging from the performance of this "680", they probably decided that if they put the chip that should have been in the 680 up against Tahiti, AMD would go out of business. :-/

This article pretty much says it all: http://www.xbitlabs.com/news/graphics/display/20120216220753_HPC_Customers_Are_Waiting_for_Nvidia_Kepler_Processor_Company.html This is not the Kepler they're looking for. Unless they're planning to make a version of this chip where every CUDA core can handle DP math, there's another architecture coming. In the meantime, Nvidia is probably scrambling to come up with a cut-down version of this chip that can take the 660 slot in the lineup.
 
[citation][nom]phate[/nom]What are the odds of being able to unlock the disabled SMX ala an Athlon X3?[/citation]

Approximately 0%.

[citation][nom]Old_Fogie_Late_Bloomer[/nom]I don't know why this opinion keeps getting voted down when it's so likely to be true. This architecture is obviously much more closely related to the 560 than the 580, including the crippled FP64 performance. What probably happened is that Nvidia anticipated AMD having a much more potent product than what the 7970 turned out to be. Judging from the performance of this "680", they probably decided that that if they put the chip that should have been in the 680 up against Tahiti, AMD would go out of business. :-/This article pretty much says it all: http://www.xbitlabs.com/news/graph [...] mpany.html This is not the Kepler they're looking for. Unless they're planning to make a version of this chip where every CUDA core can handle DP math, there's another architecture coming. In the meantime, Nvidia is probably scrambling to come up with a cut-down version of this chip that can take the 660 slot in the lineup.[/citation]

GK104 is not similar to GF104 in architecture; they are similar in features. There's a difference. The GK1xx chips would all be similar in architecture. What you said is like saying that Nehalem (LGA 1156) i7s and Sandy Bridge i7s are similar in architecture because they have similar features (both are Hyper-Threaded quad-core CPUs with 8MB of L3 cache, two 64-bit DDR3 controllers, etc.). Those are features, so those two types of i7s have similar features but different architectures. The same applies to the GF1xx and GK1xx GPUs with similar names.

Also, what Nvidia probably anticipated was AMD not focusing on compute so much. AMD's large focus on compute performance is why they seem so far behind. It's the same thing when you compare the GTX 570 to the Radeon 6970... They have similar performance, but the 570 has a FAR larger die (albeit with a small part of it disabled) and is more compute-focused, so its greater size and power usage don't actually make it faster than the much more gaming-oriented Radeon 6970.

With AMD gaming-focused and Nvidia compute-focused, it took Big Fermi's much larger dies to meet and slightly beat Cayman's much smaller dies in performance. Even then, power usage was still being sacrificed. The difference here is that AMD is not willing to make massive dies to compete with Nvidia in performance like Nvidia is, because of the problems associated with doing so. Large dies are disproportionately more expensive to make than smaller ones, and exponentially less likely to pass binning, because a bigger die is more likely to contain one or more defects.
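The binning point can be put in rough numbers with a toy Poisson yield model; the defect density below is an invented example figure, not a real 28 nm fab number, and `die_yield` is just a name used here for illustration:

```python
import math

# Toy Poisson yield model: the chance a die comes out defect-free falls
# off exponentially with its area. The defect density is an invented
# example value, not a real fab figure.
def die_yield(area_mm2, defects_per_mm2=0.002):
    return math.exp(-defects_per_mm2 * area_mm2)

print(round(die_yield(294), 2))  # roughly GK104-sized die -> 0.56
print(round(die_yield(550), 2))  # roughly "Big Fermi"-sized die -> 0.33
```

So even before the per-wafer cost is considered, the bigger die loses a much larger share of candidates to defects, which is the "exponentially less likely to pass binning" effect.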

Nvidia and AMD simply switched places on who is gaming-focused and who is compute-focused. This was a smart move by Nvidia because it gives them the same advantage that AMD had: significantly reduced costs for the GPUs in comparison with their competition. Nvidia also did it at a perfect time, because it forces Fermi users who want an Nvidia upgrade to use a card that is almost worthless in compute. Considering the rapidly increasing importance of compute for gamers, this may be the last generation that can get away with this without suffering adverse effects. Basically, Nvidia wants to make the GTX 600 cards as cheap and as good for the price as they can, to entice Fermi owners to upgrade from their compute-heavy cards to compute-worthless ones; once compute becomes more important, Kepler owners will have little choice but to upgrade to a future, more compute-heavy architecture.

Really, it's genius. It basically ruins the future-proofing of all of Nvidia's current cards and their upcoming Kepler cards, so Nvidia owners will need to upgrade more often. AMD, on the other hand, is going to suffer the short-term effects of Nvidia's methods by using somewhat more expensive dies and higher power usage than Nvidia, but in the long term, they might be the better investment. AMD would only be the better long-term choice if compute-heavy games are out in time for the GTX 700 and Radeon 8000 series (assuming those are the names for the next generation of cards). If compute doesn't become paramount to gaming quickly enough, AMD may be screwed over even worse in the next generation if they don't make compromises or find a way to combine top non-compute gaming performance with compute performance without the increased size and power usage.

All in all, a very smart move by Nvidia. I don't like it, but I can't deny the implications of it. It might take until two generations from now before compute becomes a part of gaming and until then, the non-compute focused method will probably reign supreme.

Maybe Bitcoin mining and folding will allow an AMD card to make back the extra cost from its increased power usage. It certainly won't do much good on an Nvidia card.

Also, no, AMD would not have gone out of business had Nvidia used Big Kepler instead of GK104. The performance of the 680 is already enough for 1080p 3D, 2560x1600, Eyefinity/Surround at 3840x1024, etc. (basically, any resolution around four megapixels). Going beyond that on a single GPU would be an extremely niche product, one that would go unused by more than 99% of even the high-end gamers who sit in the top few percent of the market in budget and performance, let alone the entire graphics industry.

Quad 680 and quad 7970 are both enough for even triple 2560x1600 or triple 1080p 3D. The only people who would buy such cards would be people who want something like that, except with two or three cards instead of four, or want to game at 4K+ resolutions (approximately 9 megapixels with the 4096x2160 resolution shown in the latest 4K panel to hit the news) and that will be extremely expensive. The cost of the displays would be greater than the cost of the already extremely expensive graphics. Then there's the question about whether or not any modern CPUs could handle such a setup.
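For what it's worth, the megapixel arithmetic behind those resolution figures checks out; this is just the pixel counts for the resolutions mentioned above:

```python
# Pixel counts for the resolutions discussed above.
resolutions = {
    "1920x1080": (1920, 1080),
    "2560x1600": (2560, 1600),
    "3840x1024 (triple-wide Surround)": (3840, 1024),
    "4096x2160 (4K)": (4096, 2160),
}
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h / 1e6:.1f} MP")
# 2560x1600 lands at 4.1 MP and 4096x2160 at 8.8 MP, in line with the
# "roughly four megapixels" and "approximately 9 megapixels" figures.
```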

Nvidia knew that such cards would hardly sell at all right now. Maybe they will be released, but probably not, or at least not until much more intensive games come out or the cost of such display setups drops significantly.
 
It will be nice to see how the GTX 670 Ti performs. Hopefully it beats the HD 7950 as the article says, and judging by how far ahead the GTX 680 is of the HD 7970, it should have no problem doing so.
 
[citation][nom]fil1p[/nom]It will be nice to see what the gtx 670ti performs like. Hopefully it beats the hd7950 as the article says, but judging by how ahead the gtx680 is compared to the hd7970, it should have no problem doing so.[/citation]

The 680 is above the 7970 by enough to be a tier ahead of it (although only barely), so the next Kepler card below the 680 would be competing with the 7970, not the 7950. This is no different from the 6970 only competing with the GTX 570 instead of the 580 for performance in current games. At least now, AMD does something several times better than Nvidia (the 7970 is almost six times faster than the 680 for double precision compute with both cards at stock speeds), and the fact that games are becoming more reliant on double precision compute performance does not bode well for Nvidia's Kepler, especially considering that even the 680 is only half as fast as the 580 for double precision compute.
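Back-of-the-envelope on that double precision gap: the clocks and per-clock DP rates below are the commonly quoted spec-sheet values (GK104 running FP64 at 1/24 of single precision, Tahiti at 1/4), so treat the exact ratio as approximate:

```python
# Theoretical peak FP64 throughput in GFLOPS:
# shader cores * clock (GHz) * 2 FLOPs per FMA * double-precision rate.
def dp_gflops(cores, clock_ghz, dp_rate):
    return cores * clock_ghz * 2 * dp_rate

gtx680 = dp_gflops(1536, 1.006, 1 / 24)  # GK104 at its base clock
hd7970 = dp_gflops(2048, 0.925, 1 / 4)   # Tahiti at its stock clock
print(round(gtx680), round(hd7970), round(hd7970 / gtx680, 1))
```

Depending on which clocks you plug in, the 7970 comes out several times faster in peak FP64, which is the gap described above.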

If this 670 Ti is to compete with the 7970 in performance, then the 7970 (and, by extension, the 7950 and 7800 cards) will drop in price relative to the performance difference. If the 670 Ti is 20% slower than the 680, then the 7970 is probably still ahead of the 670 Ti, even if only by a little, so it won't need to drop more than another $80-100. In that case, the 670 Ti is going to be between the 7970 and the 7950 in performance, and will probably be between them in price.
 
[citation][nom]youssef 2010[/nom]WOW, Nvidia in 2012 seems to act like a well lubricated machine, ready for anything.Let's hope this extends to their proprietary techs, like PhysX[/citation]

Look at Kepler's compute performance. You don't really expect them to be very good at PhysX and CUDA, do you? Also, no, Nvidia is not doing too well. We still haven't seen another Kepler card released, nor even the only one that is out, the 680, in decent supply globally. It didn't take AMD this long to solve their supply issues, and now AMD has six new cards from three different families out, all in stock.

Then, we have Nvidia not even knowing how to re-badge cards properly... The GT 630 is faster than the first GT 640 (believe it or not, Nvidia seems to plan on making three different GT 640s, all with the same name despite widely varying performance, and one of them is slower than the GT 630). Nvidia is making these low-end GT cards a mix of Fermi and Kepler, further adding to the confusion.
 