GeForce GTX 680 2 GB Review: Kepler Sends Tahiti On Vacation

[citation][nom]alyoshka[/nom]Well, looks like we have exactly 20 Nvidia fans here... Across 16 pages of comments, it's those 20 who have given thumbs up to pro-Nvidia comments and the exact same number of thumbs down to pro-ATI comments... Interesting.[/citation]
The comment system stops counting thumbs up/down at 20...
 
Guest
[citation][nom]alyoshka[/nom]Well, looks like we have exactly 20 Nvidia fans here... Across 16 pages of comments, it's those 20 who have given thumbs up to pro-Nvidia comments and the exact same number of thumbs down to pro-ATI comments... Interesting.[/citation]
[citation][nom]outlw6669[/nom]The comment system stops counting thumbs up/down at 20...[/citation]

lol fail.
 
[citation][nom]iammykyl[/nom]Can anyone tell me if v-sync works with a 120 Hz monitor?[/citation]

Yes, but you only get 120 FPS with v-sync if your cards are pumping out more than 120 FPS. Otherwise, it will be limited to 60 FPS, just like on a 60 Hz monitor, which all but defeats the point of a 120 Hz monitor.
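
As a rough illustration of why that happens, here is a minimal sketch of plain double-buffered v-sync (purely a model, not how any driver actually implements it): a frame that misses a refresh has to wait for the next one, so the effective rate snaps down to an integer divisor of the refresh rate.

[code]
import math

def vsync_fps(refresh_hz: float, raw_fps: float) -> float:
    """Effective frame rate under a simple double-buffered v-sync model.

    A frame that is not ready at a refresh waits for the next one, so the
    output snaps to refresh/1, refresh/2, refresh/3, ... (Illustrative only;
    triple buffering and real driver behavior differ.)
    """
    refreshes_per_frame = math.ceil(refresh_hz / raw_fps)
    return refresh_hz / refreshes_per_frame

print(vsync_fps(120, 110))  # 60.0  -> a 120 Hz panel falls back to 60 FPS
print(vsync_fps(120, 130))  # 120.0 -> only above 120 FPS do you get 120
print(vsync_fps(60, 50))    # 30.0  -> same effect on a 60 Hz panel
[/code]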
 

iammykyl
Thanks for the reply.

From the article: Nvidia’s solution to the pitfalls of running with v-sync on or off is called adaptive v-sync. Basically, any time your card pushes more than 60 FPS, v-sync remains enabled. When the frame rate drops below that barrier, v-sync is turned off to prevent stuttering. The 300.99 driver provided with press boards enables adaptive v-sync through a drop-down menu that also contains settings for turning v-sync on or off.

As I am aiming for a display with a 16:10 aspect ratio at 1920x1200, there should be no problem.
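
To make the quoted adaptive v-sync behavior concrete, here is a minimal sketch of the decision it is described as making (purely illustrative; the actual driver logic is Nvidia's and not public):

[code]
REFRESH_HZ = 60  # the barrier described in the article's example

def adaptive_vsync_on(current_fps: float, refresh_hz: float = REFRESH_HZ) -> bool:
    """Adaptive v-sync as described in the review: keep v-sync enabled while
    the card exceeds the refresh rate (to avoid tearing), and disable it when
    the frame rate drops below that barrier (to avoid the stutter of snapping
    down to refresh/2)."""
    return current_fps > refresh_hz

print(adaptive_vsync_on(75))  # True  -> v-sync stays on, no tearing
print(adaptive_vsync_on(50))  # False -> v-sync off, no 30 FPS stutter
[/code]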
 

Darksol
Whoa, it has been a long time since I saw Nvidia's flagship GPU smoke the AMD flagship that badly. HardOCP's review shows an even worse situation for AMD.
 
[citation][nom]Darksol[/nom]Whoa, it has been a long time since I saw Nvidia's flagship GPU smoke the AMD flagship that badly. HardOCP's review shows an even worse situation for AMD.[/citation]

[citation][nom]Darksol[/nom]ubercake 300.99 drivers are up on the nvidia website[/citation]


... The 7970 is closer to the 680 than the 6970 is to the 580, so I fail to see anything in your post besides fanboyism. Had it been more like the 7970 equaling the 580 while the 680 was where it is now, then you might have a point. However, the two are fairly close. I'm still waiting for a complete overclock comparison where the 7970 isn't held back at 1125 MHz by Catalyst and the 680's full potential is explored beyond what EVGA Precision managed, because it seems that MSI Afterburner does better with the 680 than Precision does.

The only reason that the 680 is beating the 7970 is also fairly obvious... Nvidia went from compute-heavy to compute-light, freeing up power and die space in the process, while AMD went from compute-light to compute-heavy, consuming more die space and power. They basically switched sides in the compute-versus-non-compute argument. It was a smart move by Nvidia because it buys them time to take advantage of cheaper GPUs (Big Fermi/Kepler is FAR more expensive to make than GF104/GF114/GK104 because larger dies cost more and are less likely to come out of binning without defects), while AMD is losing on two of the fronts they previously touted as their best: power consumption and cost.
 
Guest
[citation][nom]blazorthon[/nom]... The 7970 is closer to the 680 than the 6970 is to the 580, so I fail to see anything in your post besides fanboyism. Had it been more like the 7970 equaling the 580 while the 680 was where it is now, then you might have a point. However, the two are fairly close. I'm still waiting for a complete overclock comparison where the 7970 isn't held back at 1125 MHz by Catalyst and the 680's full potential is explored beyond what EVGA Precision managed, because it seems that MSI Afterburner does better with the 680 than Precision does. The only reason that the 680 is beating the 7970 is also fairly obvious... Nvidia went from compute-heavy to compute-light, freeing up power and die space in the process, while AMD went from compute-light to compute-heavy, consuming more die space and power. They basically switched sides in the compute-versus-non-compute argument. It was a smart move by Nvidia because it buys them time to take advantage of cheaper GPUs (Big Fermi/Kepler is FAR more expensive to make than GF104/GF114/GK104 because larger dies cost more and are less likely to come out of binning without defects), while AMD is losing on two of the fronts they previously touted as their best: power consumption and cost.[/citation]

The 6970 competed with the 570, not the 580.
 
[citation][nom]ivan1984[/nom]The 6970 competed with the 570, not the 580.[/citation]

What the 6970 competed with is irrelevant. The 6970 and the 580 are the top single GPU cards from their families and thus they are the only relevant comparison for this context.

I replied to a post that compared the 7970 to the 680 and said that the 680 smoked it. That was not true, so I went further in depth as to why it wasn't, and then explained part of why things are the way they are. I never even said that the 6970 competed with the 580 directly anyway.
 

qiplayer
The GTX 680 uses less power than the 7970. If you overclock both to the maximum, let's say to 1250/1300 MHz, the difference is even more visible. A 7970 Windforce plus system had a power draw of 570 watts, where without the overclock it was about 430.
This is to say the 680 is by far the better card; Nvidia has to struggle less with cooling, and will have an easier job than AMD in making dual-GPU cards.

If we look at games, it makes sense imho to take the card that runs the game you play better.
As for the compute side, I think it's just not activated for marketing reasons. But I don't know GPU engineering.

 
[citation][nom]qiplayer[/nom]As for the compute side, I think it's just not activated for marketing reasons.[/citation]
Actually, the GK104 is nVidia's mainstream chip and was not designed with compute performance in mind.
Their actual high-end chip, the GK110, is still to be released and was designed with a focus on compute.
Dropping all those extra transistors needed for compute allowed nVidia to build a gaming chip that is smaller and more efficient (for gaming) than ATI's 7xxx series.

For the past few generations, nVidia has been pushing hard on compute performance at a sacrifice to gaming performance, die size, and efficiency, while AMD focused on gaming performance while sacrificing compute performance.
This generation they both decided to switch roles, and, as AMD is unwilling to outspend nVidia on die size (GF100 and GF110 were, and GK110 will probably be, 520 mm²+ behemoths), they unfortunately ended up with a marginally inferior gaming chip that is massively better at compute tasks.
 
[citation][nom]qiplayer[/nom]The GTX 680 uses less power than the 7970. If you overclock both to the maximum, let's say to 1250/1300 MHz, the difference is even more visible. A 7970 Windforce plus system had a power draw of 570 watts, where without the overclock it was about 430. This is to say the 680 is by far the better card; Nvidia has to struggle less with cooling, and will have an easier job than AMD in making dual-GPU cards. If we look at games, it makes sense imho to take the card that runs the game you play better. As for the compute side, I think it's just not activated for marketing reasons. But I don't know GPU engineering.[/citation]

Nvidia used a smaller die and sacrificed compute performance for that power efficiency. 680 owners had better hope that games using DP math don't become common for years, or else they will have crap performance in them compared to the 7970. Let's not forget that the 680 loses power efficiency at high clocks more than the 7970 does.

[citation][nom]outlw6669[/nom]Actually, the GK104 is nVidia's mainstream chip and was not designed with compute performance in mind. Their actual high-end chip, the GK110, is still to be released and was designed with a focus on compute. Dropping all those extra transistors needed for compute allowed nVidia to build a gaming chip that is smaller and more efficient (for gaming) than ATI's 7xxx series. For the past few generations, nVidia has been pushing hard on compute performance at a sacrifice to gaming performance, die size, and efficiency, while AMD focused on gaming performance while sacrificing compute performance. This generation they both decided to switch roles, and, as AMD is unwilling to outspend nVidia on die size (GF100 and GF110 were, and GK110 will probably be, 520 mm²+ behemoths), they unfortunately ended up with a marginally inferior gaming chip that is massively better at compute tasks.[/citation]

GK104 and GK110? So, what happened to GK100? Does the Big Kepler skip a generation? That's unheard of and undoubtedly wrong. GK100 is the next big die size. GK110 would mean it's a second generation Big Kepler chip, yet there isn't even a first generation Big Kepler chip yet, so how is that possible?

Otherwise, agreed.
 

qiplayer
Actually, the GK104 is nVidia's mainstream chip and was not designed with compute performance in mind.
Their actual high-end chip, the GK110, is still to be released and was designed with a focus on compute.
Dropping all those extra transistors needed for compute allowed nVidia to build a gaming chip that is smaller and more efficient (for gaming) than ATI's 7xxx series.

For the past few generations, nVidia has been pushing hard on compute performance at a sacrifice to gaming performance, die size, and efficiency, while AMD focused on gaming performance while sacrificing compute performance.
This generation they both decided to switch roles, and, as AMD is unwilling to outspend nVidia on die size (GF100 and GF110 were, and GK110 will probably be, 520 mm²+ behemoths), they unfortunately ended up with a marginally inferior gaming chip that is massively better at compute tasks.
blazorthon 05/10/2012 3:55 PM

thanks for the explanation :)
 
[citation][nom]blazorthon[/nom]GK104 and GK110? So, what happened to GK100? Does the Big Kepler skip a generation? That's unheard of and undoubtedly wrong. GK100 is the next big die size. GK110 would mean it's a second generation Big Kepler chip, yet there isn't even a first generation Big Kepler chip yet, so how is that possible?Otherwise, agreed.[/citation]
Well, it is all speculation until the chips are actually released and named.
The chip that I am describing though (be it GK100, GK110 or something else) is the successor to the GTX 580's GF110 and would be the large die with a focus on compute.
 

nebun
[citation][nom]SkyWalker1726[/nom]AMD will certainly drop the price of the 7xxx series[/citation]
AMD can't afford to drop its prices... they will go bankrupt.
 

hannibal
I think that it is just an educated guess. The 7970's GPU is bigger than the CPU from the 680, so it is supposed to cost more to produce. It's the same problem that "plagued" Nvidia's 500 series vs. AMD's 6000 series. The 7970 is bigger because there is a wider memory bus and there are more parts used for calculation. So, feature-wise, the 7970 is much closer to Nvidia's 580 than it is to the 680.
So this time Nvidia should have an easier time reducing prices than AMD does. Of course it also depends on how good or bad yields each company gets, but normally bigger chips have higher failure rates than smaller ones, so that advantage should also be with Nvidia this time... Of course the GPU world is not a pure mathematical equation.
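
To put a rough number on that yield intuition, here is a minimal sketch using the classic Poisson defect model, yield ≈ e^(-defect density × die area). The die areas and defect density below are illustrative assumptions for the sake of the comparison, not actual TSMC figures:

[code]
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of dies that come out defect-free under a simple Poisson
    model: yield = exp(-D * A), with D in defects/cm^2 and A in cm^2.
    Illustrative only; real yield models and the binning/salvaging of
    partially defective dies are more involved."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

D = 0.4  # assumed defect density in defects per cm^2
for name, area_mm2 in [("GK104, ~295 mm^2", 295),
                       ("Tahiti, ~365 mm^2", 365),
                       ("520 mm^2-class big die", 520)]:
    print(f"{name}: {poisson_yield(area_mm2, D):.0%} defect-free")
[/code]

Under those assumptions the smaller die comes out defect-free noticeably more often, which is the cost advantage being argued here, although the absolute numbers depend entirely on the assumed defect density.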
 


GPUs, not CPUs. Also, the 7970's GPU, Tahiti, is slightly smaller than the Cayman in the Radeon 6970, which is already much smaller than the GF110 in the GTX 580. The 7970 might have a 384-bit bus like the 580 does, but its GPU is still nowhere near as large as the GF110. Yes, Nvidia's cards are currently cheaper to make than AMD's in the same performance range, but that doesn't mean that AMD would go out of business. In fact, Tahiti is only something like 20% larger than the GK104. I'd have to check the numbers for a more exact comparison, but it really isn't so much larger that it's unreasonable at all. Like I said, it is smaller than Cayman, an already small GPU; GK104 is simply even smaller. If anything holds the Radeon 7900 back cost-wise, it would probably be the memory chip count rather than the GPU size.

If AMD wanted to, they could probably drop prices a lot further without coming anywhere near bankruptcy. That AMD has continually been making price drops seems to confirm this, whereas nebun seems to have no evidence behind his/her claim.
 

hannibal
Yes, I agree. The main reason, of course, is that the 580 was built on a 40 nm process and the 7970 on a 28 nm process, but feature-wise they are more similar than the 7970 and 680 are.
And there has to be some room in the price, otherwise it would not cover the development expenses.
 


Yes, the price has to be high enough to cover manufacturing costs and R&D costs, provide profit, and cover any other expenses. However, there is still room for prices to drop further even with these in mind. Since the R&D has already been invested, it is a finite amount that must be covered, so it eventually stops being a factor once it has been covered.

Even if prices didn't have room to drop right now, assuming that the R&D costs are still a factor, once enough cards have been sold for the profit to cover R&D, prices can drop again. Then there is also the fact that the 28 nm process used to manufacture GCN GPUs, like any other process technology, will improve over time, meaning that manufacturing costs can drop over time, further increasing room for prices to fall.

There are probably many other ways for the price to drop that I may have missed. The simple fact is that the chances of AMD going out of business over dropping prices are low. Heck, with lower prices, there's also the increased number of purchases to consider.
 