GeForce GTX 680 2 GB Review: Kepler Sends Tahiti On Vacation




Anyone having problems with the review hasn't read it, and the main reason it wasn't read is that you destroyed your credibility with that garbage Kepler "leak" that has been proven wrong time and time again. Sure, it's up to them to read it, but it was up to you not to spread misinformation in the first place; you undermined your own credibility to the point that people disregarded you when you actually had something factual to say. They should read it anyway, and if they don't, that's their problem, but you could at least admit that you screwed up too.

Different games give different results, but you said 20% instead of 15-20% (even ~20% would have been better than a flat 20%), so what you're saying now is an after-the-fact correction, made only after I had already corrected you.
 


Nvidia does pay some game developers, and Nvidia admits to this. Many Nvidia-sponsored games are optimized for Nvidia cards over the Radeons. As my previous post said, there are situations where this doesn't work out for Nvidia (mostly multi-GPU setups or new architectures; e.g., a game optimized for Fermi's compute power would run more poorly on Kepler, VLIW4, and VLIW5 than on GCN and Fermi, relative to non-compute-optimized games).

The effects of this are often exaggerated (especially in systems without much CPU power, because Radeons have historically needed somewhat more CPU power than GeForces for comparable performance, and that is still true today), but they are easily measurable and usually even obviously noticeable during play.

As for the rest of the post that you replied to, well, it's more like misguided speculation than trolling, at least in my opinion.
 

silverblue

Distinguished
[citation][nom]blazorthon[/nom]Remember, not only is AMD worth about half of what Nvidia is worth[/citation]

AMD is twice NVIDIA's size. Unless you mean the GPU division, of course?
 


Good point. We'll need to look at other GTX 680 reviews that used Afterburner to overclock instead of Precision or anything else that doesn't increase the base GPU clock frequency.
 

silverblue

Distinguished
[citation][nom]blazorthon[/nom]According to Yahoo Finance, Nvidia is worth $9.38B and AMD is worth $5.57B, so AMD is more like half of Nvidia. http://finance.yahoo.com/q?s=NVDA http://finance.yahoo.com/q?s=AMD[/citation]
Market cap. I was thinking more of assets, revenue and employees, but it's probably not the right method.
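For what it's worth, the ratio from the figures quoted above is a one-liner (market caps as cited from Yahoo Finance):

[code]
# Ratio of the market caps quoted above (Yahoo Finance figures).
amd_cap, nvda_cap = 5.57, 9.38   # billions of USD
print(f"AMD / Nvidia market cap: {amd_cap / nvda_cap:.2f}")  # ~0.59
[/code]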
 

dgingeri

Distinguished
I have to share this: in December, my system was idling at 252 watts and would run as high as 596 watts while playing WoW. I was running a Core i7 920 at 3.5 GHz and dual GTX 470s in order to maintain my 1920x1200 with 8X AA. With my tax return, I bought a Core i7 2600K, motherboard, memory, and a GTX 680. My performance is about 30% higher, my idle power consumption is at 126 watts, and my load power is at 333-345 watts. I've never, in 26 years of building custom systems, had my power consumption go down while performance goes up. Even when I got my first GTX 470, my power consumption went down, but so did my performance. (I sidegraded from a Radeon 4870X2 to the GTX 470, specifically to cut power consumption.) I tell you, this is unprecedented in computer technology. This GTX 680 is an awesome piece of technology. Nvidia really made a masterwork with this card.
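To put rough numbers on that, here's a quick performance-per-watt sketch in Python (assuming the ~30% figure above and taking 339 W as the midpoint of the stated 333-345 W load range; old performance is normalized to 1.0, so the units are arbitrary):

[code]
# Rough perf-per-watt check using the measured wall-power figures above.
old_load_watts = 596   # i7-920 @ 3.5 GHz + dual GTX 470s, playing WoW
new_load_watts = 339   # i7-2600K + GTX 680, midpoint of 333-345 W

old_perf = 1.0         # normalized baseline
new_perf = 1.3         # "about 30% higher"

old_eff = old_perf / old_load_watts
new_eff = new_perf / new_load_watts

print(f"Old: {old_eff:.5f} perf/W, new: {new_eff:.5f} perf/W")
print(f"Efficiency gain: {new_eff / old_eff:.2f}x")  # roughly 2.3x
[/code]

In other words, the new build does about 2.3 times as much work per watt at load as the old one did.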
 


You have a good point, but the same thing would have happened had you bought a Radeon 7970 instead of the 680; the difference is simply that the 680 somewhat outperforms the 7970. This isn't unprecedented. It happens with every die shrink and, to a lesser extent, every generation between die shrinks. For example, even a Radeon 6870X2 would be faster than the 4870X2 while using about the same amount of power (very slightly less), and it would be right behind your GTX 470 SLI. If you could get hold of them, you could also have gotten two Radeon 6930s, which sit between a 6870 and a 6950 but use the same Cayman GPU as the 6950/6970 (albeit with fewer cores active than the 6950 due to binning). That setup would use less power than the GTX 470 SLI while being faster at the same time.

Computer technology advancement is a constant (although it is always changing, it's always there). Looking at CPUs, Intel improves performance per watt with every generation, while AMD has, well, kind of stagnated there... However, set aside AMD's current struggles and look at the Intel CPUs. Core 2 had 140 and 150 W TDPs on its top quad cores. Nehalem had 125 W TDPs on its top quad-core CPUs, which outperform the top Core 2 Quads. Then came Sandy Bridge with 95 W TDPs on its top quad cores, which outperform the top Nehalem quads. Now we have Ivy Bridge coming out with 77 W TDPs on its top quad cores, and they outperform the top Sandy Bridge CPUs. More performance for less power (or at least more for the same amount of power, which is still better efficiency, and that's the point) is a consistent phenomenon in the tech industry.
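To line those numbers up, here's a minimal Python sketch that just charts the TDPs cited above, generation over generation (it models nothing about performance; the point that each generation is also faster stands on its own):

[code]
# Top quad-core TDPs per Intel generation, as cited above.
tdps = [("Core 2", 150), ("Nehalem", 125),
        ("Sandy Bridge", 95), ("Ivy Bridge", 77)]

# Pair each generation with its successor and print the TDP drop.
for (prev, prev_w), (gen, w) in zip(tdps, tdps[1:]):
    drop = (prev_w - w) / prev_w * 100
    print(f"{prev} -> {gen}: {prev_w} W -> {w} W ({drop:.1f}% lower)")

overall = (tdps[0][1] - tdps[-1][1]) / tdps[0][1] * 100
print(f"Core 2 -> Ivy Bridge: {overall:.1f}% lower TDP overall")  # ~48.7%
[/code]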

Going back to the graphics market, the GTX 295 was matched in performance by the GTX 480, and the 480 uses a lot less power than the 295. Then there's the GTX 580, which outperforms the 480 despite using slightly less power. Now we have the 680, which outperforms the 580 (well, at least in games, encoding, and 32-bit compute; the 580 owns the 680 in 64-bit compute) while using quite a bit less power. None of this is unprecedented; it happens every year.
 

dgingeri

Distinguished
[citation][nom]blazorthon[/nom]You have a good point, but the same thing would have happened had you bought a Radeon 7970 instead of the 680; the difference is simply that the 680 somewhat outperforms the 7970. This isn't unprecedented. It happens with every die shrink and, to a lesser extent, every generation between die shrinks. For example, even a Radeon 6870X2 would be faster than the 4870X2 while using about the same amount of power (very slightly less), and it would be right behind your GTX 470 SLI. If you could get hold of them, you could also have gotten two Radeon 6930s, which sit between a 6870 and a 6950 but use the same Cayman GPU as the 6950/6970 (albeit with fewer cores active than the 6950 due to binning). That setup would use less power than the GTX 470 SLI while being faster at the same time. Computer technology advancement is a constant (although it is always changing, it's always there). Looking at CPUs, Intel improves performance per watt with every generation, while AMD has, well, kind of stagnated there... However, set aside AMD's current struggles and look at the Intel CPUs. Core 2 had 140 and 150 W TDPs on its top quad cores. Nehalem had 125 W TDPs on its top quad-core CPUs, which outperform the top Core 2 Quads. Then came Sandy Bridge with 95 W TDPs on its top quad cores, which outperform the top Nehalem quads. Now we have Ivy Bridge coming out with 77 W TDPs on its top quad cores, and they outperform the top Sandy Bridge CPUs. More performance for less power is a consistent phenomenon in the tech industry. Going back to the graphics market, the GTX 295 was matched in performance by the GTX 480, and the 480 uses a lot less power than the 295. Then there's the GTX 580, which outperforms the 480 despite using slightly less power. Now we have the 680, which outperforms the 580 (well, at least in games, encoding, and 32-bit compute; the 580 owns the 680 in 64-bit compute) while using quite a bit less power. None of this is unprecedented; it happens every year.[/citation]

Really, it is unprecedented. I have always had my performance go up and up; there's really no way around that. The difference is that the top of the line, and in my case a couple of steps down from the top of the line, has increased in performance while decreasing power usage. I typically stick with three steps down from cutting edge on processors and one step down on video cards, at least since they became a deciding factor. (Back in the early days, there really wasn't much difference in video card performance; the processor meant everything.) This time, I actually got the top-of-the-line video card, for the first time ever.

My Core i7 920 was faster than my older Core 2 E8400, but it used more power. The Athlon 64 X2 5600+ used less than the E8400, the previous Athlon 3700 used less power than the 5600+, and so on, all the way down to my 486SX 25 MHz. My processors have always increased in power draw as performance increased.

With my video cards, it has been much the same. The two GTX 470s used more power than the 4870X2. The 4870X2 used more power than the pair of 7800GTs I had before that. The 7800GTs, even one of them, used more power than the 6800GTS I had before that, and so on, all the way down to my old Cirrus Logic 512k that came with my original 486SX system.

Hard drives are the one part that has stayed fairly steady in power usage; they go up in performance and size, but have actually gone down a little in power draw.

I guess at the moment I'm a bit off from my old habits, in that my processor is four steps down from cutting edge and my video card is top of the line, but all in all I'm pretty close to normal for me. Yet my performance increased significantly, more so than usual, and my power usage went down for the first time ever.

I could give you a whole rundown of CPUs and video cards all the way back to my 486SX machine if you want. Hard drives would be a bit more difficult.
 

qiplayer

Distinguished



Hi, thanks for the reply :)

I just took a look at Folding@home; it's like donating a bit of your own CPU/GPU power for a good cause, the folding of proteins (though I don't actually know what "folding" means in this case).

Could you give me some links for it? Or for other things like this?





 


No, it's not unprecedented. Comparing an i7-920 to an E8400 is comparing a quad core to a dual core; there's no way the quad wouldn't use more power when they're only one generation apart, because that's how CPU differences work. Even the first Core 2 Quads didn't use less power than many of the Pentium Ds (dual-core Netbursts), and that's one of the biggest jumps between two consecutive generations in recent CPU history.

You're comparing apples to oranges to prove your point, and that simply doesn't work unless you fix the comparison, which you didn't. Despite the two GTX 470s using more power than a single 4870X2, they have so much more performance that they still had higher performance per watt (they were more energy efficient). A GTX 580 uses more power than a GTX 470, but it's more energy efficient nonetheless, because the performance difference between the two (favoring the 580) is greater than the power usage difference.
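To make the efficiency arithmetic concrete, here's a small sketch; the wattages are the cards' official TDPs (215 W for the GTX 470, 244 W for the 580), but the relative performance values are illustrative placeholders, not benchmark results:

[code]
def perf_per_watt(perf, watts):
    """Performance per watt; higher means more energy efficient."""
    return perf / watts

# Official TDPs; performance values below are illustrative only.
gtx470 = perf_per_watt(perf=1.0, watts=215)  # baseline
gtx580 = perf_per_watt(perf=1.5, watts=244)  # hypothetical ~50% faster

print(f"GTX 470: {gtx470:.4f} perf/W")
print(f"GTX 580: {gtx580:.4f} perf/W")  # higher despite the extra 29 W
[/code]

As long as the performance gain outpaces the power increase, efficiency goes up even though total draw does too.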

Your power usage went down despite your getting the top-of-the-line card now because that card uses just under 200 W. Comparing it to the 4870X2 and dual GTX 470s is apples to oranges, because it is a single GPU up against previous-generation dual-GPU setups. You would get the same result comparing a generation further back, the 3870X2, to the GTX 560 Ti instead of the 4870X2 to the GTX 680. The 680 probably won't even remain Nvidia's top card unless AMD allows it. If AMD releases a better card (they have one in the making; now we need to find out whether it gets made), then Nvidia will release a monstrous GTX 685 or 680 Ti that easily passes the 680 and has a ~250 W TDP to match past top single-GPU cards.

Here are some simple ways to show what I mean. The GTX 295 is roughly equivalent to the Radeon 4870X2; both are dual-GPU cards from about the same time on the same process node, so that's an apples-to-apples comparison. The GTX 480 was Nvidia's next single-GPU flagship after those two dual-GPU cards; it not only uses less power than them but gives about the same performance. A single generation later, we have the GTX 580, which uses less power than the 480 despite a fairly large performance advantage over it. The 480 and the 580 are still an apples-to-apples comparison, because not only do they share a process node, they even share an architecture; the 580 was simply a more optimized, bug-fixed version.

Here we have the 295 being beaten by the 580 in both power usage AND performance, with a two-generation delta between them. Now compare the GTX 470 SLI setup with the GTX 680. The 680 is, like the 580 in the 295 comparison, two generations ahead, uses less power, and performs better. I don't know about you, but I see a trend here. The 680 actually underperforms its assumed target based on this data, because it technically should have performed as well as the GTX 590, and it doesn't, for some easily explained reasons. I can go into that too if you ask me to.

Looking back on the AMD side, the next generation after the 4000 series was the 5000 series, and its top single-GPU card was just slightly behind the 4870X2 in performance while using a lot less power. It was close enough that a simple overclock would close the gap, and it would still use a lot less power than the 4870X2. The next top single-GPU card from AMD, the 6970, does match the 4870X2 and still uses far less power. This is all fairly simple math and paying attention; none of it is unprecedented unless you didn't keep up with video cards over the years. Also, the 4870X2 and GTX 470 were obviously made for different markets (the 4870X2 is a higher-end card despite being older), and a one-generation difference isn't enough to change that. It should come as no surprise that the 470 was more power efficient, since it had a newer process node and architecture, but the two cards still target different markets.

Quite frankly, if you could afford a 4870X2 back then, I'm surprised that you didn't replace it with a GTX 480 instead of a 470. The power usage would have dropped, but performance would not have.

Historically, power usage also increased on the higher-end models of processors and video cards, but power efficiency increased too, so the lower-end models increased in performance instead of power usage. However, there has been a reversal of that trend ever since Core 2 for Intel CPUs (AMD has stayed at about the same power usage for a while now: 95 W for low/mid-range and 125 W for high-end), and around the same time for video cards. Some video cards still use more and more power, but the vast majority are using less and less. The GTX 580 uses less than the 480, the Radeon 7970 uses less than the Radeon 6970, the GTX 680 uses less than the GTX 580, etc.

I said this in my last post and I'll say it again. Core 2's top TDP for its fastest quad-core CPUs: 150 W. Nehalem's: 125 W. Sandy Bridge's: 95 W. Ivy Bridge's: 77 W. Obviously, power usage has been going down as performance increases over the last several years.

GTX 480: 250 W. GTX 580: 244 W. GTX 680: 195 W. It's happening in graphics cards too. Besides that, even when TDPs increased in the past, performance per watt increased more than power usage did, so power efficiency rose regardless.
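And to illustrate that last sentence (rising TDP can still mean rising efficiency), a hypothetical example; these numbers are made up purely to show the arithmetic and don't correspond to any real card:

[code]
# Hypothetical old vs. new card: power rises 33%, performance rises 60%.
old_watts, old_perf = 150, 1.0
new_watts, new_perf = 200, 1.6

power_ratio = new_watts / old_watts                                # 1.33x
efficiency_ratio = (new_perf / new_watts) / (old_perf / old_watts)

print(f"Power draw: {power_ratio:.2f}x")         # up
print(f"Perf per watt: {efficiency_ratio:.2f}x") # also up (1.20x)
[/code]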

All of your examples are apples-to-oranges comparisons or highly outdated, and thus not very relevant, because they don't reflect modern trends in the industry.
 

silverblue

Distinguished
[citation][nom]blazorthon[/nom]I said this in my last post and I'll say it again. Core 2's top TDP for its fastest quad-core CPUs: 150 W. Nehalem's: 125 W. Sandy Bridge's: 95 W. Ivy Bridge's: 77 W. Obviously, power usage has been going down as performance increases over the last several years. GTX 480: 250 W. GTX 580: 244 W. GTX 680: 195 W. It's happening in graphics cards too. Besides that, even when TDPs increased in the past, performance per watt increased more than power usage did, so power efficiency rose regardless. All of your examples are apples-to-oranges comparisons or highly outdated, and thus not very relevant, because they don't reflect modern trends in the industry.[/citation]

The 680 is different in that it doesn't have a double-speed shader clock; adding more shaders was obviously a much more efficient way of extracting performance. It also has a narrower memory bus. Despite the much faster core clock, the comparison isn't exactly apples to apples here either, to be fair.
 


It's no worse than every time we compared the Radeon 6970 to the GTX 580, because both were the top single-GPU cards of their families. The 6970 didn't have hot-clocking (the double shader clock) and it had a narrower bus. This is as apples-to-apples as a comparison between these generations will get until a Kepler GPU better than GK104 is released. Fair or not, it's the only comparison we can make until Big Kepler. The problem is that Nvidia might not even release Big Kepler if AMD doesn't release a 7980 with the full 2304 shaders of Tahiti and a higher clock frequency, because the 680 is already faster than the 7970.

The top single-GPU cards available from each generation right now: that is the comparison I made. Sorry, but you can't get any more apples to apples than that, because there will always be differences between the cards.
 

sephirotic

Distinguished
[citation][nom]johnners2981[/nom]Damn prices, in europe we have to pay the equivalent of $650-$700 to get one[/citation]

HA! You think that is bad? Here in Brazil we have to pay more than double that price with taxes and abusive profit margins (about 1100 USD), and don't forget we don't earn as much money as you guys do over there.
 

sephirotic

Distinguished
Finally, after four years and three generations of cards, NVIDIA made a decent launch: a low-power-consumption, high-performance card with decent pricing. Maybe I'll go back to Nvidia after two AMD cards in a row (4850, 6870). Well, maybe not yet; I want to see prices drop and the new "670" and "690" releases. Maybe I'll grab the 670 if it's as good a release as this one. It will probably also be a better bet for SLI on an 8x/8x PCIe 3.0 platform like my x68 with a less potent PSU (650 W Corsair). Keep up the healthy competition, AMD and Nvidia!
 

maverick knight

Distinguished
Some AMD fanboys are in denial... shocking!!!

Honestly, the 680 and 7970 are both new in town, so driver updates will separate the performance between the two, but not by much. The 680 has lots of hardware upgrades and innovation, but it doesn't do anything new that a 400 or 500 series card doesn't already do. Because of that, at least for me, it's smarter to SLI my current 570, which should catch up to the 680.

With this new release the competition will heat up and hopefully prices will drop soon.
 
Guest
Hard launch, my ass. There hasn't been any stock for over two weeks since the "launch".
 
I just got two on Friday. Drivers are definitely not ready for SLI: BF3 performance with 680s in SLI is worse than with a single 680. This is discouraging. Knowing Nvidia, they'll get the SLI part of the drivers worked out within a month. Great drivers are what turned me to Nvidia; I was completely impressed with the performance of the 580s in SLI. Then again, I didn't get my 580s until they had been on the market for three months, so Nvidia had time to fine-tune SLI in the drivers.

Anyhow, if Nvidia doesn't get the SLI part worked out, I'll be scoping 7970s within a few months. Given past history, I'm confident Nvidia will get things worked out soon.

The drivers with which the reviews were all conducted are not available on the Nvidia site (300.99, I think). Somewhere in there, they must've cranked up single-card performance while abandoning SLI performance for launch. I'm sure they're selling a lot more to single-card buyers than to SLI buyers, so this will maximize the word-of-mouth advertising, but I just want everyone to know that SLI is kind of junk at this point with BF3 and two 680s on the 301.10 drivers.

Besides that, single-card performance is awesome with every title.
 
[citation][nom]adsent599happyyouok[/nom]He means waiting for the GK110, which will be more of a compute card, while this GK104 is geared more toward gaming.[/citation]
For me, the reference 680s run about 5-10°C cooler than my reference 580s. Might that help with your situation?
 
Well, looks like we have exactly 20 Nvidia fans here... out of 16 pages of comments, it's those 20 who have given thumbs-ups to the pro-Nvidia comments and the exact same number of thumbs-downs to the pro-ATI comments. Interesting.
 