Seven GeForce GTX 660 Ti Cards: Exploring Memory Bandwidth


jackbling

Distinguished
Jul 21, 2011
213
0
18,680
It's late in the topic, but the Gigabyte Windforce OC 660 Ti was an easy choice for me:

Came with Borderlands 2 ($59.99)
Has PhysX (Batman, PlanetSide 2, Borderlands 2)
I have a 3D Vision setup

Were it not for the three items above (PhysX isn't a strong bullet point, but those are three games I currently play or will play heavily), ATI would have won me over; this is the most compelling generation of AMD cards in many years. There just isn't enough difference in price for me to care, and I'll probably be rocking 1920x1080 for another year or so.

I may pick up an ATI card for a second gaming system, if only to support their direction.
 

loops

Distinguished
Jan 6, 2012
801
0
19,010
The bandwidth choke kills it. When you're spending 300 bucks on a single card, it had better f888n take care of AA.

A GTX 480 seems more capable with AA. That's sad... if true.
 

mayankleoboy1

Distinguished
Aug 11, 2010
2,497
0
19,810
Blazorthon:

You can either say this:
"The Tahiti is only a little more than 20% larger than the GK104 and it has a 50% greater memory interface with obviously great results. The 7950 consumes about as much power as the 670 does. Your argument doesn't work at all."

Or you can say this:
"My theory is profit. Even making a GPU that is only somewhat smaller than another can make it significantly cheaper to mass-produce, because it means more chips per wafer and gives a higher yield, since the chance of any one chip having a problem is lower."

How the *** can you say both? :pfff:
 
[citation][nom]mayankleoboy1[/nom]Blazorthorn : you can either say this or you can say this :how the *** can you say both ?[/citation]
[citation][nom]mayankleoboy1[/nom]The problem with wider memory interface is that it exponentially increases the chip's die-size. Hence, cost per wafer and power consumption will increase a lot.IMO both AMD and Nvidia should use the XDR2 memory in the next series of cards. That would give the same bandwidth at half the interface size.[/citation]

You said that a wider memory bus increases die size and power consumption exponentially, which is obviously not true, and I refuted it on that basis. I didn't say that it doesn't affect the cost of the GPU significantly. The cost is considerable; it just isn't significant enough to kill the profitability of the card as a whole (made obvious by AMD's continual price cuts).
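To put rough numbers on the chips-per-wafer point, here's a minimal Python sketch using the textbook edge-loss and Poisson yield approximations. The die areas are the commonly cited ~294 mm² (GK104) and ~352 mm² (Tahiti); the defect density is an assumed illustrative value, since TSMC's real 28 nm numbers weren't public:

[code]
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Textbook approximation: wafer area / die area, minus an edge-loss term.
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2):
    # Probability that a die has zero defects under a Poisson defect model.
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

DEFECT_DENSITY = 0.4  # defects per cm^2 -- assumed, for illustration only

for name, area in (("GK104", 294), ("Tahiti", 352)):
    gross = gross_dies_per_wafer(area)
    good = gross * poisson_yield(area, DEFECT_DENSITY)
    print(f"{name} (~{area} mm^2): {gross} gross dies, ~{good:.0f} good per wafer")
[/code]

With these assumptions, the smaller die nets roughly half again as many good dies per wafer (~62 vs. ~40), because you get both more candidates per wafer and a higher fraction of them working. That's the profit effect described above, without the cost difference being "exponential."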
 

ojas

Distinguished
Feb 25, 2011
2,924
0
20,810
[citation][nom]jtt283[/nom]It's not "what did AMD cripple," it's "is there a nVidia card that is NOT somehow crippled?" Of course lower-end cards have fewer shaders, or less ROPS, or something else is reduced compared to a more expensive card, but nVidia seems to be doing other kinds of mutilating, just to leave something out, and I can't figure out why...[/citation]
It's either money, or lower yields, or the problems they were having with GK110 when the 680 launched; or maybe, at the time, they thought the 680 and its derivatives would buy them time until GK110.

TBH, I truly believe these cards aren't the planned Kepler cards, at least as far as the naming convention goes. This is mostly because when the 680 launched, one of the board shots had "670 Ti" as an image tag in Windows Explorer.

So I'm speculating that the true 680 was supposed to be a GK110 part, but then they started having production issues.

They'll hopefully revert to the 500-series style of doing things with the 700-series, minus the power consumption.

[citation][nom]blazorthon[/nom]I think that zooted's point was that Nvidia isn't competing at the entry to mid level, not that no one can compete with Nvidia at the entry to mid level.[/citation]
Yeah, I thumbed him down, then re-read... sorry, zooted!
 
This is old news, but people hyped GK104 to no end. It's great if you want to play with no AA or goodies, but frankly, for $500/400/300 you at least expect 4xMSAA. Suggesting that AMD slashed prices because of benchmark results misses the point; it's more a case of bleeding Nvidia out of the market. AMD's new cards are on a TSMC fab process; Nvidia shares it, but with minimal capacity for now. In short, Nvidia doesn't get enough wafers to warrant a price drop, and now they're releasing GK110 (full Kepler), which will be anything from $100-200 more than the parts it replaces.

 
[citation][nom]blazorthon[/nom]I think that zooted's point was that Nvidia isn't competing at the entry to mid level, not that no one can compete with Nvidia at the entry to mid level.[/citation]
Fair enough.
I really hope nVidia gets their poo together and puts all the pieces into one card, at any level of the market.
 
[citation][nom]sarinaide[/nom]This is old news but people hyped GK104 to no end. Great if you want to play with no AA or goodies but frankly for a $500/400/300 you at least expect 4xMSAA. To suggest AMD slash due to results, its more a case of bleeding Nvidia out the market. TSMC is AMD's new fab process, nvidia share that but on now minimal process. In short Nvidia don't get enough wafers to warrant a price drop, now they are releasing GK110 or Kepler full which will be anything from $100-200 more than the parts they replace.[/citation]

AMD uses a somewhat different TSMC 28nm process than the one that Nvidia uses, at least according to what I've read.
 

jlwtech

Honorable
Mar 8, 2012
58
0
10,630
Why is it that most review sites seem to paint AMD in a bad light?

If, for example, AMD goes out of business, then Nvidia will have a monopoly on graphics cards, which means prices will skyrocket. Anyone feel like paying $800-$1,000 for a GTX 780?

Even IF AMD is producing products inferior to Nvidia's (and I don't think they are), they still keep the pressure on Nvidia and therefore keep prices fair. That, by itself, is hugely important to me as an enthusiast.

I can't understand it... why do most reviewers seem so eager to show that Nvidia is better than AMD?
From everything I have read, and experienced myself, I think they are roughly equal in terms of performance per dollar.

By the way, of the last five graphics cards I have owned, two have been AMD/ATI and three have been Nvidia. I am NOT a fanboy...
 
Keep an eye on Newegg and the IceQ 7870. When they come in stock, they only last hours because, for some reason, they're priced at $219. Ridiculously good deal. I'm going to pick up another one as soon as possible.
There are two versions: one with an axial fan and one blower-style. The blower is marked at $210 while the other is $260. Any idea why the price discrepancy? How's the noise on the blower?

Thank you, Tom's! I know some people just want to skip to the end and grab the card that gets labeled "the best." I liked this more investigative approach. Rarely is there a single product that is absolutely the best. Some cards are quieter, some overclock better, some have lower power requirements, etc.

One crazy thought I had was using the two 7750 passive cards in a Raven case. The air stack goes right down the fins, so you get a quasi-active cooling effect. Granted, who owns a Raven case and would use such low-end cards?
 
ASUS seems to be suffering from a quality-control issue with this line of GTX cards. My friend bought an ASUS GTX 670, and it was also defective, exhibiting the same issues as Tom's sample did.
 

somebodyspecial

Honorable
Sep 20, 2012
1,459
0
11,310


It performed better everywhere else because (after looking at Igor's tests) they turned off tessellation here at Tom's Hardware, which Cleeve already said (see the 650/660 review comments, page 3 I think) was an advantage for Nvidia. So yeah, when you turn off winning features (weird? why do that? He gave his reason below, but 18 other sites didn't; well, 19 with HardOCP, since they run with PhysX too, and why not? A game that has it looks better with it on), you can make a strong card look weak. When you exacerbate a semi-strength of the competing card, you may be able to show the BETTER card finally losing. And that's what happened here.

Since the other sites didn't do that, they show real-world performance, which is the opposite of this review. They left the cards alone rather than trying to find some way to make the 660 Ti look weak. When you don't mess with them to show them weak, they obviously (18 reviews showing the 660 Ti winning?) are quite a bit better than Igor/Tom's is showing. FAIL. Anandtech had the same thing: one loss and six wins for the 660 Ti (add them to your list too; HardOCP was a wash even with 8xMSAA).

After reading this review... not sure I can still say I'm merely suspicious. Blanket statements after turning off tessellation (which would change the whole story). Interesting choice. Raise your hand if you go home and turn off tessellation on your new 660 Ti to prove you can make your card slower than it really is. 'Nuff said. Raise your hand if you turn off PhysX too, because you DON'T want those pretty add-in graphics they gave you; you want it to look UGLY, right? NOT ME. If I buy this card, I will race home to turn ON tessellation and turn ON PhysX. But maybe I'm just being dumb.

I'd like to see this same set of tests run without turning features off (who does that when they get home?). I think things would look like the other 18 reviews... LOL. If these cards were cars, I'd say the other 18 reviews ran the 660 Ti on 4 tires. They've run the 660 Ti here on only 3 tires (2? PhysX wasn't mentioned much, if at all). I'd like to understand why.

"First off, I don't use maximum tessellation unless it provides a visual benefit, and doesn't cause too much of a frame-rate hit. In the case of the 660 ti and the games we tested, I found it fails on both counts. Nvidia and other sites crank up tessellation, and I believe this gives the GeForce a sizable advantage in a number of tests."
From Cleeve's post of 08-17-2012 at 11:08:56 AM:
http://www.tomshardware.com/forum/page-3279_56_100.html

'Nuff said? It's right here in the comments section: page 3 if viewing in the forums, or page 7 if coming from the review, I think :) Well, it's in that link's comments, not directly here. So you'd turn this off if you wanted to hide this advantage, correct? :) HardOCP calls this out in many of their reviews: the weakness of AMD's tessellation. So if you ever turn it off and benchmark the 7000 series vs. the 600 series, expect to show the 7000s in a much better light than reality. They have tessellation problems. Still not sure why Tom's chooses to change out-of-the-box settings. I mean, they sell the cards with tessellation (both sides), so why hamper the winner by turning it off? So the loser can become the winner? Strange.
"why turn off tessellation"
Google that without quotes... watch what you get... LOL. "AMD cheats turning off tessellation" etc.... ROFL.
http://www.hardocp.com/article/2012/03/22/nvidia_kepler_gpu_geforce_gtx_680_video_card_review/9
Tessellation on = a different story... NV victory ;) So yeah, add this as site 19, looniam
 

BlizzardGamer

Honorable
Jul 8, 2012
64
0
10,630
[citation][nom]WILDLEGHORN[/nom]But I'm still confused...After reading everything & from the conclusion which 660 Ti's are people supposed to go with?There are so many out there, but which ones are recommended by Toms???Someone plz clear this up as there's no clear cut recommendations made in the conclusion, thanks![/citation] I think they didn't exactly recommend any of them, from what I read.
 
Guest

Guest
Ok, they've had driver issues of their own, but those either have been or are being fixed. What am I missing?
This:
http://www.techspot.com/news/48321-amd-drops-windows-8-support-for-radeon-hd-4000-and-older.html

My crappy, cheap 9500 GT has Win 8 ForceWare Control Panel support, unlike the HD 48xx Radeons, which were expensive mid-to-high-range cards in their time. Not to mention the soon-to-be-lousy legacy driver support for games that could still run on mid-to-high-range HD 4xxx cards; thanks to ATi, those cards will be obsolete.

No wonder that with this policy ATi (now, thankfully, AMD) will never be No. 1 in the graphics card department for any long period, or be taken seriously.
 

jonjonjon

Honorable
Sep 7, 2012
781
0
11,060
"This is a phenomenon we expected to see when Nvidia first announced GPU Boost, and we're starting to see it mess with the value of add-in board partner offerings. It makes less sense to pay more for a higher base clock when the effects of GPU Boost affect each piece of silicon differently (and there's nothing you can do about it)."

So what's the point of a review if it's all a silicon lottery? I guess the cooling solutions are different. So the MSI or Gigabyte card I buy might be a middle-of-the-pack card? Do they do binning for the OC versions so you know you're not getting a garbage chip? I would imagine they send the absolute best chips to be reviewed. Also, if the 660 Ti is limited by memory bandwidth, why aren't all the manufacturers overclocking the memory? Seems like the obvious thing to do. Would they have to buy better/more expensive memory to overclock it?
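On the memory question, the arithmetic behind a memory overclock is straightforward. Here's a quick Python sketch; the 192-bit bus and 6,008 MT/s are the reference GTX 660 Ti memory specs, and the 6,608 MT/s figure mirrors the factory-overclocked Zotac AMP! Edition mentioned later in the thread:

[code]
def bandwidth_gbps(bus_width_bits, effective_rate_mtps):
    # Bandwidth = bytes per transfer (bus width / 8) x effective transfer rate.
    return bus_width_bits / 8 * effective_rate_mtps / 1000

stock = bandwidth_gbps(192, 6008)  # reference GTX 660 Ti memory spec
oc = bandwidth_gbps(192, 6608)     # factory-OC rate, e.g. the Zotac AMP! Edition
print(f"stock: {stock:.1f} GB/s, overclocked: {oc:.1f} GB/s "
      f"(+{(oc / stock - 1) * 100:.0f}%)")
# -> stock: 144.2 GB/s, overclocked: 158.6 GB/s (+10%)
[/code]

So a ~10% memory overclock buys back some, but not all, of the bandwidth deficit. GDDR5 chips rated for higher speeds generally do cost more, which may be part of why only the premium boards ship that way.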
 

Ygrek

Honorable
Nov 15, 2012
2
0
10,520
Let me criticize a little the correctness and consistency of this article's conclusions about the relationship between the GTX 660 Ti's memory interface and its behavior in "Testing For Memory Interface Limitations".

The authors of this article state that the only reason for the GTX 660 Ti's problem (its FPS drop at 8x AA) is its 192-bit memory bus. And then they say that the 192-bit bus is sufficient reason not to buy it. But...
As we all know, the memory bus width by itself is not what matters. What matters is the product of the bus width and the memory frequency, which is called "memory bandwidth" (the sketch after the list below verifies these figures).
Remember the old HD 4830? It has a 256-bit bus and GDDR3, for a bandwidth of 57.6 GB/s. The slightly newer HD 4770 has only a 128-bit bus but faster GDDR5, giving almost the same bandwidth, 51.2 GB/s, and the HD 4770 is often faster than the HD 4830.

Here we have:
HD 7950: 240 GB/s bandwidth, 384-bit bus;
GTX 670: 192.3 GB/s bandwidth, 256-bit bus;
HD 7870: 153.6 GB/s bandwidth, 256-bit bus;
GTX 660 Ti: 144.2 GB/s bandwidth, 192-bit bus; note that this is only about 6% less than the HD 7870.
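All these figures follow directly from bus width times effective memory rate. A quick Python sketch to verify; the rates below are the reference effective data rates for each card (factory-overclocked boards differ):

[code]
def bandwidth_gbps(bus_width_bits, effective_rate_mtps):
    # bytes per transfer (bus width / 8) x millions of transfers per second
    return bus_width_bits / 8 * effective_rate_mtps / 1000

cards = {  # bus width (bits), reference effective memory rate (MT/s)
    "HD 4830":    (256, 1800),
    "HD 4770":    (128, 3200),
    "HD 7950":    (384, 5000),
    "GTX 670":    (256, 6008),
    "HD 7870":    (256, 4800),
    "GTX 660 Ti": (192, 6008),
}
bw = {name: bandwidth_gbps(*spec) for name, spec in cards.items()}
for name, gbps in bw.items():
    print(f"{name}: {gbps:.1f} GB/s")
print(f"HD 7950 over HD 7870:    +{(bw['HD 7950'] / bw['HD 7870'] - 1) * 100:.2f}%")
print(f"GTX 670 over GTX 660 Ti: +{(bw['GTX 670'] / bw['GTX 660 Ti'] - 1) * 100:.1f}%")
print(f"GTX 670 over HD 7870:    +{(bw['GTX 670'] / bw['HD 7870'] - 1) * 100:.1f}%")
print(f"GTX 660 Ti vs HD 7870:   {(bw['GTX 660 Ti'] / bw['HD 7870'] - 1) * 100:.1f}%")
[/code]

Those ratios (+56.25%, +33.3%, +25.2%, and about -6%) are exactly the ones used in the comparisons below.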

OK, let's assume that ~6% is still enough to cause such a big FPS drop at 8x AA.
Now, look at the graph "Absolute Framerate 2560x1440 (FPS)". What do we see?

1) Let's compare the red (HD 7950) and blue (HD 7870) curves. They behave pretty much the same; the red one is just shifted up, in line with the HD 7950's higher performance. The gap between them at 8x AA is even smaller than with no AA. But the HD 7950's bandwidth is 56.25% larger than the HD 7870's (or, if you prefer to compare buses, 50% larger). So why doesn't such a big difference in memory bandwidth and bus width lead to any change in the curves' behavior, if bandwidth is supposedly so important?

2) Analogously, let's compare the green (GTX 670) and yellow (GTX 660 Ti) curves. Again, the differences in bandwidth and bus width are considerable (both about 33%), and still there is no change in the behavior of the curves, only a shift due to different performance.

3) Most interesting of all, let's compare the green (GTX 670) and blue (HD 7870) curves. The GTX 670 has the same 256-bit bus and 25.2% more bandwidth than the HD 7870, yet the GTX 670 still drops below the HD 7870 at 8x AA. What, again because of a narrower bus or less bandwidth?

In short, the conclusions in this article about the leading role of the GTX 660 Ti's "bad memory interface" are, to put it mildly, exaggerated. There must be other explanations.

In my opinion, the memory bus, and even more so the bandwidth, have nothing to do with these "results" at 8x AA.

P.S. Why did they choose Batman: Arkham City? It looks like this game doesn't love nVidia cards.
P.P.S. Where is the comparison with the Zotac AMP! Edition GTX 660 Ti, which has 158.6 GB/s of memory bandwidth, 3.2% higher than the HD 7870's?

P.P.P.S. I didn't find any comments on the power consumption comparison. Why does the GTX 660 Ti (TDP 150 W) consume more watts than the HD 7870 (TDP 175 W) when it has the lower TDP?
 