GeForce GTX 690 Review: Testing Nvidia's Sexiest Graphics Card

Another fantastic article Chris!!!

I am recuperating from some fun dental surgery. At least we now know the GTX 690 is not like the GTX 590, which suffered from 'downgraded' core/memory/shader speeds.

What would be really cool is to see GTX 690s in SLI, and with 3D; it might make me reconsider that as an option over 3-way 4GB GTX 680s. I also assume that in SLI the GTX 690 retains its 4GB of VRAM for 5900+x1080 (5760x1080).

You still need to do an article on PCIe 2.0 vs. PCIe 3.0 at high resolutions (e.g. 4x~8x AA) and in 3D. In it, it would be nice if you could employ that registry mod on X79 + GTX 680 to enable PCIe 3.0.
 
Guest
$1,200 on Newegg? I thought it was supposed to be $1,000. Too much for now...
Wondering when the price will go down. They say stock will be updated on May 4 and 7, but will the 690 be as available as the 590 was last year?

Looks like AMD is nonexistent in the high-end VGA market anymore :| Gonna trade my 7970 for Kepler.
 

fatality1515

Distinguished
Jan 22, 2012
43
0
18,530
Just as I expected: 690 + LuxMark = pathetic performance... My next purchase will be an AMD card... Some of us do more than just play games...
 

omnimodis78

Distinguished
Oct 7, 2008
886
0
19,010
[citation][nom]outlw6669[/nom]WTF!?! GTX 590 launch... GTX 580 launch price: $500. GTX 590 launch price: $700. Difference: 140%. Now today, GTX 690 launch... GTX 680 launch price: $500. GTX 690 launch price: $1000. Difference: 200%. So, is it just me or is nVidia really gouging on the price here? Why the hell else would they be charging an additional 43% more than their last dual-GPU launch while using less silicon? Come on AMD, we really need some more competition here.[/citation]
Actually, a 140% difference would mean that the card would cost $1,200. The difference between $500 and $700 is ~28.5%, so calm down; it's not THAT bad! I'm sure your second example is also mathematically off.
 
[citation][nom]omnimodis78[/nom]Actually, a 140% difference would mean that the card would cost $1,200. The difference between $500 and $700 is ~28.5%, so calm down; it's not THAT bad! I'm sure your second example is also mathematically off.[/citation]

His/her math is correct: going from $500 to $700 is a 40% jump, which is what makes $700 equal to 140% of $500. 10% of 500 is 50, and 50 times 4 (10% times 4 is 40%) is 200; 200 plus 500 is 700. You are counting down from 700 instead of counting up from 500 like you should have. 500 is ~28.5% less than 700, but 700 is 40% greater than 500.
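
For anyone who wants to double-check the arithmetic, here is a minimal Python sketch using the launch prices quoted above:

[code]
def percent_increase(old, new):
    """Percent change counting up from the old price."""
    return (new - old) / old * 100

def percent_decrease(old, new):
    """Percent change counting down from the new price."""
    return (new - old) / new * 100

print(percent_increase(500, 700))   # 40.0  -> $700 is 40% more than $500
print(percent_decrease(500, 700))   # ~28.6 -> $500 is ~28.6% less than $700
print(percent_increase(500, 1000))  # 100.0 -> $1000 is 100% more than $500
[/code]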

However, outlw6669 is wrong because he/she does not seem to realize that the difference in performance between the 680 and the 690 is greater than the difference in performance between the 580 and the 590. The gap in release time between the 580 and the 590 may also have let price/performance improve in the interim, so the 590's launch price might be somewhat deflated compared to what it would have been had it launched alongside the 580. However, I don't remember the prices of those days right now, so that's mere speculation, but it is something to consider. Does anyone here know some common prices for the 580 at the time of the 590's launch in March of 2011?
 
BigMack70 brought up a good point about the VRAM capacity being a cause for concern. Before I say anything about the 680, look at this.

Historically, Nvidia has often had less VRAM than AMD at the same performance level, and fairly often even at higher performance levels. Let's just look at the GTX 570, the GTX 580, and the Radeon 6970 for now.

The 580 and 570 came out about a year and a half ago. Back then, 1.5GB, even at their performance level, was more than enough, even for SLI setups. Even today, it is still enough for single-GPU and even dual-GPU 580 setups. However, three 580s (at a resolution, quality settings, and AA/AF appropriate to their performance) can end up with a huge VRAM capacity bottleneck. In many cases now, two 580s need the AA/AF turned down because of the VRAM problem (all of this assumes 1.5GB 580s, not 3GB 580s).

However, like I said, this simply was not so when they first came out, or for a while after that. Even today, the 6970 has enough VRAM for dual- and triple-GPU setups at resolutions, quality settings, and AA/AF appropriate to their performance. The 6970s and even the 580s have obviously had enough VRAM to remain great for quite a while now and still have time left in them, probably enough to wait for the Radeon 8000 and perhaps GTX 700 cards (assuming those are the next series).

So, the 1.5GB 580 was not a VRAM-capacity-bottlenecked card, and it still isn't today (in most situations). The same can be said about most of Nvidia's cards. Going further back, we can look at the GTX 285 and 295. The 295 has less VRAM capacity than its competitor, the Radeon 4870 X2, but that didn't stop it from being great for years to come, did it? (Even now it's a decent card for anything up to 1080p, if you're willing to skip DX11 support and accept that it's about half as efficient as cards with similar performance today, such as the Radeon 7850.) I think the claims of Nvidia having VRAM-bottlenecked cards were way overhyped and exaggerated, at least until Kepler.
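
As a rough sanity check on when capacity starts to matter, here is a minimal back-of-the-envelope sketch of render-target memory alone. It ignores textures, geometry, and driver overhead (which usually dominate), so treat the output as a floor on usage, not a total:

[code]
def framebuffer_mb(width, height, msaa_samples, bytes_per_pixel=4):
    """Rough size of one multisampled color buffer plus a depth/stencil
    buffer (4 bytes per sample each), in MB."""
    color = width * height * msaa_samples * bytes_per_pixel
    depth = width * height * msaa_samples * 4  # 24-bit depth + 8-bit stencil
    return (color + depth) / (1024 ** 2)

print(framebuffer_mb(1920, 1080, 4))  # ~63 MB at 1080p with 4x MSAA
print(framebuffer_mb(5760, 1080, 4))  # ~190 MB in triple-wide 1080p
[/code]

Even tripled for extra buffers and post-processing targets, render targets alone are a small slice of 1.5GB; in practice it's textures plus high AA at surround resolutions that eat the rest.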

Back to the 680 and 690. These cards have the same amount of VRAM per GPU as the 6970. The problem with this is that they are almost twice as fast per GPU as the 6970. They also have roughly the VRAM bandwidth of the GTX 580, but this is not as big of a deal (although it does mean that they probably have a minor-to-moderate bandwidth bottleneck; I don't think this should be worried about very much, and it could be tested by checking whether increasing VRAM bandwidth scales performance better on these cards than it does on most others).
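
The bandwidth claim checks out from the published memory specs (a 256-bit bus at 6008 MT/s effective on the 680 versus a 384-bit bus at 4008 MT/s on the 580); a quick sketch:

[code]
def memory_bandwidth_gbs(bus_width_bits, effective_rate_gtps):
    """Peak memory bandwidth in GB/s: bus width in bytes times the
    effective transfer rate in GT/s."""
    return (bus_width_bits / 8) * effective_rate_gtps

print(memory_bandwidth_gbs(256, 6.008))  # GTX 680: ~192.3 GB/s
print(memory_bandwidth_gbs(384, 4.008))  # GTX 580: ~192.4 GB/s
[/code]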

The problem is that they are now twice as fast as those 6970s but have the same VRAM capacity. I do not think the 680 has the proper VRAM capacity for its GPU performance, and this can and will hamper its longevity as a high-end card. If I were to consider buying a 680, it would be a 4GB 680. If I couldn't get one, I would compromise by getting a Radeon 7970 or overclocking the hell out of a 7950. An overclocked 7950 seems to come very close to a 7970 even if you overclock both, probably because the only difference between them is that the 7950 has a block of shaders and such disabled, and increasing the core count does not scale performance very well.

The same thing shows up, to a lesser degree, between the 6950 and 6970 (without flashing the 6950). This scaling problem gets worse at higher core counts, showing diminishing returns that bear an alarming resemblance to the diminishing returns seen between increasing clock frequency and power usage. Nvidia seems to know this and is actually working around it in their next or second-to-next architecture (there was a recent article about Nvidia patenting the basis for an architecture that should stave off this multi-core parallelism scaling problem).

Point is, I'd take an overclocked Radeon 7950 3GB with performance close to a stock 680 2GB over the 680 2GB, even if it means considerably higher thermals, just because I don't want to risk needing an early upgrade due to the 680 running out of VRAM well before its GPU is fully loaded. I hope 4GB-per-GPU GTX 680s and 690s are made, and that they have reasonable prices, because if not, I'd have to compromise. I'm certain that the electricity bill hike would be cheaper than upgrading such a high-end graphics card more often than I should have to.

While agreeing with BigMack70 on this, I do disagree with his/her opinion of Nvidia's turbo-boosted GPUs. I would simply like Nvidia to give us greater control over the feature, so that the people who want it can use it and the people who don't want it don't have to. It would also mean another thing for enthusiasts to play with, and I think many of us would like that ;)
 


Nvidia has usually held the high-end market over AMD. The best possible graphics setups have been Nvidia's for some time now (quad GTX 285, quad GTX 480, quad GTX 580, quad GTX 680 and dual GTX 690, etc.), so it's not really that bad. The high-end market isn't very highly populated and doesn't make nearly as much money for these companies (despite the far higher prices) as the low- and mid-end markets do, let alone the professional and compute graphics card markets.

Also, it's AMD nowadays, not ATI. It hasn't been ATI for many years (although AMD didn't switch the graphics card brand name over to AMD until the Radeon 6000 series, aka Northern Islands).
 
Guest
EVGA removed the lifetime warranty for this monster, so you spend 1,200 bucks (yes, theirs is $200 over retail) and you get a lousy 3-year warranty. Only for the most hardcore. By the way, the PNY GTX 680 has a lifetime warranty, so if anyone were to spend $1,000+ on two of those instead, you would get more performance and a better warranty.
 
Guest
Makes me sad how they massively cripple the GK104's FP64 and OpenCL performance on purpose to sell heavily overpriced Quadros and Teslas with the same GPUs on them. This card is a straight gaming card; there's nothing else you can do with it. If you can't afford a Quadro for thousands of dollars, you're stuck with AMD/ATI.
 
[citation][nom]PizarroCore[/nom]Makes me sad how they massively cripple the GK104's FP64 and OpenCL performance on purpose to sell heavily overpriced Quadros and Teslas with the same GPUs on them. This card is a straight gaming card; there's nothing else you can do with it. If you can't afford a Quadro for thousands of dollars, you're stuck with AMD/ATI.[/citation]

That isn't exactly what happened. This is a little technical, but it should show you what went on with Kepler.

There are two types of cores in Kepler. One type is only capable of doing 32-bit math, but it is also more optimized for it than any core that does 64-bit math could be. This is part of why the GTX 680 is ahead of the 7970 even though the 7970 has both more cores and higher memory bandwidth. The 7970's cores are optimized for both types of math, but not as heavily as the 32-bit cores, because the 7970's cores are supposed to be capable of 64-bit math and were intended to be good at it (which they are).

These 32-bit cores are what the consumer Kepler cards are primarily made of. This gives them good gaming performance and greater efficiency than GCN during gameplay, but it means those cores can't do 64-bit math at all.

The second type of Kepler core is optimized for 64-bit math as heavily as the 32-bit cores are optimized for 32-bit math. These cores can do 64-bit math at full speed (instead of 1/2 the 32-bit rate or much worse; the GTX 580, for example, had a 1:8 ratio of 64-bit to 32-bit throughput), but they are very inefficient for 32-bit math because they are larger and more complex than the 32-bit cores, precisely because of their 64-bit optimization. So, they are epic for 64-bit math but poor at 32-bit math in comparison to the 32-bit cores.

The GK104 and such have far fewer of these special 64-bit cores than they have 32-bit cores, because the Kepler cards are intended to be as streamlined for gaming as possible. This means the 1:24 ratio of 64-bit to 32-bit throughput is probably due to this hardware layout, rather than Nvidia just flipping a hypothetical switch in the BIOS to limit double-precision performance.
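
The 1:24 ratio falls straight out of the unit counts: GK104 has 8 SMX blocks, each with 192 32-bit CUDA cores and 8 dedicated 64-bit units. A minimal sketch of the theoretical throughput, assuming the 1006 MHz base clock and 2 FLOPs per core per cycle (one fused multiply-add):

[code]
def peak_gflops(cores, clock_mhz, flops_per_cycle=2):
    """Theoretical peak: cores x clock x FLOPs per core per cycle."""
    return cores * clock_mhz * flops_per_cycle / 1000

fp32 = peak_gflops(8 * 192, 1006)  # 1536 32-bit cores -> ~3090 GFLOPS
fp64 = peak_gflops(8 * 8, 1006)    # 64 dedicated 64-bit units -> ~129 GFLOPS
print(fp32, fp64, fp32 / fp64)     # the ratio comes out to exactly 24
[/code]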

So, Nvidia decided to give us the most energy- and die-space-efficient gaming performance reasonably possible, while also forcing us to go with AMD, or with professional Nvidia/AMD cards, if we want high 64-bit compute performance. At least the 7970 can hang with some of the current high-end professional cards in raw compute performance, despite its much lower price. A professional GCN card or a professional Kepler card (using only the 64-bit cores) could be incredibly faster than the current professional boards, especially a professional GK100 (yes people, it's the GK100, not the GK110, that is the direct successor to the GF110 in the GTX 580; the GK110 is the successor to the GK100, if a second generation of Kepler cards is created).
 

Rattengesicht

Honorable
May 6, 2012
7
0
10,510
[citation][nom]blazorthon[/nom].....[/citation]

I understand the technical part (I'm, btw, the guy who posted that ;-)).
Because we don't have a professional equivalent of the new 104 yet, we can't know about the "BIOS switch".
But that was certainly the case with the older GeForces.
Hacked drivers = Quadro (well, not entirely, I know).

But that's not my point. Nvidia has every right to differentiate their products. But this is massive, as there is nothing between GeForce and Quadro.
You get either expensive gaming cards or extremely expensive GPGPU cards.
With AMD you get a bit of both worlds without spending a fortune. At least for now.

And I'm saying this as a huge Nvidia fanboy. I'm just disappointed with this situation, as they give me no choice other than switching over to the competition.
Feels really bad.
 

evga_fan

Distinguished
Aug 22, 2010
76
0
18,640
Sure, it's a BEAST of a card and performs well. No doubt! However, knowing that there is (or is rumoured to be) a GK100/110 (or whatever you want to call it) in the making takes a lot of the "sting" away from this one (and the 680, for that matter), I suspect.
I mean, you pour your hard-earned money into high-end hardware and expect to have the best, at least until the next generation arrives, only to find out that this was just something to buy them some more time before the real deal comes out.

Now here is the skinny: if the 680 is in fact the "hunter", then it should be priced accordingly, thereby forcing the competition to drop their prices as well. Then, once the GK100/110 is ready, they could advertise it as "the best" and slap a $500 price tag on it.
Why, you ask, seeing as it would actually perform at the very top and hence justify its price tag?
Well, if customers actually KNEW WHAT they were buying, they would in fact NOT hold on to their money. People who actually wanted a 560-class card, and coincidentally a 560-class price tag, would KNOW that this IS that card and not something it was never intended to be. This would eradicate a lot of uncertainty and raise customer awareness. (Granted, NV probably doesn't want us to know, but I'm arguing why they just as well could.) Not to mention (as I said) that people wouldn't hold on to their money, which I suspect a considerable number of you do.
Don't worry, Nvidia: those of us who want the best WILL give you our money and pay for the best, but please don't fool those who settle for less!

All I'm saying is, if I want a Ferrari, I want a FERRARI! I don't want something that merely looks like or currently performs like a Ferrari; say, a cheap Porsche. The Porsche is only there because the opposition doesn't call for more.

Chris, you say that there's no guarantee the "tank" card will outperform the GTX 690. True, but neither do I expect it to! The tank card could in fact fall right in between the 680 and the 690 (maybe slightly toward the 690), making the SLI configuration the fastest option!
I would expect such a configuration to fetch a price tag of a thousand dollars, not the 690! Now I'm afraid to even think about what the price tag of such a tank card will be, not to mention two of them, especially if one can outperform one 690...

Now don't get me wrong. Price shifts do occur once a new generation is released, obviously. However, if the price shifts start now, within this generation, making a 680 cost as much as a 560 did, in favor of a tank card that inherits the old price tag, NV has really pulled a fast one on their customers...
 
[citation][nom]evga_fan[/nom].....[/citation]

For its gaming performance, the 680 is worth the money. I don't understand why people are upset about the "hunter" being used as the top consumer GPU (the GK100 looks like it will go only into professional cards) when, despite its hunter status, it is still the fastest single-GPU card for most games. The GTX 480 managed to (more or less) equal the GTX 295 in performance, and that was the last new architecture. The 680 managed to (more or less) equal the 590. It fits this trend. This time, Nvidia doesn't need 500mm² or larger dies to compete with AMD's sub-400mm² dies, and that's important for Nvidia. For its gaming performance, the 680 is worth $500. It's more than twice as fast as the GTX 560 Ti 2GB despite that card being about half the price of the 680. The 680 has more performance for the price than many lower-end cards do, let alone high-end cards.
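
Taking the poster's own figures at face value (roughly 2x the performance of a GTX 560 Ti 2GB at roughly 2x the price; the ~$250 street price and the speedup are the claims above, not measurements), performance per dollar works out about even at the high end, which is unusual; a trivial sketch:

[code]
def perf_per_dollar(relative_perf, price_usd):
    """Relative performance units per dollar."""
    return relative_perf / price_usd

print(perf_per_dollar(1.0, 250))  # GTX 560 Ti 2GB baseline: 0.004
print(perf_per_dollar(2.0, 500))  # GTX 680 at 2x perf, 2x price: 0.004
[/code]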

So, just because this card shares its roots with the GF114 more than the GF110 doesn't mean it isn't worth the asking price. Remember that although it was also a mid-range chip, the GTX 460 still went for a pretty good price when it first came out, because it was as good as or better than the previous generation's high-end single-GPU cards. The 680 is fast, and there's no denying that.

We don't know whether the GK100 will end up in ANY consumer cards (if not, then the GK104 will be the top). It probably won't. The GK110 might show up in the next generation of consumer cards, but the GK100 will almost certainly appear only in Quadro and Tesla cards. Also, for gaming performance, the GK100 probably wouldn't be more than 35% to 50% faster than the GK104. It is supposed to have 2304 CUDA cores.

So, was Nvidia pulling a fast one just because the names of the GPUs (GF104, GK104) aren't to your liking? What if Nvidia simply renamed them? The GK104 performs in line with Nvidia's previous trend in new-architecture performance and was priced accordingly. Complain about it all you want, but you're simply wrong.
 

evga_fan

Distinguished
Aug 22, 2010
76
0
18,640
[citation][nom]blazorthon[/nom].....[/citation]

Like you just said, the 680 is undeniably fast, and there's no debate about that. If the 680 remains the fastest single-GPU card for this generation, then fine.

The problem arises when a fastER GPU does get released in the same generation. Consumers who have been led to believe that the 680 is the best now have to adjust to the fact that there is a new kid on the block, better in almost every aspect.

Now, we've seen that the 680 does well in gaming but not so much in compute performance (in Sandra, LuxMark, and encoding/decoding in, e.g., MediaEspresso), something I think will be addressed in "big" Kepler.

The trend you're talking about is obviously going up; otherwise, what would be the point of the newer cards? But who's to say the trend shouldn't be steeper (i.e., that Kepler hasn't peaked)? All these "claims" you're bringing up about what the GK100/110 is are only rumours, something that neither you nor anybody else can know for sure until we actually see it. All we can do is speculate, so here I go:

I don't care what they're calling it, GK104 or GK-the-fastest-son-of-a-gun. The only indication and speculation I can draw from "GK104" is this: it's not common for a top-of-the-line part to have a "4" at the end of its code name. Something tells me there WILL be another model that dethrones the 680: the real successor to the 580 (not only meant for Quadro/Tesla cards for professional use, but ALSO for gaming purposes). All I'm worried about is: if the 690 costs $1,000, how much do you think "big" Kepler will cost?
 
[citation][nom]evga_fan[/nom].....[/citation]

I see your point, but every recent source I can find says either that they know nothing about the GK100, or that it's only going to be delegated to the Quadro and Tesla families because Nvidia is satisfied with how well the GK104 did for gaming. You have to realize that Nvidia's use of those huge dies for gaming cards was only because of compute. Nvidia found a way to lower compute performance while increasing gaming performance, all while decreasing die area and power usage (relative to what Kepler would have been had it used the FP64 blocks more extensively in consumer cards).

I don't think that there will be a GK100 successor to GK104, at least not in the GTX 600 series.
 
I try not to base speculation purely on model numbers. They often change, or don't even correlate well with the previous generation. For example, compare the 6870 to the 5870. If you didn't know much about graphics, you wouldn't expect the 5870 to actually be the faster card for most games and workloads.

Point is, I like to base speculation on many more factors, some of which include the hardware, the drivers, the software, the particular company, and its products in other markets, among yet more. I just don't see a good reason for Nvidia to release a higher-performance GPU than the GK104 at this point. In the GTX 680, it can already do pretty much everything at 2560x1600.

As of right now, a 685 or 680 Ti with the GK100 would simply be too far ahead of everything else. If monitors beyond 2560x1600, or 2560x1600 3D and 120Hz monitors, were cheaper, then I wouldn't be so sure of this. The only resolutions where such a GPU would be favorable right now are 5760x1080 and 5760x1200, because those might be playable with only a single GK100-equipped card. It would probably carry a price between the 680 and the 690, probably closer to the 690; $700 to $800 seems the most likely launch price for a hypothetical GK100 card.
 

evga_fan

Distinguished
Aug 22, 2010
76
0
18,640
[citation][nom]blazorthon[/nom].....[/citation]

Fair point. Funny, though; I've read some articles that point to the contrary. The release date is obviously too hard to tell, but I wouldn't expect it anytime soon.

It seems, however, that should they drop compute performance, they've effectively handed AMD that distinct advantage (best of both worlds, as it were) for their gaming cards. I just don't see them dropping compute performance, seeing as it IS relevant to some degree.

It was merely speculation based solely on the code name, but I think you understand that there's more to it than that.
 

Rattengesicht

Honorable
May 6, 2012
7
0
10,510
[citation][nom]evga_fan[/nom]It seems, however, that should they drop compute performance, they've effectively handed AMD that distinct advantage (best of both worlds, as it were) for their gaming cards. I just don't see them dropping compute performance, seeing as it IS relevant to some degree.[/citation]

That was exactly my point. As much as I love Nvidia, for some (probably not that many) people this new generation of GeForces is useless.
It reminds me of Formula 1 somewhat: you get to be the fastest on the circuit, but don't expect that car to be useful in rush-hour city traffic.
 

zloginet

Distinguished
Feb 16, 2008
438
0
18,790
[citation][nom]Marcus52[/nom]That pair of 690s with the matching bridge looks so sweet, I want 2![/citation]

Not nearly as nice as my 7970s with Koolance AR-7970 waterblocks on them...
 