Nvidia GeForce GTX 690 4 GB: Dual GK104, Announced



I gave up waiting for Nvidia to learn what a supply chain is and pull its head out of its ass. I'd been using Nvidia cards since the days of Counter-Strike 1.0. Now I'm the proud owner of a Sapphire 7970 OC.
 
The MSRP is $999, and the fan blowing into the case is a no-no. I'll stick with my SLI'd, Koolance-blocked, 1400MHz GTX 680 beast, thank you!

Anyway, quad-SLI drivers suck ass and always will.
 
[citation][nom]blazorthon[/nom]The Kepler cards only come close to the GCN cards in single-precision compute performance, not double-precision (which is far more important for professional/media work). Get a good display configuration such as a 2560x1600 display or a 3D 1080p display and you will quickly find yourself wishing you had more performance. Games today can fully load FAR better graphics setups than two 6970s. The whole point of more expensive graphics cards is to play above 1080p, not at 1080p. Also, if you play a seriously intense game such as Crysis 2 with DX11 mods, you can find yourself fully loading two 6970s even at 1080p and barely hitting 50 to 60 FPS.[/citation]



See, this is made with the assumption that 3D doesn't make you want to gouge your eyes out after 15 minutes, and that there are a lot of people stupid/naive enough to buy into the 3D hype.

I don't need my games presented in a hyped format on an overpriced display, with performance cut in half and extra eye strain, to be happy playing them.
 
[citation][nom]gsxrme[/nom]The MSRP is $999, and the fan blowing into the case is a no-no. I'll stick with my SLI'd, Koolance-blocked, 1400MHz GTX 680 beast, thank you! Anyway, quad-SLI drivers suck ass and always will.[/citation]


After the investment of water-cooling a video card, with a waterblock adding $100-150 on top of an already ridiculously expensive configuration, you'd d*** well better plan on sticking with it for a couple of years, at minimum.
 
[citation][nom]tuffjuff[/nom]See, this is made with the assumption that 3D doesn't make you want to gouge your eyes out after 15 minutes, and that there are a lot of people stupid/naive enough to buy into the 3D hype. I don't need my games presented in a hyped format on an overpriced display, with performance cut in half and extra eye strain, to be happy playing them.[/citation]

I only mentioned 1080p 3D as needing similar performance to 2560x1600. None of the rest of the comment assumes anything about wanting or not wanting a 3D gaming display. After I mentioned 1080p 3D, every other mention of 1080p meant 1080p 2D, which is why I didn't add the 3D to it.
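For what it's worth, the raw pixel math backs that up. Here's a quick back-of-the-envelope sketch (the 60Hz figure and modeling stereo 3D as two rendered views per frame are my own assumptions, not anything from the article):

```python
def pixels_per_second(width, height, fps, views=1):
    """Raw pixels a GPU must shade per second for a given display mode."""
    return width * height * fps * views

flat_1600p = pixels_per_second(2560, 1600, 60)             # 2560x1600 @ 60 Hz
stereo_1080p = pixels_per_second(1920, 1080, 60, views=2)  # 1080p 3D: two views

print(f"2560x1600 @ 60 Hz: {flat_1600p / 1e6:.0f} Mpix/s")   # ~246 Mpix/s
print(f"1080p 3D  @ 60 Hz: {stereo_1080p / 1e6:.0f} Mpix/s") # ~249 Mpix/s
```

By these rough numbers, 1080p 3D and 2560x1600 demand almost identical pixel throughput, which is why I lump them together.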
 
At $1000, it's almost a complete waste of time. I say "almost" because Nvidia will undoubtedly sell the 20 or so they manage to produce. This was only made so they could say they had it. It will surely suffer the usual Nvidia production and availability SNAFUs.
 
[citation][nom]battletoad_boy[/nom]I guess Nvidia was too busy trying to sell everybody on their latest and greatest instead of fixing the massive stuttering issues of the 680. It doesn't matter if it's a single card or SLI; these cards are busted right now for many people. Vsync is also really screwed up. Here's a month-old, 10-page thread at Nvidia's official forums (where Nvidia has remained completely silent on the issue). READ THIS BEFORE BUYING A 600-SERIES CARD. Don't just take Tom's word that the card is fine (how long did it take Tom's to admit microstuttering was a real thing?) http://forums.nvidia.com/index.php [...] 26227&st=0[/citation]

That's a great find, and good info for everyone here, but I have to say that if you read to the bottom of that thread, the latest drivers seem to have corrected many, if not all, of the issues. To be sure, it's worth a few minutes of googling for any other unresolved issues before marrying yourself to one of these cards. If it appears fixed, go get yer card, son!
 
[citation][nom]Zephids[/nom]Great idea, Nvidia. You don't have enough 28nm chips in stock to sell GTX 680s, so you release a card that takes TWO 28nm chips, so you can sell even FEWER GTX 690s. *thumbs up* /endsarcasm[/citation]

Not necessarily. The 690's clock speeds are lower than the 680's.
My guess is that the GK104 chips that didn't make the grade for the 680 were binned and are now reused in the 690, which runs at slightly lower clocks. So nothing is wasted, and no one would have gotten any extra 680s in supply anyway. Good move by Nvidia, if that's what they've done: maximize profits early on despite the unfortunate 28nm delays and TSMC's slow rollout of Kepler chips.
I doubt anyone is missing out on a 680 because the 690 is eating up the supply!
 
[citation][nom]IQ11110002[/nom]Not necessarily. The 690's clock speeds are lower than the 680's. My guess is that the GK104 chips that didn't make the grade for the 680 were binned and are now reused in the 690, which runs at slightly lower clocks. So nothing is wasted, and no one would have gotten any extra 680s in supply anyway. Good move by Nvidia, if that's what they've done: maximize profits early on despite the unfortunate 28nm delays and TSMC's slow rollout of Kepler chips. I doubt anyone is missing out on a 680 because the 690 is eating up the supply![/citation]

The article said that the GPUs going into the 690 are supposed to be better-binned than those in most 680s, in order to keep the 690's power usage as low as reasonably possible.
 
[citation][nom]blazorthon[/nom]Quad 590 would be slower than the 690...[/citation]

Someone thumbed this post down, so I'll explain it. A single 680 is roughly equivalent to a single 590. A 690 is almost twice as fast as a 680 (probably more like 1.8 times as fast, or thereabouts). The 590 doesn't scale that well in pairs, because dual-GPU SLI scales FAR better than quad SLI. So two 590s in quad SLI would be beaten by a single 690 running its two GPUs in SLI. This is a well-known phenomenon: two GPUs scale FPS better than three or four GPUs do, although three and four GPUs reduce the stutter caused by dual-GPU configurations, which makes up for their weaker scaling.

The point is, two 590s, even if you can get them for the same price as a 690, will not be faster than the 690. Maybe two 590-class cards like those dual-GTX 580 Asus Mars cards could beat the 690, but not two reference (or near-reference) 590s.
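To make the scaling argument concrete, here's a toy model. Every factor in it is an illustrative assumption (shaped like typical SLI scaling curves, not measured benchmark data), but it shows why four poorly-scaling GPUs can lose to two well-scaling ones:

```python
GTX_580_GPU = 1.0                      # baseline: one 580-class GPU
DUAL_SCALING, QUAD_SCALING = 1.8, 2.6  # assumed SLI scaling vs. one GPU

gtx_590 = GTX_580_GPU * DUAL_SCALING   # one 590 = two 580-class GPUs in SLI
gtx_680 = gtx_590                      # premise above: one 680 ~ one 590

quad_590s = GTX_580_GPU * QUAD_SCALING # two 590s = four GPUs, weak quad scaling
gtx_690 = gtx_680 * DUAL_SCALING       # 690 = two 680-class GPUs, strong dual scaling

print(f"Two 590s (quad SLI): {quad_590s:.2f}x a single 580")  # 2.60x
print(f"One 690 (dual SLI) : {gtx_690:.2f}x a single 580")    # 3.24x
```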
 
[citation][nom]youssef 2010[/nom]Seems to be a good card. This smells like serious trouble for AMD[/citation]

Actually, no, it doesn't smell like trouble for AMD. Anyone who thinks a $1000 card, no matter how good, will outsell the cheaper low/mid-range cards is wrong. This card won't hurt AMD whatsoever, even if AMD never makes a 7990. Besides that, this is also a highly VRAM-bottlenecked card (in both capacity and bandwidth), so it might not even be a good card for what it is.
 
[Image: LuxMark OpenCL benchmark results]


It looks as if the GTX 680 will fall somewhere in there, so far.

I'm not representing this as an end-all OpenCL metric, but you have to like the direction AMD is going in compute with GCN and a unified APU design.



 

I disagree.

For a single-monitor solution, even at 2560x1600, 2GB of VRAM is enough.

In a multi-display setup, every bit of the board's 4GB will be fully utilized. That's plenty.
 


A single 680 can handle 2560x1600 with settings maxed out in the most intensive games at about 60-80 FPS, so this card probably isn't going to be used for plain 2560x1600 much (maybe for 2560x1600 in 3D, or for 120Hz monitors). Even then, 2GB CAN and WILL be a bottleneck at 2560x1600 in plenty of situations, especially with very high AA and maximum quality settings. It usually isn't a huge bottleneck, but it is a bottleneck nonetheless.

In a multi-monitor setup whose pixel count exceeds 4M by much, the most intensive games would run out of VRAM, because the 4GB is not shared between the two GPUs. There is no situation where either GPU can use more than 2GB, because that is all each GPU has.
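Some rough framebuffer arithmetic shows how fast 2GB evaporates at these resolutions. This is a simplified sketch (my assumptions: 4 bytes per color sample, 4 bytes per depth sample, no compression), and real engines pile G-buffers, shadow maps, and textures on top of it:

```python
def render_target_mb(width, height, msaa=1, bytes_per_sample=8):
    """Color (4 B) + depth (4 B) storage for one MSAA render target, in MB."""
    return width * height * msaa * bytes_per_sample / 1024**2

print(f"2560x1600, 4x MSAA: {render_target_mb(2560, 1600, 4):.0f} MB")  # ~125 MB
print(f"5760x1200, 4x MSAA: {render_target_mb(5760, 1200, 4):.0f} MB")  # ~211 MB
# And each GPU needs its OWN copy of every buffer and texture, so the
# 690's "4GB" behaves like 2GB: the two pools are never combined.
```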
 

In multi-display setups, doesn't each GPU take turns rendering frames, effectively using all of the 4GB on the board, albeit not all at once?

It seems to me that that's better than a single GPU with a 2GB framebuffer doing all the work.
 


Well, now I'll need to look into it a little more, but I'm fairly sure the graphics card treats a multi-display system identically to a single display with the same pixel count, as long as the card has enough video output ports for the displays. That is, each GPU takes turns doing a frame, but because they take turns, each GPU must hold all of the data for a frame in its own memory, so that the GPU not working on the current frame has everything it needs for the next one. In other words, there should not currently be any situation where a GPU can use more memory than what is directly attached to it.

In the future, dual-GPU cards might be able to share memory between the GPUs on the card, so that a dual-GPU card doesn't have VRAM problems as bad as single-GPU-per-card SLI/CF setups do. However, there will probably never be a way for a GPU to directly access the memory of another GPU on a separate card. There is simply no way to provide enough bandwidth between two cards: each GPU would need a link to the other with at least as much bandwidth in both directions as the memory interface between each GPU and its own memory. And that's only for dual-card setups; it gets far worse with three- and four-card setups.

Some GPUs can share memory with the CPU (usually just integrated graphics and some old, low-end discrete cards), but as of yet, no GPUs can share memory with other GPUs, to my knowledge. I think it was AMD who floated the idea of using the same technology that merges the memory pools of some low-end GPUs and CPUs to merge the pools of two GPUs on a single card, but I'm not sure when that is supposed to happen.
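To illustrate the duplication point, here's a minimal sketch of alternate-frame rendering (AFR). The class and names are purely illustrative, not any real driver API:

```python
class Gpu:
    def __init__(self, name, vram_mb=2048):
        self.name, self.vram_mb, self.resident = name, vram_mb, {}

    def upload(self, asset, size_mb):
        if sum(self.resident.values()) + size_mb > self.vram_mb:
            raise MemoryError(f"{self.name}: out of VRAM")
        self.resident[asset] = size_mb

gpus = [Gpu("GPU0"), Gpu("GPU1")]
assets = {"textures": 1200, "geometry": 300, "render_targets": 400}

# Under AFR the working set is duplicated, NOT split: each GPU renders
# whole frames, so each needs every asset resident in its own 2GB.
for asset, size in assets.items():
    for gpu in gpus:
        gpu.upload(asset, size)

for i in range(4):  # frames simply alternate between the GPUs
    print(f"frame {i} -> {gpus[i % 2].name}")
```

Note that the assets land on both GPUs in full; only the frames alternate.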
 
Thanks.

I understand that the memory is not shared, so you still effectively have 2GB of VRAM and that's that.

What I'm proposing is that it can't be compared to a single-GPU card's framebuffer, seeing as you have two distinct GPUs, each with its own dedicated memory, sharing the load, so to speak. While it's true that no more than 2GB of VRAM can be used at any given time, both GPUs are still rendering, each using its own VRAM independently of the other; it only follows that the load would not be nearly as intensive.

The way I picture it is two men taking turns shoveling pig slop into a hog pen. True, there is only one shovel in the pen at any one time, but I imagine that for the men (the GPUs), it's not nearly as demanding. Especially for multi-hog - I mean multi-display - setups.

:)
 
[citation][nom]PCgamer81[/nom]Thanks. I understand that the memory is not shared, so you still effectively have 2GB of VRAM and that's that. What I'm proposing is that it can't be compared to a single-GPU card's framebuffer, seeing as you have two distinct GPUs, each with its own dedicated memory, sharing the load, so to speak. While it's true that no more than 2GB of VRAM can be used at any given time, both GPUs are still rendering, each using its own VRAM independently of the other; it only follows that the load would not be nearly as intensive. The way I picture it is two men taking turns shoveling pig slop into a hog pen. True, there is only one shovel in the pen at any one time, but I imagine that for the men (the GPUs), it's not nearly as demanding. Especially for multi-hog - I mean multi-display - setups.[/citation]

I think I see what you're misunderstanding. You're assuming the GPUs are equal parts working together in tandem on the same frame. They are not: one GPU does all of the work for one frame, then the other GPU does all of the work for the next frame. Each GPU needs all of the data relevant to its own frame in its own memory. So they use exactly as much memory as a single-GPU system would; it's just that each GPU only renders every other frame instead of every frame.

Each GPU does the work for every pixel of its frame, even if that frame is split across multiple monitors. While rendering, the GPUs treat the monitors as if they were one large monitor; the frame is only split up when the finished image is sent out to the separate displays.
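If it helps, here's a tiny sketch of that "one large monitor" idea: the frame is rendered as one wide surface, and each display only receives its slice at scanout. Pure illustration, with no real windowing or driver API modeled:

```python
WIDTH, HEIGHT, DISPLAYS = 5760, 1200, 3  # e.g. three 1920x1200 panels

# The GPU renders one WIDTH x HEIGHT frame as if it were a single monitor.
slice_w = WIDTH // DISPLAYS

# Only at scanout does each display get its horizontal slice of that frame.
for d in range(DISPLAYS):
    left, right = d * slice_w, (d + 1) * slice_w
    print(f"display {d}: columns {left}..{right - 1} ({slice_w}x{HEIGHT})")
```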

Does this clear it up?
 
Each GPU takes turns rendering frames. That's true, and I said as much a few comments back. So while you will never have more than 2GB (with the 690) in use for any one frame, it is still more advantageous than 2GB on a single-GPU card.

The fact is that the load is divided between the GPUs, each of which acts and processes information totally independently of the other. They don't split the rendering of a single frame, so to speak, but they do share the load by taking turns rendering frames. The memory size isn't doubled, but the speed at which data is processed improves dramatically (think RAID for GPUs). In fact, benchmarks show there is an effective advantage with regard to VRAM in dual-GPU configurations.
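For what it's worth, here's a toy model of the trade-off we're debating. All figures are illustrative assumptions: AFR roughly doubles frame throughput, but the VRAM each frame needs resident doesn't shrink at all:

```python
FRAME_TIME_MS = 25        # assumed: one GPU renders a frame in 25 ms (40 FPS)
VRAM_PER_FRAME_MB = 1800  # assumed: data each frame needs resident per GPU

single_gpu_fps = 1000 / FRAME_TIME_MS
dual_gpu_fps = single_gpu_fps * 1.8   # assumed ~1.8x AFR scaling, not 2.0x

print(f"Single GPU: {single_gpu_fps:.0f} FPS, {VRAM_PER_FRAME_MB} MB resident")
print(f"Dual AFR  : {dual_gpu_fps:.0f} FPS, {VRAM_PER_FRAME_MB} MB resident per GPU")
# Throughput scales like striped reads in RAID; memory capacity does not.
```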

 