Nvidia GeForce GTX 690 4 GB: Dual GK104, Announced

Status
Not open for further replies.

PCgamer81

Distinguished
Oct 14, 2011
1,830
0
19,810
Now, instead of mentioning that the VRAM will not be doubled, you specifically refer to bandwidth in a comment that wasn't about bandwidth.

I'm sorry, but memory bandwidth absolutely is improved in dual-card configurations, and it is so obvious that I'm going to bite my tongue off going back and forth with you. Just look at the data rate of dual cards versus their single-card counterparts. You can't have more data transfer without more bandwidth, sorry to say. And you know what?

Bam: http://www.hwcompare.com/9825/geforce-gtx-580-vs-geforce-gtx-590/
Bam: http://www.hwcompare.com/9569/radeon-hd-6970-vs-radeon-hd-
Bam: http://www.hwcompare.com/4505/radeon-hd-4870-1gb-vs-radeon-hd-4870-x2/

It's usually twice as effective.

Notice how with the 4870X2, the bandwidth is EXACTLY doubled. Wanna know why? Unlike with most dual-card GPUs, the memory clock is the same. That is more or less your data rate, and guess what that can be chalked up to? The B word. Hence, 4870X2 rather than 4990 or some crap.

If you wanted to get technical for the sake of winning a meaningless argument, then you could say that not only is the VRAM, data rate, and bandwidth not doubled in dual-card setups, but nothing whatsoever is. And that would be true. You still have two cards processing independently of each other, drawing frames. But one only has to look at how high levels of AA and AF are handled in dual-card GPUs to see that you have more bandwidth, regardless of whether or not it is technically true (which I still say it is for all intents and purposes).
 

PCgamer81
I have a question. The following came from the link that you provided yourself.

What does this mean?

The second specification Radeon HD 7970 Tri-Fire has going for it is memory bandwidth. Each Radeon HD 7970 video card has a 384-bit bus with memory running at 5.5GHz, which provides 264GB/sec of memory bandwidth. In comparison, each GeForce GTX 680 video card has a 256-bit memory bus with memory running at 6GHz, providing 192GB/sec of memory bandwidth. On paper, HD 7970 Tri-Fire should have advantages when it comes to running three video cards together. In the real-world, it doesn't always work out the way you think it should.

In your own words.
 


I'm glad that you asked.

The problem was that the 7970 had poor drivers for that test (only the original drivers from January worked properly in Eyefinity, and that test used Eyefinity; AMD broke their recent drivers' compatibility with Eyefinity, and I don't know what they did wrong).

All that link was meant to do was show you that the 680's memory creates a bottleneck. I said that the 7970 had problems; I simply said that the problems weren't related to its VRAM, not that it didn't have problems.

If you read the first half dozen or so pages, you will see that the 680 is shown to have a VRAM bottleneck, and [H] tells us that themselves. The whole point was to show you that the 680 has VRAM bottlenecking problems, and that link shows exactly that.
 


I already explained to you why the bandwidth is not really doubled; only the EFFECTIVE bandwidth is doubled. The two are not the same thing, they just LOOK the same to someone who refuses to understand the technology. This has nothing to do with just dual-GPU cards and everything to do with any and all modern Crossfire and SLI multi-GPU configurations.

Fine, it's semantics, but you're still wrong, even if it's only through semantics. I've explained to you why it is wrong to say that the bandwidth is doubled, and you don't seem to like the truth for some reason. I am not denying that even though it is not really doubled, it acts as if it is doubled. Either you really like to argue incorrect semantics, or we have another misunderstanding going on here, and I'm not misunderstanding what I'm saying, so you seem to have a problem.

Are you still trying to argue your VRAM capacity is better with multiple GPUs theory? That has not been proven at all by anything that you have shown me and everything that I've found so far seems to contradict that theory.

EDIT: Data rate and bandwidth are not even close to being the same thing, so please don't even try to argue about THAT.
 


SLI/CF is beneficial because it halves the work each GPU must do to reach a given performance level (it's actually not really halved but somewhat worse; how much worse depends on the multi-GPU scaling of the cards, and halved is just the theoretical best). VRAM capacity does not affect performance unless you run out of capacity, in which case performance drops like a rock. It is not the same as the other factors because of this. VRAM capacity is a different kind of performance factor: no matter what, increasing VRAM capacity will never increase performance unless you are running out of it, but improving the clock frequencies of the GPU and memory will always improve performance, unless there is a different bottleneck.

The advantage of SLI and CF is clear: it allows two (or more, up to four right now) GPUs and their resources to interleave the frames that they draw in order to get better performance than one of those GPUs and its resources would grant. The problem is that it does not scale perfectly across multiple GPUs, and the VRAM capacity does not scale whatsoever. You need at least the same amount of VRAM per GPU as you would for a single GPU doing the same job.
 

PCgamer81
[citation][nom]blazorthon[/nom]...but improving the clock frequencies of the GPU and memory will always improve performance... ...It allows two (or more, up to four right now) GPUs and their resources to interleave the frames that they draw in order to get better performance than one of those GPUs and it's resources would grant...[/citation]

Which resources, if you please?

[citation][nom]blazorthon[/nom]Data rate and bandwidth are not even close to being the same thing, so please don't even try to argue about THAT.[/citation]

Where did I say they were? In fact, I clearly distinguished between the two.

The higher the data-rate, the higher the bandwidth required - that was my point.
 


Resources as in the VRAM bandwidth and the GPU's internal resources (memory controllers, shaders, ROPs, etc.).

Your second question,

Notice how with the 4870X2, the bandwidth is EXACTLY doubled. Wanna know why? Unlike with most dual-card GPUs, the memory clock is the same. That is more or less your data rate, and guess what that can be chalked up to? The B word. Hence, 4870X2 rather than 4990 or some crap.

Unless the B word didn't mean bandwidth. Data rate refers to how many transfers can be done in a single clock cycle (in this context). DDR technologies (excluding GDDR5) all have two transfers per cycle. GDDR5 has four. XDR (from Rambus) has eight. XDR2 (also from Rambus) has sixteen. So: quad data rate, octal data rate, and hexadecimal data rate. Data rate times frequency times the bit width of the interface equals the bandwidth in b/s. Divide by the byte size to get B/s (divide by 8 for these memories), then convert to GB/s. That is the bandwidth. So, data rate and bandwidth are related, but not the same. Your last post states that you knew this, even though the post I replied to implied you did not. However, the post I am now replying to tells me that you do not know what data rate is.
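That arithmetic can be sketched in a few lines of Python (the function is just an illustration of the formula; the card figures are the ones quoted from [H] earlier in this thread):

```python
def bandwidth_gbps(bus_width_bits, base_clock_ghz, transfers_per_cycle):
    """Peak bandwidth in GB/s: bit width x frequency x data rate, /8 for bytes."""
    return bus_width_bits * base_clock_ghz * transfers_per_cycle / 8

# GTX 680: 256-bit bus, GDDR5 (quad data rate) at 1.5GHz -> "6GHz effective"
print(bandwidth_gbps(256, 1.5, 4))    # 192.0
# HD 7970: 384-bit bus, GDDR5 at 1.375GHz -> "5.5GHz effective"
print(bandwidth_gbps(384, 1.375, 4))  # 264.0
```

Both results match the 192GB/sec and 264GB/sec figures in the [H] quote.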

Data rate is, like I just said, the amount of transfers per cycle (for this context). It probably didn't mean that exactly when the term first came out, but that is what it comes down to and how the term is used nowadays. DDR is called DDR because it has double the data transfer speed of SDRAM running at the same frequency.

The 4870X2 is just called the 4870X2 because it is two 4870s integrated into one graphics card. I do not refute that this is why it is called that.
 

PCgamer81
I know exactly what data rate is.

"...that not only is the VRAM, data-rate, and bandwidth..."

I clearly distinguished between the two. When I said that they were, "more or less the same", I merely meant in how one affects the other.

I have a feeling you knew exactly what I meant, and are attempting to detract from the original point.
 


Those posts make the mistake of assuming that the GPUs each render part of a frame instead of interleaving frames that they each draw individually. The first post is correct in that the effective bandwidth is doubled by having two cards, and the person they were talking to is wrong (as I was when we started this argument).

The reason this interleaved method is used instead of the older working-in-tandem-on-each-frame method is that the interleaved method scales performance across GPUs better and eliminates a problem with the previous technique that would sometimes cause a line to appear at the boundaries between the sectors of the image (with two GPUs, a one- or two-pixel-thick line between the two parts of the screen rendered by each GPU). There is a trade-off: micro-stutter. This is caused by both GPUs finishing frames at about the same time instead of in an interleaved fashion, so there is a short span where two frames are sent to the display, then a long span where none are. Micro-stutter has gotten progressively better with each generation of graphics cards.
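The timing problem can be sketched with a toy model (the 33ms frame time is a hypothetical number, not measured data): with two GPUs alternating frames, pacing is even only when their completions are staggered by half a frame time.

```python
FRAME_MS = 33.0  # hypothetical time for one GPU to draw one frame

def present_intervals(offset_ms, frames=6):
    """Gaps between frame completions for two alternating GPUs.

    GPU0 starts at t=0, GPU1 at t=offset_ms; each delivers a frame every
    FRAME_MS. Even gaps mean smooth output; alternating short/long gaps
    are micro-stutter.
    """
    times = sorted([n * FRAME_MS for n in range(1, frames + 1)] +
                   [offset_ms + n * FRAME_MS for n in range(1, frames + 1)])
    return [round(b - a, 1) for a, b in zip(times, times[1:])]

print(present_intervals(16.5))  # perfectly staggered: [16.5, 16.5, ...]
print(present_intervals(2.0))   # micro-stutter: [2.0, 31.0, 2.0, 31.0, ...]
```

The second call shows the pattern described above: two frames arrive almost together, then nothing for a long stretch.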

Partly you are both right.

Since both cards must keep an identical memory data set, both must write exactly the same data to the memory and this causes the write bandwidth to be the same as one card (and even has some overhead).

However, either card can read whatever data it likes, so the memory read speed is doubled, acting a bit like RAID 1 (hence why the capacity is the same).

Most cards do not split the rendering of each frame, they take turns in rendering each frame.

For example, this guy is the most correct of them. GPUs can read data from both sides. However, his claim that the read speed doubles is wrong, because for that to happen the GPUs would need to be connected by an interface at least as fast as the VRAM bandwidth, and it would have to be full duplex. My explanation for why the memory bandwidth seems to double is correct. In fact, the slow speed of a GPU reading data from its partner's VRAM is another cause of stutter in some workloads that require data to be shuffled between the GPUs. This can be a problem when data present in one GPU's VRAM is needed in the other GPU's VRAM, but it is generally not a big deal.
 

PCgamer81
You've been great and I thank you for answering my questions and for your time.

No offense, I'm tired. This is the last post on this subject, at least for now. I will post it, read your reply if you choose to give one, and then I'm taking a nap.

You are arguing technicalities and semantics not pertinent to our original debate. I couldn't care less about any of that. But I will try to show you where I am coming from one final time and I hope we can come to some kind of understanding. I tried the analogy with the pigs and shoveling, but that didn't really work. So, let me explain it this way...

Say, for instance, the maximum speed you and I eat food is 100% the same. Yet, I claim that I could eat twice as fast. We go to a buffet to put it to the test. You eat 6 pieces of pizza, and do so in exactly 30 minutes.

When it's my turn, I take the 6 pieces of pizza and magically shrink them to half-size. I then proceed to eat them, and it takes only 15 minutes before I finish all the pieces.

Do I eat twice as fast?
 


Technically, nothing really gets "doubled". It's just that each GPU is handling half of the frames, so it looks that way. In practice, it can be argued that they doubled... However, that's just another argument in semantics, and I think that we have both made our points by now.


Here's an example. Let's say that we have two 6970s in a Sandy Bridge-E system with an i7-3930K at 4.5GHz, so the CPU isn't a bottleneck. We have four 4GB RAM modules at 2133MHz and CAS 10, so they aren't a bottleneck. The two 6970s are in an x16/x16 PCIe 2.1 configuration, so the PCIe lanes aren't a bottleneck. This seems to be the best-case scenario for scaling, so let's just call it 100% for simplicity's sake. We both know that real scaling is merely as close to 100% as reasonably possible with these cards, but this will make the math easier for both of us to do and understand.

The Radeon 6970 (reference model) has:
Stream Processors: 1536
Texture Units: 96
Raster Operators: 32
GPU Clock Frequency: 880MHz
Memory Capacity: 2GB
Memory Interface width: 256 bit
Memory interface type: GDDR5
Memory Frequency: 1.375GHz (GDDR5=quad data rate, so 5.5GHz effective frequency)

Let's say that it can hit 30FPS. If the Radeon 6970 can hit 30FPS with 100% dual-GPU performance scaling, then two 6970s would hit 60FPS. That means the pair would act like a single card with all of the above factors doubled except for clock frequencies and memory capacity. Doubling the VRAM to 4GB on this hypothetical single-GPU card would not make a difference in performance. However, look at this almost identical situation.

Instead of 30FPS, the 6970 gets 60FPS and its 2GB of VRAM is maxed out or nearly so (let's assume that 1792MB or so is being used, which means the VRAM capacity is not a bottleneck at all right here).

If we still have the 6970's scaling at 100%, then two of them would get 120FPS. The problem? The display is only a 60Hz display, so it can't display 120FPS. Having the FPS that high and the Hz that low (as we already know) would cause horrible tearing. So, instead of using V-Sync that would waste the performance of the second 6970, we double the intensity of the graphics in the settings of our game.

But oh no! With the previous settings, the 6970s were already using 1792MB out of 2GB, so now the 6970s run out of VRAM and are fighting to get even 5FPS! So, even though we know the setup has the GPU performance and memory bandwidth to hit 60FPS, it doesn't have the VRAM capacity to hold the higher-quality textures and AA. If we have 6970s with 4GB instead (I don't think any exist, but bear with me), then there is enough VRAM to hold those textures and AA.

This shows a problem with the 680... It doesn't have the VRAM capacity to hold them. It also seems to be limited in performance by its memory bandwidth. However, that type of bottleneck does not drop performance like a rock off a cliff; it's much more gradual. For an excellent example, see a comparison of Llano with 1066MHz, 1333MHz, 1600MHz, 1866MHz, and 2133MHz memory, if you can find one.
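The cliff-versus-slope distinction in the example can be put in a toy function (every number here is hypothetical, chosen only to mirror the 6970 scenario above):

```python
def effective_fps(single_fps, gpus, vram_needed_mb, vram_per_gpu_mb,
                  scaling=1.0):
    """Toy model: GPUs multiply frame rate, but VRAM per GPU does not add up.

    Each GPU must hold the full working set; overflowing it collapses
    performance instead of degrading it gradually.
    """
    if vram_needed_mb > vram_per_gpu_mb:
        return 5  # thrashing textures over the bus: falls off a cliff
    return single_fps * (1 + scaling * (gpus - 1))

print(effective_fps(60, 2, 1792, 2048))  # 120: fits in 2GB, ideal scaling
print(effective_fps(60, 2, 2560, 2048))  # 5: doubled textures overflow 2GB
print(effective_fps(60, 2, 2560, 4096))  # 120: hypothetical 4GB cards cope
```

The second and third calls are the two endings of the story above: the same GPU power, with and without the VRAM to back it.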

I hope that this clears it up.
 


It can be argued that you ate the smaller pizzas faster than you would the larger ones. However, the rate at which you eat pizza does not change; only the amount of work you must do to finish each pizza changes, because they have been shrunk. If you count speed by how many pizzas you finish, then yes, you are eating pizzas faster. However, you are not eating pizza faster, just pizzas. It's not the same. I can agree that it is practically semantics, but it is technically not semantics; it is only practically semantics.
 

PCgamer81
It does, and I agree whole-heartedly.

In the end, it comes down to perspective, as with my analogy.

Say I had two heads and you had one. We both only have one stomach.

I split what I eat with my other head, and eat twice the food you do. I can't eat any faster than you can, but what I have to eat is halved due to the fact I have two heads. So technically I can eat twice as fast, but not really.

That isn't the best analogy, but it gets through what I am trying to say.

EDIT: Actually, I made a mistake.

A better example would be a couple, each with their own body. Because nothing is shared - but you catch my drift.

Thanks for another go-round. See ya' soon.
 
There is something else that I thought that I should mention.

With a GPU similar to the 6970's Cayman (or any other modern GPU), the best way to get double the performance in a single GPU would be to double the GPU and memory clock frequencies, then double the VRAM capacity so it isn't a bottleneck. This is because single-GPU cards scale poorly when internal resources such as the shader core count (aka stream processors, in the 6970's case) are increased; as the count goes up, scaling suffers from diminishing returns. So, although two Cayman GPUs from the 6970 will scale very well, a single large GPU with all of the 6970's characteristics doubled would be slower than two 6970s (although it would still be fairly fast).

For an example of how core count has diminishing returns, compare the Radeon 6950 (at reference performance) to the 6970 (at reference performance). The 6970 is well ahead. Then overclock the 6950 to the 6970's GPU and memory clock frequencies of 880MHz and 1375MHz. The 6950 is then less than 8% behind the 6970, too close to see a difference if you compare them side by side in otherwise identical machines. That is also without doing the BIOS flash that turns a 6950 into a 6970 (most current 6950s can't do that anymore because the disabled cores are cut by a laser during manufacturing, and trying to BIOS flash them into a 6970 often bricks the 6950). To see how the scaling has a diminishing-returns effect, compare the 7950 to the 7970: when the 7950 is brought up to the 7970's frequencies, they are even closer than the 6950 and 6970 are.
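The clock-for-clock comparison works out like this (shader counts are AMD's reference figures for Cayman; the under-8% benchmark gap is the one cited above, used here only as an upper bound):

```python
# Cayman shader counts: Radeon 6950 has 1408 stream processors, 6970 has 1536.
sp_6950, sp_6970 = 1408, 1536

# On paper the 6950 has this many percent fewer shaders:
raw_gap_pct = (sp_6970 - sp_6950) / sp_6970 * 100
print(round(raw_gap_pct, 1))  # 8.3

# Yet at matched 880MHz/1375MHz clocks the measured gap is under 8%,
# i.e. the extra shaders deliver less than their paper share:
measured_gap_pct = 8.0  # upper bound from the comparison described above
print(measured_gap_pct < raw_gap_pct)  # True: diminishing returns
```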

So, two 6970s will scale better than a single GPU with everything within the 6970 doubled. A single Cayman with the frequencies doubled would also scale better, because clock frequency increases don't suffer such direct diminishing returns on performance. They do, however, have direct diminishing returns on energy efficiency, whereas increasing internal resources does not.

Nvidia is actually working on a new architecture (presumably going to be in the Geforce GTX 800 cards, but that's pure speculation by me) that will improve this problem. I don't know about AMD, but Nvidia patented this, so AMD might have a tougher time improving this problem than Nvidia, that is unless AMD can find another way to help it.

http://www.tomshardware.com/news/nvidia-patent-gpu,15466.html

I put a large comment on that article trying to explain it all a little more, if you're interested. Keep in mind that it is PARTIALLY theoretical (the scaling problem of current GPUs is not theoretical, just my interpretation of the patent). I tried to keep that in mind, and in the minds of its readers, by specifying what is theoretical. I'm not an engineer at Nvidia, so I don't know exactly how they intend to go about it. I simply tried to explain it as best as I could after reading the patent, and to explain why it would be helpful.
 

PCgamer81
Thanks for all your time, I really appreciate it. This kind of thing is surprisingly scarce online. Anytime I can learn something new is alright by me. I will give your comment and the article a read.

I do wonder why they don't just build the dual cards with more VRAM. I understand what you are saying, don't think I don't. I also agree for the most part. Anytime you have a situation where VRAM will become a bottleneck before the card becomes dated, we have a problem. And that is a very real possibility with the 680, let alone the 690 (where it's almost guaranteed to bottleneck at some point). It's like building a blindingly fast car that will still be fast when it eventually becomes too small. It's just a waste of money to pay for speed that will be bottlenecked by size. You need a balance between size and speed. And when you throw cards together, the disparity between size and speed is accentuated to the point that the card will lose practicality long before you've gotten your money's worth and/or the technology becomes dated. That is a very real risk when you go the SLI/CF route, unfortunately.

How hard would it be to have a 6GB 590 and an 8GB 680? The technology is there.
 


The 1.5GB 580 uses 128MB RAM chips and the 2GB 680 uses 256MB RAM chips. 512MB chips are on the expensive side, and that is why they are rare, even though for something like the 680, and even more so the 690, they should have been a necessity. 6GB on the 580 is feasible because it would take 512MB chips. However, 8GB on the 680 is not feasible because it would take 1GB chips, and those are so expensive that only some of the most expensive server RAM modules have them. Compare the prices of 8GB DDR3 1333MHz or 1600MHz modules to the prices of 2x4GB kits of DDR3 at the same speed. The 8GB modules are more expensive than two 4GB modules because the 8GB modules use 512MB chips. Now, if going from the 256MB chips in the 4GB modules to the 512MB chips in the 8GB modules costs that much more than it should, imagine the prices for 1GB chips. They are physically larger (current fabrication processes can't shrink the transistors and capacitors enough to keep them the same size as the 256MB and 512MB chips) and the price goes up exponentially.
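The chip arithmetic can be made explicit with a small sketch (assuming one 32-bit GDDR5 chip per 32 bits of bus width, and ignoring clamshell layouts that pair two chips per channel):

```python
def vram_config(bus_width_bits, chip_mb):
    """Chip count dictated by the bus (32 bits per chip) and total capacity."""
    chips = bus_width_bits // 32
    return chips, chips * chip_mb

print(vram_config(384, 128))   # GTX 580: (12, 1536) -> 1.5GB from 128MB chips
print(vram_config(256, 256))   # GTX 680: (8, 2048) -> 2GB from 256MB chips
print(vram_config(384, 512))   # a 6GB 580 needs twelve 512MB chips
print(vram_config(256, 1024))  # an 8GB 680 needs eight costly 1GB chips
```

This is why the bus width boxes a card into a few capacity options: you can only move up by swapping every chip for the next (pricier) density.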

Sure, the technology is there, but the money is not there for 1GB chips. If the 680 had a 512-bit GDDR5 interface instead of a 256-bit interface, then it could reasonably have 8GB (ironically, that would also solve its moderate VRAM bandwidth bottleneck). However, that would have meant a lot more transistors on the GK104 and would have increased the cost of the GTX 680. It would have also increased performance, for the reason specified in parentheses, and that might have been enough of an increase to justify the price hike, but maybe not.

So, although a 6GB GTX 580 is feasible and a 4GB 680 is feasible, an 8GB 680 is not. The closest thing to a 6GB GTX 580 would be the Quadro 6000. I think it has the GF100 (from the GTX 480), except with a 1:2 DP-to-SP ratio instead of the 1:8 DP-to-SP ratio (it is a professional card; SP is single-precision and DP is double-precision, two types of floating-point performance, with DP generally being more important for compute tasks). Whether it is the same GPU with a setting in its video BIOS changed or a modified version of the GPU, I don't know. However, it is the only video card that I know of with more than 3GB of VRAM per GPU at this time. I think it is SAPPHIRE who is making a Radeon 7970 with 6GB of VRAM specifically for supreme quad-Crossfire performance (the 6GB of VRAM will keep it from being bottlenecked at pretty much all resolutions, even something like triple 2560x1600 Eyefinity monitors, and it is also supposed to have a 1335MHz core clock, putting it ahead of a stock, reference GTX 680 in performance), so if someone wants it, there will probably be 6GB of VRAM per GPU on a consumer graphics card sometime this year.

As for dual cards not having more RAM, Nvidia doesn't seem to care. Nvidia is no stranger to having less VRAM than AMD (remember when the 4870 and 4870X2 were mentioned? They have 1GB of VRAM per GPU; the 4870X2's competitor, the GTX 295, has 896MB per GPU, aka 0.875GB). AMD has been better about this more often than not, although the Radeon 5970 is pushing it (performance more or less on par with the GTX 580, but 1GB per GPU; to be fair, it is a much older card, older even than Nvidia's GTX 400 cards). AMD is better about this, but really, that is only because AMD is better about VRAM capacity on single-GPU cards. Nonetheless, AMD's cards were technically more future-proofed than Nvidia's because of this.

Some card makers make a non-reference design with double the reference memory. For example, there are some 6GB GTX 590s (3GB per GPU) and some 4GB 5970s (2GB per GPU). However, this has little to do with AMD and Nvidia, the card makers simply made the decision to use higher capacity memory chips. They always charge a big premium for this too because they know that it is important.

However, then there are some custom-made cards, such as the Radeon 6870X2, that have far less VRAM than they should. It's even faster than the 5970, and it was made at a time when games used a lot more VRAM, yet it has the same 1GB of VRAM per GPU! There are only two 6870X2s available in the American market, and both have this capacity bottleneck.
 

PCgamer81
You make some excellent points. I understand that the cost and feasibility of doubling the VRAM in the 690 is a major factor, but the mistake is in the reference design, and even though the memory interface size is fixed there, there should be some kind of non-reference alternative. If someone is willing to spend $1000 on a dual-GPU card, then surely they would be willing to spend a little more, considering it could make or break the practicality of such a long-term investment.

The 6870x2 is a little more sensible, all things considered. The 6870x2 is, at least in my opinion, the best dual-chip solution for 1080p displays. The 6870x2 won't (or shouldn't) be the first choice for someone considering an investment for Eyefinity/2560x1600/HD3D. The 5970 Black edition (which you referenced in the second to last paragraph) is a far better option for Eyefinity/2560x1600.

Lastly, there is a single card that has over 3GB of VRAM.

*sounds trumpet*

http://www.geeks.com/details.asp?invtid=N430GT-MD4GD3&cat=VCD

The amazing GeForce GT 430 4GB in all its glory! Weighing in at an extremely practical 4GB of...wait for it...DDR3, with an impressive bandwidth of ... ... 28GB/s, using a beastly 128-bit memory bus! How's that for feasible? IT IS GENIUS!
 
[citation][nom]PCgamer81[/nom]...I understand that the cost and feasibility of doubling the VRAM in the 690 is a major factor... The amazing GeForce GT 430 4GB in all it's glory!...[/citation]

The problem with the 6870 is that with the quality settings and AA its GPU performance can handle, its VRAM capacity is just slightly too low, even at 1080p, so settings can't go as high as they should. The next jump up from 1GB is 2GB, so 2GB per GPU would have been a far better choice, especially since it's the only way to get Radeon 6870 triple and quad Crossfire.

That GT 430 is odd... It's probably not intended for gamers but for some other use. Either that, or the company that designed it is stupid.
 

PCgamer81
In my opinion, the 6870x2 is the best solution for 1680x1050 and 1080p save for the 580, although there are a few games where AA would have to be turned down due to the 1024MB limit as you pointed out.

No, as far as I know that 430 is for gamers. I have known about that card for a while now and have read up on it. I think it's retarded, myself. The VRAM is useless in that card. Probably just a quick way to make a cheap buck off the stupid.
 
[citation][nom]PCgamer81[/nom]In my opinion, the 6870x2 is the best solution for 1680x1050 and 1080p save for the 580... No, as far as I know that 430 is for gamers...[/citation]

1680x1050? It might be an excellent card for 3D 1680x1050. 3D doesn't seem to need much more VRAM than 2D, but it would definitely put that kind of GPU power to work.
 

PCgamer81
You think two 6870s could handle 3D? I have been thinking about grabbing a 3D monitor for HD3D gaming and if two 6870s could handle it I know two 6970s could.

Let me add that when I say "handle", I mean high visual settings (if not maxed out), with some levels of AA/AF, and 60fps. I wouldn't even consider 3D unless I thought I could get at least that.
 
[citation][nom]PCgamer81[/nom]You think two 6870s could handle 3D?... when I say "handle", I mean high visual settings (if not maxed out), with some levels of AA/AF, and 60fps.[/citation]

Two 6870s (or a 6870X2) could do it at 1680x1050. Two 6870s could not do it at 1080p, at least not in most modern games with very high or maxed out settings and AA/AF.

EDIT: Two 6870s (or a 6870X2) overtake the GTX 580 and the Radeon 5970 in performance.
 
