2x256mb 6800 GT = 512mb for textures?

apesoccer

Distinguished
Jun 11, 2004
This feels like a noob question to me... how would the textures be treated in this situation? Are two sets of 256MB actually equal to 512MB, or are they really just 256MB, since this isn't a case of stacking two 256MB pools but two separate instances of 256MB? That isn't a very clear question, but I think the gist of it is clear enough.

Current machines running F@H:
Athlons: [64 3500+][64 3000+][2500+][2000+][1.3x1][366]
Pentiums: [X 3.0][P4 2.4x5][P4 1.4]

It's not worth saying unless it takes a really long time to say!
 

cleeve

Illustrious
Textures are handled as though there were a single 256MB card.

Remember, each card is rendering the same scene, so each card needs the same textures loaded.
Not to mention, one GPU can't grab textures out of the second card's memory...
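To put that in plain numbers, here's a minimal sketch (my own illustration, not vendor code), assuming each GPU must keep its own full copy of every texture and cannot read the other card's memory:

```python
# Sketch: effective texture capacity in SLI, assuming textures are
# duplicated on every card (no cross-card memory access).

def effective_texture_memory_mb(per_card_mb: int, num_cards: int) -> int:
    # Adding cards adds rendering power, not texture capacity:
    # every card holds the same copies, so the usable pool is
    # just one card's memory.
    return per_card_mb

print(effective_texture_memory_mb(256, 2))  # -> 256, not 512
```

So two 256MB cards still give you a 256MB texture budget.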

________________
<b>Radeon <font color=red>9700 PRO</b></font color=red> <i>(o/c 332/345)</i>
<b>AthlonXP <font color=red>3200+</b></font color=red> <i>(Barton 2500+ o/c 400 FSB)</i>
<b>3dMark03: <font color=red>5,354</b>
 

apesoccer

Distinguished
Jun 11, 2004
I knew they would be '...rendering the same scene...', but I was wondering if each card actually needed to render only half as much. But I don't think I'm looking at textures correctly... I think I need to ask a different question: how do textures work? That's an important part of the question I didn't ask earlier. The way I envision textures is as layers of, basically, pictures laid over top of one another, built as one unit so as to give depth. Is that correct? Close? Way off base?

Current machines running F@H:
Athlons: [64 3500+][64 3000+][2500+][2000+][1.3x1][366]
Pentiums: [X 3.0][P4 2.4x5][P4 1.4]

It's not worth saying unless it takes a really long time to say!
 
Like you say, access to textures would be limited across the two cards. I'm not sure if the SLI bridge allows some sharing or not, but it'd be interesting to see once Gainward gets their 512MB cards out (though you still have 2 GPUs to account for). I wonder if the Gigabyte 3D1 shares its 256MB or not; that may give some indication of the limitations/benefits.

I also wonder whether the cards would treat memory differently depending on SFR/AFR with AF/AA applied (in this case, AF access may be the more interesting one).


- You need a licence to buy a gun, but they'll sell anyone a stamp <i>(or internet account)</i> ! - <font color=green>RED </font color=green> <font color=red> GREEN</font color=red> GA to SK :evil:
 

cleeve

Illustrious
Well, a texture is a bitmap (a picture) that's placed on a piece of geometry.

If one card had only half of the texture information, it could only display certain parts of the geometry. And in SLI, each line of resolution alternates between the cards, so each card needs a copy of every texture.

Know what I mean?
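A toy way to see why, under the scanline-interleaving model described above (an assumed split: even lines to card 0, odd lines to card 1):

```python
# Toy model: in scanline-interleaved rendering, any textured triangle
# taller than one pixel covers lines owned by both cards, so both
# cards need that triangle's texture resident.

def cards_touched(y_top: int, y_bottom: int) -> set:
    # Which cards own at least one scanline in this vertical span?
    return {y % 2 for y in range(y_top, y_bottom + 1)}

print(sorted(cards_touched(100, 150)))  # [0, 1]: both cards need the texture
```

Since you can't know in advance which lines a texture will land on, duplication is unavoidable.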

________________
<b>Radeon <font color=red>9700 PRO</b></font color=red> <i>(o/c 332/345)</i>
<b>AthlonXP <font color=red>3200+</b></font color=red> <i>(Barton 2500+ o/c 400 FSB)</i>
<b>3dMark03: <font color=red>5,354</b>
 

apesoccer

Distinguished
Jun 11, 2004
Ok, that makes sense. Hmm, yea that makes sense. Well that answers those questions. Thx Cleeve!

Current machines running F@H:
Athlons: [64 3500+][64 3000+][2500+][2000+][1.3x1][366]
Pentiums: [X 3.0][P4 2.4x5][P4 1.4]

It's not worth saying unless it takes a really long time to say!
 
3DFX SLI worked by having one card render the even-numbered lines and the other the odd-numbered lines.

With Nvidia SLI, one card renders the top part of the screen while the other renders the bottom part. The split could be 50/50, 40/60, 30/70, 70/30, 60/40, etc., depending on what is on the screen.
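That movable split can be sketched like this (the ratio is a hypothetical load-balancing value, not actual driver logic):

```python
# Sketch of split-frame rendering (SFR): the screen is divided at a
# movable row, with each GPU taking one region. The split point shifts
# depending on how much work each half of the scene carries.

def split_frame(height: int, top_share: float):
    split = int(height * top_share)
    # (card 0 renders rows [0, split), card 1 renders rows [split, height))
    return (0, split), (split, height)

top, bottom = split_frame(768, 0.4)
print(top, bottom)  # (0, 307) (307, 768)
```

A scene with a busy top half would push the split upward so the second card takes more rows.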

I aint signing nothing!!!
 
Actually, there are two ways to do 'SLI' with the 'new' nV configuration.

There's the screen-splitting, Alienware-style solution of SFR, and the alternate-frame drawing, ATI-style method of AFR.

AFR does what the name implies: each card renders every other frame.
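AFR is the simpler of the two to picture; a one-line sketch of the frame assignment:

```python
# Sketch of alternate-frame rendering (AFR): whole frames alternate
# between the GPUs instead of splitting each frame.

def gpu_for_frame(frame: int, num_gpus: int = 2) -> int:
    return frame % num_gpus

print([gpu_for_frame(f) for f in range(6)])  # [0, 1, 0, 1, 0, 1]
```

Either way, both GPUs still render full frames at full resolution, so the texture set has to live on both cards.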


- You need a licence to buy a gun, but they'll sell anyone a stamp <i>(or internet account)</i> ! - <font color=green>RED </font color=green> <font color=red> GREEN</font color=red> GA to SK :evil:
 

cleeve

Illustrious
Either way, each GPU needs its own dedicated memory. :)

________________
<b>Radeon <font color=red>9700 PRO</b></font color=red> <i>(o/c 332/345)</i>
<b>AthlonXP <font color=red>3200+</b></font color=red> <i>(Barton 2500+ o/c 400 FSB)</i>
<b>3dMark03: <font color=red>5,354</b>
 
I'm still curious (need to re-read a review, like Lars') whether the 3D1 shares memory between the two GPUs, or whether one accesses memory on the front of the card and one on the back (or some other half-and-half split).

That is of course the 'exception' example, but I really should look into it if things slow down at work today, because I'd like to know.


- You need a licence to buy a gun, but they'll sell anyone a stamp <i>(or internet account)</i> ! - <font color=green>RED </font color=green> <font color=red> GREEN</font color=red> GA to SK :evil:
 
Dang, looks like the 3D1 has two separate pools of 128MB.

<A HREF="http://www20.graphics.tomshardware.com/graphic/20050111/gigabyte_3d1-03.html" target="_new">At the heart of the 3D1 lie two NVIDIA GeForce 6600 GT processors, each with its own 128MB frame buffer and a memory bandwidth of 128 bits. In its marketing brochures, Gigabyte happily adds these numbers up, quoting 256MB of memory and a bandwidth of 256 bits - not unlike the way XGI promoted its own dual-core solution. However, the truth is that only 128MB per card on a 128 bit bus are available in real terms, since each chip requires its own memory and can't "see" that of the other chip.</A>
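The arithmetic from that quote, spelled out (just restating the article's numbers, nothing new):

```python
# The 3D1's numbers as quoted: marketing sums the two framebuffers,
# but each GPU can only address its own 128MB pool on a 128-bit bus.

per_gpu_mb, per_gpu_bus_bits, gpus = 128, 128, 2

marketing_mb  = per_gpu_mb * gpus        # 256MB "on the box"
marketing_bus = per_gpu_bus_bits * gpus  # the advertised "256-bit"
usable_mb     = per_gpu_mb               # what each GPU can really see

print(marketing_mb, marketing_bus, usable_mb)  # 256 256 128
```

Same story as the two-card setup, just on one PCB.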


- You need a licence to buy a gun, but they'll sell anyone a stamp <i>(or internet account)</i> ! - <font color=green>RED </font color=green> <font color=red> GREEN</font color=red> GA to SK :evil: