Slobogob :
You don't happen to work in nVidia's financial division, calculating the launch prices, do you?
To be honest, I think that right now, nVidia's people know less about what they're doing than the average enthusiast does. So I think that appeals to nVidia's authority at this point are pretty much moot.
Slobogob :
The shader count may be a factor in the cost, but it is clearly not the dominating one.
However, RAM prices affect things quite a bit. I'd note that both the 3850 and 4850 came with 512MB of GDDR3, a middling amount of what is by now a commonplace and cheap kind of memory. Meanwhile, the supposed GTX 350 outright DOUBLES the amount of memory, as well as switching from cheap GDDR3 to expensive GDDR5. GDDR5 is a memory technology in its infancy; right now the 512 mbit (64MB) chips, the smallest ones, are the only kind that are plentiful, which makes them considerably cheaper than 1024 mbit (128MB) chips. A 512-bit memory interface means you're going to have a whole 16 chips of RAM, which is fine for 1024MB; that just takes 512 mbit chips, which are even cheaper in GDDR3 form. However, the 1024 mbit chips that would be required to get a whole 2048MB on a 512-bit interface would be over twice as expensive... on top of the cost of going from GDDR3 to GDDR5. You're probably looking at paying 3-5 times as much for the VRAM, which, on a board with so much of it, is going to be a significant portion of the price.
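To put rough numbers on that chip math, here's a quick sketch; the price ratios in it are made-up illustrative figures, not actual quotes:

# Rough VRAM cost sketch for a 512-bit card (chip price ratios are illustrative only).
BUS_WIDTH_BITS = 512
CHIP_IO_BITS = 32                        # each GDDR3/GDDR5 chip has a 32-bit interface
chips = BUS_WIDTH_BITS // CHIP_IO_BITS   # -> 16 chips on the board

def board_capacity_mb(chip_mbit):
    # Total VRAM when all 16 chips share the same density (Mbit -> MB is /8).
    return chips * chip_mbit // 8

print(board_capacity_mb(512))    # 1024 MB using cheap 512 mbit chips
print(board_capacity_mb(1024))   # 2048 MB forces the pricier 1024 mbit chips

# Hypothetical per-chip price ratios (cheap 512 mbit GDDR3 = 1.0), just to show
# how the memory bill scales; actual contract prices vary.
rel_price = {"gddr3_512": 1.0, "gddr5_1024": 4.0}
print(chips * rel_price["gddr3_512"])    # baseline 1GB GDDR3 board
print(chips * rel_price["gddr5_1024"])   # 2GB GDDR5 board: roughly 4x the memory bill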
So yeah, when comparing that supposed GTX 350 to the GTX 280 as far as price goes, comparing the HD 4850 to the HD 3850 is a very flawed analogy that doesn't take nearly enough into account.
John Vuong :
Those specs don't look legit to me. Even if they were true, then wow. This card looks like a killer, not to mention twice as expensive. I would love a 512-bit bus and GDDR5, but I don't see that happening anytime soon.
Plus, as I noted, there was the part where none of the throughputs divided evenly by the clock speeds, or worked out to any logical number of processing units the chip might have. (Gimme a break; I'm assuming it'll be a whole number, and even a multiple of 8, as nVidia's unit counts have been since the G70.)
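If you want to run that smell test yourself, it's only a few lines; the figures in this sketch are placeholders, not the actual rumored numbers:

# Smell test: peak throughput divided by core clock should give a whole number
# of units, and (for recent nVidia designs) a multiple of 8.
def implied_units(throughput_per_sec, clock_hz):
    units = throughput_per_sec / clock_hz
    is_whole = abs(units - round(units)) < 1e-6
    return units, is_whole and round(units) % 8 == 0

# Placeholder figures: a claimed 48 Gtexel/s at a 650 MHz core clock.
units, plausible = implied_units(48.0e9, 650.0e6)
print(round(units, 1), plausible)   # 73.8 False -> the spec smells made-up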
JAYDEEJOHN :
The chips would be approximately 35% larger. I know it says 576mm², but unless it's a completely new arch, it's 2 G200s, which after die shrink and doubling is more like 760mm², or around 380 apiece.
Actually, the way I calculated it out, a shift from a 65nm process to a 55nm process would take the chip from 24x24 mm (576mm²) to 20.3x20.3 mm (412mm²), a die-area saving of only about 28.5%. That saving could be eaten up by an additional 39.8% transistors, which would bring the total up from 1400 million to roughly 1957 million... To put that into perspective, that's nearly three times the transistor count of a G80.
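For anyone who wants to check that arithmetic, here's the back-of-the-envelope version; it assumes a perfect optical shrink, which real processes never quite deliver:

# Idealized shrink math: die area scales with the square of the feature size.
old_node, new_node = 65.0, 55.0
scale = (new_node / old_node) ** 2      # ~0.716

old_area = 24.0 * 24.0                  # GT200: roughly 576 mm^2
print(old_area * scale)                 # ~412 mm^2 at the same transistor count
print(1.0 - scale)                      # ~0.284 -> ~28.5% area savings

# Or keep the die at ~576 mm^2 and spend the savings on extra transistors:
old_transistors = 1400e6
print(old_transistors / scale / 1e6)    # ~1956 million transistors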
JAYDEEJOHN :
And that brings us to the clock speeds. There you have a 33% increase, which, as I said, you just can't do and stay within thermals.
Plus, each transistor shrink does not grant an increase in maximum stable clock speed that matches the decrease in size; otherwise we'd have seen a doubling in clock speed every 18 months outside of the NetBust era. Rather, if memory serves, a typical full-node step only lets transistors switch around 20-30% faster than before on average while retaining the same level of reliability. As we saw on top-end Radeon cards (the node-to-node percentages are worked out after the lists below), we went from a maximum of:
■540 MHz on 130nm, (Radeon X850XT PE) to
■650 MHz on 90nm, (Radeon X1950XTX) to
■743 MHz on 80nm, (Radeon HD 2900XT) to
■775 MHz on 55nm. (Radeon HD 3870)
Similarly, on top-of-the-line GeForce cards:
■425 MHz on 130nm, (GeForce 6800ultra) to
■550 MHz on 110nm, (GeForce 7800GTX 512) to
■612 MHz on 90nm, (GeForce 8800ultra) to
■675 MHz on 65nm (GeForce 9800GTX)
Obviously, I left out some processes because they weren't used for high-end GPUs, and I'm not counting the (almost always higher) core speeds found in mid-range GPUs, since the supposed GTX 350 is not a mid-range GPU by any stretch of the imagination.
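Crunching those listed clocks into node-to-node percentage gains makes the point; even the biggest historical jump falls short of the 33% that rumor would require:

# Node-to-node gains in top-end core clocks, using the figures listed above.
radeon  = [("130nm", 540), ("90nm", 650), ("80nm", 743), ("55nm", 775)]
geforce = [("130nm", 425), ("110nm", 550), ("90nm", 612), ("65nm", 675)]

def gains(series):
    # Percentage clock increase from each process step to the next.
    return [(new[0], round(100.0 * (new[1] - old[1]) / old[1], 1))
            for old, new in zip(series, series[1:])]

print(gains(radeon))    # [('90nm', 20.4), ('80nm', 14.3), ('55nm', 4.3)]
print(gains(geforce))   # [('110nm', 29.4), ('90nm', 11.3), ('65nm', 10.3)]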
JAYDEEJOHN :
The 4870x2 requires an 8-pin and a 6-pin, though you can run it 6 and 6, as seen in some previews. 8 and 6 is currently as high as we can go, period. Doubling a G280's power means adding another 90 watts, or another 8-pin, to it, and it just can't be done.
Yeah, the limit for a card is 300 watts: 75 from the slot, 75 from a 6-pin connector, and 150 from an 8-pin connector; the GTX 280, with its 236-watt TDP, already eats a big chunk of that budget... A higher ceiling might eventually come from PCI-Express 2.0, which, if I recall correctly, was supposed to let the slot itself deliver more power. Of course, if you make a card that needs that much, then you shut out the 90% of people who have a perfectly good motherboard and CPU (including Core2extremes) but whose boards couldn't provide enough power for the card.
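Putting the connector ratings side by side (these are the commonly cited PCI-Express spec values, and the TDP is NVIDIA's published figure for the GTX 280):

# Board power budget from the commonly cited PCI-Express connector ratings.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def budget(aux_connectors):
    # Watts available from the slot plus whatever aux connectors the card has.
    return SLOT_W + sum(aux_connectors)

print(budget([SIX_PIN_W, SIX_PIN_W]))     # 225 W -- a 4870 X2 run 6-and-6
print(budget([SIX_PIN_W, EIGHT_PIN_W]))   # 300 W -- the 6-plus-8 ceiling
gtx280_tdp = 236                          # NVIDIA's stated TDP for the GTX 280
print(budget([SIX_PIN_W, EIGHT_PIN_W]) - gtx280_tdp)   # only ~64 W of headroom left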