Why would they use 256-bit when the highest-end card in the 77xx series doesn't even have that? But yes, this should be better than the HD 6670.
Well, back in the day of the GTX 200 series, all the high-end GPUs, including the top-end 285, used GDDR3. The GTX 285 had a 512-bit bus because GDDR3 was/is so slow compared to GDDR5 that they needed a LOT of width to push all that data through. Even the GTX TITAN uses only a 384-bit bus because it uses GDDR5, and the GTX 680 uses 256-bit. Memory bandwidth is calculated as follows, using the GTX 285 as an example:
512 (bus width) / 8 = 64 bytes per transfer
1242 (GDDR3 clock in MHz) x 64 = 79,488 MB/s = 79.488 GB/s
79.488 x 2 = 158.976 GB/s (you double it because DDR = Double Data Rate)
The official GeForce specs say the GTX 285 does 159 GB/s, so they just rounded it off.
The GTX TITAN has a 384-bit bus and still does 288.4 GB/s, and the GTX 680 with its 256-bit bus does 192.2 GB/s, because GDDR5 effectively moves data four times per clock instead of twice.
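If you want to play with the numbers yourself, here's a quick Python sketch of that formula (the 1502 MHz GDDR5 base clock for the TITAN and 680 is the published figure, but double-check against the spec pages):

[code]
# Peak memory bandwidth: (bus width in bytes) x (memory clock) x (transfers per clock).
# GDDR3 moves data twice per clock; GDDR5 effectively four times per base clock.
def bandwidth_gbs(bus_width_bits, mem_clock_mhz, pumps=2):
    bytes_per_transfer = bus_width_bits / 8           # e.g. 512-bit -> 64 bytes
    mb_per_s = bytes_per_transfer * mem_clock_mhz * pumps
    return mb_per_s / 1000                            # MB/s -> GB/s

print(bandwidth_gbs(512, 1242))      # GTX 285, GDDR3:   158.976 -> listed as 159
print(bandwidth_gbs(384, 1502, 4))   # GTX TITAN, GDDR5: 288.384 -> listed as 288.4
print(bandwidth_gbs(256, 1502, 4))   # GTX 680, GDDR5:   192.256 -> listed as 192.2
[/code]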
If the HD 7730 had a 256-bit bus for GDDR3 and ran it at 1200 MHz (the 6670's DDR3 version ran at 900 MHz on a 128-bit bus, equalling only 28.8 GB/s), it would hit 76.8 GB/s (quick check at the bottom), which would actually be somewhat useful and perfect for a GPU that doesn't require an external power connector. Normally it's the memory bandwidth that slows these low-end GPUs down more than anything. Hell, I can put my GTX 680 Lightning's core clock at +450 MHz (which is VERY high) and it only gives me about 7 more FPS on average. If I put my memory at +500 MHz, an average overclock for the GTX 680 (which is the stock speed for the GTX 770, which is just an OC'd 680), I get an extra 10 or so FPS on average, and that's starting from 192.2 GB/s. Memory speeds matter a LOT more than the GPU teams seem to realize. I'm just ranting now
:lol: Sorry.
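Here's that quick check on the low-end numbers with the same function (the 256-bit 7730 config is my what-if, not a real spec):

[code]
print(bandwidth_gbs(128, 900))    # HD 6670 (DDR3 version):            28.8 GB/s
print(bandwidth_gbs(256, 1200))   # hypothetical 256-bit GDDR3 HD 7730: 76.8 GB/s
[/code]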