[citation][nom]dragonsqrrl[/nom]Basically AMD is designing GCN with compute performance and efficiency in mind.[/citation]
This is one thing I'm crossing my fingers for: true FP64 support. Given the claims made for the 6000 series a year back, I remember crossing my fingers then and being disappointed; sure, 25% was more efficient than the 20% rate of the 5900 cards that had some DP support bolted on, but it was still only 25%. And it wasn't because it took a whole cluster to do one DP op; it's that each cluster's set of SPs (which were NOT symmetric) only ever had enough hardware to handle one DP-FP op at a time, even for ludicrously common ones such as the multiply-accumulate instructions.
I'm hoping that this time around this is fixed, and we can see a true 50% rate, akin to how VMX-style SIMD units in CPUs work: the single-precision elements can essentially pair up to act as single double-precision elements.
If GCN can do this, the prospects are definitely ones to salivate over: 2.048 TFLOPS of double-precision computing power from a single GPU?
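To make the arithmetic behind that number concrete, here's a minimal sketch of how the 50%-rate pairing compares to the older 25% rate. The shader count and clock are purely illustrative figures chosen to land on the 2.048 TFLOPS mentioned above, not AMD's actual specs:

```python
# Hypothetical sketch: peak throughput when SIMD lanes can pair up for
# double precision (50% rate) vs. the older 1-DP-op-per-cluster design
# (25% rate). All figures are illustrative, not real GPU specs.

def sp_tflops(shaders, clock_ghz, flops_per_clock=2):
    """Peak single-precision TFLOPS: shaders * clock * 2 (an FMA = 2 FLOPs)."""
    return shaders * clock_ghz * flops_per_clock / 1000.0

sp = sp_tflops(2048, 1.0)   # 4.096 TFLOPS SP for a hypothetical 2048-SP part
dp_paired = sp / 2          # 50% rate: SP lanes pair up -> 2.048 TFLOPS DP
dp_quarter = sp / 4         # 25% rate (Cayman-style) -> 1.024 TFLOPS DP

print(sp, dp_paired, dp_quarter)
```

The point is just that the pairing scheme doubles DP throughput for the same shader array, with no extra dedicated DP hardware.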
[citation][nom]xtra0rdinaryxd[/nom]rambus created ddr, ddr2, ddr3 and ddr4...it is their technology and patents...[/citation]
Actually, Rambus only invented (or has a claim to inventing) the original DDR. DDR2, DDR3, and the preliminary work being done on DDR4 (as it's not finalized...) were all done by JEDEC, of which Rambus is NOT a member.
[citation][nom]DSpider[/nom]It's not 8000 MHz... For example the HD 5670/6670 which uses GDDR5 at 4000 MHz, it's actually 5 x 800 MHz.Or just like 1333 MHz DDR3 modules are actually clocked at 666 MHz (DDR3: Double Data Rate 3).I think the way they put it is actually pretty stupid (marketing?). It's like saying a dual core processor clocked at 3.2 GHz is 6.4 GHz just because it's dual core.[/citation]
Actually, you're a bit off there; GDDR5's effective data rate is quadruple its command-pin clock speed, not 5x: for a GDDR5-4000 module, that means the command pins run at 1 GHz, and the data pins run at 2 GHz, double-pumped to an effective 4 GHz.
For XDR2, yes, it's effectively 16x the rate of the "nominal" clock speed, which I'm assuming is, likewise, the clock rate of the command pins; so XDR2-8000 technically runs at 500 MHz, but the data transfer rate is effectively 8000 MHz.
You're definitely off on the dual-core comparison, though, because the increased bandwidth is done through the SAME number of data pins; the transfer rate per-pin has, indeed, gone up. This is why the term "megatransfers per second" (MT/s) was invented, to eliminate confusion. From the sounds of it, XDR2 simply takes the idea of DDR3/GDDR5 and goes even further: the command pins operate at speed n, with the data pins operating at speed n×8, double-pumped to make it effective as n×16.
So XDR2-8000, as mentioned above, has a nominal clock rate of 500 MHz, but still has a transfer rate of 8000 MT/s, hence the "effective" 8000 MHz.
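The "effective MHz" arithmetic above can be sketched in a few lines. The multipliers (data clock at 2x the command clock for GDDR5, 8x for XDR2, both double-pumped) follow the reasoning in this post, not any official datasheet:

```python
# Sketch of the "effective MHz" math: effective MT/s = command-pin clock
# times the data-clock multiplier, times 2 for double-pumping (a transfer
# on both clock edges). Multipliers here follow the post's reasoning.

def effective_rate(command_clock_mhz, data_clock_multiplier):
    """Effective transfer rate in MT/s for double-pumped data pins."""
    return command_clock_mhz * data_clock_multiplier * 2

print(effective_rate(1000, 2))  # GDDR5-4000: 1 GHz command clock -> 4000 MT/s
print(effective_rate(500, 8))   # XDR2-8000: 500 MHz command clock -> 8000 MT/s
```

Note the per-pin rate really does go up, which is why MT/s avoids the dual-core-style confusion.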
[citation][nom]crc8[/nom]XDR2 for HD7900 is just a hoax. No one produces this type of memory - go check web pages. The whole quality of this rumor article is questionable.[/citation]
No one produces it because Rambus is a "fabless" company. They DID invent it, though, and their original announcements of the standards & designs date back to 2005. No one really cared, just as no one really cared about the original XDR, either... That is, until the PS3 started using it.
So XDR2 does seem to be very real, and I'd believe it if AMD decided to use it. Rambus' proprietary memory technologies (RDRAM and XDR, two they have an uncontested claim of ownership over) happen to be infamous for being expensive to produce... As a result, it's unsurprising to see it skipped for the 7800 series, with GDDR5 in that role instead.