C) Bandwidth is measured per pin, not per 'channel'. A 'channel' is an arbitrary term; pins are what take up space on the motherboard. The i850 uses 32 pins total for its RDRAM channels. A 'dual-channel DDR SDRAM' chipset would use 128 pins total for its DDR SDRAM channels. That is less bandwidth per pin. For the same amount of real estate on the motherboard you could instead use 128 pins for RDRAM and get much more memory bandwidth. (See the worked numbers after this list.)
D) PC1066 RDRAM will have less latency than your DDR SDRAM solution. (PC1066 RDRAM has exactly 3/4 the latency of PC800 RDRAM. DDR SDRAM does not scale in such a linear fashion.)
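Rough back-of-the-envelope numbers for points C and D, assuming the commonly quoted peak figures (1.6GB/s per 16-bit PC800 channel, 2.1GB/s per 64-bit PC2100 channel) and counting only data pins:

```python
# Back-of-the-envelope sketch of points C and D above.
# Assumed peak figures: 1.6GB/s per 16-bit PC800 RDRAM channel,
# 2.1GB/s per 64-bit PC2100 DDR channel; only data pins are counted.

def mb_per_pin(total_mb_s, data_pins):
    """Peak bandwidth divided by the data pins used to deliver it."""
    return total_mb_s / data_pins

# C) Dual-channel PC800 RDRAM (i850): 2 channels x 16 data pins, 1600MB/s each.
rdram = mb_per_pin(2 * 1600, 2 * 16)   # 100 MB/s per pin
# Dual-channel PC2100 DDR SDRAM: 2 channels x 64 data pins, 2100MB/s each.
ddr = mb_per_pin(2 * 2100, 2 * 64)     # ~33 MB/s per pin
print(f"RDRAM: {rdram:.0f} MB/s per pin, DDR: {ddr:.1f} MB/s per pin")

# D) If latency measured in memory clocks stays constant, latency in
# nanoseconds scales with the clock: PC1066 runs at 533MHz vs. PC800's 400MHz.
print(f"PC1066 latency relative to PC800: {400 / 533:.2f}")  # ~0.75, i.e. 3/4
```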
DDR DRAM is basically just a more advanced flavor of SDRAM, with an added twist at the data pins. If you recall the SDRAM timing diagrams from the previous RAM Guide, you'll know that SDRAM transfers its commands, addresses, and data on the rising edge of the clock. Like regular SDRAM, DDR DRAM transfers its commands and addresses on the rising edge of the clock, but unlike SDRAM it contains special circuitry behind its data pins that allows it to transfer data on both the rising and falling edges of the clock. So DDR can transfer two data words per clock cycle, as opposed to SDRAM's one word per clock cycle, effectively doubling the speed at which it can be read from or written to under optimal circumstances. Thus the "DDR" in DDR DRAM stands for "Double Data Rate," a name it gets from its ability to transfer twice as much data per clock as SDRAM.
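In peak terms, that works out as follows (a minimal sketch assuming a 100MHz, 64-bit data bus and ignoring command overhead):

```python
# Peak transfer rate on a 100MHz, 64-bit data bus, ignoring command overhead.
CLOCK_MHZ = 100
BUS_BYTES = 8                              # 64-bit bus = 8 bytes per word

sdr_peak = CLOCK_MHZ * 1 * BUS_BYTES       # SDRAM: one word per rising edge -> 800 MB/s
ddr_peak = CLOCK_MHZ * 2 * BUS_BYTES       # DDR: rising and falling edges   -> 1600 MB/s
print(f"SDR: {sdr_peak} MB/s peak, DDR: {ddr_peak} MB/s peak")
```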
Another way to think about the need for multiple banks is to think of it all in terms of keeping the data bus full. If a 64-bit wide data bus is being run at 100MHz, then it has a lot of clock pulses flying by on it every second. Since two words of data can ride on a single clock pulse (one word on the rising edge and one word on the falling edge), the bus has a lot of potential clock pulses open every second that are available to carry pairs of 64-bit data chunks. So think of each clock pulse as having two empty slots that need to be filled, and think of the 100MHz bus as a conveyor that carries those pairs of slots by at a good clip. Now, if there were only one bank of memory, then there could be only one row open on each DIMM. If you had a system with only 1 DIMM then that one row would have to put out enough data every second to fill up all of those slots. Since memory accesses often move around from row to row, that's not likely to happen. That one bank would have to keep switching rows, and since switching rows takes time, a lot of those clock pulses would have to fly by empty-handed while the bank is doing its switching.
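To put rough numbers on the conveyor picture, here is a toy simulation (not a cycle-accurate model; the 3-clock row-switch penalty, the assumption that every access misses its row, and the random bank selection are all simplifications of mine):

```python
import random

ROW_MISS_PENALTY = 3   # bus clocks a bank spends closing one row and opening another

def bus_utilization(num_banks, accesses):
    """Fraction of bus clocks that carry data, assuming every access lands on a
    closed row but different banks can overlap their row switching."""
    bank_ready = [0] * num_banks   # clock at which each bank can next drive the bus
    bus_free = 0                   # clock at which the bus can carry the next transfer
    for _ in range(accesses):
        bank = random.randrange(num_banks)
        start = max(bus_free, bank_ready[bank])         # wait for both bus and bank
        bus_free = start + 1                            # the transfer takes one bus clock
        bank_ready[bank] = bus_free + ROW_MISS_PENALTY  # this bank now re-opens a row
    return accesses / bus_free

random.seed(1)
for banks in (1, 2, 4):
    print(f"{banks} bank(s): ~{bus_utilization(banks, 10_000):.0%} of bus clocks carry data")
```

With a single bank the bus spends roughly three of every four clocks waiting on row switches; with several banks, one bank's switching can be hidden behind another bank's transfers and far fewer slots go by empty.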
It is in fact doubling the clock speed in the SDRAM itself. Data is transferred twice per memory clock. One data transfer must be complete before the next can begin. Thus, data transfers are in fact taking place in half the time between memory and the processor.
-Raystonn
Memory latency expressed as a number of memory clock cycles actually remains exactly the same for RDRAM. The clock cycles being used as the unit value there are processor clock cycles.
http://www.aceshardware.com/Spades/read.php?article_id=140
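For what it's worth, converting between those units is simple arithmetic. The figures below are purely hypothetical and only show how the same latency looks when expressed in memory clocks versus processor clocks:

```python
# The same access latency expressed in different clock units.
LATENCY_NS = 120            # hypothetical measured load latency, in nanoseconds

MEM_CLOCK_MHZ = 400         # PC800 RDRAM channel clock
CPU_CLOCK_MHZ = 2000        # hypothetical 2GHz processor

print(LATENCY_NS * MEM_CLOCK_MHZ / 1000)   # 48.0  memory clock cycles
print(LATENCY_NS * CPU_CLOCK_MHZ / 1000)   # 240.0 processor clock cycles
```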
Yes, you are right. QDR, ODR, etc. would have about the same latency as DDR. But this has nothing to do with JEDEC vs. RAMBUS. RAMBUS also uses a DDR philosophy. If there ever was an SDR RDRAM standard (which I doubt), I'm sure the latencies would have been about the same as the DDR ones.

Sure it is. The memory itself is operating at a frequency of 200MHz. The interface (bus) from the memory to the chipset's memory controller is operating at 100MHz, but transmitting twice per clock. If DDR is only marginally better than SDR in terms of latency, then QDR would be only marginally better than DDR in terms of latency. Latency would really be hurting it. Meanwhile RDRAM is scaling up using frequency...
You are probably right about the parallel thing. The industry is going towards serial solutions in other areas (USB, Serial ATA, etc.). So I agree RDRAM has a better potential of achieving high _bandwidth_ in a system.

Right. However, they were unable to keep the timings constant. SDRAM simply cannot keep up. Its parallel nature causes more problems and makes it more difficult to ensure no signal conflicts.
Actually no. 32-bit and 64-bit RDRAM modules are on the roadmaps. This would remove the requirement of needing more than a single memory module per 64 pins.

That matters to the mobo manufacturers, not us. We would require more memory modules to use this extra bandwidth per pin, which would increase OUR cost.
That depends entirely on the speed of the DDR SDRAM, along with the platform in question. At worst, PC1066 RDRAM has about the same latency as PC2100 DDR SDRAM on the same platform. In other comparisons, it shows as having less latency.

PC1066 does NOT have lower latency than DDR SDRAM.
One interesting detail that no one seems to have mentioned is that while the _theoretical_ pin count should be different between RIMMs and DDR DIMMs, they are both listed as 184-pin parts.

If we restrict the discussion to what is available today, then we only get 16 pins per memory module.
http://freespace.virgin.net/m.warner/RoadmapQ102.htm

However, if we do that, then the whole discussion on dual-channel DDR SDRAM is excluded from consideration, as it is not available today, nor on any roadmaps.
Rather old news, granted. And rather late (the "January" referred to is _this month_, which is all but over). But apparently it is on a roadmap somewhere.

ServerWorks' Grand Champion GC-HE Foster MP chipset is expected to be released in January. Grand Champion may well be the first chipset available for the multi-processor version of Foster (Foster MP) and is expected to support up to 4 Xeon processors, 32GB of DDR SDRAM memory, and PCI-X. The Grand Champion's memory controller features support for 4-way interleaved PC1600 DDR SDRAM (PC1600 is used to keep memory access synchronous with the quad-pumped 100MHz FSB). This gives the Grand Champion 6.4GB/s of memory bandwidth (100 x 2 x 8 (64-bit) x 4) - enough to keep 4 bandwidth-hungry Xeons happy.
ServerWorks' Grand Champion GC-LE Foster chipset is expected to be released in January. GC-LE is the cut-down version of ServerWorks' GC-HE chipset, designed for use in 2-way Foster SMP environments. The memory controller supports 2-way interleaved PC1600 DDR SDRAM, giving a memory bandwidth of 3.2GB/s.
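The bandwidth figures in the quoted roadmap text do check out; here is the arithmetic spelled out (the helper function is just for illustration):

```python
# clock (MHz) x 2 transfers per clock x bus width (bytes) x interleave ways
def peak_ddr_mb_s(clock_mhz, bus_bytes, ways):
    return clock_mhz * 2 * bus_bytes * ways

print(peak_ddr_mb_s(100, 8, 4))   # 6400 MB/s = 6.4GB/s (GC-HE, 4-way)
print(peak_ddr_mb_s(100, 8, 2))   # 3200 MB/s = 3.2GB/s (GC-LE, 2-way)
```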
Some of the pins are entirely unused. Some of them are used to connect to the next RIMM module. (They are serial.)

One interesting detail that no one seems to have mentioned is that while the theoretical pin count should be different between RIMMs and DDR DIMMs, they are both listed as 184-pin parts.
Thanks. I had not noticed that one. I tend to look only at the desktop chipsets/motherboards. This one is designed for the Itanium and Xeon only. It will not be available for the Pentium 4.

http://freespace.virgin.net/m.warner/RoadmapQ102.htm
I would be willing to do that if someone could find a single set of numbers for both PC100 and PC133, or for both PC2100 and PC1600. Thus far I have not seen anything like this.