Apparently Rambus DOESN'T own DDR

Guest

Guest
Just a little side note on video cards and RDRAM: a while back I saw a high-end graphics card startup create a chip/board that used RDRAM. Unfortunately I haven't heard about it actually making it to market, and I wonder if the company is still around. If anyone's interested, they can be found at <A HREF="http://www.pixelfusion.com" target="_new">http://www.pixelfusion.com</A>. I haven't been there in months, so I don't know what's there or if it's even still up.
 

slvr_phoenix

Splendid
Dec 31, 2007
6,223
1
25,780
True, they are evil enough to do something like that and squeeze every last penny out of everyone. But then, I doubt they would go that far, because if they did, people would go back to using just plain old SDRAM or develop a new memory technology to replace them ASAP.

I don't think Rambus would be that stupid.

Then again, it is Rambus...

-Got your money for nothing, and your chips for free.
-I want my, I want my, I want my AMD.
 

HellDiver

Distinguished
Mar 14, 2001
34
0
18,530
Regarding videocard issues brought up on this thread:
<b>1.</b> When memory bandwidth was CUT on videocards in the past, they suffered a tremendous performance hit. IIRC, the 64-bit Riva TNT cards sucked bigtime despite having the same core as the 128-bit versions.
<b>2.</b> I think videocards could seriously improve their performance by using RDRAM, because fetching textures requires A LOT of B/W (see the above point), and those textures aren't that small either, so I'm not sure how the latency issue would play out here. Does anyone have theoretical ideas backed up by numbers on this one? Something akin to a home-brewed feasibility study? (See the rough sketch after this list.)
<b>3.</b> Shifting to high-B/W, high-latency memory requires a design change on the chip side: one has to move A LOT of data with few accesses to memory to minimize the latency impact. Current graphics cores are optimized for [relatively] low-latency SDRAM. So it'll take some time before videocard manufacturers switch to RDRAM (if at all).
<b>4.</b> RDRAM still costs a fortune compared to SDRAM, even DDR. Making a GeForce 3 use RDRAM (a theoretical option only, see point #3) would prolly add several hundred $$ to the cost of the card, and I can't remember any consumer-grade (!) videocards ever costing as much as GeForce 3 based ones already do. When RDRAM prices come down (again, if at all), you should expect to see RDRAM-based videocards popping up here and there.
<b>5.</b> An RDRAM memory controller must be integrated into the graphics chip, but first it must be designed. Intel has such designs; graphics chip manufacturers don't. And RDRAM requires a complex controller because of the quad-pumped bus. (The signalling technology relies on much smaller voltage differences than are usually accepted by TTL/CMOS devices. Overshoot/undershoot that would be treated as OK by SDRAM will mean erroneous values with RDRAM.)
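
To put some very rough numbers on point #2, here's a minimal back-of-envelope sketch (Python, used purely as a calculator). The fill rate, texel size, overdraw factor, channel bandwidth and latency are all my own assumed figures, not measurements of any real card; the point is only to show how fast texture fetching eats bandwidth, and why per-access latency hurts less when each access is a long burst.

```python
# Rough feasibility sketch: texture bandwidth vs. memory latency.
# All input figures are assumptions for illustration only.

PIXELS_PER_SEC = 800e6      # assumed fill rate: 800 Mpixels/s
BYTES_PER_TEXEL = 4         # assumed 32-bit texels
TEXELS_PER_PIXEL = 2        # assumed multitexturing/overdraw factor

texture_bw = PIXELS_PER_SEC * BYTES_PER_TEXEL * TEXELS_PER_PIXEL
print(f"Texture bandwidth needed: {texture_bw / 1e9:.1f} GB/s")

# How much of a channel's peak bandwidth survives per-access latency,
# for small vs. large bursts (channel speed and latency also assumed).
CHANNEL_BW = 1.6e9          # bytes/s
ACCESS_LATENCY = 50e-9      # seconds lost per access

for burst_bytes in (32, 256, 2048):
    transfer_time = burst_bytes / CHANNEL_BW
    efficiency = transfer_time / (transfer_time + ACCESS_LATENCY)
    print(f"{burst_bytes:5d}-byte bursts: {efficiency:6.1%} of peak usable")
```

With figures like these, a core that fetches textures in big bursts barely notices the latency, while one doing lots of small scattered reads loses most of its peak bandwidth, which ties straight back to point #3.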

As for RDRAM as a technology, I think it would currently fit best in various portables:
<b>1.</b> RDRAM generally requires a much lower pin count than SDRAM (and obviously even lower than DDR SDRAM), which means fewer traces on the mobo, which in turn means cheaper (less layered) mobos.
<b>2.</b> RDRAM signalling relies on lower voltages. So I <i>presume</i> it should require less power to run (fewer lines to drive, albeit at a higher frequency, but at lower voltages). If someone knows differently - let me know.

As for "dual channel"/"single channel" issue that was brought up on this thread...
<b>1.</b> IIRC, i820 was a single channel RDRAM mobo. (No, there's no need to point out it's performance, cince it was running P3 which simply couldn't put RDRAM to any good use!) So, there is NO strict requirement to use dual channel RDRAM.
<b>2.</b> However, in a funny way, it turns out that RDRAM is NOT the only memory which gives higher benifits to CPU if used "multi-channel"ed. E.g. ServerWorks new server-oriented chipset uses dual channel DDR SDRAM. [ex-]Digital Alpha mobos were almost always "multi-channel"ed, requiring that all banks are filled (I actually think with identical memory sticks), and Alpha was kicking some major ass as long as R&D was still doing some good work. High bandwidth requires many memory channels operating at once (i.e. wide bus), since doubling the bus is peanuts easy compared to doubling the memory clock rate (and DDR/QPB improvements are already here).
<b>3.</b> The big advantage of having "multi-channel" RDRAM as opposed to "multi-channel" SDRAM, is RDRAMs <i>much</i> lower pincount. The chipsets for Alpha 21264 sported a total of 7 (SEVEN!) chips, 4 of which were D-Chips (i.e. memory interface). The Alpha 21464 is supposed to have 8 (EIGHT!) -channel RDRAM, and it'll have the memory controller ON-DIE! Smell the difference? (Let's just hope that it'll actually get to the silicone stage!)
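
To illustrate the "widening the bus is easier than doubling the clock" point, here is the usual spec-sheet arithmetic as a small Python sketch. These are nominal peak figures only, ignoring latency and real-world efficiency; the clock rates and bus widths are the commonly quoted ones, not anything I've measured.

```python
# Theoretical peak bandwidth = transfers/s * bus width * channels.
# Nominal spec-sheet figures, for illustration only.

def peak_bw(transfers_per_sec, bus_bits, channels):
    """Peak bandwidth in bytes per second."""
    return transfers_per_sec * (bus_bits / 8) * channels

configs = {
    "PC800 RDRAM, 1 channel ": peak_bw(800e6, 16, 1),
    "PC800 RDRAM, 2 channels": peak_bw(800e6, 16, 2),
    "PC133 SDRAM, 1 channel ": peak_bw(133e6, 64, 1),
    "PC2100 DDR,  1 channel ": peak_bw(266e6, 64, 1),
    "PC2100 DDR,  2 channels": peak_bw(266e6, 64, 2),
}

for name, bw in configs.items():
    print(f"{name}: {bw / 1e9:.2f} GB/s")
```

Adding a channel simply multiplies the peak, while pushing the clock another notch is a whole engineering battle - hence everyone going "multi-channel" for bandwidth.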

As for people bashing Intel for forcing RDRAM down P3 users' throats...
<b>1.</b> And what exactly do you folks think AMD is doing with the AMD-760 chipset? As demonstrated multiple times on the net, DDR SDRAM Athlon-based platforms show only a marginal performance improvement over SDR SDRAM-based ones (especially the 133MHz DDR FSB based ones). I have yet to see one serious website openly recommend that people get such boards (unless one shows up under a "Dream System" configuration). Guess what: although the Athlon actually <i>can</i> use a bit more memory B/W than the P3 (EV6, anyone?), the results were more than disappointing for DDR SDRAM proponents. Yet DDR SDRAM seems to be the most hyped-up thing around. Exactly the same thing Intel did. AMD is just prepping the public for the next-generation cores that will be able to put that DDR SDRAM to good use. Let's not be idealists - AMD wants its next-generation core to land on good ground, just as Intel wanted with the P4. And if some customers overpaid for an Athlon/DDR combo - well, life's hard!
<b>2.</b> Regardless of everything said, Rambus management should definitely fry in hell. But they served as another wake-up call to the industry, I guess...

As for statements along the lines of "P4 is running 200MHz above Athlon yet performs worse"...
<b>1.</b> That is an outright wrong comparison to begin with: P4 and Athlon use completely different CPU architectures. The P4 has a longer pipeline and [relatively] simplified execution units. Obviously, clock-for-clock it'll lose even to the P3 (i.e. if anyone were to run a P4 at 1GHz, a 1GHz P3 would slap it silly). But that behavior is "by design", because the P4's architecture allows ramping the clock speed <b>FAR</b> beyond anything current architectures can hack (yeah, yeah, the Athlon is out there at 1.3GHz and will prolly get even higher with a die shrink. But the Athlon core is at the end of its career. The P4 was just launched.) The P4 will prolly hit about 2GHz by the end of the year, while no T-Bird will be anywhere near to watch it do so. For the same reasons the P3 never made a stable 1.13GHz CPU. And the P4 will reach much higher clocks before Intel needs a completely different core. (See the clock-versus-IPC sketch after this list.)
<b>2.</b> If you recall, the same thing happened when RISC CPUs were introduced - RISCs needed far higher clock rates to stay competitive, because they sacrificed core complexity to get the clock speed. Throughout the '90s the Alpha was always the clock leader. Others would sometimes come close, but never beat Alpha at its ability to increase clock speeds. (Again, as long as the Alpha was actually still being developed.)
<b>3.</b> The P4 is the first CPU of a new architecture, and that certainly means teething problems. As someone pointed out, the PPro had identical problems, and as I can remind you, so did the very first Pentium cores. Only after the platform was dug in overall, the bugs were ironed out, clock speeds increased and software vendors started optimising their code did the Pentium become a formidable CPU. Give the P4 some time, and AMD will have a fair run for its money trying to catch up. (Another good point - AMD is always catching up. Just like the Russians during the Cold War - always catching up to the latest Western military breakthroughs. The P4 is out there, going through its infancy problems. AMD's new core is still to be seen.)
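
A toy sketch of the clock-versus-IPC trade-off from point #1. The instructions-per-clock figures below are invented purely for illustration, not measured values for the P4, P3 or Athlon; the point is only that a deeper pipeline can give up some work per clock and win it back (and more) once the clock ramps.

```python
# Relative performance ~ IPC * clock.
# Both IPC figures are invented purely for illustration.

SHORT_PIPE_IPC = 1.00   # assumed: shorter pipeline, more work per clock
DEEP_PIPE_IPC = 0.75    # assumed: deeper pipeline, less work per clock

for clock_ghz in (1.0, 1.5, 2.0):
    deep_perf = DEEP_PIPE_IPC * clock_ghz
    equivalent = deep_perf / SHORT_PIPE_IPC
    print(f"deep pipeline @ {clock_ghz:.1f} GHz "
          f"~= short pipeline @ {equivalent:.2f} GHz")
```

So at equal clocks the deep-pipeline part loses, but if its design lets it clock 50-100% higher, it ends up ahead - which is exactly the "by design" behavior described above.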

To sum things up: I don't think things are as black and white as some may want to claim. "RDRAM sucks!", "P4 sucks!", "AMD rulez!", "DDR rulez!", "Athlon will tear everybody a new one!"... None of those is correct. The Cold War in CPU technologies - including the corresponding memory technologies - is raging on, and just as in the real Cold War (as long as it was going on) there is no winner or loser - the sides exchange devastating blows non-stop. What I'm actually worried about is one side winning. Because just as happened in the real Cold War, even if the "Good Empire" (AMD here?) kills the "Evil Empire" (Intel? he-he :) ), what we'll end up with is another monopoly. And being a monopoly, the "Good Empire" will VERY quickly roll down to the best practices of the "Evil Empire"... Just look at some of the conduct of the USA in the '90s... (<b>NOTE:</b> I don't intend to start a brawl on that last issue. Just take it as my opinion, ok? No need to try and change it on this forum.)

P.S. Whew! That's a long one! Written during some late hours too, so some of the stuff may repeat or look a bit strange. My apologies in advance.
 

Kelledin

Distinguished
Mar 1, 2001
2,183
0
19,780
Yes, RDRAM can clock to high MHz. This feature is rather useless, though, unless individual RDRAM chips are soldered directly onto a system board within a short trace length of the memory controller. Upon further research, it turns out that speed is actually the biggest trace-length inhibitor. Transmitting signals at high frequency over long traces requires both extra shielding (i.e. an 8- or 10-layer PCB) and traces with reduced capacitance (it's impossible to send a clean signal down a wire if the signal you sent on the last cycle is still lingering in the wire). This was the problem with the i820 chipset--the signals to and from the RIMMs were getting corrupted in the motherboard traces. The trace length had to be reduced, so a RIMM slot had to be cut off. As of the i850, this problem has not been addressed; it's still only two RIMMs per channel. The problem will only get worse as clock frequency increases.
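
To put hypothetical numbers on why higher transfer rates force shorter traces: a reflection (the "lingering" signal) has to travel to the end of the trace and back, then settle, within a fraction of one bit time. In the sketch below the propagation speed and settling budget are assumptions of mine, not RDRAM spec values, so treat the absolute lengths as illustrative only.

```python
# Crude sketch: a reflection must travel to the end of a trace and back,
# then settle, within a fraction of one bit time. All figures assumed.

PROP_SPEED_CM_PER_NS = 15.0  # assumed signal speed on FR4, roughly half of c
SETTLE_FRACTION = 0.25       # assume reflections must die within 1/4 bit time

def max_trace_cm(transfers_per_sec):
    bit_time_ns = 1e9 / transfers_per_sec
    budget_ns = bit_time_ns * SETTLE_FRACTION
    return budget_ns * PROP_SPEED_CM_PER_NS / 2  # divide by 2: out and back

for rate in (133e6, 266e6, 800e6):
    print(f"{rate / 1e6:5.0f} MT/s -> roughly {max_trace_cm(rate):.1f} cm of trace")
```

Even with generous assumptions, the allowed trace length at 800 MT/s comes out several times shorter than at SDRAM speeds, which is consistent with RIMM slots having to be cut.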

Currently the only way I see this problem being worked out is if the current RIMM packaging is drastically altered; from that standpoint, I suppose RDRAM has room to grow. If the packaging were modified to be less serial and more parallel, then not only would clock speed become less of a concern, but latency would drop considerably. RDRAM chips could then communicate with the memory controller at something closer to their native latency (20ns) instead of a signal accumulating latency as it passes through chip after chip.
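
Here's a crude model of that latency accumulation, assuming each device the request and reply pass through adds a fixed delay. The 20 ns native latency is the figure from above; the per-hop delay and channel depth are assumed values for illustration, and real channels may compensate for this differently.

```python
# Crude model of latency piling up along a serial channel: each device
# between the controller and the target adds a fixed pass-through delay,
# on the way out and on the way back.

NATIVE_LATENCY_NS = 20.0  # chip's own latency (figure from the post)
PER_HOP_DELAY_NS = 1.0    # assumed delay per device crossed
CHANNEL_DEPTH = 32        # assumed number of devices on the channel

def effective_latency_ns(position):
    """Latency seen by the controller for the chip at a 1-based position."""
    return NATIVE_LATENCY_NS + 2 * position * PER_HOP_DELAY_NS

for pos in (1, 8, 16, CHANNEL_DEPTH):
    print(f"device #{pos:2d}: ~{effective_latency_ns(pos):.0f} ns effective")
```

A more parallel packaging would flatten that curve, which is why it would bring chips back toward their native latency.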

Looking back at old technology papers, it turns out that Intel's original plan for RDRAM was to package RDRAM chips in the standard 168-pin DIMM format. I wonder what happened to this idea?

Kelledin
<A HREF="http://kelledin.tripod.com/scovsms.jpg" target="_new">http://kelledin.tripod.com/scovsms.jpg</A>