Apparently Rambus DOESN'T own DDR

Tom's Hardware Community forum, page 3.
<b>LOOKS like some of you guys jumped the gun and the articles were full of crap..</b>
I think you're the one who jumped the gun, without any facts to back it up. First you claimed it was a German judge:
<b>Give me a break, a German judge ...
international courts rarely side with foreign patent claims,

Infineon is a German company..

mean nothing. there are 4 more cases pending, and the really good ones are in California, where Rambus filed, including Micron and Infineon..</b>
Now you're calling the report (which states the facts) crap.
For someone who calls himself a professional, I really doubt it!
BTW, as I write this message, Rambus's stock is down further, to $15.8/share (-34.14%). Now you're right, it's time to buy its stock.
 
Well, we'll definitely see the truth of the matter shortly, i.e., when SDRAM and DDR SDRAM chipsets for the Pentium 4 come out. It's just that right now, the Pentium 4 really isn't proving itself when compared to Athlons, or even the Pentium III, and it's not clear whether the Pentium 4 itself or the memory is the cause. It very well may be the Pentium 4.

Sure, RDRAM has good bandwidth: one benchmark I saw gives a Pentium 4 RDRAM (256 MB) system 1,400 MB/s, while an Athlon with regular SDRAM (256 MB) gets about 540 MB/s. But about Quake 3, you say a 60 fps advantage? Well, to be fair, that is comparing a 1.73 GHz Pentium 4 to a 1.2 GHz DDR Athlon system... I mean, if you take a 1.4 GHz Pentium 4, that increase drops to only 20 fps, still with a 200 MHz clock speed advantage. And if you look at the Windows 2000 benchmarks for Quake 3, an Athlon 1.4 GHz is able to get 200.6 fps, while a 1.4 GHz Pentium 4 system scores 212.4, only about 12 fps (5.9%) faster, for roughly double the memory bandwidth at a similar clock speed. So even for a memory-bandwidth-intensive application like Quake 3, there obviously isn't much benefit from Rambus's or the Pentium 4's supposedly superior bandwidth over a PC2100 DDR Athlon system.

Well, I'm trying to share a 56k connection with someone who's playing Counter-Strike on another computer, so it looks like I can't do any more research for this post :). So again, we'll see in the coming weeks exactly what Rambus is made of when teamed up with the Pentium 4. Will Intel intentionally handicap their SDRAM chipsets for the Pentium 4, like they may have with the 815, just to make Rambus look better??? :) See ya.
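By the way, those percentages are easy to double-check; here's a quick Python sketch using the fps figures quoted above:

```python
# Sanity-checking the Quake 3 comparison quoted above.
def pct_faster(a, b):
    """How much faster score a is than score b, as a percentage."""
    return (a - b) / b * 100

p4_fps, athlon_fps = 212.4, 200.6   # 1.4 GHz P4 vs 1.4 GHz Athlon, Win2000
print(f"{p4_fps - athlon_fps:.1f} fps gap")             # 11.8 fps
print(f"{pct_faster(p4_fps, athlon_fps):.1f}% faster")  # 5.9% faster
```

Not a huge payoff for double the memory bandwidth.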

"We put the <i>fun</i> back into fundamentalist dogma!"
 
NO,
that was a mistake, as there is a German case ongoing as well;
the first post was not specific as to which case,
and there are 6.

And obviously you do not read much, as the article was retracted and redone, as it contained a lot of BS
and misinfo.
The AMERICAN judge postponed the case till April,
seeing as how Rambus got Infineon ordered to produce
evidence in Rambus's favor, making Infineon a liar.

OOPS, there goes your theory
CAMERON

CYBERIMAGE
<A HREF="http://www.4CyberImage.com " target="_new">http://www.4CyberImage.com </A>
Ultra High Performance Computers-
 
Come on, Cyber... I posted the original post, and it clearly states that this is a US case in Virginia, with a US judge who is named. Second, as I pointed out to you, Electronic News did NOT retract their article. They said that they stood behind it. As a matter of fact, as I pointed out in a previous post, the editor you quoted stated that it was a "relatively minor error". And for all intents and purposes, they have been proved correct. The ruling was signed YESTERDAY (which means that it was in fact written yesterday, as they reported), but it was not released until today. Further, they stated that the judge had made a preliminary ruling, on arguments related to the definition of terms, which favored Infineon. Again correct. They were incorrect in regards to a possible summary judgement dismissing the case against Rambus, but that was a prediction. They did not say a summary judgement had been issued; they said that one might be. So on the face of it, you are completely incorrect on this issue, and are ignoring and misrepresenting the facts that are known.

As for the delay, you are correct that a delay has been granted. "Payne ordered Infineon's lawyers to go to the company's headquarters in Germany to conduct a full document search, according to Rambus. The judge also gave Rambus time to take the deposition of Infineon CEO Ulrich Schumacher." The judge also said that he was concerned with "systematic non-disclosure" on Infineon's part. That could mean that Infineon is no saint of a company either, and if so, shame on them. Or it could simply be nothing. There is absolutely no guarantee that the extra time will show anything at all. In another article, the judge said that he was not saying the non-disclosure was intentional, but that he intended to get to the bottom of it before the case went any further.

Now, what you absolutely refuse to acknowledge is that Rambus was dealt a serious blow by the same judge with regard to their patent claims. He wrote in part: "Also, it was difficult to credit (the expert's) testimony on disputed terms because it reflected the general, and disturbing, tendency of Rambus to distance its current constructions from what the inventors said in...the specification, and, in doing so, to use the claim construction process to broaden claims." As I said before, he is accusing them of trying to make the patents broader than they really are through an abuse of the legal process. Add to that that the same judge has ruled that jurors can hear arguments from Infineon that Rambus committed criminal fraud under RICO, and I think you can see that even if Infineon DID withhold information intentionally, Rambus is sure as hell not going to look like a poster boy for ethics before ANY jury, unless it is composed of you and 11 clones.

I have acknowledged that I do not know how this will turn out. I have also acknowledged that Infineon MAY not have completely clean hands. We'll see. But come on, I am really starting to wonder if you work for Rambus, because you seem to be simply ignoring the facts.
 
COME ON,
the article was retracted; look, it is not the same article, they pulled and edited it,
read my posts from CNet and Reuters.

The judge had not made any decision; the article you posted said that Rambus was ruled against..

Be real, it went from RAMBUS LOSES to
CASE POSTPONED BECAUSE JUDGE RULES ON RAMBUS'S SIDE TO POSTPONE DUE TO DISCOVERY AND DOCS WHICH PROVE INFINEON WAS LYING ABOUT THE DATE THEY KNEW ABOUT THE PATENTS.

DO YOU REALLY THINK A JUDGE WOULD POSTPONE THE CASE A MONTH IF HE DID NOT REALIZE RAMBUS HAS PROOF THAT INFINEON IS HIDING SOMETHING,
AND ORDER THEM ALL THE WAY BACK TO GERMANY TO DO A DOC SEARCH?

GET REAL MAN.
DONE TALKING ABOUT THIS, POINTLESS. THE STOCK IS UP IN AFTER-HOURS TRADING AND I ALREADY MADE SOME $4000 TODAY ON IT.
YOU WOULD HAVE GOTTEN KILLED IN YOUR PUTS, INCIDENTALLY
ha ha
ha ha
CAMERON

CYBERIMAGE
<A HREF="http://www.4CyberImage.com " target="_new">http://www.4CyberImage.com </A>
Ultra High Performance Computers-
 
Cyber, I give up presenting you with facts. Read Reuters, CNET, Business Wire. No one is saying that the case is OVER. But the court did RULE against Rambus in a pretrial motion regarding important facts to be presented in the case. There are two things going on here: a pretrial ruling and a decision to delay. Sorry if this sounds like a flame, because most of the time you are more reasonable, but continue to live in denial of cold hard facts if you wish.

Mike
 
Jesus, do you realize how bitter you sound?

I have little against Intel, but I despise Rambus's ethics, and not once have I seen solid proof or a technical argument showing that their product is superior. I'm quite through being civil, and you apparently left all notion of civility behind when you signed your soul over to Intel et al.

<i><b>Capitalist businesses have no loyalty to you, only to your money.</b></i> That's harsh, but it's the American way. It's no good being loyal to any corporation. Grow up and have a nice life.

Kelledin
<A HREF="http://kelledin.tripod.com/scovsms.jpg" target="_new">http://kelledin.tripod.com/scovsms.jpg</A>
 
<b>and obviously you do not read much, as the article was retracted and redone, as it contains a lot of BS</b>
You just don't get it, do you? The report was showing Rambus's stock plunging as it happened. It definitely didn't state that the judge had ruled on anything yet.
http://www.bigchart.com/news/articles.asp?newsid=776421492&symb=RMBS&sid=41488

><b>The company said no ruling on the patents had been issued, and called reports of a negative ruling inaccurate...

... Rambus spokeswoman Kristine Wiseman called the article inaccurate. "There was no ruling that has been issued as of yet," she told Reuters.</b><
No, it's you who isn't reading it clearly.
<b>the AMERICAN judge postponed the case till April,
seeing as how Rambus got Infineon ordered to produce
evidence in Rambus's favor, making Infineon a liar</b>
You just proved my point by drawing such conclusion!
 
<b>but continue to live in denial of cold hard facts if you wish.</b>
Thanks <b>mpcmike</b>. That's the word I've been looking for to describe him: <b>'DENIAL'</b>. I'm just wondering why he defends Rambus so blindly.
 
Yes, you can compensate for high latency, but only to a certain degree.
What we need is a low-latency, high-bandwidth memory system, and DDR isn't terribly impressive right now, nor is Rambus. Honestly, we would probably be better off running .13 micron memory at higher real speeds...
I am not particularly impressed with ANY computer, processor, or memory system right now. My 1GHz T-bird is pretty fast, but not fast enough, and the P4 would only be MARGINALLY faster on CERTAIN apps, which would not make it worthwhile to upgrade. Quake III framerates don't impress me much; as long as I break 30fps on average, I'm happy.
Higher-latency RAM IS slower at responding; afterwards, it has the increased bandwidth to make up for it. But we have made strides to improve the response time of computers, and we will not accept ANY ground lost. Maybe I'm wrong in feeling this way, but hey...

----------------------
why, oh WHY, is the world run by morons?
 
Right now, the problem with DDR DIMMs is that there's not much to take advantage of them. There's the Athlon, which does show marked improvement using DDR, but not as much as we'd expect. The T-bird is a great chip, but its prefetch unit isn't up to funneling through enough data to saturate DDR bandwidth or negate DDR latency.

The P4, despite being castrated, <i>does</i> have a better prefetch unit; if it didn't, RDRAM's latency would bite hard. If ServerWorks (Broadcom) can get their P4 DDR chipset out, we might see DDR <i>really</i> shine.

Kelledin
<A HREF="http://kelledin.tripod.com/scovsms.jpg" target="_new">http://kelledin.tripod.com/scovsms.jpg</A>
 
Time and time again I see you post arguing the technical advantages of Rambus RDRAM. However, I have yet to see you defend the business practices of Rambus. You are simply missing the point. RDRAM may in fact be a superior technology, but Rambus's implementation of it was far from honorable. Participating in JEDEC, an open standards council, bringing its ideas to the forefront, only to terminate its membership and later sue the companies that chose to use the ideas it presented at JEDEC, could very well be its undoing. A precedent has already been set in another case very similar to this one. Was it a coincidence that Rambus chose to depart from JEDEC so quickly after that ruling? I highly doubt it. There are certain disclosure rules that must be adhered to when participating in an open standards council, and Rambus chose to ignore them. For this reason they may eventually lose all their claims to DDR technology, and deservedly so. If they had adhered to the rules of the council they were participating in, it may well be that two years ago memory makers would have opted for a different route and we would have an entirely different memory type. In any case, they have set technology back several years with their shady dealings. Councils such as JEDEC are formed to bring a certain symmetry and compatibility, so that we as end users do not have to worry about compatibility issues on every stick of memory we use. If Rambus wins its court battles, how will that help us? It won't!

In the not too distant past, it was Microsoft being sued over its OS monopoly and questionable business practices. Only one PC manufacturer had the guts to stand up and testify against them: IBM. The other companies were much too afraid of retaliation. This is much like the scenario you present, with so many memory manufacturers paying royalties to Rambus now. Still, Microsoft lost.

A little bit of knowledge is a dangerous thing!
 
I think any time computer technology tries to take five steps forward, it takes a step backward to do it.

Look at the comparison of a Pentium MMX to a Pentium Pro.

Why did the Pentium Pro take that minor hit in performance in its very early stages? Because the MMX chips were seasoned technology, whereas the P6 core was rebuilding a few things for improved scalability. But look how much faster a mature Pentium Pro was compared to a Pentium MMX.

Now we see the same thing with the P4. It takes a performance hit from what we'd expect given the P3's performance. But the P3 has reached the end of its days, whereas the P4 is a LOT more scalable and in its very early days. A mature P4 will be considerably faster than a P3 ever could be.

The same is also true of memory. Right now, in its early forms, RDRAM isn't looking too hot, but it has incredible scalability potential. Meanwhile, SDRAM is reaching the end of its rope. Sure, we can double its rate into DDR. We can even quadruple its rate into QDR. But after that, its bandwidth is going to hit its mature point. It won't scale further. We'll need something completely different.

Whether we choose RDRAM now, or go with that something completely different then, either way it is likely that the early versions won't be extremely superior to SDRAM derivatives. In time, they will be, but not at the beginning.

Look at how long RDRAM has been shunned. It may have been out for a while now, but technologically it's still very much in its infancy, because no one has wanted to put money into it for use in a P3. Now, with the P4, that is starting to change. But only just starting.

You say, "we will not accept ANY ground lost". Yet this is how the computer industry has always been when it goes through a major change. It loses a tiny bit of ground to set the stage for a massive gain of ground.

The only difference between now and the past is that now computers are affordable enough that everyone can surf the internet and find this kind of information, whereas in the past this kind of information was known only to the rare computer expert.

The accessibility of the information is the only thing that has changed from the past, not the method of improvement.

-Despite all my <font color=red>rage</font color=red>, I'm still just a <font color=orange>rat</font color=orange> in a <font color=white>cage</font color=white>.
 
Latency is performance critical. If it weren't, RDRAM wouldn't have been spanked by SDRAM on the P3 platform, and graphics cards wouldn't be using significantly more expensive 4.5ns DDR SDRAM over 5ns DDR SDRAM. If you have a process that needs big chunks of data from memory, RDRAM is for you. However, most PC tasks require small chunks of data quickly. I thought this was general knowledge, as I've been reading about the relative impacts of bandwidth and latency since EDO was king. Every 3D video card out there, including the high-performance money-is-no-object cards, uses DDR SDRAM over RDRAM. Intel is the only computer company that has given RDRAM a second look. Only one game console, a platform where RDRAM's benefits are perhaps most apparent and most easily capitalized on, uses the technology. So no, it is very obvious latency is NOT just some buzzword, and DOES have a very significant impact on real-world performance. To be so dismissive about it just reaffirms my low opinion of your intelligence.
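To make the small-chunks point concrete, here's a toy model in Python. The latency and bandwidth numbers are purely illustrative, not measured figures for any real part:

```python
# Toy model: time to fetch a block = access latency + size / bandwidth.
# 1 GB/s conveniently equals 1 byte/ns, so the units below work out.

def fetch_time_ns(size_bytes, latency_ns, bandwidth_gbps):
    """Nanoseconds to fetch size_bytes from memory."""
    return latency_ns + size_bytes / bandwidth_gbps

for size in (64, 4096):  # a cache-line-sized fetch vs. a big streaming block
    low_lat = fetch_time_ns(size, latency_ns=40, bandwidth_gbps=1.0)
    high_bw = fetch_time_ns(size, latency_ns=90, bandwidth_gbps=1.6)
    print(f"{size:5d} B: low-latency part {low_lat:6.1f} ns, "
          f"high-bandwidth part {high_bw:6.1f} ns")
```

Small, cache-line-sized fetches (what most PC code does) favor the low-latency part; only large streaming transfers let the extra bandwidth win.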

/Athlon-1.2GHz@1370MHz(137MHz*10)/Asus_A7V133/
 
While I can appreciate your position, I believe you are missing a few pieces of the collective puzzle that is the SDRAM/RDRAM debate.

For example, both the P3 and the T-bird architectures are limited in how much memory bandwidth they can benefit from. The chips simply were not designed to use high memory bandwidth. Just as putting a 64-bit PCI card into a 32-bit PCI bus will not give you any improvement, neither will putting high-bandwidth memory into a low-bandwidth system.

So putting RDRAM into them accomplishes nothing, because the whole point of RDRAM is high bandwidth at the cost of latency. What you are getting is still low bandwidth because of the processor limitations, but with a higher latency. Of course the memory will look useless there.

Yet it doesn't look useless. It still performs well in certain tasks, and very similarly to SDRAM in others. Funny how RDRAM can do this even though the processors aren't utilizing all of its higher bandwidth.

Meanwhile DDR SDRAM performs better in the P3 and Athlon. But why? If these chips aren't using DDR SDRAM's higher bandwidth any more than they are using RDRAM's, then what makes the difference?

Simple, you said it yourself. DDR SDRAM memory has gotten as fast as 4.5ns. Yet what is the fastest latency that single-rate SDRAM uses?

I'm 100% certain that if a 4ns single-rate SDRAM stick (if one existed) were put into a P3 and compared against a similar system with a 5ns double-rate stick, we would see the single-rate SDRAM outperform the double-rate SDRAM. Why? Because with the bandwidth limitations of the P3, latency will be the only concern.

Yet I am also sure that if you put memory sticks like that into a processor DESIGNED for high memory bandwidth (such as the P4), the DDR SDRAM would easily kick the pants off the single-rate SDRAM. Why? It couldn't be latency; the DDR stick's latency would be worse than the single-rate stick's.

Could it be because of the higher memory bandwidth?

According to you, no. Because, according to you, bandwidth has no benefits. It's only latency that matters.

I'm sorry, but your idealism is wrong. It's simply not true. Latency may be everything in systems that CAN'T use higher bandwidths, but it is next to nothing in systems that DO use higher bandwidths.

The P4 uses higher bandwidths. The Hammer chips are bound to use higher bandwidths. Hopefully, the .13micron chips will also all use higher bandwidths. And if this is the case, then the importance of latency ONLY applies to old systems.

And if this gives you a low opinion of my intelligence, that's your problem, not mine. I know what my intelligence is, just as I know that you are the only one who loses by not accepting the possibility that you are wrong.

-Despite all my <font color=red>rage</font color=red>, I'm still just a <font color=orange>rat</font color=orange> in a <font color=white>cage</font color=white>.
 
I'm not missing any pieces. I was responding to CYBERIMAGE's ignorant comment that latency is just a buzz word and has no impact on performance. It does, and in fact has been proven through several new memory technologies to have a larger overall real world performance benefit than similar increases in memory bandwidth. This is not to say bandwidth has no benefits, I have no idea how you read that into my comments. It is of course critical to increase the amount of data available to the CPU as its ability to process that data increases. However, I am betting that once (if?) DDR SDRAM is available on the P4 platform, you will see it outperform RDRAM despite having a third less bandwidth to work with. Of course I could be wrong, especially since the P4 was designed from the ground up to optimize every advantage RDRAM offers while minimizing its handicaps, but I doubt it. So I don't necessarily have a low opinion of YOUR intelligence (unless you're yet another FUGGER/CYBERIMAGE persona) but you may want to work on your reading skills a bit.

/Athlon-1.2GHz@1370MHz(137MHz*10)/Asus_A7V133/
 
Sorry to those who read this in another thread, but Supermicro makes a DDR motherboard for the P4 (well, a UK supplier is selling it).

M

Opinions are like arseholes .... everybody’s got one.... :smile:
 
Did or did you not say, "Latency is performance critical. If it weren't, RDRAM wouldn't have been spanked by SDRAM on the P3 platform, and graphics cards wouldn't be using significantly more expensive 4.5ns DDR SDRAM over 5ns DDR SDRAM."?

I think my reading abilities are doing quite well. It's your ability to express your thoughts fully that may need additional work. Just a friendly suggestion.

What I find interesting is that every graphics card chip is being limited by insufficient memory bandwidth. It's a clear and obvious bottleneck. And every time higher bandwidth memory is put in, it makes a drastic difference in the video card's performance.

Yet no graphics card company is trying to use RDRAM, which is a bandwidth king. (But a latency chump.)

I wish I could ask nVidia engineers about their reasoning. Is it that they expect DDR SDRAM to one day deliver more bandwidth, or that the latency would actually make the cards perform worse? Or perhaps the price difference makes a video card using RDRAM just too expensive? (Could we really believe that, when we see GeForce3 prices?) Or is it that RDRAM's tendency to run rather hot has made it difficult to integrate into a video card?

I mean, it would seem as though the solution to this bandwidth bottleneck in video cards is right there. They could switch to RDRAM and work on raising RDRAM's bandwidth and lowering RDRAM's latency, instead of working on improving DDR SDRAM's timing.

Yet they don't. They continue to push SDRAM. And one has to wonder if there is a specific reason for this, and if so what?

-Despite all my <font color=red>rage</font color=red>, I'm still just a <font color=orange>rat</font color=orange> in a <font color=white>cage</font color=white>.
 
I still don't see how you can turn that into "bandwidth makes no difference" because it very obviously says no such thing. In fact, it says nothing about bandwidth at all, it merely points out, as was intended, that latency obviously makes a huge difference in performance.

I would assume that video card manufacturers don't use RDRAM for many reasons. The memory controller is very touchy and complicated, possibly too complex to fit onto a video card or run stably. The memory runs hot, and today's video cards have enough heat issues as it is. Latency has been shown to make a larger impact on performance than bandwidth, so the gain from the extra bandwidth probably could not compensate for the loss due to latency. RDRAM is expensive; a single 32MB or 64MB RDRAM module would have moved the price well into the high end, with a questionable performance payoff. SDRAM has the backing of the memory industry and a guaranteed future. And considering how few improvements have been seen in RDRAM, versus the leaps and bounds made in SDRAM, they probably also see a better future in SDRAM and are not willing to invest time and effort into RDRAM if it will be fading away soon.

I'm sorry, as much as I have enjoyed many of your posts for their intelligence, I still cannot understand why you back RDRAM when its merits simply do not carry through to a large majority of real world applications.

/Athlon-1.2GHz@1370MHz(137MHz*10)/Asus_A7V133/
 
RDRAM is closer to the end of its rope, scalability-wise, than DDR. RDRAM <i>already</i> double-pumps its 400MHz bus. It's already limited to 2 RIMMs per channel, due to the limits its latency imposes on trace lengths. The i820 chipset demonstrated Intel's inability to put three RIMMs on a single channel. And RDRAM's already abominable latency gets worse and worse the more chips you put on a RIMM.
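For reference, the raw per-channel numbers behind this argument, sketched in Python (PC800 and PC2100 figures are the standard ratings; the bus widths are the usual 16-bit RDRAM channel and 64-bit DIMM):

```python
# Peak bandwidth = transfers per second x bytes per transfer.
def peak_mbps(mega_transfers_per_sec, bus_bytes):
    return mega_transfers_per_sec * bus_bytes

print(peak_mbps(800, 2))   # PC800 RDRAM: 16-bit channel, 800 MT/s -> 1600 MB/s
print(peak_mbps(266, 8))   # PC2100 DDR: 64-bit bus, 266 MT/s -> 2128 MB/s (~2100)
```

So a single DDR DIMM already out-muscles a single RDRAM channel; RDRAM catches up only by ganging channels together.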

Kelledin
<A HREF="http://kelledin.tripod.com/scovsms.jpg" target="_new">http://kelledin.tripod.com/scovsms.jpg</A>
 
It's just that when you say "latency is performance critical" and "latency has been proven to make a larger impact on performance than bandwidth", it makes it sound as though you consider bandwidth improvements meaningless if they cost any extra latency.

And depending on the situation, I can kind of agree with that, up to a point anyway. At least when bandwidth isn't of major importance.

But bandwidth is known to be very important in both processors and video cards. You keep saying how latency is so important. To back this up you mention things like 4.5ns memory used instead of 5ns to get yet more performance.

Yet you fail to mention the WHOLE truth about that: the faster latency is in fact used to clock the memory faster, thereby increasing bandwidth. If you don't clock the memory faster, then your faster memory isn't going to give you any performance difference.

That .5ns difference can be worth about a 33MHz difference in clock speed. Video cards with 5ns DDR SDRAM have about a 466MHz effective memory clock, which gives them a bandwidth of approximately 3728 MB/s, whereas 4.5ns DDR SDRAM can reach about a 500MHz effective clock, for a bandwidth of about 4000 MB/s.
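Spelled out, that arithmetic (assuming the 64-bit, 8-byte data path those figures imply):

```python
# MB/s = effective clock in MHz x bus width in bytes (64-bit path assumed).
def ddr_bandwidth_mbps(effective_clock_mhz, bus_bytes=8):
    return effective_clock_mhz * bus_bytes

print(ddr_bandwidth_mbps(466))  # 5 ns DDR, ~466 MHz effective -> 3728 MB/s
print(ddr_bandwidth_mbps(500))  # 4.5 ns DDR, ~500 MHz effective -> 4000 MB/s
```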

It's that bandwidth improvement that gives the actual performance gains. The latency in and of itself does almost nothing for performance.

And it is bandwidth that is the bottleneck in video cards. A theoretical pixel fill rate means nothing if the memory won't stream data to the GPU fast enough to achieve that fill rate, which is EXACTLY the problem we are seeing in video cards right now.

This is why the RADEON has its Hyper-Z technology and the GeForce3 has its Lightspeed technology. Both are ways of REDUCING REQUIRED BANDWIDTH. If latency made such a difference, then why are both ATi and nVidia working on BANDWIDTH issues?

It seems to me that bandwidth is a lot more important for performance than latency. And both ATi and nVidia would agree with me.

I do however agree with you that the reason RDRAM isn't used in video cards is probably mostly the heat problem, but also possibly the memory controller.

I don't however think it is that SDRAM has seen so many improvements where RDRAM hasn't yet, because if the video card industry were to be pushing RDRAM instead of SDRAM, you KNOW it would be seeing a massive number of improvements. The reason SDRAM has seen so many improvements is because the video cards push for them, not because PCs use them. After all, PCs still only support PC2100 memory when video cards are already using the equivalent of PC4000 memory. PCs have NOT been the ones pushing for faster and faster DDR SDRAM.

I know though, it's puzzling. Why do I stand up for RDRAM? Rambus is a PoS company that has tried to trick and swindle its way into owning the whole memory world. I won't argue against that for a second, because a large part of me wants to see Rambus hung out to dry for it.

But I do see that future hardware is going to need very high bandwidth. It makes all the difference. And RDRAM has a much larger bandwidth potential than even QDR SDRAM does.

But this potential is yet to be seen because Intel seems to be the ONLY company pushing for R&D into RDRAM. No one else seems to care for the memory. But then no other CPU manufacturer is trying to push hardware with such a high memory bandwidth as Intel is with their P4.

In a P3 or an Athlon, RDRAM is next to useless. And even DDR SDRAM and QDR SDRAM are next to useless. The P3 and Athlon both have bandwidth limitations, and so higher bandwidth only gets used up to a point. After that point any more bandwidth is useless to these CPUs. So in THEIR case, latency is very important.

But in the case of future hardware such as Intel's P4 derivatives and AMD's Hammers, bandwidth is going to mean a whole lot more. And latency will be of much less importance.

-Despite all my <font color=red>rage</font color=red>, I'm still just a <font color=orange>rat</font color=orange> in a <font color=white>cage</font color=white>.
 
Slvr,
I was glad to see your comments regarding Rambus the company and their errrr....questionable (I'm being kind here)business practices. It really might be a question of a decent technology getting tarred and feathered by a bad sponsor. And I really think it's Rambus the BUSINESS that is slowing down the investment in R&D that you mention. It took a huge stock payment to get Intel to commit. And in retrospect, even if they like the technology, Intel is probably questioning whether getting into bed with the Rambus-snake was such a good idea. I think that virtually everyone in the industry is terrified to be at Rambus's mercy given their behaviour. Would you trust your business to a company like that? I doubt it. So, you do everything you can to avoid being locked into a relationship with them as you would be by adopting their proprietary technology. That's why everyone is so concerned about their patent claims on DDRAM. And it's why I hope and pray that they are not successful in enforcing what I firmly believe are questionable patent claims. It will not be good for anybody (except Rambus stock holders) if they are successful.

<P ID="edit"><FONT SIZE=-1><EM>Edited by mpcmike on 03/20/01 02:57 PM.</EM></FONT></P>
 
I don't know where you get your information, but RDRAM is nowhere near its maximum bandwidth capability. The majority of RDRAM's bandwidth comes not from its bus width but from its clock speeds, which happen to be a lot more scalable than SDRAM's. And as you improve the clock speed, the latency of RDRAM improves as well. So PC1000 RDRAM should be very interesting to see. And it still has a lot of room for advancement past even that, if anyone puts the effort into it.

And yes, currently RDRAM has a few quirks to work out as you mentioned. But how much has anyone tried to improve RDRAM? No one is putting serious R&D into the technology. Everyone but Intel is dumping all of their resources into SDRAM technology. But what will they do when SDRAM has been maxed out?

Once QDR has been achieved, it'll be at its very end. The speeds of SDRAM are about as fast as they'll ever go; they've gone all the way down to 4.5ns, if not faster. There is no such thing as a 0ns cycle time, so the memory is really quite near its end.
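The cycle-time ceiling is just the reciprocal relationship between access time and clock; the values below are illustrative:

```python
# Clock rate achievable from a given cycle time: f (MHz) = 1000 / t (ns).
# There's a hard floor on t, so single-rate SDRAM clocks can only climb so far.
def clock_mhz(cycle_time_ns):
    return 1000 / cycle_time_ns

for t in (6.0, 5.0, 4.5, 4.0):
    print(f"{t} ns -> {clock_mhz(t):.0f} MHz")
```

Doubling or quadrupling the transfers per clock (DDR, QDR) multiplies those figures, but that trick can only be played so many times.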

That's why I think RDRAM is useful. Not because of Rambus. Not because of its current implementation. But because RDRAM has a lot of untapped potential that could easily be reached if only people would use it more and put more effort into RDRAM research.

I'm looking towards the future. It'll take too long to wait for nanotech memory. And Ferro-electric memory is still largely unknown.

-Despite all my <font color=red>rage</font color=red>, I'm still just a <font color=orange>rat</font color=orange> in a <font color=white>cage</font color=white>.
 
Oh, I would find it highly unlikely that Rambus will win any of its lawsuits. I can't see any judge siding with them on the issue. But then, I don't have any of the documents being presented in the court cases, so who knows? I still doubt they'll win, though.

And as I've said before, even if Rambus does win, other DDR SDRAM manufacturers will just bring up papers showing that they were using the technology before it was patented, and they won't have to pay any royalties.

If Rambus does win though, then chances are Rambus will next go after AMD for their double-pumped bus. Isn't THAT a scary thought? They could also go after Intel's quad-pumped bus, but since Intel and Rambus are 'in bed together', I highly doubt that would ever happen.

Everyone greatly fears Rambus winning because it would mean the practical end of DDR SDRAM and QDR SDRAM. Yet with as much R&D that goes into SDRAM, if even half of that went into RDRAM, I'd bet RDRAM would become a lot more attractive.

So either way the court case ends, no matter what the outcome, I don't see that it would set the computer industry back any more than a year tops, and more than likely only a few months, if at all.

-Despite all my <font color=red>rage</font color=red>, I'm still just a <font color=orange>rat</font color=orange> in a <font color=white>cage</font color=white>.
 
You might be right that if R&D went into RDRAM it would prove to be killer technology. But if Rambus is the ONLY game in town, how much do you think it's going to cost? I can't help but think that Rambus would REALLY put the squeeze on and then we all have to pay. These guys actually make Microsoft look friendly! LOL And I don't even want to think about what would happen if they went after AMD....