Raystonn

Distinguished
Apr 12, 2001
2,273
0
19,780
Thanks to some new <A HREF="http://www.tomshardware.com/technews/index.html" target="_new">R&D advancements</A>, it looks like we will be seeing some massive bandwidth in just a couple years' time. I will put forth a technical explanation below.

Current forms of RDRAM are a DDR (Double Data Rate) technology. For whatever memory clock is used, they transmit twice per clock. The i850 chipset provides a 400MHz memory clock (FSB). The RDRAM modules transmit twice per clock for an effective 800MHz rate. This provides 1.6GB/s of memory bandwidth per 16-bit channel. The i850 chipset provides two of these channels to obtain the advertised 3.2GB/s of memory bandwidth.

A new <A HREF="http://www.tomshardware.com/technews/index.html" target="_new">Octal Data Rate</A> (ODR) technology has been developed that can transmit 8 times per clock. If you kept the same 400MHz FSB clock, then this would be an effective 3200MHz (3.2GHz) rate. Note that this is still on a per-16bit-channel basis. This provides 6.4GB/s of memory bandwidth per 16-bit channel. On a dual-channel chipset with a 400MHz FSB this would provide 12.8GB/s of memory bandwidth.
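As a quick sanity check of the figures above, here is the arithmetic in a few lines of Python (a sketch; 1GB/s here means 10^9 bytes/s):

```python
def bandwidth_gbs(clock_mhz, transfers_per_clock, width_bits, channels=1):
    """Peak memory bandwidth in GB/s (1 GB/s = 1e9 bytes/s)."""
    bytes_per_transfer = width_bits / 8
    return clock_mhz * 1e6 * transfers_per_clock * bytes_per_transfer * channels / 1e9

# i850: 400MHz clock, DDR (2 transfers/clock), 16-bit channels
print(bandwidth_gbs(400, 2, 16))      # 1.6 GB/s per channel
print(bandwidth_gbs(400, 2, 16, 2))   # 3.2 GB/s dual-channel

# ODR: 8 transfers/clock on the same 400MHz clock
print(bandwidth_gbs(400, 8, 16))      # 6.4 GB/s per channel
print(bandwidth_gbs(400, 8, 16, 2))   # 12.8 GB/s dual-channel
```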

Now this might not sound very impressive yet. After all, this is over 2 years away. We should get more than 4 times current bandwidth with 2 years of research and development. This is where the fun starts. 16-bit RDRAM channels will be a thing of the past in the second half of 2002. We will be using 32-bit channels by then, and 64-bit channels by 2004. In addition to this, by the second half of 2002 RDRAM platforms will be using a 533MHz FSB clock (PC1066). By 2004 they will be using a 600MHz FSB clock (PC1200).

Couple the ODR technology with dual 64-bit channels running off a 600MHz FSB clock and you get 76.8GB/s of memory bandwidth. That kind of bandwidth in just over 2 years is pretty nice, is it not?
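That 76.8GB/s figure checks out, as a quick sketch (assuming 8 transfers per clock and 1GB/s = 10^9 bytes/s):

```python
# ODR RDRAM as projected for 2004: 600MHz clock, 8 transfers/clock,
# dual 64-bit channels
clock_hz = 600e6
transfers_per_clock = 8       # Octal Data Rate
bytes_per_transfer = 64 // 8  # 64-bit channel = 8 bytes per transfer
channels = 2

total_gb_s = clock_hz * transfers_per_clock * bytes_per_transfer * channels / 1e9
print(total_gb_s)  # 76.8
```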

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

Raystonn

Distinguished
Apr 12, 2001
2,273
0
19,780
If you turn this thread into an Intel vs. AMD war, I am going to find you and hang you up by whatever genitals you have left. I want to actually discuss memory technology and how it relates to our processors. I do not want to discuss public relations between processor companies. That got very old very fast. I want you to know that yes, I obviously do support Intel. But I do not support trolling.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

somerandomguy

Distinguished
Jun 1, 2001
577
0
18,980
RDRAM at 600MHz will have a lower latency than RDRAM at 400MHz. How will the 64-bit data path and the octal data rate affect latency?
Why does RDRAM have a higher latency than SDRAM in the first place?

"Ignorance is bliss, but I tend to get screwed over."
 

zengeos

Distinguished
Jul 3, 2001
921
0
18,980
Ray, how will this affect latency... or has this been researched yet?

Mark-

When all else fails, throw your computer out the window!!!
 

zengeos

Distinguished
Jul 3, 2001
921
0
18,980
No fair! Your post beat mine!

grumble..

mutter..

danged slow cable internet!


When all else fails, throw your computer out the window!!!
 

Raystonn

Distinguished
Apr 12, 2001
2,273
0
19,780
"How will the 64 bit data path, and the octal data rate affect the latency?"

RDRAM's latency decreases as it ramps up in speed. The ODR (Octal Data Rate) will likely further reduce latency. The 64-bit data path will not affect latency at all. It is much like adding multiple channels. It increases bandwidth, but latency for accessing each channel remains the same.


"Why does RD RAM have a higher latency than SD RAM in the first place?"

This is due mostly to the overhead of laying out the circuitry in a serial nature. SDRAM uses parallel circuit pathways. However, these parallel circuit pathways do not scale well and latency actually increases as you significantly increase bandwidth, such as through DDR technology. PC1066 RDRAM has about the same latency as PC2100 DDR SDRAM. As both increase further in speed RDRAM will continue to attain lower latency and will surpass SDRAM in latency performance as well as bandwidth.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

somerandomguy

Distinguished
Jun 1, 2001
577
0
18,980
What about cost?
It's only natural to assume that this new RAM will be more expensive for at least a little while, but what about the motherboards designed to use it?
Will moving to 32-bit and then 64-bit data paths mean that the motherboards will be 6-layer, or even 8-layer designs?

"Ignorance is bliss, but I tend to get screwed over."
 

AMD_Man

Splendid
Jul 3, 2001
7,376
2
25,780
Where is your proof Intel_inside? Within 2 years, many things can happen. AMD motherboards might start supporting RDRAM. 2 years is more like 2 centuries for computer technology. Countless new technologies may be released within the next 2 years. In two years, Intel and AMD might not even exist (possible, but highly unlikely). The computer industry is moving so quickly it's hard to predict more than a few months of progress.

AMD technology + Intel technology = Intel/AMD Pentathlon IV; the <b>ULTIMATE</b> PC processor
 

Raystonn

Distinguished
Apr 12, 2001
2,273
0
19,780
"It's only natural to assume that this new RAM will be more expensive for at least a little while"

Every new technology starts out at a higher price. This is the nature of technology.


"Will moving to 32 bit and then 64 bit data paths mean that the motherboards will be 6 layer, or even 8 layer designs?"

Due to the serial nature of RDRAM, the memory circuitry takes up much less space. This can all be accomplished on 4-layer PCB motherboards.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

somerandomguy

Distinguished
Jun 1, 2001
577
0
18,980
I think the industry is long overdue for a large increase in memory bandwidth, and I imagine that Intel has the marketing muscle to push software developers toward taking advantage of this.
While I’m not particularly fond of Rambus’ marketing and legal tactics, it looks like they’ll be developing some very useful technology over the next few years, and it will be interesting to see how DRAM technologies compete with that.
It will also be interesting to see if any graphics card companies adopt Rambus technologies, since they are always striving for lower latencies and higher bandwidth.
Could you imagine NVidia making a deal with Rambus similar to the one Intel has?

"Ignorance is bliss, but I tend to get screwed over."
 

Raystonn

Distinguished
Apr 12, 2001
2,273
0
19,780
"I think the industry is long overdue for a large increase in memory bandwidth"

Agreed. This is the main reason that we need GPUs at all today. If we had sufficient memory bandwidth to the main processor, it could easily handle all your 3D graphics needs. While it is nice to have more computing power in the form of a GPU, the main reason we use them is not any remarkable processing performance delivered by them. We use them because they can be closely tied to video memory and have available huge amounts of memory bandwidth when rendering to video memory. This is because the GPU and memory reside on the same card and have a dedicated high-speed bus to each other.

We all know the limiting factor of the video subsystem is memory bandwidth. If we eliminated the memory bandwidth bottleneck in our systems, we would have vast amounts of processing power available, using our GHz CPUs to render beautiful images, even in games. The concept of a separate GPU would fall by the wayside, because these GPUs are actually pitifully slow compared to our main system processors. The only benefit is the proximity to video memory. Without a high-speed memory bus connecting the CPU to whatever memory is to be used for the display, you are required to use another processor, such as a GPU on a video card with dedicated video RAM.

Imagine the complex scenes that could be created in real-time using our modern processors if they were given enough memory bandwidth to act as the GPU. Games programmers roll over and beg when someone drops them the bone of being able to have a programmable vertex shader on the GPU. Well <b>everything</b> would be programmable if we used our main CPUs. You could literally do <b>anything</b>.

I would love to see fully ray-traced scenes in games. I would love to see a game world that made me think I was looking out a window. I am pretty sure everyone else would love these things as well. But such complex algorithms are not going to be forthcoming from video card companies. They do not specialize in computational power. They specialize in delivering bandwidth. We should look to the main CPU manufacturers in the industry for our computational power to be able to do these things.


"Could you imagine NVidia making a deal with Rambus similar to the one Intel has?"

Yes I could, but I would rather see GPUs replaced by our CPUs with the coming of huge amounts of memory bandwidth. nVidia has been dreaming lately about completely replacing the main processor in your system as the central component. They need a wake-up call. Their 'processors' are pitiful compared to those of Intel and AMD.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

Ncogneto

Distinguished
Dec 31, 2007
2,355
53
19,870
Correct me if I am wrong, but wouldn't an octal RDRAM system need to coincide with an octal memory controller (chipset) as well? Any info on such a chipset? And then, to achieve this bandwidth, how do you see the processor-to-memory-controller bus operating (FSB octal-pumped as well)?

Video editing?? Ha, I don't even own a camera!
 
G

Guest

Guest
Mr Raystonn sir,

Yes, that is a hefty bandwidth figure you quote, very nice indeed. It seems, though, as has been shown in many cases while simulating today's available software through synthetic benchmarks, that the lower latency of the various SDRAMs often allows them to perform better in spite of their lower available bandwidth. This is demonstrated to be fairly accurate by comparing real-world application performance. The reason I bring this up is that in previous posts you mention that some of the new features of this newer RDRAM allow for lower and lower latencies, and that raises a question in my mind. Do you have any opinion about whether any future incarnation of RDRAM will ever be able to claim the lowest latency figures with respect to the SDRAM types (or whatever competing technologies exist) available at the time? If so, at what point do you predict this milestone will be achieved?

Hopefully you see why I ask this.

Also, does anybody know if the agreements between Intel and Rambus disallow chipset makers from supporting RDRAM/Athlon chipsets and motherboards?

Edit: oops, punctuation errors. (Edited by knewton on 10/22/01 11:41 PM.)
 

somerandomguy

Distinguished
Jun 1, 2001
577
0
18,980
That’s a very interesting way of looking at it. As the power of CPUs increases, it makes sense to move the work done by peripheral components to the CPU, saving costs.
I can imagine that we will eventually reach a point where the quality of graphics in games surpasses our ability to perceive it, and it therefore becomes pointless to increase the power of graphics cards. I can also imagine that the power of CPUs will eventually so completely surpass the processing power required for this that there will be no reason not to do this processing on the CPU.
I’ve always felt that it was only a matter of time before all of the work done in a computer is done on a single chip. The question is, how long will it take to get there?
76.8GB/s of bandwidth is certainly a step in the right direction, but it will take even more than that, I think, if we are going to render scenes like <A HREF="http://www.irtc.org/ftp/pub/stills/2001-06-30/warm_up.jpg" target="_new">this</A> in real time. That image took 100 hours to render on a 1.4GHz Athlon with 1GB of DDR RAM.

"Ignorance is bliss, but I tend to get screwed over."
 

Raystonn

Distinguished
Apr 12, 2001
2,273
0
19,780
"wouldn't an octal Rdram system need to coincide with an octal memory contorller (chipset) as well"

Yes, the ODR technology will be out in 2002. By that time there will be a supporting chipset.


"how do you see the processor to memory contoller bus operating ( fsb, octal pumped as well?"

The FSB would likely be operating at 600MHz, quad pumped off a 150MHz external clock.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

Ncogneto

Distinguished
Dec 31, 2007
2,355
53
19,870
"By 2004 they will be using a 600MHz FSB clock (PC1200)."

This does not seem to be a logical progression; increases are typically by factors of 33 1/3MHz. But then again, by 2004 PCI and even AGP devices may be obsolete, so who knows? However, major changes would need to be made to the P4 to use this bandwidth.

For instance, given your projection:

The Pentium 4's system bus would be clocked at 150MHz and 64 bits wide, but 'quad-pumped', using the same principle as AGP 4x. Thus it could transfer 8 bytes * 150 million/s * 4 = 4,800MB/s.

So you would need to radically change the P4, either by adding another 64-bit pathway from the CPU to the memory controller (as the Alpha does) for a CPU-to-memory bandwidth of 9,600MB/s, or by octal-pumping (that just does not sound right) the CPU bus at 150MHz. In either case, wouldn't you have a completely different CPU?
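A quick check of the arithmetic above (a sketch; MB/s here means 10^6 bytes/s):

```python
# Pentium 4 FSB as projected: 64-bit (8-byte) bus, 150MHz external clock, quad-pumped
bus_bytes = 8
clock_mhz = 150
pump = 4

fsb_mb_s = bus_bytes * clock_mhz * pump
print(fsb_mb_s)      # 4800 (MB/s)

# Doubling the path (a second 64-bit pathway) or octal-pumping doubles it
print(fsb_mb_s * 2)  # 9600 (MB/s)
```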

Video editing?? Ha, I don't even own a camera!
 

Raystonn

Distinguished
Apr 12, 2001
2,273
0
19,780
"Do you have any opinions about whether RDRAM in any future incarnations will ever be able to also claim the lowest latency figures with respect to available SDRAM (or whatever competing technologies exist at the time) types at that time"

In the second half of 2002 PC1066 RDRAM will become the standard, with overclocking going somewhat beyond to probably around PC1150 or so. At that point it will have lower latency than the DDR SDRAM alternative.


"does anybody know if the agreements between Intel, and Rambus disallow chipset makers to support RDRAM/Athalon chipsets and motherboards"

There are no restrictions on the part of AMD or any other companies. They are free to license the technology just as Intel has done. Be aware that when they decide to do so it will take a considerable amount of time to ramp up support and get everything bug-free. New technologies take a while to perfect. [lowblow](Though VIA can just skip that part. ;)[/lowblow]

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

Raystonn

Distinguished
Apr 12, 2001
2,273
0
19,780
Remember that if you follow the more common interpretation of Moore's Law you get a 100-fold increase in performance every 10 years.
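That reading follows from a doubling roughly every 18 months (a sketch; the 18-month period is the common interpretation, not a figure from this thread):

```python
# Doubling every ~18 months compounds to roughly 100x over a decade
months = 10 * 12
doubling_period_months = 18
factor = 2 ** (months / doubling_period_months)
print(factor)  # ~101.6, i.e. roughly a 100-fold increase per decade
```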

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

somerandomguy

Distinguished
Jun 1, 2001
577
0
18,980
"Remember that if you follow the more common interpretation of Moore's Law you get a 100-fold increase in performance every 10 years."

Speaking of Moore's Law, Intel has always followed it pretty closely, haven't they?
Rambus will break it completely if they improve memory bandwidth 24-fold in just three years. How likely do you think it is that they will manage the same again after 2004?

"Ignorance is bliss, but I tend to get screwed over."
 

Ncogneto

Distinguished
Dec 31, 2007
2,355
53
19,870
Actually, AMD has had a license to use RDRAM for some time now; it's just that they have chosen not to use it. Your conjecture calling for RDRAM to be the standard in just over 8 months is a little overoptimistic, especially with Intel just releasing the i845.

"At that point it will have lower latency than the DDR SDRAM alternative."

Only in comparison to current DDR, but DDR will ramp up in speed as well, getting a double-pumped bus at 166MHz and then eventually a quad-pumped bus.

Myself, I was hoping for a completely different solution by 2004. Perhaps magnetic RAM technology?

Video editing?? Ha, I don't even own a camera! (Edited by ncogneto on 10/23/01 00:25 AM.)
 
G

Guest

Guest
Well, well, well. If this is all true, then things are looking rather grim for SDRAM. I just can't imagine what could be done to make it compete with numbers like these. Low cost can only take you so far. Oh well, it kicked some butt back in the day.

"[lowblow](Though VIA can just skip that part. ;)[/lowblow]"

Not really sure if you are referring to the fact that they seem to be immune to the need to license technologies, or the fact that they seem to be immune to the need to perfect their new technologies before releasing them. heh heh
 

Raystonn

Distinguished
Apr 12, 2001
2,273
0
19,780
The FSB would indeed need to be increased to make use of all of this bandwidth. However, this could easily be done by dropping back down to a low multiplier and using a very high FSB clock rate. That 600MHz FSB figure I gave was a bit inaccurate: it assumed PC1200 RDRAM on the same dual 16-bit channels (or a single 32-bit channel) with the same multipliers, which would only achieve 4.8GB/s of memory bandwidth by itself.

To properly use all 76.8GB/s of memory bandwidth would require a 64-bit FSB with an effective rate of 9.6GHz. The Pentium 4 has a 64-bit FSB that currently operates at 400MHz. If instead of a multiplier a divider were implemented, we could set the CPU's divider to 3 and have it running at 3.2GHz on a 9.6GHz FSB. Eventually the Pentium 4's core is expected to scale beyond 10GHz, so the divider may be unnecessary, depending on how long it takes to get there. This would unlock the full potential of 76.8GB/s of memory bandwidth.

Once we move to a new core (the Pentium 5), we can implement wider FSBs. The 64-bit bus can be moved up to 128 or 256 bits, which would cut the FSB clock requirements by a factor of 2 or 4, respectively. Now you may be questioning the effectiveness of a processor with what seems like more memory bandwidth than it can handle. I assure you this is not the case. With more and more SIMD instructions being introduced, and most of them becoming standard among all competitors, a couple of CPU clocks are capable of accessing a vast amount of memory. You will soon see FSBs with a higher clock rate than the processor.
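The clock requirements implied by those bus widths work out as follows (a quick sketch of the effective FSB rate needed to carry 76.8GB/s at each width):

```python
# Effective FSB rate (GHz) required to move 76.8GB/s at each bus width
target_gb_s = 76.8
required = {}
for width_bits in (64, 128, 256):
    bytes_per_transfer = width_bits // 8
    required[width_bits] = target_gb_s / bytes_per_transfer

print(required)  # {64: 9.6, 128: 4.8, 256: 2.4}
```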

I envision a time when the SVGA port for the monitor is attached to the motherboard and the CPU uses local memory as video memory with its massive bandwidth. All 3D processing would be done by the CPU (and much faster as well.)

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

Raystonn

Distinguished
Apr 12, 2001
2,273
0
19,780
"a little over optimistic, especially with Intel just releasing the I845"

The i845 was released to cover the lower price points (i.e. those who complain about high prices). It is not intended ever to be the best-performing platform.


"DDR will ramp in speed as well getting a double pumped bus of 166 then eventually a quad pumped bus."

That is nice and all but it will not beat the bandwidth available with RDRAM. Additionally, every time they bump up the speed on SDRAM its latency increases. It will quickly fall out of fashion as it moves beyond its original design specifications. It is time for something new.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

Raystonn

Distinguished
Apr 12, 2001
2,273
0
19,780
"Oh well it kicked some butt back in the day."

So did EDO RAM... :)


"not really sure if you are referring to the fact that they seem to be imune to the need to license techs, or the fact that they seem to be immune from the need to perfect their new technologies before releasing."

I had the first in mind really. I believe most would rate them as the developer of the buggiest chipsets.

-Raystonn



= The views stated herein are my personal views, and not necessarily the views of my employer. =