Super RIMM or Super DDR

juin

So there is RAM faster than PC800 and faster than PC2100.
I'm talking about PC1078 and PC2700 (I think).
But there is no FSB that can support that speed; the fastest is the current RAMBUS. Is there something I don't know? Tell me.
 

Raystonn

"i talk about pc1078 and pc 2700"

Actually, you're probably talking about PC1066 RDRAM. Dual channel RDRAM systems can get 4.26GB/s of memory bandwidth from that. Dual channel DDR is not currently in development. PC2700 DDR will give you 2.7GB/s of memory bandwidth.

"There is no FSB that can support that speed the fastest is the actual RAMBUS"

The P4 will support an FSB speed (133*4 = 533, doubled by RDRAM's dual data rate to 1066) capable of using PC1066 RDRAM before the end of this year. PC1066 is scheduled by Samsung to go into full production in Q3 of this year.
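
To sanity-check those figures, here is a minimal sketch (assuming 16-bit RDRAM channels and a 64-bit quad-pumped FSB, which are not spelled out in this thread):

```python
# Rough sanity check of the numbers above. Assumptions (mine, not from
# the thread): each RDRAM channel is 16 bits wide, the P4 FSB is 64 bits
# wide and quad-pumped, and DDR "PC" ratings name peak MB/s directly.

def rdram_gbps(mega_transfers, channels=1, width_bits=16):
    # peak GB/s = transfers/s * bytes per transfer * number of channels
    return mega_transfers * (width_bits / 8) * channels / 1000

def fsb_gbps(base_mhz, pump=4, width_bits=64):
    # quad-pumped bus: four transfers per clock across a 64-bit path
    return base_mhz * pump * (width_bits / 8) / 1000

print(rdram_gbps(1066, channels=2))  # dual-channel PC1066 -> ~4.26 GB/s
print(fsb_gbps(133.33))              # 133x4 FSB           -> ~4.27 GB/s
# PC2700 DDR is named for its peak rate: 2.7 GB/s on a single channel.
```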

-Raystonn

= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

bhc

Raystonn -- I am curious about those claims of max bandwidth. Maybe you or someone else can clarify them. Under normal operation, the CPU can only fetch a small chunk of data at a time because of the L2 cache. So, after an initial access time, the data start to flow in. For DDR, I suppose you can get the max transfer rate since it is a single channel. However, for dual channel RDRAM, if the data you want are stored in one place, then you can only get the max transfer rate of a single channel. Is this right? I guess my point is that dual channel does not equal doubling the bus width. Can someone comment on this? Thanks.
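
To illustrate the worry with made-up numbers (the cache-line size, latency, and peak rates below are assumptions, not measurements):

```python
# Toy model: the CPU fetches one cache line at a time, so the initial
# access latency eats into the peak transfer rate. All figures here are
# illustrative assumptions.

LINE_BYTES = 64   # assumed fetch granularity (one cache line)
LATENCY_NS = 50   # assumed initial access time before data flows

def effective_gbps(peak_gbps):
    # 1 GB/s is one byte per nanosecond, so bytes / (GB/s) gives ns
    transfer_ns = LINE_BYTES / peak_gbps
    return LINE_BYTES / (LATENCY_NS + transfer_ns)

print(effective_gbps(2.7))   # single-channel PC2700 peak -> ~0.87 GB/s
print(effective_gbps(4.26))  # dual-channel PC1066 peak   -> ~0.98 GB/s
# With transfers this short, doubling the peak barely moves the result,
# which is exactly the concern raised above.
```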

**Spin all you want, but we the paying consumers will have the final word**
 

74merc

RDRAM only works in pairs on the 850.
My understanding is that the data are spread over the two RIMMs somewhat evenly, kind of like RAID striping.
Otherwise, dual channel RDRAM would be utterly useless.

----------------------
why, oh WHY, is the world run by morons?
 

bhc

Thanks. Do you know how evenly the data are divided? Is it alternate bytes or larger blocks? It seems that larger blocks would be easier to implement, but they may not give the max transfer rate. What do you think?

**Spin all you want, but we the paying consumers will have the final word**
 

Raystonn

I haven't looked up the specifics recently, but I believe each channel provides 32 bits of the 64-bit data delivered by the memory bus. Thus, memory is interleaved every 32 bits. It would be done in a similar fashion for dual channel DDR SDRAM.
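
A minimal sketch of that mapping, assuming the 32-bit granularity described here (my reading, not a published i850 spec):

```python
# Two channels, interleaved every 32 bits (4 bytes), per the description
# above. The granularity is an assumption, not a datasheet value.

INTERLEAVE_BYTES = 4  # 32 bits served by each channel per beat

def channel_for(address):
    # alternate channels every 4-byte chunk of the physical address space
    return (address // INTERLEAVE_BYTES) % 2

# A 64-bit transaction pulls 4 bytes from each channel simultaneously:
for addr in range(0, 16, 4):
    print(f"bytes {addr}-{addr + 3} -> channel {channel_for(addr)}")
```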

-Raystonn

= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

kurokaze

Correct me if I'm wrong, but I believe the next iteration of DDR was PC2400?

Intel Components, AMD Components... all made in Taiwan!
 

juin

Thank you Raystonn, most of my questions have been answered by your reply.

Someone said that 5.4GB/s of bandwidth cannot be obtained because of the L2 cache, but in "Northwood" the cache is 512KB, doubled from 256KB; maybe that is the key. I also wonder why the first P4 did not use an FSB at 133x4. The P3 already ran at 133, so why not quad-pump the old 133 bus? I remember when they cut the L2 cache in half to get better speed because it was not totally used; I wonder if the 512KB of L2 cache will be used to its full potential.
 
Guest
It had to do with the fact that at 100x4 the FSB and memory bandwidth were in perfect harmony (both at 3.2GB/s) when using PC800 RDRAM. With Northwood they are upping the bus to 133x4, but now they have the new PC1066 RDRAM, so both buses on Northwood will run at 4.2GB/s. That's exactly what Athlon did: its bus was originally 100x2, and later they upped it to 133x2.
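
The same arithmetic as a quick sketch (the 64-bit FSB width, 16-bit RDRAM channels, and 64-bit Athlon EV6 bus are my assumptions, not figures from this post):

```python
# FSB and memory bandwidth match up generation by generation.
# Note: RDRAM "PC" ratings are already effective transfers/s, so pump=1.

def bus_gbps(mhz, pump, width_bits):
    return mhz * pump * (width_bits / 8) / 1000

# Willamette: 100x4 FSB vs dual-channel PC800
print(bus_gbps(100, 4, 64), 2 * bus_gbps(800, 1, 16))      # 3.2 vs 3.2
# Northwood: 133x4 FSB vs dual-channel PC1066
print(bus_gbps(133.33, 4, 64), 2 * bus_gbps(1066, 1, 16))  # ~4.26 vs ~4.26
# Athlon's EV6 bus: 100x2 at launch, 133x2 later
print(bus_gbps(100, 2, 64), bus_gbps(133.33, 2, 64))       # 1.6 -> ~2.13
```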
 

rcf84

Really, Intel is forcing this 512KB L2 cache because the current P4 FPU sucks. 512KB helps the P4 crunch more numbers. Good move, Intel. Besides, it looks like RDRAM is being abandoned by Intel by 2003.

Nice Intel users get a Cookie.... :smile: Yummy :smile:
 

Raystonn

"Really intel is forcing this 512kb L2 cache. Cuz the current p4 FPU sucks."

The Pentium 4 was originally designed as a 0.13 micron processor, with the full FPU and 512KB L2 cache. The addition of these things upon the move to 0.13 micron is not a reaction, but the original plan.

"RDRAM is being abandon by intel it looks like by 2003."

RDRAM will be supported for as long as it has great performance. There are currently no plans to drop support as it is presently the best solution in terms of overall performance.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

HolyGrenade

The nVidia Crush chipset will have dual channel DDR. It will also have HyperTransport connecting the northbridge and the southbridge.


<font color=red>"My name is Ozymandias, King of Kings:
Look on my works, ye Mighty, and dispair!"</font color=red>
 
Guest
> [...] Dual channel DDR is not currently in development. [...]

Dual channel DDR *is* in development. And that development is almost complete now, according to some sources. nVidia's Crush chipset is to be announced soon.

Leo
 

Raystonn

Care to post a link that shows dual-channel DDR as 'in development'?

-Raystonn

= The views stated herein are my personal views, and not necessarily the views of my employer. =
 
Guest
Why, of course. Here is one link (it even shows a picture of an upcoming MSI mobo based on Crush):

http://www.viahardware.com/winhec3.shtm

A new leak suggests nVidia's Crush chipset will be announced on June 4:

<A HREF="http://www.theinquirer.net/24050103.htm" target="_new">http://www.theinquirer.net/24050103.htm</A>

The guys at TheRegister are second-guessing when they say one of the memory channels will be dedicated to the onboard GPU. The first link clearly shows that the GPU *can* use the retail MX's full bandwidth because of the extra channel (but doesn't have to; I'm sure it could even be disabled for an added AGP card).

I have also seen news about Abit's Crush-based mobos.

Leo
 

Raystonn

That does not show a dual channel memory configuration for use with the processor. In fact, it clearly shows a limit of 2.1GB/s for the FSB, which would mean single channel. This is further supported by the odd number of DIMM slots.

They do appear to have something special going on between system memory and the graphics chipset. However, this will only help in applications that are video memory bandwidth intensive. It will be of no help in normal applications that work on large sets of data with the CPU. Such data passes through the FSB from memory to processor and back. Basically, it looks like they have one channel of memory for the CPU and one for the GPU.

All video cards today that are not integrated chipsets on a motherboard have their own dedicated memory channel between the GPU and video memory. It is all on the video card. This is why, traditionally, add-on video cards with onboard memory have been faster than integrated graphics chipsets. This new memory channel we are seeing will speed up integrated graphics chipsets (video card built into the motherboard), but add-on cards will continue to be faster due to having their own dedicated memory without any concurrency issues with the CPU.
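
For a rough sense of the concurrency cost, here is a sketch; the channel rating, resolution, and refresh rate are hypothetical numbers I chose, not figures from the article:

```python
# If an integrated GPU shares one channel with the CPU, even idle desktop
# display refresh is drawn from that channel. All numbers are assumptions.

CHANNEL_GBPS = 2.1  # one channel of PC2100 DDR

def scanout_gbps(width, height, bytes_per_pixel, refresh_hz):
    # bandwidth consumed just to repaint the screen from the framebuffer
    return width * height * bytes_per_pixel * refresh_hz / 1e9

display = scanout_gbps(1024, 768, 4, 85)  # ~0.27 GB/s for scanout alone
print(f"shared channel leaves the CPU ~{CHANNEL_GBPS - display:.2f} GB/s")
# With one channel per device, the CPU keeps the full 2.1 GB/s and the
# GPU contends only with its own rendering traffic.
```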

Thus, for the power user who wants the best performance, the add-on AGP card is still faster than this new integrated chipset. And since we'll be using the add-on cards, and this memory channel is for use only with the built-in graphics chipset, it's going to remain completely unused by the power-user/enthusiast segment.

-Raystonn

= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

HolyGrenade

How come you guys like speculating so much, and do it under the false pretense of having facts?


<font color=red>"My name is Ozymandias, King of Kings:
Look on my works, ye Mighty, and dispair!"</font color=red>
 
Guest
I guess it remains to be seen. I understood the first article as showing that since there are gonna be two memory channels, the video card could use bandwidth comparable to a normal GeForce MX (that is, 2.1GB/s). If there were only one channel, it would always have to be less than that.

But I didn't see any indication that it would be a dedicated channel for the video card only. According to the conceptual diagram, anyway. Let's wait and see.

Leo
 

jeffg007

"That does not show a dual channel memory configuration for use with the processor. In fact, it clearly shows a limit of 2.1GB/s for the FSB, which would mean single channel. This is further supported by the odd number of DIMM slots."

You lie through your teeth.
Here is what the article stated:
"The big surprise here is the memory bandwidth: A whopping 4.2GB/s of DDR goodness. Obviously, NVidia is going to run dual channels here, unless they feel like trying to develop a new module standard."

You say "This is further supported by the odd number of DIMM slots."

the article:
"So how is NVidia running 2 channels with an odd number of DDR DIMM's?
Notice how one DIMM is placed separately from the other two on MSI's Crush motherboard? We believe that NVidia will be providing a dedicated DIMM for frame buffer for the onboard GeForce2 MX, and that their integrated chipset won't be SMA at all. Thus, main memory and system memory will NOT be shared."

Which means that the odd DIMM is used for the integrated video core only. Pretty sweet if you ask me!

You said:
"Thus, for the power-user that wants the best performance, the add-on AGP card is still faster than this new integrated chipset."

OK, I'll agree to this, but if they make an integrated GeForce 3 core, that board will smoke. And from what I hear, the Xbox will have this setup. Too bad they are using a Pentium III.

Did you even read the article or just look at the pics?

jeff

Tom's Hardware, the march to the top 100 teams! Ranked 182: http://setiathome.ssl.berkeley.edu/stats/team/team_type_4.html
 

Raystonn

No false pretenses here. "Only the facts, ma'am."

-Raystonn

= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

Raystonn

"You lie out your teeth."

No, I quoted the article, as did you. Your own quotes support what I said.


"Which means that the odd dimm is used for the integrated video core only. Pretty sweet if you ask me!"

Not too sweet if you ask me. This gives you one channel for the graphics chipset and one channel for the CPU. That leaves you with 2 DIMM slots for standard system memory. Surely you don't think they'd add yet another channel for the standard FSB memory accesses by the CPU? Then you'd be required to fill both slots and have no room for upgrades later. (You must have one memory module per channel.) This further supports there being only one memory channel going to/from the actual CPU, while the other is left to the graphics chipset.


"Did you even read the article or just look at the pics?"

Please follow your own advice.

-Raystonn

= The views stated herein are my personal views, and not necessarily the views of my employer. =