[citation][nom]InvalidError[/nom]You do realize that FBDIMMs are actually SERIAL using TMDS signaling, right? This is exactly what I said... serial interfaces have a much easier time getting through sockets and slots at very high speeds because every bit can be locked on individually; there is no need to worry about clock skew between bits the way wide high-speed parallel interfaces do. DDR uses strobe signals to break the 64-bit DIMM bus into separately "clocked" 8-bit groups, but even this starts getting touchy beyond 2GT/s: if you have a 100ps setup and hold time on the bus, you have less than 300ps of wiggle room left for everything else.

But the overall latency x period yields roughly the same effective latency in nanoseconds, and gimping your clock rates achieves nothing more than sacrificing bandwidth. In benchmarks, higher bandwidth with lower effective latency practically always wins even if the latency is numerically higher. Most DDR2-800 (and even DDR2-1066) RAM I can see on Newegg is 5-5-5, not 6 or 7, while DDR3-1066 is overwhelmingly 7-7-7. Let's crunch some numbers...
- DDR2-800-5 = 5 / 800 = 6.25ns effective latency
- DDR3-1066-7 = 7 / 1066 = 6.5ns
- DDR3-1600-9 = 9 / 1600 = 5.625ns
- DDR4-2133-14 = 14 / 2133 = 6.5ns (based on photos of pre-launch packaging)
The ~50% latency-cycle bump at the transition boundary between technologies usually favors the older stuff in benchmarks. DDR3-1600 may have a higher latency cycle count than DDR2 or lower-speed DDR3, but it still has lower effective latency, even without having to pay for premium low-latency bins at higher frequency and latency cycle count.

If the main objective is feeding an IGP, it makes no sense to sacrifice bandwidth for lower latency, and in most CPU benchmarks it makes little to no sense either once the clock rises high enough to offset the latency cycle bumps. For mainstream RAM, whatever the current definition of mainstream may be, we have been around the 6ns mark for most of the past 10 years.

As for "chip count not being a problem", interface width and JEDEC are. Chips with wider data buses draw more current from their IOB power plane and need their internal architecture to be that much wider and likely slower. This means wider chips run significantly hotter (2/4/8X as much stuff happening inside, may actually require a heat spreader) and also means it becomes more difficult to adequately bypass/filter the power supply to support higher speeds. The other thing is that JEDEC defined the DIMM interface as having one strobe signal per 8-bit data group, so having one 8-bit chip per strobe signal (or a pair for double-sided DIMMs) is practically dictated by the standard itself - try finding a 1GB DDR3 DIMM that isn't an 8x128MB configuration even though today's 1GB ICs would make a single-chip DIMM theoretically possible.

Many things sound nice in theory but hit brick walls in practice.[/citation]
Yes, I realize that FB-DIMMs are serial. My point was that solutions to the signaling problem already exist.
Again, the DDR3-1066 modules are all either older modules or poorly binned ones. My point was that yes, at first, new DDR generations tend to have higher latencies, but that gets fixed later on. Furthermore, I repeat what I've already said multiple times: DDR4 is already supposed to have lower latency than DDR3 even when it first reaches consumers, and even if that doesn't happen, it almost certainly will by the time DDR4 gets into APU systems. However, I do admit my mistake in forgetting to put the 5 in the DDR2 numbers; I'll edit that into my previous post, and thanks for pointing it out. It still doesn't change what I said, but it was a mistake of mine nonetheless.
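Just to put numbers on it, here is a quick Python sketch of my own using the same simplified cycles-divided-by-transfer-rate method from your list (the function name is mine; real access latency involves more than CAS, so treat these as ballpark comparisons only):

[code]
# Effective latency estimate: CAS cycles divided by transfer rate (MT/s),
# matching the simplified method used in the quoted figures.
def effective_latency_ns(cas_cycles, transfer_rate_mts):
    return cas_cycles / transfer_rate_mts * 1000  # convert to nanoseconds

for name, cas, rate in [("DDR2-800 CL5", 5, 800),
                        ("DDR3-1066 CL7", 7, 1066),
                        ("DDR3-1600 CL9", 9, 1600),
                        ("DDR4-2133 CL14", 14, 2133)]:
    print(f"{name}: {effective_latency_ns(cas, rate):.2f} ns")
# -> 6.25 ns, 6.57 ns, 5.63 ns, 6.56 ns
[/code]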
You completely ignored the point of what I said about dropping frequency to tighten timings for an apples-to-apples comparison, and took it to an irrational conclusion that had nothing to do with what I was saying. That suggestion was only about making the comparison apples to apples; I never said it would be practical for anything more than that. See the example below.
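To spell out the apples-to-apples idea with a made-up bin (the CL6 at 1066 part is hypothetical, purely for illustration, and uses the same simplified math as above):

[code]
# Drop the transfer rate and tighten the timings proportionally,
# and the effective latency stays roughly the same.
print(9 / 1600 * 1000)  # DDR3-1600 CL9            -> ~5.63 ns
print(6 / 1066 * 1000)  # hypothetical 1066 at CL6 -> ~5.63 ns
[/code]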
I know how timings and frequency relate to latency. I've lectured people about it before (several times), and even Crashman pointed out that he knew I understood this.
The chips on modern memory modules are all 32-bit-wide chips regardless of how many there are on each module, so talking about differing widths changing things when they are all the same width doesn't make any sense. Differing chip counts on modules exist to provide different module capacities and for similar reasons. Furthermore, if you look at modules that historically had lower chip counts, such as eight or four, they were always capable of higher performance than the full sixteen-chip modules. In fact, some early DDR3 motherboards required high-frequency modules to have only eight chips, specifically because that was easier on the memory controller.
Having a single-chip module is not theoretically possible because it takes two chips to make the 64-bit connection. Heck, there might be reasons why there need to be four chips, seeing as I've never seen a two-chip memory module. Also, I've had four- and eight-chip modules of DDR2 and DDR3; some of them had heat spreaders and some didn't, but none of them needed one except when overclocked significantly beyond their rated specs.
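For what it's worth, the chip count per rank follows directly from the 64-bit bus width and whatever data width the chips actually have; here is a neutral sketch with the chip width left as a parameter, since that width is exactly the point where we differ:

[code]
# A 64-bit DIMM rank needs enough chips to cover the full data bus.
def chips_per_rank(chip_width_bits, bus_width_bits=64):
    return bus_width_bits // chip_width_bits

for w in (8, 16, 32):
    print(f"x{w} chips: {chips_per_rank(w)} chips per 64-bit rank")
# -> x8: 8 chips, x16: 4 chips, x32: 2 chips
[/code]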
Many things may hit brick walls in practice, but we tend to overcome such obstacles; we adapt, and that is how technology advances. For example, like I suggested earlier, we could borrow the concept of multiple serial lanes instead of a true parallel connection, as PCIe does. With sixteen lanes, a PCIe 3.0 x16 slot has a theoretical bandwidth of nearly 16GB/s in each direction, and its practical bandwidth seems capable of getting close to that, from what I've seen of PCIe 3.0 x16 benchmarks. Like you suggested earlier, we could also use eDRAM caches, although I suspect that 1GB on current fabrication processes is too optimistic: although eDRAM is much denser than the densest SRAM, it is still usually several times less dense than the DRAM in memory modules, according to what I've read about it.
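A quick back-of-the-envelope check on that PCIe figure (my own sketch; 8 GT/s per lane and 128b/130b encoding are the PCIe 3.0 spec values):

[code]
# PCIe 3.0 x16: 8 GT/s per lane, 128b/130b line coding, per direction.
lanes = 16
gt_per_s = 8e9            # bit transfers per second per lane
encoding = 128 / 130      # 128b/130b coding efficiency
bytes_per_s = lanes * gt_per_s * encoding / 8
print(bytes_per_s / 1e9)  # ~15.75 GB/s per direction, before protocol overhead
[/code]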
Again, you speak of mainstream RAM. Again, I reply that the memory we were specifically talking about at the start of this, DDR3-2400, is not mainstream, so mainstream memory is not the point of this discussion.