RAM can be VERY finicky, and in many situations not using a matched set results in a system that either is not stable or doesn't boot at all. That's what he was referring to. If you want your best chance of it working, and of not being stuck with RAM you can't use, you need to either buy a matched set or, at a minimum, buy a DIMM identical to what you have. For example, if you try to run a Samsung and a Hynix DIMM in the same system, in most cases it won't work at all.
I have multiple anecdotal examples, as well as links to Intel's and AMD's own statements, that contradict this claim. Even my signature system, "Gramps", is a counterexample. Yes, motherboards/chipsets and CPU IMCs can be finicky at times, but that is not necessarily the fault of the RAM. In fact, I know it is the CPU's IMC that determines compatibility, and has been since the Core i7-900s were released. The Core i7-900s/Xeon X55xx/X56xx never claim to support addressing any RAM stick above 4GB (all consumer boards claim 4GB/slot), yet Gramps works just fine with 8GB sticks. The requirement for it to work, after cross-referencing the technical documents for the Xeons' IMC (the same IMC as in the i7-900s), is that the composition cannot exceed 512Mb per chip. Thus, dual-rank sticks of 512Mbx8 composition (2x8 = 16 chips total) will work in the system. This was not a function of the RAM being "finicky"; it is, in fact, the
Integrated Memory Controller inside the CPU that dictates this. Well, unless the memory controller resides in the chipset, as it does in older systems, in which case the onus is on the motherboard/chipset (remember VIA?). Lastly, RAM slot topology and trace design can affect overclocking. 1DPC is the best for overclocking, with daisy chain second best but only when using 2 DIMMs; with 4 DIMMs you want T-topology for the best overclockability. Go figure, most RAM manufacturers use 1DPC or daisy-chain motherboards, with cherry-picked CPU IMCs, for their DDR4-3866+ kits.
Gramps, besides needing more VDIMM (motherboard + RAM OC), has Elpida 128Mbx8 (dual rank -> 2x8 = 16 chips total), Micron (SpecTek) 512Mbx8 (dual rank -> 2x8 = 16 chips total), and Powerchip (PSC) 256Mbx8 (single rank -> 1x8 = 8 chips total) all playing nicely together, after tweaks, and the Elpida is overclocked from DDR3-1333 to DDR3-1680 with tighter timings. Elpida was widely known as a fairly low-quality memory chip manufacturer, which is one of the reasons they went out of business. The Elpida RAM is most likely the reason I must run the RAM at 1.58v. That voltage is completely within Intel's spec for the platform and in fact leaves me 0.07v of room to possibly overclock further while still staying within spec (1.65v max).
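To make the composition math concrete, here's a minimal sketch (Python; the function name and the 64-bit non-ECC bus assumption are mine) of how chip organization and rank count translate into per-stick capacity:

```python
# A chip marked "512Mb x8" stores 512M addresses x 8 bits = 4Gbit.
# A 64-bit rank needs 64 / 8 = 8 such chips; dual rank doubles the chip count.

def stick_capacity_gb(depth_mbit: int, width_bits: int, ranks: int) -> float:
    """Capacity in GB of a non-ECC DIMM built from depth-x-width chips."""
    chip_gbit = depth_mbit * width_bits / 1024     # e.g. 512 * 8 / 1024 = 4 Gbit
    chips_per_rank = 64 // width_bits              # 64-bit data bus per rank
    return chip_gbit * chips_per_rank * ranks / 8  # 8 Gbit per GB

print(stick_capacity_gb(512, 8, ranks=2))  # Micron 512Mbx8, dual rank -> 8.0 GB
print(stick_capacity_gb(128, 8, ranks=2))  # Elpida 128Mbx8, dual rank -> 2.0 GB
print(stick_capacity_gb(256, 8, ranks=1))  # PSC 256Mbx8, single rank  -> 2.0 GB
```

That 512Mbx8 dual-rank case is exactly why the 8GB sticks work in Gramps despite every consumer board claiming 4GB/slot.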
I've got Crucial and Samsung (Micron + Elpida) to play nicely together in an old Dell (XPS 600) with a severely locked-down BIOS, one so bad it required reprogramming the SPD EEPROM of the Elpida stick. The board was overclocking the PCI-E bus, causing instability and other hilarity (onboard sound ran at 1.25x speed). This is, yet again, not the fault of the RAM but of the BIOS/chipset of the motherboard.
Hynix + Samsung in a Samsung+AMD laptop
Hynix + Micron (Crucial) in an HP desktop, where I made sure composition and rank were the same (the BIOS was finicky).
Hynix + Infineon in a Sandy Bridge board.
Hynix + Infineon in a P4 DDR1 board, though I'm partly guessing on the second stick. I know one was a Corsair XMS (Infineon) and the other had no heat spreader (Patriot/Hynix?).
OCZ (way before the Toshiba acquisition) + Micron + Infineon (way before they merged with Micron) on an old DDR2 platform (Intel D975XBX2 + C2Q Q6600).
Samsung + Winbond with an LGA775 P4-3.4GHz Extreme Edition.
NEC + Micron (SDRAM back during the P3 days)
And these are the ones I remember off the top of my head or literally pulled down from the attic since I had so much time to create this post.
Even Intel themselves state:
https://www.intel.com/content/www/us/en/support/articles/000005657/boards-and-kits.html#dual
"Rules to enable dual-channel mode:
...
The following conditions do not need to be met:
Same brand
Same timing specifications
Same speed (MHz)"
Granted, this multi-channel specification (including Flex mode) wasn't fully ratified until DDR2. The modes were referred to as Symmetric ("Dual") and Asymmetric ("Single"), or Ganged ("Dual") and Unganged ("Single").
AMD implemented this functionality properly only recently, with the Ryzen series; older AMD CPUs actually had a motherboard setting to control the ganging mode.
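To illustrate how Flex mode handles mismatched capacities, here's a rough sketch (Python; the function name is mine, the behavior is per Intel's Flex mode description):

```python
def flex_mode_split(channel_a_gb, channel_b_gb):
    """Intel Flex mode: the capacity present on BOTH channels is
    interleaved (dual-channel); the remainder runs single-channel."""
    dual = 2 * min(channel_a_gb, channel_b_gb)
    single = abs(channel_a_gb - channel_b_gb)
    return dual, single

# A 4GB stick on channel A plus a 2GB stick on channel B:
print(flex_mode_split(4, 2))  # -> (4, 2): 4GB dual-channel, 2GB single-channel
```

The point being: mismatched sticks don't simply refuse to run; worst case, the unmatched portion just loses the interleaving bandwidth.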
So this assumption, that you must purchase a multi-channel kit or the exact same stick, is an unfounded one based on myths. While multi-channel kits do offer better performance (because of multi-channel operation) and tested compatibility, buying one is not necessary for compatibility. The only time there can be an issue is with low-density memory, which can result in the IMC having a harder time driving the modules (usually requiring more voltage for the RAM and/or the IMC). AMD actually outlines this in their technical documents on the Ryzen chips:
https://community.amd.com/community...4/tips-for-building-a-better-amd-ryzen-system
Looking at the chart in that link, four sticks of dual-rank memory, which drops the supported speed to DDR4-1866, will be the hardest configuration to overclock due to the IMC. This is not because the "Ram can be VERY finicky" but because the IMC itself is finicky.
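From memory, the chart in that link boils down to something like this (Python; the speeds are the first-gen Ryzen figures as I recall them, so verify against the link before buying):

```python
# Officially supported DDR4 speeds for first-gen Ryzen, keyed by
# (DIMM count, ranks per DIMM). Values as I recall AMD's chart.
RYZEN_SUPPORTED_MTS = {
    (2, 1): 2666,  # two single-rank sticks: lightest IMC load
    (2, 2): 2400,  # two dual-rank sticks
    (4, 1): 2133,  # four single-rank sticks
    (4, 2): 1866,  # four dual-rank sticks: heaviest IMC load
}

print(RYZEN_SUPPORTED_MTS[(4, 2)])  # -> 1866, the hardest config to push
```

Notice the cap falls as the total rank count rises, i.e. as the electrical load on the IMC grows; the sticks themselves haven't changed.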
I'll believe my personal experience, as well as the statements made by the CPU manufacturers who create the Integrated Memory Controllers that actually talk to these memory modules. I've been mixing RAM for over a decade. While it is good practice to match composition (dual vs. single rank), the manufacturer of the chips on the module, and the manufacturing die, that does not mean you must purchase a multi-channel kit specifically to ensure the sticks work together. In fact, most manufacturers only warrant that their kits were tested to work together, not that they will work together on everything. This is why QVLs exist in the first place: the kits on them are tested by the manufacturer to ensure they work on that system (even if RAM manufacturers cherry-pick for DDR4-3866+ kits).
By the way, I do have access to a GTX 1080 Ti (EVGA 11G-P4-6593-KR). I was going to run those tests with that GPU soon, but those videos were all I had on hand at the time. Shall I perform the tests with said GTX 1080 Ti at 720p or 800x600? Does the fact that the game is Star Citizen disqualify it from any and all validation, even if I can prove performance scaling based on CPU single-thread performance (which you would have seen if you had actually looked at my previously linked thread/article on the SC forums)? Shall I test Ashes of the Benchmark- ahem -Singularity (a game I'd have to buy, but would if necessary) to show CPU performance, or something more mainstream that is more representative of subjective reality? Shall I test multiple different mainstream games to give a proper sample? Do you have a particular method I should follow that would make my performance tests admissible to you? I must preface this with one prerequisite: I do not purchase EA, Ubisoft, or Activision games, because I refuse to hand those disgusting companies any more of my money. I would certainly consider borrowing the games if I knew anyone who owned them. Many of my friends do not own the games you mentioned previously, Assassin's Creed Odyssey and Battlefield 5, so I'd be forced to find someone willing to let me borrow their game library account(s).
Actually, he does know what he is talking about.
Ya, OK. Sure.