Question: CPU-connected SSDs versus PCH-connected SSDs

Status: Not open for further replies.
Mar 25, 2019
I'd like to know definitively whether an NVMe M.2 SSD connected directly to the CPU over 4 free lanes will really perform better "in the real world" than an identical one connected to the PCH and therefore routed over the DMI 3.0 bus.
 
Mar 25, 2019
It is a general question that applies to several builds; I have never found a conclusive answer in forums. For example, and only as an example: Core i7-9700K (16 lanes), ASUS Prime Z390-A, AMD Radeon Pro WX 4100 (8 lanes), 6TB SATA HDD. A 970 EVO 1TB M.2 for the system and programs can go either into an M.2 slot (behind the PCH) or, via an adapter, into the second PCIe x16 slot (8 lanes for the GPU + 4 + 4), directly connected to the CPU.
Which position will perform better?
The same question also applies to any multi-lane CPU/motherboard (Core-X, Threadripper, ...) and so on.
Thank you for your attention.
To be exhaustive: 32GB DDR4-2666, 850W Gold PSU, Win 10 Pro x64
 
The two connection modes are identical, and in either case access to the drive is arbitrated by the CPU. A current M.2 slot provides PCIe 3.0 x4, and so does an adapter card in the second PCIe x16 slot, so static testing, e.g. with CrystalDiskMark, will show identical values for either location. You may rest easy.

In real life, as you say, there is no discernible or measurable difference: the CPU governs access to the drive regardless of which link it sits behind, and the available bandwidth is the same in both cases.
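If you want to convince yourself on your own machine rather than take my word for it, here is a rough Python sketch of a sequential-read timer: first create a large test file on the SSD (larger than your RAM, so the OS cache cannot hide the drive), run it with the drive in the M.2 slot, then again with the drive on the CPU-attached adapter. The path below is just a placeholder, and a proper tool (CrystalDiskMark, fio) is more rigorous; this is only a sanity check.

```python
import time

TEST_FILE = r"D:\bench\testfile.bin"   # placeholder: a large file you created on the SSD under test
CHUNK = 4 * 1024 * 1024                # 4 MiB reads, similar to a sequential benchmark

def sequential_read_gbps(path: str) -> float:
    """Read the whole file once and return the average throughput in GB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:   # unbuffered, so Python adds no extra copy layer
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e9

if __name__ == "__main__":
    print(f"{sequential_read_gbps(TEST_FILE):.2f} GB/s")
```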
 

TJ Hooker

What are you going to be using this PC for? You can already barely see a real-world speed difference between a SATA and an NVMe SSD in the majority of cases; I can't imagine the difference between CPU and PCH lanes amounting to anything more than maybe a slight bump in storage benchmarks.
 
Jan 20, 2021
TJ Hooker said:
What are you going to be using this PC for? You can already barely see a real-world speed difference between a SATA and an NVMe SSD in the majority of cases; I can't imagine the difference between CPU and PCH lanes amounting to anything more than maybe a slight bump in storage benchmarks.


Very interesting conversation and OP question - something I am looking at right now.

A lot of this depends on your motherboard and CPU, and on how the board maps its PCIe lanes between the CPU and the PCH.

My X299 board has 2x M.2 slots on the motherboard, and both hang off the PCH, i.e. they share the DMI 3.0 link (roughly 4 GB/s). That means if I access both drives at the same time, their bandwidth is cut in half: CrystalDiskMark drops from about 3.5 GB/s to about 1.5 GB/s on both NVMe drives (I have run this benchmark), so they are limited by the PCH's DMI uplink when accessed simultaneously. How that affects real-world performance is hard to work out, but it does mean you have two very fast NVMe drives working at half speed, so what was the point in buying them? (500GB Samsung EVO Plus.)
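For anyone who wants the back-of-the-envelope numbers behind that halving, here is a quick Python sketch. The 3.5 GB/s figure is my single-drive benchmark result from above; the rest is just DMI/PCIe link arithmetic, not a measurement:

```python
# DMI 3.0 is effectively a PCIe 3.0 x4 link: 8 GT/s per lane with 128b/130b
# encoding, i.e. roughly 0.985 GB/s of payload per lane.
lanes = 4
per_lane_gbs = 8e9 * (128 / 130) / 8 / 1e9      # ~0.985 GB/s per lane
dmi_total = lanes * per_lane_gbs                # ~3.94 GB/s shared by everything behind the PCH

single_drive = 3.5                              # GB/s, one drive benchmarked alone (figure from my post)
shared_ceiling = dmi_total / 2                  # best case per drive when both drives read at once

print(f"DMI 3.0 ceiling:               {dmi_total:.2f} GB/s")
print(f"Per-drive ceiling when shared: {shared_ceiling:.2f} GB/s")

# The ~1.5 GB/s I measured per drive sits below the ~1.97 GB/s ceiling because
# SATA, USB and network traffic ride the same DMI link, plus protocol overhead.
```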

I have 28 CPU lanes at my disposal (CPU-limited) and three spare PCIe slots, with the fourth slot taken up by the GPU at x16; this board is capable of running three GPUs at x8/x8/x8 with 4 lanes to spare = 28.

My intention is to add two more NVMe M.2 drives in RAID 0 (I just want a single volume made from the two).

Here is how I am going to connect my 4x M.2 drives:

1x in the motherboard M.2 slot (PCH) at full speed.
1x in each of the three spare PCIe slots (CPU, x4/x4/x4): two of those will be 2TB NVMe SSDs in RAID 0, giving me a single 4TB volume at full speed, and the third will be my OS NVMe drive. According to my initial look at the BIOS, I can run a two-drive RAID 0 through VROC without a key - it seems to be unlocked for this configuration.

This configuration should, in theory, remove all bandwidth bottlenecks on all the NVMe SSDs.
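As a rough sanity check on that lane budget, here is a small Python sketch (assuming a 28-lane X299 CPU and that the board really does run the three spare slots at x4 each while the GPU slot stays at x16 - check the motherboard block diagram, because boards differ in how they split the lanes):

```python
# Lane-budget check for the layout described above.
# Assumes a 28-lane CPU and the slot widths from my plan; adjust for your board.
cpu_lanes = 28
gpu = 16                 # primary PCIe x16 slot
nvme_adapters = 3 * 4    # three spare slots running x4 each, one NVMe drive in each

used = gpu + nvme_adapters
print(f"Using {used} of {cpu_lanes} CPU lanes")          # 28 of 28
assert used <= cpu_lanes, "over-subscribed: the slots would have to drop to narrower widths"

# The fourth drive sits in the onboard M.2 slot and uses PCH lanes, so it rides
# the DMI link instead of consuming any of the 28 CPU lanes.
```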

This will not apply to all situations: some boards wire one M.2 slot to the PCH and one to the CPU, which would be a better layout for me (and probably for everyone), but it is what it is.

So how you connect NVMe drives depends on several things:

CPU lane count
PCIe architecture - which slots and onboard M.2 sockets are wired to the PCH and which to the CPU
PCIe slots available
 