Question: Can DMI lanes be as performant as PCIe lanes when they aren't being multiplexed/saturated?

emcci
Jan 21, 2019
After reading a lot I have arrived at the conclusion that DMI lanes are PCIe lanes that aren't directly accessible, but are reached through the southbridge/motherboard chipset. There they are multiplexed between the different southbridge elements like integrated NICs, non-CPU PCIe slots, audio, USB, SATA disks... All these elements share the same DMI lanes, so when the lanes get overwhelmed the devices no longer run at their maximum speed. Is everything correct up to this point? (right??)

My question is: when the DMI lanes aren't saturated, can they be as fast as PCIe? DMI has an extra chipset on its path to the CPU, and I don't know how much of an impact that creates.

Can an NVMe SSD connected through the DMI lanes reach the same bandwidth and IOPS as an NVMe SSD connected to CPU PCIe lanes, when nothing else is using the DMI lanes so they don't saturate?
 
Quote from the Intel 12th Gen Data Sheet:

"Direct Media Interface (DMI) connects the processor and the PCH.​
The main characteristics are as follows:​
  • 8 lanes Gen 4 DMI support
  • 4 lanes Gen 4 Reduced DMI support
  • 16 GT/s point-to-point DMI interface to PCH..."

Sounds like it's going to be pretty much 8 or 4 lanes (depending on the chipset) with PCIe Gen 4 equivalent bandwidth. But it's not PCIe, it's DMI, so it will lack the necessary protocols for communicating with a PCIe NVMe device. It wouldn't work even if it were physically possible to connect one.

It's safe to say any PCIe device connected to the PCH will be limited to 4 (or 8) lanes' worth of bandwidth to the CPU. That's why direct-to-CPU attachment is more desirable for GPUs, even though they're hard-pressed to saturate even 4 lanes of PCIe Gen 4. An NVMe drive probably wouldn't be much affected, aside from any latency introduced by the PCH itself.
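To put rough numbers on that, here's a quick back-of-envelope calculation, assuming DMI Gen 4 signaling really is equivalent to PCIe Gen 4 (16 GT/s per lane with 128b/130b encoding); it ignores packet/header overhead, so treat it as an upper bound:

# Rough estimate of usable Gen 4 link bandwidth.
def gen4_bandwidth_gb_s(lanes: int) -> float:
    raw_gbit = lanes * 16               # 16 GT/s per lane
    usable_gbit = raw_gbit * 128 / 130  # 128b/130b line-coding overhead
    return usable_gbit / 8              # bits -> bytes, i.e. GB/s

print(f"DMI 4.0 x8: {gen4_bandwidth_gb_s(8):.2f} GB/s")  # ~15.75 GB/s
print(f"DMI 4.0 x4: {gen4_bandwidth_gb_s(4):.2f} GB/s")  # ~7.88 GB/s

So a single Gen 4 x4 SSD running flat out could, in theory, fill a Reduced DMI x4 link by itself, while the x8 link leaves room for roughly one more.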

Curiously, AMD's Ryzen CPUs use a proper PCIe interface to communicate with their chipset. Since the CPU is an SoC, it has everything needed to run without a chipset at all. In such a simplified computer design I've wondered if the 4 lanes that would have fed the chipset could be repurposed to feed another NVMe drive, for instance, or an x4 PCIe slot or several x1 PCIe slots.

As Igor's article linked above suggests, AMD has made it possible. But I tend to doubt the DMI bus can be used as a PCIe bus.
 
But it's not PCIe, it's DMI, so it will lack the necessary protocols for communicating with a PCIe NVMe device
There's basically zero difference between PCIe and DMI; it's just PCIe in slave mode with a different name. Technically, we don't even need to know whether there's a difference between DMI and PCIe, as you never connect to it directly anyway.
All these elements share the same DMI lanes, so when the lanes get overwhelmed the devices no longer run at their maximum speed. Is everything correct up to this point? (right??)
All connected devices share the maximum bandwidth between the CPU and the PCH; if you saturate that bandwidth, some devices will drop speed to fit in.
My question is: when the DMI lanes aren't saturated, can they be as fast as PCIe? DMI has an extra chipset on its path to the CPU, and I don't know how much of an impact that creates.
You probably mean direct (CPU) PCIe lanes. Bandwidth-wise, PCH lanes can be as fast as CPU lanes, but there is increased latency due to the PCH in between.
Can an NVMe SSD connected through the DMI lanes reach the same bandwidth and IOPS as an NVMe SSD connected to CPU PCIe lanes, when nothing else is using the DMI lanes so they don't saturate?
It can reach the same bandwidth, but IOPS will be lower (higher latency), and even lower when multiple devices try to communicate at the same time.

Here are some rough basics.
PCIe devices connected to the PCH are pretty much like network devices connected to a network switch (the PCH).
Devices communicate with packets (256 bytes), and that switch (the PCH) shuffles traffic between devices to keep latency low for all of them.
PCH to CPU is about 16 GB/s now (DMI 4.0 x8), so that's a lot of free packets.
I have no idea whether the PCH/BIOS/OS somehow manages device priority, but probably not, since there's network, USB, etc. and you don't want high latency on your mouse while downloading something.
So in summary, if you don't max out the link between the PCH and the CPU, everything should be fine :)
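
Some rough arithmetic on why that's "a lot of free packets", taking the 256-byte payload and the ~16 GB/s link at face value and ignoring header overhead, so it's only a ballpark:

link_bw = 16e9   # bytes/s, DMI 4.0 x8 link (approximate)
ssd_bw = 7.9e9   # bytes/s, one Gen 4 x4 NVMe SSD running flat out (approximate)
payload = 256    # bytes moved per packet, as above

print(f"link capacity: {link_bw / payload / 1e6:.1f} M packets/s")  # ~62.5 M
print(f"one busy SSD : {ssd_bw / payload / 1e6:.1f} M packets/s")   # ~30.9 M
print(f"headroom left: {(link_bw - ssd_bw) / link_bw:.0%}")         # ~51%

So even one SSD going full speed still leaves about half the link for everything else.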
 
Solution
Replying to the first answer: your first link is broken, but thanks for the answer! I wasn't talking about plugging an NVMe drive directly into the DMI lanes, but about going through the PCH/southbridge. I'm specifically interested in the difference between an NVMe drive connected directly to the CPU and one connected through the PCH. But your answer has been useful anyway!


And to the second answer: thank you so much, that's exactly the answer I was looking for! In the near future maybe I can benchmark NVMe IOPS both on CPU PCIe lanes and through the DMI/PCH and compare their performance. It could be interesting to update this thread with the results!
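
If I do, something like this rough Python sketch is what I have in mind (untested; it assumes Linux, root access, and O_DIRECT so the page cache doesn't hide the drive; /dev/nvme0n1 and /dev/nvme1n1 are just placeholder names for the CPU-attached and PCH-attached drives):

import mmap, os, random, time

def rand_read_iops(dev_path, seconds=10, block=4096):
    # O_DIRECT bypasses the page cache so reads actually hit the drive.
    fd = os.open(dev_path, os.O_RDONLY | os.O_DIRECT)
    size = os.lseek(fd, 0, os.SEEK_END)   # size of the block device
    buf = mmap.mmap(-1, block)            # page-aligned buffer, required by O_DIRECT
    blocks = size // block
    ops, deadline = 0, time.monotonic() + seconds
    while time.monotonic() < deadline:
        os.preadv(fd, [buf], random.randrange(blocks) * block)  # one 4 KiB random read
        ops += 1
    os.close(fd)
    return ops / seconds

for dev in ("/dev/nvme0n1", "/dev/nvme1n1"):   # placeholder device names
    print(dev, f"{rand_read_iops(dev):,.0f} IOPS (4K random read, QD1)")

Queue depth 1 on purpose, since that's where the extra latency through the PCH should show up most clearly.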

Thank you so much, both of you! :)👍