[SOLVED] Any Intel CPU with at least 40 PCIe lanes below the i9-7900X?

Guest
Hi!
I'm trying to find any information about an Intel CPU from the new Skylake-X line that has at least 40 PCIe lanes and is not as high-priced as the i9-7900X, but no luck. I'm thinking about a successor to the i7-6850K. Maybe one of you is closer to Intel or has come across such information and can share it with me.

Thanks!
 
Solution
No. Intel has greedily bestowed the magic 40 lanes only on those willing to part with $1000 or so....

The 7820X has 'only' 28 lanes...

Although more lanes might be advantageous for some usage scenarios, what will the need be for lane counts greater than what is available with lesser processors? (Many point to this 'PCIe lane advantage' as though it will increase SLI performance: it does not.) Current NVMe SSDs can't get above ~24 Gb/sec anyway, so 32 Gb/sec over 4 lanes is plenty per drive.

PCIe 4.0 (64 Gb/sec for an x4 link) is coming next year, double the throughput of PCIe 3.0...
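
For reference, here is the rough math behind those 32 and 64 Gb/sec figures, as a quick Python sketch (it only subtracts the 128b/130b line encoding and ignores protocol overhead):

    # Effective one-direction bandwidth of a PCIe link, per generation and lane count.
    def pcie_bandwidth(gt_per_s, lanes):
        gbps = gt_per_s * lanes * 128 / 130   # raw GT/s minus 128b/130b encoding
        return gbps, gbps / 8                 # (Gb/s, GB/s)

    for gen, gt in (("PCIe 3.0", 8), ("PCIe 4.0", 16)):
        gbps, gbytes = pcie_bandwidth(gt, lanes=4)
        print(f"{gen} x4: ~{gbps:.1f} Gb/s (~{gbytes:.2f} GB/s)")

    # PCIe 3.0 x4: ~31.5 Gb/s (~3.94 GB/s)
    # PCIe 4.0 x4: ~63.0 Gb/s (~7.88 GB/s)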
Guest
@mdd1963 I have a GTX 1050 Ti for the Linux host and a GTX 1070 for the Windows 10 VM. Sometimes when playing GTA V I can see some slow-downs, but I'm not sure whether the reason is too few PCIe lanes (7700K) or not enough horsepower in the GTX 1070.
The other reason I'm looking at Skylake-X is two Samsung NVMe drives. I set them up in RAID0 with Linux mdadm, and testing them with the 'gnome-disks' utility (not the best benchmarking tool, but still) I get about 3200 MB/s in reads and about 650 MB/s in writes. I don't know if the limitation is the NVMe driver included in the Linux kernel or, again, those missing PCIe lanes. However, when I check the NVMe bus speeds with the 'lspci -vv' command, it shows me under Capabilities -> LnkSta: Speed 8GT/s, Width x4, so it should be fine.
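For reference, the same LnkSta info is exposed in sysfs, so both drives can be checked at once. A small Python sketch, assuming the standard /sys/class/nvme layout (controller names like nvme0/nvme1 may differ on other systems):

    # Print the negotiated PCIe link speed and width for each NVMe controller,
    # read from sysfs (same data 'lspci -vv' shows under LnkSta).
    import glob, pathlib

    for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
        pci_dev = pathlib.Path(ctrl) / "device"   # symlink to the PCI device node
        speed = (pci_dev / "current_link_speed").read_text().strip()
        width = (pci_dev / "current_link_width").read_text().strip()
        print(f"{pathlib.Path(ctrl).name}: {speed}, width x{width}")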
Maybe I should really wait for PCIe 4.0 as you mentioned... Paying $1k isn't a great option when a change is coming next year... Paying $1k just for more PCIe lanes (and for more cores, but I won't be using them anyway) isn't a great option either...

@BFG I know that AMD is trying to be competitive (or maybe already is) with what Intel is offering, but I'm not convinced about AMD - it's just my personal preference.


I have another question then: how many lanes do I actually have in my 7700K / Z270 build? I read about this and thought I understood it, but yesterday one post ruined my understanding. I know I have 16 lanes from the CPU, but how many other lanes do I have?
http://media.gamersnexus.net/images/media/2017/CPUs/7700k/z270-block-diagram.png
There are 24 lanes on this diagram coming from the chipset. Are the 16 CPU lanes somehow part of those 24 chipset lanes? And what about the case when I don't have a discrete GPU - can those CPU lanes be used by the chipset? And the other way around - can chipset lanes (if not used by the chipset) be used by the CPU?
 
Running a passthrough GPU to a Win10 VM within KVM will almost certainly drop performance at least somewhat compared to native Windows; I suspect the Linux guys that game within VMs would know by how much... (I had seen claims of 95% of native performance years ago, but who knows how accurate that is for your specific scenario today.)

Now, in addition to worrying about how many PCIe lanes are available to the GPU within the chipset (x8 lanes per GPU if/when two GPUs are installed), we need to worry about how they are managed through Linux and KVM, as well as any overhead layers that might be added...
 
16 lanes connect directly from the CPU to the x16 slot on the motherboard, and they can also be split across multiple slots, as the diagram shows.

The CPU is only connected to the chipset PCH via DMI 3.0, which is equivalent to an x4 connection at 3.93 GB/s. So while those 24 lanes may communicate with each other rapidly, if they need to access the CPU or main memory they will be bottlenecked by an x4 link--and nearly everything, even DMA transfers from NVMe, needs to go through main memory. An x4 device can connect at full speed as long as USB, SATA or Ethernet are not in use at the time, so you can consider those chipset-connected lanes to be mostly like multiple x4 ports on a switch.
 
Having multiple M.2 slots means you can have two drives attached at the same time, and each can deliver full x4 performance on its own--just not both at once. So the only time bandwidth would be limited is if you tried to use both drives simultaneously, such as in RAID0. The chipset's ability to act as a switch chip is a large advantage--you can conveniently have many devices attached at the same time, and why not? It's something Intel includes with the chipset, so it's already paid for, and board makers may as well load up their boards with more slots.
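
To put rough numbers on that, a quick sketch (the per-drive figure is just an assumption for a Samsung NVMe drive of that generation, and it assumes both M.2 slots hang off the Z270 chipset, as in the block diagram linked above):

    # Everything behind the Z270 PCH shares one DMI 3.0 uplink (PCIe 3.0 x4 equivalent),
    # so RAID0 across two chipset-attached M.2 drives tops out near a single x4 link.
    dmi_ceiling = 8 * 4 * 128 / 130 / 8      # ~3.94 GB/s effective, one direction

    per_drive_read = 3.2                     # GB/s, assumed sequential read of one drive
    raid0_ideal = 2 * per_drive_read         # what RAID0 could do with dedicated CPU lanes
    raid0_via_dmi = min(raid0_ideal, dmi_ceiling)

    print(f"DMI 3.0 ceiling: ~{dmi_ceiling:.2f} GB/s")
    print(f"RAID0 ideal    : ~{raid0_ideal:.1f} GB/s")
    print(f"RAID0 via DMI  : ~{raid0_via_dmi:.2f} GB/s")

That ceiling is roughly consistent with the ~3200 MB/s RAID0 reads reported earlier in the thread: behind the chipset, the two drives together can't do much better than a single x4 link.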

I should point out that some high-end boards use a separate, third-party switch chip such as IDT, PLX or Microsemi to further split this x4 connection into even more lanes, so you have the flexibility of even more simultaneously connected devices. This makes PCIe more like the shared-bus architecture that PCI was (instead of point-to-point), and just as with PCI, or Ethernet hubs vs. switches, bandwidth must be shared.

That's better than things used to be. For example, with Z97 boards and their secondary x16 slots (which were electrically x4), Gigabyte wired things so installing any card in that slot would disable all the x1 slots. So you could either use that x16 slot or the x1 slots, not both. ASUS, on the other hand, wired that second x16 slot as x2 so that both of their x1 slots would continue to work, but of course that meant the card in the x16 slot would have its usual bandwidth cut in half even if you weren't using the x1 slots.
 
Guest
I see.

I'm determined to give my PCIe devices the ability to work at full speed, so I started thinking about the 6850K. I think it's still a respectable CPU. I don't know... I have to think all of this over...

I will mark the first answer as the solution because it answered my original question, but all the answers were valuable to me. Thank you for your help!
 

thibaut.noah

Feb 12, 2018


I did have a passthrough to Windows 10; the host OS was Arch Linux.
Since Windows 10 was installed bare-metal on my SSD, I could also run it without using Linux.
That meant I could accurately compare benchmarks running both native Windows 10 and passthrough Windows 10; the difference was only 3%, and that came from the CPU load of Linux.