i9 9900K PCIe lanes

eli_singer

I'm trying to understand if I can get what I need from this processor.

I plan on buying the i9 9900k with a Gigabyte z390 Master.

I will be transferring the following from my current system:

4x16GB sticks of RAM (64GB total)

GTX 1080

10Gb Ethernet PCI Express card

1 NVMe M.2 drive

2 SATA SSD drives

I will add a Blackmagic DeckLink card to output video to my studio grading monitor.

I could probably add another NVMe drive down the road and another SATA SSD.

I followed some discussions and couldn't understand whether this setup is feasible or not.

My understanding is that I need 16 lanes for the GPU, 4 lanes for the 10GbE card, and 4 lanes for the DeckLink card.

This sums up to 24 lanes.

I am not sure whether the NVMe drives also take their share of lanes from the same pool, and whether this CPU can handle it all.
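To lay out my counting explicitly (the per-card lane numbers are just my assumptions from the spec sheets):

```python
# Rough lane budget for the cards going into PCIe slots.
# Per-card lane counts are my assumptions from the spec sheets.
cards = {
    "GTX 1080": 16,    # GPU in the x16 slot
    "10GbE NIC": 4,    # x4 card
    "DeckLink": 4,     # x4 card
}

total = sum(cards.values())
print(f"Cards in PCIe slots want {total} lanes in total")  # 24
# Open question: do the NVMe M.2 drives pull from this same pool or not?
```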

Would love some clarification from the knowledgeable members here.
 
The 9th gen can handle that configuration.

The documents now refer to "platform PCIe lanes" to promote the highest number, combining the lanes from the processor with those of the chipset.

The document linked here says the 9th gen has up to 40 and the X-series has up to 68.

https://newsroom.intel.com/news-releases/intel-announces-worlds-best-gaming-processor-new-9th-gen-intel-core-i9-9900k/
 


Well, what bothers me is that according to what I see online, here for example: https://en.wikichip.org/wiki/intel/core_i9/i9-9900k

the i9 9900K has only 16 lanes!

Eli
 

I believe what he is saying is that the chipset has lanes as well, so you add those to the CPU lanes to get the total.
 


That's it. As I recall, modern Intel chipsets provide 24 lanes, with some variation in the kinds of devices that can be connected on the lower-width links (Wi-Fi, M.2 cards, etc.). So 40 - 24 = 16 lanes from the 9900K processor. My favorite is the 9800X: 68 - 24 = 44 PCIe lanes from the processor.

While in theory a lane from the CPU will have less latency than one from the chipset, who will notice?
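Spelling out that arithmetic (taking the 24 chipset lanes and Intel's "up to" platform numbers as given):

```python
# Platform lanes (the marketing number) = CPU lanes + chipset lanes.
CHIPSET_LANES = 24  # modern Intel chipsets, as I recall

platform_lanes = {"9900K": 40, "9800X": 68}  # "up to" figures from Intel's page

for cpu, total in platform_lanes.items():
    print(f"{cpu}: {total} - {CHIPSET_LANES} = {total - CHIPSET_LANES} lanes from the CPU")
# 9900K: 40 - 24 = 16
# 9800X: 68 - 24 = 44
```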

 


I am talking about CPU lanes, not chipset lanes.
As far as I understand, the M.2 drives and the SATA drives all come from the chipset lanes, which is not a problem.
But everything that goes into the PCIe slots comes from the CPU lanes, and that's where I might encounter issues, since I will be using a GPU, a network card, and a DeckLink card, all three in PCIe slots.

Eli
 
See if you can figure this out:

Strix Z390-H Gaming:

[EDIT] Socket 1151 for 9th / 8th Gen Intel® Core™, Pentium® Gold and Celeron® processors

2 x PCIe 3.0/2.0 x16 (x16 or dual x8)

Intel® Z390 Chipset
1 x PCIe 3.0/2.0 x16 (x2 mode)
3 x PCIe 3.0/2.0 x1

https://www.asus.com/us/Motherboards/ROG-STRIX-Z390-H-GAMING/specifications/
 


?!?!
 
The board provides expansion slots connected by PCIe lanes. The board description of those slots names first the processors supported, and underneath that the CPU PCIe lanes are described. The spec shows "2 x PCIe 3.0/2.0 x16", which will give 16 lanes if one slot is used, say by a GPU. If both are filled, each slot gets 8 lanes. Both configurations use 16 CPU lanes. (Remember the 9800X, 44 CPU lanes. That's what I'm talking about.)

The chipset gives 5 more (the third x16 slot only has 2 lanes), therefore 21 lanes for expansion items.

The remaining chipset lanes (24 - 5 = 19) are used for USB, SATA, etc.
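A quick tally of that split (slot widths as listed on the spec page; the leftover figure is just my arithmetic):

```python
# CPU side: two x16 slots that share the 16 CPU lanes (x16, or x8/x8 if both are used).
cpu_lanes_for_slots = 16

# Chipset side: slots wired to the Z390.
chipset_slot_lanes = {
    "x16 slot running in x2 mode": 2,
    "three x1 slots": 3 * 1,
}
chipset_for_slots = sum(chipset_slot_lanes.values())  # 5

CHIPSET_TOTAL = 24
print(f"Lanes feeding expansion slots: {cpu_lanes_for_slots + chipset_for_slots}")         # 21
print(f"Chipset lanes left for USB, SATA, M.2, etc.: {CHIPSET_TOTAL - chipset_for_slots}")  # 19
```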

 
Here, this screen capture should make it easier. It's taken from the article provided in her first response:

[attached screenshot from the Intel announcement]
 
The example provided by the Strix Z390-H Gaming board does not necessarily apply to all boards. Each manufacturer can provision a board's lanes as they wish, within the confines of the Intel products I have noted. For the sake of meeting a challenge, let's stay with this configuration.

If you put the GPU and one of the other cards in the CPU x16 slots, both will run on 8 lanes, and I surmise that will be plenty for each item. (As a member pointed out above, a GPU running at x8 works perfectly well.)

That leaves one remaining expansion card that you say needs 4 lanes, which can only come from the chipset lanes. The sole slot available over x1, according to the configuration above, is the x16, but running at only 2 lanes.

Two things come to mind. Is the card compatible with an x16 slot? Check the specs. Also, see whether the remaining card's requirement is for generation 2 PCIe rather than generation 3; the second card may run perfectly OK in the x16 chipset slot, as the rough numbers below suggest.

Perhaps a card can be attached by USB?

The final option would be to find a board configured to provide more chipset lanes to its slots
(whoops, there goes the new USB Gen 3.1 ABC port, lol).
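On the generation question, a minimal bandwidth sketch, assuming the usual approximate per-lane figures (~500 MB/s for PCIe 2.0 and ~985 MB/s for PCIe 3.0, per direction, after encoding overhead):

```python
# Approximate usable bandwidth per lane, per direction.
MB_PER_LANE = {"gen2": 500, "gen3": 985}

def link_bw(gen, lanes):
    """Rough link bandwidth in MB/s for a given generation and lane count."""
    return MB_PER_LANE[gen] * lanes

print(f"A PCIe 2.0 x4 card is built for roughly {link_bw('gen2', 4)} MB/s")         # ~2000
print(f"The chipset x16 slot at Gen3 x2 offers roughly {link_bw('gen3', 2)} MB/s")  # ~1970
# Bandwidth-wise, the x2 Gen3 chipset slot is in the same ballpark as a Gen2 x4 link,
# assuming the card negotiates the narrower width without complaint.
```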
 


Well, the network card is an Asus XG-C100C, which uses PCIe Gen3 x4.
The DeckLink card has a few models; let's say I'll go with the Mini Monitor 4K, which uses PCI Express 2.0 x4.

Looking at Z390 boards, I see there are PCIe Gen3 x1 slots on top of the x16 slots.
Will the DeckLink work on a PCIe Gen3 x1 slot?
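Some very rough math on my side, assuming an uncompressed UHD 30p, 10-bit 4:2:2 output (that frame format is just my guess for what the card pushes over PCIe):

```python
# Rough data rate for uncompressed UHD 30p, 10-bit 4:2:2 (~2.67 bytes per pixel).
width, height, fps = 3840, 2160, 30
bytes_per_pixel = 8 / 3

video_mb_s = width * height * fps * bytes_per_pixel / 1e6  # ~664 MB/s
gen3_x1_mb_s = 985                                         # approximate usable PCIe 3.0 x1 bandwidth

print(f"Video stream ~{video_mb_s:.0f} MB/s vs. Gen3 x1 link ~{gen3_x1_mb_s} MB/s")
# Bandwidth-wise an x1 link might be enough for that, but the card is specced for
# PCIe 2.0 x4, and I'm not sure a physical x4 card even fits in an x1 slot.
```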

In any case, it does seem that I'd rather just stick with my 7820X instead of going for the 9900K.
My only question is about the 9800X; I can't seem to understand what the difference is between it and the 7820X.
I don't understand why Intel didn't put out any CPU to refresh that tier at around the same price. They go from the $600 9800X to a $900 9820X. I don't get it.
 


There is no such thing as GPU lanes, just PCIe lanes. The chipset does not create I/O lanes; it only mediates the sharing of PCIe lanes.

I have to look into the NVMe technology a little bit more for a verdict, but it looks like software-based control, which will always load the CPU with its work.

The 9900K is basically half of a Xeon E7 series (externally). The 390 chipset makes the accessories share one small set of these lanes, and that causes the system bottleneck. I imagine the new PCIe architecture that's coming out next year should have a better chipset than the 300 series.
 


Are you sure about the system letting NVMe use different PCIe lanes when the GPU's memory and RAM are tied up with a DMA engine busy copying data? I mean, are they shared, or distributed in a "first come, first served" manner whenever a PCIe lane is free?

 


Anything connected to the Z390 is sharing the DMI bus. The DMI is full duplex, but if the device connected to the 390 chipset is half-duplex, only half of the bandwidth is used in that one clock-cycle instance. So if your hard drive and network card are on the chipset, one device has to wait for the other to finish its data process: a round-robin polling scheme between devices. Everything in the CPU's PCIe slots is processed in parallel, independently of the x4 PCIe link that DMI 3.0 is derived from.
NVMe would be applied to the Rapid Storage Technology, but there isn't a clean way to use an HDD controller on that bus, so if they are using that on the board, I would assume it's a software controller.

... This is what I gathered from reading the engineering whitepaper on this chipset. And it answered a question about why people overclock: to try to overcome the inefficiencies of the computer design.
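To put rough numbers on that shared uplink (DMI 3.0 is electrically a PCIe 3.0 x4 link; the device figures below are just illustrative assumptions):

```python
# DMI 3.0 ~ PCIe 3.0 x4 uplink between the Z390 chipset and the CPU.
dmi_mb_s = 4 * 985  # ~3.9 GB/s per direction

# Illustrative devices that could all sit behind the chipset and share that uplink.
behind_chipset_mb_s = {
    "NVMe SSD on a chipset M.2 slot (Gen3 x4)": 4 * 985,
    "10GbE NIC in a chipset slot": 10_000 / 8,  # ~1250 MB/s at line rate
    "SATA SSD": 550,
}

demand = sum(behind_chipset_mb_s.values())
print(f"DMI uplink ~{dmi_mb_s} MB/s; these devices could ask for ~{demand:.0f} MB/s at once")
# A single Gen3 x4 NVMe drive can already fill most of the DMI link by itself,
# which is where the contention described above comes from.
```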
 


So, does having a "software controller" cause less stutter when the GPU's PCIe link is fully utilized and 2x NVMe drives are streaming data concurrently?
 


Actually, it multiplies the problem, because the queueing is done inside an OS. So now it's HDD -> DMI bus (with wait state) -> delay set by the operating system -> software controller -> wait state of the operating system -> DMI bus (with wait state) -> HDD.

That is why a software-based controller is just as bad a performance hit as the old onboard video that shared the CPU and RAM.

To make matters worse, the thermal output has increased (due to Wi-Fi), so I can see a slowdown happening due to heat when someone overclocks. It almost has the same cooling demands as a Socket 7 233MHz Pentium.

To give you a parallel, the low-end server chipset C236 is very similar.
 
