PCI Express 4.0 Brings 16 GT/s And At Least 300 Watts At The Slot

Most, not some. Of course the biggest issue is going to be that Intel will have to push even more bandwidth. And it won't show up until Cannonlake at the earliest.

On another note, any word on HyperTransport 4.0?
 

josejones

Distinguished
Oct 27, 2010
901
0
18,990
I would like to know if PCIe 4.0 will be as pathetic as when PCIe 3.0 first came out and made absolutely no difference over PCIe 2.0, or will the PCI-SIG and its members get it right the first time?

REMEMBER?:

MSI Calls Out Gigabyte for "Not True PCIe 3.0"
http://www.tomshardware.com/news/msi-gigabyte-pcie-gen3-mobo,13377.html

Gigabyte Sets Record Straight on PCIe 3.0 Support
http://www.tomshardware.com/news/gigabyte-msi-pcie-3.0-gen3-third-gen,13485.html

And which CPU and mobo will first support PCIe 4.0 - the 200- or 300-series motherboard chipset? Will it be available in 2017 or 2018?

We cannot forget just how bad PCIe 3.0 was when it first came out. Is it best to wait until the 2nd-generation 4.0 hardware, or at least revision 2, or what?
 


The performance will depend on the application. Back then, was there any hardware that could really stress PCIe 2.0, making PCIe 3.0 an absolute advantage to have?
 

CyranD

Distinguished
May 6, 2013
26
0
18,530
I believe the 24-pin + 4-pin power connections on the motherboard currently max out at around 550 watts. Does that mean motherboards that support PCIe 4.0 will require new/additional power connections, since 550 watts would not be enough for multiple PCIe 4.0 cards in an SLI/CrossFire setup, for example?
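
A quick back-of-envelope check suggests exactly that, assuming the ~550 W estimate above and the 300 W-per-slot figure from the article (both rough numbers, not spec values):

# Back-of-envelope power budget check (figures are assumptions, not spec values).
BOARD_INPUT_W = 550   # rough max through the 24-pin + 4-pin inputs (estimate above)
SLOT_POWER_W = 300    # per-slot delivery discussed for PCIe 4.0 in the article

for cards in (1, 2, 3):
    demand = cards * SLOT_POWER_W
    verdict = "fits" if demand <= BOARD_INPUT_W else "needs extra connectors"
    print(f"{cards} card(s): {demand} W at the slots -> {verdict}")

# Two cards already want 600 W, past the ~550 W estimate -- which may be why the
# board pictured in the article carries extra 6- and 8-pin power inputs.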
 

dstarr3

Distinguished
Is there anything that's come close to saturating the bandwidth of PCIe 3.0? GPUs don't even saturate 2.0, and I don't think SSDs are that fast yet. I have to assume the biggest reason for the next generation is just the power delivery. They probably doubled the bandwidth now because they'll need it eventually and the ability exists now.
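
For context, here's a rough sketch of the usable rates, using the published transfer speeds and line-encoding overheads for each generation (8b/10b for 1.0/2.0, 128b/130b for 3.0/4.0):

# Usable PCIe bandwidth per generation: transfer rate x line-encoding efficiency.
# Gens 1/2 use 8b/10b encoding (80%); Gens 3/4 use 128b/130b (~98.5%).
GENS = {
    "1.0": (2.5, 8 / 10),
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
}

for gen, (rate_gt_s, eff) in GENS.items():
    lane_gbps = rate_gt_s * eff        # usable Gb/s per lane
    x16_gbytes = lane_gbps * 16 / 8    # usable GB/s across an x16 slot
    print(f"PCIe {gen}: {lane_gbps:.2f} Gb/s per lane, ~{x16_gbytes:.1f} GB/s at x16")

# PCIe 3.0 x16 -> ~15.8 GB/s; PCIe 4.0 x16 -> ~31.5 GB/s (before protocol overhead).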
 

CRamseyer

Distinguished
Jan 25, 2015
425
10
18,795
At Flash Memory Summit we saw a handful of products using 16-lane adapter cards with the lanes divided among four 4-lane M.2 cards. Storage is pushing the market, but other areas like networking have a hand in the change. Mellanox has already displayed the ConnectX-5 with PCIe 4.0 support to drive dual 100GbE connections. The ConnectX-4 with PCIe 3.0 x16 can only drive around 108 Gb/s of throughput with two 100GbE connections running at the same time.
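
That bottleneck falls straight out of the link math. A rough sketch (the ~108 Gb/s figure above already reflects real-world protocol overhead, which this ignores):

# Why dual 100GbE saturates PCIe 3.0 x16 but not 4.0 x16 (rough link math).
LANES = 16
ENCODING = 128 / 130    # 128b/130b line encoding, Gens 3 and 4
NEEDED_GBPS = 200       # dual 100GbE ports, both saturated

for gen, rate_gt_s in (("3.0", 8.0), ("4.0", 16.0)):
    link_gbps = rate_gt_s * ENCODING * LANES
    print(f"PCIe {gen} x16: ~{link_gbps:.0f} Gb/s raw vs {NEEDED_GBPS} Gb/s needed")

# ~126 Gb/s raw on 3.0 x16 shrinks to roughly the ~108 Gb/s measured once packet
# headers and flow control are paid for; 4.0 x16 (~252 Gb/s) leaves real headroom.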
 

bit_user

Polypheme
Ambassador
Nice writeup.

I'm really excited about PCIe 4.0 (take that, NVLink!), but it comes too late for Intel's new server platform. So, we're probably looking at 2018, at the earliest, for support in Intel systems.
 

bit_user

Polypheme
Ambassador
This is a very selective reading of history, based on one incident with a bad board vendor. At launch, you could get SandyBridge-E boards that had true PCIe 3.0 support & full performance.

But your question is still valid: which chipsets & CPUs will offer true PCIe 4.0 support? It's probably too early to say, but it definitely won't be in Skylake-E and likely not Kabylake.
 

bit_user

Polypheme
Ambassador
Even at launch, there was a small but measurable performance difference between fast GPUs in PCIe 3.0 slots and the same system in PCIe 2.0 mode (some BIOSes have a PCIe version setting, which enables a direct apples-to-apples comparison). GPUs have only gotten faster, so you should now see a substantial difference.

As for other peripherals, including SSDs, there are two benefits. First, increasing the speed of each lane reduces the number of lanes required for a given bandwidth. This lets peripheral manufacturers either increase speed at the same lane count or keep the same speed with fewer lanes, lowering costs and allowing users to fit more peripherals in the same system.
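
To put a number on that, a sketch (the 3.2 GB/s drive here is a hypothetical, chosen for illustration):

import math

# How many lanes a hypothetical 3.2 GB/s NVMe SSD needs per generation.
TARGET_GBS = 3.2                          # assumed device bandwidth, GB/s
LANE_GBS = {
    "3.0": 8.0 * (128 / 130) / 8,         # ~0.98 GB/s per Gen 3 lane
    "4.0": 16.0 * (128 / 130) / 8,        # ~1.97 GB/s per Gen 4 lane
}

for gen, per_lane in LANE_GBS.items():
    lanes = math.ceil(TARGET_GBS / per_lane)
    print(f"PCIe {gen}: needs an x{lanes} link for {TARGET_GBS} GB/s")

# The same drive drops from x4 on Gen 3 to x2 on Gen 4, freeing two lanes.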

Second, if you increase the speed, not only does it increase bandwidth, but it also decreases latency. Lower latency can improve performance, even in cases where the bus isn't 100% saturated. This probably accounted for the original benefit to GPUs. And lower latency obviously benefits realtime applications, like VR.

I don't know the details of the quoted performance measurement, but you might find a slight improvement by increasing the bus speed, simply due to the lower latency or less impact from transaction overhead.
 

tiagoluz8

Reputable
Jul 21, 2015
56
0
4,630
The graph says 2017, but my guess is we won't see real-world PCIe 4.0 until 2018. NVIDIA and AMD are launching 3.0 GPUs now (by the end of the year I hope the lineups are completed, with the 1080 Ti and 1050 from NVIDIA and the Vega cards from AMD). Intel will launch Kaby Lake, which is just an update to Skylake, with 3.0 for sure. Zen will probably stick with 3.0 also.
 

Ksec

Commendable
Aug 21, 2016
6
1
1,510
I am more interested in the 10 Gbps Ethernet. I thought consumers were moving to NBASE-T, which is 5 Gbps on CAT5e/CAT6. Have they managed to bring the power consumption and cost down on 10 Gbps Ethernet?
 

tiagoluz8

Reputable
Jul 21, 2015
56
0
4,630


Yeah, but that weird mobo is a server mobo. I'm talking about consumer stuff; that will take a while.
 

jasonkaler

Distinguished
"Mobile, Internet of Things (IoT), and other battery-operated devices will benefit from low-power states and a new focus on burst performance. "

Really? How many mobile and IoT devices use PCIE?
None that I know of.
 

bit_user

Polypheme
Ambassador
It's used in at least some mobile SoCs.

Don't confuse PCIe bus with PCIe slots. The bus can be used as an electrical interconnect standard to connect different chips on a board, or even different hardware blocks inside a SoC.
 

jasonkaler

Distinguished


I guess I just haven't worked on any high-enough-spec IoT devices. I'm still stuck on I2C and SPI.
 

bit_user

Polypheme
Ambassador
Here's one:

http://www.anandtech.com/show/9330/exynos-7420-deep-dive/2

The cool thing is that you could theoretically take such SoCs, slap them on a board, and use the PCIe interface for a SATA controller instead.
 
"If you look closely, there is a single 20-pin power connector like any other modern motherboard. There are also four 8-pin and two 6-pin power connectors."
Am I the only one who, no matter how many times I look closely, can only find a 20-pin for the motherboard and two 8-pins, one for each CPU???
 

JonnyDough

Distinguished
Feb 24, 2007
2,235
3
19,865
It might be noted that while a card could use fewer lanes for a specific task, the extra headroom makes it possible to add features to a card. For instance, graphics cards became sound cards as well; processing and outputting both graphics and sound required more bandwidth than PCIe 2.0 was able to provide. Here's a thought: what new features might new cards combine? SSD/GPU?
 
