
Intel's 100 Series Platforms Feature Less Connectivity Than You Might Expect

Status
Not open for further replies.

electric_mouse

Reputable
Sep 30, 2015
2
0
4,510
0
No big surprise here... SemiAccurate covered this back before the Broadwell launch.

Part of the point of Skylake and Broadwell is to smash AMD and NVIDIA by reducing the number of PCIe lanes available to make it much less compelling to use a discrete graphics card rather than just settling for integrated graphics. Intel wants you to just play Candy Crush.
 

InvalidError

Titan
Moderator
Most people use only a fraction of the total connectivity available on any given CPU and chipset configuration, so most people won't notice the difference.

Since the SERDES are practically the same between PCIe, USB3 and SATA, expect Intel (and AMD) to use even more HSIO/multi-purpose serial ports in the future instead of dedicated standard-specific lanes to let board manufacturers decide what mix of IOs to offer. Ultimately, we may end up with chipsets that have 30 universal lanes shared any which way between the chipset's PCIe, SATA, USB3(.1), Ethernet and other media access controllers.
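The pooled-lane idea above can be sketched as a toy allocator. Everything here (the 26-lane pool size, the interface names, the `allocate_hsio` function) is an illustrative assumption for demonstration, not a real chipset specification:

```python
# Toy model of a flexible HSIO lane pool: the board designer picks an IO mix,
# and the only constraint is the total size of the shared pool.
def allocate_hsio(pool_size, requests):
    """Return the requested allocation if it fits in the pool, else None."""
    if sum(requests.values()) > pool_size:
        return None  # board design asks for more IO than the pool can supply
    return dict(requests)

# A board maker choosing its IO mix from an assumed 26-lane pool:
board_a = allocate_hsio(26, {"pcie": 16, "sata": 6, "usb3": 4})   # fits
board_b = allocate_hsio(26, {"pcie": 20, "sata": 6, "usb3": 4})   # oversubscribed
print(board_a)
print(board_b)  # None
```

The point of the sketch is that with a universal pool, a SATA-heavy NAS board and a PCIe-heavy gaming board could ship the same silicon with different allocations.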
 

dgingeri

Distinguished
Dec 4, 2009
2,123
1
20,460
212
I noticed this when they first came out.

The worst thing is the arrangement of the slots. If all the SATA ports are in use, there is no way to get more than one M.2 PCIe x4 slot, which uses PCIe ports 9-12. To add a second, the motherboard manufacturer would have to disable two SATA ports to free up PCIe lanes 17-20. In addition, the chipset lanes cannot be combined into an x8 slot at all. This setup severely limits how the I/O can be used. The desktop chips, and the Xeon E3 series along with them, are badly restricted in what hardware can be paired with them.
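The mutually exclusive groupings described above can be modeled as a small conflict check. The lane-to-function mapping below is a simplified assumption built from the port numbers mentioned in the post, not an official Intel pinout:

```python
# Assumed (illustrative) mapping of features to HSIO lane numbers.
LANE_GROUPS = {
    "m2_slot_1": set(range(9, 13)),    # PCIe ports 9-12
    "m2_slot_2": set(range(17, 21)),   # PCIe ports 17-20
    "sata_5_6": {17, 18},              # assumed to share lanes with m2_slot_2
}

def compatible(features):
    """True if no two requested features claim the same HSIO lane."""
    used = set()
    for feature in features:
        lanes = LANE_GROUPS[feature]
        if used & lanes:
            return False  # lane conflict: features are mutually exclusive
        used |= lanes
    return True

print(compatible(["m2_slot_1", "m2_slot_2"]))  # True: disjoint lane groups
print(compatible(["m2_slot_2", "sata_5_6"]))   # False: second M.2 costs SATA ports
```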
 

jimmysmitty

Champion
Moderator


You are not considering that M.2 is a relatively new interface; before, only one was available. Two is pretty overkill since one is capable of connections up to 32Gbps, so what would be the downside of having two fewer SATA ports?

This is consumer end as well. Most consumers are not going to max out SATA ports and M.2.

The extra lanes were never able to be used to make an x8 slot, and the only things that benefit from that are a GPU or a server-grade PCIe SSD. But who is going to pay through the nose for one of those and then buy a lower-end setup?

If you want more than two GPUs, you can get the EVGA Z170 board with the PLX chip, but again that starts to move into the realm of X99, which is made for high-end enthusiasts who want more connections anyway.
 

CyranD

Distinguished
May 6, 2013
26
0
18,530
0


That makes no sense at all. Discrete video cards use the PCIe lanes from the CPU, not the chipset. What this article is about does not affect AMD/NVIDIA video cards at all.

Skylake CPUs have 20 PCIe 3.0 lanes, exactly the same as Haswell/Broadwell. It's the chipset PCIe lanes that have changed.
 

SteelCity1981

Distinguished
Sep 16, 2010
1,129
0
19,310
12
If Intel would just use QPI for their mainstream CPUs, this ~4GB/s chipset link bandwidth limit would be a non-issue, but they won't; they keep QPI reserved for Intel's Extreme Editions and make you pay a premium for that luxury...
 

CyranD

Distinguished
May 6, 2013
26
0
18,530
0


I am pretty sure the Extreme Edition still uses DMI to connect to the chipset. QPI is used on multi-CPU motherboards to connect CPU to CPU.

It used to be used on the Extreme Edition (X58) to connect to the memory controller, but that's no longer needed since it's integrated into the CPU.
 
Yeah, the bandwidth between the CPU and chipset does concern me. They got by with just 2 GB/s of bi-directional bandwidth on DMI 2.0, so the new DMI 3.0 may adequately keep the chipset fed. That is one area in which AMD greatly surpasses Intel, as they use HyperTransport on all of their motherboards.
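For reference, the DMI figures under discussion follow from standard PCIe per-lane rates (DMI 2.0 behaves like a PCIe 2.0 x4 link, DMI 3.0 like PCIe 3.0 x4); the helper below is just that arithmetic:

```python
# DMI bandwidth per direction, derived from PCIe per-lane signaling rates:
# PCIe 2.0 runs at 5 GT/s with 8b/10b encoding, PCIe 3.0 at 8 GT/s with
# 128b/130b encoding. DMI is a 4-lane link in both generations.
def dmi_bandwidth_gbs(gen, lanes=4):
    if gen == 2:
        per_lane_bytes = 5e9 * (8 / 10) / 8     # 0.5 GB/s per lane
    elif gen == 3:
        per_lane_bytes = 8e9 * (128 / 130) / 8  # ~0.985 GB/s per lane
    else:
        raise ValueError("unsupported DMI generation")
    return lanes * per_lane_bytes / 1e9

print(round(dmi_bandwidth_gbs(2), 2))  # 2.0  GB/s each direction
print(round(dmi_bandwidth_gbs(3), 2))  # 3.94 GB/s each direction
```

So DMI 3.0 nearly doubles the link, but it is still far less than the aggregate bandwidth of everything the chipset can host.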
 

SteelCity1981

Distinguished
Sep 16, 2010
1,129
0
19,310
12



I am pretty sure the Extreme Edition still uses DMI to connect to the chipset. QPI is used on multi-CPU motherboards to connect CPU to CPU.

It used to be used on the Extreme Edition (X58) to connect to the memory controller, but that's no longer needed since it's integrated into the CPU.

All E/EP editions starting with Nehalem use QPI, and the X58 didn't have a memory controller on its chipset; if you recall, the memory controller was integrated into the Nehalem architecture.
 

CyranD

Distinguished
May 6, 2013
26
0
18,530
0



I am pretty sure the Extreme Edition still uses DMI to connect to the chipset. QPI is used on multi-CPU motherboards to connect CPU to CPU.

It used to be used on the Extreme Edition (X58) to connect to the memory controller, but that's no longer needed since it's integrated into the CPU.

All E/EP editions starting with Nehalem use QPI, and the X58 didn't have a memory controller on its chipset; if you recall, the memory controller was integrated into the Nehalem architecture.
You're right about the memory controller and that it has QPI. I am right that QPI is not used to communicate with the X99 chipset. Look at any block diagram for X99, similar to the first picture in this article: it shows the CPU connected to the X99 through DMI 2.0. All the other lines coming from the CPU are QPI, but not that one.

 

chalabam

Honorable
Sep 14, 2015
110
7
10,695
2
SemiAccurate said it from day zero, but no major review site said a word about it.

That's why it all seemed fishy to me.

I congratulate Tom's Hardware for being late, but at least publishing the issue.
 



Actually, SemiAccurate has not previously published this information. SemiAccurate pointed out that DMI 3.0 was insufficient to feed all of the Z170's chipset connectivity at the same time. They never discussed HSIO lanes in depth, and they never stated that the SATA and USB 3.0 ports come from the HSIO lanes. As such, they never said that the number of PCI-E lanes was less than 20. So, while they did previously discuss the bandwidth issue, that is only a very small aspect of this article and not the key issue being discussed.
 

FlayerSlayer

Distinguished
Jan 21, 2009
181
0
18,680
0
Will NVLink from Nvidia help at all? Or did I just ask something dumb?
NVLink is not coming to consumer boards. It's only for servers that need 5+ GPUs for high-end processing, mostly machine learning or simulations.
 

James5mith

Distinguished
Jun 26, 2006
7
0
18,510
0
There is no big secret here. HSIO was part of most sites' coverage of the Z170 chipset.

It was literally on the first page of the Anandtech review: http://www.anandtech.com/show/9485/intel-skylake-z170-motherboards-asrock-asus-gigabyte-msi-ecs-evga-supermicro

"The Z170 chipset features a massive Flex-IO hub. In the previous Z97 chipset, there are a total of 18 Flex-IO ports that can flip between PCIe lanes, USB 3.0 ports or SATA 6 Gbps ports. For Z170, this moves up to 26 and can be used in a variety of configurations..."
 

canadian87

Distinguished
Apr 19, 2009
11
0
18,520
1
Here's a question for everyone,

Why can't we get bigger computers? We keep going smaller and smaller, what if we took the same power we were capable of and start expanding the overall size? I wouldn't mind my computer being 2x the size it is now (I have a case capable of holding an eATX and any graphics card) and yet I've always wondered why we don't have processors that are just larger, instead of same size with smaller insides.

This was the best place I could think to ask.
 

InvalidError

Titan
Moderator

Two words: defect density.

The bigger the chips are, the more likely each individual die is to have potentially fatal defects, so there are limits to how large chips can be before the reject/failure rate forces drastically higher prices, especially in the cost-sensitive consumer space. On the motherboard side, once the CPU is hooked up to all of its external peripherals (GPU, RAM, misc. IO) there isn't much else for it to do, so no reason to "go bigger" there.
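The defect-density argument above is often summarized with a simple Poisson yield model, yield ≈ exp(-D·A). The defect density used below is an assumed example figure for illustration, not real fab data:

```python
import math

# Poisson yield model: the probability a die of a given area has zero fatal
# defects, for an assumed defect density. Doubling die area cuts yield much
# faster than linearly, which is why big chips carry big price tags.
def die_yield(area_cm2, defects_per_cm2=0.2):
    return math.exp(-defects_per_cm2 * area_cm2)

for area in (1.0, 2.0, 4.0):
    print(f"{area} cm^2 die: {die_yield(area):.1%} yield")
```

With these example numbers, a 4 cm² die yields roughly half as many good chips per wafer area as a 1 cm² die, before even counting how many fewer candidates fit on the wafer.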

If you really want to "go bigger," you could get a 2/4/8/16-socket server motherboard and put 12-18-core CPUs in it, if you have $150,000 to spare.
 

dgingeri

Distinguished
Dec 4, 2009
2,123
1
20,460
212


M.2 PCIe drives are new and expensive compared to SATA drives. Having two can be an asset to offset that a little: it would give someone the chance to have two small drives, 64GB or 128GB each, to put the OS on one and apps or games on the second. Only having one means putting the OS on it and putting games or apps on a SATA drive, which would restrict the performance enhancements such a drive would provide.

The whole situation between the Z170 "consumer" level and X99 "high end" systems means features that are not needed are forced on someone who needs the "high end" I/O capabilities. No games today can use a 6- or 8-core processor, yet that is all that's available for the X99 systems, while the low end doesn't have enough I/O to handle the performance enhancements that would help a gaming system, such as multiple M.2 PCIe x4 SSDs and multiple video cards. In doing this, Intel has forced high-end gamers to pay an extra $300-400 for extra cores and memory controllers they don't need, and to do without a helpful aspect of the "consumer" level gear: Quick Sync on the IGP.

What gamers need is a CPU with 32 PCIe lanes, 4 cores, and dual channel DDR4. A basic Quick Sync capable IGP would be handy as well. This could fit in the price gap between the two. Right now, we have a "too low" and "too high" situation. Intel has screwed us over with this strategy.
 