Biostar Tries Its Hand At Z170 But Overlooks Design Flaw

Status
Not open for further replies.
Honestly, even if Biostar made the board in another size, there's not much reason I'd choose it over Asus, Gigabyte, ASRock, or even MSI...
 
So... what, exactly, is the point of having multiple NICs? Even if you had dual connections through your router, that would only help your LAN file transfer speeds (which can already reach Gigabit on standard routers) and won't affect your Internet speed. All Internet traffic is limited by whatever broadband connection you have to your house, which in most cases is nowhere near Gigabit, and the connection just ends up split between both NICs anyway, so you don't get any increased speed.
 
An attempt was made.

Maybe they could have dropped the oversized plastic aesthetics, "military-style" heatsinks, onboard buttons, and LED debug indicator. But I like the brown/black color scheme!
 


The one benefit is that you can have one NIC doing LAN transfers and another doing WAN transfers.

I also find it sort of pointless, but they're not the first to do it, nor will they be the last.

There is one benefit: if you ever get above 1Gbit Internet, then, since there are no on-board 10Gbit NICs for consumers yet, you'd be able to utilize more bandwidth.
 
The Z170 chipset gives you 26 PCI-e lanes, the CPU another 16, total of 42 lanes.

6 are reserved for USB3
2 are capable of handling 2 gigabit ports
M2x4 x 2 is another 8 lanes, IF they're both in use simultaneously

Worst case then is 26 lanes remaining. That leaves enough, with some simple switches, to handle a full x8/x8/x8 tri-GPU setup. YES, it does mean whoever is putting it together will actually have to plan out the build appropriately, but why would they not give people the option? Doing it this way makes it impossible to utilize the flexibility that the new Skylake setup allows.
 
Avus is right. Biostar is so close to the bottom of my motherboard choices for the systems I build that, whatever the color scheme, format, or product, it's not gonna be a concern unless ASUS, Gigabyte, MSI, EVGA, or ASRock (in that specific order; I rarely see myself going past the second option) can't get a specific thing right. The chances of that happening across all the manufacturers mentioned are very low, BUT... if it happens, I might turn to... that's right... INTEL, and only after that to Biostar.
 
The Z170 chipset gives you 26 PCI-e lanes, the CPU another 16, total of 42 lanes.

6 are reserved for USB3
2 are capable of handling 2 gigabit ports
M2x4 x 2 is another 8 lanes, IF they're both in use simultaneously

Worst case then is 26 lanes remaining. That leaves enough, with some simple switches, to handle a full x8/x8/x8 tri-GPU setup. YES, it does mean whoever is putting it together will actually have to plan out the build appropriately, but why would they not give people the option? Doing it this way makes it impossible to utilize the flexibility that the new Skylake setup allows.

No, check our review of the Skylake CPUs. We covered the Z170 chipset in this review too.
http://www.tomshardware.com/reviews/skylake-intel-core-i7-6700k-core-i5-6600k,4252-2.html

The Z170 chipset has 20 PCI-E 3.0 lanes. There is a diagram of it from Intel in addition to what is actually written. 6 are not reserved for USB 3; the chipset has dedicated USB 3.0 hardware built in, which doesn't require PCI-E. None of them are actually reserved for anything, but you are right that one or two would be dedicated to the LANs.

You are right that if both are in use, the M.2 slots take up an extra 8 lanes. The end result is that if, worst comes to worst, you only have maybe 10 PCI-E 3.0 lanes available. It is more than enough to do an 8x 8x 8x setup though. I think the designers are just too accustomed to working with the Z77-Z97 chipsets. They are used to running everything into the CPU.
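As a sanity check, that worst-case lane arithmetic can be worked out explicitly. This is just a sketch using the figures from this thread (20 usable chipset lanes, two x4 M.2 slots, two Gigabit NICs), not numbers pulled from an official Intel datasheet:

```python
# Worst-case Z170 chipset lane budget, using this thread's figures.
CHIPSET_PCIE_LANES = 20   # Z170 PCIe 3.0 lanes available to devices
M2_SLOTS = 2              # both M.2 slots populated simultaneously
LANES_PER_M2 = 4          # each M.2 slot is PCIe x4
LAN_LANES = 2             # roughly one lane per Gigabit NIC

remaining = CHIPSET_PCIE_LANES - M2_SLOTS * LANES_PER_M2 - LAN_LANES
print(remaining)  # 10 lanes left over for everything else
```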
 


Intel stopped manufacturing Motherboards a while ago...
http://www.tomshardware.com/news/intel-ceases-motherboard-production-haswell,20631.html

Every manufacturer makes shitty boards and great boards. Claiming ASUS is better than the other brands is like saying Ford makes the best trucks; it means you might miss out on some fantastic products.
 


I hear you, and I'm not saying Biostar doesn't make amazing products. I'm sure there's plenty of stuff worth getting from Biostar, but as far as my personal motherboard preferences go, I've never been dissatisfied with ASUS or Gigabyte, or with ASRock more at the lower end of the spectrum. It's just me. Buy Biostar, buy ASUS, buy whatever fits your needs and budget.
 
This article is what's flawed, not Biostar's design.

Yes, on paper the Z170 chipset offers 20 PCIe 3.0 lanes. But it's idiotic to suggest using them for Crossfire (let alone SLI). Because between the CPU and the chipset is only a DMI 3.0 connection, and its bandwidth is equivalent to PCIe 3.0 x4. And that has to serve EVERYTHING connected to the chipset. Hooking a GPU up to the chipset PCIe lanes is inferior to giving it a dedicated PCIe 3.0 x4 connection straight from the CPU. So the x8/x4/x4 design Biostar picked on its Z170X board is superior to what the author is suggesting here.
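For reference, the "DMI 3.0 is equivalent to PCIe 3.0 x4" point can be put in numbers. A rough sketch, assuming the standard 8 GT/s rate and 128b/130b encoding of PCIe 3.0:

```python
# Approximate usable bandwidth of DMI 3.0, which is electrically a
# PCIe 3.0 x4 link: 8 GT/s per lane with 128b/130b line coding.
TRANSFER_RATE = 8e9       # transfers per second per lane
ENCODING = 128 / 130      # 128b/130b coding efficiency
LANES = 4                 # DMI 3.0 link width

bytes_per_s = TRANSFER_RATE * ENCODING * LANES / 8  # bits -> bytes
print(f"{bytes_per_s / 1e9:.2f} GB/s")  # ~3.94 GB/s per direction
```

That ~4 GB/s ceiling is shared by everything hanging off the chipset, which is the core of the argument above.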
 


Ah - you are correct. It has 26 of what are technically HSIO lanes, the first 6 of which are allocated to USB3, the rest are PCI-e lanes. So - indeed, 20 lanes, + 16 lanes direct from the CPU. It still isn't a great layout though, as without having a way to actually utilize all three x16 slots via a PLX chip in a minimum triple x8 configuration, why bother having them?

At that point they're better off using mATX boards, routing x16 or x8/x8 to two full size slots, and calling it a day.
 
This article is what's flawed, not Biostar's design.

Yes, on paper the Z170 chipset offers 20 PCIe 3.0 lanes. But it's idiotic to suggest using them for Crossfire (let alone SLI). Because between the CPU and the chipset is only a DMI 3.0 connection, and its bandwidth is equivalent to PCIe 3.0 x4. And that has to serve EVERYTHING connected to the chipset. Hooking a GPU up to the chipset PCIe lanes is inferior to giving it a dedicated PCIe 3.0 x4 connection straight from the CPU. So the x8/x4/x4 design Biostar picked on its Z170X board is superior to what the author is suggesting here.

Have you heard of PLX chips? Those hardware PCI-E switches that essentially give you more PCI-E lanes? They have been used on several motherboards over the years. They don't increase the actual bandwidth coming from the CPU, but in hardware they multiplex the available bandwidth across additional connections to provide increased performance.

Intel has essentially done this for years. The DMI 2.0 interface lacked sufficient bandwidth to support 8 full PCI-E 2.0 lanes, and yet it could run with all 8 lanes in use, plus everything else connected to the chipset. I'm suggesting that the chipset will be able to handle about as much as the CPU's PCI-E controller. Even if configured to use 16 PCI-E lanes from the chipset, it likely won't reach the same aggregate bandwidth as the CPU's PCI-E lanes. However, it is certainly capable of handling more than 4 lanes' worth of PCI-E 3.0.
 


Well, see, that is a good point to bring up, but see the comment I posted in response to Sakkura. The chipset is essentially functioning as a PLX chip. That is how it takes what should not be enough bandwidth to feed everything connected to it and yet somehow manages to. It probably can't handle all of the lanes used at the same time at their full potential, but you will have devices only using a portion of their available bandwidth, or sitting idle after booting and using basically none. It is doubtful that anyone will ever actually wire something into all 20 lanes at the same time, but 8 lanes for a GPU plus managing everything else should be within its ability.
 


That's completely irrelevant to this discussion. The question is how you assign the available lanes when you're making a board without expensive PLX chips. And as I explained above, Biostar did that the correct way. The chipset only has the bandwidth of PCIe 3.0 x4, so criticizing Biostar for giving a card a dedicated PCIe 3.0 x4 connection directly to the CPU instead of a shared PCIe 3.0 x4 connection through the chipset, with other things contending for the bandwidth, is ignorant.
 
So... what, exactly, is the point of having multiple NICs? Even if you had dual connections through your router, that would only help your LAN file transfer speeds (which can already reach Gigabit on standard routers) and won't affect your Internet speed. All Internet traffic is limited by whatever broadband connection you have to your house, which in most cases is nowhere near Gigabit, and the connection just ends up split between both NICs anyway, so you don't get any increased speed.
Several in fact!
Anyone building a server (which is a great afterlife of an old game rig) will want 2 NICs because even if they are not ganged together for 2Gbps of throughput, they can serve up a full 1Gbps to 2 different devices on the network at the same time.
Similarly, if you share a lot of files between your machines, then you can push more data in more directions without bottlenecks.

Then there is true ganged mode, where you use both NICs together for a single flow of data. 2Gbps sounds like a lot compared to Internet speeds... but we are really only talking about ~200MB/s of throughput, which is slow for large file transfers for anyone editing video or doing other LAN-intensive tasks... but it is a heck of a lot better than the ~100MB/s you get with a single connection. Can't wait for 10Gbps Ethernet to become 'a thing' for home PCs.
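Those throughput figures are easy to sanity-check with a quick conversion. This shows raw line rate only; real transfers lose some of it to protocol overhead, which is where the ~100MB/s practical number comes from:

```python
# Convert Ethernet line rate (Gbps) to raw throughput in MB/s.
def gbps_to_mb_per_s(gbps):
    """Raw line rate in MB/s (1 MB = 1e6 bytes), before protocol overhead."""
    return gbps * 1e9 / 8 / 1e6

print(gbps_to_mb_per_s(1))   # 125.0 raw; ~100 MB/s in practice
print(gbps_to_mb_per_s(2))   # 250.0 raw; ~200 MB/s ganged, in practice
print(gbps_to_mb_per_s(10))  # 1250.0 raw; why 10GbE is so appealing
```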

Yet another use is to have your PC act as a sort of router/gateway for the Internet. Use the PC to filter out garbage (attacks, ads, etc.) and buffer useful information (software updates, CSS, webpage artwork, etc.), and you can get a much better Internet connection without having to pay for faster Internet. Doing this, you can also filter outbound traffic so that kiddos who are supposed to be asleep don't stay up all night streaming YouTube videos (or *other* videos) from under their covers.

Having 2 NICs isn't exactly for everyone (I would personally prefer one 10GbE NIC), but it certainly has its uses.
 
Correct me if I am wrong, but I don't think you can do x8/x8/x8 on Z170... at least not without a feature chip that supplies more PCIe lanes.

My understanding of the setup is that you have 16 lanes dedicated to video which can be allocated to a single 16x port, 2 8x ports, or 1 8x and 2 4x ports. The remaining pool of 20 lanes can only be grouped together up to a 4x connection. While there are more lanes available for use, you cannot allocate more than 4 of those lanes to any single device. This means that the best config you could hope for with 3 cards is an 8x8x4 setup. Still better than splitting up the 16x graphics into 8x4x4... but not by much. Plus, setting up a 8x8x4 connection may require the manufacturer to sacrifice other features that may want to use that pool of 20 lanes, so that is likely what they were trying to avoid.

According to Anandtech, this is also why you are limited to 3 AMD cards on the new platform: CrossFire can run on a x4 slot, whereas Nvidia's SLI requires at least an x8 connection.
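If I have the rules right, the allocation described above can be sketched like this. The splits are as described in this thread (not pulled from an Intel datasheet), so treat it as an illustration of the logic, not a spec:

```python
# Skylake PCIe slot-width rules as described in this thread: the CPU's
# 16 lanes split one of three ways, and the chipset's remaining pool
# can only be grouped up to x4 per device.
CPU_LANE_SPLITS = [(16,), (8, 8), (8, 4, 4)]
MAX_CHIPSET_GROUP = 4

def three_card_config():
    """Best 3-card layout: the widest 2-way CPU split plus a chipset x4."""
    cpu_split = max((s for s in CPU_LANE_SPLITS if len(s) == 2), key=min)
    return cpu_split + (MAX_CHIPSET_GROUP,)

print(three_card_config())  # (8, 8, 4) -- the x8/x8/x4 best case above
```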
 
Avus is right. Biostar is so close to the bottom of my motherboard choices for the systems I build that, whatever the color scheme, format, or product, it's not gonna be a concern unless ASUS, Gigabyte, MSI, EVGA, or ASRock (in that specific order; I rarely see myself going past the second option) can't get a specific thing right. The chances of that happening across all the manufacturers mentioned are very low, BUT... if it happens, I might turn to... that's right... INTEL, and only after that to Biostar.
But sadly, Intel gave up their motherboard division a while back. They were not the prettiest of boards... but they were rock stable and would run for years. The school I am working at is just now starting to have mobo issues with its Intel-board Pentium D systems, which have been running non-stop for the last 10 years in all manner of heat and humidity during the summer and bitter cold in the winter. Maybe now that a few are dying off, I can convince them to upgrade to i3 and i5 systems.
 
So the design "flaw" is that their entry-level boards have the first PCIE slot running at 16x and the second running at 4x. Let's look at some entry-level boards from the other major manufacturers, shall we?

ASRock Z170 Pro4 LGA 1151: First PCIE slot running at 16x, second running at 4x.

MSI Z170A PC MATE LGA 1151: First PCIE slot running at 16x, second running at 4x.

GIGABYTE GA-Z170-HD3: First PCIE slot running at 16x, second running at 4x.

ASUS Z170-P D3 LGA 1151: First PCIE slot running at 16x, second running at 4x.

So in other words, Biostar's "flawed" design is the exact same design as every other major manufacturer's. Why is it a flaw when Biostar designs its entry-level board the exact same way as everyone else? Did they forget to pay their kickback or something?

And this, my friends, is exactly why getting hardware reviews from companies that get paid by the companies they are reviewing is a mistake.
 
What a horrible article title...

The only flaw here was calling this a flaw at all. Skylake was designed with an x8 + x4 + x4 PCIe VIDEO limitation. Biostar implemented it in the exact same fashion as the so-called big boys (ASUS, Gigabyte, ASRock, MSI, eVGA, etc.) did.

According to the original review's diagram and wording, the CPU and chipset's PCIe lanes are completely independent. All video PCIe traffic would route directly through the CPU's 16 PCIe lanes, while all remaining PCIe devices (M.2, LAN, etc.) would route through Z170's 20 (see quote below) PCIe 3.0 lanes.

That's simple enough.

From the review article:
Six of 26 are used up by USB 3.0 (which is where "up to 20 PCIe lanes" comes from)
So Rookie MIB was right: Z170 technically does have 26x PCIe, but USB 3.0 automatically chews up 6 of 'em.

It used to be that only Kevin would write headlines like this...
 