MSI Calls Out Gigabyte for "Not True PCIe 3.0"

Also, Z77 motherboards will be part of the Panther Point chipset family, which is built for Ivy Bridge processors. From the Wikipedia write-up, as I understand it, you will also be able to install an Ivy Bridge CPU into a Cougar Point motherboard, e.g. Z68, P8P67, etc.
 
Always liked Gigabyte boards for builds, but good point, MSI, and the twin-fan MSI GTX 560 Ti I have works great.
Will have to look hard at MSI on my next build, if I ever do one again.
 
[citation][nom]otacon72[/nom]Sounds like a Gigabyte employee...lmao.[/citation]
No, it sounds like an informed and objective individual who actually looked at the facts and at the context of a marketing scheme between two competitors (in this case). You don't seriously think that Gigabyte would falsify such basic data as technical specifications, do you? And you don't seriously believe that MSI constructed an objective campaign to discredit one of their most viable competitors, do you?
 
[citation][nom]dgingeri[/nom]What would be most useful would be having a bridge chip that would take the PCIe 3 x16 interface from the CPU and bridge it to 4 PCIe 2 x8 or 2 PCIe 2 x16 slots for CrossfireX or SLI 3 way capability without a major loss of performance.[/citation]

It's confusing. I'd rather have a specific board for a specific setup. Once you try to make one be everything for everyone, you end up paying a premium for features you won't use.
 
[citation][nom]sykozis[/nom]wow....this coming from a company with a laughable track record...So, any actual proof of MSI's claims....or is all the "proof" supplied by MSI? Also, did anyone else notice that the pics show different motherboards? The first comparison shows the Gigabyte GA-Z68X-UD7-B3, while the 5th pic shows the Gigabyte GA-P67A-UD4-B3....and on Gigabyte's website, there's no claim of PCIe 3.0 compatibility. In fact, for the GA-P67A-UD4-B3, Gigabyte actually states that all PCI Express slots conform to Gen2 specs....surprisingly enough, the same notice is posted on the spec page for the GA-Z68X-UD7-B3... So, exactly where is Gigabyte claiming these 2 boards support PCIe 3.0 when they list them as being PCIe Gen2 on the spec pages for both products? From what I can find, it seems MSI is simply grabbing at straws and using fraud as a marketing tool. Gotta love the purely false claim on the last page of their marketing scheme..."Only MSI Has True PCI Express GEN3"....so, I guess the PCIe3.0 controllers on newer ASRock boards are just a figment of my imagination....So....where is this software that MSI used to "test" these slots and determine that Gigabyte is using PCIe Gen1 slots on the GA-P67A-UD4-B3... I'd like to publicly bitch slap MSI by proving the results are false.[/citation]
Before you even post a comment, maybe you should do some research for yourself.

Before you even think that I'm an MSI fan, I'm not. The last time I bought an MSI motherboard was over six years ago. It had problems running in SLI from Day 1. The micro-stuttering was just horrible.

MSI is disputing GIGABYTE's claim of PCIe 3.0 Ready Motherboards in GIGABYTE's news release from August 8, 2011. Read it for yourself here:

http://www.gigabyte.com/press-center/news-page.aspx?nid=1048
 
Gigabyte supports ONLY 16 GB/s, ahahahah, how ancient is that... Seriously though, do we even utilize 1/4 of that?
 
Honestly, it just looks like whoever typed up the Gigabyte news thingy made an error. I browsed through a few of the boards, and none of them actually *say* that they're Gen3 compatible. The spec pages specifically state:
All PCI Express slots conform to PCI Express 2.0 standard.
And on none of the pages other than the Sniper 2 could I find the little logo and associated description in the features tab. It's possible that they quickly took them all off, but I somehow doubt that.
 
Well, I just had a look at the Gigabyte website, and besides the G1.Sniper 2, none of them are listed as being PCIe 3.0 ready.
 
Never went MSI before. I had Biostar before, but every time I flashed the BIOS it failed on me, and every time they expected me to pay for shipping. Besides my one Abit board, Gigabyte and ASUS have NEVER failed me once, and I've bought roughly 10 boards from them combined.
 
I was gonna buy MSI for my next computer anyway, thanks to Overclock Genie. I'm HOPING my computer holds out until Spring 2012 for this. I do want PCIe 3.0 components in it as well. Hopefully I'm giving manufacturers enough time to screw up the first run and do a better job on the second.
 
[citation][nom]internetlad[/nom]Either way i've never owned a gigabyte product, I've felt they're third tier products at a second tier price. I've been running an Asus board for years and it hasn't caused me problems yet![/citation]

FYI, the first (and, until now, last) ASUS board I tried was based on the 845D chipset, the P4-XP-X series. It had a particularly annoying issue: if you connected an IDE device and powered up the computer, it wouldn't detect any IDE devices. If you disconnected some of the devices and reconnected them, the board would recognize them again. And I was the "lucky" one out of the many who tried ASUS that year and had various problems. This caused ASUS to be out of my country's market for quite some time; they only returned recently. On the other hand, I've used three Gigabyte boards with no issues. The 955X Royal is 6 years old and still going strong.

Don't get me wrong, I respect ASUS and their revolutionary board designs, and I was thinking of trying my luck with them again. But it'll take a lot to convince me they're as reliable as Gigabyte.
 
So the fastest graphics cards use up a whole 8-9 lanes of the 16 available to them. That means the fastest cards will only need about 4 lanes (once the cards themselves support PCIe 3). It really won't make a hill of beans difference for most people, but it will make SLI and CrossFire much easier to manage for quad+ graphics configurations. Not to mention that supporting more lanes will be cool... just seemingly unnecessary in the immediate future.
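Rough back-of-the-envelope numbers, as a quick sketch (assuming 5 GT/s with 8b/10b encoding for Gen2 and 8 GT/s with 128b/130b for Gen3, and ignoring packet/protocol overhead):

    # Approximate usable bandwidth per lane, per direction, in MB/s
    gen2_lane = 5_000 * 8 / 10 / 8      # 5 GT/s with 8b/10b  -> ~500 MB/s
    gen3_lane = 8_000 * 128 / 130 / 8   # 8 GT/s with 128b/130b -> ~985 MB/s

    print(8 * gen2_lane)   # x8 Gen2 -> ~4000 MB/s
    print(4 * gen3_lane)   # x4 Gen3 -> ~3938 MB/s, essentially the same pipe

So an x4 Gen3 link is roughly equivalent to the x8 Gen2 link that today's fastest single cards can just about saturate.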
 
> It really won't make a hill of beans difference for most people ....


I disagree, for a few fundamental reasons.

The PCI-E 3.0 spec published last November is very clear about increasing the signaling rate of each PCI-E 3.0 lane to 8 GT/s (up from 5 GT/s), and replacing the aging 8b/10b encoding with a 128b/130b "jumbo frame" scheme at the bus level.

This means that each PCI-E 3.0 lane will support a bandwidth of 1.0 Gigabyte per second in each direction -- hence, a MAX HEADROOM of 32 GBps through an x16 Gen3 edge connector (i.e. 16 GBps in each direction).
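To put rough numbers on that (a quick sketch that treats the 128b/130b encoding as the only overhead and assumes the 8 GT/s lane rate above; real-world throughput will be a bit lower once packet framing is counted):

    # Assumed: 8 GT/s per lane, 128b/130b encoding, no protocol overhead
    per_lane_MBps = 8_000 * (128 / 130) / 8   # ~985 MB/s per lane, each direction
    x16_each_way  = 16 * per_lane_MBps        # ~15,750 MB/s each direction
    x16_aggregate = 2 * x16_each_way          # ~31,500 MB/s both directions combined
    print(per_lane_MBps, x16_each_way, x16_aggregate)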

Where I anticipate this making a bigger impact is on high-performance storage subsystems that begin with an x16 Gen3 edge connector, and "fan out" to a multitude of very fast SSDs e.g. with multi-lane connectors like the current SFF-8087 and/or variations on that theme now being considered by the PCI-E SIG.

The fastest 6G SSDs are already bumping against the ceiling of the current SATA-III standard of 600 MB/second (6 GHz / 10).

The Gen3 standard adds only a 2-bit sync header (think of it as 1 start bit and 1 stop bit) for every 128 bits (16 bytes) of data, in effect removing almost all of the 20% encoding overhead that 8b/10b imposes, leaving roughly 1.5% overhead on the PCI-E 3.0 bus.
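In other words (a small sketch; the 8 Gb/s-with-128b/130b "SATA-IV" figure at the end is purely speculative, per the paragraphs above and below):

    # Encoding overhead: fraction of raw line bits spent on the code itself
    overhead_8b10b    = 1 - 8 / 10       # 20%
    overhead_128b130b = 1 - 128 / 130    # ~1.5%
    print(overhead_8b10b, overhead_128b130b)

    # SATA-III today: 6 Gb/s line rate with 8b/10b -> 600 MB/s payload ceiling
    sata3_MBps = 6_000 * 8 / 10 / 8
    # Hypothetical 8 Gb/s link with 128b/130b encoding -> ~985 MB/s
    sata_next_MBps = 8_000 * 128 / 130 / 8
    print(sata3_MBps, sata_next_MBps)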

So, look at the approaching horizon and look forward to larger and larger SSDs with a standard interface speed of 8 GHz, instead of 6 GHz.

aka SATA-IV perhaps?

Hopefully by that time, there will be an option in both the SATA and SAS standards to extend the 128b/130b "jumbo frame" into the cable transmission protocol, and thus also into add-on RAID controller cards.

This could be easily implemented by storage manufacturers with a simple jumper, as is now the case with Western Digital HDDs that require a jumper to override the factory default and downgrade the interface speed to 150 MBps.

At the other end of the data cables, hopefully a simple BIOS setting will enable 128b/130b "jumbo frames" over SATA and SAS transmission cables, and/or an on-board jumper will suffice as an interim measure.

What these 2 changes to the storage ecosystem accomplish, in effect, is a logical / topological extension of the PCI-E 3.0 bus standard outwards to all storage subsystems, on an "as needed" basis, with perhaps at most an engineering limit to the length of such data transmission cables -- not unlike the differences among CAT-5, CAT-5e and CAT-6 Ethernet cables.


I hope this helps.


MRFS
 


READ IT BEFORE MAKING ANY CRITICISM, GUYS !
 
[citation][nom]MRFS[/nom]> It really won't make a hill of beans difference for most people ....I disagree, for a few fundamental reasons.The PCI-E 3.0 spec published last November is very clear about increasing the clock rate of each PCI-E 3.0 lane to 8 GHz, and replacing the aging 8b/10b frame with a 128b/130b jumbo frame at the bus level.This means that each PCI-E 3.0 lane will support a bandwidth of 1.0 Gigabyte per second in each direction -- hence, a MAX HEADROOM of 32 GBps through an x16 Gen3 edge connector (i.e. 16 GBps in each direction).Where I anticipate this making a bigger impact is on high-performance storage subsystems that begin with an x16 Gen3 edge connector, and "fan out" to a multitude of very fast SSDs e.g. with multi-lane connectors like the current SFF-8087 and/or variations on that theme now being considered by the PCI-E SIG.The fastest 6G SSDs are already bumping against the ceiling of the current SATA-III standard of 600 MB/second (6 GHz / 10). The Gen3 standard adds only 1 start bit and only 1 stop bit for every 128 bits (16 bytes) of data, in effect removing almost exactly 20% of the data transmission overhead on the PCI-E 3.0 bus.So, look at the approaching horizon and look forward to larger and larger SSDs with a standard interface speed of 8 GHz, instead of 6 GHz.aka SATA-IV perhaps?Hopefully by that time, there will be an option in both the SATA and SAS standards to extend the 128b/130b "jumbo frame" into the cable transmission protocol, and thus also into add-on RAID controller cards.This could be easily implemented by storage manufacturers with a simple jumper, as is now the case with Western Digital HDDs that require a jumper to override the factory default and downgrade the interface speed to 150 MBps.At the other end of the data cables, hopefully a simple BIOS setting will enable 130/128 "jumbo frames" over SATA and SAS transmission cables, and/or an on-board jumper will suffice as an interim measure.What these 2 changes to the storage ecosystem accomplish, in effect, is a logical / topological extension of the PCI-E 3.0 bus standard outwards to all storage subsystems, on an "as needed" basis, with perhaps at most an engineering limit to the length of such data transmission cables -- not unlike the differences among CAT-5, CAT-5e and CAT-6 Ethernet cables.I hope this helps.MRFS[/citation]

Yeah, sure, but that's completely different from the intended or implied usage from the motherboard manufacturers. They're hyping it all for gaming and graphics cards. We know SSDs don't really enhance gaming to a "must have" level. The storage options all sound great for other arenas, but I think it's out of context for this article and this discussion/comments.

I very much respect your contribution(s) to the community, so don't get me wrong. I enjoyed reading it. /peace
 