H87 chipset:



Intel’s 8-series non-overclocking chipset, without SLI support. This is a great chipset for non-overclockers and single-GPU users (dual-card CrossFireX is possible at most, though it's not ideal performance-wise). Boards in this list are ranked by form factor and quality, since a mini-ITX board cannot support dual GPUs (Crossfire). Features like eSATA have been given due weight as well, and quality has been taken into account: most boards here have no persistent issues.

NOTE: OCing is supported on H- and B-series LGA1150 MoBos when a Pentium G3258 is installed.

NOTE: No H87 board supports XMP memory (i.e., higher than 1600 MHz).

Helpful links:

Overclocking a Pentium G3258 on H81, B85, H87 & H97 Chipsets

ASUS Enables Overclocking on H97, H87, B85 and H81 Series Motherboards

NOTE: CrossfireX capability is not the best, performance-wise, as the x4 link for the second card does not allow it to reach its full potential, reducing performance.

Key:

Red: MSI
Blue: ASROCK
Green: GIGABYTE
Gold: ASUS
Black: EVGA

________________________________________

Tier One: Very high quality. These MoBos are the best H87 boards you can find for their purpose (e.g., a mini-ITX board for home theater/media center builds) and have great storage features like eSATA. Appropriate boards are CFX-capable, and they're solid quality-wise.


ATX Form Factor:

H87-PLUS
H87-PRO
H87-G43
G43 GAMING

mATX Form Factor:

CSM-H87M-G43
H87M-Plus/CSM
H87M Pro4 (ASM version as well)
H87M
H87M-D3H
H87M-PRO

Mini-ITX:

H87I-PLUS
H87N (TN version as well)
H87M-ITX

________________________________________

Tier Two: Good quality. Good general-purpose MoBos with a single x16 slot and decent build quality. Good for everyday use and budget non-OCers; not recommended for high-end builds.

H87M-E
H87M-HD3
H87M-E35
H87 Pro4


________________________________________


NOTE: This thread is a part of the article series 'Motherboard tier list'.
 


Isn't this exactly what the recent GPU myth article seemed to refute? That an x4 link is more than enough and does not seem to hinder ANY card, since usage was VERY low?
 


Which article?
 
Math Geek, nowhere can I see an x4 lane bandwidth comparison; the article focuses on x8 vs x16 lanes:

http://www.tomshardware.com/reviews/graphics-performance-myths-debunked,3739-3.html

We have a great article regarding this on Puget Systems:

http://www.pugetsystems.com/labs/articles/Impact-of-PCI-E-Speed-on-Gaming-Performance-518/

x8 vs x16 makes minimal difference, but it's not so with x4: considerable performance drops have been seen in x8/x4 CFX setups vs x8/x8 setups.
 
Sorry for the late reply, but yes, there is a noticeable difference between x4 and x8 or x16.

@MeteorsRaining, the same issue with PCIe lanes running at x4 speed can be found on lower-end Z87 and Z97 motherboards, so you might want to go back and edit your lists.
 
Thanks! All suggestions and inputs are greatly welcomed :)

You might also want to check out the collab page:
http://www.tomshardware.com/answers/id-2369313/motherboard-tier-list-collaboration-interested.html

And the development files:
https://drive.google.com/folderview?id=0B9hz2FDneBfdTDA5QmNZZUNSZXM&usp=sharing

All the members who have helped make this list better will get due credit in the credits section of the main thread. Once all linked threads are made, I'll update that thread with the names of the members who've helped make this list what it is.
 


I read your link and it does not speak about x4 as I was hoping to see. They also attach words like "meaningful", "notable", and "faster" to 1.5 FPS differences; I fail to see how 1 FPS in any situation is "meaningful". I'm really trying to understand this and would love to see some similar numbers from actual x4 tests to back the "will cripple a card" statement. I've read as much as I can find but have yet to see any real-world data to back the PCIe beliefs.

Your Puget article uses two Titans at 4K resolution and shows no practical FPS difference to suggest that the PCIe link is hurting performance by bottlenecking the data flow, even at x8 Gen 2. A need for more bandwidth would surely show when cutting it to a quarter, going from x16 Gen 3 to x8 Gen 2, with a drop in FPS or some obvious performance issue. Hence, the Titan is using less than 100% of an x8 Gen 2 slot. If it were using more than 50% of the x8 Gen 2 link, another cut in bandwidth to x4 would show in dropped FPS. It would have to be utilizing close to 100% of the x8 slot to suddenly go from full performance to "not usable and crippled".

This lets me believe some small bottlenecking at x4 with a Titan is possible (probable, even). But is your MoBo article going to suggest that someone wanting to SLI two Titans do so on an H87 platform and suffer such performance drops, rather than on a newer 40-lane CPU that gives each Titan the CPU's full attention (which it clearly does not need)? Someone using SLI on an H87 will be running much older and slower cards, which the data suggests won't suffer from the x4 limitation, since two Titans at 4K will barely feel it (again, no actual proof, just the common belief we all have).

This is from the 4th page of the myth article: "Our final graph shows, for the first time, directly (versus indirectly via FPS measurements), how extraordinarily small PCIe bandwidth requirements are for a modern graphics card (assuming the readings are meaningful and not "unreliable" as Nvidia says). PCI Express 2.0 gave us 8GB/s of bi-directional throughput from 16-lane links. Both cards fail to hit 10% of that, requiring a sustained figure under 0.8GB/s. That is a mere 5% of what you have available to a single card on a Haswell-based platform."

This is two cards together (a 750 Ti and similar, which is what someone might try with an H87 MoBo). Some quick math tells me that x4 Gen 2 is half of x8 Gen 2 bandwidth, which is half of x16 Gen 2. So 10% usage at x16 = 20% usage at x8 = 40% usage at x4 on a Gen 2 PCIe slot. I'd like to see some real-world numbers to go with this, but I have never seen any actual proof that "the x4 PCIe lane will cripple the second graphics card's performance". 40% usage of the available bandwidth is hardly crippled.
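To make that quick math explicit, here's a minimal sketch. The 0.8 GB/s sustained figure is from the quoted passage; the per-lane rate is the standard PCIe 2.0 spec (~500 MB/s per lane, per direction).

```python
# Scaling the article's observed PCIe traffic across lane counts.
# 0.8 GB/s sustained is quoted from the article; everything else
# is simple proportion, not a measurement.

PCIE2_PER_LANE_GBPS = 0.5  # PCIe 2.0: ~500 MB/s per lane, per direction

def utilization(observed_gbps: float, lanes: int) -> float:
    """Fraction of a PCIe 2.0 link with `lanes` lanes that the traffic fills."""
    return observed_gbps / (PCIE2_PER_LANE_GBPS * lanes)

observed = 0.8  # GB/s sustained, per the quoted passage

for lanes in (16, 8, 4):
    print(f"x{lanes}: {utilization(observed, lanes):.0%} of available bandwidth")
# x16: 10% of available bandwidth
# x8: 20% of available bandwidth
# x4: 40% of available bandwidth
```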

Again, if there is some real-world data to refute what I am saying, please show it to me. This is a very common issue/question, and it would be nice if everyone were on the same page.
 
Sorry, but you've misunderstood my statement. I meant that Puget Systems already has a great article regarding the impact of PCIe generation and speed (x8 and x16) on gaming performance. Never did I say that 1 FPS is a considerable difference; it has been asked here a thousand times whether "PCIe 2.0 will bottleneck a 3.0 card", and the answer is always a big no. I never challenged that.

OK, for the second part, take it like this. Simply put, PCIe 3.0 at x16 provides, say, around 400% of what current enthusiast cards need. PCIe 2.0 at x16 is then 200% of the current need. Now, if you take x8 lanes away, it's still at 100% of the need, so no issues there. But once you drop another x4, which effectively cuts another 50%, it caps the card's potential. Which is why we say CFX at x4 cripples performance; artifacting due to that is also widely observed.
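A toy version of that headroom argument, with the "400% of need" figure treated as an assumption for illustration rather than a measurement:

```python
# Toy model of the headroom argument above. The "400% of need" figure
# for PCIe 3.0 x16 is an assumption for illustration, not a benchmark.

CARD_NEED = 1.0              # normalize the card's bandwidth need to 1.0
PCIE3_X16 = 4.0 * CARD_NEED  # assumed: 3.0 x16 offers ~400% of the need
PCIE2_X16 = PCIE3_X16 / 2    # 2.0 halves per-lane throughput -> 200%

for lanes in (16, 8, 4):
    capacity = PCIE2_X16 * lanes / 16
    verdict = "fine" if capacity >= CARD_NEED else "capped"
    print(f"PCIe 2.0 x{lanes}: {capacity:.0%} of need -> {verdict}")
# PCIe 2.0 x16: 200% of need -> fine
# PCIe 2.0 x8: 100% of need -> fine
# PCIe 2.0 x4: 50% of need -> capped
```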

Then, you're missing a key point: you cannot SLI with x4 bandwidth. Nvidia doesn't allow it; you need x8 at least, the reason being exactly what I said above: performance issues. Try CFX with 290Xs at x16/x4 or lower, and you'll have the choppiest gaming experience ever.

Sadly, the real-world benchmarks do not use enthusiast cards with the highest bandwidth, but I can give a couple of examples:

[Image: perfrel_1920.gif (relative performance chart)]


And this link (gaming graphs): http://www.behardware.com/art/imprimer/836/

Observe: the difference in performance is indeed not as significant as one might think. But what is not usually taken into account is that the cards being tested have much lower bandwidth than enthusiast-level cards like the Titan Black, GTX 980, or R9 290X.

The problem lies there. If you take an R9 270 and CFX it at x4, the performance drop vs x8 may not be considerable. But put a 290X in that slot and there is indeed a considerable performance difference. Now, a bit of common sense: if you can max out any game with 290X CFX at very high resolutions, why take a performance hit by getting an x8/x4 MoBo? The same applies to all other cards that support an x8/x4 dual-GPU config. The performance hit does cripple the experience.

In a nutshell, a 2-5 FPS hit with mid-range cards is not much, but a 15-20 FPS hit (with choppy performance) with high-end cards does hurt.
 


What you say about cutting the % usage is true, but the article shows that the 3.0 x16 slot is not 400% more than needed but 2000% (5% total usage observed with the 750 Ti). Ramp it up some for a high-end card and assume maybe 10% usage, and the slot is still giving 1000% of what's needed, so you're still at 125% of total need on a Gen 2 x4 link. Again, it does seem logical that x4 would impact a super card by offering only maybe 90% of its bandwidth need (a guess based on the available data), but anything less demanding seems able to run all-out, unimpeded.

I will read the new link when I get home today. I understand real-world results can/will differ from theory, and I hope to see the real-world data to go with the lab data.

What I got from the Puget article (real-world data) is that the Titan uses less than 100% of an x8 2.0 slot, which means it uses less than 25% of an x16 3.0 slot. How much less we don't know, but the data seems to show a lot less than 100%. So the 3.0 x16 slot offers at least 400% of the bandwidth a Titan needs, and most likely more, since I doubt the Titan is using 100% of the x8 2.0 slot.

Can someone please run these tests again at x4 so we can know for sure? I'm getting a headache trying to separate fact from what everyone "just knows".
 


Exactly, they do. You cannot take the 750 Ti as a reference and derive the R9 290X's bandwidth from it. They're not comparable: the 290X has a much larger memory bus and far higher bandwidth than the 750 Ti, about 86 GB/s vs 320 GB/s and 128-bit vs 512-bit, specifically. That's nearly four times, not two. That is why applying simple math to this stuff is not ideal. Because even if we did, my hypothesis would still hold, albeit at 500% instead of 400%. Big deal. If we go by that, x4 2.0 has around 65% of the bandwidth that's needed. So you see the "technical" bottleneck there.
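For reference, those memory-bandwidth numbers drop straight out of bus width and effective memory clock. A quick sanity check, assuming the published reference clocks (5400 MHz effective for the 750 Ti, 5000 MHz for the 290X):

```python
# Peak memory bandwidth = effective memory clock x bus width / 8.
# Clocks below are the published reference specs; factory-OC cards differ.

def mem_bandwidth_gbs(effective_clock_mhz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return effective_clock_mhz * 1e6 * bus_width_bits / 8 / 1e9

print(f"GTX 750 Ti: {mem_bandwidth_gbs(5400, 128):.1f} GB/s")  # 86.4
print(f"R9 290X:    {mem_bandwidth_gbs(5000, 512):.1f} GB/s")  # 320.0
```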

But it isn't like that! Look at the image I posted: the HD 5870 is miles below the 290X in terms of bandwidth required for full performance. So if your math (or even mine, for that matter) were correct, it should operate at 100% performance at x4, which it clearly doesn't. How many times have you used sites like GPUBoss or Game-Debate to compare GPUs?

If you have, then you're not doing it right. We always compare real-world benchmarks, not synthetic performance estimates. If you can't deny a couple of articles that presume the performance to be the same, then you ought not deny the hundreds of people who've tested it and experienced significant performance drops.

The same goes for the Puget article's data. You can't presume that x8 2.0 gives 100% performance, since in that scenario the data should've been exactly the same as x8 3.0, which it isn't. The difference is not considerable, yes, but if we go purely by the math, it contradicts itself. In no world can you say x4 will give x8-like performance.
 
I see most of the conversation is about GPUs on the H87 chipset, but I was looking for information on the ability to boot an OS from a PCIe x4 SSD on H87. I have the ASUS H87M-PRO, and in the BIOS there's the CSM option with a PCIe boot device option in the sub-menu, but so far all I get at boot-up is "The current BIOS setting do not fully support the boot device". I've already tried every possible setting in there. What am I missing?
 