ASRock H97 Anniversary Motherboard Review

This would be a good board for a bunch of PCIe -> SATA adapters in a large-scale NAS... Four PCIe-to-quad-port SATA cards can be had for about $20 each, so along with the on-board SATA ports you'd be able to put together a 22-drive setup.
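
For reference, a quick sanity check on that drive count (just a sketch in Python; it assumes the H97 chipset's six SATA ports and four hypothetical quad-port x1 cards):

```python
# Drive-count math for the proposed build (a sketch; assumes the H97
# chipset's six SATA 6Gb/s ports and four hypothetical quad-port cards).
ADDIN_CARDS = 4        # one PCIe-to-SATA adapter per x1 slot
PORTS_PER_CARD = 4     # quad-port SATA adapters, ~$20 each
ONBOARD_SATA = 6       # H97 chipset SATA ports

total_drives = ADDIN_CARDS * PORTS_PER_CARD + ONBOARD_SATA
print(f"Total drives: {total_drives}")  # 4*4 + 6 = 22
```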
 

InvalidError

Titan
Moderator

That would be grossly inefficient and slow, since each card would only have an x1 PCIe connection. You would be much better off with a motherboard with x4 slots and controller boards with 16+ ports each. Yes, those cost more than $20, but if you can afford to spend $3,000 on HDDs + $100 in add-in boards, I think you can afford a $500 controller.
 
I realize that these things take a long time to get out and there are a lot of things in the pipeline. This would have been great information a year ago, but it's less useful today, especially with the G3258 being past its 'best by' date, although it's still a great chip.
 

RazberyBandit

Distinguished
Dec 25, 2008
2,303
0
19,960
"Cons: Full ATX width"

Seriously? How is that a con?
What's funny is that it's not 9.6" (244mm) wide, as per the ATX standard. It is built closer to the widths of the Flex-ATX or DTX standards, which measure 194mm/7.6" and 208mm/8.2", respectively. (If I owned the board, I'd measure it, like the author should have.)

The dead giveaway that this isn't a standard 12" x 9.6" ATX board is the fact that it only has 6 mounting holes - it would have 9 if it were in fact a full-size ATX board. My personal moniker for boards built to such dimensions is ATX-Thin.
 

mac_angel

Distinguished
Mar 12, 2008
657
133
19,160
ASRock seems to make decent products, but their customer service and warranty support are poor. It's next to impossible to reach anyone, and you get redirected all over. I have one of their gaming motherboards that I sent back weeks ago and haven't gotten anything back since. Any time I write them, they say it's 'out of stock'. Maybe because I'm in Canada? Either way, really bad support. Never again.
 
I agree that this socket is now "old," but I have no control over the length of the publishing queue. Soon I'll be submitting H170 reviews so as to be more current.
I list the full ATX width as a con because there are many boards with similar features (all but the multiple PCIe x1 slots) that are considerably smaller. In many settings it doesn't matter, but where it does, you can get most of the same features on a mITX board now.
 
Oh, and as to ASRock's customer service, for my part I've been satisfied. I bought their Z77E-ITX second-hand some years ago. It died (popped VRM; running stock, but in a cramped case with a hot GPU). Although I bought it without a warranty, ASRock replaced it for $50, and dealing with them was straightforward and easy.
 

Crashman

Polypheme
Former Staff
Actually, you're both right, but Onus is more right. The old standard for computers was horizontal desktops and racks, so top to bottom is "width" and front to back is "depth".

I try to avoid this confusion by not using the word "width" when describing a motherboard.

Joe's comments concern the availability of similar features in Micro ATX models. He assumes that you won't need five x1 slots and that two would do. He's probably right.
 

InvalidError

Titan
Moderator

Personally, the only add-in board I have put in my last two PCs is a GPU, and even that might go away once CPUs get HBM/HMC and IGPs that leverage it. I wish there were more decent and reasonably priced mATX cases; full ATX seems like such a waste of space for a regular single-GPU desktop system.
 


It's entirely possible that it would be limited in some regards, but your math and logic are both off.

First, what would be the difference between a 4-port SATA x1 card and a 16-port SATA x4 card? You're still running four drives per PCIe lane, so your overall bandwidth is the same. The difference is that a 16-port x4 card is far more expensive.

Next, the overall bandwidth. A single PCIe x1 (v2.0) lane is 500MB/sec one way. With four drives per card (or per lane), that works out to (assuming no card overhead; PCIe overhead is already accounted for) 125MB/sec available per drive. That may not seem like much, but the fastest HDDs out there hit around 190MB/sec - I would say the average is in the 150MB/sec range. Yes, you're leaving performance on the table in that regard.
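
A quick back-of-the-envelope check of that per-drive figure (a sketch; it uses the same no-overhead simplification as above):

```python
# Per-drive bandwidth behind one PCIe 2.0 x1 link (a sketch; assumes
# no controller overhead, matching the simplification above).
PCIE2_X1_MB_S = 500    # usable one-way bandwidth of a PCIe 2.0 lane
DRIVES_PER_LANE = 4    # quad-port card in an x1 slot

per_drive = PCIE2_X1_MB_S / DRIVES_PER_LANE
print(f"{per_drive:.0f} MB/s per drive")  # 125 MB/s

# Versus typical HDD sequential speeds (~150 MB/s average, ~190 MB/s peak):
print(f"~{150 - per_drive:.0f} MB/s left on the table for an average drive")
```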

The thing you're NOT considering, though, is the inherent limitation of your network interface. You're maxed out on network transfers at 1Gb/sec; gigabit Ethernet is your absolute limiting factor at 125MB/sec. So, comparing the overall PCIe bandwidth available (4 PCIe x1 slots @ 500MB/sec = 2GB/sec overall ideal peak bandwidth, depending on how the drives are RAIDed), you have way more drive bandwidth than network bandwidth. Even though you're leaving some drive bandwidth on the table, even if you dropped a 10Gbit Ethernet card in the x16 slot, the drive bandwidth would still exceed the network bandwidth by a fair margin (1.25GB/sec network bandwidth vs. 2GB/sec drive bandwidth).

The only way you'd really lose out in this situation is if you were running it as a JBOD with 10Gbit Ethernet (1.25GB/sec Ethernet into a 500MB/sec PCIe x1 lane) - a highly unlikely situation for your average home NAS builder, considering the expense of 10Gbit hardware and the demands placed on the average NAS. Anyone who's running that many drives (8-16) would be using a RAID10 setup for speed and redundancy.
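
Putting the whole comparison in one place (a sketch that just restates the numbers above; all values are ideal peaks):

```python
# Network vs. aggregate drive bandwidth for the proposed NAS (a sketch
# restating the figures above; all values are ideal one-way peaks).
GBE_MB_S = 125          # gigabit Ethernet: 1 Gb/s ~= 125 MB/s
TEN_GBE_MB_S = 1250     # 10GbE: ~= 1.25 GB/s
X1_SLOTS = 4
PCIE2_X1_MB_S = 500     # per PCIe 2.0 x1 slot

drive_side_peak = X1_SLOTS * PCIE2_X1_MB_S  # 2000 MB/s across all cards
single_link_peak = PCIE2_X1_MB_S            # 500 MB/s for one card (JBOD case)

print(f"Drive-side peak (RAID across cards): {drive_side_peak} MB/s")
print(f"GbE ceiling:                         {GBE_MB_S} MB/s")
print(f"10GbE ceiling:                       {TEN_GBE_MB_S} MB/s")
print(f"Single x1 link (JBOD worst case):    {single_link_peak} MB/s")
```

With RAID spanning the cards, the 2GB/sec drive-side peak exceeds either network link; only the JBOD-over-10GbE case leaves a single 500MB/sec x1 link as the bottleneck.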
 


Actually, it looks very similar in dimensions to a board I picked up for my NAS a few months ago: a Gigabyte B85M-HD3, which is a mATX board, BUT it only has the depth (front to back, with the back being the rear I/O panel) of an ITX board. It was quite handy, as it went into a short-depth rackmount chassis and left room for the ICY DOCK hot-swap bay. The previous mATX board was a little deeper, and things were interfering.

Still - you're right, this is more of a Flex-ATX-style board than a true ATX. It has the ATX height and slots, but it is significantly narrower and doesn't use that third row of mounting screws that even a standard mATX board takes.
 

This, for the most part, although I've put in an occasional wireless card. I consider USB dongle antennas to be next to useless, so if wireless is a need, I'm going to want a card.
One possible exception is drive-space limitation in small cases. Most meet the minimum for general use, which IMHO is a system SSD, a data HDD, and an optical drive (which is why I consistently grouse about motherboards including only two SATA cables). A fully loaded primary system will have a system SSD, a pair of HDDs in RAID1 for data, another HDD for backups, plus the optical drive. That may require an ATX tower.
 

akula2

Distinguished
Jan 2, 2009
408
0
18,790
I reckon those comparison boards aren't good enough. Don't get me wrong: with the same CPU, I built a few dozen executive machines for whatever I'm doing. I chose these boards:

ASRock Z97 Anniversary
MSI Z97 PCMate
MSI Z97 Guard Pro (with DisplayPort)

Later, with the same CPU, I also built a few gaming machines using this solid board:
Asus Z97 Pro Gamer ATX board (with ROG features and quality).

Perhaps you might consider adding the Asus H97 Pro Gamer as the last board in your series, because I know many folks who don't bother much with OCing.
 

Crashman

Polypheme
Former Staff

A reader would like us to do some PCH PCIe testing. We know the theoretical bandwidth limit, but there has to be more to test if you're a PCIe geek, right? And it looks like you have a great board for it!
 
Hmmm, what would you do, create a big RAID0 with controllers in each slot, and test throughput? I don't have half the equipment or space to test that. Any chance that ChrisR does, since he does drive testing? Or you in the "main" lab?
 

Crashman

Polypheme
Former Staff
I really don't know. But it would probably need to include the impact of several high-bandwidth devices on things like latency on the same board's network controller. Know anyone geek enough?
 
Hmmm, my friend Frederick Brier would be geek enough, but probably does not have the time; he's always involved with multiple projects, and is also mostly a software guy.
Does ChrisR have any thoughts on the subject?
I'm also thinking you'd need to test with multiple CPUs to see if there's any bottleneck there with a mere Pentium.
 

CRamseyer

Distinguished
Jan 25, 2015
426
11
18,795
Personally, I would try to get a protocol analyzer (http://teledynelecroy.com/protocolanalyzer/protocolstandard.aspx?standardid=3&capid=103&mid=511) so you can measure actual PCIe bandwidth and latency for each board. I've borrowed a system from LeCroy before for testing SATA SSDs.

In the scenario above, all or most of the PCIe lanes are consumed. I would be worried about the option ROM (OROM) capacity of the motherboard. I haven't run into the issue with server boards and HBA/RAID cards, but I do have a problem with several PCIe NICs. Shifting all of those storage cards to a consumer-class motherboard could cause a problem. It's one of those try-it-and-see things.
 

tuppydog

Honorable
Feb 18, 2016
6
0
10,510
Joe: "Mining on GPUs is no longer practical;" is a completely false statement. You should have added that to Mine Bitcoin on GPU is no longer practical. Other Alt-Coins like Ethereum and Dash (which at the moment can only be mined with CPUs and GPUs), have been VERY profitable last year and especially in Jan/Feb 2016. I can't wait to try this board out, as the old H81 BTC Asrock board was a hit or miss board, with severe quality issues. I hope this new board is better (can't believe they didn't add wi-fi!), good review sir, just don't forget that GPU mining has been recently resurrected from the dead with these upcoming new coins!
 

tuppydog

Honorable
Feb 18, 2016
6
0
10,510
Also, I meant to add that this new board is probably an upgrade to the ASRock H81 BTC because of so many complaints and quality problems. I could be wrong, but this board seems designed exclusively for mining. Just an opinion...
 
I probably should have specified "BTC mining." Still, I'd consider any other alt-coin mining to be gambling, as they are all highly speculative with no established consumer marketplace (unlike BTC, with the possible exception of Litecoin).

 
