Biostar Digs In With New AMD AM4 Crypto Mining Motherboard


arakisb

Ridiculous that it is virtually impossible to get a mini-ITX board for AM4 (other than Biostar's own mini-ITX, which is not in stock and comes from a company with an iffy reputation and shitty reviews for its other AM4 boards), yet these guys are announcing a niche mobo.

Thought at least ASRock would have released its AM4 mini-ITX by now, but still nothing after all this time.

Pretty sad that mobo makers are not stepping up given how many of us are keen to build small Ryzen systems.

Even the ATX Taichi, which is clearly the best X370 board, and about the only one I would consider if I were building ATX, is never in stock.

How can AMD sell Ryzens if there are only shitty-to-fair mobos available and no mini-ITX to speak of?

I know I am not buying mine until this changes.
 

buzznut47

A couple of things: for GPU mining I would pick the cheapest CPU possible. There's absolutely no need for 4C/8T, which makes Intel with a cheap dual core more attractive.
Also, why not include some riser cables as part of the package? You could easily sacrifice the I/O shield; no need for that. Put in the barest minimum sound option, since 8-channel HD audio is completely unnecessary. Onboard video would be useless as well, as most miners run headless units.

Personally, I would not bring this out for the Ryzen platform until some low-cost CPU options are available.

Great idea, poor implementation, and really bad timing. I understand wanting to jump on the mining bandwagon, but nobody will want to stick a $170 processor in a board meant to maximize profits when a $70 chip will do.
 
1) First, this board is really just an AM4 ATX board repurposed for crypto mining. Including audio and other unnecessary features is probably cheaper than paying an engineer to redesign the board and strip them out, not to mention retooling mass production.

2) There are cheaper CPUs coming, so I'm not sure why that's a complaint, nor should the company ignore Ryzen if there's demand.

3) Mini-ITX?
Making this board doesn't affect mini-ITX being released. The reason for the delay in mini-ITX is that the motherboard manufacturers are scrambling to support Ryzen. AMD did not give them much time to prepare, which is obvious from the UEFI/BIOS support issues.

Creating mini-ITX would not be the first priority, and of course those boards are coming. In fact, they'll be more stable at launch, since they'll all ship with newer UEFI firmware than the initial ATX and microATX boards had.
 

waltsmith

A lot of coins only support CPU mining at launch, which is traditionally when it's easiest to grab a bunch, and the more physical cores the better, since that's how CPU mining works. A 1700 with SMT disabled would actually be your best bang for the buck.
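
(To illustrate the shape of it, here's a toy proof-of-work search in Python that runs one hashing worker per core; with SMT off, cpu_count() is your physical core count. Real miners use optimized native code, so this is just a sketch of why physical cores map directly to CPU hashrate.)

    import hashlib
    import multiprocessing as mp

    def search(header: bytes, start: int, step: int, target: int, limit: int):
        # Each worker strides through its own slice of the nonce space,
        # so cores never duplicate work.
        for nonce in range(start, limit, step):
            digest = hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
            if int.from_bytes(digest, "big") < target:
                print(f"worker {start} found nonce {nonce}")
                return

    if __name__ == "__main__":
        n = mp.cpu_count()           # with SMT disabled, this equals physical cores
        header = b"toy block header" # made-up work unit, just for illustration
        target = 1 << 236            # deliberately easy difficulty so the demo finishes
        procs = [mp.Process(target=search, args=(header, i, n, target, 10_000_000))
                 for i in range(n)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()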
 


The low-cost Ryzen 3 CPUs will launch soon enough, and this is a motherboard announcement, so it makes sense to get out ahead of the CPU launch.

 

SpudmanWP

If it's all about the PCIe, then wait until AMD releases the Ryzen APU so that you can use the onboard video out. That frees up the x16 slot so you can hook up some PCIe expansion backplanes, which should allow this rig to handle 10+ GPUs (if not 20+).
 

AmX85

Is this a Biostar motherboard? The MOSFET heatspreader looks like Gigabyte's and the CPU fan like Intel's.

LOL
 
Guest

So not a single other person is worried about FIVE x1-speed PCI-e 3.0 connectors? How the hell is that even going to come REMOTELY close to utilizing a modern GPU? I'm saying, how will these connections provide even 25% of a video card's compute power?
 


It's a mining board. Mining has nothing to do with bandwidth.
 

timpster

Well, really, x1 on PCI-e 3.0 should give at least 3/4 of the performance, since most GPUs obviously don't even fully saturate x4 on PCI-e 2.0.
 

timpster

getochkn said:
Anonymous said:
So not a single other person is worried about FIVE x1-speed PCI-e 3.0 connectors? How the hell is that even going to come REMOTELY close to utilizing a modern GPU? I'm saying, how will these connections provide even 25% of a video card's compute power?

It's a mining board. Mining has nothing to do with bandwidth.

I don't understand (I was doing a test with that other comment; I hope it gets deleted). How does mining NOT need to use the card's full potential? How can an x1-speed PCI-e 2.0 link provide the performance required for a mining operation? Can you use just a really old card and have the same effect? Does it really not matter?
 


Mining only needs to send a tiny bit of information to the card; the card does its processing and sends the result back. The full potential of the card's processing power is still being used. It doesn't need a constant stream of 1920x1080 pixels at 144 fps, just a few bits to hash.
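
(Rough back-of-the-envelope sketch of that point in Python: a Bitcoin-style work unit is an 80-byte block header, 76 fixed bytes plus a 4-byte nonce, and the winning nonce is all that comes back, so bus traffic per work unit is double-digit bytes while the card burns through millions of hashes. The values here are toy placeholders.)

    import hashlib

    # Toy stand-in for one work unit: ~76 bytes of fixed header data,
    # plus a 4-byte nonce the miner varies. Only the header goes to the
    # card and only the winning nonce comes back.
    header = bytes(76)
    hashes_done = 0
    for nonce in range(100_000):    # on a real rig the GPU grinds these on-card
        hashlib.sha256(hashlib.sha256(header + nonce.to_bytes(4, "little")).digest()).digest()
        hashes_done += 1

    bytes_on_bus = len(header) + 4  # work in, nonce out (ignoring protocol overhead)
    print(f"~{bytes_on_bus} bytes crossed the 'bus' for {hashes_done:,} hashes of compute")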
 
Deleted member 217926


Pointless to delete a quoted comment. Also pointless to send 1000 alerts to the mods to delete a comment because you were confused. Please don't do that again. :heink:
 

timpster

getochkn said:
Mining only needs to send a tiny bit of information to the card; the card does its processing and sends the result back. The full potential of the card's processing power is still being used. It doesn't need a constant stream of 1920x1080 pixels at 144 fps, just a few bits to hash.

I bet you know what I'm about to ask now. What constitutes a difference in "hashing speed"? Also, let me make sure we're on the same page so far: the PCI-e bus speed is only relevant for transferring information to and from the GPU?

One more thing, this is INCREDIBLY interesting: http://www.tomshardware.com/reviews/pcie-geforce-gtx-480-x16-x8-x4,2696-6.html

There is ZERO noticeable performance difference in Crysis on PCI-e 2.0 at x4 speed! Isn't that amazing? So on all these AM4 boards that might have just one x16 3.0 slot plus a slower 2.0 x4 slot, the second slot doesn't cripple the GPU. That's too cool, and I never knew that before I looked at that article.

I'm now looking for a board that does dual x8 3.0 plus an x4 2.0 port, for three GPUs with an 8-core/16-thread AMD CPU. That's the dream anyway; I'll get there soon enough, though maybe not all those GPUs, but I will have the CPU for sure.
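
(For reference, here is the per-lane math from the PCIe spec, one direction and before protocol overhead; a few lines of Python make the comparison concrete.)

    # PCIe 2.0: 5 GT/s per lane with 8b/10b encoding  -> 5 * (8/10)    / 8 = 0.50 GB/s
    # PCIe 3.0: 8 GT/s per lane with 128b/130b coding -> 8 * (128/130) / 8 ~ 0.98 GB/s
    PER_LANE_GBPS = {"2.0": 5 * (8 / 10) / 8, "3.0": 8 * (128 / 130) / 8}

    for gen, lanes in [("2.0", 4), ("3.0", 1), ("3.0", 4), ("3.0", 8), ("3.0", 16)]:
        print(f"PCIe {gen} x{lanes:<2}: {PER_LANE_GBPS[gen] * lanes:5.2f} GB/s")

(Note that 2.0 x4 and 3.0 x2 both land around the same ~2 GB/s.)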
 


Hash speed is how fast the GPU is: a faster GPU with more processing cores gives a faster hashrate.

That's an old card. Newer cards can make more use of the bus speed, and you don't want to try to game with a 1080 Ti at x4. You still don't need a full x16 PCIe 3.0 link yet, though; no card can fully saturate that speed, but PCIe 4.0 is coming. lol.
 

timpster

getochkn said:
Hash speed is how fast the GPU is: a faster GPU with more processing cores gives a faster hashrate.

That's an old card. Newer cards can make more use of the bus speed, and you don't want to try to game with a 1080 Ti at x4. You still don't need a full x16 PCIe 3.0 link yet, though; no card can fully saturate that speed, but PCIe 4.0 is coming. lol.

That's an old article! Yes, I'm aware, but I think even PCIe 3.0 at x4 speed is plenty for a third GPU on the Ryzen platform: not for gaming, but for things like Folding@home, which is a really cool program.
 

timpster

I've just looked into their forum and made a post about this, and when it took me back to where I posted I saw there is already a wonderful thread explaining it in very heavy detail. There is actually a constant pull on the PCI-e bus of around 25% for my GTX 960. Also, the motherboard I had in mind runs that slot at 2.0 x4, which is equivalent to x2 at PCIe 3.0. That is too slow, and it will severely penalize performance and waste power.

I'll have to wait for Threadripper to get more PCI-e lanes; otherwise I'm limited to two x8 PCI-e 3.0 slots for full card utilization.
 

timpster

getochkn said:
Folding doesn't need a lot of bandwidth either, so it would be fine. Again, send a bit of data, get it processed, get a number back, move on.

My earlier post:
I've just looked into their forum and made a post about this, and when it took me back to where I posted I saw there is already a wonderful thread explaining it in very heavy detail. There is actually a constant pull on the PCI-e bus of around 25% for my GTX 960. Also, the motherboard I had in mind runs that slot at 2.0 x4, which is equivalent to x2 at PCIe 3.0. That is too slow, and it will severely penalize performance and waste power.

I'll have to wait for Threadripper to get more PCI-e lanes; otherwise I'm limited to two x8 PCI-e 3.0 slots for full card utilization.

I've just recently learned that on Windows the program uses more PCI-e bandwidth, while on Linux, because there is less CPU usage (which is great), there is considerably less demand on the PCI-e bus; even PCI-e 2.0 at x1 speed loses only a very small percentage on a GTX 1080!

To give credit, the following content was copied from:

foldy wrote:
So on Linux with a GTX 1080, gen3 x16 vs. gen3 x1, you lose only 4%, and another 4% when going down to gen2 x1. But on your particular mainboard you lose another 10% when using the motherboard (chipset) connection instead of the CPU connection.


[blockquote]Yep, that about sums it up! This experimentation has been informative for me, because in future I'll look for motherboards that allow both GPUs to be connected to the CPU (in an x8/x8 config).[/blockquote]

So now we know that the Linux drivers handle folding with less CPU usage (probably due to Nvidia's poor implementation of CUDA on Windows) and are much more efficient. I could most likely fold on more CPU cores as well, instead of dedicating one full core to every GPU, which would be throwing away some of my CPU money.
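
(Those percentages compound multiplicatively, so plugging foldy's figures in, the worst case stacks up like this; quick sanity-check arithmetic only.)

    # Compounding foldy's figures: -4% going gen3 x16 -> gen3 x1,
    # another -4% down to gen2 x1, and -10% more if the slot runs
    # through the chipset instead of the CPU.
    full = 1.0
    gen3_x1 = full * (1 - 0.04)
    gen2_x1 = gen3_x1 * (1 - 0.04)
    gen2_x1_via_chipset = gen2_x1 * (1 - 0.10)
    print(f"gen3 x1 (CPU-attached): {gen3_x1:.1%}")              # 96.0%
    print(f"gen2 x1 (CPU-attached): {gen2_x1:.1%}")              # 92.2%
    print(f"gen2 x1 via chipset:    {gen2_x1_via_chipset:.1%}")  # 82.9%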
 