Biostar Adds Another AM4 Crypto Mining Motherboard To Its Lineup

  • Thread starter: Guest
Status
Not open for further replies.
Why do they even bother sticking not one, but TWO Molex connectors on the motherboard? PCIe risers (which are required to get 6-card density) nearly universally include a power connector on a separate circuit board rather than pulling power from the motherboard slot. Maybe it's for the handful of miners that use the old ribbon risers?
 
Am I missing something here? These miners are using GPUs to mine, but this motherboard has only one x16 slot and the rest are x1. You're not going to get a great hashrate from one GPU.
 


That is indeed what was missing from my understanding :)

That makes sense, thank you.
 
Mining doesn't need an x16 PCIe port to be productive.

The only thing that matters is that the entire DAG (Directed Acyclic Graph) fits into GPU memory, as it currently does. Otherwise the device would have to store the remainder of the DAG in system RAM, making the speed of the port relevant and most likely slowing the hashrate.

Edit: Of course risers are being used. I thought that was self-evident lol.
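
To put a rough number on "fits into GPU memory", here's a back-of-envelope sketch in Python using the published Ethash spec constants. It's only an approximation: real clients also round the size down to a prime number of 128-byte rows, so this slightly overestimates.

# Approximate Ethash DAG size from the spec constants.
DATASET_BYTES_INIT = 2**30      # 1 GiB at epoch 0
DATASET_BYTES_GROWTH = 2**23    # ~8 MiB added per epoch
EPOCH_LENGTH = 30_000           # blocks per epoch

def approx_dag_size_gib(block_number):
    """Approximate DAG size in GiB at a given block height."""
    epoch = block_number // EPOCH_LENGTH
    return (DATASET_BYTES_INIT + DATASET_BYTES_GROWTH * epoch) / 2**30

# Once this crosses a card's VRAM (minus driver overhead), that card
# either spills to system RAM and slows down or drops off entirely.
for block in (0, 2_000_000, 4_000_000):
    print(block, round(approx_dag_size_gib(block), 2), "GiB")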
 


You only need an x1 PCIe connection for mining. Hashrate isn't determined by the number of PCIe lanes going to the card the way gaming performance is. I have a 6-GPU Ethereum rig using x1 PCIe-to-USB adapters into x16 riser boards on all 6 slots, including the x16 slot on the mobo, and they all hash at the same speed. The hardest part is getting the system to recognize all 6 cards. I had to disable all other onboard features except PCIe and the LAN connection: USB doesn't work, there's no audio, and serial and every SATA port except the boot drive's are disabled as well.
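
If you're fighting the same enumeration battle on Linux, a quick sanity check like this (assuming pciutils is installed; the match strings can differ by vendor and driver) tells you how many cards the board actually brought up:

import subprocess

def count_gpus():
    """Count PCI devices reported as VGA or 3D controllers."""
    out = subprocess.run(["lspci"], capture_output=True,
                         text=True, check=True).stdout
    return sum(1 for line in out.splitlines()
               if "VGA compatible controller" in line
               or "3D controller" in line)

print(count_gpus(), "GPUs enumerated")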
 
"So, why exactly are companies suddenly feeling the need to cater to miners? "

Because they stopped making the AM3-based solution at the same time those $20 Sempron processors stopped getting released?
 
AMD really needs to release the Excavator-based Athlon 950 and the APUs for AM4. As of now there's no way to get onto the AM4 platform cheap (sub-$100 CPUs) and upgrade to Ryzen later.
 
DEREKULLO

"Mining doesn't need an x16 pci-e port to be productive.

The only thing that matters is that the entire DAG (Directed Acyclic Graph) fits into gpu memory as it currently does or else the device would have to store the remainder of the DAG in ram making the speed of the port relevant and most likely slowing the hashrate."

So does this mean mining apps could benefit a lot from huge GPU memory?

I am curious, as a little-discussed feature of Vega is onboard NVMe RAID storage/cache, which Vega sees as GPU memory. For now it's 1 TB at a theoretical 8 GB/s max, but plenty of lanes have been allowed for on Vega's discrete Infinity Fabric, so rapid speed boosts seem possible: a 4-way stripe of 128 GB NVMe SSDs, for example, could affordably provide a theoretical 16 GB/s of "L2 GPU cache".

NB that the benchmark speeds we see from NVMe SSDs are in a PCIe 3.0 environment using PCIe 3.0 NAND storage devices.

Above, we have both the GPU and the NVMe cache on the same fabric, in very close proximity, and the restrictive PCIe protocol can be ignored in favor of the fabric's faster native protocol. Perhaps the NAND devices could be customized for that fabric protocol and preferred hardware interface, yielding significant speed advances for this new form of cache/memory.

NAND memory/cache could be a godsend for power consumption for miners. I don't know, but it sure sounds like it from what you say.

It's slow versus GPU RAM, and slower again than system RAM (which the Vega GPU can also use as cache), but not as slow or resource-draining as you would think, and it's HUGE and sits right next to the GPU on the card.

Sorry to go on.

Lastly, I should also note that all the fancy data shuffling required to provide the illusion of huge memory, by pooling layers of resources and managing them cleverly, is performed by a dedicated, powerful processor: the High Bandwidth Cache Controller. It all happens in the background and is essentially code-agnostic.
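
One caveat worth quantifying, tying back to derekullo's point about spilling the DAG: Ethash reads are random, so even a small fraction of accesses landing in a slower tier guts effective bandwidth. A back-of-envelope sketch (the ~480 GB/s HBM2 and 16 GB/s NVMe-stripe figures are illustrative assumptions, per the numbers above):

def effective_bandwidth(hit_rate, fast_gbps, slow_gbps):
    """Harmonic mix: average time per byte over fast hits and slow misses."""
    time_per_gb = hit_rate / fast_gbps + (1 - hit_rate) / slow_gbps
    return 1 / time_per_gb

# Assumed tiers: ~480 GB/s Vega HBM2 vs a ~16 GB/s 4-way NVMe stripe.
for hit in (1.0, 0.99, 0.95, 0.90):
    print(f"{hit:.0%} HBM hits -> "
          f"{effective_bandwidth(hit, 480, 16):.0f} GB/s effective")

Even a 1% spill rate cuts effective bandwidth by roughly a quarter, which is why keeping the whole DAG in VRAM matters so much for hashrate.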
 
I know most miners will use risers to separate power and cooling, but in theory you COULD run six Nano-sized GPUs on a single motherboard (remember that 7-gamer PC that came out some time ago? They even had a special 7-card liquid-cooling adapter made up). I guess my point is, using open-ended PCIe x1 slots wouldn't have killed them, would it?
 