Bitmain Announces Ethereum ASIC Miner, Shipping Mid-July

Status
Not open for further replies.

teodorehh

Distinguished
May 16, 2008
2
0
18,510
You only need six RX 580s with 1050W power consumption. And

(a) if Ethereum changes the algorithm, you just switch to new software, while the ASICs are useless;
(b) when Ethereum goes PoS, you sell the GPUs, while the ASIC goes in the garbage can.

...and BTW, Ethereum WILL change the algorithm if many of these get sold.
 

InvalidError

Titan
Moderator

I bet Bitmain has had thousands of those in its own mining farms for months already, just like it has used Bitcoin miners internally for months prior to making them available for sale.
 
Jumping into cryptocurrency mining with BC or ETH nowadays is like going back in time to the late 1800s heading westward for gold with a horse and buggy. You just aren't going to get rich after those who got there years prior already beat you to it.
 
Apr 3, 2018
4
0
10
Hey guys, been lurking @Tom's for a long time, love the site! :)

That said, here's a summary of a 13-gpu miner built around two 1000w PSUs:
miner3 10.0.0.13:3333 4 days, 06:38 364.61 MH/s 33110/1 (0.00%) 70C/72% 64C:72% 57C:72% 52C:72%
63C:72% 57C:72% 63C:72% 64C:72% 51C:72% 68C:72% 64C:72% 52C:72% 55C:72% us2.ethermine.org:4444
10.6 - ETH THIRTEEN

So it's 364 MH/s and pulling about 1900W at the wall. A bit under the efficiency of the Bitmain offering, but close. The difference is that getting a GPU build to this low a power draw, with a stable, high hashrate, requires an understanding of the unix kernel and how to undervolt the PCI bus and underclock the GPUs via the kernel's amdgpu driver. And the GPUs need to have their BIOSes modified, which, if you don't understand what you're doing, can leave you with "bricked" GPUs that are a PITA to recover. So it's quite a bit of work to build.
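For the curious, the underclock/undervolt step described above goes through the amdgpu driver's sysfs interface. This is only a sketch: the card path, power-play state indices, and the clock/voltage numbers are illustrative placeholders (not tuned values), and actually writing them requires root plus the overdrive bit enabled in `amdgpu.ppfeaturemask`.

```python
# Hedged sketch: underclock/undervolt an AMD GPU via the amdgpu
# driver's pp_od_clk_voltage sysfs file. State indices and the
# MHz/mV values below are placeholders, NOT recommendations.

CARD = "/sys/class/drm/card0/device"  # assumed card path

def od_commands(sclk_state=7, sclk_mhz=1150, sclk_mv=850,
                mclk_state=2, mclk_mhz=2000, mclk_mv=900):
    """Build the command strings pp_od_clk_voltage accepts:
    's <state> <MHz> <mV>' for core, 'm ...' for memory, 'c' commits."""
    return [
        f"s {sclk_state} {sclk_mhz} {sclk_mv}",
        f"m {mclk_state} {mclk_mhz} {mclk_mv}",
        "c",
    ]

def apply(commands, card=CARD):
    """Write the commands to the driver (requires root)."""
    with open(f"{card}/pp_od_clk_voltage", "w") as f:
        for cmd in commands:
            f.write(cmd + "\n")
            f.flush()
```

Building the strings separately from writing them makes the values easy to check before touching hardware.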

Bitmain's ASICs have a nice web interface. Pointy-clicky and done, you're mining.

And post #1 above isn't really accurate. The E3s won't be bricks if you know how to modify the software running them. Which is kinda like building a ROM for Android, not really that hard... just update the bundled mining software with the new algo, build, flash, poof. Already done for Bitmain D3s, google "BlissZ D3 firmware".

So the E3s are, IMHO, a good thing. They're not drastically faster than six AMD GPUs of similar power, but they are available to hobbyists that just want to mine. And they draw less power, which is always a good thing...

And I doubt PiRL will fork so you can always mine that, or ETC/ELLA/MVP/Something ETHASH-based.
 

Giroro

Splendid
An ASIC can be anything that you want it to be. You could even build a GPU into an ASIC - you could actually even make the argument that all GPUs are technically ASICs. They solved the "Fast Memory" problem by simply designing an ASIC that can use fast memory, and then providing that memory. The only real challenge is that an ETH ASIC probably operates very differently, so they would have been starting from scratch with their design. It can take about 2 years (and a lot of money) to design and produce an ASIC from scratch, and ETH became profitable about 2 years ago - so it didn't even take them an unreasonable amount of time.
A hard fork or a changed algo on ETH won't necessarily even break an ASIC, if they designed the ASIC to be re-programmable - but I imagine you would lose a lot of speed and efficiency if you did that.
FPGAs on the other hand can easily be reconfigured, but those devices are where you actually run into the memory problem. For the most part only FPGAs made for AI are designed for that kind of memory access, and you would have a hard time building a box around a single one of those for $800.
Even if they need to build a new chip for a new algorithm, the turnaround for that will be much faster this time. So ETH fans are going to find themselves in an arms race where there is a hard fork every 6 months, until investors get tired of it and the currency loses so much value that an updated ASIC is not worthwhile.

Sure, there are not many people in the world with the knowledge and motivation (and absurd bankroll) needed to design and build an ASIC, but they didn't exactly pull off some technical miracle here. Anybody who claims their currency is ASIC-resistant doesn't actually understand what an ASIC is, so you should not trust their opinion on such things. All the current ASIC-resistant currencies are simply FPGA-resistant (granted ASIC designs are often iterated and tested on FPGAs first, so that could slow ASIC development).
 
Apr 3, 2018
4
0
10


Interesting clarification, thanks!

My sum total of knowledge about GPU hardware: You can carefully cut the x16 slot down to x8 or x4 to fit in old slots on, say, 1U or 2U servers. A figurative hacksaw job to make an elegant AI solution for colocation. And, when replacing fans on non-blower models, the chip underneath says "ASIC" on it. Meaning a GPU is a less-specific "ASIC" than what carries the label of "ASIC"?

Further, I don't understand the difference between a programmable CPU feeding a GPU's ASIC vs. what's considered an FPGA. Haven't gate arrays fallen out of favor/use with the advent of CPU/GPU hardware and CUDA/OpenCL/OpenACC?

And you wimped out on proffering an opinion if the E3 is a good or bad thing ;-)
 

InvalidError

Titan
Moderator

Practically anything that isn't a basic generic function technically qualifies as an ASIC. I'd argue that even CPUs qualify, as they are integrated circuits designed for the specific purpose of executing the ISA they implement.
 

Xenocrates

Commendable
Mar 24, 2016
9
0
1,510


Why on earth would you cut down the card? It's far easier to cut the back off the PCIe slot, so that a full length card will fit in a 4x slot. That way the cards are undamaged, and can operate at full speed, and you only lose warranty on one component (if you're using risers, it's cheap), if the MB or riser doesn't already have the slot open at the back.

As far as FPGAs, yes and no. FPGAs are for development work, or for niche algorithms that need to be fast anyway. IIRC, Nvidia was simulating new architectures on FPGA hardware at one point. Once you have a good FPGA design, you could almost as easily send it to be fabbed for ASIC production, and likely get better speeds at cheaper unit prices once you overcome the cost of taping it out for lithography.
 

TJ Hooker

Titan
Ambassador

No, ASIC resistant is accurate in that it's difficult to speed up significantly with an ASIC. Which is why this miner is barely more efficient or powerful than a GPU mining rig. ASIC resistant =/= ASIC proof.

And a reprogrammable ASIC would be an FPGA...
 
Apr 4, 2018
2
0
10
I don't think they will hard fork. Take a quick look at https://antminere3.com/ They say it will only earn $3-4/day - this is a JOKE and nobody will buy it. If nobody buys it, what's the point of hard forking?! That's a huge development decision, and nobody is going to even buy this ugly thing... It's not even profitable until you keep it mining for 6+ months at current ETH value. What a JOKE, Bitmain. If you released something around 500-1500 MH/s, then I could see people purchasing it, but what's the point of this low-MH/s unit? Gross... https://antminerprofitability.com/antminer-e3-profitability/ also shows the profitability going WAY down if Bitmain ships a lot of these. This would be a huge waste of money for most people; I would STAY away.
 

DavidC1

Distinguished
May 18, 2006
494
67
18,860
Rachel.Liem:

It'll be extremely difficult to make a cheap miner for Ethereum that gets even 500 MH/s. You're talking about needing 5 TB/s of memory bandwidth to get 500 MH/s; for 1500 MH/s you'd need 15 TB/s.

The rumored F3 miner that was supposed to get ~220 MH/s uses 576 1Gbit DDR3 chips. 440 MH/s would need 1,100 chips, and 1760 MH/s would need 4,400 chips.
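The scaling claim above can be sanity-checked from Ethash's structure: each hash performs 64 mix rounds, each fetching a 128-byte page from the DAG, so required DAG bandwidth grows linearly with hashrate. A quick back-of-envelope sketch (the 64x128-byte figure is from the Ethash spec; real miners add overhead, which is why quoted numbers run a bit higher):

```python
# Back-of-envelope check of the memory-bandwidth claim. Per the
# Ethash spec, each hash does 64 mix rounds fetching 128 bytes from
# the DAG, so DAG bandwidth scales linearly with hashrate.

BYTES_PER_HASH = 64 * 128  # 8192 bytes of DAG reads per hash

def dag_bandwidth_tbps(mh_per_s):
    """Minimum DAG read bandwidth in TB/s for a hashrate in MH/s."""
    return mh_per_s * 1e6 * BYTES_PER_HASH / 1e12

print(dag_bandwidth_tbps(500))   # ~4.1 TB/s; in line with the ~5 TB/s quoted once overhead is added
print(dag_bandwidth_tbps(1500))  # ~12.3 TB/s
```

The same arithmetic explains the chip counts: hitting these rates with commodity DRAM means stacking hundreds of chips in parallel just for bandwidth.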

No one should buy an ASIC miner for Ethereum. As with all investment, get in early!
 

TJ Hooker

Titan
Ambassador

You don't need any knowledge of the unix kernel, undervolting the PCI bus, or AMD's driver to get those hashrates or that power draw. If you have a 13x RX 570/580 rig, all you need to do to achieve those hashrates is modify the memory straps in the BIOS and overclock the memory. There is a BIOS modifying utility that will modify the straps for you with a single click, and overclocking the memory can be done with any number of OC utilities (or written right into the BIOS).

To achieve that power draw, you just need to underclock the core and undervolt. Again, this can be accomplished with a variety of utilities, or written into the BIOS. To be honest, the 365 MH/s with 1900W isn't even exceptional in terms of efficiency for a GPU eth mining rig.

Obviously Bitmain's miner will still be a fair bit easier to set up, and probably more consistent in terms of hashrate and power. I'm just saying that optimizing a GPU rig isn't as difficult as you make it out to be.

And post #1 above isn't really accurate. The E3s won't be bricks if you know how to modify the software running them. Which is kinda like building a ROM for Android, not really that hard... just update the bundled mining software with the new algo, build, flash, poof. Already done for Bitmain D3s, google "BlissZ D3 firmware".
If you can just reprogram the circuit functionality, it's not an ASIC. The BlissZ FW you refer to doesn't modify the hashing algorithm (nor could it), which is what would be required to adapt to a hard fork.
 

zodiacfml

Distinguished
Oct 2, 2008
1,228
26
19,310
The article is all over the place, despite the author having experience with mining on AMD cards. A short visit to whattomine.com can dispel all the incorrect data presented.

First of all, it is not significantly more efficient. 180 MH/s is around six RX 580s, with a power consumption of around 800 watts.

Second, price. Though you are correct that the $800 price easily beats any GPU mining rig, it makes no sense to use Nvidia GPUs for the cost comparison, as they are not used to mine Ethereum or Ethash coins. $800 divided by 6 is $133 per 30 MH/s for the Bitmain product, which is cheaper than an RX 570 at MSRP.
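To make the comparison concrete, here is a rough cost/efficiency sketch. All numbers are snapshot assumptions pulled from this thread (E3: $800 / 180 MH/s / ~800W; the 13-GPU rig upthread: 365 MH/s / 1900W) except the ~$170 RX 570 price, which is an illustrative guess, not a quoted figure:

```python
# Rough price/efficiency comparison using figures from this thread.
# The $170 RX 570 price is an ASSUMED placeholder; other numbers come
# from the posts above, not official specs.

def usd_per_mh(price_usd, mh):
    """Upfront hardware cost per MH/s of hashrate."""
    return price_usd / mh

def mh_per_watt(mh, watts):
    """Hashing efficiency: hashrate per watt at the wall."""
    return mh / watts

print(round(usd_per_mh(800, 180), 2))    # E3: ~4.44 $/MH
print(round(usd_per_mh(170, 28), 2))     # single RX 570: ~6.07 $/MH
print(round(mh_per_watt(180, 800), 3))   # E3: 0.225 MH/W
print(round(mh_per_watt(365, 1900), 3))  # 13-GPU rig: ~0.192 MH/W
```

On these assumptions the E3 wins modestly on both cost per MH/s and MH per watt, which matches the "cheaper than an RX 570 at MSRP, but not drastically better" tone of the posts above.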

You are right that it doesn't mean GPU prices will decline, but that only holds for Nvidia. AMD cards can only mine Ethash coins and nothing else, so there will be price cuts soon for AMD cards, but not Nvidia. Nvidia's high-end cards mine Equihash coins better while having better gaming performance, so their prices won't go down.
 
Apr 3, 2018
4
0
10
"Why on earth would you cut down the card?"

Because it's in my gender's nature to be fickle and flighty? ;)

Kidding. Because I had on-hand two slim-line RX460s that were no longer used and would fit under the hood of a Dell 1U that was on order, and was expensive. I wasn't sure if the slots were on 90-degree risers -- I wouldn't cut into an expensive server and void the warranty -- and if there was room behind the 4x for the rest of the card's x16 silicon. One slot had space, one didn't (if I'm remembering correctly) so I would've had to cut the card down anyway. And, yes, it was fun and I wanted to just for yucks.

To your point, though, yes, much easier to cut the back of the slot on a regular motherboard if you're willing to void the warranty.

Back in the good ol' days when I did this, a dual Xeon with two RX 460s (half the cores taskset to monero, the 460s mining when idle from the randomforest stuff) would more than pay its colo bill, which isn't a lot, but it was cool nonetheless, and I got to learn about RingCT. So that was fun. Kinda made my brain hurt getting my head around it, though.



 

TJ Hooker

Titan
Ambassador

What? Did you mean to say that ethash coins happen to be the most profitable for AMD cards to mine right now? Because AMD cards can absolutely mine other algorithms, e.g. equihash.
 
Apr 3, 2018
4
0
10
To achieve that power draw, you just need to underclock the core and undervolt. Again, this can be accomplished with a variety of utilities, or written into the BIOS. To be honest, the 365 MH/s with 1900W isn't even exceptional in terms of efficiency for a GPU eth mining rig.

True. Easier, but it's all a PITA; it takes time and knowledge. I do find it's much easier to underclock/undervolt via a kernel module, as it's easy to change everything at once. That means the BIOSes are programmed with straps, undervolt, and power draw(s) only once, and you don't have to mess with them again to optimize. And I don't know the actual draw at the wall because my power meter is 110V-only, but I could check if you're interested and we can see if it's still lower -- I've been doing it this way since ETH was a couple bucks, so it might be outdated.

I have seen faster and more stable cards from also undervolting the PCI bus, and my overarching goal of month-long-at-a-minimum stability (these are aging 470 4GB cards) seems to peak at around 28 MH/s for them, a speed that can't be held without tweaking the kernel. Sure, they can go higher, and 580 8GBs even more so, but prices for those 580s are crazy now, and stability drops off pretty quickly above those speeds. I've observed that the 24-hour rolling hashrate on ethermine is higher when the cards throw zero errors and are thus at a slightly lower "reported" rate. Not really scientific, but I haven't seen a rolling average at or above reported from other miners doing it differently.

Plus zero downtime is good. Stability is great. Lets me goof off here. :D

I can't imagine running a remote cluster of miners (even if it's just the distance from home to work) that can't be scripted with keyless secure shell login, full access to the power of a unix system to monitor, report, diagnose and one-script update every machine to change anything you want. And work life without cron and bash scripts? That's just crazy talk. All of that is a barrier to entry for gigahash+ meaningful returns for small businesses -- even with EthOS, most people just don't want to deal with unix and have to pay someone to do it -- and thus a barrier to adoption, which is bad IMHO.
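The scripted monitoring described above can be sketched in a few lines. This is a hedged illustration: the status-line layout is assumed from the sample quoted earlier in this thread, and the 350 MH/s alert threshold is an arbitrary placeholder.

```python
import re

# Minimal sketch of scripted rig monitoring: pull the hashrate out of
# a miner status line (layout assumed from the sample in this thread)
# and flag rigs that fall below a threshold. In practice a cron job
# would gather these lines from each rig over keyless ssh.

def hashrate_mhs(status_line):
    """Return the MH/s figure from a miner status line, or None."""
    m = re.search(r"([\d.]+)\s*MH/s", status_line)
    return float(m.group(1)) if m else None

line = "miner3 10.0.0.13:3333 4 days, 06:38 364.61 MH/s 33110/1 (0.00%)"
rate = hashrate_mhs(line)
if rate is not None and rate < 350:  # placeholder alert threshold
    print(f"ALERT: hashrate dropped to {rate} MH/s")
```

The point isn't the parser itself; it's that a unix box gives you this kind of one-script fleet check for free, which the web-interface-only boxes don't.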

But, yes, it's getting easier. The ability to one-click the strap timings is awesome! And relatively new. Beats the hex editor days for dual BIOSes, for sure. But it's still out of reach of the standard non-gaming-rig-builder Windows PC enthusiast, which again is bad for adoption. For the E3, all you need to understand is how to plug in PCIe connectors, RTFM to find the simple login to the web interface, and where to put in your account information. Making Ethash mining so easy is good for decentralization (if you can actually get the miners), and I think good for adoption and use of ETHASH coins overall, not just Ethereum.

If you can just reprogram the circuit functionality, it's not an ASIC. The BlissZ FW you refer to doesn't modify the hashing algorithm (nor could it), which is what would be required to adapt to a hard fork.

Also true -- D3s can only mine the X11 algo -- but that misses the point that the cgminer binary can be changed, and as long as a hard fork doesn't create a new algo (a la CryptonoteV7), D3s with BlissZ (and anyone who can build a ROM) can adapt to changes that still conform to X11. And so it should be with the E3. I'm not familiar with the hardware of the E3, but it seems it has to be more akin to a traditional CPU/GPU with a ton of memory than to what's considered a one-algo-specific ASIC. That is, Ethash's memory hardness means there must already be some flexibility in the E3's design, and I do think it can easily adapt to any hard forks, don't you?

Okay, /soapbox time is over, back to work... :)
 

TJ Hooker

Titan
Ambassador

That's exactly what people are talking about when discussing an eth hard fork, modifying the algorithm (and thereby creating a "new" one) as was done for monero.

I'm not familiar with the hardware of the E3 but it seems it has to be more akin to traditional CPU/GPU with a ton of memory than what's considered a one-algo-specific ASIC. That is, ETHASH's memory hardness means there must already be some flexibility in the E3's design, and I do think it can easily adapt to any hard forks, don't you?
Yes, if that's the case the E3 may be able to adapt to any hard forks. But it wouldn't be an ASIC then.

470 4GB cards) seems to be peaked at around 28 Mh/s for them, a speed not able to be held without tweaking the kernel. Sure, they can go higher, and 580 8GBs even more so, but prices for those 580s are crazy now, and stability goes down pretty quickly above those speed.
I have an RX 570 4GB (same thing as a 470 4GB for mining) running at 28.8 MH/s with 0 memory errors. All it took was the one-click memory timing patch and overclocking the memory to 2000 MHz. No kernel tweaking required. I've seen a number of people on reddit getting ~30 MH/s on their 570s with a similar approach.
 

bit_user

Polypheme
Ambassador

Yeah, it's a fairly relative term. GPUs are Application-Specific in the sense that they're principally designed for graphics rendering. If you follow their lineage back to an era before they were called GPUs, graphics chips were fixed-function logic that certainly qualified as ASICs.


Many (if not most) ASICs have an embedded microcontroller or microprocessor. Some ASICs have such programmable engines at their core, which are surrounded with fixed-function blocks tailored to the workload for which they're designed. It's a reasonable bet that these Bitmain miners fall in the latter category.

https://en.wikipedia.org/wiki/Application-specific_integrated_circuit
 

InvalidError

Titan
Moderator
Monero forked and at least 60% of the network hash rate disappeared. Looks like Bitmain and others have been ASIC-mining for the past five months and accounted for the bulk of total hash for almost four of those.
 
Status
Not open for further replies.