MIT's Protonic Resistors Enable Deep Learning to Soar, In Analog

I wonder whether the 10^6 improvement also accounts for the overhead of digital multiplication. If so, I wonder which datatype they used as the baseline for comparison (my guess is fp32, but recent hardware supports far more efficient options: bf16, int8, and even fp8 and int4).
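To give a feel for what a lower-precision baseline buys you, here's a minimal NumPy sketch of symmetric per-tensor int8 quantization (the matrix size and the quantization scheme are just illustrative assumptions on my part, not anything from the article): you get a 4x memory reduction relative to fp32 in exchange for a small rounding error.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(1024, 1024)).astype(np.float32)   # fp32 weight matrix

# Symmetric per-tensor int8 quantization: scale so the max magnitude maps to 127.
scale = np.abs(w).max() / 127.0
w_int8 = np.round(w / scale).astype(np.int8)
w_deq = w_int8.astype(np.float32) * scale               # dequantized approximation

rel_err = np.linalg.norm(w - w_deq) / np.linalg.norm(w)
print(f"memory: {w.nbytes / w_int8.nbytes:.0f}x smaller in int8")
print(f"relative weight error after int8 round-trip: {rel_err:.4f}")
```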

I also wonder how well analog neural nets will scale. It seems to me that the deeper a signal has to propagate, the more noise gets amplified along the way. To counteract that, you might need more redundancy, which in turn could impose a practical limit on scalability. And scale is what determines how complex the "reasoning" a network can perform is, and how much it can "remember".
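To make that worry concrete, here's a toy NumPy simulation (the width, depth, noise level, and the per-layer additive-Gaussian noise model are all assumptions of mine, not a claim about the actual hardware) showing how a small amount of noise injected at every layer compounds with depth:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, depth, noise_std = 256, 32, 0.02

# Random tanh layers, scaled so activations keep roughly unit magnitude.
layers = [rng.normal(scale=1.0 / np.sqrt(dim), size=(dim, dim)) for _ in range(depth)]

x_clean = rng.normal(size=dim)
x_noisy = x_clean.copy()

for i, W in enumerate(layers, start=1):
    x_clean = np.tanh(W @ x_clean)
    # Model each analog stage as injecting a bit of additive Gaussian noise.
    x_noisy = np.tanh(W @ x_noisy + rng.normal(scale=noise_std, size=dim))
    rel_err = np.linalg.norm(x_noisy - x_clean) / np.linalg.norm(x_clean)
    if i % 8 == 0:
        print(f"layer {i:2d}: relative error {rel_err:.3f}")
```

In this toy setup the relative error keeps growing with depth, which is exactly the kind of thing redundancy (or periodic re-digitization) would have to fight.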

Analog computing is nothing new, nor are its intrinsic efficiency advantages. Yet there are reasons computing went decidedly digital: noise immunity, exact reproducibility, and the ease of composing reliable systems out of imperfect parts. This makes me a little skeptical of just how far we can get by turning back to analog.

One more thought: I'm no expert, but I thought our neurons are actually sort of digital in amplitude (they communicate via all-or-nothing voltage spikes) but operate in the continuous time domain. There's a whole class of digital neural networks that aims to emulate these characteristics, called "spiking neural networks".
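For anyone curious, here's a bare-bones leaky integrate-and-fire neuron in NumPy (the parameters are arbitrary, purely for illustration) showing that "digital in amplitude, continuous in time" behavior: the membrane potential integrates continuously, but the output is an all-or-nothing spike whose information lives in its timing, not its height.

```python
import numpy as np

# Leaky integrate-and-fire (LIF) neuron.
dt, t_max = 1e-4, 0.5            # 0.5 s simulated in 0.1 ms steps
tau, v_rest, v_thresh, v_reset = 0.02, 0.0, 1.0, 0.0

rng = np.random.default_rng(0)
v = v_rest
spike_times = []

for step in range(int(t_max / dt)):
    drive = 1.2 + rng.normal(scale=0.1)          # noisy input drive (arbitrary units)
    v += (-(v - v_rest) + drive) * (dt / tau)    # leaky integration in continuous time
    if v >= v_thresh:                            # threshold crossing -> spike
        spike_times.append(round(step * dt, 4))
        v = v_reset                              # reset; spike amplitude never varies

print(f"{len(spike_times)} spikes, e.g. at t = {spike_times[:5]} s")
```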