News: Researchers detail new technology for reducing AI processing energy requirements by 1,000 times or better


ekio

Reputable
Mar 24, 2021
133
154
4,760
It’s such a well-known fact among engineers that 99 percent of the energy is lost not in computation but in moving data around that I wonder why more companies don’t try to tackle similar solutions…
Groq did it, and their products are very promising as a result, but why do Nvidia, AMD, Intel, etc. keep the inefficient good old von Neumann architecture in place instead of merging RAM and logic? Big question.
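For a rough sense of the ratio, here's a back-of-envelope sketch in Python. The energy figures are rough 45 nm estimates along the lines of Mark Horowitz's ISSCC 2014 keynote numbers, so treat them as assumptions rather than exact values:

```python
# Back-of-envelope: arithmetic energy vs. data-movement energy.
# Rough 45 nm figures (order-of-magnitude assumptions only).
PJ_FP32_MULT = 3.7    # 32-bit float multiply
PJ_FP32_ADD = 0.9     # 32-bit float add
PJ_DRAM_WORD = 640.0  # fetch one 32-bit word from off-chip DRAM

# One multiply-accumulate whose two operands both come from DRAM:
compute = PJ_FP32_MULT + PJ_FP32_ADD
movement = 2 * PJ_DRAM_WORD
total = compute + movement

print(f"compute:  {compute:7.1f} pJ ({100 * compute / total:.1f}%)")
print(f"movement: {movement:7.1f} pJ ({100 * movement / total:.1f}%)")
# movement comes out around 99.6% -- the same ballpark as the
# "99 percent" figure, before even counting on-chip interconnect.
```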
 

DS426

Upstanding
May 15, 2024
254
190
360
I would say the holy grail of microprocessors is integrated logic and memory. We already see the results from HBM and V-Cache, but having memory and logic fabricated together seamlessly would be a heck of a quantum leap in computing.

Before that happens, I assume optics will be a necessary step forward. This article was published in 2015 by MIT. Although it was a much smaller/simpler chip at 70 million transistors, one would think we'd have made more progress on this by now.
https://news.mit.edu/2015/optoelectronic-microprocessors-chip-manufacturing-1223

As for the big tech industry, it has over-invested in AI hardware -- specifically, hardware that is powerful but incredibly inefficient for the task at hand.
 
  • Like
Reactions: gg83

ttquantia

Prominent
Mar 10, 2023
13
13
515
The issue is that these specialized architectures apply only to a minuscule fraction of all computation, even inside AI. This has been tried many times over the years, but coming up with a general-purpose architecture better than the existing ones has so far proved too difficult. I doubt there is anything here that benefits computation more generally.
 

slightnitpick

Upstanding
Nov 2, 2023
237
156
260
How extensible is this? Can you plug in another module to increase the RAM/logic, the way you currently can by plugging in another RAM module?
 

bit_user

Titan
Ambassador
ekio said:
It’s such a well-known fact among engineers that 99 percent of the energy is lost not in computation but in moving data around that I wonder why more companies don’t try to tackle similar solutions…
Groq did it, and their products are very promising as a result, but why do Nvidia, AMD, Intel, etc. keep the inefficient good old von Neumann architecture in place instead of merging RAM and logic? Big question.
Things like that have already been announced and are in the works.

As mentioned in the article, though, those efforts are far more incremental than what the researchers did, which would involve a fundamental shift in the memory technology used. They didn't say how well that memory technology is expected to scale, but it sounds like there's a lot to tackle before such spintronics-based compute-in-memory technology is ready for production use.
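To make the concept concrete, here's a toy software model of compute-in-memory -- purely an illustration of the idea, not the researchers' actual spintronic CRAM design. The point is that a logic primitive (majority, which many compute-in-memory proposals treat as the native in-array operation) is evaluated across rows of the array itself, so operands never make the round trip to a separate processor:

```python
# Toy model of compute-in-memory: logic runs across rows of the
# array in place, so no operand is shipped out to a CPU.
# Illustrative only -- not the researchers' spintronic CRAM design.

class ToyCIMArray:
    def __init__(self, rows: int, cols: int):
        # Each cell holds one bit (stand-in for, e.g., an MTJ state).
        self.cells = [[0] * cols for _ in range(rows)]

    def write_row(self, r: int, bits: list[int]) -> None:
        self.cells[r] = list(bits)

    def majority_into(self, srcs: list[int], dest: int) -> None:
        # Column-wise majority vote over the source rows, stored in
        # dest. AND/OR can be derived by fixing one input to 0 or 1.
        for c in range(len(self.cells[dest])):
            ones = sum(self.cells[r][c] for r in srcs)
            self.cells[dest][c] = 1 if 2 * ones > len(srcs) else 0

arr = ToyCIMArray(rows=4, cols=8)
arr.write_row(0, [1, 0, 1, 1, 0, 0, 1, 0])
arr.write_row(1, [1, 1, 0, 1, 0, 1, 1, 0])
arr.write_row(2, [0, 1, 1, 1, 1, 0, 1, 0])
arr.majority_into(srcs=[0, 1, 2], dest=3)
print(arr.cells[3])  # -> [1, 1, 1, 1, 0, 0, 1, 0]
```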
 
  • Like
Reactions: slightnitpick

husker

Distinguished
Oct 2, 2009
1,251
243
19,670
ttquantia said:
The issue is that these specialized architectures apply only to a minuscule fraction of all computation, even inside AI. This has been tried many times over the years, but coming up with a general-purpose architecture better than the existing ones has so far proved too difficult. I doubt there is anything here that benefits computation more generally.
Exactly. I imagine that something like the "fast, good, or cheap: pick two" iron triangle applies here.
 
Status
Not open for further replies.