News: Neural Chip Plays Doom Using a Thousandth of a Watt

PlaneInTheSky

Commendable
BANNED
Oct 3, 2022
556
762
1,760
It's not surprising that it uses just 1 milliwatt; it's a 5-line algorithm with 3 inputs. It turns left and right and fires randomly until it hits something. It has no sense of 3D space.
 

Endymio

Reputable
BANNED
Aug 3, 2020
715
258
5,270
It's not surprising that it uses just 1 milliwatt; it's a 5-line algorithm with 3 inputs. It turns left and right and fires randomly until it hits something.
Did you not read the article? It's not doing that at all. It had to learn to scan each video frame, detect a "demon" within the frame, and kill it, while conserving ammunition in the process, using a neural network of approximately 600,000 parameters.

Low-power chips like this are the future of computing. Once you get into the sub-mW range, you can harvest ambient energy from the environment, allowing IoT devices to operate without batteries or wires. The home of the far future might have hundreds of thousands of such devices in it, some small and light enough to literally float in the air.
 
  • Like
Reactions: PEnns and bit_user

Deleted member 14196

Guest
Ugh. I want the home of the past. No gizmos pls

The more they push AI, the more it reminds me of Battlestar Galactica: we're gonna need to revert to the old ways if we're not careful

I don't want anything in my house having Internet capabilities unless I choose what it is, like my modem, PC, and 📱
 

Endymio

Reputable
BANNED
Aug 3, 2020
715
258
5,270
Ugh. I want the home of the past. No gizmos pls

The more they push AI, the more it reminds me of Battlestar Galactica: we're gonna need to revert to the old ways if we're not careful

I don't want anything in my house having Internet capabilities unless I choose what it is, like my modem, PC, and 📱
You should avoid those horseless carriages also. They can kill people outright!
 
  • Like
Reactions: PEnns and bit_user

Deleted member 14196

Guest
Oh yeah I don’t want gizmos in my car either
 

USAFRet

Titan
Moderator
The home of the far future might have hundreds of thousands of such devices in it, some small and light enough to literally float in the air.
The devices are one thing.
How the manufacturers and hosts monetize them, and you, is a whole other thing.

A BIG pile of 1980s electronics has been replaced by the little cellphone in your pocket.
With a LOT of monetization and tracking to go along with it.
 

bit_user

Titan
Ambassador
Did you not read the article? It's not doing that at all.
He doesn't care. His brand of trolling is to dismiss everything as derivative, unremarkable, or downright bad.

He talks like he's been there & done that, but he's obviously never attempted anything like this. If he had, he might actually appreciate some of the challenges.

a neural network of approximately 600,000 parameters.
Which is actually quite small, for object detection.

Low-power chips like this are the future of computing. Once you get into the sub-mW range, you can harvest ambient energy from the environment, allowing IoT devices to operate without batteries or wires.
Yeah, the key point is that it's low-power enough to embed object detectors in everyday electronics. That's a game-changer, since it means you could potentially have something like a doggie door that unlocks only for your dog and not raccoons, squirrels, or even a neighbor's nosy cat that tries to enter. Just to give one example.
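A minimal sketch of what that gating logic might look like (purely hypothetical; detect_objects() stands in for whatever the embedded chip reports per frame, and is not a real API):

Code:
ALLOWED = {"my_dog"}

def detect_objects(frame):
    # Placeholder: a real system would run the on-chip model here and
    # return (label, confidence) pairs for the current camera frame.
    return [("my_dog", 0.93), ("raccoon", 0.05)]

def door_should_unlock(frame, threshold=0.8):
    detections = detect_objects(frame)
    labels = {label for label, conf in detections if conf >= threshold}
    # Unlock only if an allowed animal is confidently detected and nothing else is.
    return bool(labels) and labels <= ALLOWED

print(door_should_unlock(frame=None))  # True with the placeholder detections above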
 

bit_user

Titan
Ambassador
all of which were squeezed into the NDP200's 640Kb of RAM
😬 ...struggling not to make a 640K joke. (and was that KB or really Kb?)

Anyway, the article is somewhat annoying in that it makes bizarre comparisons with high-end GPUs. The better approach would be to compare it with other embedded processors and neural IP blocks.

Also, the article could've used a few more details about the chip. I'm sure it's holding the entire model on-die (probably in SRAM); that power budget leaves no headroom for external DRAM. It's almost certainly using integer or fixed-point datatypes, and probably has a very limited amount of programmability.
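To illustrate the integer/fixed-point point, here's a rough sketch of symmetric int8 quantization in NumPy. It's a generic technique, not necessarily this chip's actual scheme:

Code:
import numpy as np

# Quantize a few float32 weights to int8: 1 byte per weight instead of 4,
# and the multiply-accumulates become cheap integer ops.
weights = np.array([0.42, -1.37, 0.05, 0.9], dtype=np.float32)
scale = np.abs(weights).max() / 127.0              # map the largest magnitude to 127
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
reconstructed = q.astype(np.float32) * scale       # what inference effectively computes with
print(q, reconstructed)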
 
  • Like
Reactions: healthy Pro-teen
He doesn't care. His brand of trolling is to dismiss everything as derivative, unremarkable, or downright bad.

He talks like he's been there & done that, but he's obviously never attempted anything like this. If he had, he might actually appreciate some of the challenges.


Which is actually quite small, for object detection.


Yeah, the key point is that it's low-power enough to embed object detectors in everyday electronics. That's a game-changer, since it means you could potentially have something like a doggie door that unlocks only for your dog and not raccoons, squirrels, or even a neighbor's nosy cat that tries to enter. Just to give one example.
For comparison, when I boot up Stable Diffusion, it reports using 859.52 million parameters.
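If anyone wants to reproduce that kind of count, something like this should do it (assuming the Hugging Face diffusers library; the checkpoint name below is just one example, and the ~859M figure corresponds to the UNet of Stable Diffusion 1.x):

Code:
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
n = sum(p.numel() for p in pipe.unet.parameters())   # count UNet parameters
print(f"{n / 1e6:.2f} million parameters")            # roughly 859.52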
Vastly different use cases for AI ... but I'm assuming that if Elvis were to ride a tricycle through vizDoom, he wouldn't get shot by their AI, since that object probably isn't in its training data.
Or, even scarier, maybe it does shoot Elvis riding a tricycle, judging that object to be suspicious, out of place, and a possible unknown threat.
....
I for one welcome our future AI overlords!
 

bit_user

Titan
Ambassador
I'm assuming that if Elvis were to ride a tricycle through vizDoom, he wouldn't get shot by their AI, since that object probably isn't in its training data.
Or, even scarier, maybe it does shoot Elvis riding a tricycle, judging that object to be suspicious, out of place, and a possible unknown threat.
Right. I was really wondering just how sensitive the model is, and how well it can discriminate. If they haven't shown its raw output on real world video samples, then I'd be suspicious.

Creatures in Doom should be quite easy to recognize, if the model is specifically trained on them. They're always lit the same way, viewed from a fairly limited set of angles, and have a small number of poses.

Interestingly, id Software made them by hiring a sculptor and snapping photos of his models on a rotating platform against a green screen (or something like it). What you see are just 2D bitmaps, cheaply composited against the raycast background.

(Attached image: Spider_Mastermind_model.jpg)
 

PEnns

Reputable
Apr 25, 2020
702
747
5,770
I am amazed that certain people throughout history have acted the same way: they said the same thing about every invention, be it the steam train, the car, the airplane, electricity, machinery, you name it. They found something wrong to complain about and deemed it useless!!

And here we are in 2023, on a tech website, and a 2023 version of those people is trolling in the same way as their ancestors!!
 
  • Like
Reactions: bit_user

USAFRet

Titan
Moderator
I am amazed that certain people throughout history have acted the same way: they said the same thing about every invention, be it the steam train, the car, the airplane, electricity, machinery, you name it. They found something wrong to complain about and deemed it useless!!

And here we are in 2023, on a tech website, and a 2023 version of those people is trolling in the same way as their ancestors!!
I'm just skeptical of the people who look at the current AI implementations and call them a finished, perfect product.
 

Deleted member 14196

Guest
Yes, in its current state it's not good, but that's not to say it won't get good when we have ultra super-complex systems, most likely quantum systems that will be able to learn. Right now we just don't understand how it really works well enough to take advantage of it.

Things will most likely get better.
 

bit_user

Titan
Ambassador
I'm just skeptical of the people who look at the current AI implementations and call them a finished, perfect product.
Are you referring to all applications of deep learning, or just specific ones? Because we can go down a litany of applications where deep learning has been very successful, if that's what's in contention.

Yes, in its current state it's not good, but that's not to say it won't get good when we have ultra super-complex systems, most likely quantum systems that will be able to learn.
It's not hyperbole to say that deep learning has completely revolutionized the field of computer vision. Starting about 10 years ago, it began to outperform classical methods in solving various computer vision problems. Today, if you wanted to do object detection, classification, matching, or recognition using anything but deep learning technology, your product would be utterly noncompetitive.

In fact, the last hurdle to deploying deep learning computer vision at scale was having enough compute power at the edge. That's been solved in two ways: first, by finding ways to optimize models and use cheap integer computation, and second, by more power-efficient implementations like the chip this article is about.
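As one concrete example of the "cheap integer computation" part, PyTorch's post-training dynamic quantization stores Linear-layer weights as int8. This is illustrative of the general technique only, not what this particular chip does internally:

Code:
import torch
import torch.nn as nn

# A toy float32 model, then a dynamically quantized copy with int8 weights.
model = nn.Sequential(nn.Linear(640, 256), nn.ReLU(), nn.Linear(256, 10))
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)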

For anyone who's followed developments in computer vision, this shouldn't have come as much of a surprise. Going back more than 20 years, we saw increasing reliance on machine learning techniques. For instance, most face-detection features that started showing up in phones and digital cameras used cascading classifiers developed by Paul Viola and Michael Jones, which they trained on large numbers of face sample images.
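Those Viola-Jones cascades still ship with OpenCV, so a minimal face-detection example looks something like this (the image path is just a placeholder):

Code:
import cv2

# Load the stock frontal-face Haar cascade and run it on a grayscale image.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)  # draw detections
cv2.imwrite("faces_out.jpg", img)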

I think the key thing about this Doom demo isn't really how well the model can or can't play Doom, but the fact that it can do object detection @ 6 fps on a mere 1 mW. That's nothing short of momentous! It might not play flawlessly, but the point is that it can play reasonably at all. If you want to see an AI play a game well, I'm sure you need only look at people throwing a fair bit more compute power at the problem.
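(Back-of-the-envelope: 1 mW at 6 fps works out to roughly 1 mW ÷ 6 Hz ≈ 0.17 mJ of energy per detection.)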
 

USAFRet

Titan
Moderator
Are you referring to all applications of deep learning, or just specific ones? Because we can go down a litany of applications where deep learning has been very successful, if that's what's in contention.
No.
I'm referring to the current public-facing incarnations of ChatGPT and whatever the google/bing thing is.

The results those things pop out are mostly human-sounding, but laughable in the actual text.

We've had people come here with a valid question. A busted PC, perhaps.
Then, some enterprising soul will pop that question into the OpenAI interface.
Copy/paste whatever it outputs as a reply to the OP's question.

That is wrong on many levels.
Passing that off as your own work is similar to plagiarism, which we also do not tolerate. If you're going to copy/paste someone's work, cite your source.
But also, the 'answer' is frequently wrong. And now we have an OP that is even more confused, and still has a broken PC.
 

Endymio

Reputable
BANNED
Aug 3, 2020
715
258
5,270
...But also, the 'answer' is frequently wrong. And now we have an OP that is even more confused, and still has a broken PC.
But that answer is wrong -less- often than the answer from an average person would be. I'm frankly flabbergasted that you don't see what an enormous stride forward that is. And in 5-10 years, that ChatGPT answer will be wrong less often than not just an 'average' human answer, but even one from a tech site SME.
 

USAFRet

Titan
Moderator
But that answer is wrong -less- often than the answer from an average person would be. I'm frankly flabbergasted that you don't see what an enormous stride forward that is. And in 5-10 years, that ChatGPT answer will be wrong less often than not just an 'average' human answer, but even one from a tech site SME.
'Less wrong' than from the average person? Sure. But 'the average person' would not be replying to a problem they have no clue about.
And from what I've seen, I'm not even sure about the 'less'.

5-10 years? It will absolutely be better than it is today.
What we see today, though, is a lot of human-sounding gibberish.
 

bit_user

Titan
Ambassador
No.
I'm referring to the current public facing incarnations of ChatGPT and whatever the google/bing thing is.
Well, that's not what this article is about, so you can perhaps understand my confusion and concern that people seem to be casting doubt on the abilities of deep learning in the kind of computer vision applications this product is (mostly) focused on.
 

USAFRet

Titan
Moderator

bit_user

Titan
Ambassador
Understood, but it seems some people are so triggered by the term "AI" that every thread even loosely related gets pulled off onto the same tangents. I think there are enough threads about ChatGPT and related technologies that we can hopefully litigate them there, instead of having the same repetitive debates on like half the articles.

What's funny about people reacting to this article is that it's remarkably clear-cut. They published in IEEE Spectrum, which isn't an academic journal, but is still one of the most respected industry publications. The specs are quite clear. Lastly, you can see the results by simply watching the embedded YouTube video.

There's no denying that the AI model is actually demonstrating some degree of skill, even if it's not perfect. To me, it's a clear-cut case of "look what we implemented in 1 mW and 640 Kb (?) !" I get that not everyone appreciates what an accomplishment that is, but it's still a far from hype and bears only the most tenuous and superficial connections to ChatGPT.