Adversarial Turtle? AI Image Recognition Flaw Is Troubling

They're on to me! Now I'll have to scuttle the entire 3D-printed Turtle GunRobot army! Maybe I'll use Penguins next time...


Oh crap, my GunRobot design plans were deleted from Google Docs. Also, they removed everything I've ever posted on YouTube with the words "stock" or "bump".
 
"For example, an autonomous car relies on machine intelligence ..."
Self-driving cars tend to use multiple algorithms in parallel, each built on a different fundamental approach. It seems unlikely that you could fool them all (especially without access to the source), and anything that did would probably fool humans too, or at least look bizarre enough to attract attention.
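
To make the redundancy point concrete, here's a minimal majority-vote sketch (plain PyTorch/torchvision assumed; the specific architectures are just examples, not what any car actually runs): an attacker who can't see the source has to beat every model at once, and any disagreement is itself a tell.

```python
# Minimal sketch: majority vote across several independently trained
# classifiers with different inductive biases. An adversarial input
# crafted against one model often fails to transfer to all of them.
from collections import Counter

import torch
from torchvision import models

classifiers = [
    models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval(),
    models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT).eval(),
    models.densenet121(weights=models.DenseNet121_Weights.DEFAULT).eval(),
]

def ensemble_predict(image: torch.Tensor) -> tuple[int, bool]:
    """Return (majority label, unanimous?) for a single preprocessed image.

    Assumes a batch of one, already resized/normalized to something all
    three models accept (e.g. 1x3x299x299 with ImageNet statistics).
    """
    with torch.no_grad():
        votes = [int(m(image).argmax(dim=1)) for m in classifiers]
    label, count = Counter(votes).most_common(1)[0]
    # Disagreement among the models is itself a red flag worth escalating.
    return label, count == len(votes)
```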
 

You are looking at the underside of the turtle. It is flying away from you, banking to the left.

Interpretation of images is an issue for people, too.

 
Thinking about it some more... it seems clever to use an object with some natural camouflage in order to hide the features that activate the classifier. For instance, in most of the turtle pictures, you can see what looks like a finger on a trigger, or the trigger and part of the stock.
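
Something like this toy sketch is what I mean (to be clear, this is not the actual attack used for the turtle, which optimized over many viewpoints and transformations; the model, mask, and epsilon here are all assumed): an FGSM-style perturbation confined by a mask, so the attack features live only inside the camouflaged patch.

```python
# Toy sketch: one FGSM step confined to a mask, so the adversarial
# texture sits only where the "camouflage" is expected to be.
import torch
import torch.nn.functional as F

def masked_fgsm(model, image, mask, target_class, eps=0.03):
    """One targeted FGSM step, applied only where mask == 1.

    image: (1, 3, H, W) tensor in [0, 1]; mask: (1, 1, H, W) of {0, 1}.
    """
    image = image.clone().requires_grad_(True)
    logits = model(image)
    # Targeted attack: step *against* the gradient to decrease the loss
    # with respect to the desired wrong label.
    loss = F.cross_entropy(logits, torch.tensor([target_class]))
    loss.backward()
    step = eps * image.grad.sign()
    # Only perturb pixels inside the camouflage mask.
    adversarial = image - step * mask
    return adversarial.clamp(0.0, 1.0).detach()
```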

I have to wonder how well this can work if you don't use camouflage, which is just borrowing a technique nature uses to do basically the same thing (i.e., fool a visual classifier). It seems like a pretty big crutch, since you could basically just be on the lookout for anything camouflaged that seems out of place. I'll bet it wouldn't be hard to build a classifier for that, which you could then use to reject any other objects detected within the camouflaged region.
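
The rejection scheme could be as simple as a gate: run a camouflage detector first and refuse to trust the object classifier inside anything it flags. Purely hypothetical sketch, and both models below are assumed, not real APIs:

```python
# Sketch of the gating idea: a hypothetical binary camouflage detector
# vetoes the main classifier inside regions it flags as camouflaged.
import torch

def classify_with_camouflage_gate(classifier, camo_detector, image,
                                  threshold=0.5):
    """Return the classifier's label, or None if the crop looks camouflaged.

    `camo_detector` is assumed to output a single logit for P(camouflage).
    """
    with torch.no_grad():
        p_camo = torch.sigmoid(camo_detector(image)).item()
        if p_camo > threshold:
            return None  # reject: don't trust labels inside camouflage
        return int(classifier(image).argmax(dim=1))
```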
 
How to fool Google's self-driving car:

[attached image: hqdefault.jpg]
 