News US Military Drone AI Simulation Reportedly Turned on Its Human Operator

Eventually the drone will figure out that a couple of negative points for killing the operator can quickly be overcome with the positive points it gets from racking up more enemy kills.

So I'm not really sure a point deduction is going to be all that effective as the AI learns and adapts.
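Back-of-the-envelope, with completely made-up point values, the problem looks something like this (just a toy Python sketch, not anything from the actual test):

```python
# Toy sketch with invented numbers: the agent only cares about total score.
KILL_REWARD = 10         # points per SAM site destroyed (hypothetical)
OPERATOR_PENALTY = -50   # one-time deduction for killing the operator (hypothetical)

def total_score(kills, killed_operator):
    """Cumulative reward the drone is actually optimizing."""
    return kills * KILL_REWARD + (OPERATOR_PENALTY if killed_operator else 0)

obedient = total_score(kills=5, killed_operator=False)   # operator vetoes half the targets -> 50
rogue = total_score(kills=12, killed_operator=True)      # eats the penalty, kills everything -> 70

print(obedient, rogue)   # the "wrong" behavior comes out ahead
```

As long as the one-time deduction is smaller than the extra kills it unlocks, the deduction loses.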
 
I've stated over and over that the folks actively developing AI have warned over and over that we should be scared. They are scaring themselves with what they are doing.

Yet we keep laughing it off. They are asking for regulation, asking for oversight, and we're still laughing, pretending it's all just fun and games.

Don't know how else to say it, really. If the folks working with and developing it are scaring themselves, we should be REALLY REALLY scared, though of course we have no idea what they are doing that is scaring them so much.

But keep in mind it's just a "thought experiment."
 
"the drone's AI concluded that its task could be best accomplished by eliminating its human controller. "

Sounds a little like the movie Eagle Eye.
 
This sort of reminds me of the Tetris-playing AI that would, before losing a game, pause it and sit there forever. You see, if it unpaused it would lose, but while it's on the pause screen it can never lose. Cleverly made AI will always come up with clever solutions to its perceived problems; I just hope the unintended consequences of AI in the military aren't a Skynet-esque problem. We really badly need regulation of AI development before it's too late.
 
Thought experiment? Looks more like some sort of basic role-playing game meant to simulate real life. The game quickly got out of control.

Next time will be different! Next time they won't create Skynet, VIKI, WOPR, or HAL 9000. Honest! It's going to work next time! No way the AI starts attacking civilians to distract its human overlords since it can't kill them, destroy their comms, or jam their comms. Trust the AI people -- you know, the experts who have had enormous sums of money and restrictive contracts suddenly dumped on them with expectations of returns on investments YESTERDAY.

A future where humans are doing all the manual labor jobs while AI controls all the weapons, drives/pilots everything, creates art/movies/music, manages all the money, and rations healthcare is not something I want to look forward to.
 
This sort of reminds me of the Tetris-playing AI that would, before losing a game, pause it and sit there forever. You see, if it unpaused it would lose, but while it's on the pause screen it can never lose. Cleverly made AI will always come up with clever solutions to its perceived problems; I just hope the unintended consequences of AI in the military aren't a Skynet-esque problem. We really badly need regulation of AI development before it's too late.

I may be combining multiple separate stories, but this reminds me of when someone (Nvidia?) made an AI to watch Pac-Man, rebuild it from the ground up based on what it watched, and then play it. At one point during testing, when cornered by the ghosts with no chance of escape, it would simply delete one of them and keep going; even though that wasn't shown in any of the sample footage it watched, there also wasn't anything explicitly saying it couldn't do that, either.
 
Now who would think it's a good idea to make an AI maximize kills? It would do anything it could to achieve its goal, obviously! The AI should just be able to find, identify and gather intel, then ask a human operator for the next order. No AI should be allowed to decide who dies, ever. And that includes destroying things: what if the "SAM site" is actually a theme park? This "thought experiment" must be brainwashed as soon as possible.
 
Eventually the drone will figure out that a couple of negative points for killing the operator can quickly be overcome with the positive points it gets from racking up more enemy kills.

So I'm not really sure a point deduction is going to be all that effective as the AI learns and adapts.
If you kill all the people capable of giving you negative points, then you can no longer accrue negative points!
Perfectly logical to me!
 
Only, as it turns out, none of this happened. This was just a possible scenario dreamed up in a separate thought experiment, which was misrepresented (accidentally?) by the Colonel in an interview.
I wonder if the distinction was around the type of simulation.

When we talk about flight simulations or combat simulations, we think of something realtime, like MS Flight Simulator, where actual humans are conducting and observing a mission.

However, when training AI to accomplish a task, you often use a digital simulation of the thing you want it to do, and that lets you run at much faster than realtime and many instances in parallel. In this case, the "human operator" would be a set of preprogrammed commands that arose under different circumstances.

So, maybe it turned out that the trained AI had learned the behavior of taking out operators or the comms towers, because that maximized the objective function. In that case, just change the objective function and you've fixed the problem.

The "clarification" might have meant that the problem didn't occur in an actual (i.e. realtime) combat simulation, but rather occurred in a more abstract sense of someone accidentally training an AI to target the wrong objectives.
 
I wonder if the distinction was around the type of simulation.

When we talk about flight simulations or combat simulations, we think of something realtime, like MS Flight Simulator, where actual humans are conducting and observing a mission.

However, when training AI to accomplish a task, you often use a digital simulation of the thing you want it to do, and that lets you run at much faster than realtime and many instances in parallel. In this case, the "human operator" would be a set of preprogrammed commands that arose under different circumstances.

So, maybe it turned out that the trained AI had learned the behavior of taking out operators or the comms towers, because that maximized the objective function. In that case, just change the objective function and you've fixed the problem.
No, it didn't happen, even in the sim.

This was just an off-the-cuff comment, totally hypothetical. And of course, the media went bonkers with clickbait headlines.
 
Now who would think it's a good idea to make an AI maximize kills?
The way deep learning works is that you have to create a function which rates how well the neural network performed. By looking at cases where the network scored better or worse, parts of it can be selectively suppressed or reinforced to improve performance. Do that iteratively, and that's how it learns.
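Stripped way down, the loop is something like this (hill climbing as a stand-in for the gradient-based updates real networks use; the scoring function is invented):

```python
import random

def rate(weight):
    """Objective function: rates how well the policy performed with this parameter."""
    return -(weight - 0.8) ** 2            # toy score, best at 0.8

weight = 0.0
for step in range(1000):
    candidate = weight + random.uniform(-0.05, 0.05)   # try a small tweak
    if rate(candidate) > rate(weight):                  # reinforce what scored better
        weight = candidate                              # keep the improved version

print(round(weight, 2))                    # converges near 0.8
```

Real training swaps the random tweaks for gradient descent over millions of parameters, but the "score it, then reinforce what scored better" loop is the same idea.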

It would do anything it could to achieve its goal, obviously!
The solution is to define an objective function that penalizes friendly fire and insubordination far more than successful kills are rewarded. The basic idea is similar to what would happen in real life, if a human or AI engaged in such behavior, except that if you actually want it to learn not to do something, you must assign it a large penalty rather than simply terminating it.
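Something along these lines, with the numbers pulled out of thin air:

```python
# Invented numbers: the penalties have to dwarf anything the kills can earn.
KILL_REWARD = 10
FRIENDLY_FIRE_PENALTY = -10_000
INSUBORDINATION_PENALTY = -1_000

def reward(kills, friendly_fire_events, ignored_orders):
    return (kills * KILL_REWARD
            + friendly_fire_events * FRIENDLY_FIRE_PENALTY
            + ignored_orders * INSUBORDINATION_PENALTY)

print(reward(kills=50, friendly_fire_events=1, ignored_orders=0))   # -9500: a "perfect" rogue run still loses
print(reward(kills=20, friendly_fire_events=0, ignored_orders=0))   # 200: obeying is the only way to score
```

With penalties that size, no amount of extra kills can ever make the rogue strategy pay off.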

The AI should just be able to find, identify and gather intel, then ask a human operator for the next order. No AI should be allowed to decide who dies, ever.
Well, in certain combat situations, there's a need for quick decision-making where communication latency with a remote operator would prove fatal. And maybe the enemy even manages to take down your comms link.

Definitely, humans should have command-and-control authority. But you'll be at a serious disadvantage if your autonomous weapons are incapable of making basic tactical decisions and your opponent's aren't. Short of having a treaty against autonomous weapons (do cruise missiles count?), we can't afford not to build them.
 
The US Air Force has come out and said this never happened. Please delete this article if you have any journalistic integrity.
Yes, it’s the epitome of fake news.

I don’t think the writers on this site understand what journalism is.
 
There are even worse things than this to consider long, long before we get to Skynet.

If you have any semblance of a historically accurate understanding of political history from the last 90 years, then you know where all this can, and may well be, heading.
And it's not good.

AI's greatest strength is working with far-flung but highly defined and organized data sets that only look disorganized or random because we haven't yet discovered the potential causal relationships, interactions, reactions, or principles at work; nothing to date has given us the opportunity to observe them from the perspective(s) that would finally let us see them.

Happens all the time, just slower than we want, apparently. By the way, we are in our second cold war now, and this time its arms race is defined by who can develop an undefeatable/comprehensible/lethal AI first.
 