News GPT-4V can play Doom, badly — doesn't hesitate to shoot humans and demons

Status
Not open for further replies.
That's funny, the Doom I grew up playing didn't have any humans other than Doomguy.
Zombieman and Shotgun guy are "humans" in that they were once human and look like humans who have been possessed. So yeah, "humans."
I love when people quote Asimov's laws of robotics. I hope they realize that a lot of what he wrote about them was literally about how flawed they were and all the loopholes and error conditions that could arise.
The "Laws of Robotics" were referenced, but not quoted. Getting into the deeper eccentricities of the three/four laws obviously isn't the point of the reference, and neither is the comment about Skynet coming online.
 

salgado18

Distinguished
Feb 12, 2007
For now, the main "AI assault" most people are worried about relates to generative AI and its ongoing impact on both independent creatives and industry professionals

Last year there was a huge case of a "hypothetical" military drone simulation that went rogue and tried to kill or disable its operator so it could keep killing (search "AI-controlled drone goes rogue"). Also, the post above is very current, and this AI combined with Boston Dynamics makes the T-800 feel very real. Yes, we are worried about more pressing things than copyright infringement.
 

Pierce2623

Prominent
Dec 3, 2023
AI plays an "if it moves, kill it" game and kills things that move? Don't let it play Duck Hunt, or the animal rights people will be out on the streets.

Also, humans have no qualms about shooting humans.
To be fair, there are only certain conditions under which I'd have no qualms about killing other humans. Endanger my life, though, and I'll try my best to end yours.
 

Pierce2623

Prominent
Dec 3, 2023
Last year there was a huge case of a "hypothetical" military drone simulation that went rogue and tried to kill or disable its operator so it could keep killing (search "AI-controlled drone goes rogue"). Also, the post above is very current, and this AI combined with Boston Dynamics makes the T-800 feel very real. Yes, we are worried about more pressing things than copyright infringement.
That situation never actually happened, and the guy said it was just a hypothetical he came up with himself. I believe that, because I've had enough experience with the US military over the years to know that anyone high up enough to have information like that knows better than to just tell anyone. You have to pass very extreme background checks to get a top secret security clearance. The FBI will literally show up in your neighborhood asking questions about you if it's a new security clearance rather than a renewal. You don't get that level of information without a clear understanding that you aren't supposed to share it.
 

wbfox

Distinguished
Jul 27, 2013
Last year there was a huge case of a "hypothetical" military drone simulation that went rogue and tried to kill or disable its operator so it could keep killing (search "AI-controlled drone goes rogue"). Also, the post above is very current, and this AI combined with Boston Dynamics makes the T-800 feel very real. Yes, we are worried about more pressing things than copyright infringement.
And if you stand in front of a mirror at midnight and say "Bender Rodriguez" three times in a row, when you leave and check the refrigerator, all of your beer will be gone and the room will smell of cigar smoke. Can't wait for the totally mind-altering amazingness that will be ChatGPT-5. It will alter our view of what we consider hype.
 

evdjj3j

Distinguished
Aug 4, 2017
Zombieman and Shotgun guy are "humans" in that they were once human and look like humans who have been possessed. So yeah, "humans."

The "Laws of Robotics" were referenced, but not quoted. Getting into the deeper eccentricities of the three/four laws obviously isn't the point of the reference, and neither is the comment about Skynet coming online.
The clickbait title makes it sound like the AI is murdering innocent humans rather than zombies, which are not the same. I have a feeling you're smart enough to know that.
 
I have yet to see a human Doom player hesitate to shoot the humanoid zombie/demon enemies either.
Humans actually know they're playing a game. An AI model does not. And I don't think the original researcher is saying, "wow, I'm shocked it didn't hesitate" but rather "hey, you know this model could easily be used to create kill-bots for the real world?"
 

ivan_vy

Respectable
Apr 22, 2022
Humans actually know they're playing a game. An AI model does not. And I don't think the original researcher is saying, "wow, I'm shocked it didn't hesitate" but rather "hey, you know this model could easily be used to create kill-bots for the real world?"
I like the point you raise: a human has to be desensitized to kill another human, because empathy gets in the way. AI is the other way around; it needs variables to flag which targets are to be spared, and that means it can be corrupted or compromised. There's also the question of responsibility: can you sue an AI or put it on trial?
 

USAFRet

Titan
Moderator
Last year there was a huge case of a "hypothetical" military drone simulation that went rogue and tried to kill or disable its operator so it could keep killing (search "AI-controlled drone goes rogue"). Also, the post above is very current, and this AI combined with Boston Dynamics makes the T-800 feel very real. Yes, we are worried about more pressing things than copyright infringement.
No, that was a high-ranking military person saying "What if..."

Even in the sim they were running, it did not happen.
 

edzieba

Distinguished
Jul 13, 2016
Humans actually know they're playing a game. An AI model does not. And I don't think the original researcher is saying, "wow, I'm shocked it didn't hesitate" but rather "hey, you know this model could easily be used to create kill-bots for the real world?"
You don't even need AI for that; OpenCV has been recognising people in images for a good decade.
And if you're concerned about entire autonomous weapon systems that can make no-human-in-the-loop attack/abort decisions, that's many years late to the party: cruise missiles have been doing that for a good half a century (specifically terminal-phase identification, discrimination, and selection of aircraft and ship targets). Demonstration of low-cost cruise missiles built from consumer components was also done two decades ago by Bruce Simpson, pre-dating the more recent 'drone' craze. The technology is all old hat; it just took a while for Hollywood sci-fi to catch up to reality before people bothered caring about it.

Where the scary-killer-drones story falls apart, however, is when you actually try to implement any of it. It turns out the reliability requirements for terminal guidance are well above what a datacentre-backed ML system can manage, even with nice clean high-resolution, high-framerate machine vision. Which is why the current state of the art for operational drone usage is reverting to the days of the TV-guided missile: a human looks at a grainy, unreliable, low-resolution video feed and manually pilots the drone, because AI is really bad at either of those tasks.
 

35below0

Respectable
Jan 3, 2024
I have yet to see a human Doom player hesitate to shoot the humanoid zombie/demon enemies either.
I hesitated only because my health was low ;)


If you want to give AI ideas, teach it to play AI War...

The irony would make our extermination entirely justifiable. Also, we asked for it.
 

husker

Distinguished
Oct 2, 2009
I love when people quote Asimov's laws of robotics. I hope they realize that a lot of the writings he did about them was literally how flawed they were and all the loop holes and error conditions that could arise.
I disagree; I don't think Asimov was pointing out that the laws themselves are flawed. Instead, what is flawed is the human understanding of how the laws are correctly applied and what the ultimate result of such laws being faithfully followed would be. For those who don't know, the I, Robot movie was not reflective of Asimov's work at all, but a twisted version of his ideas. The fourth law of robotics, which ultimately evolved from the robots themselves (not from humans), did not plunge the world into chaos as depicted in the movie, but was a brilliant addition that made it possible for a robot to be more human, not anti-human.
 

Eximo

Titan
Ambassador
More or less the same point. The wording is insufficient, but so are the concepts behind the laws, which can't easily be converted into any sort of logical decision process. Referencing them in any real-world application just doesn't work, because no matter how you word the laws, the question is how to convert them into logic an AI can understand.

The zeroth law is just one example. A better one is adding the word 'knowingly' or 'to its knowledge' to the laws: if a robot doesn't know it is harming a human, the First Law never comes into play.
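The 'knowingly' problem can be made concrete with a toy sketch. This is entirely hypothetical, not any real robotics API: the three laws become a priority check over an action's predicted outcomes, and the whole thing hinges on a `knows_harm` flag the robot's world model may simply never set.

```python
# Toy illustration (hypothetical, not a real robotics API): the Three Laws
# as a priority-ordered check over what the robot *believes* an action does.
# If its model never predicts harm (knows_harm=False), the First Law check
# simply never fires -- which is Eximo's point about the word 'knowingly'.
from dataclasses import dataclass

@dataclass
class Action:
    knows_harm: bool        # does the robot believe this harms a human?
    disobeys_order: bool    # does it conflict with a human order?
    self_destructive: bool  # does it endanger the robot itself?

def permitted(action: Action) -> bool:
    if action.knows_harm:        # First Law: no (known) harm to humans
        return False
    if action.disobeys_order:    # Second Law: obey human orders
        return False
    if action.self_destructive:  # Third Law: self-preservation, lowest priority
        return False
    return True

# A genuinely harmful action the robot doesn't *recognise* as harmful
# slips straight through the First Law:
assert permitted(Action(knows_harm=False, disobeys_order=False,
                        self_destructive=False))
assert not permitted(Action(knows_harm=True, disobeys_order=False,
                            self_destructive=False))
```

The gate is only as good as the perception and prediction feeding those flags, which is exactly why wording the laws more carefully doesn't solve anything.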

Don't get me started on that movie. Direct harm to humans never sat well with me. Asimov's view was that the robots would slowly take over control through very subtle means and attempt to optimize for the least harm possible.
 