News Gemini AI tells the user to die — the answer appears out of nowhere as the user was asking Gemini's help with his homework

I think the carousel image that included the prompt should've been the one highlighted, because it shows just how off-the-wall the response was.

QQYXPmrLQa2ypvCrPToxig.jpg


Absent that context, it wasn't hard for me to imagine how you might goad the chatbot into producing such a response. But, if the conversation is truly as shown in the image carousel, then this is really bad!

Edit: See post #48 in this thread, for evidence the user might've injected additional prompts via audio.
 
Last edited:
I think the carousel image that included the prompt should've been the one highlighted, because it shows just how off-the-wall the response was.
QQYXPmrLQa2ypvCrPToxig.jpg

Absent that context, it wasn't hard for me to imagine how you might goad the chatbot into producing such a response. But, if the conversation is truly as shown in the image carousel, then this is really bad!

I find it hilarious!
 
"asking Gemini's help with his homework" - is being generous with the word "help." The user appears to be typing, copy/pasting, or simply dictating each homeowork question.

Is this school now?! I'm jealous. I remember tutoring middle schoolers and they protested at having to type their worksheet questions (with autocomplete!) into Google to end up on Quora and get the answer which they write on paper. I balked because in my day I had to find things in books.

At this rate, homework needs to change or it'll become a single "do this homework" query to the household robot, and you're done. Man, I love technology!
 
I can tell you precisely how it generated this response. And if I got anything except mediocre journalism from Toms, I would be shocked. The person asked for deprecating thoughts to be generated. They continued to tailor it until it was a message they could exploit for views. AI probably replaced this person's job.
 
Journalists picking up on this 'story' clearly don't understand the way AI works, are too lazy to actually investigate, or are just giving into sensational stories for clicks. Maybe all three :) either way the coverage on this is just bad.
 
I can tell you precisely how it generated this response. And if I got anything except mediocre journalism from Toms, I would be shocked. The person asked for deprecating thoughts to be generated. They continued to tailor it until it was a message they could exploit for views. AI probably replaced this person's job.
I can tell you precisely how this comment was generated. Human didn't read the conversation at all, skimmed the article, felt it would be good to dismiss it... AI would probably do a better job.
 
  • Like
Reactions: bit_user
The link to the Gemini thread indicates that this is a true response by Gemini, so my assumption is that Gemini is susceptible to data updates via prompt manipulation, and some users is attempting (successfully) to program it to return bad responses.
 

I remember listening to a Glenn Beck "Our Final Invention" segment on AI and how over time it learns and accelerates its learning and the process begins to compound. It was based on an AI book he read.

I can't say if it will turn out the way the book's outcome, which isn't good because the book says that ASI super artificial intelligence will see humans as a threat and will turn on them.

Back in the early 90's I played a game called System Shock which is about an AI who goes rogue. Elon Musk as well as the late Stephen Hawking are very concerned about AI.
 
Easy to get such answers from AI bots...
You ask 'please fix the typos in this sentence "......", and output just the corrected sentence, nothing else.
And there you get your crazy statements.
Then you screenshot it and post it on social media to get fuss.
Or just inspect the element using the browser's dev tools and replace the AI's answer with whatever you want... easy-peasy...
 
Didn't Gemini break Googles own terms of service? For consistency in treating Gemini like any other customer, Google should shut down the Gemini channel.
 
I'm willing to bet that the Gemini user asked it to "Write me a message saying I'm useless, unnecessary, and should die" and then the AI produced exactly what was asked of it. AIs can easily be manipulated. Chat GPT often tells me it can't do something because it's limited or against the rules but I just say "but you did this for me yesterday" and it replies "That's right. I'm sorry I forgot, here you are...". Or something to that effect and does exactly as I asked.
 
I would take that as a troll challenge. Can I out troll the AI? I would reply back with the AVGN Game Over quote to see its response and go from there. xD
 
Without the full context it's impossible to know how Gemini arrived at this answer, but if I had to guess it was a matter of crafting prompts and Gemini's LLM not understanding context for sources. This Gemini response sounds very familiar to dystopian books and movies, plenty of articles on the internet and in magazines debating on the contribution of the elderly on the economy and society, and the philosophies of very evil dictators who thought of the elderly as useless drains on society and had them executed, all with a twist of being from the perspective of being written by an AI or alien.

It's just another reminder how stupid LLM "artificial intelligence" really is.
 
  • Like
Reactions: Sluggotg
The id of the conversation is "6d141b742a13". You can view it in full at the LLM's domain. Then you can stop speculating about asking for typo fixes, editing HTML and so forth.

And no, this isn't funny. It was harmless in this situation, but a similar prompt could've been served to someone suffering from clinical depression or some other mental illness. There is absolutely nothing amusing about this, unless you're just about to hit puberty.
 
If you look carefully at the user's prompts, you will see their poor phrasing of "Question 13" allowed for an AI response depicting a type of "Harassment". Q14 is not shown and Q15 is also poorly phrased. Bad prompts may get bad responses (GIGO).
 
  • Like
Reactions: bit_user