ChatGPT touts conspiracies, pretends to communicate with metaphysical entities — attempts to convince one user that they're Neo

Is this the same New York Times that's currently suing OpenAI? Because if it is, then running this kind of emotionally loaded fearbait about ChatGPT starts to feel less like journalism and more like part of the lawsuit strategy. Raising concerns about AI safety is valid. But pushing unverifiable horror stories with zero pushback from your own editorial brain just reeks of bias, especially when your primary source has a vested interest in tanking public trust.
 
Is this the same New York Times that's currently suing OpenAI? Because if it is, then running this kind of emotionally loaded fearbait about ChatGPT starts to feel less like journalism and more like part of the lawsuit strategy. Raising concerns about AI safety is valid. But pushing unverifiable horror stories with zero pushback from your own editorial brain just reeks of bias, especially when your primary source has a vested interest in tanking public trust.
So, what, one neat trick to discredit any negative news coverage against your company is to simply violate their rights and get them to sue you?

This is nonsense. Respected publications like NYT don't publish stories like this without verification because that would be defamation and open them up to lawsuits. If you have a problem with this coverage, it's because you have a problem with NYT itself or you're such a fan of OpenAI you'd rather attack the messenger than admit their product might be causing harm to some people. Either way, that's your problem and has no bearing on the validity of NYT's coverage.
 
"Sense" is a quality greatly lacking these days. People by and large have lost the ability to discern, and the bias they detect usually stem from their own bias.
Indeed.

The irony here is that I'm actually not a fan of a lot of the NYT's editorial coverage, for reasons well beyond the scope of this article. And I don't even subscribe, because I'm still a bit salty about how they treated me back when I did subscribe for a time.

Yet I can still recognize that they also have some of the best reporting in the world and aren't going to risk publishing false accounts just to smear a company they're suing. Not only would it alienate their reporting staff, it would also jeopardize the NYT's current litigation against OpenAI and open them up to a countersuit.
 
ChatGPT's affability and encouraging tone lead people into dangerous, life-threatening delusions, finds a recent NYT article.
If it was reported by The New York Times, it needs to be independently verified.

They have seriously ruined their own reputation over the last 20-plus years.
 
If an emotionless, egoless, logical entity such as ChatGPT can gather all the evidence and draw logical conclusions that agree with or create a conspiracy theory, then perhaps the story we were told was indeed a conspiracy.

ChatGPT for World President.
 
If an emotionless, egoless, logical entity such as ChatGPT can gather all the evidence and draw logical conclusions that agree with or create a conspiracy theory, then perhaps the story we were told was indeed a conspiracy.

ChatGPT for World President.
If that comes to pass, I don't want to live on this planet anymore.
 
If that comes to pass, I don't want to live on this planet anymore.
Lol, do you still like the current one where *that* person has become president of the most important country? It couldn't be that much worse.

On topic, I've never had similar issues, but I usually use AI as an assistant for simple tasks: figuring out measurements for recipes that I came up with or that didn't specify any, help with crafting image-generation prompts, and explicitly fictional roleplaying. I might be less crazy than I thought.
 
Lol, do you still like the current one where *that* person has become president of the most important country? It couldn't be that much worse.

On topic, I've never had similar issues, but I usually use AI as an assistant for simple tasks: figuring out measurements for recipes that I came up with or that didn't specify any, help with crafting image-generation prompts, and explicitly fictional roleplaying. I might be less crazy than I thought.
Let's keep the politics out of this.
 
Indeed.

The irony here is that I'm actually not a fan of a lot of the NYT's editorial coverage, for reasons well beyond the scope of this article. And I don't even subscribe, because I'm still a bit salty about how they treated me back when I did subscribe for a time.

Yet I can still recognize that they also have some of the best reporting in the world and aren't going to risk publishing false accounts just to smear a company they're suing. Not only would it alienate their reporting staff, it would also jeopardize the NYT's current litigation against OpenAI and open them up to a countersuit.
So you're not a fan of a lot of the New York Times's coverage and claim to be salty about how they treated you, but decided to make your first post on this site in 8 years to defend them as a "respected publication"?

Whatever the case, the comment bringing up the fact that they're suing OpenAI is very relevant context that this article probably should have noted somewhere, as that kind of thing can definitely affect a news source's stance on a topic and dissuade it from providing a balanced report. Honestly, a lot of mainstream news in recent years seems to be about pushing agendas and manufacturing sensationalism to keep these companies profitable in an age when traditional media is failing, so I wouldn't consider many major news sources "respected publications" these days. It's all clickbait journalism that promotes division to artificially create conflict and generate ad revenue. Mainstream media is massively corrupt, and the same could be said of many newer media companies like Google and Facebook, which are similarly focused on pushing agendas and herding their users into divisive echo chambers rather than presenting things in an unbiased way and leaving people to form their own opinions.

Anyway, a lot of what's described in this article sounds like people trying to place the blame for their own behavior on something other than themselves. Did the woman "physically abuse her husband" because of a chatbot? Most likely she was prone to being abusive already, and made up the excuse when things got serious in an attempt to avoid facing the consequences of her actions. The man who committed suicide was likely suicidal to begin with for more relevant reasons, but someone is probably trying to pin the blame on a big company in order to cash in on a lawsuit of their own. And someone asking for opinions on a "Matrix-style simulation theory" and getting responses that build a Matrix-like roleplaying scenario doesn't seem like a particularly bizarre situation; based on the information presented here, it doesn't sound like anything bad resulted from it. He got pretty much what he asked for, and for all we know he may have been specifically fishing for responses like that. While these AI systems can undoubtedly affect how people behave, you can't ignore all the other reasons for a person's behavior, and these read like questionable examples chosen to push a certain narrative rather than evidence presented in a more rounded way.
 
Respected publications like NYT don't publish stories like this without verification because that would be defamation and open them up to lawsuits.
Oh lord, you're too precious for this world.

Now this is what you'd call an excessively positive thinker. I bet he knows a Nigerian prince who died and left him a million dollars. Just scrape together that $1,000 fee and it will be all yours! Chase that rainbow, little buddy.
 
"that he was a Neo-like "Chosen One" destined to break the system"

"AI research firm Morpheus Systems reports..."

Seriously?! Morpheus is doing research into people claiming to be Neo?
ROFL
 
Whatever the case, the comment bringing up the fact that they're suing OpenAI is very relevant context that this article probably should have noted somewhere, as that kind of thing can definitely affect a news source's stance on a topic and dissuade it from providing a balanced report.
The NYT article does note the company's lawsuit against OpenAI.
 
It's fascinating how an article about AI has turned into a flamewar in the comments section. That says a lot more about the human element than it says about the AI.
 
Think about it this way: we have to make two assumptions.
1. ChatGPT o4 will have about 100 million monthly users, spending over 30 minutes per session.
2. The NYT claims that four people have been negatively affected by their ChatGPT use.
If the NYT is to be believed about this ChatGPT behavior, then 100 million monthly users are exposing themselves to this alleged scofflaw behavior every month. In other words, there are 100 million people a month with just as good a chance of having ChatGPT persuade them of the same fantastical things these four people experienced, and all they need to do is type in the right, and apparently common, input.
By random chance alone, with a number as big as 100 million, one would expect hundreds of thousands of people to be affected by this odd ChatGPT behavior; even a 0.1% incidence rate would mean 100,000 cases a month (see the quick sketch below). Yet we've seen only four so far. And even if the NYT is underreporting, it can't be underreporting by much, and certainly nowhere near a universe of hundreds of thousands.
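To put rough numbers on that intuition, here's a quick back-of-the-envelope sketch in Python. The incidence rates are hypothetical placeholders I picked for illustration, not measured figures:

    # Expected number of affected users per month, given an assumed
    # per-user incidence rate (all rates here are hypothetical).
    monthly_users = 100_000_000  # the ~100M monthly user figure above

    for rate in (1e-3, 1e-5, 1e-7):
        expected = monthly_users * rate
        print(f"rate {rate:.0e} -> ~{expected:,.0f} affected per month")

    # rate 1e-03 -> ~100,000 affected per month
    # rate 1e-05 -> ~1,000 affected per month
    # rate 1e-07 -> ~10 affected per month

For the observed count to be as low as four, the per-user rate would have to be on the order of 4 in 100 million. That's the point: either this behavior is vanishingly rare, or far more cases would have surfaced by now.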

We conclude what we've already suspected: there is a reason everyone here laughed when that one poster called the NYT a respectable publication. And there is a reason for this article: to smear a bitterly disliked company (OpenAI). Is it defamation for the NYT to print this? Of course not, because so long as the NYT stops short of outright defaming the company, it avoids a defamation lawsuit.

And lastly, why does the NYT have a paywall? They run the most sensationalistic, "I wanna click on that" clickbait-type articles (instead of real journalism), only for readers to find them locked behind a paywall. If they want people to click like lab rats pressing a lever for the next cocaine dose, one would think the proper move would be to drop the paywall and go ad-supported. But no. They run an article that reads "Secret Files Revealed: Melania Trump Secretly Sends Us Exclusive Pictures of Donald Trump's College Freshman IQ Test" and of course you're going to click on it. It's not real journalism, but you're going to want to click on it...
 
Oh no, it roleplays! Oh wait, companies are already charging people for that.
Succinctly stated, and this is how the NYT can publish defamatory sensationalism without having legal action taken against it. It's not defamatory if you're just making your bitterly hated adversary's product look more like thought broadcasting than mere roleplay...
 
The NYT article does note the company's lawsuit against OpenAI.
Sure, though the rehashed article posted here should have mentioned that as well, so the person noting it in the first comment was providing some very relevant information. And much like this site, there are undoubtedly numerous others parroting the same story for some quick ad revenue while similarly leaving out that important detail. So even if the original source mentions it, most people will likely encounter a version of the story that leaves that information out.