News: 17 cringe-worthy Google AI answers demonstrate the problem with training on the entire web

hannibal

Distinguished
This is bad!
Most people don't even know how to turn off the AI in Google search, or even that they should turn it off!
Should we sue Google for spreading disinformation, or what can we customers do to make this... heresy stop? AI is a good tool if you know that it is AI and that it makes big mistakes. But if you don't even know that the tool you are using is making answers from its ***... then it is bad and misleading.
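
(For what it's worth, and noting that Google could change this at any time: there is reportedly no setting that fully disables AI Overviews, but at the time of writing you can get plain results by clicking the "Web" filter under the search box, or by adding the udm=14 parameter to a search URL, e.g. google.com/search?q=example&udm=14.)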
 

abq

Mar 6, 2024
9
13
15
Interesting to note here that Tom’s Hardware did not seem to find any issue with Google Gemini AI image generation.
A few examples that apparently were not as worthy of ridicule as the ones in this article:

[Gemini image-generation examples omitted]
Apparently, these absolutely ludicrous images generated by a cutting-edge tool made by one of the world’s biggest companies were not worthy of an article back then?

But of course, a Google search that tells you that chewing tobacco is better than smoking it (which, btw, is 100% factually correct) is more concerning.

I really wonder why TH's staff did not use the opportunity back then to publish another anti-AI article. Your corporate masters seem to push you to publish more of those, as the lawsuits against OpenAI and others show that publishing houses are very afraid of AI making them obsolete.
(It’s more of a rhetorical question, I think I already know the answer.)
 
Last edited:

abq

Mar 6, 2024
9
13
15
If they called out every example of these AI things getting it badly wrong, the article would be the size of a dictionary.
Ok sure, fair point, but the situation was extremely similar to this one:
- New AI product/feature by big tech company
- Makes ridiculous mistakes
- One generated so much controversy that it got a retraction and statement by the CEO, the other did not (yet)
- One is political, the other is not

One of them got an article. The other one did not. Surely this was purely random.
 
Mar 17, 2024
12
23
15
You have human 'experts' who have been wrong on everything for years but still parade on screens without being called out on anything. The societies we live in have lost all respect for truth and honesty, as any form of ethics has been replaced with greed, agendas and ideology.

Yet we _demand_ that an algorithm, which is not conscious, be a paragon of virtue.

And this algorithm is based on our data. Hence, it reflects our failings.

This is trying to blame the mirror and break it. That is not a healthy way of dealing with this problem.

And it is obvious why journalists have a grudge against LLMs. It's not because they suddenly became warriors of the truth.

Do you believe anything you read online? Do you expect anything you see on TV or in newspapers to be the simple and complete truth? Just take anything AI produces with a grain of salt, just like the rest.
 
Last edited:
  • Like
Reactions: iLoveThe80s

Neilbob

Distinguished
Mar 31, 2014
217
255
19,620
This perfectly represents my biggest issue with the continual use of the word 'intelligence'. If there were any actual intelligence involved, it would have enough ability to recognise and reason that these results are complete and total bilge water. Stuff like this is downright harmful.

In any case, I'm completely fed up with hearing and reading about it. I'd almost rather go back to reading about digital currency.
 
  • Like
Reactions: coolitic and abq

Co BIY

Splendid
Most of these examples seem to be garbage results from garbage questions.

Google's effectiveness (for me) had been trending downward before the introduction of AI, and I assumed the cause was the proliferation of garbage click-bait content across the web and the algorithm's ever-present bias toward "freshness", popularity and ad service.
 

Co BIY

Splendid
You have human 'experts' who have been wrong on everything for years but still parade on screens without being called out on anything. The societies we live in have lost all respect for truth and honesty, as any form of ethics has been replaced with greed, agendas and ideology.

There really are "experts" in the world, and only a fool rejects their wisdom, but they often disagree on issues and their wisdom is usually eradicated when "summed up" for mass consumption. Of course, the more power and deference we give to experts, the greater the incentive for psychopaths and imitators to pretend expertise.

Do you believe anything you read online? Do you expect anything you see on TV or in newspapers to be the simple and complete truth? Just take anything AI produces with a grain of salt, just like the rest.

Google used to assist with the best way to check the truth of a matter: going to the sources. The AI makes that much more difficult.
 
  • Like
Reactions: TJ Hooker

w_barath

Distinguished
Aug 22, 2011
48
20
18,535
What this reveals is the depth of apocrypha in the dataset the AI was trained on.

Don't be shocked at garbage in, garbage out. Be concerned about what this says about how dangerously misinformed the general public is.
 

apiltch

Editor-in-Chief
Editor
Sep 15, 2014
232
130
18,870
Interesting to note here that Tom’s Hardware did not seem to find any issue with Google Gemini AI image generation.
A few examples that apparently were not as worthy of ridicule as the ones in this article:

[Gemini image-generation examples omitted]
Apparently, these absolutely ludicrous images generated by a cutting-edge tool made by one of the world’s biggest companies were not worthy of an article back then?

But of course, a Google search that tells you that chewing tobacco is better than smoking it (which, btw, is 100% factually correct) is more concerning.

I really wonder why TH's staff did not use the opportunity back then to publish another anti-AI article. Your corporate masters seem to push you to publish more of those, as the lawsuits against OpenAI and others show that publishing houses are very afraid of AI making them obsolete.
(It’s more of a rhetorical question, I think I already know the answer.)
So the "corporate masters" don't direct coverage of AI fails. And covering AI fails does not have any any benefits to our parent company.

So about the Gemini fails and why we didn't cover those: I'm trying to think back to remember whether we had any conversations about it, but my thoughts as EIC about why it wasn't as important for TH to cover are:

* The fail wasn't appearing directly on top of search results
* I think that by the time it was in the news and we had time, we couldn't replicate it ourselves, so we couldn't add anything interesting to the conversation. I prefer not to just parrot others' coverage on this and to get our own queries and screenshots.
* We had just recently done an article about Google SGE (also Google's AI) and other image generators and the problems with their reproducing copyrighted characters (see Mickey Mouse and Darth Vader Smoking)
* All the mainstream press covered it so we didn't have a lot to add.
 

abq

Mar 6, 2024
9
13
15
So the "corporate masters" don't direct coverage of AI fails. And covering AI fails does not have any any benefits to our parent company.

So about the Gemini fails and why we didn't cover those: I'm trying to think back to remember whether we had any conversations about it, but my thoughts as EIC about why it wasn't as important for TH to cover are:

* The fail wasn't appearing directly on top of search results
* I think that by the time it was in the news and we had time, we couldn't replicate it ourselves, so we couldn't add anything interesting to the conversation. I prefer not to just parrot others' coverage on this and to get our own queries and screenshots.
* We had just recently done an article about Google SGE (also Google's AI) and other image generators and the problems with their reproducing copyrighted characters (see Mickey Mouse and Darth Vader Smoking)
* All the mainstream press covered it so we didn't have a lot to add.
Alright, that seems like an honest answer.
Looks like sometimes a cigar is just a cigar.
Thanks for taking the time to reply.
 
  • Like
Reactions: coolitic

apiltch

Editor-in-Chief
Editor
Sep 15, 2014
232
130
18,870
You have human 'experts' who have been wrong on everything for years but still parade on screens without being called out on anything. The societies we live in have lost all respect for truth and honesty, as any form of ethics has been replaced with greed, agendas and ideology.

Yet we _demand_ that an algorithm, which is not conscious, be a paragon of virtue.

And this algorithm is based on our data. Hence, it reflects our failings.

This is trying to blame the mirror and break it. That is not a healthy way of dealing with this problem.

And it is obvious why journalists have a grudge against LLMs. It's not because they suddenly became warriors of the truth.

Do you believe anything you read online? Do you expect anything you see on TV or in newspapers to be the simple and complete truth? Just take anything AI produces with a grain of salt, just like the rest.
So, I'm sure we're going to still disagree after this post. But here's my two cents on this:

1. Human experts absolutely are held accountable for what they write. I submit as evidence this forum, where we get constant, lively feedback from readers. If we get something wrong, we hear about it from our readers, our bosses and whatever people or companies we covered.

And, at every publication I know, people are constantly held accountable for what they write (or don't write). We are constantly having conversations about the quality, accuracy and transparency of our work.

2. "It's obvious why journalists have a grudge against LLMs." We don't have a grudge against the technology, but I think many of us do not like how it is being deployed in some places, where it is A) literally stealing our work while competing with us, B) not facing the accountability that we face when it gets something wrong, and C) spreading harmful misinformation.

3. I believe that the source of information matters. Tell me who said it and I'll tell you how much to trust them and what their biases are. We work hard every day to earn and keep readers' trust. Google is abusing the position as a trusted source of information that it earned over 25 years of giving decent search results in order to promote its AI. It expects people to trust the AI because it's the "voice of Google." But what are Google's biases, and what is its AI's track record for accuracy?
 

apiltch

Editor-in-Chief
Editor
Sep 15, 2014
232
130
18,870
Alright, that seems like an honest answer.
Looks like sometimes a cigar is just a cigar.
Thanks for taking the time to reply.
Every day, I'm thankful to be writing for Tom's Hardware, because we have readers who really care about what we do and hold us accountable. I love feedback, especially tough feedback.

If anything, stories about AI fails are contrary to what most publishers would want from their pubs, because every corporation wants to be seen as embracing the new technology and getting with the times. I don't get negative notes from management after writing these, but I don't get positive ones either. And I have had so many conversations with my colleagues asking them "is this a story for us or not?"

And, in this case, I have been cataloging a lot of bad examples I saw and sharing them around the office and socials and asking friends "is this worth a story?" Some people said no. A couple said "you're going to make this a story, right?" So I decided to do it but I could have gone either way TBH. Then, in doing it, I wanted to make sure I got some different types of examples and dug into them a bit.
 
Mar 17, 2024
12
23
15
There really are "experts" in the world, and only a fool rejects their wisdom, but they often disagree on issues and their wisdom is usually eradicated when "summed up" for mass consumption. Of course, the more power and deference we give to experts, the greater the incentive for psychopaths and imitators to pretend expertise.
Please continue to believe your own lies. Please continue to ignore why trust in media organizations is going down in flames. Please continue to ignore why your political system is dysfunctional and why trust in it is going down in flames too. Do not wonder why this period is called the 'post-truth era'. It must all be about a fringe part of the internet.

Journalists are fighting against LLMs not for the sake of truth, but because they threaten their jobs. It is 100% the Luddite way. It is pretty obvious and natural, but also a losing proposition. Do they expect that their government will let its competitors take the lead in one of the last areas where it could possibly lead?
 

domih

Reputable
Jan 31, 2020
193
174
4,760
(a) What is the editorial value of Generative AI? Its pseudo-editorial value is just derived from all the data fed to it. This is a very far cry from the culture, erudition, reasoning and creativity of a human brain.

(b) A quotation from long ago: "Read books. They are all wrong (or lie) on certain points, but not on the same ones. So you can form your own opinion."
 
Mar 17, 2024
12
23
15
So, I'm sure we're going to still disagree after this post. But here's my two cents on this:

1. Human experts absolutely are held accountable for what they write. I submit as evidence this forum, where we get constant, lively feedback from readers. If we get something wrong, we hear about it from our readers, our bosses and whatever people or companies we covered.

And, at every publication I know, people are constantly held accountable for what they write (or don't write). We are constantly having conversations about the quality, accuracy and transparency of our work.

2. "It's obvious why journalists have a grudge against LLMs." We don't have a grudge against the technology, but I think many of us do not like how it is being deployed in some places, where it is A) literally stealing our work while competing with us, B) not facing the accountability that we face when it gets something wrong, and C) spreading harmful misinformation.

3. I believe that the source of information matters. Tell me who said it and I'll tell you how much to trust them and what their biases are. We work hard every day to earn and keep readers' trust. Google is abusing the position as a trusted source of information that it earned over 25 years of giving decent search results in order to promote its AI. It expects people to trust the AI because it's the "voice of Google." But what are Google's biases, and what is its AI's track record for accuracy?

I'm not talking about a website like yours, which has almost no effect on society at large. Who cares if you got something wrong?

I'm talking about things like propagating 'information' that Saddam Hussein had links with Al Qaeda, which led to the deaths of hundreds of thousands of people, while the 'journalists' involved got promoted. I'm talking about people like Petraeus, who claimed that the Ukrainian counteroffensive would be a tremendous success and ended up getting tens of thousands of Ukrainians killed for nothing. I'm talking about the smell of civil war, where every blow is allowed and every lie is perfectly fine. That's nihilism. And it's everywhere.

'Accountable' my *ss. There is no accountability. Maybe in your small corner of the internet that doesn't threaten anyone.

I don't understand where you see that Google is a trusted source of information. Google doesn't rate websites according to truthfulness, and you can perfectly well find through Google a website that tells you to smoke while pregnant. Why should I trust whatever Google says just because it is AI? Google is not god. Why should I trust that more than any website Google recommends that tells me I have cancer because I cough?

As I said, since LLMs use our data, they are a mirror of what we are saying. Breaking the mirror is not the right or healthy way to proceed.

As for the competing aspect, I understand it, but it is pretty much the same as with any other human journalist. The arguments brought against LLMs by journalists (the NYT) are pretty flimsy.
 
Last edited:
May 25, 2024
1
1
15
Ok sure, fair point, but the situation was extremely similar to this one:
- New AI product/feature by big tech company
- Makes ridiculous mistakes
- One generated so much controversy that it got a retraction and statement by the CEO, the other did not (yet)
- One is political, the other is not

One of them got an article. The other one did not. Surely this was purely random.
A huge difference here: AI Overviews are being inserted en masse whether the user wants them or not. This is no longer a case of opting into the SGE labs testing.

With the Gemini image generation, the user had to actively seek out the AI tool and prompt it. These AI Overviews are supplanting the top of the general search results for the user, and the only "prompt" is a standard search query.
 
  • Like
Reactions: Jagar123

Jagar123

Prominent
Dec 28, 2022
58
81
610
I know you are getting a lot of pushback (which I don't understand) for writing this article, but I appreciate it. I think AI will continue to grow and have its uses. But what a lot of companies, Google included, are pushing down our throats to try to obtain market share and name recognition is gross and disturbing.

I completely understand why journalists are threatened by this type of behavior from AI. Eventually, no journalists would exist, and then who would AI have to copy its homework from? It is a silly thing to see.
 

coolitic

Distinguished
May 10, 2012
716
38
19,040
Every day, I'm thankful to be writing for Tom's Hardware, because we have readers who really care about what we do and hold us accountable. I love feedback, especially tough feedback.

If anything, stories about AI fails are contrary to what most publishers would want from their pubs, because every corporation wants to be seen as embracing the new technology and getting with the times. I don't get negative notes from management after writing these, but I don't get positive ones either. And I have had so many conversations with my colleagues asking them "is this a story for us or not?"

And, in this case, I have been cataloging a lot of bad examples I saw and sharing them around the office and socials and asking friends "is this worth a story?" Some people said no. A couple said "you're going to make this a story, right?" So I decided to do it but I could have gone either way TBH. Then, in doing it, I wanted to make sure I got some different types of examples and dug into them a bit.
And despite my criticism that this sort of coverage is repetitive, I do respect your going against convenient norms while still remaining peaceable with the opposition 😁
 

CmdrShepard

Prominent
Dec 18, 2023
385
287
560
By the same token, I asked whether USB 3.2 Gen 1 or USB 3.0 is faster. It said that USB 3.2 Gen 1 is faster when both are the same 5 Gbps speed.
Give the AI a break, please. I don't think even a single human knows which USB version has which speed; USB-IF made sure of that with their "simple" naming scheme.
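
(For what it's worth, and with the caveat that USB-IF has renamed these specs more than once: USB 3.0, USB 3.1 Gen 1 and USB 3.2 Gen 1 are all the same 5 Gbps interface; USB 3.1 Gen 2 / USB 3.2 Gen 2 is 10 Gbps; and USB 3.2 Gen 2x2 is 20 Gbps.)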

Joking aside, some of us knew it was going to be garbage in, garbage out.

Another thing to consider: by training the AI on the average of human intelligence, you can only get below-average artificial intelligence. A student can't really surpass their teachers when the teachers suck.
 

tamalero

Distinguished
Oct 25, 2006
1,142
148
19,470
I'm honestly more worried about those idiotic medical recommendations.
Like the AI sometimes saying you should jump off the Golden Gate Bridge in San Francisco if you feel suicidal.