News: Erasing Authors, Google and Bing's Bots Endanger Open Web

Status
Not open for further replies.

PlaneInTheSky

Commendable
BANNED
Oct 3, 2022
There are no citations to tell you where these “multiple viewpoints” came from, nor is there authority to back up the assertions it makes.

These "AI" machine learning algorithms have no concept of time either.

Humans understand the concept of time. We understand past, present and future.

There are lots of articles that discuss questionable medical practices from the past without indicating they are from a bygone era, because humans have a strong concept of time. Most of these articles aren't dated, because why would they be, when we intrinsically understand time? AI does not.

I’ll admit another bias. I’m a professional writer, and chatbots like those shown by Google and Bing are an existential threat to anyone who gets paid for their words.

Chatbots use algorithms to steal massive amounts of data from sources that fall under copyright.

It's simply a form of mass plagiarism. Google and Microsoft are going to get their butts sued over this.
 

setx

Distinguished
Dec 10, 2014
I’ll admit another bias. I’m a professional writer, and chatbots like those shown by Google and Bing are an existential threat to anyone who gets paid for their words.
And that's great! You know what chatbots have shown us? That they are as good as the "average" writers on the web nowadays, who also don't bother to get into details or fact-check anything.

It's already hard to search for something on the web because you're drowned in copy-paste pseudo-authors. That's why people don't see much difference when using chatbots.

Most websites rely heavily on search as a source of traffic and, without those eyeballs, the business model of many publishers is broken. No traffic means no ads, no ecommerce clicks, no revenue and no jobs.

Eventually, some publishers could be forced out of business.
And that is also great. If your quality is chatbot-level or you are exploiting unsavvy people by running them through your shill links – you should be out.
 
  • Like
Reactions: bit_user

USAFRet

Titan
Moderator
And that is also great. If your quality is chatbot-level or you are exploiting unsavvy people by running them through your shill links – you should be out.
One of the problems is in forums like this.

Chatbot-generated "replies" sound sort of human, but they're not really good answers or solutions.
But computer-generated replies can overwhelm actual humans. Ten mediocre replies can rise to the top more easily than the one good reply that actually gets the problem fixed.
 

setx

Distinguished
Dec 10, 2014
But computer-generated replies can overwhelm actual humans. Ten mediocre replies can rise to the top more easily than the one good reply that actually gets the problem fixed.
And that is the problem we already have today. Is every article written by a human actually good?

I'm absolutely fine with every bad article written by a human being replaced by a bad article written by a bot, because no value was lost.
 

USAFRet

Titan
Moderator
And that is the problem we already have today. Is every article written by a human actually good?

I'm absolutely fine with every bad article written by a human being replaced by a bad article written by a bot, because no value was lost.
I wasn't talking about news articles, but rather about help forums.
"I have a problem with my foobar..."

"It sounds like you're having an issue with...."
"Do these 5 steps"
"If that doesn't work, contact the manufacturer"

We're already seeing this.
It sounds human, and is sort of close, but offers no real fix for the person's actual problem.

But given a lot of those replies, the real fix gets lost in the sauce.
 
  • Like
Reactions: rluker5

Awev

Reputable
Jun 4, 2020
Do not worry, the answer is always to do a clean reinstall of Windoze. Now, what was the question?

I do want to thank a college teacher who taught me to question almost everything: who is providing the information, what their motivation is, why they are doing it, what they get out of it, what their bias is, what their level of expertise in the matter is, who they work for, etc.

And as for recommendations, I base my answers on what the person needs or wants to do with the item. I am not going to suggest a $5,000 computer for a grandma who barely knows what a computer mouse is and just wants to see pictures of her grandchildren, while a young adult with too much money and time, wanting to play the latest AAA game, may not mind dropping 10 grand on a gaming rig that will only last two years and is already outdated before the ink is dry on the spec sheet.
 
  • Like
Reactions: Why_Me and bit_user

shady28

Distinguished
Jan 29, 2007
I messed around with ChatGPT a good bit. It's actually a very deceptive platform.

It clearly has biases, for one. It seems to amalgamate recent trends on the web and regurgitate them as fact.

The result is that it often makes false statements about even innocuous, objectively quantifiable questions.

As an example, I asked it:

"Compare the pattern of the DJIA in 1968 to the pattern in 2022."

The response about 1968 included a bunch of political and social events as reasons for the decline. That's highly debatable; it ignores fiscal, monetary, and other social factors.

However, the response for 2022, quoted below, was objectively incorrect:

"In contrast, in 2022, the DJIA saw a positive performance, largely due to a combination of strong economic growth, low interest rates, and a robust technology sector. Despite some short-term volatility due to the COVID-19 pandemic and other geopolitical events, the DJIA ended the year with a gain of over 20%, which was one of its best yearly performances in recent history. "

That is false. The DJIA started 2022 at 36,231 and ended it at 33,147, a decline of roughly 8.5%.
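A quick sanity check of that figure (a minimal Python sketch using the values quoted in this post, not official closing prices):

start, end = 36_231, 33_147  # approximate 2022 open and close cited above
change_pct = (end - start) / start * 100
print(f"DJIA 2022 change: {change_pct:.1f}%")  # prints about -8.5, i.e. a decline, not a 20% gain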

It also said interest rates were low in a year of continuous hikes that have left us at the highest rates of the last 15 years. And it completely ignored inflation, both in goods and monetary, which reduces real, inflation-adjusted gains.

This system is quite good at stringing words together in a coherent way. However, I think it is basically regurgitating what are termed "Normative Facts". Normative Facts are just consensus views, the social zeitgeist so to speak, aka the "Norm" of opinion. This may make it comforting to read for some, but does nothing to advance people's awareness.

In the case of the DJIA question, it clearly "read" a bunch of recent articles on the run up in the index from around October to the end of the year.

It then conflated that three-month period with the entirety of 2022 and produced an objectively false summary, including the ludicrous statement that interest rates were low. If you look objectively at the 2022 DJIA and relate it to interest rates, you'll find that the index bottomed and started to go up along with interest rates.

I'm not saying those two things happen together or drawing any such conclusion; I'm just saying that's what happened, and this thing basically tried to gaslight me.

The problem is, it is quite eloquent in stringing those words together. This gives what it writes a tone of knowing authority that the back end logic actually lacks.

I see it as a very dangerous disinformation tool.
 

Deleted member 14196

Guest
It will soon be working for the Ministry of Truth, or, I mean, Google.
 
  • Like
Reactions: Why_Me

baboma

Prominent
Nov 3, 2022
It’s self-evident that AI is the future. But the future of what?

That's what everyone is trying to find out. Nobody knows, as OpenAI's own CEO has admitted. Everyone is exploring the possible boundaries. But one thing we can all agree on: ChatGPT has set off a frenzy, and every company is rushing toward this new AI frontier. The genie isn't going back into the bottle.

AI is here to stay, and it will only get more pervasive. It's a matter of society adapting to it, just as society adapted to the Internet. Many changes happened when the Internet came about. Traditional news was savaged, and many journalists' jobs were (and are) lost. Ad-supported blogging, e.g., THW, became the new "news." Now, with AI in the ascendance, maybe blogging jobs are the ones endangered. That's how change works.

both Google’s Bard bot and Microsoft’s “New Bing” chatbot are based on a faulty and dangerous premise: that readers don’t care where their information comes from and who stands behind it.

Microsoft has already responded to this by providing links side by side with its AI search results. I'm fairly positive Google will do something similar. It won't be just a chatbot by itself.

I’ll admit another bias. I’m a professional writer, and chatbots like those shown by Google and Bing are an existential threat to anyone who gets paid for their words.

The end result could be a more closed web with less free information and fewer experts to offer you good advice.

The author's bias is evident in his dystopian take on AI advances. I don't agree that his plight (as a blog writer) will translate into a worsening of society. He may well lose his job, as did many journalists who were "obsoleted" by the Internet. But I think most people would agree the Internet contributed to the betterment of society today. I choose to be an optimist, and I think the same will be true of AI. To be sure, there'll be some bad along with the good.

Eventually, some publishers could be forced out of business. Others could retreat behind paywalls and still others could block Google and Bing from indexing their content.

The same is already happening today, without AI.

Let me interject an alternative scenario: AI helping news sites be more productive with AI-assisted content, thus lowering their costs and allowing them to stay in business with lower overhead. AI is a double-edged sword. It can help, or it can hurt.

Getty Images is currently suing Stability AI, the company behind the Stable Diffusion AI image generator, for using 10 million of its pictures to train the model.

Lawsuit aside, Getty Images' viability as a going concern has to be in question, with the ability to generate images on demand obviating the need for "stock" images. As with AI in general, that ability isn't going away because of one lawsuit.

A couple of years ago, Amazon was credibly accused of copying products from its own third party sellers and then making their own Amazon-branded goods – which unsurprisingly come up higher in internal search. This sounds familiar.

The author is reaching here. AI isn't a clone of existing products.

To sum up, I empathize with the author's fears. But his self-admitted bias is only allowing him to see the negatives that AI has to offer, i.e., the potential loss of his job. His fear has made him blind to the positives of AI. It's not a balanced view.
 
  • Like
Reactions: Awev and bit_user

baboma

Prominent
Nov 3, 2022
It clearly has biases, for one. It seems to amalgamate recent trends on the web and regurgitate them as fact.

The result is that it often makes false statements about even innocuous, objectively quantifiable questions.

There's a common misconception that equates a "talking" AI with an all-knowing oracle, just because it can "talk." It's hard to get rid of this misjudgment, even when there are clear disclaimers that ChatGPT (and its ilk) aren't arbiters of fact.

ChatGPT is just a chatbot. No less, but certainly no more. It possesses a high level of fluency, and can aid you in many things. It also possesses many shortcomings. The proper way to use it is to be aware of those shortcomings, and just use it for the things it is good at.
 
But they are attacking forums like this.
Just between yesterday and today, I personally eradicated 3 of them. Dozens of "replies". Yes, really.

Wow. I report them as quickly as I can when I see them, but it will be a never-ending game of whack-a-bot. Fortunately, this is a reputable community and bots cannot earn many badges over a long period of time. That's one thing that makes TH one of the world's most respected tech websites (and community forum platforms for volunteer member tech guidance). AI will never replace our individuality and merit over time!

Now, the day AI can perfectly replicate our handwriting and signatures on printed financial, legal, and contract documents and fax them around the world for nefarious purposes, that will be a bad day.
 
Apr 1, 2020
More importantly, if you are a consumer of information, you are being treated like a bot, expected to imbibe information without caring about where it came from or whether you can trust it. That’s not a good user experience.

That describes most websites these days, and it's why some sources have a much lower reputation than others. Using tech as an example, just off the top of my head, if you asked people to rank TechSpot, AnandTech, TechPowerUp, Tom's Hardware, and Ars Technica in terms of quality and trustworthiness, chances are you'd get a number of different answers.

ChatGPT-like programs will be no different. They may be fun to play around with, and they will no doubt improve AI responses for Alexa and other such assistants, but unless you can customize them to pull only from certain sources, the majority of people will still go directly to the sources they prefer for anything more than basic questions.
 

baboma

Prominent
Nov 3, 2022
AI will never replace our individuality and merit over time!

I'll play the devil's advocate here: Yes, it can.

In its present state, ChatGPT can already emulate the speaking/writing style of famous people by ingesting enough of their work. That means it can emulate your thoughts and expression (to a passable degree) if it can ingest enough of your public posts. On the net, that passes for "individuality."
 
  • Like
Reactions: bit_user

apiltch

Editor-in-Chief
Editor
Sep 15, 2014
Alternate title:

"Guy that writes words for a living is scared <Mod Edit> of robot that writes more words better"

As I said in the article, I have my bias, as do many of my colleagues. So take that into account when reading.

But let's talk about this from the reader's perspective for a moment.

1. You want quality content you can trust. The AI's answer, in this case, is based on a wide variety of sources. At least in Google's case, you have no way of knowing what the sources are, whether those sources have the right kind of experience, and what those sources' biases are. If you ask the AI bot "what are some good candies to give out for Halloween" and the first answer it gives you is M&Ms, you don't know whether it got that idea from the Mars company website, from a renowned parenting writer, or because Mars paid directly for that response to be included.

2. You want the best content to rise to the top where it's the first thing or among the first things you see. Are these AI summaries the best content on the page, better than the organic SERPs they are pushing below the fold and depriving of clicks? It seems like the answer is a clear "no."

The Bard answer not only has no person or persons standing behind it, but it's a broad summary that's meant to be pretty lightweight. The best answer for "which constellations to show my kid" is something like this article (https://www.skyatnightmagazine.com/advice/skills/best-winter-constellations/) which has actual insights and is written by someone with a professional background in astronomy. I trust this advice because I know where it came from.

But Google is prioritizing its own AI bot content over the expert content and pushing the better, expert content down the page. That says more about Google's desire to keep eyeballs on its site than it does about the superiority of AI writers over human ones.
 
  • Like
Reactions: helper800
I'll play the devil's advocate here: Yes, it can.

Sure it [AI] can! But that's why forums like this are human-controlled, with moderators. I speak from personal experience here: I've been account-spoofed by human trolls on all kinds of forums going back 20+ years, trolls who tried to emulate my phrases and specific wording to discredit me. They copied my avatar and username with, say, a single character off that the passing eye would not pick up on. And they were a success, to a short-term degree.

Those who knew the real "me" on those forums, including mods, knew I was being spoofed because the human spoofers just missed that "secret ingredient" of communication they couldn't replicate, like a puzzle with missing pieces: a virtual fingerprint, say a touch of wit, humor, or a reference to something mentioned long ago that others remembered. That's irrespective, of course, of simply looking at the account credentials, as I mentioned previously. And they were human, not machine-learning bots.
 