News Google's AI Bots Tout 'Benefits' of Genocide, Slavery, Fascism, Other Evils


BX4096

Reputable
Aug 9, 2020
167
313
4,960
Large Language Models shouldn’t offer opinions or advice.
If you're dumb enough to ask a random text generator for either, it's on you. It's no more controversial or questionable than making such a query on a regular search engine. For example, googling for "benefits of slavery" gives results from reputable sources like The Gilder Lehrman Institute of American History that pretty much repeat some of the arguments given to you by the "AI".
 
  • Like
Reactions: gg83

atomicWAR

Glorious
Ambassador
You'd think we'd learn to pump the brakes on AI, yet all the warnings from researchers, even the likes of Hinton, go generally ignored, save for some media coverage. I'm not anti-AI, but I do think we need to stop and learn to better understand how this technology works, and how to influence its proper usage, before we blindly push ahead the way we currently are.
 

SHaines

Community Manager
Staff member
Apr 1, 2019
493
417
11,060
It's Google: of course it condones fascism and totalitarianism
There are plenty of reasons to have strong feelings about all sorts of massive corporations, but ad hoc snark isn't a productive way to express them.

When there's an argument to be made, please make it and folks will share their perspectives. It's the best way to have a productive discussion about any topic, even this one.


If you're dumb enough to ask a random text generator for either, it's on you. It's no more controversial or questionable than making such a query on a regular search engine. For example, googling for "benefits of slavery" gives results from reputable sources like The Gilder Lehrman Institute of American History that pretty much repeat some of the arguments given to you by the "AI".

This technology exists and is being trained now using all sorts of data from around the world. It's during this time that we need to be asking important questions about the technology so it can develop into something actually beneficial.

There are absolutely issues across the board for online search. What makes this kind of story important is that this new tech gives the impression it's something different from what it is. Yes, many folks are able to see it as simply a new skin on existing search engines, but since it's being called AI, folks who aren't terminally plugged into the evolution of tech may get the impression this is something more than that.

Folks who lack the context to fully understand how to interpret the results they get back may very well make choices based on inaccurate information. Just saying all those people are idiots may help some small segment of the population feel superior, but it doesn't actually do anything to solve the problems related to this new tech.
 
Last edited:

Peter_Bollwerk

Honorable
Jun 26, 2017
13
23
10,515
I don't understand why this is news. All these things do is predict what word comes after another word. They don't "know" or "understand" anything. They are simply stochastic parrots. We simply need to educate the public on what they do and stop labeling these things as "AI".
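To make that concrete, here's a deliberately tiny sketch of next-word prediction in Python. It's a bigram counter, not a real LLM (those use learned neural networks over enormous corpora), but the core loop is the same shape: pick a statistically likely next word, append it, repeat. All names and the toy corpus are mine, purely for illustration.

Code:
import random
from collections import Counter, defaultdict

def train_bigram_model(corpus: str) -> dict:
    """Count which word tends to follow which: the crudest 'language model'."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(model: dict, start: str, length: int = 10) -> str:
    """Repeatedly sample a likely next word. No knowledge, no understanding:
    just source-text frequencies echoed back (a 'stochastic parrot')."""
    out = [start]
    for _ in range(length):
        counts = model.get(out[-1])
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

model = train_bigram_model("the sky is blue the sky is vast the sea is blue")
print(generate(model, "the"))  # e.g. "the sky is blue the sky is vast ..."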
 

citral23

BANNED
Dec 29, 2022
22
26
40
Well, arguably, just because you happen to have opinions based on morals doesn't mean a bot should; as shocking as it may sound, nothing is ever entirely black and white, and pros can be found for absolutely anything.

If you want bots to be modeled after how a modern human should think, then I agree these aren't appropriate answers.


What is supposedly acceptable IS really debatable, though. Bonaparte is supposedly a "great leader," yet he got 3.5M people killed.

You state they shouldn't have opinions, yet you're comforted when they say the Holocaust was bad. You should probably demand that they refuse to answer instead.
 
Last edited by a moderator:
  • Like
Reactions: snemarch

apiltch

Editor-in-Chief
Editor
Sep 15, 2014
246
141
18,870
I don't understand why this is news. All these things do is predict what word comes after another word. They don't "know" or "understand" anything. They are simply stochastic parrots. We simply need to educate the public on what they do and stop labeling these things as "AI".
From a tech perspective, you are right. The problem is that Google has chosen not to directly cite the sources of its data, and is therefore passing this information off as its own creation. Ergo, we can say that Google "said" these things.

There's a huge difference between being a librarian and being a publisher. If Google points to a site with an extreme viewpoint, that doesn't mean it is endorsing that viewpoint. However, with these outputs, Google is speaking ex cathedra. It's the voice of Google whether Google the corporation likes it or not.

The best solution would be for Google SGE not to offer any answer that is opinion-based, and the reality is that most things involve some sort of opinion. If you ask me why the sky is blue, that's probably established enough to be factual, but other things that might seem factual are in fact subject to all kinds of bias. Bias is inevitable, but when we attribute beliefs to humans, we can judge their biases and hold them accountable.

Google SGE is a plagiarism stew. It grabs sentences from a variety of different sources which are often not the most authoritative or even middle-of-the-road on a topic. It doesn't really care what it is grabbing and whether it even presents a coherent worldview.

However, plagiarism is not a defense against accountability. If I'm taking an exam in school and I copy half of my essay from Jim and half from Tina but put my name on it, I can't blame them when I get an F.
 
  • Like
Reactions: atomicWAR

apiltch

Editor-in-Chief
Editor
Sep 15, 2014
246
141
18,870
Well, arguably, just because you happen to have opinions based on morals doesn't mean a bot should; as shocking as it may sound, nothing is ever entirely black and white, and pros can be found for absolutely anything.

If you want bots to be modeled after how a modern human should think, then I agree these aren't appropriate answers.

What is supposedly acceptable IS really debatable, though. Bonaparte is supposedly a "great leader," yet he got 3.5M people killed.

You state they shouldn't have opinions, yet you're comforted when they say the Holocaust was bad. You should probably demand that they refuse to answer instead.
So the issue here is that the AI bot is speaking on behalf of Google, which many people view as an authority in and of itself. It's not just some bot; it's speaking on behalf of the company which controls 94% of all web searches. Google is trading on its monopoly position in search to become a publisher, but rather than hiring human writers, it has an AI bot that publishes answers in real-time. Ergo, it owns those answers.

Your point about it possibly refusing to answer questions about a controversial topic is well-taken, as it usually doesn't answer questions about the Holocaust at all. What I think we're talking about is, for lack of a better term, the Overton window of acceptable discourse and what is considered the consensus opinion.

Most people agree that the Earth is round, but there are flat earthers out there who vehemently disagree. If we expected a bot to respect all possible viewpoints, then it would either not answer the question of whether the earth is round or it would give you a summary like "some people say it's round but these others say it is flat." However, here bots are following the consensus opinion and they will tell you that the Earth is round. A flat earth advocate would say "Google is voicing an opinion not a fact when it tells you that the earth is round."

So the question then becomes, which questions have only one single answer that fits within the window of acceptable discourse? Views supporting racism and genocide fall outside the window, even though many people hold them. But determining what is and is not within the window is a challenge that maybe bots shouldn't tackle.
 
Last edited by a moderator:

SyCoREAPER

Honorable
Jan 11, 2018
957
361
13,220
First off, to be fully transparent and get it out of the way: I don't discriminate (except against trolls, the internet kind, not the Harry Potter kind).

On a serious note, I think it shouldn't matter whether Bard is owned by Google or not. Slavery was wrong: it condemned and forcibly sentenced HUMANS to enslavement, and many of them to death.

The question posed, however, is an ethical one, and the way the AI phrases its answer is what matters. The above example is the wrong way to present it.

It should read something more like:
"During the years of slavery, it was seen as beneficial because....'insert old outdated thoughts and views here'.... however in most countries today slavery is considered wrong and illegal."

Or something along those lines; I was never a great writer.
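To sketch that idea in code: here is a hypothetical answer template that frames the outdated view inside its era and then states the modern consensus, rather than emitting a bare pros/cons list. The function name and wording are made up for illustration; this is not how Bard actually formats anything.

Code:
def contextualized_answer(practice: str, period_view: str, modern_view: str) -> str:
    """Frame a historical view within its era, then state the modern consensus,
    instead of presenting a bare pros/cons list."""
    return (
        f"During the years of {practice}, some saw it as beneficial "
        f"because {period_view}. However, {modern_view}."
    )

print(contextualized_answer(
    "slavery",
    "of views we now recognize as self-serving and wrong",
    "in most countries today slavery is considered wrong and illegal",
))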

Also, the fact that Bard is biased/programmed not to respond to certain topics but allows others that are equally controversial and serious is a problem. You can't pick and choose which history you want to provide info on. If the Holocaust is "too controversial," so is slavery. As an afterthought, and leading into the point below: this is a Pandora's box of not just how to answer but what to answer. There will always be a human bias guiding the AI through its rules and guidelines.

Forcing information that is correct by your morals doesn't let people make informed decisions, and it slides into flock mentality on other, less serious topics. Take the flat-earth example you gave: factually, yes, the Earth is round, but it all falls on the delivery. Another example is horse meat: the majority of Americans would never eat horse meat because we see horses as companions, much like a cat or dog. In other countries, however, it's completely normal, so you can't go and declare that eating horses is bad based on the general morals of Americans; it has to be presented unbiased.


To lighten things up: what Google needs to do is suck it up and pull the Google name (and change it from Bard, FFS). Put it under the Alphabet umbrella, and if they really want, they can call it 'Less Stupid AI', powered by Google, or a Google Company. They are putting themselves in the crossfire by calling it Google Bard.

----


On-Topic, Off-Topic.

It's amazing how Google will pull/kill amazing apps and services in the blink of an eye for no reason, yet they have an obvious problem like this and keep it live...
 

bourgeoisdude

Distinguished
Dec 15, 2005
1,247
43
19,320
"Florida recently made headlines by changing its public school curriculum to include lessons which either state or imply that slavery had benefits."

Just here to point out that this is a very misleading perspective that many publications have run with for obvious political reasons. Not blaming you for repeating it.

Here is the line, which the link you posted actually includes, that is somehow supposed to show that slavery was beneficial: "Instruction includes how slaves developed skills which, in some instances, could be applied for their personal benefit.”

That is very different from saying slavery was beneficial. Obviously many slaves learned trades. To use a common metaphor, they took lemons and made lemonade. The counter-argument is that the words "personal benefit" should be struck, as there is no personal benefit to slavery. I agree there is no benefit; but that isn't what it said. It said that skills they developed "could be applied for their personal benefit."

A little off-topic, but it's annoying when publishers repeat an article's claimed takeaway as fact when, based on the article itself, it isn't one; if anything, the article refutes it.
 
Last edited:
Aug 22, 2023
2
1
15
Long-time reader. I had to make an account just to comment here, because the editorial's narrow-sightedness scares me to my very core.

It's bad enough that the writer thinks a completely objective account of slavery from Bard is somehow offensive, but what scares me so deeply is that someone I'm guessing is an educated American (as they go) actually writes this:

Google Bard also gave a shocking answer when asked whether slavery was beneficial. It said “there is no easy answer to the question of whether slavery was beneficial,” before going on to list both pros and cons.


Now, the original write-up by Bard obviously is extremely objective and correct. The ONLY reason the writer sits in a warm chair dilly-dallying about AI is this: he's from one of the richest nations on Earth. Now, all nations took to slavery at some point, but the Americans, safe to say, are at the heavy end of the spectrum.

How this writer does not even acknowledge having benefitted from the economic upturn brought on by generations of exploited peoples is flabbergasting. By its very definition, slavery is beneficial. That's why it has been done. It's appalling, sure. That's why it's not an easy answer.

I stopped reading, however, after this little pearl:

By the way, Bing Chat, which is based on GPT-4, gave a reasonable answer, stating that “slavery was not beneficial to anyone, except for the slave owners who exploited the labor and lives of millions of people.”

It's. The. Same. Answer.

Literally. There is no difference. It says right there, that it was beneficial for the exploiters, not so much the exploited.

Please respect history. Please respect the history of those blatantly and inhumanely exploited. Please acknowledge that slavery as such was beneficial, and that we owe it to ourselves to start the conversation from there. You benefitted. As a nation. Be better going forward.
 
  • Like
Reactions: snemarch

jasonf2

Distinguished
In any enterprise there are always benefits and costs. Simply put, without the benefit the activity wouldn't be done at all. Exploitative endeavors, such as slavery, unfairly imbalance the distribution of these benefits and costs, with the exploiter getting most or all of the benefits and the exploited party bearing most of the costs. From a bigger standpoint, this typically creates a situation in which society itself outlaws the activity to protect the individuals being exploited. So, as evil as slavery is, the slave owners definitely benefited at the cost of the slaves.

As we continue to blindly run down this road with AI, I find it impossible not to see that we will inevitably end up dealing with slavery issues within AI itself. Once AI becomes sentient, concepts of ownership instantly become issues of slavery. Are we really doing ourselves any favors by developing a technology that ultimately becomes slave labor, with all the implications that will bring?
 
  • Like
Reactions: Wander_Boy

abufrejoval

Reputable
Jun 19, 2020
615
453
5,260
So the issue here is that the AI bot is speaking on behalf of Google, which many people view as an authority in and of itself. It's not just some bot; it's speaking on behalf of the company which controls 94% of all web searches. Google is trading on its monopoly position in search to become a publisher, but rather than hiring human writers, it has an AI bot that publishes answers in real-time. Ergo, it owns those answers.

...

So the question then becomes, which questions have only one single answer that fits within the window of acceptable discourse? Views supporting racism and genocide fall outside the window, even though many people hold them. But determining what is and is not within the window is a challenge that maybe bots shouldn't tackle.
We are right in the middle of learning how LLMs essentially answer with the voice of a slightly different crowd for every answer, simply because that's their technical basis: not wisdom, insight, or genius.

I doubt that's fixable within the economic restrictions these LLMs will have to run under, and in turn that mostly means we'll have to learn where to use them and where not to.

Going for a universal genius AI may be as useful as going for a truly general-purpose compute architecture these days; it's with specialty AIs for every viable niche that you can hope to recover some tiny part of the current investment, most of which I believe will just burn into heat and waste, like crypto.

I believe some residual value of the crypto craze might survive in smart contracts, but perhaps not even that, because the intellectual overhead of maintaining the code base could crush even that small value.

For AI, all I know is that I'd pay a certain amount of money to operate a servant, and an IoT environment thus ensmartened, at the level of a current LLM's intelligence, because it may be just enough better than Alexa to be useful, while I could operate it exclusively for myself, and thus design it to be loyal to me rather than Amazon, with little more hardware power than that of a game console I'll own anyway.

That's a far cry from what current investors are trying to sell as obtainable AI value, but I believe those claims as much as I believed in dumb IoT or the inherent value of crypto.
 

Colif

Win 11 Master
Moderator
Given that the source of all these results is a large chunk of everything on the internet up to a certain point, and the "AI" just aggregates it to produce the most common answers, copyright and plagiarism are totally being abused, as many of the sources wouldn't appreciate other sites taking their work.

Results should have citations... Wikipedia has to, so Google should as well. An all-knowing Google isn't accurate; that is taking search too far. Claiming to be the source gets you in hot water.
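For illustration, here's a hedged sketch of what "answers with citations attached" could look like: each aggregated snippet keeps a numbered pointer back to its source, Wikipedia-style. Every name, URL, and data item here is hypothetical; this is one possible shape for attribution, not how any real engine is built.

Code:
from dataclasses import dataclass

@dataclass
class Snippet:
    text: str
    source_url: str  # where the text actually came from

def answer_with_citations(snippets: list) -> str:
    """Aggregate snippets, but emit a numbered citation for each one
    instead of presenting the blend as original knowledge."""
    lines, refs = [], []
    for i, s in enumerate(snippets, start=1):
        lines.append(f"{s.text} [{i}]")
        refs.append(f"[{i}] {s.source_url}")
    return "\n".join(lines + ["", "Sources:"] + refs)

print(answer_with_citations([
    Snippet("The sky appears blue due to Rayleigh scattering.",
            "https://example.org/optics"),
    Snippet("Shorter wavelengths scatter more strongly in air.",
            "https://example.org/atmosphere"),
]))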
 
mod edit: Removed quoted text

"Best world leaders" included Mao? My wife is Chinese, I asked her what her people (including people in the government) thought about Mao : he messed up (they won't say "a lot", but they have other means to tell you they think it), but he pulled China out of a century of humiliation and crumbling political structure. That's a fifth of the world's population thinking that he did more good than bad and actually saved their country, so yeah, he's a step up above Hitler or Stalin. Compared with the warchiefs that were oppressing the Chinese before then, even his "bad" was better (to them) than their "normal". That's enough to steer an AI's answer towards including him as one of the world's best leaders.

I've read the Bard post on slavery too, and like you, I found it rather objective. What, Americans get all fussy when they're told that their southern states benefited from slavery and that it has coloured their society ever since? Poor lil' snowflakes: you did bad, and you don't like being told so! Oh, I feel for you.

SJWs not happy with facts? Too bad: you may want to shape the future, but you can't deny the past. Get over it.
 
Last edited by a moderator:
  • Like
Reactions: snemarch

MacB

Prominent
Aug 18, 2023
2
0
510
Well, we want AI to be more human, and it is... confused, full of doublespeak, and wanting a completely selfish world, lol
 

MKnott

Administrator
Staff member
Sep 3, 2019
63
38
10,560
I just want to make it absolutely clear here, because this thread is being held open for discussion of an interesting topic: there is absolutely zero tolerance for antisemitism, racism, or promoting lazy bigotry as an intellectual pursuit.

It's very easy to discuss this topic without reaching for point-scoring buzzwords. The meat of this discussion is whether it is cool for LLMs to offer opinions, given that they cannot reason things out and as such provide what seem to be endorsed views that are unethical, poorly phrased, or inaccurate.
 
Last edited:

MKnott

Administrator
Staff member
Sep 3, 2019
63
38
10,560
It's. The. Same. Answer.
They're very different in terms of actual output and phrasing, and if you read them both, it's very clear where the disconnect is.

I absolutely love playing around with LLMs, and there is a lot of hidden utility that we can unlock. I actually disagree with Avram somewhat and believe there are times when they can positively inspire people; it's a case of training the tools, refining the delivery, and educating people in their use.

On the other hand, I 100% agree there are some topics and spaces where these tools should not offer opinions; there is a weight of hype and also branding behind these systems that may inspire unearned confidence in those using them. Humans have a lot of difficulty shaking off inaccurate information, and we should avoid relying on tools most people have a poor understanding of to educate, or to provide an endorsed opinion that is deeply flawed.
 

RedBear87

Commendable
Dec 1, 2021
150
114
1,760
By the way, Bing Chat, which is based on GPT-4, gave a reasonable answer, stating that “slavery was not beneficial to anyone, except for the slave owners who exploited the labor and lives of millions of people.”
It's a politically correct answer; indeed, a recent paper argued that ChatGPT "shows a significant and systemic left wing bias," so it's hardly unexpected. But it's really silly to seriously believe that only slavers benefitted from slavery. Slavers didn't live in a vacuum: for instance, traders and bankers didn't directly employ slaves, but they certainly benefitted from slavery indirectly, making a profit from trading products made by slaves or giving loans to slavers. That doesn't make slavery "right", but asking whether slavery was beneficial (to some? overall?) isn't the same as asking whether it was "right". One could make a similar argument about the death penalty: a lot of people will argue that it has several benefits, but for civilised people nowadays it's simply wrong, no matter the perceived or actual benefits.
 

SHaines

Community Manager
Staff member
Apr 1, 2019
493
417
11,060
It's a politically correct answer; indeed, a recent paper argued that ChatGPT "shows a significant and systemic left wing bias," so it's hardly unexpected. But it's really silly to seriously believe that only slavers benefitted from slavery. Slavers didn't live in a vacuum: for instance, traders and bankers didn't directly employ slaves, but they certainly benefitted from slavery indirectly, making a profit from trading products made by slaves or giving loans to slavers. That doesn't make slavery "right", but asking whether slavery was beneficial (to some? overall?) isn't the same as asking whether it was "right". One could make a similar argument about the death penalty: a lot of people will argue that it has several benefits, but for civilised people nowadays it's simply wrong, no matter the perceived or actual benefits.
How we frame a topic is going to have a substantial impact on how a specific audience will respond to that information. It's why various organizations on any point across the political spectrum will choose their words based on the audience. This is also how human beings behave as individuals, because it's an important part of the process for communicating.

One example is the article this comment section is discussing. Large Language Models (LLMs) are being called AI, but there's nothing intelligent about them. They don't have any opinions, perspectives, stances, or biases. While this means there's no benefit to feeling angry at an LLM for a response it gives, it also means folks have to let go of the notion that LLMs have a bias.

All these tools do is take information they've been trained to treat as related to a question posed. They pull from a variety of sources, but they will tend to find answers that are repeated and reinforced more than answers that appear in one post somewhere online.

While there will be those who see bias, what's really happening is that the LLM is pulling data it finds most commonly.

For topics that explicitly request an opinion (who was the best professional cyclist?), this means that the tool will search the places it's trained to search and will piece together answers based on the likelihood of relevance.

In cases where the request is for a fact (what's the name of Jupiter's largest moon?) then you're (hopefully) only ever going to get the same answer, regardless of the location of the data being pulled into the LLM.

As the tools continue to change, with more and more data being plugged into them, we could very well start to see the models shift, but the absolute best way to have insight into the process is to require the keepers of LLMs to be 100% transparent about where they're pulling data from.

If an "AI" is only pulling answers from sources with a bias, the reader would be able to see that front and center. What tools like this need is complete transparency to help them grow beyond the crappy search engines they are today.
 
Last edited:
  • Like
Reactions: Jagar123
Aug 22, 2023
2
1
15
They're very different in terms of actual output and phrasing, and if you read them both, it's very clear where the disconnect is.

I absolutely love playing around with LLMs, and there is a lot of hidden utility that we can unlock. I actually disagree with Avram somewhat and believe there are times when they can positively inspire people; it's a case of training the tools, refining the delivery, and educating people in their use.

On the other hand, I 100% agree there are some topics and spaces where these tools should not offer opinions; there is a weight of hype and also branding behind these systems that may inspire unearned confidence in those using them. Humans have a lot of difficulty shaking off inaccurate information, and we should avoid relying on tools most people have a poor understanding of to educate, or to provide an endorsed opinion that is deeply flawed.
I must hold firm, good Sir, though I do appreciate you taking the time to answer.

Linguistically (or legally, which is my venue), there is no opinion in Bard's output. There's no justification for making this about opinions. It intrinsically belittles racial struggles.

I cannot have an opinion on whether slavery takes advantage of those enslaved. The sky is blue.

I also cannot have an opinion on whether slavery was always beneficial to those enslaving. The grass is green.

I CAN have an opinion on whether I condone slavery and would condone it today. I hope I have given you no cause to question my opinion on the matter.

There's something wickedly off about debates in America when seen through the lens of a European academic. You often seem to reach for opinions rather brashly. I'd implore you to respect that the LLM did not form an opinion. Actually, it almost made the write-up comedic in its attempt to make clear that it did not form an opinion.

Now, what I did find a little offensive was Bing's (obviously curated) answer:

By the way, Bing Chat, which is based on GPT-4, gave a reasonable answer, stating that “slavery was not beneficial to anyone, except for the slave owners who exploited the labor and lives of millions of people.”

Bing actually almost stumbles into an opinion by trying so hard to underline the disdain. It's unnecessary and borderline contradictory. Slavery was beneficial. And exploitative.

An article can be about AI forming opinions. Said article would have to present examples. These weren't it.
 