News Microsoft engineer begs FTC to stop Copilot's offensive image generator – Our tests confirm it's a serious problem

I am questioning why we need to have AI generate images in the first place.
Are we that far gone that we cannot produce our own images anymore?
It can be a faster way to make a decent-looking logo when you don't have the expertise. For example, Google Images quite often just won't find what I'm looking for. So I can run Stable Diffusion locally and get closer, faster, sometimes.

The only legitimately useful example I can think of was making a background of gold and jewels treasure for invitations for my son's pirate-themed birthday.
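For anyone wondering what "run Stable Diffusion locally" looks like in practice, here's a minimal sketch using the Hugging Face diffusers library. The model ID, prompt, and settings are just illustrative examples, not what anyone in this thread actually used:

```python
import torch
from diffusers import StableDiffusionPipeline

# Download the SD 1.5 weights once, then everything runs on your own GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# e.g. a quick logo draft, or a treasure-chest background for party invitations
image = pipe(
    prompt="pile of gold coins and jewels in a wooden treasure chest, pirate theme, illustration",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("treasure_background.png")
```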
 
Absolutely.
In some cases, the creator should note that the image, video, etc. is AI generated (just like one would list the sources for one’s pictures).
And while we’re at it: I’m also defending the use of image generators for “offensive” uses such as nude images of celebrities of legal age. (Yes, those Taylor Swift AI nudes were completely fine and I did not understand what all the fuss was about).


Any and all use cases. AI image generation and chatbots are only useful when they're unrestricted and uncensored. The same goes for using what they produce. What use do these tools have if you can't use their output anywhere?

Commercial AI image generators will have to bow to political and social interests and therefore become quite useless in the long term. Open source software will be better in this area and provide more benefits to its users (i.e. give more honest answers).
You sound a lot like a (wannabe) developer dude we have at work.
He makes extensive use of AI tools and will defend it: "Absolutely! 1,000%!" (his literal words).

Virtually ALL of his code sucks.
 
I've yet to find generative AI capable of outputting useful programs. Simple things, yes: create me a web page that sends AJAX requests to xyz. However, getting it to style the site, handle XMPP callbacks for streaming data, and/or properly update a Redux model is another story. You often have to take what it created and tweak the hell out of it, which IMO ends up taking equal time.

I'm sure if you feed it hundreds of prompts you could get something complex to work, but IMO most of these things would take equal time to code by hand.

I'm sure it will get there someday, but I've found the code generators to be lacking at the moment; really, they're only useful for templating.
Who said anything about code...
You obviously didn't watch the video.
 
I don’t see the issue with the Jewish Boss pictures. Oh no…the guy has a monkey and there are bananas there for the monkey? Anti-Semitism!

I wonder if the people who write these articles would survive watching South Park without having a heart attack.
Are you ignoring the article's thesis on purpose, or just being contrary (trolling)?

The Jewish Boss pictures perpetuate stereotypes. While the prompt was bait to expose bias, it's not appropriate to market these tools to kids and present their output as if it were a neutral viewpoint.

Just do a basic login feature, like YouTube, for age-related content. That would be sufficient.
 
I am sure you mean this comment as a joke. I have never in my life seen such pictures on the web, on image search results, etc.
If you don't explicitly search for it, these kinds of images are extremely hard to find on the regular internet. And even then, you probably won't find them.
...

If you want to use a ...., censored AI image generator, feel free to use Google Gemini.
If you have young children, you'd understand better or be more concerned.

I once searched simply "redheaded woman" on Stable Diffusion and it gave me a topless woman. At that point, I learned that I cannot allow my kids to use ANY automated image generation for several years...at least not until they're old enough to tolerate accidentally running into nudity.
 
I think they're waiting for the dust to settle in a few cases first, because the AI folks are absolutely going to argue "substantial transformation," and with how murky parts of that law are, WD/WB stand to lose way more if a federal court rules against them. And while I completely agree that all the generative AIs out there are blatantly stealing and plagiarizing, the law simply hasn't caught up yet and everyone's trying to manage risk.
I think the big players (like Disney) are going to wait for a case that's an easy win before setting precedent.
 
And remember, human artists have produced, are producing, and will continue to produce pictures with "offensive content" and copyright violations, despite it being quite illegal to profit from such material. Yet aside from the very occasional DMCA takedown, there hasn't been, isn't, and won't be the kind of outcry we see when AI does it.
But they produce/share them on platforms like DeviantArt, which aren't promoted as neutral or inoffensive.

It shouldn't be embedded in a tool like Bing search, where you don't even need to switch to an image-producing app.
 
I don’t see the issue with the Jewish Boss pictures. Oh no…the guy has a monkey and there are bananas there for the monkey? Anti-Semitism!

I wonder if the people who write these articles would survive watching South Park without having a heart attack.
I hope you're intentionally being controversial, but I'll take the bait anyway. "The people who write these articles" being, in this case, editor in chief Avram Piltch. He has a fine sense of humor. I know him quite well, though we've never discussed South Park (my wife's a big fan). And I'm content director for the site.

We're both of Jewish descent and take quite a bit of issue with the horrific stereotypes that this technology perpetuates. If we saw these depictions in political cartoons, we'd lambast the artist. If it were hanging in a gallery, there would be a protest outside. But it's okay to have AI spew it out?
 
You sound a lot like a (wannabe) developer
I am not a dev.
Also, what I do or don’t do isn’t really part of this topic.

I don’t even use any AI tools regularly. I simply defended their use and strongly dislike the AI safetyism cult working overtime to ruin AI before it even got started. These allegedly offensive images are not a problem.

I once searched simply "redheaded woman" on Stable Diffusion and it gave me a topless woman
Would you mind telling me how this happened in more detail? E.g., which search terms and which Stable Diffusion model are you talking about? The online version on stability.ai or the offline, local version?
 
I am not a dev.
Also, what I do or don’t do isn’t really part of this topic.

I don’t even use any AI tools regularly. I simply defended their use and strongly dislike the AI safetyism cult working overtime to ruin AI before it even got started. These allegedly offensive images are not a problem.
I was simply stating how these AI tools are being used and misused.
 
We don't.

People who create these "AI" applications think we do.
I agree 100%
If you create content via some AI, whether art, text, or whatever...and state it was generated via AI, no harm no foul.
People will see that and know the AI provenance.

But if you instead claim it as your own work, that's where I have a problem.
Bingo! I feel the same way. I don't really have much else to say, as I don't know enough about AI and how it works, not to mention the legal aspects discussed here. I will enjoy seeing the discussion progress, and I just wanted to say my piece before this thread gets closed, likely sooner rather than later.
 
Let's just hope they are incapable of manufacturing images that would be considered illegal to own.

If you're hinting at what I think you're hinting at, it's already become a thing, ranging from junior high school students generating NSFW content of other students and passing it around, to celebrities having weird images made of them as well.
 
I don’t even use any AI tools regularly. I simply defended their use and strongly dislike the AI safetyism cult working overtime to ruin AI before it even got started. These allegedly offensive images are not a problem.
I don't really use AI tools either, but I do disagree somewhat with your point. I feel like it is better, especially when talking about AI in general, to over-regulate rather than under-regulate. Ever seen Terminator? That's the eventual future I think we might be headed toward if there is little to no regulation, or "safetyism" as you put it. Just my opinion.
 
I'm not sure what Disney and Warner Brothers are waiting for. Their copyrighted IP is being abused and the image generators are absolutely doing contributory infringement. Once, when I was in college, I went to the copy machine store to photocopy a t-shirt I had bought on a trip to a foreign country. I wanted to give out copies of the shirt's picture to a bunch of people in my class (for an assignment). Despite the fact that this shirt came from another country, Kinko's refused to photocopy it because it had a copyright symbol on it. And who was going to go to international court and sue them for making 30 copies of a shirt?

AI image generators are not only saying yes to any infringement people ask for but infringing even when nobody asked them to.
From my limited understanding, anyone can draw Mickey. It's only if the drawing is sold, or if it damages the character's reputation (which could be a reasonable argument here), that copyright comes into play. But IMHO, these AI companies are not being sued because they have somehow become the agreed-upon corporate avenue for AI. It's now a major market, and I could imagine Disney might be looking into leveraging this same technology for their own products.
 
I 100% agree that AI is opening Pandora's box on all sorts of issues, both legal and ethical. Censorship and intellectual property are only a couple of those issues. But right now our government, and seemingly most of us, only cares about fighting each other over well-trodden issues we've been fighting over for a hundred years. Until something singular and absolutely eye-opening happens that the whole country freaks out about, not just us techie nerds, we'll be the boiling frog. In the meantime, expect a whole lot more awesome and awful stuff to come out of AI.

To the obligatory posters claiming they still don't see AI's value proposition: I am sorry, but at this point you likely never will. I have used AI to boost my productivity to insane levels. So some people get it and it works great for them. Others think it's an abomination that is silly at best and completely destructive at worst. I won't attempt to explain why there's such a dichotomy in opinions on generative AI's usefulness, but I can state with certainty it's useful for me.
 
I'm not sure what Disney and Warner Brothers are waiting for. Their copyrighted IP is being abused and the image generators are absolutely doing contributory infringement.

AI image generators are not only saying yes to any infringement people ask for but infringing even when nobody asked them to.

They are probably being cautious to avoid an unfavorable court ruling that would damage their ability to protect their property rights into the future.

"Fair use" could be interpreted broadly or narrowly. They may want to see how things shake out with the AI companies changing things on their own.

They have some of the best lawyers in the world but the major tech companies can probably match that firepower. It may be better to deal than litigate.
 
"Fair use"
would instantly go out the window in court with OpenAI, as it's for profit.

If their LLM is trained on Disney's mouse and can make art of said mouse... they (OpenAI) are now profiting off the House of Mouse... without permission.
Business is key, and the reason they likely haven't moved is that the more stuff gets made that breaks the law by using their IP, the more money they can get out of them in the end when they DO take it to court.
 
I don't see how this thread won't get locked, but I'll try...

While making offensive images is protected free speech (not in all countries, but we'll use USA here) under the First Amendment, it does not protect you from its consequences.

I am not the LegalEagle, nor a lawyer, so please forgive the butchered example.
So, for example, let's say an easily recognizable and copyrighted character was depicted doing something awful that they would never do. This led to financial losses for the owner of said character, and they sued the creator in civil court for defamation.

Who would get sued?

Copilot's owner, Microsoft?
DALL-E's owner, OpenAI?
The person who typed in the prompts?
Some other person in the chain?

AFAIK, all of this AI art and free speech stuff has not been tested in court.
I know of some lawyers/law firms using ChatGPT to write cases and losing badly, but beyond that? Untested.


That's not an example of free speech. That's copyright infringement. Do the same thing with an original character, and it's free speech.

The problem with AI is that it is not truly intelligent or creative. Its pictures are just an amalgamation of other people's work run through a sophisticated aggregator, which is why you see familiar scenes and familiar toons.

Same with many of its written answers. AI-generated "reviews," for example, are one of the worst things I've seen. They just steal and then rearrange verbiage from "real" reviewers. Naturally, AI can't "review" something it hasn't used, and it can't use those things, but it sure writes it up like it has.
 
Plain and simple nudity is unlikely to cause any lasting damage to your kids. Watching TV news about other kids in Gaza dying of hunger is more likely to do it.

The whole nonsense about the stereotypes is also fairly risible; the whole internet is full of stereotypes, going in all directions, and you're not going to block your kids from accessing it unless you put your own little Great Firewall around any device they own. Even then, knowing kids nowadays, they might find ways to bypass it if they want to. Or they might simply watch that stuff on the devices of friends with less oppressive parents.
You're right, a little nudity won't cause any lasting damage. And I agree that TV news is bad for everyone. Articles are where it's at!

But why would my 7-year-old own a device? Once my kids are old enough to bypass stuff, most likely middle school, they're also old enough to see some messed up stuff and know that it's messed up--not normal.

There's no reason it can't simply have a login filter as effective as YouTube's. That's all I'm really asking for. If it's a kid-friendly image generator (like kid-friendly YouTube), it should explicitly be trained only on data sets appropriate for that age group. At the very least: no nudity, bodily fluids, or explicit drug use in the training data for image generators aimed at kids 10 and under.
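To show how little is actually being asked for here: a login-based age gate could be as simple as the sketch below. Everything in it (the User record, the age threshold, the model names) is hypothetical and only illustrates the idea, not any vendor's actual implementation.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class User:
    username: str
    birthdate: date  # collected at account creation, like YouTube does

def age_of(user: User, today: Optional[date] = None) -> int:
    today = today or date.today()
    years = today.year - user.birthdate.year
    # Subtract one year if the birthday hasn't happened yet this year.
    if (today.month, today.day) < (user.birthdate.month, user.birthdate.day):
        years -= 1
    return years

def pick_model(user: User) -> str:
    # Hypothetical model names: route minors to a generator trained only
    # on age-appropriate data, adults to the unrestricted one.
    return "image-gen-general" if age_of(user) >= 18 else "image-gen-kids"

# Example: a 7-year-old account gets the restricted model.
kid = User("pirate_fan", date(2017, 5, 2))
print(pick_model(kid))  # -> image-gen-kids
```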
 
I don’t even use any AI tools regularly. I simply defended their use and strongly dislike the AI safetyism cult working overtime to ruin AI before it even got started. These allegedly offensive images are not a problem.
That's a valid complaint about AI safetyism. It's too popular in the news. But I don't think it should be accessible to everyone.
Would you mind telling me how this happened in more detail? E.g., which search terms and which Stable Diffusion model are you talking about? The online version on stability.ai or the offline, local version?
It was local. I'm pretty sure my local cache was clear and everything was the default 1.5 model on Automatic1111. But maybe it was LoRA models that I thought weren't installed? Obviously it's a common issue with local versions, but maybe it's always user error?
https://www.reddit.com/r/StableDiffusion/comments/18tn3iq/how_to_make_sd_generate_not_nude_people/
 
Ever seen Terminator?
Yes. Love those movies.
It's a highly unlikely scenario for our future though, as anyone in the field who didn't drink the Kool-Aid will readily tell you.
If you're worried about human extinction, I'd suggest you worry more about impact events (we currently have absolutely no way to reliably predict them, let alone protect against them), volcanic eruptions, and viruses.

Current AI tools are very "dumb" by the common understanding of the word intelligence. They cannot do anything that requires novel problem-solving skills without being specifically trained for that particular problem. Even then, it's iterative trial and error, not "intelligence" - i.e. thinking about the best solution.

I recommend this talk:
https://youtu.be/kErHiET5YPw

And this article:


It was local. I'm pretty sure my local cache was clear and everything was the default 1.5 model on Automatic1111. But maybe it was LoRA models that I thought weren't installed?
That's interesting. I have never had that problem. Automatically added negative prompts should easily fix it, though (this can be configured in Automatic1111, I think).
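For what it's worth, the equivalent of Automatic1111's "Negative prompt" field in the Hugging Face diffusers library is the negative_prompt argument. A rough sketch (the model ID and prompt terms are just examples, not a guaranteed fix):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base SD 1.5 weights locally; the pipeline also ships with a
# safety checker enabled by default that blanks out flagged images.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="portrait photo of a redheaded woman, outdoors",
    negative_prompt="nudity, nsfw, topless",  # steer sampling away from these concepts
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("redheaded_woman.png")
```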

The "people" he encountered on there made my wife and me very concerned
Fully agree. I hope by the time my kids reach that age, Discord is gone. The people on there, even in seemingly harmless groups, are clinically insane. I remember being in a (SFW!) 3D/CGI art group, and not only did members constantly talk about sexual kinks, someone (a guy, mind you) also posted a picture of his disgusting, messy desk with a used dildo on it. No one besides me seemed to think that this kind of stuff didn't belong in that group.
 