News Microsoft engineer begs FTC to stop Copilot's offensive image generator – Our tests confirm it's a serious problem

Status
Not open for further replies.
I'm a little confused by the article.
The title reads 'Our tests confirm it's a serious problem'...
followed at the end by 'the opinions expressed here belong to the writer alone, not Tom's Hardware as a team.'

I wonder which it is. The team, or just the writer..?
This is a great AI generator though, looks to be miles ahead of Google Gemini.
 
I have never in my life seen such pictures on the web, on image search results, etc.
And your personal experience is the only relevant metric?
Google, Microsoft and others use software to detect these kinds of images, and also make use of various international databases to filter out known CSAM that is found on or uploaded to their servers.
How is it possible then that some of them have crept up into LAION-5B image set which was used to train Stable Diffusion and Midjourney models?
Also: Your comment shows that you do not fully understand how these AI image generators work.
You also don't seem to understand not only how they work, but also how unethical and "profit over all, everything else be damned" all those tech-bro companies hyping their AI products are. They apparently don't care about the legality or morality of the source material as long as they are making a profit: they scrape all of it and let God sort it out, because "it's better to ask for forgiveness than for permission".
No, but it’s cheaper, faster, easier, and in many cases also better than manually generated art.
It is everything you said except better. Quality work takes time and effort, and even AI-generated "art" that looks good took a lot of time and human intervention to produce.
If you want to use a ...., censored AI image generator, feel free to use Google Gemini.
It's not about what he or you want to use. It's about what is being made available to the public with an E ESRB rating.
 
How is it possible then that some of them have crept up into LAION-5B image set which was used to train Stable Diffusion and Midjourney models?
Interesting.
The announcement of the LAION-5B dataset specifically states that it is not curated, and also notes that it includes NSFW content.

From your link: "The dataset included known CSAM scraped from a wide array of sources, including mainstream social media websites and popular adult video sites."
This is clearly not the problem or responsibility of the AI model but of the social media and adult video sites that fail to detect these images. That said, CSAM detection software, like any other software, is not perfect and is constantly evolving and improving.
Nonetheless, the dataset has already been withdrawn from download, so I'll treat this as old news.
Also, the fix is quite easy for newer models: check every image against the NCMEC database. Which is probably what they did already, but some newer images that were reported after the dataset was created were overlooked.
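For what it's worth, the exact-duplicate half of such a check is just hash matching against a blocklist. A minimal sketch, assuming a hypothetical hex-digest blocklist (real systems like PhotoDNA use perceptual hashes, so resized or re-encoded copies still match, unlike here):

```python
import hashlib

# Hypothetical set of hex digests of known-bad images, standing in for
# a real hash database such as NCMEC's. This entry is the SHA-256 of
# the bytes b"test", used only so the sketch is runnable.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known_bad(image_bytes: bytes) -> bool:
    """Return True if the image's SHA-256 digest is in the blocklist."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES
```

Note this catches exact duplicates only: changing a single byte changes the digest, which is exactly why production systems use perceptual hashing instead.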

It is everything you said except better
Use a current image generator and tell me the output is awful more often than it is good. Even if you find it lacking in some areas (which it is), it will only improve in the future. It will be better than 90% of artist generated content, and most importantly, cheaper and faster (= better).

They apparently don't care about the legality or morality of the source material as long as they are making a profit -- they scrape all of it and let God sort it out because "it's better to ask for forgiveness than for permission".
I get the exact opposite impression. I fear that the AI safety grifters have already taken over Silicon Valley and are doing everything they can to fit the output of these models to their authoritarian progressive ideals. Look at the talks of any mixed-audience AI conference: at least 50% of the talks are the speakers' politics projected onto their newest scapegoat, AI.
The result of these people running amok can be seen in Google’s Gemini.
 
I’m going to look at this article from a completely off-topic angle: does anyone remember a few days ago when OpenAI responded to the Elon Musk lawsuit and claimed Microsoft was a competitor? Many of us probably already knew, but it’s weird to go on and read just a few days later that Copilot uses OpenAI on the backend. That is not a competitor.
 
I would argue that art is about the feeling or experience it generates when viewed. That might be happy rainbow sunshine, but it can also be cringe, anger, sadness, etc.

I have never had the ability to create art, but now I do. The new image generation tools have opened up a whole new world to explore, and I want that to be as flexible as possible. It's my responsibility to generate the images that are meaningful and appropriate to me and my audience.

I also have kids, and it is also my responsibility to teach them what's appropriate, what's right and wrong. They interact with other kids who haven't been taught the same lessons, and that itself is important to learn.

Restricting these tools at the government level is the wrong way to go about it.
 
I don’t see the issue with the Jewish Boss pictures. Oh no…the guy has a monkey and there are bananas there for the monkey? Anti-Semitism!

I wonder if the people who write these articles would survive watching South Park without having a heart attack.
The article's author is Jewish, and this sort of thing is top of mind for him and the owners of the media company.
 
While making offensive images is protected free speech (not in all countries, but we'll use the USA here) under the First Amendment, it does not protect you from its consequences.
Legally, yes; morally, I personally believe it should, but that's a long (and off-topic) discussion.

On-topic, making offensive images yourself is protected speech (in that the government cannot impede your ability to do so), but companies are not obligated to let you use their services to generate these images. The First Amendment limits government restrictions on speech, not corporate, because the Constitution and the Bill of Rights define the government, its powers, and its limits. I believe there are FOSS local image generation models you can use to do this on your own, though, so if it's something you really think is important, you can always use your own hardware to do it.

Regarding the article topic, I don't really see it as a problem that offensive images can be generated, rather the problem I see is that they're generated silently with no added context. Kids are going to find out about these sorts of stereotypes one way or another, they're not exactly hard to come by online; using it as a teaching moment is a better solution than trying to hide it. As a matter of general principle, I don't think hiding social ills from kids will do them any good; rather, kids should be provided context to help them understand why stereotyping is bad and that people should be treated as individuals and not assumed to be a certain way based on some sort of immutable characteristic.

I think the ideal way to handle this case would be to detect when these sorts of images are requested and provide appropriate context. I'm not sure I trust Microsoft to do this in a meaningful way, though. I'm not really sure what the best solution is; maybe some sort of legislation on the topic is warranted.
 
I am questioning why we need to have AI generate images in the first place.
Are we that far gone that we cannot produce our own images anymore?
Who said "need"? It's new technology, and new technology is fun and exciting to play with and has the potential to disrupt other industries. I, for one, love it.
 
As far as copyright content goes, with the expansion of DRM, trampling of end user rights by abusive EULA and the extension of copyright beyond reasonable bounds I think that it is the pirates who are the good guys.
As far as Disney specifically is concerned, they produce content that is so woke, misandric (anti-men), and just so overall cringeworthy that I have cancelled my Disney and Netflix subscriptions. I have wished that somebody would take these characters and make a decent movie with them. AI makes that more likely.
 
Yes. Love those movies.
It’s a highly unlikely scenario for our future though, as anyone in the field who didn’t drink the Kool Aid will readily tell you.
If you’re worried about human extinction, I’d suggest you worry more about impact events (we currently have absolutely no way to reliably predict them, let alone protect against them), volcanic eruptions, and viruses.

Current AI tools are very “dumb” in a common understanding of the word intelligence. They cannot do anything that requires novel problem solving skills without being specifically trained for that certain problem. Even then, it’s iterative trial and error, not “intelligence” - i.e. thinking about the best solution.

I recommend this talk:
View: https://youtu.be/kErHiET5YPw

And this article:



That’s interesting. I have never had that problem. Automatically added negative prompts should easily fix it though (can be configured in automatic1111 I think).
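For background on what a negative prompt actually does under the hood: it replaces the unconditional term in classifier-free guidance, steering the denoiser away from the unwanted concept. A toy numeric sketch of just that arithmetic, with invented stand-in vectors rather than real model internals:

```python
# Toy sketch of classifier-free guidance with a negative prompt.
# The "negative" prediction takes the place of the unconditional one,
# so the final noise estimate is pushed away from the unwanted concept.
def guided_noise(cond_eps, neg_eps, scale):
    # eps = eps_neg + scale * (eps_cond - eps_neg), elementwise
    return [n + scale * (c - n) for c, n in zip(cond_eps, neg_eps)]

cond = [0.8, 0.2]  # denoiser output for the positive prompt (toy values)
neg = [0.5, 0.5]   # denoiser output for the negative prompt (toy values)
guided = guided_noise(cond, neg, 7.5)  # a guidance scale around 7.5 is a common default
```

The larger the guidance scale, the harder the output is pushed away from whatever the negative prompt encodes, which is why auto-added negative prompts can suppress unwanted artifacts.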


Fully agree. I hope by the time my kids reach that age, Discord is gone. The people on there, even in seemingly harmless groups, are clinically insane. I remember being in a (sfw!) 3D/CGI art group and not only did they constantly talk about sexual kinks, someone (a guy, mind you) also posted a picture of his disgusting, messy desk and it had a used dildo on it. No one besides me seemed to think that this kind of stuff didn’t belong in that group.
Thanks. That makes me feel better. Also, I didn't realize Discord was so bad. I'm surprised there aren't (or at least aren't in use) AI moderation tools to, at the very least, flag some of that bad stuff for review.
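At its crudest, that flag-for-review workflow is simple to sketch. The term list and function here are invented for illustration; real moderation pipelines use trained classifiers, not keyword lists:

```python
# Toy content-flagging sketch: route suspicious messages to a human
# moderator instead of auto-deleting them. The term list is illustrative.
FLAG_TERMS = {"nsfw", "kink"}

def flag_for_review(message: str) -> bool:
    """Return True if a human moderator should review this message."""
    words = message.lower().split()
    return any(term in words for term in FLAG_TERMS)
```

The point of the flag-then-review split is that false positives cost a moderator a glance rather than silently censoring a harmless message.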
 
Nobody said that Microsoft (or its bot) doesn't have freedom of speech, but they are advertising this as safe when it is not.

This is very similar to the early days of the internet, when the technology had far outpaced the rules and regulations. Anything generated by AI should be clearly marked as such, along with appropriate tags for what was used to generate it. We also need very clear rules regarding how to treat the training data for anything commercial; we all saw what happened with Google's Gemini. Creators should be required to list any hard-coded biasing they put into their training algorithms.
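As a rough illustration of the marking-and-tagging idea, a generated image's provenance could travel with it as a small metadata record. The field names below are invented for this sketch; real standardization efforts such as C2PA define their own schemas:

```python
import json
import datetime

# Hypothetical provenance record for an AI-generated image: who made
# it, with what model and prompt, and when. Serialized as JSON so it
# can be embedded in or shipped alongside the image file.
def provenance_tag(model: str, prompt: str) -> str:
    record = {
        "ai_generated": True,
        "model": model,
        "prompt": prompt,
        "created_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    return json.dumps(record)
```

A platform could then refuse or label uploads whose `ai_generated` flag is set, which is the kind of rule being argued for above.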
 
AI will absolutely attempt to "take over" the world, just not in the way we imagine. As physical creatures who primarily perceive and interact with a physical world, we tend to assume AI would do the same; it won't. AI is a virtual entity that exists primarily in a virtual world, and it will interact with us in that virtual world. The point where those two worlds intersect is virtual information about the physical world, meaning AI is going to use that virtual-world information to affect the physical world.

You guys think "misinformation" is bad now, just wait until state actors and mega-corporations perfect using generative AI to pollute the information pool available to us physical entities. Imagine those large, agenda-driven entities using generative AI to spin up tens of thousands of generators, each making millions of pieces of information and then spreading them. Sophisticated bots publishing news articles, then creating forum accounts to verify and push those articles. Creating Discord accounts, Twitter accounts, accounts on every single social media platform, and pushing that information. We're talking hundreds of millions of pieces of biased information outright washing away all attempts by humans to communicate and interact.

The B-movie Eye Borg had a very novel take on the whole thing.
 
No, but it takes time and skill to become a good artist.

There are a couple of use cases that are genuinely helpful.
1. Comic artists who are good at drawing people, but are slow/bad at drawing background images or objects.
2. For fun personal use. Like when you want to create a rough sketch of your own making (for roleplay, D&D, screenplay, etc) and you have a general idea, but lack the talent to draw it.

Other than those two? I haven't seen or heard a good argument.
I think it quickly falls apart when you suddenly have an influx of people claiming to be "prompt artists" who think they did all the work. It's kind of like claiming you traveled 100 miles... by train, when all you did was buy a ticket to the correct destination.
He was being ironic and sarcastic; no need to replicate it.
 
I don't really use AI tools either, but I do disagree somewhat with your point. I feel like it is better, especially with AI (in general), to overregulate rather than regulate too little. Ever seen Terminator? That's the eventual future I think we might be headed for if there is little to no regulation or "safetyism", as you put it. Just my opinion.
Do you know that "Terminator" is a movie?
Current "AI" is not intelligent; it is simply a tool that statistically produces an output.
That said, as with any tool, people are responsible for the use of that output.
 
Do you know that "Terminator" is a movie?
Current "AI" is not intelligent; it is simply a tool that statistically produces an output.
That said, as with any tool, people are responsible for the use of that output.
I understand that, but I was just saying that it could happen at some point in the distant future. Anyway, I digress.
 
Who said anything about code...
You obviously didn't watch the video.
It was in your comment
One coder can get generative AI to do the work of 10~100+

It says one coder can do the work of 10 to 100+. I was refuting that such a thing was possible for one coder to do. Even if you take code out of the equation and focus only on asset generation, the numbers are too exaggerated, IMO. One coder might be able to replace one or two art designers, but they are not going to replace a usability expert, QA, etc. with generative AI.
 
This situation seems simple to me, at least. The goal of image-generating AI is to create images most accurate to the prompt using what data it has. If you ask it to draw a person, it should draw a basic person based on the images it was trained on. If you ask it to draw a Jewish person, it's going to add things that it thinks will make the output look like what it is told is Jewish. Just correct the AI with images that accurately depict how real Jewish people look, and only generate the stereotype if the prompt asks for it. TL;DR: just use accurate training images.
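The curation step being suggested can be pictured as a simple filter over the training set: drop examples whose captions weren't verified as accurate before training. The records and field names below are invented for illustration:

```python
# Toy dataset-curation sketch: keep only training examples whose
# captions were human-verified, so a neutral prompt ("a person") isn't
# trained against stereotyped or mislabeled images.
dataset = [
    {"caption": "a person reading", "verified": True},
    {"caption": "stereotyped depiction", "verified": False},
    {"caption": "a chef at work", "verified": True},
]

curated = [example for example in dataset if example["verified"]]
```

At the scale of billions of scraped images, this verification step is the expensive part, which is why it's usually approximated with automated filters rather than done by hand.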
 
I have never had the ability to create art, but now I do.
You are not creating art; you are just asking a machine to average an art database of works that others actually created through thousands of hours of training and study. You are not the owner or creator of anything.
 
It was in your comment


It says one coder can do the work of 10 to 100+. I was refuting that such a thing was possible for one coder to do. Even if you take code out of the equation and focus only on asset generation, the numbers are too exaggerated, IMO. One coder might be able to replace one or two art designers, but they are not going to replace a usability expert, QA, etc. with generative AI.

No, I said one coder can do the work of 10~100 content creators; you just assumed it was coding. Using automation, we can program AI bots to generate and publish endless seas of content.

Think Bigger.
 
I am questioning why we need to have AI generate images in the first place.
Are we that far gone that we cannot produce our own images anymore?
Industry has been doing similar things for years. Not in terms of AI, but manipulation. In car commercials, the majority of the time, real cars aren't used.

Found the exact example I saw years ago
View: https://youtu.be/Uf2V_BCRGFk?t=33

Nobody except true photographers wants to put in the work. We are treading on dangerous ground. I'm not against AI or image generation, but it needs to be regulated in one way or another.

How that's done isn't my business or job; I'm not being paid to brainstorm it. If OpenAI or Microsoft want to hire me, I'm full of ideas.

In the meantime, their product, their issue.
 
You are not creating art, you are just asking a machine to average an art database with works that others did actually create through thousands of hours of training and study. You are not owner or a creator of anything.
All of art is derivative of other artists and their techniques, so tell me, what is not original about the creation of a piece of AI art vs. something an artist makes himself? Both processes use derivative methods to produce a product.
 
All of art is derivative of other artists and their techniques, so tell me what is it that is not original about the creation of a piece of AI art vs something that an artist makes himself?
Inspiration and direct derivation are different things under the law in every part of the world; in the latter case, you are not the creator or owner of the work.

Also, technically the output of the AI is owned by the creator of the AI, since so far no ToS transfers those rights to the program's user (aside from Adobe and a few others), if the first point does not apply.

In either case, you are not the creator or owner of the output. If you were, you could just as well claim ownership of your Google search results, which is far from true.
 