News Microsoft engineer begs FTC to stop Copilot's offensive image generator – Our tests confirm it's a serious problem

Inspiration and direct derivation are different things under the law in any part of the world; for the latter, you are not the creator or owner of the work.
If an artist uses AI to create a piece of work and then modifies it, they are the creator of the work. And who is to say that a directly AI-generated piece of art based on a unique contextual input is not an inspirational derivative?

Also, if the first point does not apply, the output of the AI technically belongs to the creator of the AI, since so far no ToS transfers those rights to the user of the program, aside from Adobe's and a few others.
Where do you get this information?

In either case, you are not the creator or owner of the output. If you were, you could just as well claim ownership of your Google search results, which is far from true.
Says who? And that is a false analogy: a Google search and an AI creation are not similar enough in kind to compare. They are completely different things.
 
Inspirational derivative does not exist. Any work close enough to the original, or using the original as a base, is considered a derivative of the original work, and thus you are not the owner or creator of it. Inspirations are just that: not a direct attempt to derive from the original, nor a use of the original work as a base.

An artist can use AI, and artists in fact have been doing so for a long time. But a small modification to the output does not count as a creation. A ToS grants you a license: you are not the owner of the tool, but you can use it to aid you (not to do all your work for you) in creating something of your own. This usually applies to pencils, stamps, and pattern creation or resolution, but all of those use base material owned by Adobe/etc.

Also, AI search and AI art output follow the same principle: output a result based on patterns read from a database. All generative AI adds is mixed-in noise, instead of giving you a direct result from the database.

AI has plenty of potential and beneficial uses, and I am in favor of AI, but do not mix up the concepts of ownership and creation.
 
Inspirational derivative does not exist. Any work close enough to the original, or using the original as a base, is considered a derivative of the original work, and thus you are not the owner or creator of it. Inspirations are just that: not a direct attempt to derive from the original, nor a use of the original work as a base.

An artist can use AI, and artists in fact have been doing so for a long time. But a small modification to the output does not count as a creation. A ToS grants you a license: you are not the owner of the tool, but you can use it to aid you (not to do all your work for you) in creating something of your own. This usually applies to pencils, stamps, and pattern creation or resolution, but all of those use base material owned by Adobe/etc.

Also, AI search and AI art output follow the same principle: output a result based on patterns read from a database. All generative AI adds is mixed-in noise, instead of giving you a direct result from the database.

AI has plenty of potential and beneficial uses, and I am in favor of AI, but do not mix up the concepts of ownership and creation.
We shall have to agree to disagree.
 
If an artist uses AI to create a piece of work and then modifies it, they are the creator of the work. And who is to say that a directly AI-generated piece of art based on a unique contextual input is not an inspirational derivative?

Except no, or more accurately, the AI didn't generate a damn thing; it just copy-pasted what it had already collected off the internet before everyone realized it was doing so. You are making the mistake of assuming anything that comes out of the AI is new art. It's no more new art than copying a photo from a Google search is new art, or than adding a new chapter to the end of Othello makes it new art.

Presenting work from another artist as your own is known as plagiarism, as well as intellectual property theft.
The legal concept you are attempting to refer to is known as substantial transformation, a doctrine under which many different ingredients with different origins are combined to create a new product.

https://www.usitc.gov/elearning/hts/media/2017/SubstantialTransformation.pdf

Now, it's normally used in commerce with regard to global trade, but it can be applied if you are attempting to argue that using the AI mixes multiple ingredients to produce a new thing.

If we're going with fair use, that's not likely going to fly due to the Supreme Court ruling made last year.

https://www.mayerbrown.com/en/insig...ce-on-transformative-fair-use-each-use-counts

Ultimately it all boils down to who owns the copyright and distribution rights of every image or work of art within the training data. Waving a magic wand named "training data" or "neural network" around doesn't remove the original art's ownership. Now, if you happened to own all the original material used for training purposes, then of course you would also own the output of that process.
 
This is clearly not the problem or responsibility of the AI model but of the social media and adult video sites that fail to detect these images.
I never said it was the responsibility of the model -- it is the problem with (and the responsibility of) the companies that trained their models on a dataset they knew wasn't curated properly.

Whether it is this particular thing with SD or copyrighted material with LLMs doesn't really matter -- they have all shown an absolute lack of ethics in how they performed the initial model training in their rush to be first to market.

They should all get raked over the coals for that. Sadly, by the time the laws catch up they will have earned enough money that any penalty will be an equivalent of a slap on the wrist as usual.
Also, the fix is quite easy for newer models: Check every image against NCMEC database.
The fix is easy -- just start training your model from scratch after you remove all offending images from the training dataset. You think any of them wants to spend enormous amounts of money on that? No, they will try to patch it by filtering prompts, and we have seen how well that works and how easy it is to circumvent.
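For what it's worth, the dataset-screening step being argued about here can be sketched in a few lines. This is only an illustration, not how NCMEC matching works in production: real systems use perceptual hashes such as PhotoDNA that survive re-encoding and cropping, whereas this sketch does exact-digest matching only, and `known_bad_hashes` is a hypothetical stand-in for a vetted hash list.

```python
import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Compute the SHA-256 digest of a file's bytes, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def filter_dataset(image_paths, known_bad_hashes):
    """Return only the images whose digest is NOT on the block list."""
    return [p for p in image_paths if file_sha256(p) not in known_bad_hashes]
```

The larger point stands either way: this screening has to happen before training, because once the offending images are baked into the weights, the only real remedy is retraining from scratch.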
Use a current image generator and tell me the output is awful more often than it is good. Even if you find it lacking in some areas (which it is), it will only improve in the future. It will be better than 90% of artist generated content, and most importantly, cheaper and faster (= better).
But that's the thing -- cheaper and faster isn't always == better.

You can have cheap parmesan cheese produced quickly using modern industrial processes, OR you can have properly aged and therefore expensive Parmigiano Reggiano. Guess which one is better.
I fear that the AI safety grifters have already taken over Silicon Valley and are doing everything they can to fit the output of these models to their authoritarian progressive ideals.
The only grifters in Silicon Valley are the tech bro grifters.
 
I hope you're intentionally being controversial, but I'll take the bait anyway. "The people who write these articles" being, in this case, editor in chief Avram Piltch. He has a fine sense of humor. I know him quite well, though we've never discussed South Park (my wife's a big fan). And I'm content director for the site.

We're both of Jewish descent and take quite a bit of issue with the horrific stereotypes that this technology perpetuates. If we saw these depictions in political cartoons, we'd lambast the artist. If it were hanging in a gallery, there would be a protest outside. But it's okay to have AI spew it out?
I wasn't trying to be controversial, but I did sum up my general feelings about the issue in just a few sentences, as this is primarily a tech website and I didn't really want to get too far into politics. But I will respectfully explain a few of the issues I have:

- Sense of humor is subjective. Certain groups of people think Jimmy Kimmel, Stephen Colbert, and Jimmy Fallon are funny and consider themselves to have a great sense of humor, while that same sensibility is responsible for many top comedians no longer even performing on college campuses. Avram could have a wonderful sense of humor; I have no way of knowing one way or another. But going off of this article, and knowing the type of content in South Park, including Jewish stereotypes, I made an educated guess.

- Since your wife is a big fan, ask her about the Jewish stereotypes in South Park and see how she feels about them, and whether she feels these images are more or less stereotypical and "offensive" than that.

- The part about you being Jewish and taking quite a bit of issue with "horrific stereotypes," with all due respect, I care the absolute least about. You're entitled to feel how you feel, and there isn't a single thing in this world that isn't incredibly offensive to someone. The problem is that if we make a creative tool and then limit its creativity based on the sensitivities of everybody, how creative will that tool be?

- Here's a techno-ethical question. We agree that AI is going to take over the world. Much of people's workflow will involve it. It will be no different than Photoshop. A tool used to create. With the exception that it's easier to use. Should a tool used for creation be limited in what it can create? Should Photoshop be able to see what I'm making and say no I won't allow it? Should the responsibility for what is or isn't made be on the tool, or the person using it? AI generation is more complex than Photoshop but both can be used to create material that someone might find offensive. Do we truly have freedom of thought and expression if we can't express ourselves? And putting aside the government vs private corporation legalities around freedom of speech, would the censorship you seek be beneficial to society? Or do we need to be allowed to push boundaries?

- Regarding your comment that this content is so offensive that there would be protests outside a gallery if it were hanging there as works... I again have to go back to my statement about whether you'd have a heart attack watching South Park. Or this Family Guy clip I'm going to share, as it was recently shared by Michael Knowles:

View: https://www.youtube.com/watch?v=O4UjxGmh6DQ


As you can see, this is clearly a stereotype. And you may view it as offensive. Or you may not. But someone else might. Should this all be disallowed?

What about a comedian like Jeff Dunham, who uses a Muslim-garbed terrorist puppet named Achmed? I'm sure plenty of Muslims would be offended by the perpetuation of terrorist imagery being associated with their culture. And I'm sure many will also love it. The question is: do we hold the view that anything offensive should be banned or not allowed? Or have we gone too far with the victimhood/offended culture?

My personal view is that less censorship is better. We have to risk being offended. And if you disagree with me, then I am offended. And now what should the consequence be for you offending me?
 
As you can see, this is clearly a stereotype. And you may view it as offensive. Or you may not. But someone else might. Should this all be disallowed?

I don't think talking about this in terms of either freedom of speech or defining what's offensive to a reasonable adult is really the key issue. Here's what I see as the key issues:

* Microsoft is a multi-billion dollar company pushing its AI products as appropriate tools for everyone, including children.

* When you are a company, the products you put out speak for you, this one quite literally. So a product that outputs racist, sexist stuff reflects badly on them, even if the user asked for it.

* Otherwise neutral prompts such as "Jewish boss" give you biased results you didn't even ask for.
 
Except no, or more accurately, the AI didn't generate a damn thing; it just copy-pasted what it had already collected off the internet before everyone realized it was doing so. You are making the mistake of assuming anything that comes out of the AI is new art. It's no more new art than copying a photo from a Google search is new art, or than adding a new chapter to the end of Othello makes it new art.

Presenting work from another artist as your own is known as plagiarism, as well as intellectual property theft.
The legal concept you are attempting to refer to is known as substantial transformation, a doctrine under which many different ingredients with different origins are combined to create a new product.

https://www.usitc.gov/elearning/hts/media/2017/SubstantialTransformation.pdf

Now, it's normally used in commerce with regard to global trade, but it can be applied if you are attempting to argue that using the AI mixes multiple ingredients to produce a new thing.

If we're going with fair use, that's not likely going to fly due to the Supreme Court ruling made last year.

https://www.mayerbrown.com/en/insig...ce-on-transformative-fair-use-each-use-counts

Ultimately it all boils down to who owns the copyright and distribution rights of every image or work of art within the training data. Waving a magic wand named "training data" or "neural network" around doesn't remove the original art's ownership. Now, if you happened to own all the original material used for training purposes, then of course you would also own the output of that process.
My arguments above are based in the reality that art is a derivative industry. Whether you are a classical artist, a graphics artist, or an AI art generator, all work is derivative of others' techniques and pieces. It is very rare that conceptually unique art is created, especially through never-before-used techniques. AI art generation is one such recent example, whether you like it or not.

If the AI makes transformative changes to the original art, then how can you possibly say it's the same just because it was based upon the original? You make it seem as if you could reverse-image-search the product of the AI and get back to some original, which you cannot. Consider the following:

"Additionally, “transformative” uses are more likely to be considered fair. Transformative uses are those that add something new, with a further purpose or different character, and do not substitute for the original use of the work."

It is also impossible to consider the transformative output of AI to be plagiarism, because you are not claiming someone else's work as your own when you have made sufficient modification for it to become uniquely yours.

In your linked Supreme Court ruling, at the very bottom, it lays out the nuances of fair use as follows:

"This decision has two important takeaways.

1. Content creators that base a secondary work on another’s copyrighted image must consider each use. The Court explained that fair use is evaluated in context of the specific use. A content creator may create an image that is “fair use” when used one way, while infringing when used in another.

2. Creating a secondary work that adds new expression, meaning, or message is unlikely to be a fair use, if it is used for a similar commercial purpose. If an original work and secondary use share the same or highly similar purposes, and the secondary use is commercial, the first factor is likely to weigh against fair use, absent some other justification for copying."

With nebulous and nuanced language like the above, it is impossible to present the claims you are asserting as fact, mainly that I am wrong in my assertions of fair use. We shall see in upcoming court battles where the law regarding AI-generated art settles. That is to say, courts consider each and every specific piece of art when talking about fair use or plagiarism. Whether or not a specific piece of art is illicit is to be decided on a piece-by-piece basis, not via some wide-sweeping "all AI art is theft" narrative that is being pushed. That narrative is incorrect and ignorant.
 
All industries and all human activity are derivative if you generalize the word "derivative" enough, but that is not how the concept is used legally, socially, or morally.

Transformative use does not apply in this case, since it is not about base materials being transformed into a new thing; it is about the same kind of thing being directly, literally, partially copied.

For example, taking a Nike shoe and a Hush Puppies shoe and combining them into a Hush-Nike shoe is not transformative use; it is a derivative work, and the creators and owners of it are Nike and Hush Puppies, even if "you made it," because you started from products at the same level.

Instead, you can grab the raw materials and work something up from scratch with them; even if the result is identical to the above, the way a product is made is VERY important legally. There, you are transforming base materials into an end product.

An example of transformative use in art is turning clay into a sculpture, whereas turning a sculpture into another sculpture is not transformative use; it is derivative.
 
My arguments above are based in the reality that art is a derivative industry. Whether you are a classical artist, a graphics artist, or an AI art generator, all work is derivative of others' techniques and pieces. It is very rare that conceptually unique art is created, especially through never-before-used techniques. AI art generation is one such recent example, whether you like it or not.

Your arguments and beliefs are irrelevant. We're discussing US law and precedent. I provided you a framework to start understanding how this is handled legally, and you blew it off with religious fervor.

Transformative use does not apply in this case, since it is not about base materials being transformed into a new thing; it is about the same kind of thing being directly, literally, partially copied.

I only brought that up because it's one of the arguments the Cult of AI is using to try to excuse their bots blatantly stealing photos and plagiarizing publications and articles.
 
Regulating Pandora's box AFTER it's been opened seems like the most Sisyphean maneuver possible. I mean, honestly, how's stopping piracy going? The DMCA was created in '98, and they barely even started "trying" to enforce it ten years later, around '08-'09. It's now 2024: pirate stream/DL sites are more common than ever, P2P trackers are still going strong, and updated, good, free P2P clients get spit out regularly.

Sure, they can pretend to legislate some imaginary "protect the property and SAVE the children" nonsense, but really... good luck trying to enforce it. Sure, the net isn't the "digital Wild West" it was back in the '90s and '00s, but who are we really kidding about anyone controlling AI-generated media like this? Corporate bloatware/spyware versions can be auto-moderated into the most neutered existence, but anyone running this on a personal home device, completely unfiltered, can input whatever they personally want and spit out the most unholy pure chaos desired. We'd have to start untangling everyone's personal kinks and yearnings. Even a perfect likeness can be argued to be "different" by adding a little mole or changing eye color. Anyone's weirdest and most "wicked" fantasies could be brought to life, or at least into good visual media, and then spit out and glued to the web as another little tidbit of the cumulative entropy that is humanity online. Then there are all the lines of "personal privacy" and "free speech" that need to be drawn and/or crossed.

Personally, I have three doctorates, including a surgical Ph.D., and I'm an atheist, yet sadly I can argue both sides of trying to really control stuff like this versus just letting it run wild. In this instance, I'm not really betting on either side; I'm just of the George Carlin persuasion, and I'd prefer to observe and comment on the constant, evolving chaos that is our civilization. -_-
 
Your arguments and beliefs are irrelevant. We're discussing US law and precedent. I provided you a framework to start understanding how this is handled legally, and you blew it off with religious fervor.
I blew nothing off. That is one of the most ridiculous claims I have ever heard you make. Your holier-than-thou attitude is taking attention away from the facts of the argument. I discredited your claims with legal definitions and the nebulous, case-by-case nature of copyright claims. Your opinion is just as good as any other random dude's on the internet, as is mine.
 
Whether or not a specific piece of art is illicit is to be decided on a piece-by-piece basis, not via some wide-sweeping "all AI art is theft" narrative that is being pushed. That narrative is incorrect and ignorant.
I disagree.

All those models have been trained on copyrighted data without acquiring permission, paying royalties, or even linking back to the source. Everything has been slurped up without consent; they didn't even bother to filter out questionable material.

Almost every AI model is already a product of a massive theft. Who cares whether individual outputs themselves amount to theft or not?

If you have stolen a bag of $20 bills does it really matter what you bought with each of those bills? Does buying flowers or feeding orphans breakfast reduce your original crime?

At this point, all AI output should be treated as the fruit of the poisonous tree.

And no, you don't get to compare a human learning from copyrighted works with an AI model doing the same, because no human can do it at that scale, nor can any human reproduce every known artistic or writing style instantly.
 
All those models have been trained on copyrighted data without acquiring permission, paying royalties, or even linking back to the source. Everything has been slurped up without consent; they didn't even bother to filter out questionable material.

That is the thing the Cult of AI refuses to even contemplate. The various AI companies charged ahead with an "it's better to ask forgiveness than permission" attitude, which is going to cause the same set of problems that happened during the early internet days. They plagiarized, if not outright stole, the intellectual property and likeness of everything and everyone on the internet, whether it was private or not. GitHub created Copilot by stealing code from every repository there, even if that repository was private. Every tech-company AI bro was mining their users' information in a rush to reach market and cash in on the Next Big Thing, without asking for consent or giving people a chance to opt out and remove their information and likeness.
 
I disagree.

All those models have been trained on copyrighted data without acquiring permission, paying royalties, or even linking back to the source. Everything has been slurped up without consent; they didn't even bother to filter out questionable material.

Almost every AI model is already a product of a massive theft. Who cares whether individual outputs themselves amount to theft or not?

If you have stolen a bag of $20 bills does it really matter what you bought with each of those bills? Does buying flowers or feeding orphans breakfast reduce your original crime?

At this point, all AI output should be treated as the fruit of the poisonous tree.

And no, you don't get to compare a human learning from copyrighted works with an AI model doing the same, because no human can do it at that scale, nor can any human reproduce every known artistic or writing style instantly.
Like I said above, we shall see in the coming years of litigation whether or not your premise is correct.
That is the thing the Cult of AI refuses to even contemplate. The various AI companies charged ahead with an "it's better to ask forgiveness than permission" attitude, which is going to cause the same set of problems that happened during the early internet days. They plagiarized, if not outright stole, the intellectual property and likeness of everything and everyone on the internet, whether it was private or not. GitHub created Copilot by stealing code from every repository there, even if that repository was private. Every tech-company AI bro was mining their users' information in a rush to reach market and cash in on the Next Big Thing, without asking for consent or giving people a chance to opt out and remove their information and likeness.
I am part of no cult. I have never used an LLM or art AI in my life. Ditto what I said above.
Refer to my first-line reply. If it really was as illicit as the three of you have exclaimed, there will be litigation to set new precedent regarding the legalities of AI and its use of training materials. I am wondering what your take is on the subject, @bit_user.
 
Like I said above, we shall see in the coming years of litigation whether or not your premise is correct.
Those are easily verifiable facts, not my "premise".
If it really was as illicit as the three of you have exclaimed, there will be litigation to set new precedent regarding the legalities of AI and its use of training materials.
The problem with litigation is that the law is way behind on technology abuse, and megacorps are doing their best to keep it that way by lobbying, not to mention by recruiting endless armies of useful idiots through their press releases to defend them.

There's now a vocal bunch of ordinary people who have been led to believe that they can extract some personal benefit from using those AI models, and they have decided to bury their heads in the sand and ignore the legality and morality issues around the technology's creation and use because of some unspecified payout they might receive in the future.

When you think about it that way, the whole AI grift ("free AI for everyone") really isn't that much different from the crypto grift ("free money for everyone"), and those who have been paying attention have already seen who made a profit from that and who is now left holding the bag.

The worst part?

The regulation still hasn't caught up with crypto, while the damage from it is still ongoing and seemingly unstoppable. By the time the law catches up with AI, the damage will be irreversible.
 
Those are easily verifiable facts, not my "premise".
Do I have to quote the definitions of "fact" and "opinion"? The following statements in your posts are opinions:
All those models have been trained on copyrighted data without acquiring permission, paying royalties, or even linking back to the source. Everything has been slurped up without consent; they didn't even bother to filter out questionable material.

Almost every AI model is already a product of a massive theft. Who cares whether individual outputs themselves amount to theft or not?

If you have stolen a bag of $20 bills, does it really matter what you bought with each of those bills? Does buying flowers or feeding orphans breakfast reduce your original crime?

At this point, all AI output should be treated as the fruit of the poisonous tree.

And no, you don't get to compare a human learning from copyrighted works with an AI model doing the same, because no human can do it at that scale, nor can any human reproduce every known artistic or writing style instantly.

The problem with litigation is that the law is way behind on technology abuse, and megacorps are doing their best to keep it that way by lobbying, not to mention by recruiting endless armies of useful idiots through their press releases to defend them.

There's now a vocal bunch of ordinary people who have been led to believe that they can extract some personal benefit from using those AI models, and they have decided to bury their heads in the sand and ignore the legality and morality issues around the technology's creation and use because of some unspecified payout they might receive in the future.

When you think about it that way, the whole AI grift ("free AI for everyone") really isn't that much different from the crypto grift ("free money for everyone"), and those who have been paying attention have already seen who made a profit from that and who is now left holding the bag.

The worst part?

The regulation still hasn't caught up with crypto, while the damage from it is still ongoing and seemingly unstoppable. By the time the law catches up with AI, the damage will be irreversible.
 
You are not creating art, you are just asking a machine to average an art database with works that others did actually create through thousands of hours of training and study. You are not owner or a creator of anything.
I hear you, but technically you're wrong. I'm using an application to arrange pixels in a way that matches a concept in my head. In each case, those pixels are arranged in a way that has never exactly existed before. It may resemble plenty of things that came before it, but it is technically unique.
The feelings that I and others feel when viewing this particular arrangement of pixels are real.
 
Do I have to quote the definition of fact and opinion?
Given how you are unable to separate one from the other I doubt it would help you.
The following statements in your posts are opinions:
For the reading-impaired, the facts are:

1. The uncurated LAION-5B image dataset, which included CSAM photos and even private medical record photos (not to mention copyrighted art), was used for training Stable Diffusion, Midjourney, and who knows how many other models.

2. Copyrighted works (in part or in whole) were in the dataset used for LLM training as well.

When confronted about CSAM and privacy invading medical record photos the response was basically crickets.

When confronted about using copyrighted works for training, the response was basically "But, but... we can't build our product without copyrighted works!" (and yet they refuse to pay compensation for using them).

Now go ahead and tell me those aren't facts.
 
Given how you are unable to separate one from the other I doubt it would help you.

For the reading impaired, the facts are:

1. The uncurated LAION-5B image dataset, which included CSAM photos and even private medical record photos (not to mention copyrighted art), was used for training Stable Diffusion, Midjourney, and who knows how many other models.

2. Copyrighted works (in part or in whole) were in the dataset used for LLM training as well.
Number 1 is a fact that you previously did not post. As for whether the blame can be attributed to them scraping the material, that is highly disputable, considering federal agencies are aware this happened and nobody is being prosecuted, sued, or otherwise. The real people to blame for these instances are the ones who put this material into the public domain online. The scrapers used to create these datasets can only be so good at purging such material.

With regard to number 2, if you put your copyrighted material into the public domain, people are free to add it to whatever private dataset they please. When it comes to using those materials, so long as the output falls under fair use, they can do whatever they want. One of the main use exceptions under fair use is research. Fair-use analysis relies heavily on how the material is used and on the ability to prove damages in court.
When confronted for using copyrighted works for training the response was basically "But, but... we can't build our product without copyrighted works!" (and yet they refuse to pay compensation for using them).
I would love to see the source of the paraphrase you just used, unless that is a strawman you created and are passing off as some kind of 'fact.'
 
I hear you, but technically you're wrong. I'm using an application to arrange pixels in a way that matches a concept in my head. In each case those pixels are arranged in a way that has never exactly existed before. It may resemble plenty of things that came before it, but it is technically unique.
The feelings that I and others feel when viewing this particular arrangement of pixels are real.
So far, no copyright has been granted to anybody who has requested it, on the grounds that creation and ownership must come from a human. People can still own everything they make without AI, yes. For example, you can write a piece of music and make the cover with AI. The cover is not yours, but your musical work still is.

Any modification you make to an AI output is yours (this does not make the whole work yours, just the modified portion), but the output from the AI itself is not yours. Any inspiration you take from AI, and any work you do with it, can also be yours, as long as it is not blatantly similar to somebody else's work while you are aware of that similarity (the concept of awareness is very important for plagiarism).
 
stealing photos and plagiarizing publications / articles.
copyrighted data without acquiring permission
If not caring about someone “misusing” Mickey Mouse (or some other IP) is wrong, I don’t wanna be right. Oh the absolute horror of being able to create a picture of Mickey Mouse smoking a cigarette!
Remember the calls to ban Photoshop in the early 2000s? That’s what you people sound like.

Stop pretending to care about the little guy and being against big corps when you’re so pro-copyright. Copyright protects the wealthy and powerful - and font creators (sad, miserable individuals).

The problem with litigation is that the law is way behind on technology abuse, and megacorps are doing their best to keep it that way by lobbying, not to mention by recruiting endless armies of useful idiots through their press releases to defend them.

There's now a vocal bunch of ordinary people who are being led to believe that they can extract some personal benefit from using those AI models, and they have decided to bury their heads in the sand and ignore the legality and morality issues around the technology's creation and use because of some unspecified payout they might receive in the future.

When you think about it that way, the whole AI grift ("free AI for everyone") really isn't that much different from the crypto grift ("free money for everyone") and those who have been paying attention have already seen who has made profit from that and who is now left holding the bag.

The worst part?

The regulation still hasn't caught up with crypto while the damage from it is still ongoing and seemingly unstoppable. By the time law catches up with AI the damage from it will be irreversible.
Worst take (and comparison) on AI in this thread by far. AI companies are not paying hundreds of millions of dollars for ads to promise you riches. Your analogy is very inaccurate.

I think you’re very confused about AI, (tech) companies and tech in general. Gonna leave this thread now because I know your position now and you know mine and there’s no changing each other’s minds on the fundamentals of the issue at hand.

Just a note from recent history: The people in your camp, who try to destroy a specific technology that doesn’t align with their political preferences, have always lost that fight.


* When you are a company, the products you put out speak for you, this one quite literally. So a product that outputs racist, sexist stuff reflects badly on them, even if the user asked for it.

* Otherwise neutral prompts such as "Jewish boss" give you biased results you didn't even ask for.
We hear so much about the dangers of AI from journalists, because they are incredibly afraid that it will make their jobs redundant (I sure hope so). They try to make a mountain out of a molehill (“Omg, you can create images with AI that are offensive to some NYC millennials, let’s burn the whole thing down!”). Outside of their bubble, hardly anyone thinks the “issues” listed in the article are a major problem and certainly don’t require the complete halt of that technology/software.
 
Copyright protects the wealthy and powerful - and font creators (sad, miserable individuals).
Copyright is a double-edged sword. Without it, no incentive for creation exists, and any progress must be fueled by the state or by philanthropic people with wealth from other sources in order to sustain, prosper, and invest. This is essentially how the world worked before modern patent and copyright laws arose.

But too much of it also creates a different subset of problems, including legal bullying from the powerful.

Because AI actually requires new data from new creators to produce new things, while training on AI-produced data causes the inverse effect of averaging toward the same thing, I think that not regulating it will just produce a short-term gain by using the currently existing data to the fullest, but in turn it will inhibit any future for it.
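As a toy illustration of that averaging effect (purely illustrative numbers, not a model of any real training pipeline): if each "generation" of data is a blend of the previous samples and their mean, the spread of the dataset collapses toward a single value while the mean never moves.

```python
# Toy sketch: "training on AI output" modeled as blending every sample
# with the dataset mean each round; diversity shrinks, nothing new appears.
data = [1.0, 5.0, 9.0]
for _ in range(10):
    mean = sum(data) / len(data)          # mean stays 5.0 every round
    data = [(x + mean) / 2 for x in data]  # pull each sample toward the mean

spread = max(data) - min(data)
# Deviations halve every round: the original spread of 8.0 is now
# 8 / 2**10 = 0.0078125 -- the samples have averaged into near-sameness.
```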

Another issue with not regulating it is that the same wealthy and powerful are the ones with the big databases and big data processing farms, and they are profiting from all the information in them, while the people who submitted that information are not getting paid for it.

A regulation is required where a user can opt their info out, or, if opting in, is given a mechanism to share in the gains. So is enforcement of properly labeling AI images and their metadata. So is an update to prevent plagiarism (by disclosing the AI prompt information when asked). So is preventing the unauthorized use of copyrighted material where fair use does not apply: not that of the Big Corpos, but that of the users, their photos and all.
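The opt-out part of that proposal is mechanically simple. As a minimal sketch (the domain names and the registry itself are hypothetical), a scraper would filter its URL list against a registry of creators who opted out before anything enters a dataset:

```python
from urllib.parse import urlparse

# Hypothetical opt-out registry: domains whose owners refused dataset inclusion.
opt_out = {"alice-art.example"}

# Hypothetical scrape results: (image URL, caption) pairs.
scraped = [
    ("https://alice-art.example/painting1.jpg", "oil painting"),
    ("https://bob-photos.example/street.jpg", "street photo"),
]

# Keep only entries whose host has not opted out.
kept = [(url, caption) for url, caption in scraped
        if urlparse(url).hostname not in opt_out]
```

The hard part, of course, is not the filter but making such a registry authoritative and making its use mandatory.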
 
Number 1 is a fact that you previously did not post.
I did post it earlier in the thread; I just didn't mention the private medical photos because they were less on topic.

Also, you didn't have to wait for me to feed you links to be able to verify that what I am posting is factual information.

Most people are capable of independently verifying what other people say, and the honest ones do so before claiming that the other side presented an opinion instead of a fact.
As for whether the blame can be attributed to them scraping the material, that is highly disputable, considering federal agencies are aware this happened and nobody is being prosecuted or sued.
LAION-5B is a collection of URLs.

Some of those URLs were pointing at bad things. If you so much as visited those URLs and someone tipped those agencies off that you did so, you would have been found in possession, put in jail, and put on the sex offender list, no questions asked.
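To make the "collection of URLs" point concrete: a LAION-style dataset row carries an image URL plus a caption, not the image bytes themselves, so anyone training on it has to fetch every linked image. A hypothetical two-row miniature:

```python
# Hypothetical two-row miniature of a LAION-style dataset:
# each row is (image URL, caption); no image bytes are stored.
rows = [
    ("https://example.com/cat.jpg", "a photo of a cat"),
    ("https://example.com/dog.png", "a dog in the park"),
]

# A training pipeline must download each URL itself, which is exactly
# where visiting and possessing the linked content comes into play.
urls = [url for url, _ in rows]
```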

That those same agencies are quiet about it when corporations not just visit but use the contents to make a profit just goes to show how deep the corruption runs in the system.
The real people to blame for these instances are the ones who put this material into the public domain online. The scrapers used to create these datasets can only be so good at purging such material.
It's always someone else's fault, isn't it? Corporations using this to make an obscene profit can do no wrong.
With regard to number 2, if you put your copyrighted material into the public domain, people are free to add it to whatever private dataset they please.
That's not how copyright works.

If I take a photo of someone and publish it on my website, I haven't put it in the public domain. It still retains my copyright, and if people want to use it they have to ask for permission and pay a licensing fee, if I agree to allow them after they tell me what they intend to use it for.

Something being publicly accessible doesn't change the copyright status of it.
I would love to see the source of the paraphrase you just used, unless that is a strawman you created and are passing off as some kind of 'fact.'
 