For me, the question of creating original content comes down to this: what is copying and pasting, and what is actually new?
If I am an artist (I'm not, to be clear; this is just for the sake of argument) and I draw a picture of a face using pencils, with a person sitting for the portrait, I did not create the face, nor do I own it, yet I might own the drawing if I did not make it under contract.
How about if I use a camera instead of pencils and paper? Again, I do not own the face, yet I am creating an original work based on another person; it could just as easily be a landscape, a car, a bird, etc.
What if I use watercolors and create an alien planet? Did I do so from my own imagination, or was I inspired by a book? And if so, did I create the painting for a book cover?
All of the above is nice and clear cut, or about as clear cut as can be expected. But what if I am a digital media artist who takes the photo of the person and uses the tools in a paint or photo-editing program to convert it to a two-color duotone print, or to give it a metallic look it lacked before, just by applying a filter or effect?
Does the AI know what a face looks like and what makes it a face, and can it create a new face based on its own likes and dislikes? What would its ideal face be? A police sketch artist can draw a likeness of a suspect from descriptions alone; can an AI do so without copying and pasting one together?
We have had similar things for years now: neural nets and rule-based systems. A good example of a rule-based system is a financial-advisor tool. Years (20+?) ago I heard about a program built to assist financial advisors, and the tests proved interesting. In one of them, the advisor asked how the program arrived at its suggestion. The comparison exposed bias on both sides: the human advisor's ("because we have always done it that way") and the program's (its answer was simply the first of a couple of different options). Neither the human nor the program was wrong; they just arrived at equally valid answers based on their own biases.
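To make the "rule-based system" idea concrete, here is a minimal sketch of an explainable advisor tool. The rules, thresholds, and the 100-minus-age heuristic are all hypothetical, chosen purely for illustration; the point is that the program can report exactly which rules fired, which is what lets a human ask "how did you arrive at that?"

```python
# A toy rule-based asset-allocation advisor. Every rule that fires is
# recorded, so the final suggestion can be explained step by step.
# All rules and numbers here are invented for illustration only.

def advise(age, risk_tolerance, years_to_retirement):
    """Return an allocation suggestion plus the rules that produced it."""
    fired = []  # audit trail: which rules triggered, in order
    stock_pct = 100 - age  # classic rule of thumb: stocks = 100 minus age
    fired.append(f"baseline: 100 - age -> {stock_pct}% stocks")

    if risk_tolerance == "low":
        stock_pct -= 20
        fired.append("low risk tolerance: shift 20% out of stocks")
    elif risk_tolerance == "high":
        stock_pct += 10
        fired.append("high risk tolerance: shift 10% into stocks")

    if years_to_retirement < 5:
        stock_pct = min(stock_pct, 30)
        fired.append("near retirement: cap stocks at 30%")

    stock_pct = max(0, min(100, stock_pct))
    return {"stocks": stock_pct, "bonds": 100 - stock_pct, "rules": fired}

suggestion = advise(age=60, risk_tolerance="low", years_to_retirement=4)
print(suggestion["stocks"], suggestion["bonds"])  # → 20 80
for rule in suggestion["rules"]:
    print("-", rule)
```

Asked to justify itself, the program just replays its audit trail; the "bias" lives in whoever wrote the rules and the order they are applied in, exactly as in the anecdote above.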
Not really. There have been plenty of recent lawsuits where a songwriter or band you've never heard of sued a famous band or musician because some part of one of their hit songs sounds like an earlier work. Two that come to mind are Men at Work's "Down Under" and Ed Sheeran's "Shape of You". These sorts of copyright-infringement lawsuits are becoming their own industry. I wouldn't be surprised if they're even using deep-learning models to scan massive music archives for similarities.
So I'd expect that if a painting or CG work were to sell for millions, someone would probably come out of the woodwork and claim it was a ripoff of something they posted on DeviantArt. Maybe it's even happened with movie posters?
Yes. Generative AI doesn't store actual images, in whole or in part. Not if it's trained properly, at least. It learns rules and patterns, and those are what it uses to generate new content.
Training is a key part of the question, though. If you trained it only on pictures of the Starship Enterprise and then asked it to produce an image of a spaceship, you could almost bet it would draw something with a striking resemblance to the Enterprise. If you trained it on a wide diversity of sci-fi art, the spaceships it generated would still be largely derivative, but no less so than what many human artists tend to produce. You'd also probably see some interesting "original" ideas: variations, combinations, or extrapolations of the rules and patterns it learned that a human might not think to produce.
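The training-diversity point can be shown with a toy character-level Markov model. It is vastly simpler than a modern generative network (this is an analogy, not a claim about how image generators actually work), but it exhibits the same behavior: it stores no training text verbatim, only "after this context, these characters may follow", yet trained on a single source it can do little but regurgitate that source, while trained on a mixed corpus it recombines fragments into strings that appear in none of its sources.

```python
import random
from collections import defaultdict

# Toy character-level Markov model, an (over)simplified stand-in for a
# generative model. It keeps only context -> next-character statistics,
# never whole training examples.

def train(corpus, order=3):
    table = defaultdict(list)
    for i in range(len(corpus) - order):
        table[corpus[i:i + order]].append(corpus[i + order])
    return table

def generate(table, seed, length=40, rng=None):
    rng = rng or random.Random(0)  # fixed seed for repeatable output
    out = seed
    for _ in range(length):
        choices = table.get(out[-3:])
        if not choices:
            break  # context never seen in training: nothing to continue with
        out += rng.choice(choices)
    return out

# Trained on a single "work", the model can only echo pieces of it.
narrow = train("the starship enterprise flew past the starbase")
print(generate(narrow, "the"))

# Trained on a mixed corpus, it splices patterns from both sources,
# sometimes producing phrases that occur in neither.
mixed = train("the starship enterprise flew past the starbase "
              "the red rocket raced past the red planet")
print(generate(mixed, "the"))
```

By construction, every four-character window of the narrow model's output already exists in its one training text, which is the Markov-chain version of "everything it draws looks like the Enterprise."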
Whatever the case, if an AI-generated image looks similar enough to one drawn by a human (and probability suggests it'll happen, even if the human's wasn't in its training set), litigation will likely follow.
On the flip side, we know people are using AI-generated content for inspiration and more, and I'm sure it often goes uncredited. Not to side with the AI, but just to point out that the situation is very asymmetrical.