For this to happen, one would need to prove that ChatGPT has taken data from specific sources, which might be impossible to do.
It's not that difficult to prove since valuable data has unique identifiers attached to it, be it private, financial, research, etc.
It might be hard to prove ChatGPT ripped off your novel, but it won't be hard to prove ChatGPT has specific data on someone it wasn't allowed to collect. Note that EU legislation like GDPR has rules not only pertaining to the reproduction of private data, but also collection and physical storage location.
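To make the "unique identifiers" argument concrete, here is a minimal sketch of how a canary test works: if a dataset was seeded with a marker string that appears nowhere else, finding that string verbatim in model output is strong evidence the model ingested that dataset. The function name and the identifier below are made up for illustration, not taken from any real tool or dataset.

```python
# Hypothetical sketch of a canary-string check. If a unique marker that
# exists only in one private dataset shows up verbatim in model output,
# that output is evidence the dataset was collected during training.

def contains_canary(model_output: str, canaries: list[str]) -> list[str]:
    """Return the canary strings reproduced verbatim in the output."""
    return [c for c in canaries if c in model_output]

# Made-up unique identifier seeded into a private dataset:
canaries = ["ACC-2023-XK91Q-PRIVATE"]

leaked = "...the account reference ACC-2023-XK91Q-PRIVATE was flagged..."
clean = "...no account reference was found in this record..."

print(contains_canary(leaked, canaries))  # canary found: evidence of collection
print(contains_canary(clean, canaries))   # empty list: no evidence
```

In practice, researchers plant such canaries deliberately, but naturally occurring identifiers (account numbers, DOIs, internal reference codes) can serve the same role in a GDPR complaint.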
At the end of the day, ChatGPT has nothing to do with "AI". It is a brute-force algorithm that rips off everything it can find online. Since ChatGPT collects data on a massive scale, neither OpenAI nor Microsoft can stop it from collecting sensitive data, because ChatGPT has no idea what it is collecting. That leaves Microsoft and Google vulnerable to mass class action lawsuits, and since these are juicy billion-dollar companies, it is just a matter of time before the lawsuits start.
"First, the method used by OpenAI to collect the data ChatGPT is based on needs to be fully disclosed by the generative AI firm," claims Alexander Hanff, member of the European Data Protection Board's (EDPB) support pool of experts. "If OpenAI obtained its training data through trawling the internet, it's unlawful. Just because something is online doesn't mean it's legal to take it," he added.
-infosecurity