News Intel shows Gaudi3 AI accelerator, promising quadruple BF16 performance in 2024


BX4096

I wonder what Gaudi would have said if he had seen his name appropriated for a product made to steal art.
You may wonder that when such a product gets released. Anyone who still believes that generative "AI" models are "stealing" is entirely out of their depth posting on a technical website like this.

Although, the idea of what a 19th-century observer might make of this is rather apropos, since this whole controversy reminds me of the people back then who refused to have their picture taken because they believed the photograph would steal their soul. Two centuries later, and here we go again.
 

leoneo.x64

Oh damn...we totally forgot about the global royalty distribution system that regularly pays everybody whose content was used by AI to train itself!

If you don't like getting called a thief, don't steal. Simple. Make up your own training data and then train your models. AI is not human (I'm surprised I need to spell it out) and can't enjoy human privileges like access to public info. Gen AI represents a flawed foundation, built from the ground up to obscure source data lineage.

If we have the power to create such a powerful thing, we also have the responsibility to teach it some manners. Else, what's the point of raising a cute baby T. rex only to be eaten alive by it later?
 

BX4096

Again, all this says is that you're woefully ignorant of how these models are trained and how they work. The idea of a "global royalty distribution system" for every single source that contributed to the training of a model of ChatGPT's scale is ludicrous beyond belief, especially since the actual output bears only a very faint resemblance to those sources and will only become hazier and hazier as their number grows. What's even more laughable is that even if we went with something like this, the royalties for every author out there would realistically amount to perhaps a cent or two per year, if not less.
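To put rough numbers on it (every figure below is invented purely for illustration; a back-of-envelope sketch, nothing more):

# Hypothetical royalty pool split among everyone whose content
# ended up in a web-scale training set. All numbers made up.
royalty_pool_usd = 10_000_000          # assume a $10M/year royalty pool
contributing_authors = 1_000_000_000   # assume a billion people wrote scraped content

per_author_usd = royalty_pool_usd / contributing_authors
print(f"per author per year: ${per_author_usd:.2f}")  # -> $0.01

Even if you inflate the pool a hundredfold, you're still in pocket-change territory.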

While AI and model training in particular are nuanced and multi-faceted issues with plenty of ethical considerations, comparing the models themselves to "stealing" is just as inane as claiming that taking a photograph of a public place is theft. Actually, not even that is accurate, as it's more akin to claiming that painting a generic picture very vaguely reminiscent of something (or rather, several unrelated somethings at once) is stealing, or asking a random number generator to come up with well-sourced attribution for every random number it generates.

If we're talking about the actual legality of it, it's even more clear-cut. Copyright law explicitly defines infringement as reproducing or redistributing a copyrighted work without permission, which is not at all what happens with AI models. Even the derivative-work clauses don't apply, since a derivative work "must incorporate some or all of a preexisting work" to qualify as one, which is not what happens when something like Stable Diffusion generates images. With "derivative work", we're talking about largely recognizable characters, copyrighted logos, nearly verbatim reproduction, and so on. Merely being inspired by a work, or resembling it very loosely, isn't covered at all – by design, since that is how virtually all human-created art comes about.

So, again, while the subject itself is very complicated, claims like yours are nothing but uneducated nonsense with no basis in reality or copyright law.
 

leoneo.x64

Reputable
Feb 24, 2020
10
9
4,515
Again, all it says is that you're woefully ignorant of how these models are trained and work. The idea of a "global royalty distribution system" for every single source that contributed to training of a model of ChatGTP's scale is ludicrous beyond belief, especially since the actual output only has a very faint resemblance to these sources and will only become infinitely hazier and hazier once their number grows. What's even more laughable is that even if we went with something like this, the royalties for every author out there would realistically amount to perhaps a cent or two per year, if not less.

While AI and model training in particular are nuanced and multi-faceted issues with plenty of ethical considerations, comparing the models themselves to "stealing" is just as inane as claiming that taking a photograph of a public place is theft. Actually, not even that is accurate, as it's more akin to claiming that painting a generic picture very vaguely reminiscent of something (or rather, several unrelated somethings at once) is stealing, or asking a random number generator to come up with well-sourced attribution for every random number it generates.

If we're talking actual legality of it, it's even more clear-cut. The copyright law explicitly sees infringement as a reproducing or redistributing copyrighted work without permission, which is not at all what is happening with AI models. Even the derivative work clauses don't apply, since a derivative work "must incorporate some or all of a preexisting work" in order to be protected, which is not what happens when something like Stable Diffusion generates stuff. With "derivative work", we're talking largely recognizable characters, copyrighted logos, nearly verbatim reproduction, and so on. Being inspired by a work or resembling it very loosely are not protected at all – by design, since that is how virtually all human-created art happens.

So, again, while the subject itself is very complicated, claims like yours are nothing but uneducated nonsense with no basis in reality or copyright law.
:)

While I admit that the "universal royalty program" is a pipe dream, the essence here is that source lineage (in its entirety, not just for citation / fact-checking, etc.) is neither being baked in nor being paid much attention to.
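To be concrete about what "baking it in" could even mean: picture every training sample carrying provenance metadata through the whole pipeline. A hypothetical sketch (not any real framework's API, just the shape of the idea):

# Hypothetical per-sample provenance record; a real pipeline would need
# this to survive shuffling, deduplication, tokenization, etc.
from dataclasses import dataclass
import hashlib

@dataclass
class TrainingSample:
    text: str
    source_url: str   # where the sample was scraped from (placeholder)
    license: str      # license it was published under, if known

    def fingerprint(self) -> str:
        # Stable content hash, so a sample can be traced back later.
        return hashlib.sha256(self.text.encode("utf-8")).hexdigest()

sample = TrainingSample(
    text="Some scraped paragraph...",
    source_url="https://example.com/post/123",  # made-up URL
    license="unknown",
)
print(sample.fingerprint()[:16], sample.source_url)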

Not everyone is a neural-network developer, but bringing up how impressive and sophisticated the tech is when the discussion is about guardrails is called "skirting the issue". So, the elephant in the room is unsupervised learning.

Let's set the monetary challenges of giving credit where it's due w.r.t. derivative outputs (gen AI) aside for a second. Let's talk training. The current notion is that far more important things will be done using AI models in the future. We need to be absolutely certain of what can be attributed to the model's learning and what should be attributed to its inferencing / extrapolating. To be clear, before I am branded a Luddite: I understand that the latter is where the magic happens.
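As a crude illustration of the kind of check I mean, you could flag output that overlaps too heavily with known training text (a toy sketch with a trivial similarity metric; real systems would use n-gram indexes or embeddings):

# Toy regurgitation check: how much of the generated text's vocabulary
# also appears in a given training document?
def overlap_ratio(generated: str, source: str) -> float:
    gen_words = set(generated.lower().split())
    src_words = set(source.lower().split())
    return len(gen_words & src_words) / len(gen_words) if gen_words else 0.0

training_corpus = ["the quick brown fox jumps over the lazy dog"]
output = "the quick brown fox leaps over a sleepy dog"

for doc in training_corpus:
    if overlap_ratio(output, doc) > 0.5:  # arbitrary threshold
        print("possible regurgitation of:", doc)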

I'm all for progress, but there is no excuse for citing "complexity" when it comes to baking supervision into AI models. After all, AI's progress is already consuming about as much compute, globally, as Brainiac would. The value of compliance is relative, and in this case it's paramount.

I think you will also agree that if lineage requirements were enforced, progress would slow down. But that's the whole point: ultimately, these models need to prove their reliability before they can be entrusted with important work. No work is being done in this direction. Models are "corrected" only when they are publicly shamed for erroneous results. That's not the answer.

So yes, if the models are learning from sources and not even citing them, that is stealing. Very sophisticated, innovative, traceless stealing :)

Solved cancer, it has not, but generating product keys, this shiny new Brainiac is.

Lastly, laws have been inadequate for decades-old issues, let alone for this visionary tech. The law will take decades to come to grips with what needs to be controlled here. What you cited are exactly the loopholes that need reform to tackle the consequences of AI. Thank you for that.
 