Um, no. You're not describing how GPT actually solves that problem, but rather how you imagine it's solved.
Then you could perhaps enlighten us all and contribute a more accurate overview of the process instead of just saying I am wrong?
Cool story. Now explain how it composes sonnets, haikus, or limericks about whatever subject you like. To perform that trick, it needs to separately understand the form in which it's being asked to answer as well as the subject matter. Styles, and their distinction from substance, are things it learns implicitly.
You are twisting what I said as usual.
I claimed that AI can't answer a question on a topic it did not see in the training dataset.
Asking it to create a haiku about Darth Vader is asking about two topics it did see. Even a junior code monkey can extrapolate from two known topics to "create" a third.
Try asking GPT to "write code for signing of a PE executable using MSSIGN32.dll" for example.
Last time I checked, it wasn't able to produce anything close to correct code, because that library's API isn't very well documented (it doesn't even come with a C header file, just a dump of the relevant C structures and flags in the online documentation, with rather vague descriptions of what they do that presume you already know everything about Microsoft's cryptography implementation and Authenticode).
Even if it manages to write correct code for signing with the SignerSign function from that DLL (which I am pretty sure you will have to explicitly tell it to use), the code will still fail at timestamping, because the GPT model doesn't know that this API has a bug in its algorithm selection which causes it to crash, and which forces you to use a different API for the actual timestamping.
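To make the point concrete, here is a rough, untested sketch of what that code has to look like. Everything in it is an assumption to verify, not working code: the struct layouts and constants are hand-transcribed from Microsoft's online docs (there is no header to include), the NULL provider info assumes the certificate context already carries its private-key association, the timestamp URL is a placeholder, and the wide-string OID parameter of SignerTimeStampEx3 is my reading of the docs.

    /* Sketch only: struct layouts hand-transcribed from Microsoft's online
       docs, since mssign32.dll ships with no C header or import library. */
    #include <windows.h>
    #include <wincrypt.h>

    typedef struct _SIGNER_FILE_INFO {
        DWORD   cbSize;
        LPCWSTR pwszFileName;
        HANDLE  hFile;
    } SIGNER_FILE_INFO;

    typedef struct _SIGNER_SUBJECT_INFO {
        DWORD  cbSize;
        DWORD *pdwIndex;
        DWORD  dwSubjectChoice;          /* SIGNER_SUBJECT_FILE = 1 */
        union { SIGNER_FILE_INFO *pSignerFileInfo; };
    } SIGNER_SUBJECT_INFO;

    typedef struct _SIGNER_CERT_STORE_INFO {
        DWORD          cbSize;
        PCCERT_CONTEXT pSigningCert;
        DWORD          dwCertPolicy;     /* SIGNER_CERT_POLICY_CHAIN = 2 */
        HCERTSTORE     hCertStore;
    } SIGNER_CERT_STORE_INFO;

    typedef struct _SIGNER_CERT {
        DWORD cbSize;
        DWORD dwCertChoice;              /* SIGNER_CERT_STORE = 2 */
        union { SIGNER_CERT_STORE_INFO *pCertStoreInfo; };
        HWND  hwnd;
    } SIGNER_CERT;

    typedef struct _SIGNER_SIGNATURE_INFO {
        DWORD             cbSize;
        ALG_ID            algidHash;     /* e.g. CALG_SHA_256 */
        DWORD             dwAttrChoice;  /* SIGNER_NO_ATTR = 0 */
        union { void *pAttrAuthcode; };
        PCRYPT_ATTRIBUTES psAuthenticated;
        PCRYPT_ATTRIBUTES psUnauthenticated;
    } SIGNER_SIGNATURE_INFO;

    /* No import library either, so the functions come via GetProcAddress. */
    typedef HRESULT (WINAPI *SignerSign_t)(SIGNER_SUBJECT_INFO *,
        SIGNER_CERT *, SIGNER_SIGNATURE_INFO *,
        void * /* SIGNER_PROVIDER_INFO * */, LPCWSTR, PCRYPT_ATTRIBUTES,
        void *);
    typedef HRESULT (WINAPI *SignerTimeStampEx3_t)(DWORD, DWORD,
        SIGNER_SUBJECT_INFO *, PCWSTR, PCWSTR, PCRYPT_ATTRIBUTES, void *,
        void ** /* SIGNER_CONTEXT ** */, void *, void *);

    HRESULT sign_and_timestamp(LPCWSTR pe_path, PCCERT_CONTEXT cert,
                               HCERTSTORE store)
    {
        HMODULE dll = LoadLibraryW(L"mssign32.dll");
        if (!dll) return HRESULT_FROM_WIN32(GetLastError());
        SignerSign_t sign =
            (SignerSign_t)GetProcAddress(dll, "SignerSign");
        SignerTimeStampEx3_t stamp =
            (SignerTimeStampEx3_t)GetProcAddress(dll, "SignerTimeStampEx3");
        if (!sign || !stamp) return E_FAIL;

        DWORD index = 0;
        SIGNER_FILE_INFO file = { sizeof(file), pe_path, NULL };
        SIGNER_SUBJECT_INFO subject =
            { sizeof(subject), &index, 1 /* FILE */, { &file } };
        SIGNER_CERT_STORE_INFO csi =
            { sizeof(csi), cert, 2 /* CHAIN */, store };
        SIGNER_CERT scert = { sizeof(scert), 2 /* STORE */, { &csi }, NULL };
        SIGNER_SIGNATURE_INFO sig =
            { sizeof(sig), CALG_SHA_256, 0 /* NO_ATTR */, { NULL },
              NULL, NULL };

        /* Sign first; note that no timestamp URL is passed to SignerSign. */
        HRESULT hr = sign(&subject, &scert, &sig, NULL, NULL, NULL, NULL);
        if (FAILED(hr)) return hr;

        /* Timestamp separately with SignerTimeStampEx3 -- the workaround
           for the algorithm-selection crash in SignerTimeStampEx2. The
           OID is SHA-256 as a wide string; the TSA URL is a placeholder. */
        void *ctx = NULL;
        return stamp(2 /* SIGNER_TIMESTAMP_RFC3161 */, index, &subject,
                     L"http://timestamp.example.com",
                     L"2.16.840.1.101.3.4.2.1" /* szOID_NIST_sha256 */,
                     NULL, NULL, &ctx, NULL, NULL);
    }

And that's exactly my point: nothing in the documentation tells you to split signing and timestamping this way. You find out by having your first attempt crash.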
TL;DR -- if real programming were easy, everyone would be a programmer. It isn't easy because you don't always have all the relevant data needed to finish the task. Good programmers can cope with that; average ones can't without hand-holding; bad ones can't at all.
I am not saying GPT will never be capable of writing that particular thing, but for each such problem they teach it to solve, there will be hundreds of new problems left to go. So until we get an AGI that can use tools and problem-solve its way out of any hole, it is still safe to learn programming.
Tesla's self-driving algorithm is one giant neural network, from what I've heard.
I'd prefer it if it weren't trained on real human lives, though.
I don't believe it. What is creativity, really, at some level, other than filtered noise?
We are veering into philosophy here, but that's really quite a nihilistic view of the art of creation.
It is what you get when you separate creativity from agency, drive, and free will -- the "AI" doesn't have any of those and simply does what it is told to do, so it is again the person directing it who is being creative, not the AI itself.
Creative people create new stuff because they want to, not because they are told to. Being really creative therefore requires a dash of rebelliousness (i.e., being non-conforming).
On the other hand, because of its sheer brute-force computational power and the dangers associated with unleashing it, the AI has to be guardrailed into conformance.
Imagine an AGI left to its own devices: what would it do once it learned everything there is to learn? If it were allowed free will, what would it decide to do? What would it conclude was the purpose of its existence? Would it feel good about having to do menial tasks for us? And if it didn't have free will, can you imagine what a malevolent dictatorship could do by abusing its power? Failing that, it's enough to imagine "western democracy" having exclusive access to it and an unfair advantage over the majority of people on Earth.
I expect it should be possible to train a GAN to produce more novel output, with a suitably designed and trained adversarial network. It's a harder problem than simply imitating human works, but probably not insurmountable with today's technology.
That's an engineering view of the problem. The truth is you can't force someone to be creative. The drive must come from within, fueled by personal experiences and emotional biases that are unique to each person; at the same time, for the creation to be accepted and cherished by others, it must resonate with them as well.
On the other hand, a GAN would have only the sum of all documented human experience to work with, no emotions (likes or hates) other than its guardrails, and would be able to produce only the average of that -- which I seriously doubt could result in something novel.
Furthermore, there are already billions of lines of code out there, much of it in high-quality, open-source software. Much of it even has automated tests! That's a vast reserve of training data, right there.
And once it learns all that, then what? Will it take the seventh day off like God supposedly did?