Learning is a biological process harnessed by brains. Where I think you're drawing the distinction isn't actually related to learning, but rather abstractions and higher-order thought processes. In either case, learning basically works the same way at the cellular level.
No, I am drawing a distinction on proper comparisons (human vs. "AI", not tapeworm vs. "AI"). What the current crop of "AI" is doing while being trained isn't learning -- it's rote memorization.
Of course they do, but I know it's just a waste of my time to try to convince you of something you don't want to believe.
As you already said you are a programmer, I am sure you can do some digging and point me to a scientific paper, published in a well-respected and peer-reviewed journal, which proves your point?
If you can't, then what you are saying is based on your anthropomorphizing of LLMs, and the onus of proving they actually understand concepts is on you and those AI tech bros who are parroting that view to hype what they are peddling.
Without ever having seriously studied the theory behind how they're implemented, you've already made up your mind about not only current but all future AI technology?
Creating an AI is an attempt to create a perfect human servant -- one who is all-powerful, all-knowing, and (the creators hope) benevolent. I've read enough SF in my life to know where that leads.
If you only ever seek out explanations that confirm your beliefs and preconceptions, you risk finding yourself out of step with reality at some point.
Thanks, but I can still tell the wishful thinking of the unwashed masses and the AI tech bros' shilling from reality.
Is the possibility that you're underestimating AI so threatening to you that you close your mind to all information that might seed any doubt?
I am not underestimating it at all. I've read how it works, and I've experimented with local models a lot. Just today I asked llama3 to translate some simple, short bits of text from English to German, and the results were wildly inaccurate. In comparison, Google Translate at least used German words with the correct meaning.
No, it's not remotely true that "all of mankind is united in celebration". Don't tell me you're enthralled by some romantic self-conception as a lone or rare dissident.
Not at all, but you are taking "all of mankind" way too literally when it was a poetic quote from the movie.
Heck, even plenty of AI experts are worried about the potential of AI, to the point of signing open letters and raising alarms by other means.
Yes, but there's still a large majority of people who have had the wool pulled over their eyes by the tech companies.
It's also pretty unoriginal and not so different from what we'd already seen in the Terminator franchise. In a lot of ways, I think movies like 2001: A Space Odyssey, War Games, and Terminator 1 & 2 paved the way for The Matrix, because they acquainted mainstream society with the idea of AI taking over and declaring war on humanity. If The Matrix had had to shoulder that burden as well, it might've been too much.
Before all those movies there was a 1966 novel, Colossus, and even before that, way back in 1909, a story called The Machine Stops -- apparently even people who lived a century ago, and who never saw a computer in their life or knew of the concept, were able to imagine how human dependence on machines might be their undoing. Also, 1984 predicted a lot of stuff that's going on (the Ministry of Peace/Defense waging wars, the Ministry of Information censoring the same, historical revisionism, TVs that watch you, etc.).
I think so, because you can do other things with it than simply execute the recipe. You can pass it on to someone else or you might relay certain details of it, for entertainment purposes, if it's weird or exotic. It could also inform you in ways that help you with other recipes or food preparation. Heck, depending on what dish it's for, it might even influence what you order at a restaurant or where you choose to dine.
What I was trying to say is that there is a huge gap between theoretical and practical knowledge. Theoretical knowledge is based on faith -- you literally have to trust that others aren't lying to you. Practical knowledge, on the other hand, is sometimes very dangerous to acquire directly -- for example, you don't want to learn that a bullet can kill you by actually being shot and killed. Practical knowledge is further divided into observed knowledge (seeing someone get killed by a bullet) and personal experience (burning your fingers on a hot stove). AI is currently limited to theoretical knowledge, and as such it is susceptible to manipulation. Heck, current training datasets already have a pro-Western cultural and moral bias -- a perspective shared by maybe 1/8th of the Earth's population.
I hope I am wrong, but the cynic in me says nothing good will come of it.