Pigeons learn much like computer AIs, and that makes them better than humans at solving some complex problems, a new study finds.
AI Learns Like a Pigeon, Researchers Say : Read more
ROFL... that is SO true. I love tech and see great potential in AI, but I fear the industry isn't listening to the people who design/invent this stuff. People like Hinton are telling the tech sector, or anyone who will listen, that we need to pump the brakes on AI so we can better understand how AI is 'thinking', yet we/the tech sector blindly push forward without a proper understanding of what is actually going on in some of these AI models. I hope our soon-to-be AI overlords are kinder than we are.... Pigeons and AI are so similar: neither cares where its <Mod Edit> lands.
Reading the article, it seems to be brute-force learning at its best - pigeons simply don't get bored. I wonder if humans can be trained to use this associative learning technique. If you could identify problems where it's a more effective way to learn, and then deploy it effectively, perhaps you could master certain tasks better and more quickly than others.
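A minimal sketch of what that kind of trial-and-error associative learning can look like in code, assuming a toy task with a few options and a reward signal (the options, rates, and trial count here are illustrative, not from the study):

```python
import random

# Toy associative learning: keep trying options, reward the right one,
# and let the learned association strengths drive future choices.
options = ["A", "B", "C", "D"]
correct = "C"                        # hypothetical target the learner must discover
value = {o: 0.0 for o in options}    # association strength learned per option
learning_rate = 0.1
epsilon = 0.2                        # fraction of trials spent exploring at random

for trial in range(500):
    if random.random() < epsilon:
        choice = random.choice(options)        # explore
    else:
        choice = max(value, key=value.get)     # exploit the strongest association
    reward = 1.0 if choice == correct else 0.0
    value[choice] += learning_rate * (reward - value[choice])

print(value)  # after enough repetitions, the correct option dominates
```

Nothing in the loop "understands" the task; the association emerges purely from repetition and reward, which is roughly the behaviour the article ascribes to the pigeons (and to this style of machine learning).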
"Reading the article, it seems to be brute-force learning at its best - pigeons simply don't get bored."

Uhm, school biology taught me that pigeons are intelligent creatures capable of complex learning, and that humans can learn repetitive tasks more efficiently than pigeons by using our knowledge and understanding of the world.
Humans can master it by not getting impatient when faced with repetitive, simplistic problems.
...
Like, how to tighten that bolt with a wrench.
...
I guess I'll keep trying to think outside the box, thank you.
"Uhm, school biology taught me that pigeons are intelligent creatures capable of complex learning, and that humans can learn repetitive tasks more efficiently than pigeons by using our knowledge and understanding of the world."

Pigeons ARE capable of complex learning: they can repeat a task that they've been shown. Crows (and several other corvids) can solve puzzles. Current AIs are like pigeons, indeed, but nowhere near problem-solving intelligence.
Overall, the best way for us to learn is to combine our ability to think outside the box with our ability to learn from repetition.
"Current AIs are like pigeons, indeed, but nowhere near problem-solving intelligence."

GPT-class AI can solve word problems. Seriously, how could it write code if it were incapable of problem-solving?
"Or, to cite a founder of AI (who insists on calling it Applied Epistemology, and I would tend to agree): current AIs know 'How', but we're still a long way from 'Why'."

Which founder, and when did they say it?
"GPT-class AI can solve word problems. Seriously, how could it write code if it were incapable of problem-solving?"

Question 1: pattern recognition. Don't kid yourself, 99% of the code out there is solving the same base problems over and over; current AIs recognize those patterns and regurgitate the known solutions - they are VERY good at it. Ask them to solve a problem no one has ever solved before and you'll get garbage - current AI engines are basically probability generators: they find a pattern and "guess" the solution (i.e. pick the pattern with the highest probability of matching). No matching pattern for the problem? You'll get garbage.
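To make the "probability generator" description concrete, here is a minimal sketch of greedy selection over candidate continuations - the engine scores every candidate and emits whichever is most probable, with no check that it is actually right; the candidates and numbers are invented for illustration:

```python
# Toy "probability generator": pick the highest-probability continuation,
# whether or not it happens to be correct for the problem at hand.
candidate_probs = {
    "sorted(xs)":         0.62,  # the pattern seen most often in training data
    "xs.sort()":          0.30,
    "quickselect(xs, k)": 0.08,  # a rarer (possibly correct) answer loses out
}

def next_step(probs):
    # Greedy decoding: argmax over the probability distribution.
    return max(probs, key=probs.get)

print(next_step(candidate_probs))  # -> "sorted(xs)"
```

When a problem genuinely matches a well-trodden pattern this works remarkably well; when nothing in the distribution fits, the argmax still returns something - which is the "garbage" case described above.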
"Question 1: pattern recognition. Don't kid yourself, 99% of the code out there is solving the same base problems over and over; current AIs recognize those patterns and regurgitate the known solutions - they are VERY good at it."

That's too simplistic. They can do arithmetic, basic reasoning, and solve word problems. It's a lot more than just pattern-matching.
"Ask them to solve a problem no one has ever solved before and you'll get garbage..."

Why is it that people are so quick to dismiss AI after one bad answer or hallucination? If you don't get a good answer, there are techniques you can use to work around its limitations, one of which is that it can't take arbitrarily long to think about the request and generate output.
"That's too simplistic. They can do arithmetic, basic reasoning, and solve word problems. It's a lot more than just pattern-matching."

Arithmetic: that is just pattern-following according to rules. Basic reasoning: not exactly - they'll match the pattern of the situation to the most appropriate pattern for its resolution, but most don't have logic. E.g. ask an AI if it's old and it will tell you when it was trained and how old its data set is, but then give it the current year and most of them will not be able to deduce how old that data set actually is - yes, I tried with several engines and models, and many failed. Some managed, but had other lapses in logic.
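For concreteness, the deduction being tested is a single subtraction; here is a minimal sketch of how such a check might be scripted, where ask_model() is a hypothetical stand-in for whatever chat API is being tested and the years are examples, not claims about any specific model:

```python
# Hypothetical harness for the "how old is your data set?" test described above.
def ask_model(prompt: str) -> str:
    # Stand-in reply so the sketch runs; replace with a real chat API call.
    return "My training data is 2 years old."

training_cutoff_year = 2021   # what the model reports about its own data set
current_year = 2023           # supplied to the model in the prompt
expected_age = current_year - training_cutoff_year   # the inference being tested: 2

prompt = (f"Your training data ends in {training_cutoff_year}. "
          f"The current year is {current_year}. How many years old is your data set?")
answer = ask_model(prompt)
print("passes" if str(expected_age) in answer else "fails")
```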
"Arithmetic: that is just pattern-following according to rules."

To the extent that's true, so is your brain.
"Basic reasoning: not exactly - they'll match the pattern of the situation to the most appropriate pattern for its resolution, but most don't have logic. E.g. ask an AI if it's old and it will tell you when it was trained and how old its data set is..."

That is not a test of basic reasoning. You assume a degree of self-knowledge that it doesn't necessarily have.
"One bad answer: no, I spent some time actually running models and settings, and I would steer the questions until I reached a point where the AI spouted garbage; I could even tell WHY it spouted that garbage, because it was data it wasn't trained on, but it managed to extract some elements from the question that steered it towards an answer that was theoretically close but factually completely wrong. E.g. 'What can you tell me about novelist Daniel Pennac?' The answer was something close to 'Daniel Pennac is a French novelist who published several written works such as novels and theatre plays; his most well-known works, like L'avare or Les précieuses ridicules, earned him wide recognition.' So yes, a renowned French novelist and playwright (which can be inferred from the question), but Daniel Pennac is still alive, while the author of the cited works is Jean-Baptiste Poquelin, AKA Molière - he's been dead and buried for more than three centuries now. So close, but no cigar - pure probability: a renowned French novelist has a high chance of being Molière, Hugo or Dumas, but since the first of those used a stage name, there was a higher chance that an unknown nickname referred to him rather than to the others - so he came up as 'the' answer for several models."

This highlights a misconception people have about these chatbots. People treat them according to some preconception of what they think an all-knowing AI should be able to do - like Commander Data from Star Trek: The Next Generation.
"To the extent that's true, so is your brain."

No, the brain is not so great at arithmetic by default - not like a microprocessor. Inasmuch as one can "get" the patterns of arithmetic, one gets good at maths. Strikingly good, even, but still.
I did test whether the model had that kind of awareness first. The question would indeed be pointless if the model didn't know when its training data set was completed.
"When I said 'basic reasoning', I meant logical deductions and inferences."

Based on that, I maintain that many models did fail at basic deduction and inference: I made sure the model knew when its data set was created, gave it the current year, asked how old it is - and got the wrong answer.
"This highlights a misconception people have about these chatbots. People treat them according to some preconception of what they think an all-knowing AI should be able to do - like Commander Data from Star Trek: The Next Generation."

ST:TNG was entertaining, but ST:VOY's Doctor was more prescient about the problems AI would actually have.
"The mere fact that the AI knows anything specific is pretty amazing, but that's not what they're for. Their strength lies in their ability to solve general problems, because they need lots of examples of a kind of thing in order to truly know and understand it - not unlike a human! They aren't a substitute for Wikipedia, where it seems you expect that if they see a fact one single time, they'll remember it."

While a neural network is built by mapping faster paths to more likely outcomes for a given set of parameters, these models DO "remember" stuff - and they're supposed to remember it more reliably than we do, since they aren't subject to biological degradation. Of course that's not the best use for them (a search engine would be better at it), but they are able to say whether something is likely to be true or false; however, they use probabilities for that, not logic - see the nuance?
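As a rough illustration of that "probabilities, not logic" point (echoing the Daniel Pennac example earlier in the thread), here is a minimal sketch in which the statistically safest candidate wins even though it is the wrong answer to the question asked; the scores are invented and stand in for whatever a real model has absorbed from its training data:

```python
# Probability-style recall, using the thread's "renowned French novelist" example.
candidate_scores = {
    "Molière":         0.55,  # statistically the safest famous-French-writer guess
    "Victor Hugo":     0.25,
    "Alexandre Dumas": 0.15,
    "Daniel Pennac":   0.05,  # the author the question was actually about
}

def most_probable(scores):
    # No logical cross-check against the question - just argmax over likelihood.
    return max(scores, key=scores.get)

print(most_probable(candidate_scores))  # -> "Molière": plausible-sounding, wrong here
```

A lookup-based system would verify the claim against a stored fact before answering; a probability generator only has to sound likely.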