News: AI Learns Like a Pigeon, Researchers Say

Pigeons and AI are so similar: neither cares where their <Mod Edit> lands.
ROFL... that is SO true. I love tech and see great potential in AI, but I fear the industry isn't listening to the people who design and invent this stuff. People like Hinton are telling the tech sector, or anyone who will listen, that we need to pump the brakes on AI so we can better understand how AI is 'thinking'. Yet we, the tech sector, blindly push forward without a proper understanding of what is actually going on inside some of these models. I hope our soon-to-be AI overlords are kinder than we are....
 
Last edited by a moderator:
I wonder if humans can be trained to use this associative learning technique. If you could identify problems where it's a more effective way to learn, and then deploy it effectively, perhaps you could master certain tasks better and more quickly than others.
 
I wonder if humans can be trained to use this associative learning technique. If you could identify problems where it's a more effective way to learn, and then deploy it effectively, perhaps you could master certain tasks better and more quickly than others.
Reading the article, it seems to be brute-force learning at its best - pigeons simply don't get bored.
Humans can master it by not getting impatient when faced with repetitive, simplistic problems.
...
Like, how to tighten that bolt with a wrench.
...
I guess I'll keep trying to think outside the box, thank you.
 
Reading the article, it seems to be brute-force learning at its best - pigeons simply don't get bored.
Humans can master it by not getting impatient when faced with repetitive, simplistic problems.
...
Like, how to tighten that bolt with a wrench.
...
I guess I'll keep trying to think outside the box, thank you.
Uhm, school biology taught me that pigeons are intelligent creatures capable of complex learning, and that humans can learn repetitive tasks more efficiently than pigeons by using our knowledge and understanding of the world.

Overall, the best way for us to learn is to combine our ability to think outside the box with our ability to learn from repetition))
 
Uhm, school biology taught me that pigeons are intelligent creatures capable of complex learning, and that humans can learn repetitive tasks more efficiently than pigeons by using our knowledge and understanding of the world.

Overall, the best way for us to learn is to combine our ability to think outside the box with our ability to learn from repetition))
Pigeons ARE capable of complex learning: they can repeat a task that they've been shown. Crows (and several other corvids) can solve puzzles. Current AIs are like pigeons, indeed, but nowhere near problem-solving intelligence.
Or, to cite a founder of AI (who insists on calling it Applied Epistemology; I would tend to agree), current AIs know "How", but we're still a ways away from "Why".
 
Current AIs are like pigeons, indeed, but nowhere near problem-solving intelligence.
GPT-class AI can solve word problems. Seriously, how could it write code if it were incapable of problem-solving?

Or, to cite a founder of AI (who insists on calling it Applied Epistemology; I would tend to agree), current AIs know "How", but we're still a ways away from "Why".
Which founder and when did they say it?
 
GPT-class AI can solve word problems. Seriously, how could it write code if it were incapable of problem-solving?


Which founder and when did they say it?
Q1: pattern recognition. Don't kid yourself, 99% of the code out there is solving the same base problems over and over; current AIs recognize those patterns and regurgitate the known solutions - they are VERY good at it. Ask them to solve a problem no one has ever solved before and you'll get garbage - current AI engines are basically probability generators; they'll find a pattern and "guess" the solution (i.e. pick the pattern with the highest probability of matching). No matching pattern for the problem? You'll get garbage.
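
To make the "probability generator" point concrete, here's a minimal sketch - my own toy illustration, not any real model's code - of picking whichever continuation gets the highest probability from a set of invented scores:

```python
import math

# Toy illustration of "pick the most probable pattern": the candidates and
# scores below are invented, not taken from any real model.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates after the prompt "public static void"
candidates = ["main", "toString", "banana"]
scores = [4.2, 1.3, -2.0]   # made-up model scores (logits)

probs = softmax(scores)
best = candidates[probs.index(max(probs))]
print(best)  # "main" - the highest-probability match, not a reasoned answer
```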

Q2: Hermann Iline, he told me so a few months ago. I know him personally. You probably won't find anything about him in the US, as he's Russian-born and he mostly writes in French. He started studying AE in the '80s and wrote a few papers on the subject (AI nerds in the early 2000s knew him), then he veered towards social science, epistemology and philosophy - you know, the next step: "why".
 
Q1: pattern recognition. Don't kid yourself, 99% of the code out there is solving the same base problems over and over; current AIs recognize those patterns and regurgitate the known solutions - they are VERY good at it.
That's too simplistic. They can do arithmetic, basic reasoning, and solve word problems. It's a lot more than just pattern-matching.

Ask them to solve a problem no one has ever solved before and you'll get garbage
Why are people so quick to dismiss AI after one bad answer or hallucination? If you don't get a good answer, there are techniques you can use to work around its limitations - one of which is that it can't take arbitrarily long to think about the request and generate output.
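
As a rough sketch of what I mean by working around that limit (purely illustrative - ask_model is a stand-in for whatever chat API you use, not a real function): break the request into smaller steps across several calls, so no single response has to do all the thinking at once.

```python
# Purely illustrative sketch: split a request into steps across several calls,
# so the model never has to produce the whole answer in one pass.
# ask_model() is a placeholder, not a real library function - wire it to
# whatever chat API you actually use.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in your chat API here")

def solve_in_steps(problem: str) -> str:
    # 1) Ask for a plan first.
    plan = ask_model(f"List the steps needed to solve this, one per line:\n{problem}")
    # 2) Work the steps one at a time, each in its own short request.
    partial_results = []
    for step in plan.splitlines():
        if step.strip():
            partial_results.append(
                ask_model(f"Problem: {problem}\nDo only this step and show your work: {step}")
            )
    # 3) Ask for a final synthesis of the partial results.
    return ask_model("Combine these partial results into a final answer:\n" + "\n".join(partial_results))
```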
 
That's too simplistic. They can do arithmetic, basic reasoning, and solve word problems. It's a lot more than just pattern-matching.


Why are people so quick to dismiss AI after one bad answer or hallucination? If you don't get a good answer, there are techniques you can use to work around its limitations - one of which is that it can't take arbitrarily long to think about the request and generate output.
Arithmetic: that's just pattern-following according to rules. Basic reasoning: not exactly - they'll match the pattern of the situation to the most appropriate pattern for its resolution, but most don't have logic. E.g. ask an AI if it's old and it will tell you when it was trained and how old its data set is, but then give it the current year and most of them will not be able to deduce how old their data set actually is - yes, I tried with several engines and models, and many failed. Some managed, but had other lapses in logic.
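
For what it's worth, the test I'm describing boils down to a one-step subtraction; something like this sketch (the years and prompt wording are mine, just to show the shape of the test):

```python
# Sketch of the age-deduction test described above. The cutoff and current
# year are hypothetical values, just to show the shape of the question.
cutoff_year = 2021    # when the model says its training data ends
current_year = 2023   # supplied in the prompt

prompt = (
    f"Your training data ends in {cutoff_year}. "
    f"The current year is {current_year}. "
    "How old is your training data, in years?"
)

expected_age = current_year - cutoff_year  # the one-step deduction: 2 years
print(prompt)
print(f"Expected answer: {expected_age} years")
```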

One bad answer: no, I spent some time actually running models and settings, and I would steer the questions until I reached a point where the AI spouted garbage; I could even tell WHY it spouted that garbage, because it was data it wasn't trained on, but it managed to extract some elements from the questions that steered it towards an answer that was theoretically close but factually completely wrong.

E.g. "What can you tell me about novelist Daniel Pennac?" The answer was something close to "Daniel Pennac is a French novelist who published several written works such as novels and theatre plays; his most well-known works, like L'avare or Les précieuses ridicules, earned him wide recognition". So yes, he's a renowned French novelist and playwright (which can be inferred from the question), but Daniel Pennac is still alive, while the author of the cited works is Jean-Baptiste Poquelin, AKA Molière. He's been dead and buried for more than 3 centuries now.

So close, but no cigar - pure probability: a renowned French novelist has a high chance of being Molière, Hugo or Dumas, but since the first of those used a stage name, there is a higher chance that an unknown nickname was used for him than for the others - so he came up as "the" answer for several models.

So no, I didn't exactly dismiss AI after one wrong answer: I actually pushed them hard to see what made them tick, and I managed to trip up most of them after a while, because I know how they work. I sure as heck didn't simply give a couple of silly questions to ChatGPT and laugh it off when it made a mistake for a reason I didn't (try to) understand.
 
Arithmetic: that's just pattern-following according to rules.
To the extent that's true, so is your brain.

Related: [image]


Basic reasoning: not exactly - they'll match the pattern of the situation to the most appropriate pattern for its resolution, but most don't have logic. E.g. ask an AI if it's old and it will tell you when it was trained and how old its data set is,
That is not a test of basic reasoning. You assume a degree of self-knowledge that it doesn't necessarily have.

When I said "basic reasoning", I meant logical deductions and inferences.

One bad answer: no, I spent some time actually running models and settings, and I would steer the questions until I reached a point where the AI spouted garbage; I could even tell WHY it spouted that garbage, because it was data it wasn't trained on, but it managed to extract some elements from the questions that steered it towards an answer that was theoretically close but factually completely wrong.

E.g. "What can you tell me about novelist Daniel Pennac?" The answer was something close to "Daniel Pennac is a French novelist who published several written works such as novels and theatre plays; his most well-known works, like L'avare or Les précieuses ridicules, earned him wide recognition". So yes, he's a renowned French novelist and playwright (which can be inferred from the question), but Daniel Pennac is still alive, while the author of the cited works is Jean-Baptiste Poquelin, AKA Molière. He's been dead and buried for more than 3 centuries now.

So close, but no cigar - pure probability: a renowned French novelist has a high chance of being Molière, Hugo or Dumas, but since the first of those used a stage name, there is a higher chance that an unknown nickname was used for him than for the others - so he came up as "the" answer for several models.
This highlights a misconception people have about these chatbots. People treat them according to some preconception of what they think an all-knowing AI should be able to do - like Commander Data, from Star Trek: The Next Generation.

[Image: Commander Data]


The mere fact that these AIs know anything specific is pretty amazing, but that's not what they're for. Their strength lies in their ability to solve general problems, because they need lots of examples of a kind of thing in order to truly know and understand it - not unlike a human! They aren't a substitute for Wikipedia, yet you seem to expect that if they see a fact a single time, they'll remember it.
 
To the extent that's true, so is your brain.
No, the brain is not so great at arithmetic by default - not like a microprocessor. Inasmuch as one can "get" the patterns of arithmetic, one gets good at maths. Strikingly good, even, but still.
Related: [image]


That is not a test of basic reasoning. You assume a degree of self-knowledge that it doesn't necessarily have.
I did test first whether the model had that kind of awareness. The question would indeed be pointless if the model didn't know when its training data set was completed.
When I said "basic reasoning", I meant logical deductions and inferences.
Based on that, I maintain that many models did fail at basic deduction and inference: if I make sure a model knows when its data set was created, give it the current year, and ask how old it is - and still get the wrong answer - then it has failed a one-step deduction.
This highlights a misconception people have about these chatbots. People treat them according to some preconception of what they think an all-knowing AI should be able to do - like Commander Data, from Star Trek: The Next Generation.
ST:TNG was entertaining, but ST:VOY's Doctor was more prescient about what AI's problems would be.
The mere fact that these AIs know anything specific is pretty amazing, but that's not what they're for. Their strength lies in their ability to solve general problems, because they need lots of examples of a kind of thing in order to truly know and understand it - not unlike a human! They aren't a substitute for Wikipedia, yet you seem to expect that if they see a fact a single time, they'll remember it.
While a neural network is built by mapping faster paths to more likely outcomes for a given set of parameters, these models DO "remember" stuff - and they're supposed to remember it more reliably than us, since they aren't subject to biological degradation. Of course that's not the best use for them (a search engine would be better at it), but they are able to say whether something is likely to be true or false; however, they use probabilities for that, not logic - see the nuance?
To be fair, most humans do that too at one moment or another, so it's still impressive - but do you see when humans make such mistakes?
More often than not, it's when we don't bother asking "why".