News: Current AIs only have the IQ level of a cat, asserts Google DeepMind CEO

A Stoner

Distinguished
Jan 19, 2009
378
145
18,960
That is an infinitely large exaggeration. So far, there is not an AI out there that has any intelligence at all. None, zero, nada, zilch. They do not even have the intelligence of an insect; there are probably plants with more intelligence.

I am not saying that they do not possess the ability to do things, but none of it is based upon intelligence.
 

Neilbob

Distinguished
Mar 31, 2014
266
350
19,720
I wonder, if you placed an AI supercomputer and a cat on a long conveyor belt heading slowly and dramatically towards a giant meat grinder, which one would try to get off the conveyor belt first?
 
  • Like
Reactions: kano1337
That's because there is no "intelligence" in current "AI"; it's all pattern recognition and content cues. We know it's -superb- at those tasks: it can pore over umpteen thousand astronomical images and find patterns that no human ever could, or look at medical scans and spot dental and health issues. It's also great at focused content, like law, which is very black and white. It will also excel at finding errors and bugs in code because it can trial-and-error at a speed impossible for even a team of coders.

Pretty much everywhere else it falls flat, because that would involve a certain level of intelligence. I wouldn't even say current AI has the intelligence level of a cat, because a cat can very efficiently stalk and hunt prey, notice and evade danger, and anticipate and predict events. To me, current AI has intelligence somewhere between an encyclopedia index and Excel, and it has a long way to go to even reach "crow" levels of intelligence, much less "average adult human" or "Stephen Hawking" level.
 
  • Like
Reactions: KyaraM

JRStern

Distinguished
Mar 20, 2017
178
67
18,660
Hassabis seems a fine fellow, but he does cats no favor with the comparison. Cats can't tell me much about the history of mathematics and ChatGPT can't catch a mouse or red dot. So if they're equal, they're equal at about zero. Surely cats deserve better.
 

watzupken

Reputable
Mar 16, 2020
1,181
663
6,070
To begin with, AI has no IQ. The algorithm is only as good as the programmer's attempt to determine the right response(s) based on the content the AI has been trained on and has access to. In any case, the algorithm is a black box.
 
  • Like
Reactions: KyaraM

ralfthedog

Distinguished
Oct 23, 2006
41
10
18,535
That is an infinitely large exaggeration. So far, there is not an AI out there that has any intelligence at all. None, zero, nada, zilch. They do not even have the intelligence of an insect; there are probably plants with more intelligence.

I am not saying that they do not possess the ability to do things, but none of it is based upon intelligence.
AI has intelligence because you can feed it problems it has not been exposed to or trained on and it can find solutions. The definition of intelligence is the ability to solve problems. The newer LLMs with moderately large context windows are quite good at solving very complex problems. They don't have feelings or consciousness, but they do have intellect.
 
  • Like
Reactions: bit_user

bit_user

Titan
Ambassador
OMG, so many "experts" in this thread who probably learned all they know about AI from some rando YouTubers who also don't know as much as they think they do, plus maybe from toying with ChatGPT without having read anything about how to tune your prompts for better results.

Imagine people fancying themselves more knowledgeable than lawyers, accountants, electrical engineers, architects, etc. after such a cursory overview. The hubris of AI-deniers always astounds me.

That is an infinitely large exaggeration. So far, there is not an AI out there that has any intelligence at all. None, zero, nada, zilch.
At some point sooner than you all think, you're probably going to find yourself facing the question: "if it looks like a duck and quacks like a duck, how can I be so sure it's really not a duck?" Then again, those few earnest flat-earthers remind us all what a potent force self-deception can be...

I am not saying that they do not possess the ability to do things, but none of it is based upon intelligence.
First, the article was clear that he was talking about Artificial General Intelligence, not LLMs. So, to the extent you're basing your judgement on them, you're thinking of something different than what he's talking about.

Second, are you sure you didn't just define intelligence as "whatever the LLM(s) you've tried don't have"?

If you try to devise an objective test for intelligence, as Alan Turing famously attempted, you're likely to find that LLMs actually can perform significantly better than a coinflip at some cognitive tasks. If you're interested, I'd suggest digging into the latest Hugging Face benchmark, while keeping in mind that they only test Open Source LLMs, and therefore it doesn't necessarily reflect the industry's best efforts.
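Just to make the coin-flip point concrete, here's a toy sketch of how a multiple-choice benchmark compares a model against random chance. The questions and the ask_model() stub are made up for illustration; they aren't from any real test set.

```python
import random

# Toy multiple-choice items standing in for a real benchmark set (hypothetical).
QUESTIONS = [
    {"prompt": "2 + 2 * 3 = ?", "choices": ["8", "10", "12", "6"], "answer": 0},
    {"prompt": "Which of these is a mammal?", "choices": ["trout", "cat", "gecko", "wasp"], "answer": 1},
]

def ask_model(prompt: str, choices: list[str]) -> int:
    """Stand-in for a real model call; this one just guesses at random."""
    return random.randrange(len(choices))

def evaluate(questions) -> tuple[float, float]:
    correct, chance = 0, 0.0
    for q in questions:
        if ask_model(q["prompt"], q["choices"]) == q["answer"]:
            correct += 1
        chance += 1.0 / len(q["choices"])  # expected score of pure guessing
    n = len(questions)
    return correct / n, chance / n  # (model accuracy, chance baseline)

if __name__ == "__main__":
    acc, baseline = evaluate(QUESTIONS)
    print(f"model accuracy: {acc:.2f}  vs  chance baseline: {baseline:.2f}")
```

A real harness does the same thing at scale with thousands of items; the point is just that "better than chance" is something you can measure, not something you have to take on faith.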

Their scoring criteria are detailed here:
 

bit_user

Titan
Ambassador
It will also excel at finding errors and bugs in code because it can trial-and-error at a speed impossible for even a team of coders.
No, that's not how they work.

In any case, the algorithm is a black box.
Check out the Hugging Face link in my post above. All of the LLMs they test are open source, including Facebook/Meta's. You can download and read the algorithms yourself.
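For example, here's roughly what "download it and poke at it yourself" looks like with the Hugging Face transformers library. I'm using gpt2 purely because it's small; substitute any openly licensed model ID from the Hub.

```python
# Minimal sketch, assuming `pip install transformers torch` has been run.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # small example model; swap in any open model ID from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)      # downloads tokenizer files
model = AutoModelForCausalLM.from_pretrained(model_id)   # downloads the weights

# The architecture is ordinary PyTorch code you can read and step through.
print(model)                   # full module tree: layers, dimensions, etc.
print(model.num_parameters())  # parameter count

# Generate a short continuation, just to show the whole thing runs locally.
inputs = tokenizer("The history of mathematics begins", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```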

Hassabis seems a fine fellow, but he does cats no favor with the comparison. Cats can't tell me much about the history of mathematics and ChatGPT can't catch a mouse or red dot. So if they're equal, they're equal at about zero. Surely cats deserve better.
I was poking around on Wikipedia and noticed that not only do they have a page on animal cognition, but there's even a whole page dedicated to cat intelligence!
 

NinoPino

Respectable
May 26, 2022
496
310
2,060
AI has intelligence because you can feed it problems it has not been exposed to or trained on and it can find solutions.
Here we should define the meaning of "solving problems". A lot of software is developed to "solve problems" it has never seen before: spreadsheets, RADs, programming languages, etc.
A lot of things done by LLMs are actually not so wonderful. Some examples: sorting things, summarising things, generating text, generating images, translating text, interpreting text, interpreting images, etc. Each of those can be achieved with a specific neural network, and I think nobody in the world interprets such an ability as "intelligence".
Can blending all these "abilities" together through other neural networks be considered "intelligence"? For me, no; it is only a more complex neural network that can mimic human behaviour.
The ability to "solve problems" it has never been exposed to may exist, but not the ability to "solve problems" it was never trained on. If the network was never trained on image recognition, it will never be able to interpret images; if the LLM is able to sort elements, it was trained on sorting, and so on.

The definition of intelligence is the ability to solve problems.
If this is the definition of intelligence, I am pretty sure that current LLMs cannot solve problems, because LLMs solve problems only if trained on similar problems.
In practice, an LLM finds a known solution to a similar problem, or a solution to a known problem, as you prefer, but it cannot find a new solution for a known problem or a solution for an unknown problem.
 

NinoPino

Respectable
May 26, 2022
496
310
2,060
At some point sooner than you all think, you're probably going to find yourself facing the question: "if it looks like a duck and quacks like a duck, how can I be so sure it's really not a duck?" ...
At some point, maybe, but it's not certain; we could go extinct sooner.
Actually, the problem is that it does not look like a duck and does not quack like a duck. 😀

First, the article was clear that he was talking about Artificial General Intelligence, not LLMs. So, to the extent you're basing your judgement on them, you're thinking of something different than what he's talking about.
You are right here. But you know, nowadays when somebody talks about AI, it implicitly means LLMs and ChatGPT.
The power of marketing.
 

bit_user

Titan
Ambassador
it is only a more complex neural network that can mimic human behaviour.
Humans are intelligent, no? How much can something mimic human behavior without also reasonably being considered intelligent?

The ability to "solve problems" it has never been exposed to may exist, but not the ability to "solve problems" it was never trained on.
LLMs don't need dedicated training for each facet of everything they do. For instance, you don't need a separate training module for them on grammar. They can deduce it from other training material that contains a sufficient amount of grammatically-correct writing.

A lot of the reason people try to "jailbreak" LLMs is to get advice on things they weren't specifically trained to know, but just sort of picked up along the way.

If the network was never trained on image recognition, it will never be able to interpret images,
It's like when a person born blind is later granted the ability to see. People like that haven't developed the neural pathways to interpret everything their eyes are showing them; they tend to find their new ability overwhelming and not to be very good at visual tasks.

In practice, an LLM finds a known solution to a similar problem, or a solution to a known problem, as you prefer, but it cannot find a new solution for a known problem or a solution for an unknown problem.
Well, here's a paper that demonstrates how chain-of-thought prompting can substantially improve the performance on a subset of hard problems in the Big Bench test set. If you're judging model capability on the basis of simple prompts, maybe it's worth reconsidering.

The average human scores only 67.7% on these tests. Without chain-of-thought prompting, the best LLM only gets 56.6%. With CoT prompting, it gets 73.9%. So, that's enough to put it decidedly ahead of the average human score.

BTW, this benchmark is listed as BBH on the Hugging Face leaderboard 2.
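If you're curious what chain-of-thought prompting actually looks like in practice, here's a stripped-down sketch. The question is a standard toy example, and query_llm is just a placeholder name for whatever model or API you'd actually call.

```python
QUESTION = ("A juggler has 16 balls. Half of the balls are golf balls, "
            "and half of the golf balls are blue. How many blue golf balls are there?")

# Direct prompting: ask for the answer straight away.
direct_prompt = f"Q: {QUESTION}\nA:"

# Chain-of-thought prompting: a worked example shows the model how to reason,
# so it spells out intermediate steps before committing to a final answer.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. "
    "The answer is 11.\n\n"
    f"Q: {QUESTION}\n"
    "A: Let's think step by step."
)

if __name__ == "__main__":
    # Send either prompt to your model of choice, e.g. response = query_llm(cot_prompt).
    print("--- direct prompt ---\n" + direct_prompt)
    print("\n--- chain-of-thought prompt ---\n" + cot_prompt)
```

With a capable model, the second prompt is much more likely to walk through 16 → 8 → 4 and land on the correct answer (4) than the first one is.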
 

ralfthedog

Distinguished
Oct 23, 2006
41
10
18,535
Here we should define the meaning of "solving problems". A lot of software is developed to "solve problems" it has never seen before: spreadsheets, RADs, programming languages, etc.
A lot of things done by LLMs are actually not so wonderful. Some examples: sorting things, summarising things, generating text, generating images, translating text, interpreting text, interpreting images, etc. Each of those can be achieved with a specific neural network, and I think nobody in the world interprets such an ability as "intelligence".
Can blending all these "abilities" together through other neural networks be considered "intelligence"? For me, no; it is only a more complex neural network that can mimic human behaviour.
The ability to "solve problems" it has never been exposed to may exist, but not the ability to "solve problems" it was never trained on. If the network was never trained on image recognition, it will never be able to interpret images; if the LLM is able to sort elements, it was trained on sorting, and so on.


If this is the definition of intelligence, I am pretty sure that current LLMs cannot solve problems, because LLMs solve problems only if trained on similar problems.
In practice, an LLM finds a known solution to a similar problem, or a solution to a known problem, as you prefer, but it cannot find a new solution for a known problem or a solution for an unknown problem.
Actually, you can describe a game's rules to an LLM and it will play that game competently, much more so if the LLM has a large context window (not too large, though, or they get a bit crazy). There are some tricks you can play with an LLM and a vector database where a very capable LLM can actually learn from playing a game, just like a human does. You tell the LLM how to play the game. It starts out good. You beat it a few times. It beats you a few times. Each time you play, it gets better. Before long, you can't beat it.
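Roughly, the vector-database trick looks like the sketch below. embed() and query_llm() are placeholders for whatever embedding model and LLM you actually use, and the lesson store is just an in-memory stand-in for a real vector database.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError("call your embedding model here")

def query_llm(prompt: str) -> str:
    raise NotImplementedError("call your LLM here")

class LessonStore:
    """Tiny in-memory stand-in for a real vector database."""
    def __init__(self):
        self.vectors, self.texts = [], []

    def add(self, lesson: str):
        self.vectors.append(embed(lesson))
        self.texts.append(lesson)

    def most_similar(self, query: str, k: int = 3) -> list[str]:
        if not self.vectors:
            return []
        q = embed(query)
        sims = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))) for v in self.vectors]
        top = sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:k]
        return [self.texts[i] for i in top]

def choose_move(rules: str, board_state: str, store: LessonStore) -> str:
    # Retrieve lessons from past games that resemble the current position and
    # prepend them to the prompt, so the model "remembers" what it learned.
    lessons = "\n".join(store.most_similar(board_state))
    prompt = (f"Rules:\n{rules}\n\nLessons from past games:\n{lessons}\n\n"
              f"Current position:\n{board_state}\n\nYour move:")
    return query_llm(prompt)

# After each finished game you might store something like:
#   store.add("Giving up the corner squares early lost me the game; prefer the center.")
```

The model itself never changes; the prompts it sees just get steadily better informed, which is why it seems to improve from game to game.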
 
  • Like
Reactions: bit_user

punkncat

Polypheme
Ambassador
Of course they are going to play it down. Best to hide your strength when 'on the attack' as it were. My cat can't answer ANY questions, so we can see there is a bit of disinformation going on. I would assume the thought is that no one else is logical or smart enough to question them making such a ridiculous claim.

The other funny article I saw the other day said "AI isn't going to replace humans in the workplace"
REEAAALY?!? (it already has, ask an IT tech or phone tech agent)
 

Sam Hobbs

Distinguished
Jan 26, 2015
18
1
18,515
I really think that Brad Rutter and Ken Jennings, who lost against IBM Watson AI, would resent the implication that they are less smart than a cat. Probably Demis Hassabis was just referring to Google's AI. I am eager to see Google's DeepMind and some other AI system compete against Watson in a Jeopardy game. Or whichever two AI systems are capable of competing with Watson.
 
  • Like
Reactions: bit_user

NinoPino

Respectable
May 26, 2022
496
310
2,060
The other funny article I saw the other day said "AI isn't going to replace humans in the workplace"
REEAAALY?!? (it already has, ask an IT tech or phone tech agent)
Yes, it has replaced humans, but with good results? In my experience, no.
Can we call it a real replacement? IMHO, no.
What does that mean? A mountain of saved $$$.
Does this demonstrate intelligence? Yes, but on the part of human management.

We can extend this to all aspects of today's "AI".