Hallucination isn't "misremembering".
Misremembering something means you had a correct memory to begin with and simply retrieved the wrong one because you got confused.
I disagree. Your mind doesn't just record facts like a tape recorder. It encodes information much more efficiently, the way an LLM does. When information fits a familiar pattern, the particulars can get lost (this also happens with the passage of time). Then, upon recall, you fall back on the pattern to fill in whatever particulars are missing.
See also: en.wikipedia.org
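To make the "pattern over particulars" idea concrete, here's a minimal sketch of gist-based recall. Everything in it (the schema, the slots, the episode) is invented for illustration; it's not a model of how brains or LLMs actually store anything.

```python
# Toy model of gist-based memory: an episode is stored as a pattern
# ("gist") plus a few particulars; recall falls back on the pattern's
# defaults wherever a particular has decayed. Illustration only.

SCHEMAS = {
    "restaurant visit": {"paid": "by card", "greeted_by": "a host"},
}

def recall(memory):
    """Reconstruct an episode: use stored particulars where present,
    otherwise fill the slot from the gist's default."""
    gist = SCHEMAS[memory["gist"]]
    return {slot: memory["particulars"].get(slot, default)
            for slot, default in gist.items()}

# An episode where the payment detail has been lost to time.
episode = {"gist": "restaurant visit", "particulars": {"greeted_by": "Sam"}}
print(recall(episode))  # {'paid': 'by card', 'greeted_by': 'Sam'}
# "Paid by card" was never remembered -- it was reconstructed from the pattern.
```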
An LLM doesn't have a correct memory or answer, so it gives the statistically most probable next set of tokens instead.
Every prediction it makes is based on the patterns it has learned. Sometimes, it over-extrapolates a pattern, usually because it hasn't figured out the exception that should apply.
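Here's a toy illustration of both points: a bigram model that always emits the statistically most probable next token. The tokens and counts are invented, and real LLMs condition on far more context than one word, but the "most probable continuation" step is the same in spirit, and so is the failure mode: a pattern that fires where it shouldn't.

```python
from collections import Counter

# Toy bigram "language model": counts of which token follows which,
# from an invented corpus. Real models learn these statistics with
# neural networks over huge corpora; this is illustration only.
FOLLOWERS = {
    "capital":   Counter({"of": 9}),
    "of":        Counter({"France": 8, "Freedonia": 1}),
    "France":    Counter({"is": 7}),
    "Freedonia": Counter({"is": 1}),   # a fictional country, barely seen
    "is":        Counter({"Paris": 6, "Lyon": 1}),
}

def continue_from(token, limit=5):
    """Greedily emit the statistically most probable next token --
    whether or not the result is factually correct."""
    out = [token]
    while token in FOLLOWERS and len(out) < limit:
        token = FOLLOWERS[token].most_common(1)[0][0]
        out.append(token)
    return " ".join(out)

print(continue_from("capital"))    # capital of France is Paris
# Ask about the fictional Freedonia and the same pattern fires anyway:
print(continue_from("Freedonia"))  # Freedonia is Paris  <- over-extrapolation
```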
The only reason I am equating the two is that "AI" is supposedly modelled after it,
At a foundational level, it's built on neural networks, an abstraction of the machinery found in every creature that has neurons and the ability to learn (a minimal sketch of that building block follows below).
and those performance metrics that ML vendors use are designed to compare against human intelligence.
Not exactly. They're measuring its performance on cognitive tasks. That doesn't mean they think it's human-like or human-caliber.
The point of AI is to perform cognitive tasks of the sort that humans can perform, but that defy classical algorithmic solutions.
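As promised above, here's the building block in question: a single artificial neuron that sums weighted inputs and fires past a threshold. The weights below are hand-picked purely for illustration; in a real network they're learned from data.

```python
# A single artificial neuron: weighted sum of inputs, then a threshold.
# This abstraction is loosely inspired by biological neurons; it is not
# a faithful model of them.

def neuron(inputs, weights, bias):
    """Fire (1) if the weighted evidence clears the threshold, else 0."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# With these hand-picked weights the unit computes a logical AND;
# training algorithms find weights like these automatically.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights=[1.0, 1.0], bias=-1.5))
```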
I think that your bringing up animal intelligence and comparing against it is irrelevant.
You're free to disregard, but I think it's not irrelevant. Certain animals can perform limited cognitive tasks, like counting, problem solving, and basic reasoning, not to mention social skills like identifying nonconformity and theory-of-mind. This shows intelligence can occur in degrees. It's not all-or-nothing.
Hallucinations are a direct result of the absence of an exact (or statistically relevant) match in the training data.
Well, to solve a simple math problem, it doesn't need to have seen that exact problem. This is a key point: it can perform compositional reasoning, which blows a hole in your "statistical matching" argument.
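To see the difference, compare a pure exact-match lookup with a compositional solver. This is a deliberately crude sketch (not how a transformer implements arithmetic): the lookup table answers only problems it has literally seen, while composing single-digit sums with a carry rule handles arbitrary unseen problems.

```python
# Exact-match lookup vs. composing smaller learned pieces.
# Toy contrast only -- not how an LLM actually does arithmetic.

SEEN_PROBLEMS = {"2+2": 4, "3+5": 8}   # the "training data" for pure matching
DIGIT_SUMS = {(a, b): a + b for a in range(10) for b in range(10)}

def by_lookup(problem):
    """Pure 'statistical matching': answer only if this exact string was seen."""
    return SEEN_PROBLEMS.get(problem)

def by_composition(problem):
    """Add two numbers digit by digit, reusing single-digit facts plus a
    carry rule -- no need to have seen this exact problem before."""
    x, y = problem.split("+")
    width = max(len(x), len(y))
    carry, digits = 0, []
    for a, b in zip(reversed(x.zfill(width)), reversed(y.zfill(width))):
        carry, digit = divmod(DIGIT_SUMS[(int(a), int(b))] + carry, 10)
        digits.append(str(digit))
    if carry:
        digits.append(str(carry))
    return int("".join(reversed(digits)))

print(by_lookup("417+385"))       # None -- no exact match in training data
print(by_composition("417+385"))  # 802 -- composed from smaller learned facts
```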