The term of art you're reaching for is AGI (Artificial General Intelligence).
LLMs certainly
do fit any broad, classical definition of AI you'd care to use. The field of AI stretches back more than 70 years. For most of that time, AI researchers held the
Turing Test as one of the gold standards in AI research, and struggled to produce anything that could clear that hurdle. LLMs now leap over it routinely, with ease.
The key word in AI we shouldn't forget is the
artificial part. Artificial intelligences have different strengths and weaknesses than natural ones. So far, it's a little disheartening that AI has gotten better partly by emulating some of our cognitive weaknesses, even as it incorporates some of our cognitive strengths.
Speaking of which, humans hallucinate crap
all the time; we just don't call it that (BS is one name we use for it). Don't tell me you've never had a conversation with someone telling you something suspect, where you think they're either making it up as they go along or pitching their best guess as if it were actual knowledge. The biggest difference is just that we humans tend to know the limits of our own knowledge and can qualify our degree of certainty, if we're being diligent. LLMs haven't been taught how to do that, but I think they can be.