The plain definition of AI is right there in its name; the name itself already tells you what AI is.
Current "AIs" aren't aware.
They do not get us,
See, that's the problem. People latch onto the word "intelligence" and reject anything that seems in any way inferior to a human, even if it can far outpace humans in other areas. What they're thinking of is referred to as "Artificial General Intelligence".
The field of AI is not principally concerned with trying to clone human intelligence. One way to think about it: AI is the study of approaches to problems that defy straightforward, conventional algorithmic solutions.
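To make that concrete, here's a minimal sketch of the "learn from examples" approach. The particular library (scikit-learn), dataset, and model are just convenient choices for illustration; any similar setup would do. Nobody can write down explicit rules for recognizing handwritten digits, yet a model that simply learns from labeled examples handles it well:

```python
# Toy illustration: handwritten-digit recognition. There is no obvious
# step-by-step algorithm for "is this squiggle a 7?", but a model that
# learns from labeled examples does the job.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()  # 8x8 grayscale images of digits 0-9, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# No hand-coded rules: the classifier keeps the training examples and
# labels new images by similarity to the ones it has already seen.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)
print(f"accuracy: {model.score(X_test, y_test):.2%}")  # typically ~98%
```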
Consider this:
Think of wolves. They're social; they can strategize, work in teams, figure out puzzles, etc.
How about crows? They can observe and adapt to human behavior, recognize specific people, solve problems, etc.
What about octopuses?
Are these animals intelligent? I would say so. There are lots of examples of intelligence in the animal kingdom, even among social insects like bees, which can argue with each other, make collective decisions, and solve problems.
I would say intelligence is a more fluid concept than we might like to think. That makes it difficult to draw a clear, bright line between intelligent and not intelligent.
They just process based on specified viewpoints, keywords, etc.
That's not accurate. You'd do well to educate yourself more on how LLMs really work. If you're the sort of person who likes to understand how things work, you might even find it fascinating.
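Here's a rough sketch of what actually happens at each step, using GPT-2 through the Hugging Face transformers library as a stand-in for any LLM (the model choice is just an example). Given the full context, the model scores every token in its vocabulary and turns those scores into a probability distribution. There's no keyword lookup anywhere:

```python
# Rough sketch of a single LLM step: condition on the whole context,
# score every token in the vocabulary, and normalize into probabilities.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token
probs = torch.softmax(logits, dim=-1)       # distribution over ~50k tokens

# Show the top-ranked candidates for the next token.
values, indices = probs.topk(5)
for p, idx in zip(values, indices):
    print(f"{tokenizer.decode(int(idx))!r}  p={p.item():.3f}")
```

Notice that the output is a ranked distribution over the entire vocabulary, conditioned on everything that came before, which is a very different thing from matching keywords.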
I'd say current methods are just brute-forcing massive amounts of data.
We've had lots of data before. What we didn't have was the computational capability or the algorithmic techniques to build systems which model higher-order concepts. That's one of the truly remarkable things about LLMs: their prowess at knowledge representation.
They don't only operate at the word level; they model and operate on abstract concepts. It's even more fascinating that they learn these concepts without anyone having to explicitly specify them. It seems to be a natural byproduct of seeking representational efficiency, which is probably similar to what happens in human brains.
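You can see a shadow of this even in much simpler systems than LLMs. Here's a sketch using gensim and a set of pretrained GloVe word vectors; the specific vector set named below is just one convenient, downloadable choice. The vectors were trained only on word co-occurrence statistics, yet relations like gender and royalty fall out as directions in the vector space:

```python
# Sketch of concepts emerging from representational efficiency: GloVe
# vectors are trained only to predict word co-occurrence, yet abstract
# relations end up encoded as directions in the vector space.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pretrained vectors (~130 MB)

# Nobody told the model about royalty or gender, but
# vector("king") - vector("man") + vector("woman") lands near "queen".
for word, score in vectors.most_similar(
    positive=["king", "woman"], negative=["man"], topn=3
):
    print(f"{word}: {score:.3f}")
```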