I was making a more fundamental point than that. The notion that LLMs are "just statistics" is about as accurate as saying a wet brain is "just statistics", which is to say not at all. Even if an LLM's design takes liberties in how it approaches certain tasks, it's still neuromorphic in its fundamental nature. The basic mechanics of its connectivity, activation functions, and learning are all based on a highly abstracted model of how biological brains work.

They may* be modeled on how the human brain works to learn grammar and syntax. I'm not sure they're modeled on how the human brain works to learn content.
https://cognitiveworld.com/articles/large-language-models-a-cognitive-and-neuroscience-perspective
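To make concrete what "connectivity, activation functions, and learning" mean in that highly abstracted sense, here is a minimal sketch of a single artificial neuron in Python. The names and the learning rate are illustrative choices, not from any particular framework; this is the textbook abstraction, not a claim about how any specific LLM is built:

```python
import math

def sigmoid(x):
    # Nonlinear activation, loosely analogous to a neuron's firing response
    return 1.0 / (1.0 + math.exp(-x))

def neuron(weights, bias, inputs):
    # "Connectivity": each input is scaled by a learned weight and summed
    return sigmoid(sum(w * i for w, i in zip(weights, inputs)) + bias)

def learn_step(weights, bias, inputs, target, lr=0.5):
    # "Learning": nudge the weights to reduce error (gradient descent
    # on squared error, chained through the sigmoid)
    out = neuron(weights, bias, inputs)
    grad = (out - target) * out * (1.0 - out)
    new_weights = [w - lr * grad * i for w, i in zip(weights, inputs)]
    new_bias = bias - lr * grad
    return new_weights, new_bias

# Repeated exposure drives the output toward the target
weights, bias = [0.1, -0.2], 0.0
inputs, target = [1.0, 0.5], 1.0
for _ in range(1000):
    weights, bias = learn_step(weights, bias, inputs, target)
print(neuron(weights, bias, inputs))  # close to the target of 1.0
```

Real LLMs stack billions of such units with far more sophisticated architectures and training, but the biological analogy enters at exactly this level of abstraction: weighted connections, nonlinear responses, and error-driven weight updates.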
To me, this is somewhat akin to someone in the early industrial age showing that a wood-fired, steam-powered automobile is infeasible and concluding that horse-drawn buggies will never be made obsolete. It's fair to understand the capabilities and limitations of existing tech, but we shouldn't presume it represents the only path of development.

* And even that is still a questionable hypothesis in need of testing. And given the demands on LLMs, it's doubtful that the programmers are even attempting to model human language and idea development:
https://www.sciencedirect.com/science/article/pii/S1364661323002024
I think it would be nuts to argue that LLMs are the ultimate form of neuromorphic computing. They're just one branch of the field that showed unexpectedly promising results. We can anticipate new techniques and architectures arising in the coming years and decades. Maybe some of those will borrow more heavily from what's understood about human cognition, but an even more exciting prospect is that they could take a somewhat or substantially different course!