“The many failures of LLMs on non-linguistic tasks do not undermine them as good models of language processing,” wrote the authors in conclusion. “After all, the set of areas that support language processing in the human brain also cannot do math, solve logical problems, or even track the meaning of a story across multiple paragraphs.”
“Finally, to those who are looking to language models as a route to AGI, we suggest that, instead of or in addition to scaling up the size of the models, more promising solutions will come in the form of modular architectures — pre-specified or emergent — that, like the human brain, integrate language processing with additional systems that carry out perception, reasoning, and planning.”