The key point we’re making is the distinction between *formal competence* - the knowledge of linguistic rules and patterns - and *functional competence* - a set of skills required to use language in real-world situations. 2/

We ground this distinction in cognitive neuroscience.

Years of empirical work show that humans have specialized neural machinery for language processing (reading, listening, speaking, etc.), which is distinct from the brain mechanisms underlying other cognitive capacities (social reasoning, intuitive physics, logic and math…) 3/

Armed with the formal/functional distinction, we thoroughly review the NLP literature. We show that, on the one hand, LLMs are surprisingly good at *formal* linguistic competence, making significant progress at learning phonology, morphosyntax, and more. 4/

On the other hand, LLMs are still quite bad at most aspects of functional competence (math, reasoning, world knowledge) - especially when the task deviates from commonly occurring text patterns. 5/

We argue that the word-in-context prediction objective is not enough to master human thought (even though it’s surprisingly effective for learning much about language!).
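For readers unfamiliar with the objective being discussed: "word-in-context prediction" means the model is trained to assign high probability to each actual next word given the preceding context, and the loss is the average negative log-probability of those words. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import math

def next_word_loss(predicted_probs, targets):
    """Average negative log-probability the model assigns to each
    actual next word.

    predicted_probs: one dict per position, mapping word -> probability.
    targets: the actual next word at each position.
    """
    total = 0.0
    for probs, target in zip(predicted_probs, targets):
        # 1e-12 floor avoids log(0) for words the model ruled out.
        total += -math.log(probs.get(target, 1e-12))
    return total / len(targets)

# Toy example: a model predicting the continuation of "the cat ___".
probs = [{"sat": 0.6, "ran": 0.3, "quantum": 0.1}]
print(next_word_loss(probs, ["sat"]))      # low loss: plausible next word
print(next_word_loss(probs, ["quantum"]))  # higher loss: implausible one
```

Minimizing this loss over massive text corpora rewards capturing linguistic regularities, which is why it teaches so much about language - but nothing in the objective directly demands coherent reasoning.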

Instead, like human brains, models that strive to master both formal & functional language competence will benefit from modular components - either built-in explicitly or emerging through a careful combo of data+training objectives+architecture. 6/
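To make the "modular components" idea concrete, here is a deliberately crude sketch of explicit modularity: a router that sends well-formed arithmetic to a symbolic solver and everything else to a language module. The module names and routing rule are my own illustration, not an architecture proposed in the paper:

```python
import operator

# Hypothetical modules: a dedicated symbolic solver (functional
# competence) and a stand-in for an LLM (formal competence).
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def arithmetic_module(query):
    # Exact symbolic computation instead of pattern-matching over text.
    a, op, b = query.split()
    return str(OPS[op](int(a), int(b)))

def language_module(query):
    # Placeholder for an LLM handling ordinary linguistic tasks.
    return f"[LLM response to: {query!r}]"

def route(query):
    # Crude router: dispatch "NUM OP NUM" queries to the solver.
    parts = query.split()
    if (len(parts) == 3 and parts[1] in OPS
            and parts[0].lstrip("-").isdigit()
            and parts[2].lstrip("-").isdigit()):
        return arithmetic_module(query)
    return language_module(query)

print(route("17 * 23"))         # symbolic module: always exact
print(route("Describe a cat"))  # language module
```

The alternative the thread mentions - modularity *emerging* from data, objectives, and architecture rather than being wired in like this - would not have an explicit router at all.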

ChatGPT, with its combination of next-word prediction and RLHF objectives, might be a step in that direction (although it still can’t think imo). 7/

Similarly, criticisms directed at LLMs center on their inability to think (or do math, or maintain a coherent worldview) and sometimes overlook their impressive advances in language learning. We call this a “bad at thought = bad at language” fallacy. 9/

It’s been fun working with a brilliant team of coauthors - @kmahowald @ev_fedorenko @ibandlank @NancyKanwisher & Josh Tenenbaum

We’ve done a lot of work refining our views and revising our arguments every time a new big model came out. In the end, we still think a cogsci perspective is valuable - and hope you do too :) 10/10

P.S. Although we have >20 pages of references, we are likely missing stuff. If you think we don’t cover important work, pls comment below! We also under-cover certain topics (grounding, memory, etc) - if you think something doesn’t square with the formal/functional distinction, let us know.