Dissociating language and thought in large language models.

IF 16.7 | CAS Region 1 (Psychology) | JCR Q1 (Behavioral Sciences) | Trends in Cognitive Sciences | Pub Date: 2024-06-01 | Epub: 2024-03-19 | DOI: 10.1016/j.tics.2024.01.011
Kyle Mahowald, Anna A Ivanova, Idan A Blank, Nancy Kanwisher, Joshua B Tenenbaum, Evelina Fedorenko
Citations: 0

Abstract


Large language models (LLMs) have come closest among all models to date to mastering human language, yet opinions about their linguistic and cognitive capabilities remain split. Here, we evaluate LLMs using a distinction between formal linguistic competence (knowledge of linguistic rules and patterns) and functional linguistic competence (understanding and using language in the world). We ground this distinction in human neuroscience, which has shown that formal and functional competence rely on different neural mechanisms. Although LLMs are surprisingly good at formal competence, their performance on functional competence tasks remains spotty and often requires specialized fine-tuning and/or coupling with external modules. We posit that models that use language in human-like ways would need to master both of these competence types, which, in turn, could require the emergence of separate mechanisms specialized for formal versus functional linguistic competence.
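For import into a reference manager, the bibliographic details listed on this page (authors, title, journal, year, DOI) can be assembled into a BibTeX entry. The citation key is an arbitrary choice, and volume and page numbers are not given on this page, so they are omitted:

```bibtex
@article{mahowald2024dissociating,
  author  = {Mahowald, Kyle and Ivanova, Anna A. and Blank, Idan A. and Kanwisher, Nancy and Tenenbaum, Joshua B. and Fedorenko, Evelina},
  title   = {Dissociating language and thought in large language models},
  journal = {Trends in Cognitive Sciences},
  year    = {2024},
  doi     = {10.1016/j.tics.2024.01.011}
}
```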

Source journal: Trends in Cognitive Sciences (Medicine - Behavioral Sciences)
CiteScore: 27.90
Self-citation rate: 1.50%
Articles per year: 156
Review time: 6-12 weeks
Journal description: Essential reading for those working directly in the cognitive sciences or in related specialist areas, Trends in Cognitive Sciences provides an instant overview of current thinking for scientists, students and teachers who want to keep up with the latest developments in the cognitive sciences. The journal brings together research in psychology, artificial intelligence, linguistics, philosophy, computer science and neuroscience. Trends in Cognitive Sciences provides a platform for the interaction of these disciplines and the evolution of cognitive science as an independent field of study.
Latest articles from this journal:
A sequence bottleneck for animal intelligence and language?
Dynamic brain plasticity during the transition to motherhood.
Embracing variability in the search for biological mechanisms of psychiatric illness.
Leveraging cognitive neuroscience for making and breaking real-world habits.
New strategies for the cognitive science of dreaming.