The Limitations of Large Language Models for Understanding Human Language and Cognition.

Open Mind (Q1, Social Sciences) · Pub Date: 2024-08-31 · eCollection Date: 2024-01-01 · DOI: 10.1162/opmi_a_00160
Christine Cuskley, Rebecca Woods, Molly Flaherty
Journal: Open Mind, Volume 8, Pages 1058-1083. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11370970/pdf/
Citations: 0

Abstract


Researchers have recently argued that the capabilities of Large Language Models (LLMs) can provide new insights into longstanding debates about the role of learning and/or innateness in the development and evolution of human language. Here, we argue on two grounds that LLMs alone tell us very little about human language and cognition in terms of acquisition and evolution. First, any similarities between human language and the output of LLMs are purely functional. Borrowing the "four questions" framework from ethology, we argue that what LLMs do is superficially similar, but how they do it is not. In contrast to the rich multimodal data humans leverage in interactive language learning, LLMs rely on immersive exposure to vastly greater quantities of unimodal text data, with recent multimodal efforts built upon mappings between images and text. Second, turning to functional similarities between human language and LLM output, we show that human linguistic behavior is much broader. LLMs were designed to imitate the very specific behavior of human writing; while they do this impressively, the underlying mechanisms of these models limit their capacities for meaning and naturalistic interaction, and their potential for dealing with the diversity in human language. We conclude by emphasising that LLMs are not theories of language, but tools that may be used to study language, and that can only be effectively applied with specific hypotheses to motivate research.

Source journal: Open Mind (Social Sciences – Linguistics and Language)
CiteScore: 3.20
Self-citation rate: 0.00%
Articles per year: 15
Review time: 53 weeks
Latest articles in this journal:
Approximating Human-Level 3D Visual Inferences With Deep Neural Networks.
Prosodic Cues Support Inferences About the Question's Pedagogical Intent.
The Double Standard of Ownership.
Combination and Differentiation Theories of Categorization: A Comparison Using Participants' Categorization Descriptions.
Investigating Sensitivity to Shared Information and Personal Experience in Children's Use of Majority Information.