Large-scale benchmark yields no evidence that language model surprisal explains syntactic disambiguation difficulty

IF 2.9 · Tier 1 (Psychology) · Q1 (Linguistics) · Journal of Memory and Language · Pub Date: 2024-02-28 · DOI: 10.1016/j.jml.2024.104510
Kuan-Jung Huang , Suhas Arehalli , Mari Kugemoto , Christian Muxica , Grusha Prasad , Brian Dillon , Tal Linzen
Citations: 0

Abstract


Prediction has been proposed as an overarching principle that explains human information processing in language and beyond. To what degree can processing difficulty in syntactically complex sentences – one of the major concerns of psycholinguistics – be explained by predictability, as estimated using computational language models, and operationalized as surprisal (negative log probability)? A precise, quantitative test of this question requires a much larger scale data collection effort than has been done in the past. We present the Syntactic Ambiguity Processing Benchmark, a dataset of self-paced reading times from 2000 participants, who read a diverse set of complex English sentences. This dataset makes it possible to measure processing difficulty associated with individual syntactic constructions, and even individual sentences, precisely enough to rigorously test the predictions of computational models of language comprehension. By estimating the function that relates surprisal to reading times from filler items included in the experiment, we find that the predictions of language models with two different architectures sharply diverge from the empirical reading time data, dramatically underpredicting processing difficulty, failing to predict relative difficulty among different syntactically ambiguous constructions, and only partially explaining item-wise variability. These findings suggest that next-word prediction is most likely insufficient on its own to explain human syntactic processing.
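For readers unfamiliar with the two quantities the abstract relies on, here is a minimal sketch of (a) surprisal as negative log probability and (b) fitting a linking function from surprisal to reading time on filler items. All probabilities and reading times below are invented for illustration; the study itself derives probabilities from neural language models and reading times from the self-paced reading data, not from toy numbers like these.

```python
import math

def surprisal(p):
    """Surprisal of a word with conditional probability p, in bits: -log2(p)."""
    return -math.log2(p)

# Toy conditional probabilities P(next word | context) from a hypothetical model:
# in a garden-path sentence like "The horse raced past the barn fell",
# the disambiguating word ("fell") gets very low probability, hence high surprisal.
p_next = {"fell": 0.001, "quickly": 0.30}
for word, p in p_next.items():
    print(f"surprisal({word!r}) = {surprisal(p):.2f} bits")

# Linking function: fit reading_time = a + b * surprisal by ordinary least
# squares on (surprisal, RT-in-ms) pairs from filler items (numbers invented).
fillers = [(2.0, 310.0), (4.0, 325.0), (6.0, 344.0), (8.0, 360.0)]
n = len(fillers)
mean_s = sum(s for s, _ in fillers) / n
mean_t = sum(t for _, t in fillers) / n
b = sum((s - mean_s) * (t - mean_t) for s, t in fillers) \
    / sum((s - mean_s) ** 2 for s, _ in fillers)
a = mean_t - b * mean_s
print(f"RT \u2248 {a:.1f} + {b:.2f} * surprisal (ms)")

# The fitted line can then be applied to surprisal values in the critical
# (ambiguous) conditions and the predicted RTs compared to observed RTs;
# the paper's finding is that this comparison fails badly.
```

The key modeling choice this sketch illustrates: the slope of the linking function is estimated from fillers only, so the critical garden-path items provide an out-of-sample test of whether surprisal scales correctly with human difficulty.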

About the journal: Articles in the Journal of Memory and Language contribute to the formulation of scientific issues and theories in the areas of memory, language comprehension and production, and cognitive processes. Special emphasis is given to research articles that provide new theoretical insights based on a carefully laid empirical foundation. The journal generally favors articles that provide multiple experiments. In addition, significant theoretical papers without new experimental findings may be published. The Journal of Memory and Language is a valuable tool for cognitive scientists, including psychologists, linguists, and others interested in memory and learning, language, reading, and speech. Research areas include: topics that illuminate aspects of memory or language processing; linguistics; neuropsychology.