Artificial Neural Network Language Models Predict Human Brain Responses to Language Even After a Developmentally Realistic Amount of Training.

Neurobiology of Language (IF 3.6, Q1 Linguistics). Pub Date: 2024-04-01; eCollection Date: 2024-01-01. DOI: 10.1162/nol_a_00137
Eghbal A Hosseini, Martin Schrimpf, Yian Zhang, Samuel Bowman, Noga Zaslavsky, Evelina Fedorenko
{"title":"Artificial Neural Network Language Models Predict Human Brain Responses to Language Even After a Developmentally Realistic Amount of Training.","authors":"Eghbal A Hosseini, Martin Schrimpf, Yian Zhang, Samuel Bowman, Noga Zaslavsky, Evelina Fedorenko","doi":"10.1162/nol_a_00137","DOIUrl":null,"url":null,"abstract":"<p><p>Artificial neural networks have emerged as computationally plausible models of human language processing. A major criticism of these models is that the amount of training data they receive far exceeds that of humans during language learning. Here, we use two complementary approaches to ask how the models' ability to capture human fMRI responses to sentences is affected by the amount of training data. First, we evaluate GPT-2 models trained on 1 million, 10 million, 100 million, or 1 billion words against an fMRI benchmark. We consider the 100-million-word model to be developmentally plausible in terms of the amount of training data given that this amount is similar to what children are estimated to be exposed to during the first 10 years of life. Second, we test the performance of a GPT-2 model trained on a 9-billion-token dataset to reach state-of-the-art next-word prediction performance on the human benchmark at different stages during training. Across both approaches, we find that (i) the models trained on a developmentally plausible amount of data already achieve near-maximal performance in capturing fMRI responses to sentences. Further, (ii) lower perplexity-a measure of next-word prediction performance-is associated with stronger alignment with human data, suggesting that models that have received enough training to achieve sufficiently high next-word prediction performance also acquire representations of sentences that are predictive of human fMRI responses. In tandem, these findings establish that although <i>some</i> training is necessary for the models' predictive ability, a developmentally realistic amount of training (∼100 million words) may suffice.</p>","PeriodicalId":34845,"journal":{"name":"Neurobiology of Language","volume":null,"pages":null},"PeriodicalIF":3.6000,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11025646/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurobiology of Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1162/nol_a_00137","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"LINGUISTICS","Score":null,"Total":0}

Abstract

Artificial neural networks have emerged as computationally plausible models of human language processing. A major criticism of these models is that the amount of training data they receive far exceeds that of humans during language learning. Here, we use two complementary approaches to ask how the models' ability to capture human fMRI responses to sentences is affected by the amount of training data. First, we evaluate GPT-2 models trained on 1 million, 10 million, 100 million, or 1 billion words against an fMRI benchmark. We consider the 100-million-word model to be developmentally plausible in terms of the amount of training data, given that this amount is similar to what children are estimated to be exposed to during the first 10 years of life. Second, we take a GPT-2 model trained on a 9-billion-token dataset to reach state-of-the-art next-word prediction performance and test it on the human benchmark at different stages during training. Across both approaches, we find that (i) the models trained on a developmentally plausible amount of data already achieve near-maximal performance in capturing fMRI responses to sentences. Further, (ii) lower perplexity, a measure of next-word prediction performance, is associated with stronger alignment with human data, suggesting that models that have received enough training to achieve sufficiently high next-word prediction performance also acquire representations of sentences that are predictive of human fMRI responses. In tandem, these findings establish that although some training is necessary for the models' predictive ability, a developmentally realistic amount of training (∼100 million words) may suffice.
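The evaluation described here follows the standard encoding-model approach in this literature: sentence-level activations from a language model are linearly mapped onto fMRI responses, and alignment is scored as the cross-validated correlation between predicted and held-out responses. Below is a minimal sketch of that idea using synthetic data and a ridge regression; the array names, sizes, regression choice, and cross-validation settings are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch of a cross-validated linear "predictivity" score between
# model activations and fMRI responses. Sizes and the use of RidgeCV with
# 5-fold CV are illustrative assumptions, not the paper's exact setup.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_sentences, n_features, n_voxels = 200, 768, 50    # hypothetical sizes
X = rng.standard_normal((n_sentences, n_features))  # model activations (one layer)
Y = rng.standard_normal((n_sentences, n_voxels))    # fMRI responses (voxels/ROIs)

def linear_predictivity(X, Y, n_splits=5):
    """Mean cross-validated Pearson r between predicted and held-out responses."""
    fold_scores = []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        # Fit a regularized linear map from activations to responses on the train folds.
        reg = RidgeCV(alphas=np.logspace(-3, 3, 7)).fit(X[train], Y[train])
        Y_hat = reg.predict(X[test])
        # Correlate predictions with held-out responses, voxel by voxel.
        rs = [pearsonr(Y_hat[:, v], Y[test, v])[0] for v in range(Y.shape[1])]
        fold_scores.append(np.mean(rs))
    return float(np.mean(fold_scores))

print(f"cross-validated predictivity: {linear_predictivity(X, Y):.3f}")
```

Perplexity, the other quantity the abstract relates to brain alignment, is simply the exponential of the model's average per-token cross-entropy on held-out text, so lower perplexity corresponds to better next-word prediction.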

Source journal: Neurobiology of Language (Social Sciences, Linguistics and Language)
CiteScore: 5.90
Self-citation rate: 6.20%
Publications per year: 32
Review time: 17 weeks
Latest articles in this journal:
The Domain-Specific Neural Basis of Auditory Statistical Learning in 5-7-Year-Old Children.
A Comparison of Denoising Approaches for Spoken Word Production Related Artefacts in Continuous Multiband fMRI Data.
Neural Mechanisms of Learning and Consolidation of Morphologically Derived Words in a Novel Language: Evidence From Hebrew Speakers.
Cerebellar Atrophy and Language Processing in Chronic Left-Hemisphere Stroke.
Cortico-Cerebellar Monitoring of Speech Sequence Production.