Localizing Syntactic Composition with Left-Corner Recurrent Neural Network Grammars.

Neurobiology of Language (IF 3.6, Q1 Linguistics) · Pub Date: 2024-04-01 · eCollection Date: 2024-01-01 · DOI: 10.1162/nol_a_00118
Yushi Sugimoto, Ryo Yoshida, Hyeonjeong Jeong, Masatoshi Koizumi, Jonathan R Brennan, Yohei Oseki
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11025653/pdf/

Abstract

In computational neurolinguistics, hierarchical models such as recurrent neural network grammars (RNNGs), which jointly generate word sequences and their syntactic structures via syntactic composition, have been shown to explain human brain activity better than sequential models such as long short-term memory networks (LSTMs). However, the vanilla RNNG employs a top-down parsing strategy, which the psycholinguistics literature has argued is suboptimal, especially for head-final/left-branching languages; the left-corner parsing strategy has instead been proposed as a psychologically plausible alternative. In this article, building on this line of inquiry, we investigate not only whether hierarchical models like RNNGs explain human brain activity better than sequential models like LSTMs, but also which parsing strategy is more neurobiologically plausible, by developing a novel fMRI corpus in which participants read newspaper articles in a head-final/left-branching language, namely Japanese, in a naturalistic fMRI experiment. The results revealed that left-corner RNNGs outperformed both LSTMs and top-down RNNGs in the left inferior frontal and temporal-parietal regions, suggesting that certain brain regions localize syntactic composition under the left-corner parsing strategy.
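The contrast between the two parsing strategies can be made concrete with a minimal, illustrative sketch (not the authors' implementation): for a purely left-branching tree of the kind pervasive in head-final languages like Japanese, a top-down parser must predict every dominating nonterminal before seeing the first word, while a left-corner parser shifts the first word immediately and projects each nonterminal only after recognizing its left corner. The tree builder, action names (`NT`, `SHIFT`, `PROJECT`, `REDUCE`), and depth parameter below are assumptions chosen for illustration.

```python
def left_branching(depth):
    """Build a purely left-branching tree ((...(w1 w2) w3)... ) as
    nested tuples, all labeled 'X', with `depth` X-nodes."""
    tree = ("X", "w1", "w2")
    for i in range(3, depth + 2):
        tree = ("X", tree, f"w{i}")
    return tree

def top_down_actions(tree):
    """Top-down: predict NT(label) before descending into any child."""
    if isinstance(tree, str):
        return [f"SHIFT({tree})"]
    label, *children = tree
    acts = [f"NT({label})"]
    for c in children:
        acts += top_down_actions(c)
    return acts + ["REDUCE"]

def left_corner_actions(tree):
    """Left-corner: recognize the first child bottom-up, then project
    the parent and predict the remaining children."""
    if isinstance(tree, str):
        return [f"SHIFT({tree})"]
    label, first, *rest = tree
    acts = left_corner_actions(first) + [f"PROJECT({label})"]
    for c in rest:
        acts += left_corner_actions(c)
    return acts + ["REDUCE"]

def preds_before_first_word(actions):
    """Count structure-building actions taken before the first word."""
    n = 0
    for a in actions:
        if a.startswith("SHIFT"):
            return n
        n += 1
    return n

tree = left_branching(5)
print(preds_before_first_word(top_down_actions(tree)))     # grows with depth
print(preds_before_first_word(left_corner_actions(tree)))  # stays at 0
```

For a depth-5 left-branching tree, the top-down parser takes five `NT` actions before any word is shifted, and this number grows without bound as the tree deepens; the left-corner parser takes none. This is the memory asymmetry that motivates left-corner parsing as the psychologically plausible strategy for head-final/left-branching languages.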

Source journal: Neurobiology of Language (Social Sciences – Linguistics and Language)
CiteScore: 5.90 · Self-citation rate: 6.20% · Articles per year: 32 · Review time: 17 weeks