The effect of constituent frequency and distractor type on learning novel complex words
Pub Date: 2023-10-04 | DOI: 10.1080/23273798.2023.2263590
Elisabeth Beyersmann, Jonathan Grainger, Stéphane Dufau, Colas Fournet, Johannes C. Ziegler
ABSTRACT The present study explored the role of constituent frequency and distractor type in complex word learning. Skilled readers were trained to associate novel letter strings with one of two pictures, one picture serving as the target and the other as the distractor. A facilitatory effect of first-constituent frequency was found only in trials where distractors promoted first-constituent learning, and a facilitatory effect of second-constituent frequency only in trials where distractors promoted second-constituent learning, but not vice versa. Learning occurred in the absence of any pre-existing knowledge about the constituent morphemes and without any explicit reference to the constituents during learning. The results point to the important role of constituent frequency and distractor type in novel word learning and provide insights into the mechanisms involved in the implicit acquisition of morphological knowledge in adult learners, which we suspect to be a key aspect of language learning in general.
KEYWORDS: Novel word learning; constituent frequency; distractor type; morphological knowledge
Disclosure statement No potential conflict of interest was reported by the author(s).
Data availability statement Materials, data and analysis scripts have been made available under the following link: https://osf.io/r3cdf/?view_only=c9bd1f5142724e59878d14d5deae8cb0.
Funding This research was supported by the Center of Excellence on Language, Communication and the Brain (France2030, ANR-16-CONV-0002), the Excellence Initiative of Aix-Marseille University A*MIDEX (ANR-11-IDEX-0001-02), and the pilot center for teacher training and research in education (AMPIRIC). The research was directly funded through an ANR grant (MORPHEME ANR-15-FRAL-0003-01), with additional support from ERC grant 742141 awarded to JG. EB was supported by a FYSSEN Fellowship.
{"title":"The effect of constituent frequency and distractor type on learning novel complex words","authors":"Elisabeth Beyersmann, Jonathan Grainger, Stéphane Dufau, Colas Fournet, Johannes C. Ziegler","doi":"10.1080/23273798.2023.2263590","DOIUrl":"https://doi.org/10.1080/23273798.2023.2263590","url":null,"abstract":"ABSTRACTThe present study explored the role of constituent frequency and distractor type in complex word learning. Skilled readers were trained to associate novel letter strings with one out of two pictures, with one picture serving as the target, and the other as a distractor. A facilitatory effect of first-constituent frequency was found only in trials where distractors promoted first-constituent learning, and a facilitatory effect of second-constituent frequency only in trials where distractors promoted second-constituent learning, but not vice versa. Learning occurred in the absence of any pre-existing knowledge about the constituent morphemes and any explicit reference to the constituents during learning. The results point to the important role of constituent frequency and distractor type in novel word learning and provide insights into the mechanisms involved in the implicit acquisition of morphological knowledge in adult learners, that we suspect to be a key aspect of language learning in general.KEYWORDS: Novel word learningconstituent frequencydistractor typemorphological knowledge Disclosure statementNo potential conflict of interest was reported by the author(s).Data availability statementMaterials, data and analyses scripts have been made available under the following link: https://osf.io/r3cdf/?view_only = c9bd1f5142724e59878d14d5deae8cb0.Additional informationFundingThis research was supported by the center of excellence on Language, Communication and the Brain (France2030, ANR-16-CONV-0002), the Excellence Initiative of Aix-Marseille University A*MIDEX (ANR-11-IDEX-0001-02), and the pilot center for teacher training and research in education (AMPIRIC). The research was directly funded through an ANR grant (MORPHEME ANR-15-FRAL-0003-01) with additional support from ERC grant 742141 awarded to JG. EB was supported by a FYSSEN Fellowship.","PeriodicalId":48782,"journal":{"name":"Language Cognition and Neuroscience","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135596089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Event-related potentials and brain oscillations reflect unbalanced allocation of retrieval and integration efforts in sentence comprehension
Pub Date: 2023-09-30 | DOI: 10.1080/23273798.2023.2263582
Kunyu Xu, Chenlu Ma, Yiming Liu, Jeng-Ren Duann
ABSTRACT Empirical studies have found a processing asymmetry between Chinese subject-extracted relative clauses (SRCs) and object-extracted relative clauses (ORCs). Still, there is no consensus on how this SRC-ORC asymmetry occurs. Thus, to elucidate how neural activity, in the form of both event-related potentials (ERPs) and brain oscillations (i.e. event-related synchronisation/desynchronisation, ERS/ERD), is attuned to sentences with different levels of processing difficulty, we conducted an electroencephalography (EEG) study examining the comprehension of Chinese SRCs and ORCs. The results showed an N400 and a P600 effect when comparing SRCs and ORCs. Simultaneously, delta ERS was associated with the N400 during the processing of both types of relative clauses, and theta ERS with the P600 during the processing of SRCs. By incorporating the ERP and ERS indices, we propose that the dissociation between the integration and retrieval effort involved in sentence comprehension may account for the processing asymmetry between sentences.
KEYWORDS: Event-related potentials (ERPs); delta/theta synchronisation; memory retrieval; integration; sentence comprehension
Disclosure statement No potential conflict of interest was reported by the author(s).
Funding This work was supported by the Shanghai Municipal Education Commission and the Shanghai Educational Development Foundation [grant number WBH4307002].
{"title":"Event-related potentials and brain oscillations reflect unbalanced allocation of retrieval and integration efforts in sentence comprehension","authors":"Kunyu Xu, Chenlu Ma, Yiming Liu, Jeng-Ren Duann","doi":"10.1080/23273798.2023.2263582","DOIUrl":"https://doi.org/10.1080/23273798.2023.2263582","url":null,"abstract":"ABSTRACTEmpirical studies have found a processing asymmetry between Chinese subject-extracted relative clauses (SRCs) and object-extracted relative clauses (ORCs). Still, there is no consensus on how this SRC-ORC asymmetry occurs. Thus, aiming to elucidate how the neural activity, in the forms of both event-related potentials (ERPs) and brain oscillations (i.e. event-related synchronisation/desynchronisation, ERS/ERD), attuned to sentences with different levels of processing difficulty, we conducted an electroencephalography (EEG) study to examine the comprehension of Chinese SRCs and ORCs. The results showed an N400 and a P600 effect when comparing SRCs and ORCs. Simultaneously, delta ERS was associated with N400 during the processing of both types of relative clauses and theta ERS with P600 during the processing of SRCs. By incorporating the ERP and ERS indexes, we propose that the dissociation between the integration and retrieval effort involved in sentence comprehension may account for the processing asymmetry between sentences.KEYWORDS: Event-related potentials (ERPs)delta/theta synchronisationmemory retrievalintegrationsentence comprehension Disclosure statementNo potential conflict of interest was reported by the author(s).Additional informationFundingThis work was supported by Shanghai Municipal Education Commission and Shanghai Educational Development Foundation [grant number: WBH4307002].","PeriodicalId":48782,"journal":{"name":"Language Cognition and Neuroscience","volume":"160 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136280650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of Shared Attention on joint language production across processing stages
Pub Date: 2023-09-26 | DOI: 10.1080/23273798.2023.2260021
Giusy Cirillo, Kristof Strijkers, Elin Runnqvist, Cristina Baus
ABSTRACT Shared attention across individuals is a crucial component of joint activities, modulating how we perceive relevant information. In this study, we explored shared attention in language production and memory across separate representation levels. In a shared go/no-go task, pairs of participants responded to objects displayed on a screen: one participant reacted according to the animacy of the object (semantic task), while her partner reacted to the first letter/phoneme (phoneme-monitoring task). Objects could require a response from one participant, both participants, or neither. Only participants assigned to the phoneme-monitoring task were faster at responding to joint than to alone trials. However, results from a memory recall test showed that, for both partners, recall was more accurate for items to which the partner responded and for jointly responded items. Overall, our findings suggest that partners co-represent each other's language features even when they do not engage in the same task.
KEYWORDS: Shared attention; co-representation; joint memory effect; collective-prioritisation effect; language production
Acknowledgments We are grateful to Noel Nguyen for his advice and support. We are also grateful to Xavier Alario for his supervision during the first steps of the project.
Disclosure statement No potential conflict of interest was reported by the authors.
Funding This study received financial support from the Marie Curie Actions (FP7-PEOPLE 2014–2016 under REA agreement n°623845), from the Laboratoire Parole et Langage, and from the Excellence Initiative of Aix-Marseille University – A*MIDEX through the Institute of Language, Communication and the Brain. G.C. was supported by the Ecole Doctorale 356 of Aix-Marseille University. C.B. was supported by the Ramon y Cajal research program (RYC2018-026174-I). E.R. has benefited from support from the French government, managed by the French National Agency for Research (ANR), through a research grant (ANR-18-CE28-0013). K.S. was supported by a research grant of the ANR (ANR-18-FRAL-0013-01).
{"title":"Effects of Shared Attention on joint language production across processing stages","authors":"Giusy Cirillo, Kristof Strijkers, Elin Runnqvist, Cristina Baus","doi":"10.1080/23273798.2023.2260021","DOIUrl":"https://doi.org/10.1080/23273798.2023.2260021","url":null,"abstract":"ABSTRACTShared attention across individuals is a crucial component of joint activities, modulating how we perceive relevant information. In this study, we explored shared attention in language production and memory across separate representation levels. In a shared go/no-go task, pairs of participants responded to objects displayed on a screen: One participant reacted according to the animacy of the object (semantic task), while her partner reacted to the first letter/phoneme (phoneme-monitoring task). Objects could require a response from either one participant, both participants or nobody. Only participants assigned to the phoneme-monitoring task were faster at responding to the joint than to alone trials. However, results from a memory recall test showed that for both partners recall was more accurate for those items to which the partner responded and for jointly responded items. Overall, our findings suggest that partners co-represent each other’s language features even when they do not engage in the same task.KEYWORDS: Shared attentionco-representationjoint memory effectcollective-prioritisation effectlanguage production AcknowledgmentsWe are grateful to Noel Nguyen for his advices and his support. We are also grateful to Xavier Alario for his supervision during the first steps of the project.Disclosure statementNo potential conflict of interest was reported by the authors.Additional informationFundingThis study has received financial support from the Marie Curie Actions (FP7-PEOPLE 2014–2016 under REA agreement n°623845), from the Laboratoire Parole et Langage and from Excellence Initiative of Aix-Marseille University – A*MIDEX through the Institute of Language, Communication and the Brain. G.C. was supported by the Ecole Doctorale 356 of Aix-Marseille University. C.B. was supported by the Ramon y Cajal research program (RYC2018-026174-I). E.R. has benefited from support from the French government, managed by the French National Agency for Research (ANR) through a research grant (ANR-18-CE28-0013). K.S. was supported by a research grant of the ANR (ANR-18-FRAL-0013-01).","PeriodicalId":48782,"journal":{"name":"Language Cognition and Neuroscience","volume":"214 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134961000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Processing of acoustic and phonological information of lexical tones at pre-attentive and attentive stages
Pub Date: 2023-09-20 | DOI: 10.1080/23273798.2023.2260022
Yicheng Rong, Yi Weng, Gang Peng
ABSTRACT While Mismatch Negativity (MMN) and P300 have been found to correlate with the processing of acoustic and phonological information involved in speech perception, there is controversy surrounding how these two components index acoustic and/or phonological processing at pre-attentive and attentive stages. The current study employed both passive and active oddball paradigms to examine neural responses to lexical tones at the two stages in Cantonese speakers, using the paradigm of categorical perception (CP), where the between- and within-category deviants share the same acoustic distance from the standard but differ in the involvement of phonological information. We failed to observe a CP effect in the P300, which might indicate that this component does not necessarily index phonological processing, while the MMN does, as reflected by a greater MMN amplitude elicited by the between-category than by the within-category deviant. Nevertheless, phonological processing might be overridden by acoustic processing among participants who were sensitive to pitch.
KEYWORDS: Acoustic information; phonological information; mismatch negativity; P300; categorical tone perception
Disclosure statement No potential conflict of interest was reported by the author(s).
Funding This study was supported by a grant from the Research Grants Council of Hong Kong (GRF: 15610321).
{"title":"Processing of acoustic and phonological information of lexical tones at pre-attentive and attentive stages","authors":"Yicheng Rong, Yi Weng, Gang Peng","doi":"10.1080/23273798.2023.2260022","DOIUrl":"https://doi.org/10.1080/23273798.2023.2260022","url":null,"abstract":"ABSTRACTWhile Mismatch Negativity (MMN) and P300 have been found to correlate with the processing of acoustic and phonological information involved in speech perception, there is controversy surrounding how these two components index acoustic and/or phonological processing at pre-attentive and attentive stages. The current study employed both passive and active oddball paradigms to examine neural responses to lexical tones at the two stages in Cantonese speakers, using the paradigm of categorical perception (CP) where the between- and within-category deviants share the same acoustic distance from the standard but differ in the involvement of phonological information. We failed to observe a CP effect in P300, which might indicate that this component doesn’t necessarily index phonological processing, while MMN does, as reflected by the finding of a greater MMN amplitude elicited from the between-category than within-category deviant. Nevertheless, phonological processing might be overridden by acoustic processing among participants who were sensitive to pitch.KEYWORDS: Acoustic informationphonological informationmismatch negativityP300categorical tone perception Disclosure statementNo potential conflict of interest was reported by the author(s).Additional informationFundingThis study was supported by a grant from the Research Grants Council of Hong Kong (GRF: 15610321).","PeriodicalId":48782,"journal":{"name":"Language Cognition and Neuroscience","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136314282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cascaded processing develops by five years of age: evidence from adult and child picture naming
Pub Date: 2023-09-20 | DOI: 10.1080/23273798.2023.2258536
Margaret Kandel, Jesse Snedeker
ABSTRACT Although there is compelling evidence for cascading activation in adult lexical planning, there is little research on how and when cascaded processing develops. We use a picture naming task to compare word planning in adults and five-year-old children. We manipulated image codability (name agreement) and name frequency, factors that affect lexical selection and phonological encoding, respectively. These factors had qualitatively similar influences on naming response time in both populations, suggesting similar underlying planning processes. Critically, we found an under-additive interaction between codability and frequency such that the frequency effect was attenuated when name agreement was low. This interaction generalises across experiments and languages and can be simulated in a planning architecture in which phonological forms become activated before lexical selection is complete. These results provide evidence for cascaded processing at an earlier age than previous studies, suggesting that informational cascades are a fundamental property of the production architecture.
KEYWORDS: Language production; cascaded processing; word planning; name agreement; codability; frequency; language acquisition
Acknowledgements Thank you to Parker Robbins and Benazir Neree for their assistance with data collection and processing, as well as to Alfonso Caramazza and Joshua Cetron for sharing their thoughts on the project and analyses. We are additionally grateful to the anonymous reviewers of this article for their helpful comments.
Disclosure statement No potential conflict of interest was reported by the author(s).
Data availability statement Data and Supplementary Materials are available from https://osf.io/myrtg/.
Notes
1. Reconciling the mixed error effect with a serial model of lexical planning (e.g. Levelt et al., 1991) requires the assumption of a post-encoding editor (Baars et al., 1975; Butterworth, 1981; Kempen & Huijbers, 1983; Levelt, 1989).
2. It is important to note, however, that codability effects, while commonly attributed to co-activation at the lexical level, may not exclusively reflect an influence on lexical decision; name agreement may also influence processes prior to lexical decision such as conceptual access.
3. One exception we have found is an adult sentence production study by Spieler and Griffin (2006). Their experiment elicited sentences in the form The A and the B is above the C. The researchers manipulated the frequency (high, low) and codability (high, medium) of critical items that appeared in either the B or C position (the item in A always had high codability). They observed an interaction between the frequency and codability of the critical items on the latency between the onset of A and the onset of the critical item. This interaction is not in the direction we observe, however: they observed an over-additive effect of frequency for medium codable items compared to highly codable items (lat
{"title":"Cascaded processing develops by five years of age: evidence from adult and child picture naming","authors":"Margaret Kandel, Jesse Snedeker","doi":"10.1080/23273798.2023.2258536","DOIUrl":"https://doi.org/10.1080/23273798.2023.2258536","url":null,"abstract":"ABSTRACTAlthough there is compelling evidence for cascading activation in adult lexical planning, there is little research on how and when cascaded processing develops. We use a picture naming task to compare word planning in adults and five-year-old children. We manipulated image codability (name agreement) and name frequency, factors that affect lexical selection and phonological encoding, respectively. These factors had qualitatively similar influences on naming response time in both populations, suggesting similar underlying planning processes. Critically, we found an under-additive interaction between codability and frequency such that the frequency effect was attenuated when name agreement was low. This interaction generalises across experiments and languages and can be simulated in a planning architecture in which phonological forms become activated before lexical selection is complete. These results provide evidence for cascaded processing at an earlier age than previous studies, suggesting that informational cascades are a fundamental property of the production architecture.KEYWORDS: Language productioncascaded processingword planningname agreementcodabilityfrequencylanguage acquisition AcknowledgementsThank you to Parker Robbins and Benazir Neree for their assistance with data collection and processing as well as to Alfonso Caramazza and Joshua Cetron for sharing their thoughts on the project and analyses. We are additionally grateful to the anonymous reviewers of this article for their helpful comments.Disclosure statementNo potential conflict of interest was reported by the author(s).Data availability statementData and Supplementary Materials are available from https://osf.io/myrtg/.Notes1 Reconciling the mixed error effect with a serial model of lexical planning (e.g. Levelt et al., Citation1991) requires the assumption of a post-encoding editor (Baars et al., Citation1975; Butterworth, Citation1981; Kempen & Huijbers, Citation1983; Levelt, Citation1989).2 It is important to note, however, that codability effects, while commonly attributed to co-activation at the lexical level, may not exclusively reflect an influence on lexical decision; name agreement may also influence processes prior to lexical decision such as conceptual access.3 One exception we have found is an adult sentence production study by Spieler and Griffin (Citation2006). Their experiment elicited sentences in the form The A and the B is above the C. The researchers manipulated the frequency (high, low) and codability (high, medium) of critical items that appeared in either the B or C position (the item in A always had high codability). They observed an interaction between the frequency and codability of the critical items on the latency between the onset of A and the onset of the critical item. 
This interaction is not in the direction we observe, however: they observed an over-additive effect of frequency for medium codable items compared to highly codable items (lat","PeriodicalId":48782,"journal":{"name":"Language Cognition and Neuroscience","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136309070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
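The central statistical claim here is the under-additive codability × frequency interaction on naming latencies. As a hedged illustration of how such an interaction could be tested, the sketch below fits a mixed-effects model with statsmodels; the file name, column names, and the single by-subject random intercept are assumptions (a full analysis would normally include item effects as well), not the authors' reported analysis.

```python
# Minimal sketch, assuming a trial-level data frame with columns:
# rt (ms), codability ("high"/"low"), frequency ("high"/"low"), subject, item.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("naming_data.csv")            # hypothetical file name
df["log_rt"] = np.log(df["rt"])                # log latencies, a common choice

# Codability x frequency interaction with by-subject random intercepts only.
model = smf.mixedlm("log_rt ~ codability * frequency", data=df,
                    groups=df["subject"])
fit = model.fit()
print(fit.summary())
# An under-additive interaction shows up as a reduced frequency effect for
# low-codability (low name agreement) items relative to high-codability items.
```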
Morphosyntactic predictive processing in adult heritage speakers: effects of cue availability and spoken and written language experience
Pub Date: 2023-09-12 | DOI: 10.1080/23273798.2023.2254424
Figen Karaca, Susanne Brouwer, Sharon Unsworth, Falk Huettig
ABSTRACT We investigated the prediction skills of adult heritage speakers and the role of written and spoken language experience in predictive processing. Using visual world eye-tracking, we focused on the predictive use of case-marking cues in verb-medial and verb-final sentences in Turkish with adult Turkish heritage speakers (N = 25) and Turkish monolingual speakers (N = 24). Heritage speakers predicted in verb-medial sentences (when verb-semantic and case-marking cues were available), but not in verb-final sentences (when only case-marking cues were available), while monolinguals predicted in both. Prediction skills of heritage speakers were modulated by their spoken language experience in Turkish and their written language experience in both languages. Overall, these results strongly suggest that verb-semantic information is needed to scaffold the use of morphosyntactic cues for prediction in heritage speakers. The findings also support the notion that both spoken and written language experience play an important role in predictive spoken language processing.
{"title":"Morphosyntactic predictive processing in adult heritage speakers: effects of cue availability and spoken and written language experience","authors":"Figen Karaca, Susanne Brouwer, Sharon Unsworth, Falk Huettig","doi":"10.1080/23273798.2023.2254424","DOIUrl":"https://doi.org/10.1080/23273798.2023.2254424","url":null,"abstract":"ABSTRACT We investigated prediction skills of adult heritage speakers and the role of written and spoken language experience on predictive processing. Using visual world eye-tracking, we focused on predictive use of case-marking cues in verb-medial and verb-final sentences in Turkish with adult Turkish heritage speakers (N = 25) and Turkish monolingual speakers (N = 24). Heritage speakers predicted in verb-medial sentences (when verb-semantic and case-marking cues were available), but not in verb-final sentences (when only case-marking cues were available) while monolinguals predicted in both. Prediction skills of heritage speakers were modulated by their spoken language experience in Turkish and written language experience in both languages. Overall, these results strongly suggest that verb-semantic information is needed to scaffold the use of morphosyntactic cues for prediction in heritage speakers. The findings also support the notion that both spoken and written language experience play an important role in predictive spoken language processing.","PeriodicalId":48782,"journal":{"name":"Language Cognition and Neuroscience","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135884328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Meaning creation in novel noun-noun compounds: humans and language models
Pub Date: 2023-09-11 | DOI: 10.1080/23273798.2023.2254865
Phoebe Chen, David Poeppel, Arianna Zuanazzi
The interpretation of novel noun-noun compounds (NNCs, e.g. “devil salary”) requires the combination of nouns in the absence of syntactic cues, an interesting facet of complex meaning creation. Here we examine unconstrained interpretations of a large set of novel NNCs, to investigate how NNC constituents are combined into novel complex meanings. The data show that words’ lexical-semantic features (e.g. material, agentivity, imageability, semantic similarity) differentially contribute to the grammatical relations and the semantics of NNC interpretations. Further, we demonstrate that passive interpretations incur higher processing cost (longer interpretation times and more eye-movements) than active interpretations. Finally, we show that large language models (GPT-2, BERT, RoBERTa) can predict whether a NNC is interpretable by human participants and estimate differences in processing cost, but do not exhibit sensitivity to more subtle grammatical differences. The experiments illuminate how humans can use lexical-semantic features to interpret NNCs in the absence of explicit syntactic information.
{"title":"Meaning creation in novel noun-noun compounds: humans and language models","authors":"Phoebe Chen, David Poeppel, Arianna Zuanazzi","doi":"10.1080/23273798.2023.2254865","DOIUrl":"https://doi.org/10.1080/23273798.2023.2254865","url":null,"abstract":"The interpretation of novel noun-noun compounds (NNCs, e.g. “devil salary”) requires the combination of nouns in the absence of syntactic cues, an interesting facet of complex meaning creation. Here we examine unconstrained interpretations of a large set of novel NNCs, to investigate how NNC constituents are combined into novel complex meanings. The data show that words’ lexical-semantic features (e.g. material, agentivity, imageability, semantic similarity) differentially contribute to the grammatical relations and the semantics of NNC interpretations. Further, we demonstrate that passive interpretations incur higher processing cost (longer interpretation times and more eye-movements) than active interpretations. Finally, we show that large language models (GPT-2, BERT, RoBERTa) can predict whether a NNC is interpretable by human participants and estimate differences in processing cost, but do not exhibit sensitivity to more subtle grammatical differences. The experiments illuminate how humans can use lexical-semantic features to interpret NNCs in the absence of explicit syntactic information.","PeriodicalId":48782,"journal":{"name":"Language Cognition and Neuroscience","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135982534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast structural priming of grammatical decisions during reading
Pub Date: 2023-09-02 | DOI: 10.1080/23273798.2023.2254425
Colas Fournet, Jonathan Mirault, Mathieu Declerck, Jonathan Grainger
ABSTRACT In two grammatical decision experiments, we used fast-priming as a novel method for uncovering the syntactic processes involved in written sentence comprehension while limiting the influence of strategic processes. Targets were sequences of four words that could be grammatically correct or not. Targets (e.g. they see the moon) were preceded by the brief (170 ms) presentation of one of four types of prime: (1) same syntactic structure / same verb (you see a friend); (2) same structure / different verb (she writes a book); (3) different structure / same verb (he sees him now); or (4) different structure / different verb (stay in our hotel). Same-structure primes facilitated decisions to grammatical targets, as reflected in error rates, and this effect did not significantly interact with the facilitatory effect of a shared verb. These results provide evidence for structural priming during sentence reading under conditions that greatly limit any role for strategic processing.
{"title":"Fast structural priming of grammatical decisions during reading","authors":"Colas Fournet, Jonathan Mirault, Mathieu Declerck, Jonathan Grainger","doi":"10.1080/23273798.2023.2254425","DOIUrl":"https://doi.org/10.1080/23273798.2023.2254425","url":null,"abstract":"ABSTRACT In two grammatical decision experiments, we used fast-priming as a novel method for uncovering the syntactic processes involved in written sentence comprehension while limiting the influence of strategic processes. Targets were sequences of four words that could be grammatically correct or not. Targets (e.g. they see the moon) were preceded by the brief (170 ms) presentation of four types of prime: (1) same syntactic structure / same verb (you see a friend); (2) same structure / different verb (she writes a book); (3) different structure / same verb (he sees him now); or (4) different structure / different verb (stay in our hotel). Same structure primes facilitated decisions to grammatical targets in error rates, and this effect did not significantly interact with the facilitatory effect of a shared verb. These results provide evidence for structural priming of sentence reading in conditions that greatly limit any role for strategic processing.","PeriodicalId":48782,"journal":{"name":"Language Cognition and Neuroscience","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2023-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48993402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Incremental sentence processing is guided by a preference for agents: EEG evidence from Basque
Pub Date: 2023-08-30 | DOI: 10.1080/23273798.2023.2250023
Arrate Isasi-Isasmendi, Sebastian Sauppe, Caroline Andrews, I. Laka, Martin Meyer, B. Bickel
ABSTRACT Comprehenders across languages tend to interpret role-ambiguous arguments as the subject or the agent of a sentence during parsing. However, the evidence for such a subject/agent preference rests on the comprehension of transitive, active-voice sentences where agents/subjects canonically precede patients/objects. The evidence is thus potentially confounded by the canonical order of arguments. Transitive sentence stimuli additionally conflate the semantic agent role and the syntactic subject function. We resolve these two confounds in an experiment on the comprehension of intransitive sentences in Basque. When exposed to sentence-initial role-ambiguous arguments, comprehenders preferentially interpreted these as agents and had to revise their interpretation when the verb disambiguated to patient-initial readings. The revision was reflected in an N400 component in ERPs and a decrease in power in the alpha and lower beta bands. This finding suggests that sentence processing is guided by a top-down heuristic to interpret ambiguous arguments as agents, independently of word order and independently of transitivity.
{"title":"Incremental sentence processing is guided by a preference for agents: EEG evidence from Basque","authors":"Arrate Isasi-Isasmendi, Sebastian Sauppe, Caroline Andrews, I. Laka, Martin Meyer, B. Bickel","doi":"10.1080/23273798.2023.2250023","DOIUrl":"https://doi.org/10.1080/23273798.2023.2250023","url":null,"abstract":"ABSTRACT Comprehenders across languages tend to interpret role-ambiguous arguments as the subject or the agent of a sentence during parsing. However, the evidence for such a subject/agent preference rests on the comprehension of transitive, active-voice sentences where agents/subjects canonically precede patients/objects. The evidence is thus potentially confounded by the canonical order of arguments. Transitive sentence stimuli additionally conflate the semantic agent role and the syntactic subject function. We resolve these two confounds in an experiment on the comprehension of intransitive sentences in Basque. When exposed to sentence-initial role-ambiguous arguments, comprehenders preferentially interpreted these as agents and had to revise their interpretation when the verb disambiguated to patient-initial readings. The revision was reflected in an N400 component in ERPs and a decrease in power in the alpha and lower beta bands. This finding suggests that sentence processing is guided by a top-down heuristic to interpret ambiguous arguments as agents, independently of word order and independently of transitivity.","PeriodicalId":48782,"journal":{"name":"Language Cognition and Neuroscience","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2023-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43196826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of referential structure on pronoun interpretation
Pub Date: 2023-08-24 | DOI: 10.1080/23273798.2023.2250481
Jina Song, E. Kaiser
ABSTRACT Pronoun interpretation is guided by various factors. While most previously-investigated factors involve properties occurring before the pronoun, less attention has been paid to properties of the pronoun-containing clause. We investigate whether pronoun interpretation is influenced by the referential structure of the pronoun-containing clause (i.e. whether another referent from the preceding clause is mentioned), which contributes to discourse coherence. We report three experiments showing referential structure effects: whether subject-position pronouns are ultimately interpreted as referring to the preceding subject or object depends on whether the clause contains another pronoun (e.g. she called Lisa vs. she called her). More specifically, subject-position pronouns exhibit a stronger object preference when only one of the prior antecedents is mentioned, compared to when both are mentioned. We show that this effect is separate from effects of verb semantics and cannot be reduced to semantic or syntactic parallelism effects. Implications for models of pronoun resolution are discussed.
{"title":"Effects of referential structure on pronoun interpretation","authors":"Jina Song, E. Kaiser","doi":"10.1080/23273798.2023.2250481","DOIUrl":"https://doi.org/10.1080/23273798.2023.2250481","url":null,"abstract":"ABSTRACT Pronoun interpretation is guided by various factors. While most previously-investigated factors involve properties occurring before the pronoun, less attention has been paid to properties of the pronoun-containing clause. We investigate whether pronoun interpretation is influenced by the referential structure of the pronoun-containing clause (i.e. whether another referent from the preceding clause is mentioned), which contributes to discourse coherence. We report three experiments showing referential structure effects: whether subject-position pronouns are ultimately interpreted as referring to the preceding subject or object depends on whether the clause contains another pronoun (e.g. she called Lisa vs. she called her). More specifically, subject-position pronouns exhibit a stronger object preference when only one of the prior antecedents is mentioned, compared to when both are mentioned. We show that this effect is separate from effects of verb semantics and cannot be reduced to semantic or syntactic parallelism effects. Implications for models of pronoun resolution are discussed.","PeriodicalId":48782,"journal":{"name":"Language Cognition and Neuroscience","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2023-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49616253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}