Pub Date: 2025-12-19. DOI: 10.1016/j.bandl.2025.105685
Yasuaki Shinohara , Valerie L. Shafer
Neural discriminative responses index acoustic–phonetic and phonological differences. This study examined how contextual complexity modulates neural discrimination of speech sounds. The neural discrimination of Japanese /ma/ and /na/ was examined in a single-standard versus multi-standard oddball paradigm. In each paradigm, there were within-phoneme and cross-phoneme conditions. The results demonstrated that the single-standard cross-phoneme condition (single-standard [ma] vs. deviant [na]) elicited the largest mismatch negativity (MMN), followed by the single-standard within-phoneme condition (single-standard [na] vs. deviant [na]), and then the multi-standard cross-phoneme condition (multi-standard [ma] vs. deviant [na]). The multi-standard cross-phoneme condition elicited a late discriminative negativity (LDN), unlike the single-standard cross-phoneme condition. The later timing of the effect in the multi-standard condition suggests that task influences processing at the level of the MMN and LDN. Future studies are needed to further determine how the magnitude of stimulus variation, such as in speaker voice, influences phonological processing.
Title: Neural indices of phonological and acoustic–phonetic perception (Brain and Language, Vol. 273, Article 105685).
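The oddball logic behind the MMN reported above (average deviant epochs, average standard epochs, take the difference wave, measure its mean amplitude in a post-stimulus window) can be sketched numerically. Everything below — sampling rate, window bounds, the injected effect size — is invented for illustration and is not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
sfreq = 250                                   # Hz, illustrative sampling rate
times = np.arange(-0.1, 0.5, 1 / sfreq)       # epoch from -100 to 500 ms

# Synthetic single-trial epochs (trials x samples): deviants carry an
# extra negativity around 100-250 ms, mimicking an MMN.
standard = rng.normal(0, 1, (200, times.size))
deviant = rng.normal(0, 1, (60, times.size))
mmn_window = (times >= 0.10) & (times <= 0.25)
deviant[:, mmn_window] -= 2.0                 # injected effect, arbitrary units

# Difference wave: deviant ERP minus standard ERP.
diff_wave = deviant.mean(axis=0) - standard.mean(axis=0)

# MMN amplitude: mean of the difference wave in the analysis window;
# a more negative value indexes stronger neural discrimination.
mmn_amplitude = diff_wave[mmn_window].mean()
print(round(mmn_amplitude, 2))
```

Condition differences like those reported (single- vs. multi-standard) would then be tested on `mmn_amplitude` across participants.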
Pub Date: 2025-12-17. DOI: 10.1016/j.bandl.2025.105689
Amir Hossein Ghooch Kanloo , Magdalena Kachlicka , Kazuya Saito , Adam Tierney
There are large differences across individuals in the ability to perceive foreign-accented speech, and the sources of this variability are poorly understood. Here we tested the hypothesis that individual differences in auditory processing help drive variability in accented speech perception. We asked L1 English speakers to perceive prosodic features in Mandarin-accented English. Individuals who could precisely discriminate pitch and accurately remember melodic sequences, and who placed more emphasis on pitch information during prosodic categorization, were better able to perceive Mandarin-accented speech. Individuals with more musical training also demonstrated enhanced Mandarin-accented speech perception. Finally, we found that better Mandarin-accented speech perception was linked to more robust neural encoding of speech harmonics. These findings suggest that the precision of sound perception and robustness of memory for sound sequences are major factors driving variability in accented speech perception, and so auditory training could potentially help remediate poor perception of accented speech.
Title: Individual differences in perception of prosody in Mandarin-accented speech are linked to pitch perception, melody memory, musical training, and neural encoding of sound (Brain and Language, Vol. 273, Article 105689).
Pub Date: 2025-12-15. DOI: 10.1016/j.bandl.2025.105686
Li Sheng , Anita Mei-Yin Wong
Title: Editorial: Developmental Language Disorder in Chinese (Brain and Language, Vol. 273, Article 105686).
Pub Date: 2025-12-12. DOI: 10.1016/j.bandl.2025.105687
Emily M. Akers , Katherine J. Midgley , Phillip J. Holcomb , Karen Emmorey
Previous ERP studies have demonstrated that hearing learners of American Sign Language (ASL) show sensitivity to sign iconicity (a resemblance between form and meaning) prior to learning any signs. Highly iconic (transparent) signs elicited greater negativity in the N400 window than non-iconic signs when participants performed a task that did not require semantic processing (detect an occasional grooming gesture). Greater negativity was interpreted as evidence that participants implicitly recognized the meaning of the iconic signs. Here we investigated how this neural response changes after learning. For comparison, we included a group of fluent deaf signers who performed the same task. Results revealed that the N400 to iconic signs became less negative after learning, indicating that these signs had been integrated into an emerging lexicon. In contrast, the N400 to non-iconic signs became more negative after learning, indicating more effortful processing compared to the iconic signs. For deaf signers, iconic signs elicited a larger N400 than non-iconic signs, which we interpret as a task effect whereby the highly iconic signs were seen as similar to the grooming gestures because both are enactments of actions (e.g., drinking from a cup; rubbing the eyes). In order to accurately perform the gesture detection task, deaf signers may have engaged in greater semantic processing of the iconic than non-iconic signs, which led to a larger N400 response. Overall, we conclude that iconicity modulates the neural response to signs in different ways before and after learning and that for deaf signers, iconicity effects are task dependent.
Title: The neural response to highly iconic signs in hearing learners and deaf signers (Brain and Language, Vol. 273, Article 105687).
Pub Date: 2025-12-11. DOI: 10.1016/j.bandl.2025.105675
Sara Farshchi, Carita Paradis
To examine whether processing of negated meanings is facilitated in highly predictable contexts and proceeds incrementally rather than with a delay, we asked participants to read highly constraining sentences containing negated adjectives (e.g., awake/responding/commercial) that were either strongly expected in a high-cloze condition (awake), weakly expected in a low-cloze condition (responding), or contextually inappropriate in a semantic violation condition (commercial). In accordance with findings for affirmative sentences, a smaller N400 was elicited for the high-cloze condition than for the low-cloze one, and for the low-cloze condition than the violation condition. The smaller N400 for the high-cloze condition suggests facilitated processing for strongly expected continuations. Furthermore, in the post-N400 time-windows, two distinct post-N400 positivities (PNPs) were elicited for weakly expected and unexpected continuations compared to strongly expected continuations. Firstly, a larger anterior PNP was observed for weakly expected, but plausible, continuations in the low-cloze condition, suggesting inhibitory processes suppressing initial predictions to allow for the integration of the new information. Secondly, a larger posterior PNP was observed for unexpected and implausible continuations in the violation condition, indexing contextual integration difficulties. Together, these findings suggest that negation can be processed incrementally in highly constraining contexts where predictions can be made, engaging similar neural mechanisms as predictive processing in affirmative sentences in such contexts. In sum, our results are consistent with previous ERP research on prediction processing in both affirmative and negated contexts but inconsistent with previous research using behavioral methods.
Title: Predictability effects in the processing of negation: an ERP study (Brain and Language, Vol. 273, Article 105675).
Pub Date: 2025-12-10. DOI: 10.1016/j.bandl.2025.105688
Jeroen J. Stekelenburg, Martijn Baart, Jean Vroomen
We investigated how the horizontal viewing angle of a speaking face influences audiovisual (AV) speech integration at the behavioral and neural level. In Experiment 1, seventeen participants identified consonant–vowel syllables (/faː/, /feː/, /foː/, /paː/, /peː/, /poː/, /taː/, /teː/, /toː/) presented in audiovisual, visual-only, and auditory-only conditions across four head orientations (frontal, oblique, profile, side-back at 0°, 45°, 90°, and 100°, respectively), in quiet and white-noise (–14 dB SNR) masking conditions. Audiovisual gain from lipread information (AV minus A-only accuracy) declined with increasing head rotation for speech-in-noise, but only for /p/, while accuracy in the visual-only condition followed a similar trend. Experiment 2 measured electroencephalography (EEG) in twenty-eight participants for four syllables (/faː/, /foː/, /paː/, /poː/) in quiet, examining N1/P2 event-related potential (ERP) components for audiovisual, auditory-only, and visual-only stimulus presentations. Peak-amplitude and cluster-based analyses revealed that the well-documented N1 suppression by visual speech information (AV – V < A) was maximal at oblique (45°) head rotations, and significantly reduced at profile and side-back angles, whereas P2 suppression remained constant across all angles. N1 and P2 latencies were consistently shorter for AV – V than A-only conditions at all angles.
Title: A behavioral and electrophysiological investigation of the effect of horizontal head viewing angle on audiovisual speech integration (Brain and Language, Vol. 273, Article 105688).
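The (AV – V) vs. A comparison that defines N1 suppression is a simple subtraction of condition ERPs followed by a peak measurement. The sketch below shows that logic on synthetic data; amplitudes, latencies, and noise levels are invented for illustration and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
sfreq = 500                                   # Hz, illustrative
times = np.arange(-0.1, 0.4, 1 / sfreq)

def erp(n1_amp, n_trials=100):
    """Synthetic ERP: a Gaussian N1 deflection near 100 ms plus trial noise."""
    n1 = n1_amp * np.exp(-((times - 0.1) ** 2) / (2 * 0.015 ** 2))
    trials = n1 + rng.normal(0, 1, (n_trials, times.size))
    return trials.mean(axis=0)

# Illustrative condition ERPs: visual speech attenuates the auditory N1,
# so the AV response is smaller than the auditory-only response.
erp_a = erp(-5.0)    # auditory-only
erp_av = erp(-3.5)   # audiovisual (suppressed N1)
erp_v = erp(-0.5)    # visual-only

# The paradigm's comparison: (AV - V) vs. A. Suppression means the N1 peak
# of AV - V is smaller (less negative) than that of A alone.
av_minus_v = erp_av - erp_v
n1_win = (times >= 0.07) & (times <= 0.13)
peak_avv = av_minus_v[n1_win].min()
peak_a = erp_a[n1_win].min()
suppression = peak_a - peak_avv               # negative -> AV - V is attenuated
print(round(suppression, 2))
```

In the study, this suppression measure would be computed per head orientation to test how it changes from frontal to side-back views.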
Pub Date: 2025-12-05. DOI: 10.1016/j.bandl.2025.105673
Rebecca Holt , Carmen Kung , Elaine Schmidt , Katherine Demuth
Listeners take multiple sources of information into account when processing spoken language. This includes the speaker’s accent, which affects the on-line processing of many aspects of language, including morphosyntax. This study investigated listeners’ neural responses to subject-verb agreement errors in native vs. Mandarin-accented English. The error types differed in typicality: Errors of omission, but not errors of commission, are frequently produced by Mandarin-accented English speakers. Different error types elicited different neural responses in native vs. foreign-accented speech. Errors of omission elicited a P600 in native speech and no response in foreign-accented speech, while errors of commission elicited an N400 in native speech and a sustained negativity, beginning before the overt violation, in foreign-accented speech. This illustrates the influence of speaker accent on morphosyntactic processing and suggests that, while listeners are sensitive to error typicality, factors such as the perceptual salience of the violation may also affect neural responses.
Title: Rapid integration of speaker accent during morphosyntactic processing (Brain and Language, Vol. 273, Article 105673).
Pub Date: 2025-12-04. DOI: 10.1016/j.bandl.2025.105676
Xueping Hu , Yiming Yang , Lijun Wang , Na Hu , Zhenzhen Xu , Yanqing Wang , Lili Ming , Antao Chen
Language familiarity significantly modulates speaker discrimination, with listeners demonstrating superior performance for familiar versus unfamiliar languages. To address the confound of speech content acoustics in prior research, this fMRI study investigated the cognitive and neural mechanisms underlying this effect using identical speech content across three languages of varying listener proficiency. Behaviorally, familiar-language discrimination elicited faster responses and higher accuracy. Neurally, bilateral insula and anterior cingulate cortex activation during different-speaker trials intensified as language familiarity decreased. The left rostrolateral prefrontal cortex was specifically engaged for unfamiliar voice discrimination and was sensitive to speaker identity. Conjunction and representational similarity analyses across languages revealed engagement of voice-selective, cognitive control, and semantic processing regions, with high cross-language neural representational similarity specifically localized to the right dorsolateral prefrontal cortex and left inferior frontal gyrus. These findings provide evidence, under speech-content-matched conditions, that speaker discrimination relies on both acoustic voice-processing regions and cognitive control networks implementing top-down modulation, whose intensity is inversely proportional to language familiarity.
Title: The influence of language familiarity on voice discrimination: Divergent and shared neural mechanisms (Brain and Language, Vol. 273, Article 105676).
Pub Date: 2025-11-29. DOI: 10.1016/j.bandl.2025.105674
Annie C. Gilbert , Claire T. Honda , Louis Friedland-Yust , Cassandra Sorin , Shari R. Baum
Learning to process the prosody of a second language can be challenging, particularly when the languages present different prosodic structures, as is the case for English and French. Although previous studies suggested that French listeners are unable to process lexical stress, more recent work suggests that they can, although they might assign different weights to F0 and duration as stress cues compared to native listeners. To determine whether this is the case, forty-two English-French bilinguals participated in two experiments investigating the impact of individual differences in language experience on the weighting of F0 and duration in lexical stress perception. Interestingly, participants’ language experience predicted the weight assigned to F0 and duration as cues to lexical stress in the behavioral task of Experiment 1, but not the event-related potentials of Experiment 2.
Title: The influence of individual differences in language experience on lexical stress cue-weighting: native and non-native listeners (Brain and Language, Vol. 273, Article 105674).
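Cue weighting of the kind estimated in such studies is commonly quantified as logistic-regression coefficients on the two acoustic cues: the larger a cue's fitted coefficient, the more weight the listener assigns it when judging stress. The sketch below simulates an F0-dominant listener and recovers the weights with a hand-rolled gradient-descent fit; all numbers are invented and none come from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stress-judgment data: each trial has an F0 cue and a duration
# cue (z-scored); this simulated listener relies mostly on F0.
n = 2000
f0 = rng.normal(0, 1, n)
dur = rng.normal(0, 1, n)
logit = 2.0 * f0 + 0.5 * dur                       # true cue weights
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Fit logistic regression by gradient descent on the mean negative
# log-likelihood; the fitted coefficients are the estimated cue weights.
X = np.column_stack([f0, dur])
w = np.zeros(2)
for _ in range(3000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - y)) / n                 # gradient step

print(w.round(2))                                  # [F0 weight, duration weight]
```

Comparing such fitted weights across listeners (or regressing them on language-experience measures) is how cue-weighting differences are typically tested.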
Pub Date: 2025-11-27. DOI: 10.1016/j.bandl.2025.105667
Weiqi Wang , Lin Chen , Charles Perfetti
High word predictability facilitates the access and integration of word meaning, indicated by a reduced N400 effect. However, whether word predictability affects later phases of reading comprehension (mental model updating), as assessed by a late positivity, remains unsettled. Some studies suggest that unexpected but plausible words increase positivity in frontal regions, while others do not. To gain clarity on this issue, we used two-sentence passages from The New York Times articles that do not contain implausible words. Using a language model with a transformer architecture, we assessed the predictability (surprisal) of each word in these texts on a continuous scale. Linear mixed-effects modeling of the EEG dataset (Chen et al., 2025) showed that higher word predictability reduced N400 in central-parietal regions, whereas lower predictability increased late positivity in frontal regions. These findings suggest that word predictability has a graded effect on late frontal positivity, reflecting mental model updating.
Title: The late frontal positivity reflects incremental mental model updating: Graded predictability effects during authentic text reading (Brain and Language, Vol. 272, Article 105667).
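The surprisal predictor used in analyses like this one is just the negative log probability a language model assigns each word in its context. A real analysis would read these probabilities off a transformer (e.g., via per-token scores from a causal language model); the toy conditional probabilities below are invented purely to show the arithmetic.

```python
import math

# Toy p(word | context) values -- invented numbers standing in for a
# transformer LM's next-token probabilities.
p_next = {
    ("the", "stock"): 0.15,
    ("stock", "market"): 0.40,
    ("market", "plummeted"): 0.02,
}

def surprisal_bits(p):
    """Surprisal in bits: -log2 p. Less predictable words -> higher surprisal."""
    return -math.log2(p)

for (context, word), p in p_next.items():
    print(f"{word!r} after {context!r}: {surprisal_bits(p):.2f} bits")
```

These continuous per-word surprisal values would then enter the linear mixed-effects model as a graded predictor of N400 and late-positivity amplitude.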