Mild Cognitive Impairment (MCI) represents a transitional stage between normal aging and dementia and is associated with an increased risk of progression to Alzheimer's disease. Conventional cognitive screening tools provide limited sensitivity for detecting subtle language impairments that may emerge in the earliest phases of neurodegeneration. This study aimed to evaluate the discriminative validity of the Turkish adaptation of the Detection Test for Language Impairments in Adults and the Aged (DTLA-Tr) in identifying language deficits in individuals with MCI. The sample comprised 110 participants, including 55 individuals with MCI and 55 age-, education-, and gender-matched healthy controls. All participants completed the Montreal Cognitive Assessment Turkish version (MoCA-Tr), Boston Naming Test Turkish Version (BNT-Tr), and DTLA-Tr following a fixed administration order. Group differences were analyzed using non-parametric tests and mixed-effects modelling. Discriminative performance of the DTLA-Tr Total Score was evaluated using ROC curve analysis. Individuals with MCI demonstrated significantly lower performance across multiple DTLA-Tr subtests, particularly in Repetition, Verbal Fluency, Alpha Span, Reading, and Semantic Matching. The DTLA-Tr Total Score showed fair discriminative accuracy for MCI (AUC = .69). The optimal cut-off (≤82) yielded a sensitivity of .44 and specificity of .85, indicating stronger specificity than sensitivity. The findings suggest that DTLA-Tr is a culturally appropriate and clinically useful tool for detecting language-related cognitive decline in MCI. Although its sensitivity remains modest, its multidimensional structure captures linguistic impairment.
Linguistic vulnerabilities in mild cognitive impairment: Evidence from the DTLA-Tr screening battery. Samet Tosun, Fenise Selin Karalı, Elif İkbal Eskioğlu, Nilgün Çınar, Joël Macoir. Cortex, 198, 182-190. Pub Date: 2026-03-12. DOI: 10.1016/j.cortex.2026.03.007
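As a purely illustrative sketch of the cut-off arithmetic reported above (a "positive if score ≤ cutoff" decision rule, as used for the DTLA-Tr Total Score), the following computes sensitivity and specificity from labeled scores. The data here are invented, not the study's:

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity/specificity for a 'positive if score <= cutoff' rule.

    labels: 1 = clinical group (e.g., MCI), 0 = healthy control.
    Toy helper for illustration; not the authors' analysis code.
    """
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s <= cutoff)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s > cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s > cutoff)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s <= cutoff)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```

Sweeping the cutoff over all observed scores and plotting sensitivity against (1 − specificity) traces the ROC curve from which the reported AUC is computed.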
Pub Date: 2026-03-11. DOI: 10.1016/j.cortex.2026.03.005
Ilker Duymaz, Naoki Kogo, Nihan Alp
Periodic changes in visual input elicit rhythmic patterns in EEG signals that manifest as narrowband frequency components. These components are typically interpreted as signatures of neural populations sensitive to the modulated stimulus feature. We propose an alternative scenario in which such frequency components arise primarily from retinotopic variations in signal strength, without requiring feature-selective neural mechanisms. Using both simulated and empirical data (Experiment 1: N = 13; Experiment 2: N = 13), we demonstrate that signal fluctuations driven solely by the retinotopic position of a position-modulated stimulus can generate identifiable frequency components. These components are more plausibly attributed to structural properties of cortical organization that shape the relative contribution of different retinotopic areas to the EEG signal. Our findings challenge the conventional assumption that stimulus-related frequency components necessarily reflect feature-specific neural computations, indicating instead that functional interpretations are not guaranteed when spatiotemporal regularities in the stimulus introduce systematic population-level variability.
Origin of neural frequency responses: Sensory coding versus structural influences. Cortex, 198, 137-154.
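The alternative scenario described above can be sketched in a few lines of NumPy: a stimulus oscillates in retinotopic position, and the simulated "EEG" is just a position-dependent gain applied to a constant response. No unit is tuned to the modulated feature, yet the spectrum shows narrowband components at the modulation frequency and its harmonics. All parameters (gain profile, modulation frequency) are invented for illustration:

```python
import numpy as np

fs, dur, f_mod = 500.0, 10.0, 2.0          # sampling rate (Hz), duration (s), position modulation (Hz)
t = np.arange(0, dur, 1 / fs)
pos = np.sin(2 * np.pi * f_mod * t)         # stimulus position over time (arbitrary units)

# Hypothetical retinotopic gain: signal strength varies with position only.
# The linear term produces a component at f_mod; the quadratic term at 2*f_mod.
signal = 1.0 + 0.5 * pos + 0.3 * pos ** 2

amp = np.abs(np.fft.rfft(signal)) / len(signal)   # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(signal), 1 / fs)      # 0.1 Hz resolution for a 10 s window
```

With a 10 s window, the 2 Hz component falls at bin 20 and the 4 Hz harmonic at bin 40; both stand out against near-zero neighboring bins despite the absence of any feature-selective mechanism.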
A milestone of the human language faculty is the ability to communicate with syntactically complex sentence structures. Although various representations of syntactic complexity have been studied in language comprehension, a clear framework that systematically organizes syntactic complexity into different levels, and the corresponding neural substrates during sentence production, remain largely unknown. This functional magnetic resonance imaging (fMRI) study therefore investigated the neural substrates of syntactic processing during production across a well-organized hierarchy of syntactic complexity: Sentence versus Word list, Complex sentence versus Simple sentence, and Subject relative clause versus Object relative clause. Thirty healthy adult native speakers of Cantonese performed a picture-description task during scanning. Behavioral results showed that producing sentences with higher syntactic complexity was associated with greater processing difficulty (i.e., lower accuracy and longer reaction times). Results of the brain activation analysis, peak intensity analysis, and effective connectivity modeling converged on a critical fronto-temporal syntactic network, including the left inferior frontal gyrus (IFG), middle frontal gyrus (MFG), and posterior temporal lobe (PostTemp), in managing the increased demands of syntactic structure building. Moreover, the left MFG was identified as adapting to the increased difficulty of syntactic structure building during sentence production. Taken together, this study specified a fronto-temporal syntactic network in sentence production by establishing a well-controlled syntactic complexity hierarchy, and thus sheds further light on the neural underpinnings of the remarkably complex human language faculty.
Syntactic complexity representation in sentence production reveals a fronto-temporal syntactic network. Keyi Kang, Mingchuan Yang, Yimin Cai, Luyao Chen, Haoyun Zhang. Cortex, 198, 162-181. Pub Date: 2026-03-10. DOI: 10.1016/j.cortex.2026.03.004
Pub Date: 2026-03-07. DOI: 10.1016/j.cortex.2026.03.001
Jana Tomastikova, Edward H Silson
Mental imagery and visual perception can both give rise to vivid visual experiences, yet the extent to which they can functionally influence each other remains an open question. Previous research has shown that imagining a stimulus before viewing a rivalrous display can bias perception towards the imagined content. However, this effect has been demonstrated primarily with simple, low-level stimuli such as oriented gratings. Here, we investigated whether imagery of more complex representations (people and buildings) can influence perception, using the binocular rivalry paradigm. Participants in our study imagined either a personally familiar person or a personally familiar building before viewing a rivalrous face-house stimulus. We measured their perceptual dominance and imagery vividness on each trial. Overall imagery ability was assessed using the Vividness of Visual Imagery Questionnaire (VVIQ). We found that participants were significantly more likely to perceive the imagined stimulus; however, this priming effect was driven by person imagery. Greater vividness of person imagery on a given trial significantly increased dominance of the face stimulus, but this effect did not extend to building imagery and the house stimulus. Furthermore, the VVIQ did not predict individual differences in priming magnitude. These results extend previous work by showing that mental imagery can influence perception beyond simple stimuli, but that this functional link is shaped by stimulus-specific features.
Our findings highlight the need for future research to examine the conditions under which imagining more complex representations affects seeing.

From imagining to seeing: The influence of visual mental imagery of people and buildings on perception during binocular rivalry. Cortex, 198, 191-207.
Pub Date: 2026-03-07. DOI: 10.1016/j.cortex.2026.03.003
Ivan Patané, Anna Berti, Giuseppe di Pellegrino, Alessandro Farnè
From bodies to spaces: A neurocognitive/neuropsychological perspective on body-space interactions. Cortex, 198, 155-161.
Pub Date: 2026-03-07. DOI: 10.1016/j.cortex.2026.03.002
V C Peviani, L N Pfeifer, S A M Geurts, G Risso, M Bassolino, L E Miller
There is evidence that the sensorimotor system builds fine-grained spatial maps of the limbs based on somatosensory signals. Can a hand-held tool be mapped in space with comparable spatial resolution? Do spatial maps change following tool use? To address these questions, we used a spatial mapping task in healthy participants to measure the accuracy and precision of spatial estimates for several locations on their arm and on a hand-held tool. To assess spatial accuracy, we first fitted linear regressions with real location as the predictor and estimated location as the dependent variable. The slopes, representing estimation accuracy, were compared between arm and tool, and from before to after tool use. We further investigated changes induced by tool use in the variable error associated with the spatial estimates, representing their precision. We found that the spatial maps for the arm and the tool were comparably accurate, suggesting that holding the tool provides the sensorimotor system with enough information to map it in space. While we did not observe changes in the accuracy of spatial maps following tool use, we did observe changes in their spatial precision. Although these effects were absent in a control experiment without tool use, the direct comparison between the two conditions did not yield significant differences, suggesting that the observed precision changes may be driven by non-specific factors. In all, our results suggest that tool users can build up a map of tool space that is comparable to that of body space.
Spatial maps of the arm and tool: Accuracy, precision, and the effect of tool use. Cortex, 198, 127-136.
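The slope and variable-error analysis described above amounts to an ordinary least-squares fit of estimated on real location, plus the spread of the residuals. The following is an illustrative reconstruction under that reading, not the authors' code:

```python
def slope_and_variable_error(real, estimated):
    """OLS slope of estimated location on real location (accuracy),
    and residual SD around the fit (precision / variable error).

    Illustrative sketch: a slope near 1 indicates accurate mapping;
    a larger residual SD indicates less precise estimates.
    """
    n = len(real)
    mx = sum(real) / n
    my = sum(estimated) / n
    sxx = sum((x - mx) ** 2 for x in real)
    sxy = sum((x - mx) * (y - my) for x, y in zip(real, estimated))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [y - (intercept + slope * x) for x, y in zip(real, estimated)]
    variable_error = (sum(r ** 2 for r in resid) / (n - 2)) ** 0.5
    return slope, variable_error
```

For a participant whose estimates track the true locations with only a constant offset, the slope is 1 and the variable error is 0; compression of the map toward a central location would show up as a slope below 1.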
Pub Date: 2026-03-03. DOI: 10.1016/j.cortex.2026.02.017
Laura Giglio, Peter Hagoort, Eleanor Huizeling
Disfluencies in speech frequently occur before longer and more complex speech content. Listeners are thought to use the distribution of disfluencies to inform their predictions during speech comprehension. Here, we investigated whether the presence of disfluencies also affects word processing in naturalistic listening conditions. Participants (n = 36) listened to a spoken recall of the events of a television series while undergoing fMRI. We modelled word processing effort using parametric modulations for word length, frequency, entropy, and surprisal, as well as the presence/absence of a disfluency. To investigate the effects of disfluencies on word processing, we tested the interactions between disfluency and frequency, and between disfluency and surprisal. Words preceded by a disfluency were associated with increased activity in the left and right superior temporal gyrus (STG). Lower word frequency was associated with increased activity in the left mid STG. Increased word surprisal elicited a similar distribution of activity, with bilateral superior temporal activation. The effect of surprisal was reduced after a disfluency in a cluster in the left posterior temporal lobe, while the effect of frequency increased following disfluencies in the left STG and the left inferior frontal cortex. Therefore, the presence of a disfluency affects the response to upcoming input, suggesting that it prepares the listener for higher complexity in the upcoming speech, potentially by allocating increased attentional resources that facilitate integration in context.
Disfluencies reduce the effect of uh … word surprisal during narrative comprehension. Cortex, 199, 20-35.
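Testing a disfluency-by-surprisal interaction in this kind of word-level GLM typically means adding a product regressor to the design matrix alongside the parametric modulators. A hypothetical sketch of that construction (column layout and z-scoring are my assumptions, not the authors' specification):

```python
import numpy as np

def design_matrix(surprisal, frequency, disfluent):
    """Word-level design matrix with disfluency x predictor interactions.

    surprisal, frequency: per-word parametric modulators (z-scored below,
    a common convention for parametric regressors).
    disfluent: 0/1 indicator per word (preceded by a disfluency or not).
    Columns: intercept, surprisal, frequency, disfluency,
             disfluency*surprisal, disfluency*frequency.
    """
    z = lambda v: (v - v.mean()) / v.std()
    s = z(np.asarray(surprisal, dtype=float))
    f = z(np.asarray(frequency, dtype=float))
    d = np.asarray(disfluent, dtype=float)
    return np.column_stack([np.ones_like(s), s, f, d, d * s, d * f])
```

The coefficients on the last two columns then capture how much the surprisal and frequency effects change for words following a disfluency, which is the contrast the abstract reports.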
Pub Date: 2026-03-03. DOI: 10.1016/j.cortex.2026.02.016
Katie Ekström, Neil Cohn, Emily L Coderre
While models of discourse comprehension describe the process of structure building during mental model construction, neurophysiological explorations of this process are limited. Here, we use time-frequency analysis of EEG data to explore the spectral power dynamics associated with the narrative comprehension of comics. Using an existing dataset wherein 22 participants viewed sets of six sequential comic panels, we performed spectral decomposition from theta to gamma bands over the full extent of narrative processing (10+ seconds). Power incrementally decreased in both alpha (8-12 Hz) and low beta (12.5-20 Hz) frequency bands as narratives unfolded. These results are contextualized in the literature, where some suggest that alpha and low beta frequency bands act as mechanisms of suppression and enhancement to modulate attention. Study findings are consistent with changes in alpha and low beta power reflecting domain-general processes of narrative structure building during discourse comprehension.
Neural dynamics of narrative structure-building in visual stories. Cortex, 199, 1-19.
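Band-limited power of the kind tracked above (alpha 8-12 Hz, low beta 12.5-20 Hz) can be estimated in its simplest form from the FFT amplitude spectrum. A minimal sketch (real analyses typically use wavelet or multitaper time-frequency decompositions rather than a single whole-signal FFT):

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean squared one-sided spectral amplitude within [lo, hi] Hz.

    Crude whole-signal estimate for illustration; a time-resolved
    analysis would compute this in sliding windows.
    """
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    amp = np.abs(np.fft.rfft(signal)) / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(np.mean(amp[mask] ** 2))
```

For a pure 10 Hz oscillation, virtually all the power falls in the alpha band and essentially none in low beta, which is the kind of band separation the incremental power decreases above rely on.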
Pub Date: 2026-03-01 (Epub 2026-01-13). DOI: 10.1016/j.cortex.2025.12.009
Kelsey L. Frewin, Ross E. Vanderwert, Chiara Gambi, Louis Renoult, Sarah A. Gerson
When do infants first begin grasping the meaning of verbs? To learn verbs – words that describe actions and events – theorists suggest that infants must employ word segmentation, event processing, and verb-to-action mapping skills. Prior research suggests that many of these skills emerge by approximately 10 months. In the current study, we examined whether 10-month-old infants understand several early verbs. In a novel action-verb pairing paradigm, infants saw videos of everyday actions while hearing matching or mismatching verbs. We tested adults on the same paradigm to verify that action-verb pairs reliably evoked an N400 mismatch effect. Adults showed an N400-like effect over frontal and centroparietal regions. Infants also showed ERP differences between mismatched and matched action-verb pairs, although the pattern differed from adults, with variation in topography and directionality. Infants’ ERP response was not related to their receptive or productive vocabulary size. These findings indicate that infants were sensitive to co-occurrences between actions and verbs, reflecting emerging verb understanding and suggesting nascent semantic knowledge. We further consider alternative explanations, including the possibility that the observed ERP differences reflect early action-verb associations that may serve as building blocks for later semantic verb knowledge. These results expand our understanding of infant language acquisition by demonstrating that, by 10 months, infants are sensitive to mismatches between everyday actions and verbs.
Electrophysiological evidence of infants’ understanding of verbs. Cortex, 196, 41-60.
Pub Date: 2026-03-01 | Epub Date: 2026-01-23 | DOI: 10.1016/j.cortex.2025.12.012
Theresa Paulus , Astrid Prochnow , Julia Jaworski , Lars-Michael Schöpper , Christian Beste , Christian Frings , Alexander Münchau , Julius Verrel , Tobias Bäumer
Distractor-response binding (DRB) has been widely studied to understand the interplay between perception and motor processes, with DRB effects referring to performance costs or benefits that arise when previously co-occurring distractors and responses are retrieved together. We hypothesize that musical training and musical perception skills modulate flexibility in reconfiguring auditory perception–action associations; this has not yet been investigated in the context of DRB. Here, we use an auditory DRB paradigm with concomitant EEG recordings to investigate how auditory-motor bindings are established and retrieved, and how they might differ between harmonic and inharmonic sounds. Using a healthy sample of participants (N = 42) with a wide range of musical training, we also investigated whether these processes are modulated by musical perception skills, assessed using the well-established Micro-PROMS (Profile of Music Perception Skills).
Behavioral and EEG results indicated significant DRB effects for both harmonic and inharmonic distractor sound combinations. These effects were modulated by harmonicity: behavioral DRB effects were stronger, and DRB effects in theta band activity weaker, for inharmonic than for harmonic distractor stimuli. Beamformer analysis localized the theta band effect to the right superior temporal cortex, highlighting the role of this brain area in auditory-motor integration. Further, this study provides evidence that participants with better musical perception skills and higher cumulative practice time show increased flexibility in handling perception–action associations. Together, these findings enhance the understanding of how auditory stimuli interact with motor actions, particularly in relation to individual differences in musical perception skills.
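In prime-probe designs, a behavioral DRB effect is commonly quantified as the interaction of response relation (repeated vs changed) with distractor relation (repeated vs changed) on probe reaction times: a repeated distractor retrieves the prime response, which helps when the response repeats and hurts when it changes. The sketch below shows that standard interaction score under hypothetical condition labels and RTs; it is not the authors' analysis.

```python
def drb_effect(rt):
    """Distractor-response binding effect from mean probe RTs (ms).

    rt: dict keyed by (response_relation, distractor_relation), where
    each relation is 'rep' (repeated from prime) or 'chg' (changed).
    Positive values indicate binding: distractor repetition slows
    response changes and/or speeds response repetitions.
    """
    cost = rt[('chg', 'rep')] - rt[('chg', 'chg')]      # retrieval of wrong response
    benefit = rt[('rep', 'chg')] - rt[('rep', 'rep')]   # retrieval of correct response
    return cost + benefit
```

The sum of cost and benefit is algebraically the 2 × 2 interaction contrast; studies in this literature typically report it per participant before group-level testing.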
Title: Auditory-motor distractor-response binding is modulated by harmonicity of stimuli and acoustical discrimination skills. Cortex, Vol. 196, pp. 155–170.