Salient, Unexpected Omissions of Sounds Can Involuntarily Distract Attention.
Valeria Baragona, Erich Schröger, Andreas Widmann
doi: 10.1162/jocn_a_02307

Salient, unexpected, and task-irrelevant sounds can act as distractors by capturing attention away from a task. Consequently, a performance impairment (e.g., prolonged RTs) is typically observed, along with a pupil dilation response (PDR) and the P3a ERP component. Previous results showed that RTs to task-relevant visual stimuli were also prolonged following unexpected sound omissions. However, it was unclear whether this was due to the absence of the sound's warning effect or to distraction caused by the violation of a sensory prediction. In our paradigm, participants initiated a trial through a button press that elicited either a regular sound (80%), a deviant sound (10%), or no sound (10%). Thereafter, a digit was presented visually, and the participant had to classify it as even or odd. To dissociate warning and distraction effects, we additionally included a control condition in which a button press never generated a sound, and therefore no sound was expected. Results show that, compared with expected events, unexpected deviants and omissions led to prolonged RTs (distraction effect), an enlarged PDR, and a P3a-like ERP effect. Moreover, sound events, compared with no-sound events, yielded faster RTs (warning effect), a larger PDR, and an increased P3a. Overall, we observed a co-occurrence of warning and distraction effects. This suggests that not only unexpected sounds but also unexpected sound omissions can act as salient distractors. This finding supports theories claiming that involuntary attention is based on prediction violation.
{"title":"Salient, Unexpected Omissions of Sounds Can Involuntarily Distract Attention.","authors":"Valeria Baragona, Erich Schröger, Andreas Widmann","doi":"10.1162/jocn_a_02307","DOIUrl":"https://doi.org/10.1162/jocn_a_02307","url":null,"abstract":"<p><p>Salient unexpected and task-irrelevant sounds can act as distractors by capturing attention away from a task. Consequently, a performance impairment (e.g., prolonged RTs) is typically observed along with a pupil dilation response (PDR) and the P3a ERP component. Previous results showed prolonged RTs in response to task-relevant visual stimuli also following unexpected sound omissions. However, it was unclear whether this was due to the absence of the sound's warning effect or to distraction caused by the violation of a sensory prediction. In our paradigm, participants initiated a trial through a button press that elicited either a regular sound (80%), a deviant sound (10%), or no sound (10%). Thereafter, a digit was presented visually, and the participant had to classify it as even or odd. To dissociate warning and distraction effects, we additionally included a control condition in which a button press never generated a sound, and therefore no sound was expected. Results show that, compared with expected events, unexpected deviants and omissions lead to prolonged RTs (distraction effect), enlarged PDR, and a P3a-like ERP effect. Moreover, sound events, compared with no sound events, yielded faster RTs (warning effect), larger PDR, and increased P3a. Overall, we observed a co-occurrence of warning and distraction effects. This suggests that not only unexpected sounds but also unexpected sound omissions can act as salient distractors. This finding supports theories claiming that involuntary attention is based on prediction violation.</p>","PeriodicalId":51081,"journal":{"name":"Journal of Cognitive Neuroscience","volume":" ","pages":"1-16"},"PeriodicalIF":3.1,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143371354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Temporal Dynamics and Representational Consequences of the Control of Processing Conflict between Visual Working Memory and Visual Perception.
Chunyue Teng, Jacqueline M Fulvio, Mattia Pietrelli, Jiefeng Jiang, Bradley R Postle
doi: 10.1162/jocn_a_02310
Visual working memory (WM) extensively interacts with visual perception. When information between the two processes is in conflict, cognitive control can be recruited to effectively mitigate the resultant interference. The current study investigated the neural bases of the control of conflict between visual WM and visual perception. We recorded the EEG from 25 human participants (13 male) performing a dual task combining visual WM and tilt discrimination, the latter occurring during the WM delay. The congruity in orientation between the memorandum and the discriminandum was manipulated. Behavioral data were fitted to a reinforcement-learning model of cognitive control to derive trial-wise estimates of demand for preparatory and reflexive control, which were then used for EEG analyses. The level of preparatory control was associated with sustained frontal-midline theta activity preceding trial onset, as well as with the strength of the neural representation of the memorandum. Subsequently, discriminandum onset triggered a control prediction error signal that was reflected in a left frontal positivity. On trials when an incongruent discriminandum was not expected, reflexive control that scaled with the prediction error acted to suppress the neural representation of the discriminandum, producing below-baseline decoding of the discriminandum that, in turn, exerted a repulsive serial bias on WM recall on the subsequent trial. These results illustrate the flexible recruitment of two modes of control and how their dynamic interplay acts to mitigate interference between simultaneously processed perceptual and mnemonic representations.
{"title":"Temporal Dynamics and Representational Consequences of the Control of Processing Conflict between Visual Working Memory and Visual Perception.","authors":"Chunyue Teng, Jacqueline M Fulvio, Mattia Pietrelli, Jiefeng Jiang, Bradley R Postle","doi":"10.1162/jocn_a_02310","DOIUrl":"https://doi.org/10.1162/jocn_a_02310","url":null,"abstract":"<p><p>Visual working memory (WM) extensively interacts with visual perception. When information between the two processes is in conflict, cognitive control can be recruited to effectively mitigate the resultant interference. The current study investigated the neural bases of the control of conflict between visual WM and visual perception. We recorded the EEG from 25 human participants (13 male) performing a dual task combining visual WM and tilt discrimination, the latter occurring during the WM delay. The congruity in orientation between the memorandum and the discriminandum was manipulated. Behavioral data were fitted to a reinforcement-learning model of cognitive control to derive trial-wise estimates of demand for preparatory and reflexive control, which were then used for EEG analyses. The level of preparatory control was associated with sustained frontal-midline theta activity preceding trial onset, as well as with the strength of the neural representation of the memorandum. Subsequently, discriminandum onset triggered a control prediction error signal that was reflected in a left frontal positivity. On trials when an incongruent discriminandum was not expected, reflexive control that scaled with the prediction error acted to suppress the neural representation of the discriminandum, producing below-baseline decoding of the discriminandum that, in turn, exerted a repulsive serial bias on WM recall on the subsequent trial. These results illustrate the flexible recruitment of two modes of control and how their dynamic interplay acts to mitigate interference between simultaneously processed perceptual and mnemonic representations.</p>","PeriodicalId":51081,"journal":{"name":"Journal of Cognitive Neuroscience","volume":" ","pages":"1-20"},"PeriodicalIF":3.1,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143371364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

EEG Decoding of Conscious versus Unconscious Representations during Binocular Rivalry.
Lara C Krisst, Steven J Luck
doi: 10.1162/jocn_a_02308

Theories of visual awareness often fall into two general categories: those assuming that awareness arises rapidly within visual cortex and those assuming that awareness arises more slowly as a result of interactions between visual cortex and frontoparietal regions. To test the plausibility of early theories of consciousness, we combined the temporal resolution of EEG with multivariate pattern classification to assess the latency at which decodable information about consciously perceived stimuli is enhanced relative to information about stimuli that are not consciously perceived. Competing red and green gratings were presented simultaneously to the two eyes, creating rivalry, and observers reported which of the two colors was perceived on each trial. We then used the pattern of EEG over the scalp to decode the orientation of the grating that was perceived and the orientation of the grating that was suppressed by the rivalry and not perceived. This allowed us to determine when the content of the neural representations differed between the consciously perceived grating and the unconscious grating. Early theories predict that the difference between conscious and unconscious processing would occur within ∼200 msec of stimulus onset (e.g., at the time of the visual awareness negativity). We found that decoding accuracy was significantly greater for the consciously perceived orientation than for the unperceived orientation beginning 160 msec after stimulus onset, as predicted by theories that propose a rapid onset of visual awareness.
{"title":"EEG Decoding of Conscious versus Unconscious Representations during Binocular Rivalry.","authors":"Lara C Krisst, Steven J Luck","doi":"10.1162/jocn_a_02308","DOIUrl":"https://doi.org/10.1162/jocn_a_02308","url":null,"abstract":"<p><p>Theories of visual awareness often fall into two general categories, those assuming that awareness arises rapidly within visual cortex and those assuming that awareness arises more slowly as a result of interactions between visual cortex and frontoparietal regions. To test the plausibility of early theories of consciousness, we combined the temporal resolution of the EEG technique with multivariate pattern classification techniques to assess the latency at which decodable information about consciously perceived stimuli is enhanced relative to information about stimuli that are not consciously perceived. Competing red and green gratings were presented simultaneously to the two eyes, creating rivalry, and observers reported which one of the two colors was perceived on each trial. We then used the pattern of EEG over the scalp to decode the orientation of the grating that was perceived and the orientation of the grating that was suppressed by the rivalry and not perceived. This allowed us to determine when the content of the neural representations differed between the consciously perceived grating and the unconscious grating. Early theories predict that the difference between conscious and unconscious processing would occur within ∼200 msec of stimulus onset (e.g., at the time of the visual awareness negativity). We found that decoding accuracy was significantly greater for the consciously perceived orientation than for the unperceived orientation beginning 160 msec after stimulus onset, as predicted by theories that propose a rapid onset of visual awareness.</p>","PeriodicalId":51081,"journal":{"name":"Journal of Cognitive Neuroscience","volume":" ","pages":"1-10"},"PeriodicalIF":3.1,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143371347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Task-dependent Modulation Masking of a 4-Hz Envelope Following Responses.
Sam Watson, Torsten Dau, Jens Hjortkjær
doi: 10.1162/jocn_a_02309

The perception and recognition of natural sounds, like speech, rely on the processing of slow amplitude modulations. Perception can be hindered by interfering modulations at similar rates, a phenomenon known as modulation masking. Cortical envelope following responses (EFRs) are highly sensitive to these slow modulations, but it is unclear how modulation masking impacts these cortical envelope responses. To dissociate stimulus-driven and attention-driven effects, we recorded EEG responses to a 4-Hz modulated noise in a two-way factorial design, varying the level of modulation masking and intermodal attention. Auditory stimuli contained one of three random masking bands in the stimulus envelope, at varying proximity in modulation frequency to the 4-Hz target, or no masker (an unmasked reference condition). During EEG recordings, the same stimuli were presented while participants performed either an auditory or a visual change detection task. Attention to the auditory modality resulted in a general enhancement of sustained EFR responses to the 4-Hz target. In the visual task condition only, EFR 4-Hz power systematically decreased with increasing modulation masking, consistent with psychophysical masking patterns. During the auditory task, however, the 4-Hz EFRs were unaffected by masking and remained strong even at the highest degrees of masking. Rather than indicating a general bottom-up modulation-selective process, these results indicate that the masking of cortical envelope responses interacts with attention. We propose that auditory attention allows robust tracking of masked envelopes, possibly through a form of glimpsing of the target, whereas envelope responses to task-irrelevant auditory stimuli reflect stimulus salience.
{"title":"Task-dependent Modulation Masking of a 4-Hz Envelope Following Responses.","authors":"Sam Watson, Torsten Dau, Jens Hjortkjær","doi":"10.1162/jocn_a_02309","DOIUrl":"https://doi.org/10.1162/jocn_a_02309","url":null,"abstract":"<p><p>The perception and recognition of natural sounds, like speech, rely on the processing of slow amplitude modulations. Perception can be hindered by interfering modulations at similar rates, a phenomenon known as modulation masking. Cortical envelope following responses (EFRs) are highly sensitive to these slow modulations, but it is unclear how modulation masking impacts these cortical envelope responses. To dissociate stimulus-driven and attention-driven effects, we recorded EEG responses to a 4-Hz modulated noise in a two-way factorial design, varying the level of modulation masking and intermodal attention. Auditory stimuli contained one of three random masking bands in the stimulus envelope, at various proximities in modulation frequency to the 4-Hz target, or an unmasked reference condition. During EEG recordings, the same stimuli were presented while participants performed either an auditory or a visual change detection task. Attention to the auditory modality resulted in a general enhancement of sustained EFR responses to the 4-Hz target. In the visual task condition only, EFR 4-Hz power systematically decreased with increasing modulation masking, consistent with psychophysical masking patterns. However, during the auditory task, the 4-Hz EFRs were unaffected by masking and remained strong even with the highest degrees of masking. Rather than indicating a general bottom-up modulation selective process, these results indicate that the masking of cortical envelope responses interacts with attention. We propose that auditory attention allows robust tracking of masked envelopes, possibly through a form of glimpsing of the target, whereas envelope responses to task-irrelevant auditory stimuli reflect stimulus salience.</p>","PeriodicalId":51081,"journal":{"name":"Journal of Cognitive Neuroscience","volume":" ","pages":"1-13"},"PeriodicalIF":3.1,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143371297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Presenting Natural Continuous Speech in a Multisensory Immersive Environment Improves Speech Comprehension and Reflects the Allocation of Processing Resources in Neural Speech Tracking.
Vanessa Frei, Nathalie Giroud
doi: 10.1162/jocn_a_02306

Successful speech comprehension, though seemingly effortless, involves complex interactions between sensory and cognitive processing and is predominantly embedded in a multisensory context providing acoustic and visual speech cues. Adding the perspective of ageing, these interactions become even more multifaceted. The impact of cognitive load on speech processing has been investigated before, but mostly with speech material lacking realism and multimodality. In this study, we therefore investigated the effects of memory load on naturalistic immersive audiovisual speech comprehension in older adults with varying degrees of hearing impairment and cognitive capacity. By presenting natural continuous multisensory speech through virtual reality, we created an immersive three-dimensional rendering of the speaker and manipulated the memory load of the natural running speech with a task inspired by the traditional n-back paradigm. This allowed us to quantify neural speech envelope tracking via EEG and behavioral speech comprehension across modalities and memory loads in a highly controllable environment, while offering a realistic conversational experience. Neural speech tracking depended on an interaction between modality and memory load, moderated by auditory working memory capacity. Under low memory load, neural speech tracking increased in the immersive modality, particularly strongly for individuals with low auditory working memory. At the behavioral level, a visually induced performance improvement was observed under both high and low memory load. We argue that this dynamic reflects the allocation of sensory and cognitive processing resources depending on the sensory and cognitive load of natural continuous speech and on individual capacities.
{"title":"Presenting Natural Continuous Speech in a Multisensory Immersive Environment Improves Speech Comprehension and Reflects the Allocation of Processing Resources in Neural Speech Tracking.","authors":"Vanessa Frei, Nathalie Giroud","doi":"10.1162/jocn_a_02306","DOIUrl":"https://doi.org/10.1162/jocn_a_02306","url":null,"abstract":"<p><p>Successful speech comprehension, though seemingly effortless, involves complex interactions between sensory and cognitive processing and is predominantly embedded in a multisensory context, providing acoustic and visual speech cues. Adding the perspective of ageing, the interaction becomes even more manyfold. The impact of cognitive load on speech processing has been investigated, however, characterized by a lack of realistic speech material and multimodality. In this study, we therefore investigated the effects of memory load on naturalistic immersive audiovisual speech comprehension in older adults with varying degrees of hearing impairment and cognitive capacities. By providing natural continuous multisensory speech, provided through virtual reality, we created an immersive three-dimensional visual of the speaker and manipulated the memory load of the natural running speech inspired by a traditional n-back task. This allowed us to quantify neural speech envelope tracking via EEG and behavioral speech comprehension in varying modalities and memory loads in a highly controllable environment, while offering a realistic conversational experience. Neural speech tracking depends on an interaction between modality and memory load, moderated by auditory working memory capacity. Under low memory load, there is an increase in neural speech tracking in the immersive modality, particularly strong for individuals with low auditory working memory. Visually induced performance improvement is observed similarly in high and low memory load settings on a behavioral level. We argue that this dynamic reflects an allocation process of sensory- and cognitive processing resources depending on the presented sensory- and cognitive load of natural continuous speech and individual capacities.</p>","PeriodicalId":51081,"journal":{"name":"Journal of Cognitive Neuroscience","volume":" ","pages":"1-19"},"PeriodicalIF":3.1,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143371340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Exploring Dynamic Brain Oscillations in Motor Imagery and Low-frequency Sound.
William Dupont, Benedicte Poulin-Charronnat, Carol Madden-Lombardi, Thomas Jacquet, Philippe Pfister, Typhanie Dos Anjos, Florent Lebon
doi: 10.1162/jocn_a_02311
Although both motor imagery (MI) and low-frequency sound listening have independently been shown to modulate brain activity, the potential synergistic effects that may arise from their combined application remain unexplored. Any further modulation derived from this combination may be relevant for motor learning and rehabilitation. We probed neurophysiological activity during these two processes, measuring alpha and beta band power amplitude by means of EEG recordings. Twenty healthy volunteers were instructed to (i) explicitly imagine right finger flexion/extension movements in a kinaesthetic modality, (ii) listen to low-frequency sounds, (iii) imagine right finger movements while listening to low-frequency sounds, or (iv) stay at rest. We observed a bimodal distribution of alpha-band reactivity to the conditions, suggesting variability in brain activity across participants during both MI and low-frequency sound listening. One group of participants (12 individuals) displayed increased alpha power within contralateral sensorimotor and ipsilateral medial parieto-occipital regions during MI. Another group (eight individuals) exhibited a decrease in alpha and beta band power within sensorimotor areas. Interestingly, low-frequency sound listening elicited a similar pattern of brain activity within both groups. The combination of MI and sound listening did not result in additional changes in alpha and beta power amplitudes, regardless of group (groups based on individual alpha-band reactivity). Altogether, these findings provide significant insight into the brain activity, and its variability, generated during MI and low-frequency sound listening. The simultaneous engagement of MI and low-frequency sound listening did not further modulate alpha power amplitude, possibly due to concurrent cortical activation. It remains possible that sequential performance of these tasks could elicit additional modulation.
{"title":"Exploring Dynamic Brain Oscillations in Motor Imagery and Low-frequency Sound.","authors":"William Dupont, Benedicte Poulin-Charronnat, Carol Madden-Lombardi, Thomas Jacquet, Philippe Pfister, Typhanie Dos Anjos, Florent Lebon","doi":"10.1162/jocn_a_02311","DOIUrl":"https://doi.org/10.1162/jocn_a_02311","url":null,"abstract":"<p><p>Although both motor imagery (MI) and low-frequency sound listening have independently been shown to modulate brain activity, the potential synergistic effects that may arise from their combined application remains unexplored. Any further modulation derived from this combination may be relevant for motor learning and rehabilitation. We probed neurophysiological activity during these two processes, measuring alpha and beta band power amplitude by means of EEG recordings. Twenty healthy volunteers were instructed to (i) explicitly imagine right finger flexion/extension movements in a kinaesthetic modality, (ii) listen to low-frequency sounds, (iii) imagine right finger movements while listening to low-frequency sounds, or (iv) stay at rest. We observed a bimodal distribution of alpha-band reactivity to the conditions, suggesting the presence of variability in brain activity across participants during both MI and low-frequency sound listening. One group of participants (12 individuals) displayed increased alpha power within contralateral sensorimotor and ipsilateral medial parieto-occipital regions during MI. Another group (eight individuals) exhibited a decrease in alpha and beta band power within sensorimotor areas. Interestingly, low-frequency sound listening elicited a similar pattern of brain activity within both groups. The combination of MI and sound listening did not result in additional changes in alpha and beta power amplitudes, regardless of group (groups based on individual alpha-band reactivity). Altogether, these findings shed significant insight into the brain activity and its variability generated during MI and low-frequency sound listening. The simultaneous engagement of MI and low-frequency sound listening did not further modulate alpha power amplitude, possibly due to concurrent cortical activation. It remains possible that sequential performance of these tasks could elicit additional modulation.</p>","PeriodicalId":51081,"journal":{"name":"Journal of Cognitive Neuroscience","volume":" ","pages":"1-15"},"PeriodicalIF":3.1,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143371352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Specifying Precision in Visual-orthographic Prediction Error Representations for a Better Understanding of Efficient Reading.
Wanlu Fu, Benjamin Gagl
doi: 10.1162/jocn_a_02301

Efficient visual word recognition presumably relies on orthographic prediction error (oPE) representations. On the basis of a transparent neurocognitive computational model rooted in the principles of the predictive coding framework, we postulated that readers optimize their percept by removing redundant visual signals, allowing them to focus on the informative aspects of the sensory input (i.e., the oPE). Here, we explore alternative oPE implementations, testing whether increased precision, obtained by assuming all-or-nothing signaling and more realistic word lexicons, results in adequate representations underlying efficient word recognition. We used behavioral and electrophysiological data (i.e., EEG) for model evaluation. More precise oPE representations (i.e., implementing binary signaling and a frequency-sorted lexicon with the 500 most common five-letter words) best explained variance in behavioral responses and in electrophysiological data 300 msec after stimulus onset. The original, less precise oPE representation still best explains early brain activation. This pattern suggests a dynamic adaptation of the represented visual-orthographic information, where initial graded prediction errors convert into binary representations, allowing accurate retrieval of word meaning. These results offer a neurocognitively plausible account of efficient word recognition, emphasizing visual-orthographic information in the form of prediction error representations as central to the transition from perceptual processing to the access of word meaning.
{"title":"Specifying Precision in Visual-orthographic Prediction Error Representations for a Better Understanding of Efficient Reading.","authors":"Wanlu Fu, Benjamin Gagl","doi":"10.1162/jocn_a_02301","DOIUrl":"https://doi.org/10.1162/jocn_a_02301","url":null,"abstract":"<p><p>Efficient visual word recognition presumably relies on orthographic prediction error (oPE) representations. On the basis of a transparent neurocognitive computational model rooted in the principles of the predictive coding framework, we postulated that readers optimize their percept by removing redundant visual signals, allowing them to focus on the informative aspects of the sensory input (i.e., the oPE). Here, we explore alternative oPE implementations, testing whether increased precision by assuming all-or-nothing signaling and more realistic word lexicons results in adequate representations underlying efficient word recognition. We used behavioral and electrophysiological data (i.e., EEG) for model evaluation. More precise oPE representations (i.e., implementing a binary signaling and a frequency-sorted lexicon with the 500 most common five-letter words) explained variance in behavioral responses and electrophysiological data 300 msec after stimulus onset best. The original less-precise oPE representation still best explains early brain activation. This pattern suggests a dynamic adaption of represented visual-orthographic information, where initial graded prediction errors convert into binary representations, allowing accurate retrieval of word meaning. These results offer a neuro-cognitive plausible account of efficient word recognition, emphasizing visual-orthographic information in the form of prediction error representations central to the transition from perceptual processing to the access of word meaning.</p>","PeriodicalId":51081,"journal":{"name":"Journal of Cognitive Neuroscience","volume":" ","pages":"1-15"},"PeriodicalIF":3.1,"publicationDate":"2025-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143048604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Functional Brain Networks Underlying Autobiographical Event Simulation: An Update.
Ava Momeni, Donna Rose Addis, Eva Feredoes, Florentine Klepel, Maiya M Rasheed, Abhijit M Chinchani, Nikitas C Kousssis, Todd S Woodward
doi: 10.1162/jocn_a_02305
fMRI studies typically explore changes in the BOLD signal underlying discrete cognitive processes that occur over milliseconds to a few seconds. However, autobiographical cognition is a protracted process and requires fMRI tasks with longer trials to capture the temporal dynamics of the underlying brain networks. In the current study, we provide an updated analysis of the fMRI data from a published autobiographical event simulation study with a slow event-related design (34-sec trials), in which participants recalled past events, imagined past events, and imagined future autobiographical events, as well as completed a semantic association control task. Our updated analysis, using Constrained Principal Component Analysis for fMRI, retrieved two networks reported in the original study: (1) the default mode network, which activated during the autobiographical event simulation conditions but deactivated during the control condition, and (2) the multiple demand network, which activated early in all conditions during the construction of the required representations (i.e., autobiographical events or semantic associates). Two novel networks also emerged: (1) the Response Network, which activated during the scale-rating phase, and (2) the Maintaining Internal Attention Network, which, while active in all conditions during the elaboration of details associated with the simulated events, was more strongly engaged during the imagination and semantic association control conditions. Our findings suggest that the default mode network does not support autobiographical simulation alone but co-activates with the multiple demand network and the Maintaining Internal Attention Network, with the timing of activations depending on evolving task demands during the simulation process.
{"title":"Functional Brain Networks Underlying Autobiographical Event Simulation: An Update.","authors":"Ava Momeni, Donna Rose Addis, Eva Feredoes, Florentine Klepel, Maiya M Rasheed, Abhijit M Chinchani, Nikitas C Kousssis, Todd S Woodward","doi":"10.1162/jocn_a_02305","DOIUrl":"https://doi.org/10.1162/jocn_a_02305","url":null,"abstract":"<p><p>fMRI studies typically explore changes in the BOLD signal underlying discrete cognitive processes that occur over milliseconds to a few seconds. However, autobiographical cognition is a protracted process and requires fMRI tasks with longer trials to capture the temporal dynamics of the underlying brain networks. In the current study, we provided an updated analysis of the fMRI data obtained from a published autobiographical event simulation study, with a slow event-related design (34-sec trials), that involved participants recalling past, imagining past, and imagining future autobiographical events, as well as completing a semantic association control task. Our updated analysis using Constrained Principal Component Analysis for fMRI retrieved two networks reported in the original study: (1) the default mode network, which activated during the autobiographical event simulation conditions but deactivated during the control condition, and (2) the multiple demand network, which activated early in all conditions during the construction of the required representations (i.e., autobiographical events or semantic associates). Two novel networks also emerged: (1) the Response Network, which activated during the scale-rating phase, and (2) the Maintaining Internal Attention Network, which, while active in all conditions during the elaboration of details associated with the simulated events, was more strongly engaged during the imagination and semantic association control conditions. Our findings suggest that the default mode network does not support autobiographical simulation alone, but it co-activates with the multiple demand network and Maintaining Internal Attention Network, with the timing of activations depending on evolving task demands during the simulation process.</p>","PeriodicalId":51081,"journal":{"name":"Journal of Cognitive Neuroscience","volume":" ","pages":"1-64"},"PeriodicalIF":3.1,"publicationDate":"2025-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143071227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Bilingualism Is Associated with Significant Structural and Connectivity Alterations in the Thalamus in Adulthood.
Behcet Ayyildiz, Dila Sayman, Sevilay Ayyildiz, Ece Ozdemir Oktem, Ruhat Arslan, Tuncay Colak, Belgin Bamac, Burak Yulug
doi: 10.1162/jocn_a_02304

Language is a sophisticated cognitive skill that relies on the coordinated activity of the cerebral cortex. Acquiring a second language creates intricate modifications in brain connectivity. Although numerous studies have evaluated the impact of second language acquisition on brain networks in adulthood, the results regarding the ultimate form of adaptive plasticity remain inconsistent within the adult population. Furthermore, owing to the assumption that subcortical regions are not significantly involved in language-related tasks, the thalamus has rarely been analyzed in relation to other language-relevant cortical regions. Given these limitations, we aimed to evaluate functional connectivity and volume modifications of thalamic subfields using magnetic resonance imaging (MRI) modalities following the acquisition of a second language. Structural MRI and fMRI data from 51 participants were collected from the OpenNeuro database. The participants were divided into three groups: monolingual (ML), early bilingual (EB), and late bilingual (LB). The EB group consisted of individuals proficient in both English and Spanish, with exposure to these languages before the age of 10 years. The LB group consisted of individuals proficient in both English and Spanish, but with exposure to these languages after the age of 14 years. The ML group included participants proficient only in English. Our results revealed that the ML group exhibited increased functional connectivity in all thalamic subfields (anterior, intralaminar-medial, lateral, ventral, and pulvinar) compared with the EB and LB groups. In addition, a significantly decreased volume of the left suprageniculate nucleus was found in the bilingual groups compared with the ML group. This study provides evidence suggesting that acquiring a second language, given its high plasticity potential, may act synergistically with cognitive functions to slow degenerative processes and thus be protective against dementia.
{"title":"Bilingualism Is Associated with Significant Structural and Connectivity Alterations in the Thalamus in Adulthood.","authors":"Behcet Ayyildiz, Dila Sayman, Sevilay Ayyildiz, Ece Ozdemir Oktem, Ruhat Arslan, Tuncay Colak, Belgin Bamac, Burak Yulug","doi":"10.1162/jocn_a_02304","DOIUrl":"https://doi.org/10.1162/jocn_a_02304","url":null,"abstract":"<p><p>Language is a sophisticated cognitive skill that relies on the coordinated activity of cerebral cortex. Acquiring a second language creates intricate modifications in brain connectivity. Although considerable studies have evaluated the impact of second language acquisition on brain networks in adulthood, the results regarding the ultimate form of adaptive plasticity remain inconsistent within the adult population. Furthermore, due to the assumption that subcortical regions are not significantly involved in language-related tasks, the thalamus has rarely been analyzed in relation to other language-relevant cortical regions. Given these limitations, we aimed to evaluate the functional connectivity and volume modifications of thalamic subfields using magnetic resonance imaging (MRI) modalities following the acquisition of a second language. Structural MRI and fMRI data from 51 participants were collected from the OpenNeuro database. The participants were divided into three groups: monolingual (ML), early bilingual (EB), and late bilingual (LB). The EB group consisted of individuals proficient in both English and Spanish, with exposure to these languages before the age of 10 years. The LB group consisted of individuals proficient in both English and Spanish, but with exposure to these languages after the age of 14 years. The ML group included participants proficient only in English. Our results revealed that the ML group exhibited increased functional connectivity in all thalamic subfields (anterior, intralaminar-medial, lateral, ventral, and pulvinar) compared with the EB and LB groups. In addition, a significantly decreased volume of the left suprageniculate nucleus was found in the bilingual groups compared with the ML group. This study provides valuable evidence suggesting that acquiring a second language may be protective against dementia, due to its high plasticity potential, which acts synergistically with cognitive functions to slow the degenerative process.</p>","PeriodicalId":51081,"journal":{"name":"Journal of Cognitive Neuroscience","volume":" ","pages":"1-19"},"PeriodicalIF":3.1,"publicationDate":"2025-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143048566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Neural Correlates of the Musicianship Advantage to the Cocktail Party Effect.
Avery E Ostrand, Vinith Johnson, Adam Gazzaley, Theodore P Zanto
doi: 10.1162/jocn_a_02300
Prior research has indicated that musicians show an auditory processing advantage in the phonemic processing of language. The aim of the current study was to elucidate when in the auditory cortical processing stream this advantage emerges in a cocktail-party-like environment. Participants (n = 34) were aged 18-35 years and classified as either musicians (10+ years of experience) or nonmusicians (no formal training). EEG data were collected while participants were engaged in a phoneme discrimination task. During the task, participants were asked to discern auditory "ba" and "pa" phonemes in two conditions: one with competing speech (target with distractor [TD]) and one without competing speech (target only). Behavioral results showed that musicians discriminated phonemes better than nonmusicians under the TD condition, whereas no performance differences were observed in the target-only condition. ERP analysis showed musicianship-based differences at both early (N1) and late (P3) processing stages during the TD condition. Specifically, musicians exhibited decreased neural activity during the N1 and increased neural activity during the P3. Source localization of the P3 showed that musicians had increased activity in the right superior/middle temporal gyrus. Results from this study indicate that musicians have a phonemic processing advantage specifically in the context of distraction, which arises from a shift in neural activity from early (N1) to late (P3) stages of cortical phonemic processing.
{"title":"Neural Correlates of the Musicianship Advantage to the Cocktail Party Effect.","authors":"Avery E Ostrand, Vinith Johnson, Adam Gazzaley, Theodore P Zanto","doi":"10.1162/jocn_a_02300","DOIUrl":"https://doi.org/10.1162/jocn_a_02300","url":null,"abstract":"<p><p>Prior research has indicated musicians show an auditory processing advantage in phonemic processing of language. The aim of the current study was to elucidate when in the auditory cortical processing stream this advantage emerges in a cocktail-party-like environment. Participants (n = 34) were aged 18-35 years and deemed to be either a musician (10+-year experience) or nonmusician (no formal training). EEG data were collected while participants were engaged in a phoneme discrimination task. During the task, participants were asked to discern auditory \"ba\" and \"pa\" phonemes in two conditions: one with competing speech (target with distractor [TD]) and one without competing speech (target only). Behavioral results showed that musicians discriminated phonemes better under the TD condition than nonmusicians, whereas no performance differences were observed during the target only condition. Analysis of the EEG ERP showed musicianship-based differences at both early (N1) and late (P3) processing stages during the TD condition. Specifically, musicians exhibited decreased neural activity during the N1 and increased neural activity during the P3. Source localization of the P3 showed that musicians increased activity in the right superior/middle temporal gyrus. Results from this study indicate that musicians have a phonemic processing advantage specifically when presented in the context of distraction, which arises from a shift in neural activity from early (N1) to late (P3) stages of cortical phonemic processing.</p>","PeriodicalId":51081,"journal":{"name":"Journal of Cognitive Neuroscience","volume":" ","pages":"1-11"},"PeriodicalIF":3.1,"publicationDate":"2025-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143048602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}