What Is Faster than Where in Vocal Emotional Perception
Sara Temudo, Ana P Pinheiro
Journal of Cognitive Neuroscience, 2024-09-26. doi:10.1162/jocn_a_02251

Voices carry a vast amount of information about speakers (e.g., emotional state, spatial location). Neuroimaging studies postulate that spatial ("where") and emotional ("what") cues are processed by partially independent processing streams. Although behavioral evidence reveals interactions between emotion and space, the temporal dynamics of these processes in the brain and their modulation by attention remain unknown. We investigated whether and how spatial and emotional features interact during voice processing as a function of attention focus. Spatialized nonverbal vocalizations differing in valence (neutral, amusement, anger) were presented at different locations around the head, while listeners discriminated either the spatial location or the emotional quality of the voice. Neural activity was measured with EEG-based ERPs, and affective ratings were collected at the end of the EEG session. Emotional vocalizations elicited decreased N1 but increased P2 and late positive potential amplitudes. Interactions of space and emotion occurred at the salience detection stage: neutral vocalizations presented at right (vs. left) locations elicited increased P2 amplitudes, but no such differences were observed for emotional vocalizations. When task instructions involved emotion categorization, the P2 was increased for vocalizations presented at front (vs. back) locations. Behaviorally, only valence and arousal ratings showed emotion-space interactions. These findings suggest that emotional representations are activated earlier than spatial representations in voice processing. The perceptual prioritization of emotional cues occurred irrespective of task instructions but was not paralleled by an augmented stimulus representation in space, supporting differential responding to emotional information by auditory processing pathways.
{"title":"From Cells to Circuits, from Vision to Cognition, from Monkeys to Humans: Leslie Ungerleider's Pioneering Neuroscience.","authors":"Chris Baker, Sabine Kastner","doi":"10.1162/jocn_e_02253","DOIUrl":"https://doi.org/10.1162/jocn_e_02253","url":null,"abstract":"","PeriodicalId":51081,"journal":{"name":"Journal of Cognitive Neuroscience","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142331902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatial Predictive Context Speeds Up Visual Search by Biasing Local Attentional Competition
Floortje G Bouwkamp, Floris P de Lange, Eelke Spaak
Journal of Cognitive Neuroscience, 2024-09-26. doi:10.1162/jocn_a_02254

The human visual system is equipped to rapidly and implicitly learn and exploit the statistical regularities in our environment. Within visual search, contextual cueing demonstrates how implicit knowledge of scenes can improve search performance. This is commonly interpreted as spatial context in the scenes becoming predictive of the target location, which leads to a more efficient guidance of attention during search. However, what drives this enhanced guidance is unknown. First, it is under debate whether the entire scene (global context) or more local context drives this phenomenon. Second, it is unclear how exactly improved attentional guidance is enabled by target enhancement and distractor suppression. In the present magnetoencephalography experiment, we leveraged rapid invisible frequency tagging to answer these two outstanding questions. We found that the improved performance when searching implicitly familiar scenes was accompanied by a stronger neural representation of the target stimulus, at the cost specifically of those distractors directly surrounding the target. Crucially, this biasing of local attentional competition was behaviorally relevant when searching familiar scenes. Taken together, we conclude that implicitly learned spatial predictive context improves how we search our environment by sharpening the attentional field.
Make or Break: The Influence of Expected Challenges and Rewards on the Motivation and Experience Associated with Cognitive Effort Exertion
Yue Zhang, Xiamin Leng, Amitai Shenhav
Journal of Cognitive Neuroscience, 2024-09-13. doi:10.1162/jocn_a_02247

Challenging goals can induce harder work but also greater stress, in turn potentially undermining goal achievement. We sought to examine how mental effort and subjective experiences thereof interact as a function of the challenge level and the size of the incentives at stake. Participants performed a task that rewarded individual units of effort investment (correctly performed Stroop trials) but only if they met a threshold number of correct trials within a fixed time interval (challenge level). We varied this challenge level (Study 1, n = 40) and the rewards at stake (Study 2, n = 79) and measured variability in task performance and self-reported affect across task intervals. Greater challenge and higher rewards facilitated greater effort investment but also induced greater stress, whereas higher rewards (and lower challenge) simultaneously induced greater positive affect. Within intervals, we observed an initial speed-up and then a slowdown in performance, which could reflect dynamic reconfiguration of control. Collectively, these findings further our understanding of the influence of task demands and incentives on mental effort exertion and well-being.
Age-related Electrophysical Correlates of Cross-modal Attention Switching
Pi-Chun Huang, Ludivine A. P. Schils, Iring Koch, Denise N. Stephan, Shulan Hsieh
Journal of Cognitive Neuroscience, 2024-09-13. doi:10.1162/jocn_a_02248

The human experience demands seamless attentional switches between sensory modalities. Aging raises questions about how declines in auditory and visual processing affect cross-modal attention switching. This study used a cued cross-modal attention-switching paradigm in which visual and auditory stimuli were simultaneously presented on either spatially congruent or incongruent sides. A modality cue indicated the target modality, requiring a spatially left versus right key-press response. EEG recordings were collected during task performance. We investigated whether the mixing costs (decreased performance for repetition trials in a mixed task compared with a single task) and switch costs (decreased performance for a switch of target modality compared with a repetition) in cross-modal attention-switching paradigms would resemble, in behavioral performance and ERP components, those observed in traditional unimodal attention-switching paradigms. Specifically, we focused on the cue-locked P3 (mixing/switch-related increased positivity), the target-locked P3 (mixing/switch-related decreased positivity), and the target-locked lateralized readiness potential (mixing/switch-related longer latency). In addition, we assessed how aging impacts cross-modal attention-switching performance. Results revealed that older adults exhibited more pronounced mixing and switch costs than younger adults, especially when visual and auditory stimuli were presented on incongruent sides. ERP findings showed increased cue-locked P3 amplitude, prolonged cue-locked P3 latency, decreased target-locked P3 amplitude, and prolonged target-locked P3 latency in association with switch costs, as well as prolonged onset latency of the target-locked lateralized readiness potential in association with mixing costs. Age-related effects were significant only for cue-locked P3 amplitude, cue-locked P3 latency (switch-related), and target-locked P3 latency (switch-related). These findings suggest that the larger mixing and switch costs in older adults reflect inefficient use of modality cues to update a representation of the relevant task sets, together with a need for more processing time to evaluate and categorize the target.
Event Segmentation Promotes the Reorganization of Emotional Memory
Patrick A F Laing, Joseph E Dunsmoor
Journal of Cognitive Neuroscience, 2024-09-04. doi:10.1162/jocn_a_02244

Event boundaries help structure the content of episodic memories by segmenting continuous experiences into discrete events. Event boundaries may also serve to preserve meaningful information within an event, thereby actively separating important memories from interfering representations imposed by past and future events. Here, we tested the hypothesis that event boundaries organize emotional memory based on changing dynamics as events unfold. We developed a novel threat-reversal learning task whereby participants encoded trial-unique exemplars from two semantic categories across three phases: preconditioning, fear acquisition, and reversal. Shock contingencies were established for one category during acquisition (CS+) and then switched to the other during reversal (CS-). Importantly, reversal was either separated from acquisition by a perceptible event boundary (Experiment 1) or occurred immediately after acquisition, with no perceptible context shift (Experiment 2). In a surprise recognition memory test the next day, memory performance tracked the learning contingencies from encoding in Experiment 1, such that participants selectively recognized more threat-associated CS+ exemplars from before (retroactive) and during acquisition, but this pattern reversed toward CS- exemplars encoded during reversal. By contrast, participants with continuous encoding (no boundary between conditioning and reversal) exhibited undifferentiated memory for exemplars from both categories encoded before acquisition and after reversal. Further analyses highlight nuanced effects of event boundaries on reversing conditioned fear, updating mnemonic generalization, and emotional biasing of temporal source memory. These findings suggest that event boundaries provide anchor points to organize memory for distinctly meaningful information, thereby adaptively structuring memory based on the content of our experiences.
Neural Correlates of Visual Feature Binding
Tony Ro, Allison M Pierce, Michaela Porubanova, Miriam San Lucas
Journal of Cognitive Neuroscience, 2024-09-04. doi:10.1162/jocn_a_02243

We perceive visual objects as unified although different brain areas process different features. An attentional mechanism has been proposed to be involved with feature binding, as evidenced by observations of binding errors (i.e., illusory conjunctions) when attention is diverted. However, the neural underpinnings of this feature binding are not well understood. We examined the neural mechanisms of feature binding by recording EEG during an attentionally demanding discrimination task. Unlike prestimulus alpha oscillatory activity and early ERPs (i.e., the N1 and P1 components), the N1pc, reflecting stimulus-evoked spatial attention, was reduced for errors relative to correct responses and illusory conjunctions. However, the later SPCN, reflecting visual short-term memory, was reduced for illusory conjunctions and errors compared with correct responses. Furthermore, binding errors were associated with distinct posterior lateralized activity during this 200- to 300-msec window. These results implicate a temporal binding window that integrates visual features after stimulus-evoked attention but before encoding into visual short-term memory.
Investigating the Neural Basis of the Loud-first Principle of the Iambic-Trochaic Law
Fernando Llanos Lucas, Timothy Stump, Megan Crowhurst
Journal of Cognitive Neuroscience, 2024-09-04. doi:10.1162/jocn_a_02241

The perception of rhythmic patterns is crucial for the recognition of words in spoken languages, yet it remains unclear how these patterns are represented in the brain. Here, we tested the hypothesis that rhythmic patterns are encoded by neural activity phase-locked to the temporal modulation of these patterns in the speech signal. To test this hypothesis, we analyzed EEGs evoked with long sequences of alternating syllables acoustically manipulated to be perceived as a series of different rhythmic groupings in English. We found that the magnitude of the EEG at the syllable and grouping rates of each sequence was significantly higher than the noise baseline, indicating that the neural parsing of syllables and rhythmic groupings operates at different timescales. Distributional differences between the scalp topographies associated with each timescale suggest a further mechanistic dissociation between the neural segmentation of syllables and groupings. In addition, we observed that the neural tracking of louder syllables, which in trochaic languages like English are associated with the beginning of rhythmic groupings, was more robust than the neural tracking of softer syllables. The results of further bootstrapping and brain-behavior analyses indicate that the perception of rhythmic patterns is modulated by the magnitude of grouping alternations in the neural signal. These findings suggest that the temporal coding of rhythmic patterns in stress-based languages like English is supported by temporal regularities that are linguistically relevant in the speech signal.
Jacqueline M. Fulvio;Saskia Haegens;Bradley R. Postle
A single pulse of TMS (spTMS) during the delay period of a double serial retrocuing working-memory task can briefly rescue decodability of an unprioritized memory item (UMI). This physiological phenomenon, which is paralleled in behavior by involuntary retrieval of the UMI, is carried by the beta frequency band, implicating beta-band dynamics in priority coding in working memory. We decomposed EEG data from 12 participants performing double serial retrocuing with concurrent delivery of spTMS using Spatially distributed PhAse Coupling Extraction. This procedure decomposes the scalp-level signal into a set of discrete coupled oscillators, each with a component strength that can vary over time. The decomposition revealed a diversity of low-frequency components, a subset of them strengthening with the onset of the task, and the majority declining in strength across the trial, as well as within each delay period. Results with spTMS revealed no evidence that it works by activating previously “silent” sources; instead, it had the effect of modulating ongoing activity, specifically by exaggerating the within-delay decrease in strength of posterior beta components. Furthermore, the magnitude of the effect of spTMS on the loading strength of a posterior beta component correlated with the disruptive effect of spTMS on performance, a pattern also seen when analyses were restricted to trials with “UMI-lure” memory probes. Rather than reflecting the “activation” of a putatively “activity silent” UMI, these results implicate beta-band dynamics in a mechanism that distinguishes prioritized from unprioritized, and suggest that the effect of spTMS is to disrupt this code.
"Single-pulse Transcranial Magnetic Stimulation Affects Working-memory Performance via Posterior Beta-band Oscillations"
Jacqueline M. Fulvio; Saskia Haegens; Bradley R. Postle
Journal of Cognitive Neuroscience, DOI: 10.1162/jocn_a_02194 (published 2024-08-16)
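The abstract above describes oscillator components whose beta-band strength declines within each delay period. As a minimal, hypothetical stand-in (this is not the SPACE decomposition used in the paper), the momentary strength of a beta-band component in a single EEG channel can be tracked with a band-pass filter plus a Hilbert envelope; all names, frequencies, and parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_strength(signal, fs, band=(13.0, 30.0)):
    """Return the instantaneous amplitude envelope of the beta band.

    A 4th-order Butterworth band-pass isolates 13-30 Hz activity;
    the Hilbert transform's magnitude gives its momentary strength.
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    beta = filtfilt(b, a, signal)       # zero-phase band-pass filtering
    return np.abs(hilbert(beta))        # envelope = momentary strength

# Synthetic demo: a 20 Hz oscillation whose amplitude decays across a
# mock 2-s "delay period", mimicking a within-delay strength decrease.
rng = np.random.default_rng(0)
fs = 500
t = np.arange(0, 2.0, 1.0 / fs)
decay = np.exp(-t)                      # strength declines over the delay
sig = decay * np.sin(2 * np.pi * 20 * t) + 0.1 * rng.standard_normal(t.size)
env = beta_strength(sig, fs)

# Mean envelope early in the delay should exceed mean envelope late.
early = env[: fs // 2].mean()
late = env[-fs // 2 :].mean()
```

Component loading strengths in the actual study come from a multi-channel tensor decomposition, but the early-versus-late envelope comparison sketched here captures the kind of within-delay time course the abstract refers to.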
Benjamin Jainta;Anoushiravan Zahedi;Ricarda I. Schubotz
Prediction errors (PEs) function as learning signals. It is still unclear how varying, as compared with repetitive, PEs affect episodic memory in brain and behavior. The current study investigated cerebral and behavioral effects of experiencing either multiple alternative versions ("varying") or one single alternative version ("repetitive") of a previously encoded episode. Participants encoded a set of episodes ("originals") by watching videos showing toy stories. During scanning, participants experienced either the originals, one single alternative version, or multiple alternative versions of the previously encoded episodes. Participants' memory performance was tested through recall of the original objects. Varying and repetitive PEs elicited typical brain responses to the detection of mismatching information, including inferior frontal and posterior parietal regions, as well as the hippocampus, which is further linked to memory reactivation, and the amygdala, known for modulating memory consolidation. Furthermore, experiencing varying versus repetitive PEs engaged distinct brain areas, as revealed by direct contrast. Among others, experiencing varying versions triggered activity in the caudate, a region that has been associated with PEs. In contrast, repetitive PEs activated brain areas that more closely resembled those involved in retrieval of the originally encoded episodes. Thus, ACC and posterior cingulate cortex activation seemed to serve both the reactivation of old information and the integration of new but similar information into episodic memory. Consistent with the neural findings, participants recalled original objects less accurately when presented with the same, but not varying, PE during fMRI. The current findings suggest that repeated PEs interact more strongly with a recalled original episodic memory than varying PEs do.
"Same Same, But Different: Brain Areas Underlying the Learning from Repetitive Episodic Prediction Errors"
Journal of Cognitive Neuroscience, DOI: 10.1162/jocn_a_02204 (published 2024-08-16)
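The abstract above treats prediction errors as learning signals and contrasts repetitive with varying PE sequences. A textbook Rescorla-Wagner update (not a model fit by the study; purely an illustrative sketch with assumed parameter values) makes the distinction concrete: a repeated alternative outcome drives the PE toward zero, whereas varying outcomes keep the PE comparatively large.

```python
def rescorla_wagner(outcomes, alpha=0.3, v0=0.0):
    """Update an expectation v from a sequence of outcomes; return the
    trajectory of prediction errors (outcome minus current expectation)."""
    v, pes = v0, []
    for o in outcomes:
        pe = o - v            # prediction error: the learning signal
        pes.append(pe)
        v += alpha * pe       # expectation moves toward the outcome
    return pes

# "Repetitive": the same alternative outcome on every trial -> PE shrinks.
rep_pes = rescorla_wagner([1.0] * 8)
# "Varying": a different outcome each trial -> PE stays comparatively large.
var_pes = rescorla_wagner([1.0, -1.0, 0.5, -0.5, 1.0, -1.0, 0.5, -0.5])
```

Under this toy formalization, the shrinking PE in the repetitive condition would interact increasingly with the already-recalled original trace, in line with the abstract's conclusion that repeated PEs interfere more strongly with the original episodic memory.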