We mentally represent all kinds of objects (i.e., mental objects) across a variety of tasks and source modalities. Recent work has proposed that, in our moment-to-moment processing, mental objects are represented by content-free, reassignable pointers (also called indexicals or tokens). Are all mental objects represented by the same set of pointers? If not, where should we draw the lines between different kinds of pointers? In this Perspective, we propose a novel research program aimed at unraveling the neural taxonomy of mental objects by testing how the neural markers for pointers generalize across different paradigms, task goals, source modalities, and more.
Mapping the Neural Taxonomy of Mental Objects in Moment-to-Moment Cognition. Xinchi Yu. Journal of Cognitive Neuroscience, pp. 2093-2107, 2025-11-01. doi:10.1162/jocn_a_02348
Hannah Doyle, Rhys Yewbrey, Katja Kornysheva, Theresa M Desrochers
Humans complete different types of sequences as part of everyday life. These sequences can be divided into two important categories: those that are abstract, in which the steps unfold according to a rule at a super-second to minute timescale, and those that are motor, defined solely by individual movements and their order, which unfold at a subsecond to second timescale. For example, the sequence of making spaghetti consists of abstract tasks (preparing the sauce and cooking the noodles) and nested motor actions (stirring the pasta water). Previous work shows that neural activity increases (ramps) in the rostrolateral prefrontal cortex (RLPFC) during abstract sequence execution. During motor sequence production, activity occurs in regions of the PFC. However, it remains unknown whether ramping is a signature of motor sequence production as well or solely an attribute of abstract sequence monitoring and execution. We tested the hypothesis that significant ramping activity occurs in the RLPFC during motor sequence production. Contrary to our hypothesis, we did not observe significant ramping activity in the RLPFC during motor sequence production, but we found significant ramping activity in bilateral inferior parietal cortex, in regions distinct from those observed during an abstract sequence task. Our results suggest that different prefrontal-parietal circuitry may underlie abstract versus motor sequence execution.
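Ramping analyses of this kind are typically implemented by regressing an ROI's BOLD time course on a linear regressor that resets at each sequence onset. Below is a minimal synthetic-data sketch in Python/NumPy, not the authors' pipeline: the signal amplitude, noise level, and sequence structure are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ROI time course: 20 sequences of 5 "positions" each. A ramping region
# increases linearly within each sequence and resets at every sequence onset.
n_seq, seq_len = 20, 5
ramp = np.tile(np.arange(seq_len, dtype=float), n_seq)     # 0..4, repeating
bold = 0.8 * ramp + rng.normal(0.0, 0.5, size=ramp.size)   # ramp + noise

# Fit the onset-to-offset slope with an ordinary least-squares ramp regressor.
X = np.column_stack([np.ones_like(ramp), ramp])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
print(f"estimated ramp slope: {beta[1]:.2f}")  # close to the simulated 0.8
```

In a real fMRI analysis, the ramp regressor would additionally be convolved with a hemodynamic response function and fit within a GLM alongside nuisance regressors.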
Motor and Cognitive Sequence Tasks Exhibit Different Ramping Patterns in Parietal and Prefrontal Cortices. Journal of Cognitive Neuroscience, pp. 1929-1941, 2025-11-01. doi:10.1162/jocn_a_02349
Color perception is based on the differential spectral responses of the L-, M-, and S-cones and subsequent subcortical and cortical computations and may include the influence of higher-order factors such as language. Although the early subcortical stages of color vision are well characterized, the organization of cortical representations of color remains elusive, despite numerous models based on discrimination thresholds, appearance, and categorization. An underexplored aspect of cortical color representations is how they unfold over time. Here, we compare the dynamic reorganization of three different color representations over time using magnetoencephalography. We measured neural responses to 14 hues at each of three achromatic luminances (increment, isoluminant, and decrement) while participants attended either to the exact color of the stimulus or its color category. We used a series of classification analyses, combined with multidimensional scaling and representational similarity analysis, to ask how cortical representations of color unfold over time from stimulus onset. We compared the performance of “higher order” models based on hue and color category with a model based simply on stimulus cone contrast and found that all models had significant correlations with the data. However, the unique variance accounted for by each model revealed a dynamic change in hue responses over time, which was consistent with a “coarse to fine” transition from a broad clustering into categorical groups to a finer within-category representation.
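The "unique variance" logic mentioned above can be sketched as a hierarchical regression over representational dissimilarity vectors: each model's unique contribution is the drop in R² when that model is removed from the full regression. Here is a toy Python/NumPy illustration with simulated model and data RDMs; all values are invented, and this is not the authors' data or code.

```python
import numpy as np

rng = np.random.default_rng(1)

def r2(X, y):
    """R-squared of an ordinary least-squares fit of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Toy dissimilarity vectors (lower-triangle RDM entries) for three candidate
# models, and a "data" RDM constructed mostly from the hue model.
n_pairs = 91  # 14 stimuli -> 14 * 13 / 2 pairwise dissimilarities
cone, hue, category = rng.normal(size=(3, n_pairs))
data = 1.0 * hue + 0.3 * category + rng.normal(0.0, 0.5, size=n_pairs)

# Unique variance of each model = R2(full) - R2(full minus that model).
models = np.column_stack([cone, hue, category])
full = r2(models, data)
uniq = {}
for i, name in enumerate(["cone", "hue", "category"]):
    uniq[name] = full - r2(np.delete(models, i, axis=1), data)
    print(f"unique variance of {name} model: {uniq[name]:.3f}")
```

In this simulation the hue model dominates, so its unique variance comes out largest; applying the same partitioning at each time point is what yields a time course of model contributions.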
Temporal Evolution of Color Representations Measured with Magnetoencephalography Reveals a “Coarse to Fine” Dynamic. Erin Goddard, Kathy T. Mullen. Journal of Cognitive Neuroscience, 37(11), pp. 2326-2350, 2025-11-01. doi:10.1162/JOCN.a.56
Taissa K Lytchenko, Marvin Maechler, Nathan H Heller, Sharif Saleki, Peter U Tse, Gideon P Caplovitz
A central debated question in the study of object-based attention (OBA) is whether the object-mediated deployment of attention is obligatory and automatic [Chen, Z., & Cave, K. R. Reinstating object-based attention under positional certainty: The importance of subjective parsing. Perception & Psychophysics, 68, 992-1003, 2006] or whether the pattern of results is driven by other non-obligatory factors, such as prioritization of invalid target locations [Shomstein, S., & Yantis, S. Object-based attention: Sensory modulation or priority setting? Perception & Psychophysics, 64, 41-51, 2002]. However, virtually all behavioral measures attributed to OBA are based on examining performance on invalid-cue trials, the inclusion of which confounds the assessment of the automaticity hypothesis. Our approach to resolving this issue is to determine whether effects of OBA can be observed in a 100% valid cueing paradigm. In this article, we investigate the obligatory nature of OBA by leveraging the spatial specificity of fMRI and the retinotopic organization of early visual cortex, aiming to identify potential neural correlates of OBA in the complete absence of invalid trials. Participants performed a version of the classic two-rectangle OBA paradigm while we simultaneously measured changes in BOLD signals arising from retinotopically organized cortical areas V1, V2, and V3. In the first half of the experiment, we used the classic two-rectangle paradigm except that the cue was 100% valid. In the second half, we reduced cue validity to more closely match standard OBA paradigms (runs containing invalid trials). We analyzed BOLD signals arising from our ROIs in V1, V2, and V3 according to their topographic correspondence with the ends of the rectangles in the visual field, and then compared responses in each ROI according to where the cue had occurred (cued, uncued-same-object, or uncued-other-object location).
We replicated this procedure in Experiment 2, but changed the layout of the two rectangles from a vertical to a horizontal configuration. Critical result: We observed statistically significant effects of OBA in V3 (Experiment 1) and V1-V2 (Experiment 2) in both the 100% valid runs and the runs containing invalid trials. Moreover, the effects of OBA were no smaller in the 100% valid runs than in runs containing invalid trials. Conclusion: We see BOLD modulation at the uncued locations consistent with neural correlates of OBA.
Invalid Trials Are Not Required to Observe Neural Correlates of Object-based Attention in Retinotopic Visual Cortex. Journal of Cognitive Neuroscience, pp. 2160-2177, 2025-11-01. doi:10.1162/jocn_a_02313
Zebo Xu, Yang Yang, Tai Yuan, Gangyi Feng, Zhenguang G Cai
Chinese speakers have long suffered from character amnesia in handwriting, failing to handwrite a character despite being able to recognize it. However, it remains unclear whether character amnesia arises from a failure to access orthographic representations in the orthographic lexicon, reduced graphemic information in the graphemic buffer, and/or weakened phonology-orthography links. To address this issue, we employed functional near-infrared spectroscopy (fNIRS) to identify brain regions that are associated with character amnesia. In particular, we tested whether character amnesia is associated with deactivation in the fusiform gyrus (FG), the superior parietal gyrus (SPG), or the supramarginal gyrus (SMG), which have been shown to be associated with the orthographic lexicon, the graphemic buffer, and phonology-orthography conversion, respectively. In a handwriting-to-dictation task, 23 Cantonese-speaking adults handwrote a character according to a dictation prompt and then reported whether they correctly handwrote the character or suffered from character amnesia. The fNIRS results showed that, compared with correct handwriting, character amnesia elicited reduced activation in the bilateral FG, the SPG, and the SMG. Parametric analyses showed that character frequency and number of strokes positively correlated with activation of the FG and the SPG, respectively. Functional connectivity analyses revealed that, compared with correct handwriting, character amnesia was associated with decreased connectivity between the left FG and the left SMG, the right FG and the right SMG, the right FG and the right SPG, the right FG and the left SMG, and the right FG and the left SPG.
Together, these results suggest that character amnesia is associated with decayed orthographic representations (in the orthographic lexicon) and failure in phonology-orthography conversion, resulting in reduced orthographic information being retrieved (into the graphemic buffer) for handwriting execution.
Neural Substrates Associated with Character Amnesia in Chinese Handwriting: A Functional Near-infrared Spectroscopy Study. Journal of Cognitive Neuroscience, pp. 2053-2071, 2025-11-01. doi:10.1162/jocn_a_02346
Isaac R Christian, Samuel A Nastase, Mindy Yu, Kirsten Ziman, Michael S A Graziano
The ability of the brain to monitor its own attention is important for controlling attention. The ability to reconstruct and monitor the attention of others is important for behavioral prediction and therefore interaction with others. Do the same cortical networks participate in constructing a metacognitive representation of attention, whether one's own or someone else's attention? We studied the brain activity of human participants in an fMRI scanner. The participants performed two attention-monitoring tasks. One involved focusing attention on their own breathing and pressing a button when they realized their attention had wandered. In the other, participants watched a video of an actor performing the same focused-attention task, and participants pressed the button if the actor's attention appeared to have wandered. In both cases, we analyzed brain activity just before the button presses, when participants were engaged in metacognition with respect to attention. In the Self condition, activity was obtained in a distinctive set of areas including the TPJ, precuneus, dorsomedial pFC, anterior cingulate, and anterior insula. The activity partly overlapped the default mode network, social cognition network, and salience network. In the Other condition, activity was found in a similar set of areas including the TPJ, precuneus, dorsomedial pFC, anterior cingulate, and anterior insula. These results suggest that there may be a common set of cortical areas that provide an overarching mechanism for metacognition concerning attention, although Self and Other processing are also clearly not identical.
Monitoring Attention in Self and Others. Journal of Cognitive Neuroscience, pp. 2284-2294, 2025-11-01. doi:10.1162/JOCN.a.51
Tzu-Han Zoe Cheng, Victoria Hennessy, Tian Christina Zhao
The mismatch response (MMR) is a critical neural indicator of the discrimination of speech contrasts. Previous research has demonstrated that language experience can affect MMRs, such that MMRs to native speech contrasts differ from those to nonnative speech contrasts. This effect is observed as early as 11-12 months, but not at 6-7 months of age, indicating early learning of speech sounds. Yet, many challenges remain in using the MMR to advance our understanding of speech learning, especially in infants, including prolonged recording time, inefficient use of data, and a lack of reconciliation between MMRs recorded using different technologies (i.e., EEG vs. magnetoencephalography [MEG]). Using an improved recording paradigm and analysis approaches, the current study addressed these challenges by examining (1) whether the MEG-MMR is linked to the well-established EEG-MMR in the same adults, (2) whether our methods capture the difference in MEG-MMR between native and nonnative speech contrasts in adults, and (3) whether they do so in older infants. Results from 18 adults with simultaneous M/EEG demonstrated a high correlation between the MEG-MMR and the EEG-MMR. Additionally, MEG-MMRs to native speech contrasts differed from those to nonnative speech contrasts, replicating spatiotemporal patterns documented in the existing literature. Finally, we replicated this effect in the MEG-MMR in 14 infants aged between 9 and 14 months using the same methods. These findings validate our new methodologies (less than 15 min of recording) for acquiring and analyzing speech-related MMRs across ages, paving the way for studying early language development and improving early detection of language-related disorders.
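The MMR itself is conventionally computed as a deviant-minus-standard difference wave over averaged epochs. A minimal synthetic-data sketch follows; the epoch counts, sampling rate, and effect size are invented, and this is not the authors' recordings or pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy epochs (trials x time): deviants carry an extra deflection near 200 ms.
sfreq, n_times = 500, 250            # 500 ms epochs sampled at 500 Hz
t = np.arange(n_times) / sfreq
standards = rng.normal(0.0, 1.0, size=(200, n_times))
deviants = rng.normal(0.0, 1.0, size=(50, n_times))
deviants += 1.5 * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))  # mismatch bump

# The mismatch response is the deviant-minus-standard difference wave.
mmr = deviants.mean(axis=0) - standards.mean(axis=0)
peak_ms = 1000 * t[np.argmax(mmr)]
print(f"MMR peak near {peak_ms:.0f} ms")
```

Averaging first and then subtracting (rather than subtracting trial-by-trial) is what makes the unequal standard/deviant trial counts typical of oddball paradigms unproblematic.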
Time-efficient Methodology for Robustly Assessing Speech-related Mismatch Responses in Adults and Infants. Journal of Cognitive Neuroscience, pp. 1-16, 2025-10-03. doi:10.1162/JOCN.a.2397
Damian Koevoet, Henry M Jones, Stefan Van der Stigchel, Edward Awh
Extant work establishes a close relationship between spatial attention and working memory (WM) storage. Indeed, spatial representations of memorized items emerge spontaneously, even when space is completely task-irrelevant. Nevertheless, accumulating evidence suggests that the number of stored objects in WM can be tracked independently from the distribution of spatial attention, suggesting that these are separable aspects of attentional control. We examined this issue by analyzing pupillometric data from three change detection experiments (total n = 67) wherein the extent of spatial attention and WM load were manipulated independently. Results showed that pupil size tracked the number of attended locations and the number of memorized objects independently in each experiment. This dissociation held across distinct task designs and was present for both visuospatial and auditory WM. The current findings challenge unitary models of attention and instead demonstrate spatial attention and WM gating to be distinct aspects of voluntary attentional control.
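Because the two factors were manipulated independently, their separate contributions to pupil size can be estimated within a single joint regression. Here is a toy Python/NumPy sketch with simulated trials; the coefficients, noise level, and factor ranges are invented for illustration and are not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy trials: attended locations (1-4) and WM load (1-4) varied independently;
# pupil size carries separate additive contributions from each factor.
n_trials = 400
n_attended = rng.integers(1, 5, size=n_trials)
wm_load = rng.integers(1, 5, size=n_trials)
pupil = 0.4 * n_attended + 0.6 * wm_load + rng.normal(0.0, 0.5, size=n_trials)

# A joint OLS fit recovers independent coefficients for the two factors.
X = np.column_stack([np.ones(n_trials), n_attended, wm_load])
beta, *_ = np.linalg.lstsq(X, pupil, rcond=None)
print(f"attention slope: {beta[1]:.2f}, load slope: {beta[2]:.2f}")
```

Recovering both slopes at once is the statistical analogue of the dissociation claim: each factor modulates pupil size after the other is accounted for.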
Dissociating Spatial Attention and Working Memory Storage with Pupillometry. Journal of Cognitive Neuroscience, pp. 1-13, 2025-10-03. doi:10.1162/JOCN.a.2395
Daniel Zeitlen, Kaixiang Zhuang, Mathias Benedek, Jiang Qiu, Roger Beaty
Complex cognition, such as creativity, relies on cognitive integration of various component processes (e.g., memory, attention, and imagery). Yet, current methods cannot fully capture how the brain integrates cognitive processes during complex tasks. Previous research suggests that communication between functionally dissimilar regions might underlie cognitive integration, allowing for complex cognition. Here, we provide a formal test of this notion using task-based fMRI (n = 28) to assess functional connectivity (FC) among sets of regions ("levels") varying in their functional dissimilarity (defined by differences in resting-state FC profiles) across five tasks hypothesized to vary in cognitive complexity. Each task involved conceptual association and/or idea generation. We found that as task complexity increased, task-FC between regions with greater functional dissimilarity also increased, and the strength of this linear trend positively predicted the relative complexity of tasks. Thus, more complex tasks recruited greater interactions between functionally dissimilar regions. Furthermore, this effect was primarily driven by the default mode and frontoparietal control networks, especially connector hubs within these networks. Task-FC at the highest functional dissimilarity levels was mostly related to metaphor production and bi-association (involving integrating two concepts), followed by generating novel object uses and uncommon association (involving expanding one concept), and was least related to common association (thus, this task was the least complex). Altogether, task-FC across functional dissimilarity levels robustly tracked the cognitive complexity of tasks, supporting the validity of this neural feature for measuring cognitive complexity in a continuous manner and for data-driven tests of theorized differences in task complexity.
{"title":"More Complex Cognitive Tasks Increasingly Connect Functionally Dissimilar Brain Regions.","authors":"Daniel Zeitlen, Kaixiang Zhuang, Mathias Benedek, Jiang Qiu, Roger Beaty","doi":"10.1162/JOCN.a.2396","DOIUrl":"https://doi.org/10.1162/JOCN.a.2396","url":null,"abstract":"<p><p>Complex cognition, such as creativity, relies on cognitive integration of various component processes (e.g., memory, attention, and imagery). Yet, current methods cannot fully capture how the brain integrates cognitive processes during complex tasks. Previous research suggests that communication between functionally dissimilar regions might underlie cognitive integration, allowing for complex cognition. Here, we provide a formal test of this notion using task-based fMRI (n = 28) to assess functional connectivity (FC) among sets of regions (\"levels\") varying in their functional dissimilarity (defined by differences in resting-state FC profiles) across five tasks hypothesized to vary in cognitive complexity. Each task involved conceptual association and/or idea generation. We found that as task complexity increased, task-FC between regions with greater functional dissimilarity also increased, and the strength of this linear trend positively predicted the relative complexity of tasks. Thus, more complex tasks recruited greater interactions between functionally dissimilar regions. Furthermore, this effect was primarily driven by the default mode and frontoparietal control networks, especially connector hubs within these networks. Task-FC at the highest functional dissimilarity levels was mostly related to metaphor production and bi-association (involving integrating two concepts), followed by generating novel object uses and uncommon association (involving expanding one concept), and was least related to common association (thus, this task was the least complex). Altogether, task-FC across functional dissimilarity levels robustly tracked the cognitive complexity of tasks, supporting the validity of this neural feature for measuring cognitive complexity in a continuous manner and for data-driven tests of theorized differences in task complexity.</p>","PeriodicalId":51081,"journal":{"name":"Journal of Cognitive Neuroscience","volume":" ","pages":"1-20"},"PeriodicalIF":3.0,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145259922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fabiola Rosaria Fiorino, Cristina Iani, Sandro Rubichi, Elena Gherri
In mixed-features search tasks, the target-defining feature changes unpredictably across trials. Responses are faster when the same feature is repeated across successive trials. This effect, known as intertrial priming of pop-out (PoP), suggests that the selection of a perceptually salient singleton target is modulated by the properties of the preceding search array. To investigate whether PoP can be observed in touch, we developed a mixed-features search task in which a singleton target was presented simultaneously with three homogeneous distractors to the index and middle fingers of the left and right hands. The target-defining vibrotactile frequency varied across trials (either a high-frequency target among low-frequency distractors or vice versa) so that on half of the trials, the singleton frequency was repeated on successive trials, while on the other half, it was alternated. To investigate the presence and the mechanisms underlying PoP in touch, behavioral responses and ERPs were recorded. Specifically, the N140cc component was used as a marker of spatial selective attention in touch. In line with visual search studies, improved performance in both RTs and accuracy was observed when the singleton target feature was repeated across trials than when it was alternated. Importantly, the N140cc component showed larger amplitudes on repetition compared with change trials, demonstrating that the attentional selection of a tactile target was modulated by PoP. These results demonstrate for the first time that PoP effects also emerge during search for a tactile target.
{"title":"Behavioral and Electrophysiological Evidence for Intertrial Priming of Pop-out in Touch.","authors":"Fabiola Rosaria Fiorino, Cristina Iani, Sandro Rubichi, Elena Gherri","doi":"10.1162/JOCN.a.2400","DOIUrl":"https://doi.org/10.1162/JOCN.a.2400","url":null,"abstract":"<p><p>In mixed-features search tasks, the target-defining feature changes unpredictably across trials. Responses are faster when the same feature is repeated across successive trials. This effect, known as intertrial priming of pop-out (PoP), suggests that the selection of a perceptually salient singleton target is modulated by the properties of the preceding search array. To investigate whether PoP can be observed in touch, we developed a mixed-features search task in which a singleton target was presented simultaneously with three homogeneous distractors to the index and middle fingers of the left and right hands. The target-defining vibrotactile frequency varied across trials (either a high-frequency target among low-frequency distractors or vice versa) so that on half of the trials, the singleton frequency was repeated on successive trials, while on the other half, it was alternated. To investigate the presence and the mechanisms underlying PoP in touch, behavioral responses and ERPs were recorded. Specifically, the N140cc component was used as a marker of spatial selective attention in touch. In line with visual search studies, improved performance in both RTs and accuracy was observed when the singleton target feature was repeated across trials than when it was alternated. Importantly, the N140cc component showed larger amplitudes on repetition compared with change trials, demonstrating that the attentional selection of a tactile target was modulated by PoP. These results demonstrate for the first time that PoP effects also emerge during search for a tactile target.</p>","PeriodicalId":51081,"journal":{"name":"Journal of Cognitive Neuroscience","volume":" ","pages":"1-14"},"PeriodicalIF":3.0,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145349779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}