Finding our ROLE: How and why to reframe essentialist approaches to language
Savithry Namboodiripad, Ethan Kutlu, Anna Babel, Molly Babel, Melissa Baese-Berk, Paras B. Bassuk, Adeli Block, Reinaldo Cabrera Pérez, Matthew T. Carlson, Sita Carraturo, Andrew Cheng, Lauretta S.P. Cheng, Philip Combiths, Ruthe Foushee, Anne Therese Frederiksen, Devin Grammon, Rachel Hayes-Harb, Eve Higby, Kelly Kendro, Elena Koulidobrova, Kelly Elizabeth Wright
Cognition, Volume 271, Article 106444. Pub Date: 2026-06-01; Epub Date: 2026-01-14; DOI: 10.1016/j.cognition.2026.106444
Essentialist categorizations of language users, such as "native speaker," are widely used but lack empirical validity and reinforce social inequities. This article focuses on the "nativeness" construct, critically examining how its centrality in social-scientific research distorts scholarly inquiry, introduces bias in educational and clinical assessments, and perpetuates exclusion in academia. We argue that such labels impose artificial homogeneity, devalue linguistic diversity, and contribute to systemic biases in society. By reifying social divisions, essentialist categorizations can exclude marginalized groups, perpetuate linguistic discrimination, and hinder scientific progress. We advocate for a shift away from essentialist proxies and toward more contextually grounded and empirically driven characterizations of language use. A reflexive and interdisciplinary approach is necessary to dismantle these harmful frameworks and promote more accurate, inclusive, and equitable research. Our argument is relevant not just to the cognitive sciences, but to any scholarship which involves describing or understanding language. Ultimately, rejecting essentialist assumptions will lead to more nuanced understandings of language, identity, and social belonging, fostering both scientific and societal transformation by promoting justice and accuracy across social-scientific disciplines.
Representation of event boundedness in English and Mandarin speakers
Yue Ji, Anna Papafragou
Cognition, Volume 271, Article 106443. Pub Date: 2026-06-01; Epub Date: 2026-01-09; DOI: 10.1016/j.cognition.2026.106443
Event cognition is sensitive to whether an event is bounded (has a well-defined endpoint, e.g., build a sandcastle) or unbounded (lacks such an endpoint, e.g., play with sand). Boundedness interfaces with telicity in language: telic verb phrases denote events that include an inherent or natural endpoint, while atelic verb phrases denote events that lack such an endpoint. Given that languages encode telicity in different ways, could these cross-linguistic differences influence the perception of event boundedness? We address this question by comparing English and Mandarin native speakers. We show that the two groups differ in their use of telicity in event descriptions (Experiment 1) but perform similarly when rating the likelihood of an event having a natural endpoint (Experiment 2) or attending to the temporal structure of bounded vs. unbounded events in a perceptual task (Experiment 3). These findings reveal commonalities in the representation of the temporal profile of events despite cross-linguistic differences.
Beyond phonemic awareness: The alphabetic principle predicts reading acquisition in a nationwide longitudinal study
Paul Gioia, Johannes C. Ziegler, Jerome Deauvieau
Cognition, Volume 271, Article 106457. Pub Date: 2026-06-01; Epub Date: 2026-01-28; DOI: 10.1016/j.cognition.2026.106457
Phoneme awareness (PA) is undoubtedly the most important and well-studied predictor of reading development. Yet, 20 years ago, Castles and Coltheart made the provocative claim that there was no convincing evidence for a causal role of PA in learning to read, because previous studies typically failed to control for pre-reading skills. In the present study, we leveraged a unique opportunity to analyze data from a large-scale longitudinal investigation of reading development conducted nationwide among all first graders in France (N = 810,328 children). We estimated not only the direct effect of PA on reading fluency measured one year later, but also its interaction effects with letter knowledge (LK), knowledge of the alphabetic principle (KAP), and oral comprehension (OC). Our results show that the direct effects of PA on later reading fluency are moderated by OC, LK, and KAP. Specifically, PA contributes to later reading outcomes only among children with strong KAP and good LK and OC. We highlight the central role of KAP as a key predictor that has often been acknowledged in theory but rarely measured in empirical research. These findings indicate that phoneme awareness supports reading development only in the context of sufficient alphabetic knowledge, challenging strong causal accounts of PA in early reading acquisition.
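The moderation pattern described in this abstract can be illustrated with a toy regression: when a predictor's effect depends on another variable, an interaction term in a linear model picks it up. The following is an illustrative sketch with synthetic, noise-free data; the variable names and coefficient values are assumptions, not the study's actual analysis.

```python
import numpy as np

# Synthetic data in which PA predicts later fluency mainly in
# combination with KAP (a moderation pattern).
rng = np.random.default_rng(0)
n = 1000
pa = rng.normal(size=n)    # phoneme awareness (standardized)
kap = rng.normal(size=n)   # knowledge of the alphabetic principle
fluency = 0.1 * pa + 0.5 * kap + 0.4 * pa * kap  # noise-free for clarity

# Design matrix: intercept, main effects, and the PA x KAP interaction.
X = np.column_stack([np.ones(n), pa, kap, pa * kap])
coef, *_ = np.linalg.lstsq(X, fluency, rcond=None)
print(np.round(coef, 3))  # recovers [0, 0.1, 0.5, 0.4]
```

A significant positive interaction coefficient is the statistical signature of "PA helps only when KAP is strong" that the abstract reports.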
Frequency effects in decision-making involving loss minimization
Darrell A. Worthy, Mianzhi Hu
Cognition, Volume 271, Article 106449. Pub Date: 2026-06-01; Epub Date: 2026-01-24; DOI: 10.1016/j.cognition.2026.106449
Recent work provides evidence for frequency effects during decision-making, where less-rewarding options that are presented more frequently are selected more often than more-rewarding options presented less frequently. This is predicted by the Decay but not the Delta reinforcement-learning (RL) model. The Decay model assumes that higher-frequency options are preferred because their past outcomes are more available in memory than those of lower-frequency options. However, most of this research has involved decision-making with gains rather than losses. In loss-minimization scenarios, the Decay model predicts a reversed frequency effect because it assumes greater memory for the losses of more frequently encountered alternatives. We tested this prediction in three experiments and found that the Decay model provides a very poor fit to data in loss-minimization scenarios. In Experiment 2, where participants tried to minimize their expenditures in a hypothetical shopping scenario, we observed a modest frequency effect. In Experiments 1 and 3, where participants were asked to minimize losses as points, without the hypothetical shopping context, frequency effects were attenuated but not reversed. These effects were best accounted for by two novel models: the Prospect-Valence Prediction-Error Decay (PVPE-Decay) model, which assumes relative rather than absolute processing of rewards, and the Delta-Uncertainty model, which assumes aversion to less frequent options that are higher in uncertainty. These results dovetail with recent work showing that people process reward outcomes in a context-dependent manner, and they suggest that smaller losses can be perceived as relative gains when framed in familiar scenarios involving cost minimization.
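The contrast between the Delta and Decay update rules referenced in this abstract can be sketched as follows. This is a minimal illustration of the two rules in their commonly used form, with illustrative parameter values and a hand-built choice sequence; it is not the authors' implementation.

```python
def delta_update(values, choice, reward, alpha=0.1):
    # Delta rule: nudge the chosen option's expectancy toward the
    # obtained reward; unchosen options are left untouched.
    values[choice] += alpha * (reward - values[choice])
    return values

def decay_update(values, choice, reward, decay=0.9):
    # Decay rule: every expectancy decays on every trial, and the
    # full reward is added to the chosen option, so frequently
    # chosen options accumulate value.
    values = [v * decay for v in values]
    values[choice] += reward
    return values

# Option 0 is encountered 3x as often but pays less per choice (5 vs. 10).
delta_vals, decay_vals = [0.0, 0.0], [0.0, 0.0]
for _ in range(50):
    for c in (0, 0, 0, 1):
        r = 5 if c == 0 else 10
        delta_vals = delta_update(delta_vals, c, r)
        decay_vals = decay_update(decay_vals, c, r)

print(delta_vals)  # Delta ends up preferring the larger reward (option 1)
print(decay_vals)  # Decay ends up preferring the frequent option (option 0)
```

The Decay model's preference for the lower-paying but more frequent option is the gain-domain frequency effect; with rewards replaced by losses, the same accumulation works against frequent options, which is the reversed effect the paper tests.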
Who would you save? Children and mothers' life-or-death decisions
Qiongwen Cao, Fan Yang, Haocheng Ma, Jean Decety
Cognition, Volume 271, Article 106468. Pub Date: 2026-06-01; Epub Date: 2026-02-03; DOI: 10.1016/j.cognition.2026.106468
The principle of equal human worth is widely endorsed, yet real-world situations often require trade-offs. This raises a fundamental question: Do individuals truly value all human lives equally from an early age, or do they differentiate based on salient attributes? In a cross-sectional study, children aged 5–10 years (N = 253, 47% female) and their mothers made binary life-or-death choices between two individuals differing in age and sex. Results showed that even the youngest children did not value all lives equally. With age, children increasingly prioritized younger individuals, plausibly reflecting a growing understanding that older people have less time left to live, and showed reduced same-sex ingroup preference. Machine learning models predicted older children's choices more accurately, suggesting that decision-making becomes more systematic and predictable with development. Mothers prioritized younger and female lives, with the strongest female preference emerging when the two individuals differed in sex but not age. Framing also influenced judgment: describing the choice as "saving" versus "leaving behind" altered the strength of the preference for younger lives. These patterns align with social norms and gender stereotypes (e.g., protection of "vulnerable" groups, gendered expectations of helpfulness and susceptibility to harm). Evolutionary frameworks, such as reproductive value and parental investment, offer potential explanations for why such norms and stereotypes seem pervasive. Overall, the findings indicate that the valuation of human lives is initially not egalitarian, that it becomes increasingly structured across childhood, and that adult priorities may arise from the interplay between evolved caregiving heuristics and fairness norms.
Context-dependent effects of branches in decisions under risk
Ioannis Evangelidis
Cognition, Volume 271, Article 106442. Pub Date: 2026-06-01; Epub Date: 2026-01-30; DOI: 10.1016/j.cognition.2026.106442
This paper investigates how the number of branches in a prospect influences decision makers' preferences. I propose that individuals may use differences in branch number as a justification when choosing between prospects, but that this heuristic applies only when multiple probabilistic options are available for comparison. Accordingly, the impact of branch number on choice depends on decision context, particularly the alternatives presented alongside the target prospect. In choices between two prospects offering probabilistic gains, preference for a prospect increases when it offers more gain branches than the alternative. For example, more people choose a target prospect offering a 20% chance to win $14 and a 20% chance to win $15 (otherwise $0) over an alternative offering a 60% chance to win $10 (otherwise $0) than when the target offers a 40% chance to win $15 (otherwise $0). However, the effect disappears when the alternative is a sure gain and reverses when the prospect is presented in isolation. The data also indicate rapidly diminishing sensitivity: preference increases when a prospect's branches rise from one to two while the alternative has a single branch, but additional branches yield little or no further gain in attractiveness. Additional studies examined moderators of the effect and extended the findings to losses and to decisions involving valuations of human lives. Together, these results challenge existing models of risky choice by demonstrating the context dependence of branch effects, and they carry practical implications for financial and policy decisions under uncertainty.
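The prospects in the worked example above are closely matched in expected value, which is what makes the branch effect diagnostic; the arithmetic can be made explicit with a quick check (payoffs and probabilities taken from the abstract, helper name assumed):

```python
def expected_value(branches):
    # branches: (probability, payoff) pairs; the remaining
    # probability mass pays $0.
    return sum(p * x for p, x in branches)

two_branch_target = [(0.20, 14), (0.20, 15)]  # 20% $14, 20% $15
one_branch_target = [(0.40, 15)]              # 40% $15
alternative = [(0.60, 10)]                    # 60% $10

print(round(expected_value(two_branch_target), 2))  # 5.8
print(round(expected_value(one_branch_target), 2))  # 6.0
print(round(expected_value(alternative), 2))        # 6.0
```

The two-branch target actually has slightly lower expected value than the one-branch target, so the greater preference for it cannot be explained by value alone; branch number itself is doing the work.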
I'll believe it unless it's too absurd: Spontaneous visual perspective-taking as prior-based heuristic inference
Xucong Hu, Yitong Zheng, Qinyi Hu, Hui Chen, Mowei Shen, Jifan Zhou
Cognition, Volume 271, Article 106478. Pub Date: 2026-06-01; Epub Date: 2026-02-13; DOI: 10.1016/j.cognition.2026.106478
The underlying mechanism of visual perspective-taking (VPT)—the ability to represent what others see—remains contested. Perceptual simulation theory proposes that VPT involves reconstructing others' visual experiences, whereas heuristic accounts argue that it relies on symbolic inference grounded in naïve optics. Evidence for heuristics largely comes from explicit report tasks, leaving open whether spontaneous (implicit) VPT in an agent-irrelevant task is driven by the same mechanism. A further possibility is that apparent "simulation failures" arise because observers lack prior visual information about what the other sees from their viewpoint. Across two experiments, participants performed an agent-irrelevant line-length judgment task while receiving plausible, absent, or implausible prior visual information from the agent's viewpoint. Experiment 1 showed a robust perspective-consistent bias under plausible priors, no bias without priors, and a weaker bias under implausible priors. A control experiment ruled out priming. Experiment 2 parametrically varied implausibility in a Ponzo-style layout and revealed a boundary condition: priors ranging from plausible to moderately implausible continued to bias judgments, whereas highly implausible priors were discounted. These results support a bounded, resource-rational heuristic account in which others' visual information acts as plausibility-weighted cues integrated with one's own visual input, rather than being reconstructed via perceptual simulation.
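The "plausibility-weighted cue" idea in this abstract can be sketched as a weighted average in which the weight on the other agent's visual information shrinks as that information becomes implausible. This is a minimal illustration of the account's logic, not the authors' model; the linear weighting function and all numeric values are assumptions.

```python
def integrate(own_estimate, other_cue, plausibility):
    # plausibility in [0, 1] acts as the weight on the agent's cue;
    # highly implausible cues (plausibility near 0) are discounted,
    # so the judgment falls back on the observer's own input.
    w = max(0.0, min(1.0, plausibility))
    return (1.0 - w) * own_estimate + w * other_cue

own = 10.0        # observer's own line-length estimate (arbitrary units)
agent_cue = 14.0  # length suggested by the agent's viewpoint

print(integrate(own, agent_cue, 0.6))   # plausible prior: strong perspective-consistent bias
print(integrate(own, agent_cue, 0.05))  # highly implausible prior: cue largely discounted
```

Under this scheme a plausible cue pulls the judgment well away from the observer's own estimate, while a highly implausible one barely moves it, matching the graded bias and the boundary condition reported in Experiments 1 and 2.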
Pub Date : 2026-06-01Epub Date: 2026-02-07DOI: 10.1016/j.cognition.2026.106467
Mandy Cartner , Matthew Kogan , Nikolas Webster , Matthew Wagers , Ivy Sichel
A central question about our shared capacity for language is how it is integrated with other cognitive systems. One important debate focuses on the extent to which the form of linguistic expressions is grounded in their communicative function: Can all constraints on linguistic form be attributed to the way constructions package information, or is linguistic form autonomous of meaning and function? One area of disagreement involves islands: phrases which block the formation of long-distance filler-gap dependencies (Ross, 1967). Grammatical subjects are considered islands, since questioning a sub-part of a subject results in an ill-formed sentence, e.g., “Which topic did the article about inspire you?”. Autonomous syntactic approaches to islands attribute this ungrammaticality to the abstract movement dependency between the wh-phrase and the subject-internal position with which it is associated. An alternative developed in Abeillé et al. (2020) suggests that subjects' island status is specific to the information structure of wh-questions, suggesting that subjects are not islands for movement, but for focusing, due to their discourse-backgroundedness. This predicts that other constructions that involve movement but not focusing should not create a subject island effect. We test this in three acceptability studies, using a factorial design to isolate subject island violations across three constructions: wh-questions, relative clauses and topicalization. We find a subject island effect in each case, despite only wh-questions introducing what Abeillé et al. (2020) call “a clash in information structure”. We argue that this motivates an account of islands in terms of syntactic representations shared across constructions, independent of communicative function.
{"title":"Subject islands do not reduce to construction-specific discourse function","authors":"Mandy Cartner, Matthew Kogan, Nikolas Webster, Matthew Wagers, Ivy Sichel","doi":"10.1016/j.cognition.2026.106467","journal":"Cognition","volume":"271","pages":"Article 106467","publicationDate":"2026-06-01"}
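The factorial logic used in such acceptability studies can be sketched numerically: crossing structure (island vs. non-island) with dependency length (short vs. long) and taking a differences-in-differences score isolates a superadditive island effect from the independent costs of each factor. The ratings below are invented for illustration only; the study's actual items, scales, and values are not reproduced here.

```python
# Hypothetical z-scored acceptability means for one construction
# (values invented for illustration, not taken from the study).
ratings = {
    ("non_island", "short"): 0.9,
    ("non_island", "long"): 0.5,
    ("island", "short"): 0.8,
    ("island", "long"): -0.6,
}

def island_dd(r):
    """Differences-in-differences score for a 2x2 factorial design.

    Crosses structure (island vs. non-island) with dependency length
    (short vs. long). A positive score means the penalty for a long
    dependency is superadditively larger inside the island structure,
    i.e., an island effect beyond the additive costs of each factor.
    """
    cost_outside = r[("non_island", "short")] - r[("non_island", "long")]
    cost_inside = r[("island", "short")] - r[("island", "long")]
    return cost_inside - cost_outside

print(round(island_dd(ratings), 2))  # prints 1.0
```

A score near zero would indicate that the long-dependency penalty is the same inside and outside the island structure, i.e., no island effect beyond additive costs.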
Pub Date : 2026-06-01Epub Date: 2026-02-09DOI: 10.1016/j.cognition.2026.106480
Litian Chen , Freek van Ede , Chris Jungerius , Heleen A. Slagter
Spatial selective attention is typically thought to act as a sensory filter: prioritizing the processing of relevant information at a particular location in space over that of irrelevant information. Research using dynamic setups, rather than standard static laboratory setups with seated observers, however, shows that spatial selective attention does not simply facilitate sensory processing at a particular location (where), but also involves the planning of how to (covertly) sample that information from the agent's perspective. That is, spatial selective attention is constrained by sensorimotor processing and includes an action component. Here we ask whether this extends to the flipside of target selection: whether the suppression of irrelevant distractors is similarly viewer dependent. In three experiments (one preregistered), participants performed an additional-singleton visual search task in which a salient distractor could occur more often at one of the search locations (unknown to the participant). Critically, participants conducted the visual search on a monitor positioned flat on a tabletop so that we could manipulate their standing position. This enabled us to disentangle whether implicit distractor-location learning is anchored in world coordinates or is instead tied to the viewer, incorporating how one can suppress attentional sampling in space from one's own viewpoint to prevent distraction. Across all three experiments, we found that implicit distractor-location learning is viewer dependent when embedded in active behavior. These findings show that learning to inhibit distractors cannot always be abstracted away from the agent and from how the agent can suppress sampling of the world from their own perspective.
{"title":"Grounding distractor inhibition in action control: Implicit distractor-location learning is viewer dependent","authors":"Litian Chen, Freek van Ede, Chris Jungerius, Heleen A. Slagter","doi":"10.1016/j.cognition.2026.106480","journal":"Cognition","volume":"271","pages":"Article 106480","publicationDate":"2026-06-01"}
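The key manipulation above, changing the participant's standing position around a flat tabletop display, can be illustrated with a simple reference-frame transform: a fixed world location maps to different viewer-relative locations depending on the viewing angle. The coordinates and angle below are assumptions for illustration, not the study's actual display geometry.

```python
import math

def world_to_viewer(x, y, viewpoint_deg):
    """Rotate a world-frame display location into the viewer's frame.

    If implicit distractor suppression is anchored in viewer coordinates,
    the learned (suppressed) location should follow this transform when
    the participant's standing position changes; if it is anchored in
    world coordinates, it should stay put. Purely illustrative geometry.
    """
    theta = math.radians(viewpoint_deg)
    return (x * math.cos(theta) + y * math.sin(theta),
            -x * math.sin(theta) + y * math.cos(theta))

# A location "above" the display center in world coordinates, (0, 1),
# viewed from the opposite side of the table (a 180-degree change in
# standing position), appears "below" center in the viewer's frame.
vx, vy = world_to_viewer(0.0, 1.0, 180.0)
```

Viewer-dependent learning predicts that the suppressed location travels with this transform across standing positions, which is what the three experiments found.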
Pub Date : 2026-06-01Epub Date: 2026-01-16DOI: 10.1016/j.cognition.2026.106447
Julie Y.L. Chow , Kelly G. Garner , Daniel Pearson , Jan Theeuwes , Mike E. Le Pelley
Demonstrations of information-seeking behaviour suggest that attention often acts in an exploitative way, prioritising stimuli that provide diagnostic information about upcoming events over stimuli associated with uncertainty. However, recent evidence from studies of attentional capture in visual search shows the opposite pattern: automatic prioritisation of items associated with reward uncertainty over diagnostic stimuli. We hypothesise that this uncertainty-modulated attentional capture (UMAC) effect reflects ‘attention for learning’: that is, exploration of potential sources of new information. Here we investigated whether UMAC arises because immediate provision of reward feedback in prior studies rendered advance information redundant, attenuating exploitation of diagnostic items and promoting exploration. Accordingly, increasing the duration of anticipated uncertainty (and hence the value of advance information that allows us to escape uncertainty earlier) should promote prioritisation of diagnostic cues and lead to patterns of attentional exploitation. In two eye-tracking experiments, we compared attentional capture by a cue providing diagnostic reward information and a cue signalling uncertain reward, while manipulating the delay between response and feedback (i.e., the duration of anticipated uncertainty that advance information could forestall). We found a UMAC effect in all conditions: regardless of response–feedback delay, uncertain stimuli were more likely to capture attention than diagnostic stimuli. These results suggest that prioritisation of uncertainty is a robust pattern of behaviour in this task.
{"title":"Delaying reward feedback does not increase the influence of information on attentional priority in visual search","authors":"Julie Y.L. Chow, Kelly G. Garner, Daniel Pearson, Jan Theeuwes, Mike E. Le Pelley","doi":"10.1016/j.cognition.2026.106447","journal":"Cognition","volume":"271","pages":"Article 106447","publicationDate":"2026-06-01"}
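The manipulation's logic, that advance information is worth more when it forestalls a longer period of uncertainty, can be sketched as a toy expected-value computation. This is an illustrative assumption about the design's rationale, not the authors' formal model.

```python
def advance_info_value(feedback_delay_s: float, cue_is_diagnostic: bool) -> float:
    """Toy value of attending a cue, assuming a diagnostic cue is worth
    the duration of anticipated uncertainty it lets the observer escape,
    while an uncertain cue forestalls nothing. Illustrative only.
    """
    return feedback_delay_s if cue_is_diagnostic else 0.0

# Longer response-feedback delays should make diagnostic cues more
# valuable, predicting exploitation at long delays; yet capture by
# uncertain cues persisted across all delay conditions.
short_delay_value = advance_info_value(0.3, True)
long_delay_value = advance_info_value(3.0, True)
```

On this rationale, exploitation of diagnostic cues should grow with delay; the observed delay-insensitive UMAC effect is what motivates the authors' distinction between implicit, automatic exploration and volitional exploitation.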