Pub Date: 2026-01-29 | DOI: 10.1016/j.cognition.2025.106415
Lalit Pandey, Samantha M.W. Wood, Benjamin Cappell, Justin N. Wood
Orientation selectivity—the representation of oriented edges—is a hallmark of biological vision, shared across mammals, birds, and reptiles. However, the origins of orientation selectivity are unknown. Is orientation selectivity predetermined, with genes instructing the development of edge representations? Or is orientation selectivity the product of blind evolution-like (variation + selection) fitting during prenatal development? Here, we provide evidence supporting the fitting account. Using generic image-computable fitting models (transformers), we show that orientation selectivity develops when fitting systems adapt to prenatal experiences. Our models started from scratch, with no innate orientation selectivity and no hardcoded priors about lines, objects, or space. The models were then trained with a biologically plausible fitting objective (unsupervised temporal learning) and biologically plausible prenatal data (retinal waves). Despite starting from scratch, the models spontaneously developed robust orientation selectivity. This result generalized across architecture sizes, training conditions, and retinal waves from different species. Edge representations develop when domain-general fitting mechanisms adapt to prenatal experiences, supporting fitting theories of learning and development.
Title: Generic fitting models learn edge representations from prenatal retinal waves (Cognition, vol. 271, Article 106415)
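The abstract names the training objective only as "unsupervised temporal learning." One common formalization of that family of objectives is temporal coherence: embeddings of temporally adjacent inputs are pulled together while a temporally distant input is pushed apart. The sketch below is a generic illustration of that idea, not the paper's actual loss; the function name, the margin value, and the toy vectors are all hypothetical.

```python
def temporal_coherence_loss(z_t, z_next, z_rand, margin=1.0):
    # Hinge-style temporal objective (illustrative, not the paper's method):
    # embeddings of adjacent frames are pulled together, while a distant
    # frame is pushed at least `margin` away in squared distance.
    pos = sum((a - b) ** 2 for a, b in zip(z_t, z_next))   # adjacent: minimize
    neg = sum((a - b) ** 2 for a, b in zip(z_t, z_rand))   # distant: repel
    return pos + max(0.0, margin - neg)

# Adjacent frames that already embed nearby incur almost no loss,
# provided the distant frame is well separated.
z = [0.2, 0.8]
print(round(temporal_coherence_loss(z, [a + 0.01 for a in z],
                                    [a + 5.0 for a in z]), 4))  # 0.0002
```

With real retinal-wave frames, `z_t` and `z_next` would be a network's embeddings of consecutive frames; minimizing a loss of this general shape over many frames is what drives feature learning in temporal-coherence approaches.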
Pub Date: 2026-01-28 | DOI: 10.1016/j.cognition.2026.106457
Paul Gioia, Johannes C. Ziegler, Jerome Deauvieau
Phoneme awareness (PA) is undoubtedly the most important and well-studied predictor of reading development. Yet, 20 years ago, Castles and Coltheart made the provocative claim that there was no convincing evidence for a causal role of PA in learning to read, because previous studies typically failed to control for pre-reading skills. In the present study, we leveraged a unique opportunity to analyze data from a large-scale longitudinal investigation of reading development conducted nationwide among all first graders in France (N = 810,328 children). We estimated not only the direct effect of PA on reading fluency measured one year later, but also its interaction effects with letter knowledge (LK), knowledge of the alphabetic principle (KAP), and oral comprehension (OC). Our results show that the direct effects of PA on later reading fluency are moderated by OC, LK, and KAP. Specifically, PA contributes to later reading outcomes only among children with strong KAP and good LK and OC. We highlight the central role of KAP as a key predictor that has often been acknowledged in theory but rarely measured in empirical research. These findings indicate that phoneme awareness supports reading development only in the context of sufficient alphabetic knowledge, challenging strong causal accounts of PA in early reading acquisition.
Title: Beyond phonemic awareness: The alphabetic principle predicts reading acquisition in a nationwide longitudinal study (Cognition, vol. 271, Article 106457)
Pub Date: 2026-01-28 | DOI: 10.1016/j.cognition.2026.106454
Iris Wiegand, Igor S. Utochkin, Ava Mitra, Chia-Chien Wu, Jeremy M. Wolfe
This study investigated age differences in precise knowledge and in imprecise knowledge (awareness) of multiple moving visual objects, measured by Multiple Identity Tracking (MIT) and Multiple Object Awareness (MOA) capacities, respectively, in a multiple object tracking task. Experiment 1 demonstrated a significant decline in both capacities in older observers (65–80 years) compared to younger observers (18–44 years). Experiment 2 showed that age-related declines in MIT and MOA were linear across the adult lifespan (18–76 years).
Additionally, we used computational models to test whether age effects could be explained by one common signal-strength factor (d') or by a dual-process model with an additional recollection parameter (R). Our results indicate that a detailed, recollection-based object-location representation (R) plays only a small role in tracking many objects, and this factor does not vary with observers' age. For most observers, a single signal-strength parameter (d') explained behaviour best, and this parameter declined significantly with observers' age. This suggests that reduced sensitivity likely impairs older adults' ability to discriminate and clearly represent visual objects, resulting in both lower MIT and MOA capacities.
Title: A common signal-strength factor limits awareness and precise knowledge of multiple moving objects across the adult lifespan (Cognition, vol. 271, Article 106454)
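As a point of reference for the signal-strength factor discussed above: in the standard equal-variance signal-detection model, sensitivity d' is the difference of the z-transformed hit and false-alarm rates. A minimal stdlib sketch (the example rates are hypothetical, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    # Standard signal-detection sensitivity: z(hits) - z(false alarms),
    # where z is the inverse CDF of the standard normal distribution.
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates illustrating a more vs. a less sensitive observer:
print(round(d_prime(0.85, 0.15), 2))  # 2.07
print(round(d_prime(0.70, 0.30), 2))  # 1.05
```

A lower d' means the internal signal and noise distributions overlap more, so the observer discriminates targets from distractors less reliably; in the study's framing, an age-related drop in this single parameter would lower both MIT and MOA capacities at once.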
Pub Date: 2026-01-28 | DOI: 10.1016/j.cognition.2026.106456
Karen C. Levush, Mary DePascale, Jenna Alton, Lucas Payne Butler
Children and adults alike tend to rely on majority opinion to decide what is true. However, in many circumstances we are faced with contradictory explanations for phenomena, each shared by different consensus groups, with little knowledge about how each group's opinion was formed to help guide our decision. When a phenomenon has multiple competing explanations—such as why a species exhibits an unusual behavior or what caused a historical event—we must evaluate not only what different groups believe, but why those groups have come to different conclusions. In such cases, we may need to rely on what we know about the consensus group members themselves, including their social identities and relations with one another. Here we present three studies (N = 288 5- to 9-year-old children, 84 adults) investigating how children use the social composition of consensus groups (homogeneous vs. diverse social group membership; distant vs. close proximity) to select which consensus explanation to seek, whether this varies as a function of the type of explanation sought (natural vs. cultural phenomena), and how children reason about these decisions. Our findings suggest increasing sophistication across childhood, with children increasingly coming to understand how social composition indicates more or less independent experiences leading to individuals' shared beliefs. This research provides preliminary insight into how children come to develop an increasing appreciation for the epistemic implications of social relations between consensus members, as reflected both by their choices of whose testimony to seek out and their explicit justifications for those choices.
Title: More than just agreement: Children's developing understanding that the power of consensus stems from independent experiences (Cognition, vol. 271, Article 106456)
Pub Date: 2026-01-27 | DOI: 10.1016/j.cognition.2026.106452
M. Houbben, G. Vannuscorps
How does the brain transform retinal information into representations of oriented objects? The most comprehensive computational explanation to date – the coordinate-system hypothesis of orientation representation – proposes that this transformation relies on the computation of four parameters that jointly define the relationship between a shape and its environment: axis correspondence, polarity correspondence, tilt direction, and tilt magnitude. The goal of this research was to investigate whether these parameters are computed in parallel or serially and, if so, in which order. To do so, we conducted three same/different experiments in which targets and probes could differ by either one of two parameters (A and B) or both (A + B). Under the assumption that response times in such tasks reflect the rate at which evidence for a difference is accumulated, the conjunction condition (A + B) should result in faster response times if the two parameters (A and B) are processed in parallel. In contrast, if the two parameters are processed serially, response times for A + B should be equivalent to those for the first parameter (e.g., A) and faster than those for the second parameter (B). In this framework, the results of the three experiments suggest that axis correspondence is computed first, followed by all the other parameters, computed in parallel.
Title: The computational dynamics of shape orientation perception (Cognition, vol. 271, Article 106452)
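The response-time logic described above can be made concrete with a toy race simulation: if evidence for each differing parameter accumulates independently (parallel processing), the A + B condition behaves like the minimum of two finishing times and is faster than either difference alone; if parameters are checked in a fixed order (serial processing), a second difference adds no time. The sketch below assumes exponentially distributed finishing times for simplicity; the rates are arbitrary illustrations, not values fitted to the study.

```python
import random

random.seed(1)

def rt_parallel(rates):
    # Each differing parameter races independently; the first accumulator
    # to finish triggers the "different" response (min of finish times).
    return min(random.expovariate(r) for r in rates)

def rt_serial(rates):
    # Parameters checked in a fixed order: the first differing parameter
    # triggers the response, so a second difference adds no time.
    return random.expovariate(rates[0])

def mean_rt(fn, rates, n=50000):
    return sum(fn(rates) for _ in range(n)) / n

rate = 2.0  # hypothetical accumulation rate, identical for A and B
print(round(mean_rt(rt_parallel, [rate]), 2))        # A alone: ~0.50
print(round(mean_rt(rt_parallel, [rate, rate]), 2))  # parallel A+B: ~0.25, faster
print(round(mean_rt(rt_serial, [rate, rate]), 2))    # serial A+B: ~0.50, same as A alone
```

This is exactly the qualitative contrast the experiments exploit: a speed-up in the conjunction condition diagnoses parallel computation, while equivalence with the first parameter diagnoses serial computation.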
Pub Date: 2026-01-24 | DOI: 10.1016/j.cognition.2026.106449
Darrell A. Worthy, Mianzhi Hu
Recent work provides evidence for frequency effects during decision-making, where less-rewarding options that are presented more frequently are selected more often than more-rewarding options presented less frequently. This is predicted by the Decay but not the Delta reinforcement-learning (RL) model. The Decay model assumes that higher-frequency options are preferred because their past outcomes are more available in memory than those of lower-frequency options. However, most of this research has involved decision-making with gains, rather than losses. In loss-minimization scenarios, the Decay model predicts a reversed frequency effect because it assumes greater memory for the losses of more frequently encountered alternatives. We tested this prediction in three experiments and found that the Decay model provides a very poor fit to data in loss-minimization scenarios. In Experiment 2, where participants tried to minimize their expenditures in a hypothetical shopping scenario, we observed a modest frequency effect. In Experiments 1 and 3, where participants were asked to minimize losses as points, without the hypothetical shopping scenario context, frequency effects were attenuated, but not reversed. These effects were best accounted for by two novel models: the Prospect-Valence Prediction-Error Decay (PVPE-Decay) model, which assumes relative rather than absolute processing of rewards, and the Delta-Uncertainty model, which assumes an aversion to less frequent options that are higher in uncertainty. These results dovetail with recent work showing that people process reward outcomes in a context-dependent manner, and they suggest that smaller losses can be perceived as relative gains if framed in familiar scenarios involving cost minimization.
Title: Frequency effects in decision-making involving loss minimization (Cognition, vol. 271, Article 106449)
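The two update rules contrasted above have standard textbook forms, sketched below; the parameter values and choice schedule are illustrative, not the paper's fits. Under the Decay rule, a frequently chosen low-reward option can end up with a larger accumulated value than a rarely chosen high-reward option (the frequency effect); with negative outcomes, the same arithmetic accumulates more negative value for the frequent option, which is the reversal the Decay model predicts for losses.

```python
def delta_update(values, choice, reward, alpha=0.1):
    # Delta rule: the chosen option's value moves toward its outcome;
    # unchosen options are untouched, so choice frequency per se is
    # irrelevant to the asymptotic values.
    values[choice] += alpha * (reward - values[choice])
    return values

def decay_update(values, choice, reward, decay=0.8):
    # Decay rule: all accumulated values decay each trial and the chosen
    # option adds its full outcome, so frequently sampled options keep
    # their outcomes more "available" in memory.
    values = [decay * v for v in values]
    values[choice] += reward
    return values

# A frequently chosen low-reward option vs. a rarely chosen high-reward one:
v = [0.0, 0.0]
for t in range(200):
    choice = 0 if t % 4 else 1          # option 0 picked 3 of every 4 trials
    v = decay_update(v, choice, 2.0 if choice == 0 else 3.0)
print(v[0] > v[1])  # True: the frequency effect the Decay model predicts
```

Flipping the outcomes to -2.0 and -3.0 in the same loop makes the frequent option accumulate the more negative value, illustrating why the Decay model predicts a reversed frequency effect for losses, the prediction the experiments above put to the test.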
Pub Date: 2026-01-22 | DOI: 10.1016/j.cognition.2026.106453
Martina Fanghella, Camilla F. Colombo, Fabio Aurelio D’Asaro, Maria Teresa Pascarelli, Guido Barchiesi, Marco Rabuffetti, Maurizio Ferrarin, Francesco Guala, Corrado Sinigaglia
Successful coordination often requires integrating strategic reasoning with real-time observations of others' actions, yet how humans resolve conflicts between these information sources remains unclear. This study aimed to fill this gap by examining how people coordinate in a strategic game when observing partial kinematic information from their partner's actions. Participants played a HI-LO game with a virtual partner, coordinating their payoff choices based on the initial portion (10–40% of movement length) of their partner's grasping movements toward an invisible large or small target. Hand movements were presented as schematic animations, with partners grasping targets linked to higher or lower payoffs across two configurations. Participants relied exclusively on kinematic cues from hand shape changes in maximum grip aperture to infer their partner's choices. We found that although participants typically favored the higher payoff in line with rational game-theoretic expectations, they revised these expectations whenever the partial kinematic cues suggested otherwise. When early grip aperture changes indicated the partner was reaching for a large target associated with a lower payoff, participants revised their preference for higher payoffs, achieving high coordination success. These findings show that people prioritize kinematic evidence about others' actions over abstract assumptions about rational payoff maximization. Even very early movement cues can shift beliefs about what a rational agent is likely to choose, highlighting the central role of action perception in strategic coordination.
Title: Rational expectations and kinematic information in coordination games (Cognition, vol. 271, Article 106453)
Pub Date: 2026-01-16 | DOI: 10.1016/j.cognition.2026.106447
Julie Y.L. Chow, Kelly G. Garner, Daniel Pearson, Jan Theeuwes, Mike E. Le Pelley
Demonstrations of information-seeking behaviour suggest that attention often acts in an exploitative way, prioritising stimuli that provide diagnostic information about upcoming events over stimuli associated with uncertainty. However, recent evidence from studies of attentional capture in visual search shows an opposite pattern: automatic prioritisation of items associated with reward uncertainty over diagnostic stimuli. We hypothesise that this uncertainty-modulated attentional capture (UMAC) effect reflects ‘attention for learning’: that is, exploration of potential sources of new information. Here we investigated whether UMAC arises because immediate provision of reward feedback in prior studies rendered advance information redundant, attenuating exploitation of diagnostic items and promoting exploration. Accordingly, increasing the duration of anticipated uncertainty (and hence the value of advance information that allows us to escape uncertainty earlier) should promote prioritisation of diagnostic cues and lead to patterns of attentional exploitation. In two eye-tracking experiments, we compared attentional capture by a cue providing diagnostic reward information and a cue signalling uncertain reward, while manipulating the delay between response and feedback (i.e., the duration of anticipated uncertainty that advance information could forestall). We found a UMAC effect in all conditions: regardless of response–feedback delay, uncertain stimuli were more likely to capture attention than diagnostic stimuli. These results suggest that prioritisation of uncertainty is a robust pattern of behaviour in this task. Synthesising current and previous findings, we suggest that different modes of attentional information-seeking may reflect qualitative task differences, with exploration operating at an implicit, automatic level, and exploitation resulting from top-down, volitional processes.
Title: Delaying reward feedback does not increase the influence of information on attentional priority in visual search (Cognition, vol. 271, Article 106447)
Pub Date : 2026-01-16DOI: 10.1016/j.cognition.2026.106445
Margaret Kandel , Nan Li , Jesse Snedeker
Interactive processing is a central feature of human cognition, whereby top-down and bottom-up pathways pass information between different levels of representation. In this study, we investigated how these interactive mechanisms develop by asking whether interactive processing arises early in life or emerges later, with experience or as the brain matures. In a visual world eye-tracking study, we tested whether four- and five-year-old children show evidence of top-down interactivity during language comprehension. We found that young children, like adults, can use top-down cues from the sentence context to constrain processing of the bottom-up language input during spoken word recognition, allowing them to avoid activating word candidates that initially match the input but are semantically incongruent with the context. Furthermore, we found that the children used top-down cues to pre-activate the phonological representations of predictable words before they appeared in the input. These findings illustrate that the pathways necessary for interactive processing are robust and active by early childhood, suggesting that the mechanisms of interactive processing are intrinsic and fundamental properties of the mind's architecture.
{"title":"Evidence for top-down constraints and form-based prediction in 4–5 year-olds' lexical processing","authors":"Margaret Kandel , Nan Li , Jesse Snedeker","doi":"10.1016/j.cognition.2026.106445","DOIUrl":"10.1016/j.cognition.2026.106445","url":null,"abstract":"<div><div>Interactive processing is a central feature of human cognition, whereby top-down and bottom-up pathways pass information between different levels of representation. In this study, we investigated how these interactive mechanisms develop by asking whether interactive processing arises early in life or emerges later, with experience or as the brain matures. In a visual world eye-tracking study, we tested whether four- and five-year-old children show evidence of top-down interactivity during language comprehension. We found that young children, like adults, can use top-down cues from the sentence context to constrain processing of the bottom-up language input during spoken word recognition, allowing them to avoid activating word candidates that initially match the input but are semantically incongruent with the context. Furthermore, we found that the children used top-down cues to pre-activate the phonological representations of predictable words before they appeared in the input. 
These findings illustrate that the pathways necessary for interactive processing are robust and active by early childhood, suggesting that the mechanisms of interactive processing are intrinsic and fundamental properties of the mind's architecture.</div></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":"271 ","pages":"Article 106445"},"PeriodicalIF":2.8,"publicationDate":"2026-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145979899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-14DOI: 10.1016/j.cognition.2026.106444
Savithry Namboodiripad , Ethan Kutlu , Anna Babel , Molly Babel , Melissa Baese-Berk , Paras B. Bassuk , Adeli Block , Reinaldo Cabrera Pérez , Matthew T. Carlson , Sita Carraturo , Andrew Cheng , Lauretta S.P. Cheng , Philip Combiths , Ruthe Foushee , Anne Therese Frederiksen , Devin Grammon , Rachel Hayes-Harb , Eve Higby , Kelly Kendro , Elena Koulidobrova , Kelly Elizabeth Wright
Essentialist categorizations of language users, such as native speaker, are widely used but lack empirical validity and reinforce social inequities. This article focuses on the nativeness construct, critically examining how its centrality in social-scientific research distorts scholarly inquiry, introduces bias in educational and clinical assessments, and perpetuates exclusion in academia. We argue that such labels impose artificial homogeneity, devalue linguistic diversity, and contribute to systemic biases in society. By reifying social divisions, essentialist categorizations can exclude marginalized groups, perpetuate linguistic discrimination, and hinder scientific progress. We advocate for a shift away from essentialist proxies and toward more contextually grounded and empirically driven characterizations of language use. A reflexive and interdisciplinary approach is necessary to dismantle these harmful frameworks and promote more accurate, inclusive, and equitable research. Our argument is relevant not just to the cognitive sciences, but to any scholarship which involves describing or understanding language. Ultimately, rejecting essentialist assumptions will lead to more nuanced understandings of language, identity, and social belonging, fostering both scientific and societal transformation by promoting justice and accuracy across social-scientific disciplines.
{"title":"Finding our ROLE: How and why to reframe essentialist approaches to language","authors":"Savithry Namboodiripad , Ethan Kutlu , Anna Babel , Molly Babel , Melissa Baese-Berk , Paras B. Bassuk , Adeli Block , Reinaldo Cabrera Pérez , Matthew T. Carlson , Sita Carraturo , Andrew Cheng , Lauretta S.P. Cheng , Philip Combiths , Ruthe Foushee , Anne Therese Frederiksen , Devin Grammon , Rachel Hayes-Harb , Eve Higby , Kelly Kendro , Elena Koulidobrova , Kelly Elizabeth Wright","doi":"10.1016/j.cognition.2026.106444","DOIUrl":"10.1016/j.cognition.2026.106444","url":null,"abstract":"<div><div>Essentialist categorizations of language users, such as <span>native speaker</span>, are widely used but lack empirical validity and reinforce social inequities. This article focuses on the <span>nativeness</span> construct, critically examining how its centrality in social-scientific research distorts scholarly inquiry, introduces bias in educational and clinical assessments, and perpetuates exclusion in academia. We argue that such labels impose artificial homogeneity, devalue linguistic diversity, and contribute to systemic biases in society. By reifying social divisions, essentialist categorizations can exclude marginalized groups, perpetuate linguistic discrimination, and hinder scientific progress. We advocate for a shift away from essentialist proxies and toward more contextually grounded and empirically driven characterizations of language use. A reflexive and interdisciplinary approach is necessary to dismantle these harmful frameworks and promote more accurate, inclusive, and equitable research. Our argument is relevant not just to the cognitive sciences, but to any scholarship which involves describing or understanding language. 
Ultimately, rejecting essentialist assumptions will lead to more nuanced understandings of language, identity, and social belonging, fostering both scientific and societal transformation by promoting justice and accuracy across social-scientific disciplines.</div></div>","PeriodicalId":48455,"journal":{"name":"Cognition","volume":"271 ","pages":"Article 106444"},"PeriodicalIF":2.8,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145979902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}