Language processing is rapidly incremental, but evidence bearing upon this assumption comes from very few languages. In this paper we report on a study of incremental processing in Murrinhpatha, a polysynthetic Australian language, which expresses complex sentence-level meanings in a single verb, the full meaning of which is not clear until the final morph. Forty native Murrinhpatha speakers participated in a visual world eyetracking experiment in which they viewed two complex scenes as they heard a verb describing one of the scenes. The scenes were selected so that the verb describing the target scene had either no overlap with a possible description of the competitor image, or overlapped from the start (onset overlap) or at the end of the verb (rhyme overlap). The results showed that, despite meaning only being clear at the end of the verb, Murrinhpatha speakers made incremental predictions that differed across conditions. The findings demonstrate that processing in polysynthetic languages is rapid and incremental, yet unlike in commonly studied languages like English, speakers make parsing predictions based on information associated with bound morphs rather than discrete words.
"Incremental processing in a polysynthetic language (Murrinhpatha)" by Laurence Bruggeman, Evan Kidd, Rachel Nordlinger, Anne Cutler. Cognition 257, Article 106075. Pub Date : 2025-02-06 DOI: 10.1016/j.cognition.2025.106075
Pub Date : 2025-02-04 DOI: 10.1016/j.cognition.2025.106077
Martina Arioli, Valentina Silvestri, Maria Lorella Giannì, Lorenzo Colombo, Viola Macchi Cassia
Rhythm entrains attention in both human and non-human animals. Here, the ontogenetic origins of this effect were investigated in newborns (Experiment 1; N = 30, 16 females) and 2-month-old infants (Experiment 2; N = 30, 17 females). Visuospatial attentional disengagement was tested in an overlap task where a static peripheral stimulus (S2) appeared while a central rhythmic, non-rhythmic or static stimulus (S1) remained visible on the screen. Results indicated a developmental pattern, with 2-month-olds, but not newborns, showing equally fast disengagement of fixation when S1 was static or rhythmic, both faster than when it was non-rhythmic. Infants' preferential looking behaviour indicates that this difference in saccadic latencies was not due to stimulus salience (Experiment 3; N = 30, 18 females). Results point to the importance of the temporal structure of dynamic stimuli as a specific feature that modulates attentional disengagement at 2 months of age.
"The impact of rhythm on visual attention disengagement in newborns and 2-month-old infants." Cognition 257, Article 106077.
Pub Date : 2025-02-03 DOI: 10.1016/j.cognition.2025.106079
Thomas Fabian
Visual perception is an integral part of human cognition. Vision comprises sampling information and processing it. Tasks and stimuli influence human sampling behavior, while cognitive and neurological processing mechanisms remain unchanged. A question still controversial today is whether the components interact with each other. Some theories see the components of visual cognition as separate and their influence on gaze behavior as additive. Others see gaze behavior as an emergent structure of visual cognition that arises through multiplicative interactions. One way to approach this problem is to examine the magnitude of gaze shifts. Demonstrating that gaze shifts show constant behavior across tasks would argue for the existence of an independent component in human visual behavior. However, studies attempting to describe gaze shift magnitudes in general terms deliver contradictory results. In this work, we analyze data from numerous experiments to advance the debate on visual cognition by providing a more comprehensive view of visual behavior. The data show that the magnitude of eye movements, also called saccades, cannot be described by a consistent distribution across different experiments. However, we also propose a new way of measuring the magnitude of saccades: relative saccade lengths. We find that a saccade's length relative to the preceding saccade's length consistently follows a power-law distribution. We observe this distribution for all datasets we analyze, regardless of the task, stimulus, age, or native language of the participants. Our results indicate the existence of an independent component utilized by other cognitive processes without interacting with them. This suggests that a part of human visual cognition is based on an additive component that does not depend on stimulus features.
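To make the proposed measure concrete, here is a minimal sketch of computing relative saccade lengths and estimating a power-law exponent. This is not the paper's analysis code: the function names, the toy amplitudes, and the choice of a maximum-likelihood exponent estimator are illustrative assumptions.

```python
import math

def relative_saccade_lengths(amplitudes):
    # Each saccade's amplitude divided by the amplitude of the
    # immediately preceding saccade.
    return [cur / prev for prev, cur in zip(amplitudes, amplitudes[1:])]

def powerlaw_alpha_mle(values, x_min):
    # Maximum-likelihood exponent for a continuous power law p(x) ~ x^(-alpha),
    # fitted to the tail values >= x_min: alpha = 1 + n / sum(ln(x / x_min)).
    tail = [v for v in values if v >= x_min]
    return 1.0 + len(tail) / sum(math.log(v / x_min) for v in tail)

# Toy data: saccade amplitudes in degrees of visual angle.
ratios = relative_saccade_lengths([2.0, 4.0, 1.0, 3.0])  # → [2.0, 0.25, 3.0]
```

On log-log axes a power law appears as a straight line of slope −alpha, which is why a distribution of such ratios that is stable across tasks and populations would point to a task-independent component.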
"Exploring power-law behavior in human gaze shifts across tasks and populations." Cognition 257, Article 106079.
Pub Date : 2025-01-27 DOI: 10.1016/j.cognition.2025.106074
Dora Kampis, Dimitrios Askitis, Victoria Southgate
Human infants may exhibit an altercentric bias, where the perspective of others biases their own cognition. This bias may serve a crucial learning function in early ontogeny. This work tested the two main predictions of an altercentric bias in 14-month-old infants: (i) conceptual information should also be encoded altercentrically, and (ii) the other's perspective may completely override infants' own processing. We probed whether infants detect a semantic mismatch when hidden objects are labelled incorrectly from their own, or another person's, perspective. Experiment 1 found a reduced electrophysiological mismatch response (the ‘N400’ event-related potential) when labeling was congruent from the other's perspective compared to incongruent, though it was always incongruent for the infant. Experiment 2 found no effect of (in)congruency from the infants' perspective when labeling was always congruent from the other's. These findings demonstrate a strong altercentric bias that prioritizes encoding conceptual information from others' perspective during early development.
"Altercentric bias in preverbal infants' encoding of object kind." Cognition 257, Article 106074.
Pub Date : 2025-01-27 DOI: 10.1016/j.cognition.2025.106062
Mustafa Yavuz, Sofia Bonicalzi, Laura Schmitz, Lucas Battich, Jamal Esmaily, Ophelia Deroy
The sense of agency is the subjective feeling of control over one's own actions and the associated outcomes. Here, we asked whether and to what extent the reasons behind our choices (operationalized by value differences, expected utility, and counterfactual option sets) drive our sense of agency. We simultaneously tested these three dimensions during a novel value-based decision-making task while recording explicit (self-reported) and implicit (brain signals) measures of agency. Our results show that choices that are more reasonable also come with a stronger sense of agency: humans report higher levels of control over the outcomes of their actions if (1) they were able to choose between different option values rather than randomly picking between options of identical value, (2) their choices maximize utility (rather than not) and yield higher than expected utility, and (3) they realize that they have not missed out on hidden opportunities. EEG results showed supporting evidence for factors (1) and (3): we found a higher P300 amplitude for picking than choosing and a higher Late-Positive Component when participants realized they had missed out on possible but hidden opportunities. Together, these results suggest that human agency is not only driven by the goal-directedness of our actions but also by their perceived rationality.
"Rational choices elicit stronger sense of agency in brain and behavior." Cognition 257, Article 106062.
Pub Date : 2025-01-23 DOI: 10.1016/j.cognition.2025.106065
Aidan V. Campbell, Yiyi Wang, Michael Inzlicht
Efficiency demands that we work smarter and not harder, but is this better for our wellbeing? Here, we ask if exerting effort on a task can increase feelings of meaning and purpose. In six studies (N = 2883), we manipulated how much effort participants exerted on a task and then assessed how meaningful they found those tasks. In Studies 1 and 2, we presented hypothetical scenarios whereby participants imagined themselves (or others) exerting more or less effort on a writing task, and then asked participants how much meaning they believed they (or others) would derive. In Study 3, we randomly assigned participants to complete inherently meaningless tasks that were harder or easier to complete, and again asked them how meaningful they found the tasks. Study 4 varied the difficulty of a writing assignment by involving or excluding ChatGPT assistance and evaluated its meaningfulness. Study 5 investigated cognitive dissonance as a potential explanatory mechanism. In Study 6, we tested the shape of the effort-meaning relationship. In all studies, the more effort participants exerted (or imagined exerting), the more meaning they derived (or imagined deriving), though the results of Study 6 show this holds only up to a point. These studies suggest a causal link, whereby effort begets feelings of meaning. They also suggest that part of the reason this link exists is that effort begets feelings of competence and mastery, although the evidence is preliminary and inconsistent. We found no evidence the effects were caused by post-hoc effort justification (i.e., cognitive dissonance). Effort, beyond being a mere cost, is a source of personal meaning and value, fundamentally influencing how individuals and observers perceive and derive satisfaction from tasks.
"Experimental evidence that exerting effort increases meaning." Cognition 257, Article 106065.
Pub Date : 2025-01-21 DOI: 10.1016/j.cognition.2025.106066
Robin Watson, Thomas J.H. Morgan
Cultural evolutionary theory has shown that social learning is adaptive across a broad range of conditions. While existing theory can account for why some social information is ignored, humans frequently under-utilise beneficial social information in experimental settings. One account of this is epistemic vigilance, whereby individuals avoid social information that is likely to be untrustworthy, though few experiments have directly tested this. We addressed this using a two-player online experiment where participants completed the same task in series. Player one provided social information for player two in the form of freely offered advice or their actual answer (termed “spying”). We manipulated the payoff structure of the task such that it had either a cooperative, competitive, or neutral incentive. As predicted, we found that under a competitive payoff structure: (i) player one was more likely to provide dishonest advice; and (ii) player two reduced their use of social information. Also, (iii) spied information was more influential than advice, and (iv) player two chose to spy rather than receive advice when offered the choice. Unexpectedly, the ability to choose between advice and spied information increased social influence. Finally, exploratory analyses found that the most trusting participants preferred to receive advice, while the least trusting participants favoured receiving no social information at all. Overall, our experiment supports the hypothesis that humans both use and provide social information strategically in a manner consistent with epistemic vigilance.
"An experimental test of epistemic vigilance: Competitive incentives increase dishonesty and reduce social influence." Cognition 257, Article 106066.
The ‘different-body/different-concepts hypothesis’ central to some embodiment theories proposes that the sensory capacities of our bodies shape the cognitive and neural basis of our concepts. We tested this hypothesis by comparing behavioral semantic similarity judgments and neural signatures (fMRI) of ‘visual’ categories (‘living things,’ or animals, e.g., tiger, and light events, e.g., sparkle) across congenitally blind (n = 21) and sighted (n = 22) adults. Words referring to ‘visual’ entities/nouns and events/verbs (animals and light events) were compared to less vision-dependent categories from the same grammatical class (animal vs. place nouns, light vs. sound, mouth, and hand verbs). Within-category semantic similarity judgments about animals (e.g., sparrow vs. finch) were partially different across groups, consistent with the idea that sighted people rely on visually learned information to make such judgments about animals. However, robust neural specialization for living things in temporoparietal semantic networks, including in the precuneus, was observed in blind and sighted people alike. For light events, which are directly accessible only through vision, behavioral judgments were indistinguishable across groups. Neural responses to light events were also similar across groups: in both blind and sighted people, the left middle temporal gyrus (LMTG+) responded more to event concepts, including light events, compared to entity concepts. Multivariate patterns of neural activity in LMTG+ distinguished among different event types, including light events vs. other event types. In sum, we find that neural signatures of concepts previously attributed to visual experience do not require vision. Across a wide range of semantic types, conceptual representations develop independent of sensory experience.
"Neural specialization for ‘visual’ concepts emerges in the absence of vision" by Miriam Hauptman, Giulia Elli, Rashi Pant, Marina Bedny. Cognition 257, Article 106058. Pub Date : 2025-01-18 DOI: 10.1016/j.cognition.2024.106058
Pub Date : 2025-01-18 DOI: 10.1016/j.cognition.2025.106061
Alexandra Román Irizarry, Anne L. Beatty-Martínez, Julio Torres, Judith F. Kroll
This study compared the processing of non-binary morphemes in Spanish (e.g., todxs, todes) with the processing of canonical grammatical gender violations in Spanish pronouns (e.g., Los maestros… todas…). Using self-paced reading, the study examined how individual differences in working memory and gender/sex diversity beliefs affected language processing at three regions of interest (ROI): the pronoun, the pronoun +1, and the pronoun +2. Seventy-eight Spanish-English bilinguals completed two self-paced reading tasks, one with non-binary pronouns and another with grammatical gender violations, as well as a working memory task, a language dominance questionnaire, and a gender/sex diversity beliefs questionnaire. Processing costs were operationalized as longer reaction times (RTs) or inaccurate responses. Results showed overall processing costs for non-binary morphemes at all three ROIs, but no processing costs were observed in terms of accuracy or response times to the comprehension question. The results suggest that processing non-binary pronouns incurs a small processing cost that does not affect overall sentence comprehension. The small observed processing cost was moderated by gender/sex diversity beliefs, with gender-normative beliefs increasing RTs at the pronoun and beliefs affirming diverse gender identities reducing RTs at the second spillover region. In contrast, grammatical gender violations only showed a processing cost at the first spillover region and were moderated by neither working memory nor gender/sex diversity beliefs. Taken together, the results suggest that non-binary pronouns are processed differently than grammatical gender violations and that the small processing cost they impose can lead to good enough comprehension.
Title: “Todes” and “Todxs”, linguistic innovations or grammatical gender violations?
Cognition, vol. 257, Article 106061.
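The study above operationalizes processing costs as RT differences across three regions of interest. As an illustration only (the trial data below are invented, not the paper's), a minimal sketch of a per-ROI cost computation might look like:

```python
from statistics import mean

# Hypothetical per-trial reading times (ms) at three regions of interest:
# the pronoun, pronoun +1, and pronoun +2 (the spillover regions).
trials = [
    {"condition": "non-binary", "rts": [412, 398, 405]},
    {"condition": "non-binary", "rts": [430, 401, 410]},
    {"condition": "canonical",  "rts": [371, 365, 360]},
    {"condition": "canonical",  "rts": [380, 372, 368]},
]

def mean_rt_by_region(trials, condition):
    """Mean RT per ROI for one condition (columns = ROIs)."""
    rows = [t["rts"] for t in trials if t["condition"] == condition]
    return [mean(col) for col in zip(*rows)]

# Processing cost per ROI = RT difference between conditions.
cost = [nb - ca for nb, ca in zip(mean_rt_by_region(trials, "non-binary"),
                                  mean_rt_by_region(trials, "canonical"))]
```

A positive entry in `cost` at a given ROI corresponds to a slowdown for the non-binary forms at that region; the paper's actual analysis additionally models individual-difference moderators, which this sketch omits.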
Pub Date: 2025-01-16 | DOI: 10.1016/j.cognition.2025.106064
Ryota Ishikawa, Genta Ono, Jun Izawa
Pain perception is not solely determined by noxious stimuli but also varies with other factors, such as beliefs about pain and their uncertainty. A widely accepted theory posits that the brain integrates predictions of pain with noxious stimuli to estimate pain intensity. This theory assumes that the estimated pain value is adjusted to minimize surprise, mathematically defined as the error between predictions and outcomes. However, it remains unclear whether the represented surprise directly influences pain perception or merely serves to update this estimate. In this study, we examined this question empirically using virtual reality. In the task, participants reported felt pain on a visual analogue scale (VAS) after their arm was stimulated by noxious heat while being actively thrust into by a virtual knife. To manipulate surprise level, the visual threat suddenly disappeared on random trials, and the noxious heat was presented either during or after the action. We observed that a transphysical surprising event, created by the sudden disappearance of a visual threat cue combined with delayed noxious heat, amplified pain intensity. Subsequent model-based analysis using Bayesian theory revealed significant modulation of pain by the Bayesian surprise value. These results illustrate a real-time computational process for pain perception within a single task trial, suggesting that the brain anticipates pain using an efference copy of actions, integrates it with multimodal stimuli, and perceives it as a surprise.
Title: Bayesian surprise intensifies pain in a novel visual-noxious association
Cognition, vol. 257, Article 106064.
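"Bayesian surprise" in this literature is commonly formalized as the KL divergence between the posterior and prior beliefs induced by an observation. The paper's actual model is not reproduced here; the following is only a minimal Gaussian sketch (all numbers illustrative) of how prior pain expectation and a noxious input could be fused, and the resulting surprise scored:

```python
import math

def posterior(mu0, var0, x, var_obs):
    """Precision-weighted fusion of a Gaussian prior belief N(mu0, var0)
    with a noisy observation x of variance var_obs."""
    var_post = 1.0 / (1.0 / var0 + 1.0 / var_obs)
    mu_post = var_post * (mu0 / var0 + x / var_obs)
    return mu_post, var_post

def bayesian_surprise(mu0, var0, mu1, var1):
    """KL(posterior || prior) for two univariate Gaussians."""
    return 0.5 * (var1 / var0 + (mu1 - mu0) ** 2 / var0
                  - 1.0 + math.log(var0 / var1))

# Expected pain (prior) of 3 on a 0-10 scale; actual noxious input of 7.
mu_p, var_p = posterior(3.0, 1.0, 7.0, 1.0)
surprise = bayesian_surprise(3.0, 1.0, mu_p, var_p)
```

Under this formalization, an outcome that deviates from the prediction (here 7 vs. an expected 3) yields a larger surprise value than a confirming outcome, which is the quantity the model-based analysis relates to reported pain.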