Beat gestures and prosodic prominence interactively influence language comprehension.
Pub Date: 2025-03-01 | Epub Date: 2024-12-24 | DOI: 10.1016/j.cognition.2024.106049
Ambra Ferrari, Peter Hagoort
Face-to-face communication is not only about 'what' is said but also 'how' it is said, both in speech and bodily signals. Beat gestures are rhythmic hand movements that typically accompany prosodic prominence in conversation. Yet, it is still unclear how beat gestures influence language comprehension. On the one hand, beat gestures may share with prosodic prominence the functional role of focus markers. Accordingly, they would drive attention towards the concurrent speech and highlight its content. On the other hand, beat gestures may trigger inferences of high speaker confidence, generate the expectation that the sentence content is correct and thereby elicit commitment to the truth of the statement. This study directly disentangled the two hypotheses by evaluating additive and interactive effects of prosodic prominence and beat gestures on language comprehension. Participants watched videos of a speaker uttering sentences and judged whether each sentence was true or false. Sentences sometimes contained a world knowledge violation that may go unnoticed ('semantic illusion'). Combining beat gestures with prosodic prominence led to a higher degree of semantic illusion, making more world knowledge violations go unnoticed during language comprehension. These results challenge current theories proposing that beat gestures are visual focus markers. On the contrary, they suggest that beat gestures automatically trigger inferences of high speaker confidence and thereby elicit commitment to the truth of the statement, in line with Grice's cooperative principle in conversation. More broadly, our findings also highlight the influence of metacognition on language comprehension in face-to-face communication.
The power of sound: Exploring the auditory influence on visual search efficiency.
Pub Date: 2025-03-01 | DOI: 10.1016/j.cognition.2024.106045
Mengying Yuan, Min Gao, Xinzhong Cui, Xin Yue, Jing Xia, Xiaoyu Tang
In a dynamic visual search environment, a synchronous and meaningless auditory signal (pip) that coincides with a change in a visual target improves search efficiency (pop out), a phenomenon known as the pip-and-pop effect. We conducted three experiments to investigate the mechanism of this effect. Using eye tracking, we manipulated the interval rhythm (Exp. 1) and interval duration (Exp. 2) of dynamic color changes of visual stimuli in the dynamic visual search paradigm to confirm a significant pip-and-pop effect. In Exp. 3, we varied the sound by employing a visual-only condition, an auditory target condition (synchronized sounds), an auditory oddball condition (a high-frequency sound in a series of low-frequency sounds), an omitted oddball condition (an omitted sound in a series of sounds) and an auditory non-oddball condition (the last of the four sounds). We aimed to clarify the role of audiovisual cross-modal information in the pip-and-pop effect by comparing these conditions. The search time results showed a significant pip-and-pop effect for the auditory target, auditory oddball and auditory non-oddball conditions. The eye movement results revealed an increase in fixation duration and a decrease in the number of fixations for the auditory target and auditory oddball conditions. Our findings suggest that the pip-and-pop effect is indeed a cross-modal effect. Furthermore, the interaction between auditory and visual information is necessary for the pip-and-pop effect, whereas auditory oddball stimuli attract attention and thereby moderate it. Our study thus clarifies the mechanism of the pip-and-pop effect in a dynamic visual search paradigm.
London taxi drivers exploit neighbourhood boundaries for hierarchical route planning.
Pub Date: 2025-03-01 | Epub Date: 2024-12-05 | DOI: 10.1016/j.cognition.2024.106014
Eva-Maria Griesbauer, Pablo Fernandez Velasco, Antoine Coutrot, Jan M Wiener, Jeremy G Morley, Daniel McNamee, Ed Manley, Hugo J Spiers
Humans show an impressive ability to plan over complex situations and environments. A classic approach to explaining such planning has been tree-search algorithms, which search through alternative state sequences for the most efficient path through states. However, this approach fails when the number of states is large, because of the time required to compute all possible sequences. Hierarchical route planning has been proposed as an alternative, offering a computationally efficient mechanism in which the representation of the environment is segregated into clusters. Current evidence for hierarchical planning comes from experimentally created environments that have clearly defined boundaries and far fewer states than the real world. To test for real-world hierarchical planning, we exploited the capacity of licensed London taxi drivers to use their memory to construct a street-by-street plan across London, UK (>26,000 streets). The time to recall each successive street name was treated as the response time, with a rapid average of 1.8 s between streets. In support of hierarchical planning, we find that the clustered structure of London's regions affects response times, with minimal impact of distance across the street network (the effect tree-search would predict). We also find that changing direction during the plan (e.g., turning left or right) is associated with delayed response times. Thus, our results provide real-world evidence for how humans structure planning over a very large number of states, and give a measure of human expertise in planning.
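To make the contrast between the two planning schemes concrete, here is a minimal sketch in Python. The six-street, two-region network, the region labels, and the function names are invented for illustration; this is not the authors' model or data. Flat tree-search expands states across the entire street network, whereas the hierarchical planner first finds a route over region clusters and only then searches for streets within the regions on that route, shrinking the searched state space.

```python
from collections import deque

# Hypothetical toy street network (adjacency lists) with two regions, A and B.
STREETS = {
    "A1": ["A2"], "A2": ["A1", "A3"], "A3": ["A2", "B1"],  # region A
    "B1": ["A3", "B2"], "B2": ["B1", "B3"], "B3": ["B2"],  # region B
}
REGION = {s: s[0] for s in STREETS}  # street -> region label

def bfs(graph, start, goal):
    """Flat tree-search: breadth-first over states until the goal is reached."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

def hierarchical_plan(start, goal):
    """Two-level plan: route over region clusters first, then streets within them."""
    # Level 1: abstract graph whose nodes are regions and whose edges are borders.
    region_graph = {}
    for s, nbrs in STREETS.items():
        for n in nbrs:
            if REGION[s] != REGION[n]:
                region_graph.setdefault(REGION[s], []).append(REGION[n])
    region_route = bfs(region_graph, REGION[start], REGION[goal])
    # Level 2: search only streets inside the regions on the region route;
    # restricting the state space this way is what keeps a 26,000-street
    # network tractable.
    allowed = {s: [n for n in nbrs if REGION[n] in region_route]
               for s, nbrs in STREETS.items() if REGION[s] in region_route}
    return region_route, bfs(allowed, start, goal)

print(bfs(STREETS, "A1", "B3"))       # flat search over the whole network
print(hierarchical_plan("A1", "B3"))  # region route first, then street route
```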
Children's cost-benefit analysis about agents who act for the greater good.
Pub Date: 2025-03-01 | Epub Date: 2024-12-28 | DOI: 10.1016/j.cognition.2024.106051
Zoe Finiasz, Montana Shore, Fei Xu, Tamar Kushnir
Acting for the greater good often involves paying a personal cost to benefit the collective. In two studies, we investigate how children (N = 184, M_age = 8.02 years, SD = 1.15, range = 6.00-9.99 years) use information about costs and consequences when reasoning about agents who act for the greater good. Children were told about a novel community in which individuals could pay a cost to prevent a consequence (e.g., holding up an umbrella to prevent rain from flooding the village). In Study 1, children saw two scenarios, one where costs were minor and consequences were major, and one where the opposite was true (major cost, minor consequence). Children in the former condition expected more agents to engage in costly behavior and judged refusal to engage in costly behavior as less permissible. In Study 2, we separately manipulated cost and consequence to see which factor influences children's judgments most: cost or consequence. Here, children expected agents to pay a minor cost regardless of consequence, and only expected agents to pay a major cost when the consequence was also major. In their permissibility judgments, children judged refusal to engage in costly behavior to be less permissible when consequences were major than when they were minor, regardless of cost. These findings suggest that children make principled judgments about acting for the greater good: both cost and consequence determine when we are expected to act, but consequence seems to be a particularly key factor in deciding when inaction is permissible.
Automatic and strategic components of bilingual lexical alignment.
Pub Date: 2025-03-01 | Epub Date: 2025-01-03 | DOI: 10.1016/j.cognition.2024.106046
Iva Ivanova, Dacia Carolina Hernandez, Aziz Atiya
Second-language speakers are more likely to strategically reuse the words of their conversation partners (Zhang & Nicol, 2022). This study investigates whether this is also the case for lower-proficiency bilinguals from a bilingual community, who use language more implicitly, and whether there is more alignment at lower than at higher proficiency, provided the words to be aligned to are all highly familiar. In two experiments, Spanish-English bilinguals took turns with a confederate naming and matching pictures in Spanish. The confederate named critical pictures with a dispreferred but acceptable name (e.g., agua [Sp. water] for a picture of rain). In Experiment 1, bilinguals were more likely to name critical pictures with dispreferred names after hearing these names from the confederate than after the confederate had named an unrelated picture instead (i.e., an alignment effect). In support of our hypothesis, there was more alignment in lower-proficiency speakers. In Experiment 2, designed to reduce the possibility of strategic alignment, only confederates but not participants performed the matching task, which precluded participants from linking the dispreferred names with a referent and removed the incentive to pay attention to the confederate's names. As a result, alignment was reduced (though still present). Of most interest, the reduction was greater for lower-proficiency speakers, supporting the hypothesis that strategic lexical-referential alignment is more likely at lower proficiency, even for bilinguals from a bilingual community. The study also isolates measurable strategic and automatic components of lexical-referential alignment.
Experimental evidence that exerting effort increases meaning.
Pub Date: 2025-01-23 | DOI: 10.1016/j.cognition.2025.106065
Aidan V Campbell, Yiyi Wang, Michael Inzlicht
Efficiency demands that we work smarter and not harder, but is this better for our wellbeing? Here, we ask whether exerting effort on a task can increase feelings of meaning and purpose. In six studies (N = 2883), we manipulated how much effort participants exerted on a task and then assessed how meaningful they found those tasks. In Studies 1 and 2, we presented hypothetical scenarios whereby participants imagined themselves (or others) exerting more or less effort on a writing task, and then asked participants how much meaning they believed they (or others) would derive. In Study 3, we randomly assigned participants to complete inherently meaningless tasks that were harder or easier to complete, and again asked them how meaningful they found the tasks. Study 4 varied the difficulty of a writing assignment by involving or excluding ChatGPT assistance and evaluated its meaningfulness. Study 5 investigated cognitive dissonance as a potential explanatory mechanism. In Study 6, we tested the shape of the effort-meaning relationship. In all studies, the more effort participants exerted (or imagined exerting), the more meaning they derived (or imagined deriving), though the results of Study 6 show this holds only up to a point. These studies suggest a causal link whereby effort begets feelings of meaning. They also suggest that part of the reason this link exists is that effort begets feelings of competence and mastery, although the evidence is preliminary and inconsistent. We found no evidence that the effects were caused by post-hoc effort justification (i.e., cognitive dissonance). Effort, beyond being a mere cost, is a source of personal meaning and value, fundamentally influencing how individuals and observers perceive and derive satisfaction from tasks.
An experimental test of epistemic vigilance: Competitive incentives increase dishonesty and reduce social influence.
Pub Date: 2025-01-21 | DOI: 10.1016/j.cognition.2025.106066
Robin Watson, Thomas J H Morgan
Cultural evolutionary theory has shown that social learning is adaptive across a broad range of conditions. While existing theory can account for why some social information is ignored, humans frequently under-utilise beneficial social information in experimental settings. One account of this is epistemic vigilance, whereby individuals avoid social information that is likely to be untrustworthy, though few experiments have directly tested this. We addressed this using a two-player online experiment where participants completed the same task in series. Player one provided social information for player two in the form of freely offered advice or their actual answer (termed "spying"). We manipulated the payoff structure of the task such that it had either a cooperative, competitive, or neutral incentive. As predicted, we found that under a competitive payoff structure: (i) player one was more likely to provide dishonest advice; and (ii) player two reduced their use of social information. Also, (iii) spied information was more influential than advice, and (iv) player two chose to spy rather than receive advice when offered the choice. Unexpectedly, the ability to choose between advice and spied information increased social influence. Finally, exploratory analyses found that the most trusting participants preferred to receive advice, while the least trusting participants favoured receiving no social information at all. Overall, our experiment supports the hypothesis that humans both use and provide social information strategically in a manner consistent with epistemic vigilance.
Neural specialization for 'visual' concepts emerges in the absence of vision.
Pub Date: 2025-01-18 | DOI: 10.1016/j.cognition.2024.106058
Miriam Hauptman, Giulia Elli, Rashi Pant, Marina Bedny
The 'different-body/different-concepts hypothesis' central to some embodiment theories proposes that the sensory capacities of our bodies shape the cognitive and neural basis of our concepts. We tested this hypothesis by comparing behavioral semantic similarity judgments and neural signatures (fMRI) of 'visual' categories ('living things,' or animals, e.g., tiger, and light events, e.g., sparkle) across congenitally blind (n = 21) and sighted (n = 22) adults. Words referring to 'visual' entities/nouns and events/verbs (animals and light events) were compared to less vision-dependent categories from the same grammatical class (animal vs. place nouns, light vs. sound, mouth, and hand verbs). Within-category semantic similarity judgments about animals (e.g., sparrow vs. finch) were partially different across groups, consistent with the idea that sighted people rely on visually learned information to make such judgments about animals. However, robust neural specialization for living things in temporoparietal semantic networks, including in the precuneus, was observed in blind and sighted people alike. For light events, which are directly accessible only through vision, behavioral judgments were indistinguishable across groups. Neural responses to light events were also similar across groups: in both blind and sighted people, the left middle temporal gyrus (LMTG+) responded more to event concepts, including light events, compared to entity concepts. Multivariate patterns of neural activity in LMTG+ distinguished among different event types, including light events vs. other event types. In sum, we find that neural signatures of concepts previously attributed to visual experience do not require vision. Across a wide range of semantic types, conceptual representations develop independent of sensory experience.
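As a concrete illustration of what it means for multivariate patterns to "distinguish among different event types," below is a minimal cross-validated decoding sketch in Python with scikit-learn. The simulated voxel patterns, trial counts, and labels are invented for illustration; this shows the generic MVPA logic, not the authors' analysis pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated voxel patterns for one ROI (e.g., LMTG+): n_trials x n_voxels.
# Illustrative assumption: each event type evokes a slightly different
# mean pattern across voxels, buried in trial-by-trial noise.
n_trials, n_voxels = 80, 120
labels = np.repeat([0, 1], n_trials // 2)        # 0 = light event, 1 = other event
signal = rng.normal(0, 1, (2, n_voxels))         # type-specific mean patterns
X = signal[labels] + rng.normal(0, 3, (n_trials, n_voxels))  # add noise

# Cross-validated decoding: above-chance accuracy indicates that the ROI's
# multivariate activity patterns carry information about event type.
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```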
"Todes" and "Todxs", linguistic innovations or grammatical gender violations?
Pub Date: 2025-01-17 | DOI: 10.1016/j.cognition.2025.106061
Alexandra Román Irizarry, Anne L Beatty-Martínez, Julio Torres, Judith F Kroll
This study compared the processing of non-binary morphemes in Spanish (e.g., todxs, todes) with the processing of canonical grammatical gender violations in Spanish pronouns (e.g., Los maestros… todas…). Using self-paced reading, the study examined how individual differences in working memory and gender/sex diversity beliefs affected language processing at three regions of interest (ROIs): the pronoun, the pronoun +1, and the pronoun +2. Seventy-eight Spanish-English bilinguals completed two self-paced reading tasks, one with non-binary pronouns and another with grammatical gender violations, as well as a working memory task, a language dominance questionnaire, and a gender/sex diversity beliefs questionnaire. Processing costs were operationalized as longer reaction times (RTs) or inaccurate responses. Results showed overall processing costs for non-binary morphemes at all three ROIs, but no processing costs in terms of accuracy or response times to the comprehension question. The results suggest that processing non-binary pronouns incurs a small processing cost that does not affect overall sentence comprehension. This small cost was moderated by gender/sex diversity beliefs, with gender-normative beliefs increasing RTs at the pronoun and beliefs affirming diverse gender identities reducing RTs at the second spillover region. In contrast, grammatical gender violations showed a processing cost only at the first spillover region and were moderated neither by working memory nor by gender/sex diversity beliefs. Taken together, the results suggest that non-binary pronouns are processed differently from grammatical gender violations and that the small processing cost they impose still allows for good-enough comprehension.
Bayesian surprise intensifies pain in a novel visual-noxious association.
Pub Date: 2025-01-16 | DOI: 10.1016/j.cognition.2025.106064
Ryota Ishikawa, Genta Ono, Jun Izawa
Pain perception is not solely determined by noxious stimuli; it also varies with other factors, such as beliefs about pain and its uncertainty. A widely accepted theory posits that the brain integrates predictions of pain with noxious stimuli to estimate pain intensity. This theory assumes that the estimated pain value is adjusted to minimize surprise, mathematically defined as the error between predictions and outcomes. However, it is still unclear whether the represented surprise directly influences pain perception or merely serves to update this estimate. In this study, we examined this question empirically using virtual reality. In the task, participants reported the pain they felt on a visual analogue scale (VAS) after their arm was stimulated by noxious heat and actively thrust into by a virtual knife. To manipulate the surprise level, the visual threat suddenly disappeared at random, and the noxious heat was presented in the on-action or post-action phase. We observed that a transphysical surprising event, created by the sudden disappearance of a visual threat cue combined with delayed noxious heat, amplified pain intensity. Subsequent model-based analysis using Bayesian theory revealed significant modulation of pain by the Bayesian surprise value. These results illustrate a real-time computational process for pain perception within a single task trial, suggesting that the brain anticipates pain using an efference copy of actions, integrates it with multimodal stimuli, and perceives it as surprise.
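The "Bayesian surprise value" in the abstract has a standard formal reading: the Kullback-Leibler divergence between the belief held before an event (prior) and the belief after it (posterior). Here is a minimal numeric sketch in Python, with an invented discrete belief over pain-intensity levels and illustrative likelihoods rather than the authors' actual model.

```python
import numpy as np

def bayesian_surprise(prior, likelihood):
    """Bayes update, then KL divergence D(posterior || prior) in bits."""
    posterior = prior * likelihood
    posterior /= posterior.sum()
    return posterior, np.sum(posterior * np.log2(posterior / prior))

# Discrete belief over five pain-intensity levels (illustrative numbers):
# the participant expects moderate pain (peak at the middle level).
prior = np.array([0.05, 0.15, 0.60, 0.15, 0.05])

# Expected outcome: the noxious heat roughly matches the prediction.
_, s_expected = bayesian_surprise(prior, np.array([0.05, 0.20, 0.50, 0.20, 0.05]))

# Surprising outcome: the threat cue vanished and heat arrives late and strong,
# so the evidence favors higher intensity levels than predicted.
_, s_surprise = bayesian_surprise(prior, np.array([0.02, 0.03, 0.15, 0.40, 0.40]))

print(f"surprise (expected outcome):  {s_expected:.3f} bits")
print(f"surprise (violated outcome):  {s_surprise:.3f} bits")  # larger KL value
```

The larger KL value in the second case is the formal analogue of the amplified pain report: the same noxious input shifts the belief further from the prior when the prediction is violated.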