Pub Date: 2025-07-07 | eCollection Date: 2025-01-01 | DOI: 10.1162/opmi.a.4
Mila Bertolo, Martynas Snarskis, Thanos Kyritsis, Lidya Yurdum, Constance M Bainbridge, S Atwood, Courtney B Hilton, Anya Keomurjian, Judy S Lee, Alex Mackiel, Vanessa Mak, Mijoo Shin, Alma Bitran, Dor Shilton, Lana Delasanta, Hang Heather Do, Jenna Lang, Tenaaz Irani, Jayanthiny Kangatharan, Kevin Lafleur, Nashua Malko, Quentin D Atkinson, Manvir Singh, Samuel A Mehr
A comprehensive cognitive science requires broad sampling of human behavior to justify general inferences about the mind. For example, the field of psycholinguistics relies on a rich history of comparative study, with many resources that systematically document a wide range of languages. Surprisingly, despite a longstanding interest in questions of universality and diversity, the psychology of music has few such resources. Here, we report the Expanded Natural History of Song Discography, an open-access corpus of vocal music (n = 1007 song excerpts), with accompanying metadata detailing each song's region of origin, language (413 languages are represented), and one of 10 behavioral contexts (e.g., work, storytelling, mourning, lullaby, dance). The corpus is designed to sample both broadly, with a large cross-section of societies and languages; and deeply, with many songs representing three well-studied language families (Atlantic-Congo, Austronesian, and Indo-European). This design facilitates direct comparison of musical and vocal features across cultures, principled approaches to sampling stimuli for experiments, and evaluation of models of the cultural evolution of song. In this paper, we describe the corpus and provide two proofs of concept demonstrating its utility. We report (1) a conceptual replication of previous findings that the acoustical forms of songs are predictive of their behavioral contexts, including in previously unstudied contexts (e.g., children's play songs); and (2) evidence that similarities in the acoustic content of songs across cultures are predicted, in part, by the relatedness of those cultures.
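The first proof of concept (predicting a song's behavioral context from its acoustic form) can be illustrated with a minimal nearest-centroid classifier. Everything below is invented for illustration: the two contexts, the feature set (tempo and mean pitch), and the synthetic data; the corpus itself supplies real acoustic measurements and ten context labels.

```python
# Hypothetical sketch: classify a song's behavioral context from acoustic
# features. Features, contexts, and data are invented assumptions, not the
# paper's actual feature set or analysis pipeline.
import random

random.seed(0)

CONTEXTS = ["lullaby", "dance"]

def synthetic_song(context):
    # Invented generative assumption: lullabies are slower and lower-pitched.
    if context == "lullaby":
        tempo = random.gauss(70, 10)    # beats per minute
        pitch = random.gauss(220, 30)   # mean f0 in Hz
    else:
        tempo = random.gauss(130, 10)
        pitch = random.gauss(300, 30)
    return [tempo, pitch]

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

# "Train": compute a per-context centroid from labeled examples.
train = {c: [synthetic_song(c) for _ in range(50)] for c in CONTEXTS}
centroids = {c: centroid(rows) for c, rows in train.items()}

def classify(song):
    # Nearest-centroid rule on (tempo, pitch).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CONTEXTS, key=lambda c: dist(song, centroids[c]))

test = [(c, synthetic_song(c)) for c in CONTEXTS for _ in range(50)]
accuracy = sum(classify(s) == c for c, s in test) / len(test)
```

The point of the sketch is only the logic: if contexts have typical acoustic profiles, held-out songs can be assigned to contexts above chance, which is the pattern the replication reports.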
"The Expanded Natural History of Song Discography, A Global Corpus of Vocal Music." Open Mind, 9, 844-863. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12283151/pdf/
Pub Date: 2025-07-07 | eCollection Date: 2025-01-01 | DOI: 10.1162/opmi.a.2
Mika Asaba, Yang Wu, Brandon Carrillo, Hyowon Gweon
How do we learn who is good at what? Building on the idea that humans draw rich inferences from others' emotional expressions, here we ask whether others' surprised reactions to performance outcomes can elicit inferences about competence. Across three experiments, participants were asked to choose "who is better" in scenarios where two students performed identically on the same task but their teacher expressed surprise to only one of them. In Experiment 1 (n = 60, adults) and Experiment 2 (n = 90, 6- to 8-year-old children), participants' responses were modulated by not only the students' performance outcomes (success or failure) but also the teacher's response to the outcomes (surprise or no surprise). Specifically, participants preferentially chose the student who did not elicit the teacher's surprise as more competent when both students succeeded, but chose the student who elicited surprise when both failed. Experiment 3a (n = 150, 4- to 8-year-olds) replicated this pattern in 6- to 8-year-olds as a group (but not in 4- to 5-year-olds), with robustness increasing with age. Finally, this pattern was significantly reduced in Experiment 3b, where the teacher's surprise was directed at an irrelevant event rather than the student's performance (n = 90, 6- to 8-year-olds). Taken together, these results suggest that even non-valenced emotional reactions to performance outcomes (being surprised at someone's success or failure) can inform inferences about valenced qualities such as competence. More broadly, the current findings demonstrate that emotional expressions we observe in our daily lives can lead to nuanced yet consequential social judgments.
"When Success Is Surprising: Children's Ability to Use Surprise to Infer Competence." Open Mind, 9, 825-843. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12283150/pdf/
Pub Date: 2025-06-25 | eCollection Date: 2025-01-01 | DOI: 10.1162/opmi.a.1
Marek Meristo, Luca Surian
Individuals with hearing loss have a diverse spectrum of auditory experiences, shaped by the degree of hearing loss and by interventions. The study of social cognition in deaf children and, more generally, children with hearing loss contributes to a nuanced understanding of how learning experiences influence social and cognitive development. Research suggests that limited access to language may influence conceptual development in theory of mind or the development of information processing skills required in mental state reasoning. In this article, we briefly review decades of research on the social-cognitive development of children with hearing loss acquired in infancy, discuss how access to language-mediated communication contributes to the emergence and expression of understanding other minds, and highlight some implications for effective interventions.
"Deafness, Hearing Loss and the Development of Mental State Reasoning Skills: A Review." Open Mind, 9, 762-790. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12240721/pdf/
Pub Date: 2025-06-25 | eCollection Date: 2025-01-01 | DOI: 10.1162/opmi.a.5
Chris Frith
Moving through my environment generates multiple changes in my sensations. But I do not experience the environment as changing. My conscious perceptual experience is of a stable environment through which I move. This perception is created by intricate neural computations that automatically take account of my movements. The stable environment that I experience is independent of my actions. As a result, I experience it as objective: a set of facts about the world that constrain my movements. Because it is objective, I expect that it will also constrain the movements of others in the same way, whether these are rocks rolling down a hill or animals foraging for food. This experience of objectivity creates a shared understanding of the world that enhances our interactions with others. Our perceptual experiences, while personal, are shaped by our model of the world, and since others are modelling the same world, their models will be very similar. Interactions with others will further increase this similarity. The models create a form of common knowledge. This common knowledge is an inherent feature of our basic conscious perception, even when we're not actively reflecting on or deliberately sharing our experiences. The common knowledge created by our conscious perception of the world enables the coordination of behaviour, which is a critical precursor for the evolution of cooperative behaviour.
"Sharing the World-A Social Aspect of Consciousness." Open Mind, 9, 814-824. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12240719/pdf/
Pub Date: 2025-06-25 | eCollection Date: 2025-01-01 | DOI: 10.1162/opmi.a.3
Romy Frömer, Frederick Callaway, Thomas L Griffiths, Amitai Shenhav
When making decisions, we often have more information about some options than others. Previous work has shown that people are more likely to choose options that they look at more and those that they are more confident in. But should one always prefer options one knows more about? Intuition suggests not. Rather, how additional information impacts our preferences should depend critically on how valuable we expect the options to be. Here, we formalize this intuition in a Bayesian sequential sampling model where attention and confidence influence the precision of momentary evidence. Our model makes a key prediction: attention and confidence both increase choice probability for better-than-average options, and both decrease choice probability for worse-than-average options. We confirm this prediction in two experiments in which we independently manipulate value and attention. Our results offer a novel perspective on prior work on the role of attention and confidence in decision-making, showing that people rely on contextual knowledge and uncertainty estimates to adaptively learn about their options and make better decisions.
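The model's key prediction follows directly from Gaussian conjugate updating, sketched below under illustrative assumptions (a single evidence sample, unit prior precision, invented numbers; the paper's full model is sequential). Higher evidence precision, standing in here for attention or confidence, pulls the estimate further from the prior mean, so above-average options are valued more and below-average options less.

```python
# Minimal sketch of precision-weighted value estimation, not the paper's
# full sequential sampling model. All numbers are illustrative.
PRIOR_MEAN = 0.0
PRIOR_PRECISION = 1.0

def posterior_mean(observed_value, evidence_precision):
    # Standard Gaussian conjugate update: a precision-weighted average of
    # the prior mean and the observed evidence.
    total = PRIOR_PRECISION + evidence_precision
    return (PRIOR_PRECISION * PRIOR_MEAN
            + evidence_precision * observed_value) / total

# Above-average option (+1): more precision -> higher estimate.
low_att_good = posterior_mean(+1.0, evidence_precision=1.0)
high_att_good = posterior_mean(+1.0, evidence_precision=4.0)

# Below-average option (-1): more precision -> lower estimate.
low_att_bad = posterior_mean(-1.0, evidence_precision=1.0)
high_att_bad = posterior_mean(-1.0, evidence_precision=4.0)
```

Because the chooser picks the option with the higher estimate, this single mechanism yields both effects in the abstract: attention and confidence raise choice probability for better-than-average options and lower it for worse-than-average ones.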
"Considering What We Know and What We Don't Know: Expectations and Confidence Guide Value Integration in Value-Based Decision-Making." Open Mind, 9, 791-813. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12240722/pdf/
Pub Date: 2025-05-23 | eCollection Date: 2025-01-01 | DOI: 10.1162/opmi_a_00208
C E R Edmunds, Fraser Milton, Andy J Wills
Integral stimuli (e.g., colors varying in saturation and brightness) are classically considered to be processed holistically (i.e., as undifferentiated stimulus wholes); people analyze such stimuli into their constituent dimensions only with substantial time, effort, training, or instruction (Foard & Kemler, 1984). In contrast, Combination Theory (Wills et al., 2015) argues that the dimensions of integral stimuli are quickly combined. Through an investigation of the effects of stimulus presentation time, we support Combination Theory over the classical holistic-to-analytic account. Specifically, using colored squares varying in saturation and brightness, we demonstrate that the prevalence of single-dimension classification increases as stimulus presentation time is reduced. We conclude that integral stimuli are not slowly analyzed; they are quickly synthesized.
"The Rapid Synthesis of Integral Stimuli." Open Mind, 9, 746-761. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12140572/pdf/
Pub Date: 2025-05-23 | eCollection Date: 2025-01-01 | DOI: 10.1162/opmi_a_00207
Elena Marx, Natalia Jardón, Eva Wittenberg
In language, comprehenders often need to infer the temporal order of events to construct a mental model of a complex situation. Dynamicity differences are a key predictor of these inferences: non-dynamic states are reliably inferred to precede dynamic events. In two studies, we test two theoretical explanations for this phenomenon through temporal order judgments for past-under-past and future-under-future relative clauses in English. According to a tense-mediated account of temporal anchoring, people rely on the conceptual distinction between a more salient reference time (often a dynamic event) and a less salient anchored situation (often a static state). The temporal relationship between the two is determined at the linguistic level by tense meaning: for the past tense, the relationship should be one of anteriority, and for the future tense, it should be one of posteriority. However, the future tense has often been placed closer to modals than to tenses, relegating the question of temporal order to other mechanisms. Alternatively, from a purely cognitive perspective, salience differences between states and events are sufficient to infer temporal order, with states acting as temporal backgrounds for more salient events, regardless of tense. Our results support such a cognitive mechanism: in both experiments, states are backgrounded relative to events. Differences between the experiments furthermore support modal accounts of the semantics of the future.
"The State-Before-Event Inference Emerges Across Tenses." Open Mind, 9, 726-745. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12140571/pdf/
Pub Date: 2025-05-09 | eCollection Date: 2025-01-01 | DOI: 10.1162/opmi_a_00205
Max Taylor-Davies, Neil Bramley, Christopher G Lucas
Social learning can be a powerful tool, allowing us to acquire knowledge and adaptive behaviours while bypassing many of the costs of learning through direct experience. However, not everyone's behaviour is equally valuable to learn from, as other people's goals or preferences may differ dramatically from our own. In this paper, we consider the problem of selectively learning from others on the basis of direct and indirect inferences about their task-relevant preferences. Specifically, we focus on the setting where a social learner must generalise preference judgements across individuals using shared features and other cues, and so develop a formal account that can reconcile a seemingly disparate empirical picture of group-based selective social learning. Across three behavioural experiments, we demonstrate that people are sensitive to the contextual significance of group identity cues when choosing who to learn from in partially observed environments. We show that this behaviour cannot be accounted for by a range of simpler heuristic strategies.
"A Rational Framework for Group-Based Selective Social Learning." Open Mind, 9, 677-708. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12140574/pdf/
Pub Date: 2025-05-09 | eCollection Date: 2025-01-01 | DOI: 10.1162/opmi_a_00209
William M Hayes, Nicolas Yax, Stefano Palminteri
In-context learning enables large language models (LLMs) to perform a variety of tasks, including solving reinforcement learning (RL) problems. Given their potential use as (autonomous) decision-making agents, it is important to understand how these models behave in RL tasks and the extent to which they are susceptible to biases. Motivated by the fact that, in humans, it has been widely documented that the value of a choice outcome depends on how it compares to other local outcomes, the present study focuses on whether similar value encoding biases apply to LLMs. Results from experiments with multiple bandit tasks and models show that LLMs exhibit behavioral signatures of relative value encoding. Adding explicit outcome comparisons to the prompt magnifies the bias, impairing the ability of LLMs to generalize from the outcomes presented in-context to new choice problems, similar to effects observed in humans. Computational cognitive modeling reveals that LLM behavior is well-described by a simple RL algorithm that incorporates relative values at the outcome encoding stage. Lastly, we present preliminary evidence that the observed biases are not limited to fine-tuned LLMs, and that relative value processing is detectable in the final hidden layer activations of a raw, pretrained model. These findings have important implications for the use of LLMs in decision-making applications.
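A minimal sketch of the kind of model the abstract describes: a simple RL rule that encodes each outcome relative to the average outcome of its local context. The bandit-pair structure, reward magnitudes, and learning rate below are invented assumptions, not the paper's fitted model.

```python
# Illustrative relative-value RL update: the effective outcome is the reward
# minus the (here, fixed and known) mean outcome of its choice context.
ALPHA = 0.5  # learning rate, chosen arbitrarily for the sketch

def update(q, option, reward, context_mean):
    relative = reward - context_mean  # relative encoding at outcome time
    q[option] = q.get(option, 0.0) + ALPHA * (relative - q.get(option, 0.0))

q = {}
# Context A pays 8 vs 6 (mean 7); context B pays 3 vs 1 (mean 2).
for _ in range(20):
    update(q, "A_high", 8.0, context_mean=7.0)
    update(q, "A_low", 6.0, context_mean=7.0)
    update(q, "B_high", 3.0, context_mean=2.0)
    update(q, "B_low", 1.0, context_mean=2.0)
```

After learning, the locally better options A_high and B_high carry identical values even though A_low's absolute reward (6) exceeds B_high's (3). This is the transfer failure that relative encoding predicts when options are recombined into new choice problems, the behavioral signature the abstract reports for LLMs.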
"Relative Value Encoding in Large Language Models: A Multi-Task, Multi-Model Investigation." William M Hayes, Nicolas Yax, Stefano Palminteri. Open Mind, vol. 9, pp. 709-725. Pub Date: 2025-05-09. DOI: 10.1162/opmi_a_00209. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12140570/pdf/
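The abstract above describes a simple RL model that encodes outcomes relatively rather than absolutely. The following is a minimal sketch of that general idea, not the authors' exact model: a softmax bandit learner whose Q-update first rescales each outcome by the range of outcomes observed in the same choice context (range normalization), so that contexts with different absolute payoffs become indistinguishable at transfer.

```python
import math
import random

class RelativeValueLearner:
    """Toy Q-learning bandit that encodes outcomes relative to the
    other outcomes observed in the same choice context."""

    def __init__(self, n_arms, alpha=0.3, beta=5.0):
        self.q = [0.0] * n_arms
        self.alpha = alpha  # learning rate
        self.beta = beta    # softmax inverse temperature

    def choose(self):
        # Softmax action selection over current Q-values.
        exps = [math.exp(self.beta * v) for v in self.q]
        r = random.random() * sum(exps)
        acc = 0.0
        for arm, e in enumerate(exps):
            acc += e
            if r <= acc:
                return arm
        return len(self.q) - 1

    def update(self, arm, outcome, context_outcomes):
        # Relative encoding: rescale the outcome by the range of
        # outcomes available in this context before the Q-update.
        lo, hi = min(context_outcomes), max(context_outcomes)
        rel = (outcome - lo) / (hi - lo) if hi > lo else 0.5
        self.q[arm] += self.alpha * (rel - self.q[arm])
```

Because the update stores only range-normalized values, an arm paying 10 in a {0, 10} context and an arm paying 100 in a {0, 100} context end up with identical Q-values, which illustrates the transfer impairment the abstract reports.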
Pub Date: 2025-04-29. eCollection Date: 2025-01-01. DOI: 10.1162/opmi_a_00198
Johanna Schick, Sabine Stoll
Language input is crucial for language learning, with child-directed speech a strong predictor of language development. Yet in many non-industrialized rural societies, children are less exposed to this type of input; instead, they frequently encounter child-surrounding speech from third-party interactions. Little is known about whether and how children learn language from this type of input. Analyzing naturalistic data from children growing up in the Shipibo-Konibo community in the Peruvian Amazon, we demonstrate that, despite a high prevalence of child-surrounding input, child-directed input best predicts children's production patterns, defined as unigrams. We provide the first evidence of remarkable similarities between child-surrounding speech and children's own speech patterns. In addition, we demonstrate that one specific type of input best predicts children's production frequencies across both surrounding and directed input: speech from other children. Together, these findings expand our perspective beyond dyadic adult-child interactions, supporting the view that child-surrounding speech, and especially speech from other children, provides important learning opportunities.
"Children Learn Best From Their Peers: The Crucial Role of Input From Other Children in Language Development." Open Mind, vol. 9, pp. 665-676. Pub Date: 2025-04-29. DOI: 10.1162/opmi_a_00198. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12058327/pdf/
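The analysis above compares unigram frequencies in different input types against children's own production frequencies. As a toy illustration of that kind of comparison (not the authors' pipeline; corpora and tokenization here are invented), one can compute relative unigram frequencies per input type and correlate them with the child's production frequencies over the shared vocabulary:

```python
from collections import Counter

def unigram_freqs(utterances):
    """Relative unigram frequencies over a list of tokenized utterances."""
    counts = Counter(tok for utt in utterances for tok in utt)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def freq_correlation(input_freqs, child_freqs):
    """Pearson correlation between input and child production
    frequencies, computed over their shared vocabulary."""
    shared = sorted(set(input_freqs) & set(child_freqs))
    if len(shared) < 2:
        return 0.0
    xs = [input_freqs[w] for w in shared]
    ys = [child_freqs[w] for w in shared]
    n = len(shared)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0
```

Running this separately on child-directed, child-surrounding, and child-produced (peer) input corpora would yield one correlation per input type, the quantity whose comparison across input types underlies the abstract's claims.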