Sensitivity to Geometric Shape Regularity Emerges Independently of Vision
Open Mind 9: 1711-1727. Pub Date: 2025-10-17. eCollection Date: 2025-01-01. DOI: 10.1162/OPMI.a.39
Andrea Adriano, Mathias Sablé-Meyer, Lorenzo Ciccione, Minye Zhan, Stanislas Dehaene
In a visual intruder task, regular quadrilaterals such as squares and rectangles are easier to process than matched shapes devoid of parallelism, symmetry, or right angles. This geometric regularity effect has been found in various human groups, including preschoolers and uneducated adults, but not in non-human primates. It was proposed to reflect a fundamental ability to combine discrete geometric features into structured representations of geometric shapes using an abstract, amodal language-of-thought (LoT) that also supports the acquisition of symbolic drawing and formal mathematics. Here, we tested a prediction of this hypothesis: blind participants should have the same intuitions of geometric regularity as sighted ones. To evaluate this prediction, congenitally blind and sighted (but blindfolded) adults underwent a tactile version of the visual quadrilateral intruder task. Among six tactile shapes, five of which were identical up to small changes in size and rotation, participants were asked to identify a deviant shape defined by a fixed displacement of a single vertex, and to rate their confidence in their response. Both variables revealed a geometric regularity effect in both groups and correlated with previous results in the visual domain. Furthermore, in blind participants, a symbolic LoT model was a better predictor of tactile performance than a visual CNN model. Thus, the geometric regularity effect develops in the absence of vision.
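A minimal sketch of the kind of model comparison reported here, under the assumption that each model yields a per-shape predictor score: it contrasts how well a symbolic LoT complexity score and a CNN-derived dissimilarity score each track behavioral error rates. All numbers are hypothetical placeholders, not the paper's data.

```python
# Hedged illustration of comparing two per-shape predictors of behavioral
# error rates. All values are hypothetical placeholders, not the paper's data.
import numpy as np
from scipy.stats import pearsonr

shapes = ["square", "rectangle", "parallelogram", "trapezoid", "kite", "irregular"]
error_rate = np.array([0.05, 0.08, 0.15, 0.22, 0.25, 0.33])         # hypothetical
lot_complexity = np.array([1.0, 2.0, 4.0, 6.0, 7.0, 9.0])           # hypothetical
cnn_dissimilarity = np.array([0.90, 0.70, 0.60, 0.50, 0.45, 0.40])  # hypothetical

for name, predictor in [("LoT", lot_complexity), ("CNN", cnn_dissimilarity)]:
    r, p = pearsonr(predictor, error_rate)
    print(f"{name} model: r = {r:+.2f}, p = {p:.3f}")
```

A better predictor is one whose scores correlate more strongly with the observed error pattern across shapes; the paper's actual comparison is run on the measured tactile data.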
{"title":"Sensitivity to Geometric Shape Regularity Emerges Independently of Vision.","authors":"Andrea Adriano, Mathias Sablé-Meyer, Lorenzo Ciccione, Minye Zhan, Stanislas Dehaene","doi":"10.1162/OPMI.a.39","DOIUrl":"10.1162/OPMI.a.39","url":null,"abstract":"<p><p>In a visual intruder task, regular quadrilaterals such as squares and rectangles are easier to process than matched shapes devoid of parallelism, symmetry or right-angles. This geometric regularity effect was found in various human groups, including preschoolers and uneducated adults, but not in non-human primates. It was proposed to reflect a fundamental ability to combine discrete geometric features into structured representations of geometric shapes using an abstract amodal language-of-thought (LoT) that also supports the acquisition of symbolic drawing and formal mathematics. Here, we tested a prediction of this hypothesis: blind participants should have the same intuitions of geometric regularity as sighted ones. To evaluate this prediction, congenitally blind and sighted (but blindfolded) adults underwent a tactile version of the visual quadrilateral intruder task. Among six tactile shapes, five of which were identical up to small size and rotation changes, participants were asked to identify a deviant shape defined by a fixed displacement of a single vertex, and to rate their confidence in their response. Both variables revealed a geometric regularity effect in both groups, and also correlated with previous results in the visual domain. Furthermore, a symbolic LoT model was a better predictor of tactile performance than a visual CNN model in blind participants. Thus, the geometric regularity effect develops in the absence of vision.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"1711-1727"},"PeriodicalIF":0.0,"publicationDate":"2025-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12618013/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145542713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Curious U: Integrating Theories Linking Knowledge and Information-Seeking Behavior
Open Mind 9: 1763-1785. Pub Date: 2025-10-17. eCollection Date: 2025-01-01. DOI: 10.1162/OPMI.a.41
Alexandr Ten, Pierre-Yves Oudeyer, Michiko Sakaki, Kou Murayama
Many empirical studies have found a curvilinear (inverted-U) relationship between knowledge and curiosity, such that curiosity is induced when stimuli are neither wholly unknown nor too familiar. While various theoretical accounts have been proposed to explain this phenomenon, no clear link between them has been delineated. In this Perspective, we review seven psychological accounts of the inverted-U relationship between knowledge and curiosity ("the U") and provide a coherent framework integrating them. According to this framework, the U emerges as a consequence of the imperative to pursue learning progress and thus maximize knowledge. We show that some theories of curiosity address this issue by explicitly stipulating knowledge maximization as the computational objective and learning-progress maximization as an optimal means of achieving it (i.e., normative theories). Other theories focus on psychological mechanisms or factors that drive curiosity (i.e., process theories). We propose that these process-theoretic mechanisms could also work in a manner that maximizes learning, by signaling situations in which some relevant prior knowledge exists but is incomplete. The implications of this framework for future theoretical work on curiosity and its connections to related phenomena are discussed.
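The learning-progress idea at the heart of this framework can be shown with a toy simulation (an illustration, not the authors' formal model): if knowledge grows along a logistic learning curve, then learning progress, the change in knowledge per unit of study, peaks at intermediate knowledge levels, so a learner whose curiosity tracks progress shows an inverted-U.

```python
# Toy illustration, not the authors' model: under logistic knowledge growth
# dk/dt = r * k * (1 - k), learning progress as a function of current
# knowledge k is proportional to k * (1 - k), which peaks at k = 0.5.
import numpy as np

knowledge = np.linspace(0.01, 0.99, 99)        # current mastery of a stimulus
learning_progress = knowledge * (1.0 - knowledge)

peak = knowledge[np.argmax(learning_progress)]
print(f"curiosity (as learning progress) peaks at knowledge = {peak:.2f}")
# -> near 0.5: stimuli that are neither wholly unknown nor fully familiar.
```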
{"title":"The Curious <i>U</i>: Integrating Theories Linking Knowledge and Information-Seeking Behavior.","authors":"Alexandr Ten, Pierre-Yves Oudeyer, Michiko Sakaki, Kou Murayama","doi":"10.1162/OPMI.a.41","DOIUrl":"10.1162/OPMI.a.41","url":null,"abstract":"<p><p>Many empirical studies have found a curvilinear (inverted-<i>U</i>) relationship between knowledge and curiosity, such that curiosity is induced when stimuli are neither unknown nor too familiar. While various theoretical accounts have been proposed to explain this phenomenon, no clear link between them have been delineated. In this Perspective, we review seven psychological accounts of the inverted-<i>U</i> relationship between knowledge and curiosity (\"the <i>U</i>\") and provide a coherent framework integrating them. According to this framework, the <i>U</i> emerges as a consequence of the imperative to pursue learning progress and thus maximize knowledge. We show that some theories of curiosity address this issue by explicitly stipulating knowledge maximization as the computational objective, and learning-progress maximization as an optimal means of achieving it (i.e., <i>normative</i> theories). Other theories focus on psychological mechanisms or factors that drive curiosity (i.e., <i>process</i> theories). We propose that these process-theoretic mechanisms could also work in a manner that maximizes learning by signaling situations in which some relevant prior knowledge exists, but is incomplete. The implications of this framework for future theoretical work on curiosity and its connections to related phenomena are discussed.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"1763-1785"},"PeriodicalIF":0.0,"publicationDate":"2025-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12618012/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145542809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information-Theoretic Measures of Metacognition: Bounds and Relation to Group Performance
Open Mind 9: 1728-1762. Pub Date: 2025-10-17. eCollection Date: 2025-01-01. DOI: 10.1162/OPMI.a.40
Sascha Meyen, Frieder Göppert, Carina Schrenk, Ulrike von Luxburg, Volker H Franz
Metacognition comprises the ability to differentiate accurate from inaccurate predictions about the world. This is often called Type 2 performance (with Type 1 performance being the overall accuracy). Typical measures of metacognition are based on signal detection theory and require the strong assumption of truncated normal noise underlying confidence ratings. To minimize distributional assumptions, measures based on classical information theory have been proposed. We further this approach by providing bounds on its key quantity, the transmitted information. We show that classifiers making predictions with a certain accuracy can transmit information only within a limited range, depending on the underlying noise distribution: the lowest transmitted information indicates the worst Type 2 performance and corresponds to binary noise; the highest transmitted information indicates the best Type 2 performance and corresponds to uniform noise. Because normal noise is only an intermediate case, traditional measures based on this assumption can bias interpretations of Type 2 performance. Based on these bounds, we suggest a new measure: relative metainformation (RMI). RMI scales from 0 (lower bound) to 1 (upper bound) and therefore advances towards the much-needed decoupling of Type 2 from Type 1 performance measures. To demonstrate the strengths of RMI, we apply it to groups: in a setting where multiple independent group members with fixed accuracies combine their predictions optimally, group performance depends directly on RMI: group accuracy is highest when members have the highest RMI values and lowest when they have the lowest. Overall, our theoretical bounds allow for a better evaluation of measures of Type 2 and group performance.
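The quantities involved can be sketched as follows: transmitted information is estimated as the mutual information between the true state and the joint (prediction, confidence) response, and RMI then rescales it between the accuracy-dependent lower and upper bounds. The count table and bound values below are hypothetical placeholders; the closed-form bounds come from the paper itself.

```python
# Sketch of transmitted information and RMI. Counts and bounds are
# hypothetical placeholders; the paper derives the actual bounds from the
# observed Type 1 accuracy under binary vs. uniform noise.
import numpy as np

def mutual_information(joint):
    """Plug-in mutual information (in bits) from a joint count table."""
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)      # marginal over true states
    py = p.sum(axis=0, keepdims=True)      # marginal over responses
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

def relative_metainformation(transmitted, lower, upper):
    """RMI: transmitted information rescaled to [0, 1] between its bounds."""
    return (transmitted - lower) / (upper - lower)

# Rows: true state (A, B); columns: response (A/high, A/low, B/low, B/high).
joint = np.array([[50., 30., 15., 5.],
                  [5., 15., 30., 50.]])
I = mutual_information(joint)
print(f"transmitted information = {I:.3f} bits")
print(f"RMI = {relative_metainformation(I, lower=0.10, upper=0.45):.2f}")
```

Because the bounds depend on Type 1 accuracy, the rescaling step is what pushes the measure toward the Type 1/Type 2 decoupling described above.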
{"title":"Information-Theoretic Measures of Metacognition: Bounds and Relation to Group Performance.","authors":"Sascha Meyen, Frieder Göppert, Carina Schrenk, Ulrike von Luxburg, Volker H Franz","doi":"10.1162/OPMI.a.40","DOIUrl":"10.1162/OPMI.a.40","url":null,"abstract":"<p><p>Metacognition comprises the ability to differentiate the accuracy of predictions about the world. This is often called Type 2 performance (with Type 1 performance being the overall accuracy). Typical measures of metacognition are based on signal detection theory and require the strong assumption of truncated normal noise underlying confidence ratings. To minimize distributional assumptions, measures based on classical information theory have been proposed. We further this approach by providing bounds on its key quantity, the transmitted information. We show that classifiers making predictions with a certain accuracy can transmit information only within a limited range, depending on the underlying noise distribution: The lowest transmitted information indicates the worst Type 2 performance and corresponds to binary noise; the highest transmitted information indicates the best Type 2 performance and corresponds to uniform noise. Because normal noise is only an intermediate case, traditional measures based on this assumption can bias interpretations of Type 2 performance. Based on these bounds, we suggest a new measure: Relative metainformation (RMI). RMI scales from 0 (lower bound) to 1 (upper bound) and therefore advances towards the much-needed decoupling of Type 2 from Type 1 performance measures. To demonstrate the strengths of RMI, we apply it to groups: In a setting where multiple independent group members with fixed accuracies combine their predictions in an optimal way, we show that the group performance depends directly on RMI: Group accuracy is best vs. worst if the group members have highest vs. lowest RMI values. Overall, our theoretical bounds allow to better evaluate measures of Type 2 and group performance.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"1728-1762"},"PeriodicalIF":0.0,"publicationDate":"2025-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12618015/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145542745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Haptic Compensation in Blind People's Conceptual Representations
Open Mind 9: 1786-1801. Pub Date: 2025-10-17. eCollection Date: 2025-01-01. DOI: 10.1162/OPMI.a.250
Laura J Speed, Eva D Poort, Tanita P Duiker, Heidi Baseler, Asifa Majid
Vision is typically dominant in our perception of the world. Such asymmetry is also observed in conceptual representations. This could be driven by perceptual experience or learned from other input, such as language. In this study, we tested the role of direct perceptual experience in conceptual representation by investigating the sensory underpinnings of word meanings in blind and sighted individuals. Seventeen early-blind and 17 matched sighted Dutch native speakers rated 100 Dutch nouns for their sensory associations across six modalities (vision, audition, haptics, interoception, gustation, and olfaction) on a scale from 0 (not at all) to 5 (very much). To cover a range of concepts, we used five semantic categories thought to be strongly associated with different sensory modalities: animals (vision), instruments (audition), tactile objects (haptics), food (gustation), and odor objects (olfaction). We found no difference between blind and sighted individuals in their ratings of visual associations, suggesting that conceptual associations with vision can be learned indirectly, by means beyond direct visual perception. However, blind participants did associate concepts more strongly with haptics than sighted participants did, for all semantic categories except animals. This is evidence for crossmodal compensation in conceptual representation, in line with the enhanced tactile acuity reported elsewhere for blind individuals. Overall, the results point to a role for perceptual experience in conceptual representation, but suggest that other strategies can be recruited to learn about perception, supporting hybrid models of semantic representation.
{"title":"Haptic Compensation in Blind People's Conceptual Representations.","authors":"Laura J Speed, Eva D Poort, Tanita P Duiker, Heidi Baseler, Asifa Majid","doi":"10.1162/OPMI.a.250","DOIUrl":"10.1162/OPMI.a.250","url":null,"abstract":"<p><p>Vision is typically dominant in our perception of the world. Such asymmetry is also observed in conceptual representations. This could be driven by perceptual experience or learned from other input, such as language. In this study we tested the role of direct perceptual experience in conceptual representation by investigating the sensory underpinnings of word meanings in blind and sighted individuals. Seventeen early-blind and 17 matched sighted Dutch native speakers rated 100 Dutch nouns for their sensory associations across six modalities (vision, audition, haptic, interoception, gustation, and olfaction) on a 0 (not at all) to 5 (very much) scale. To cover a range of concepts we used five semantic categories thought to be strongly associated with different sensory modalities: animals (vision), instruments (audition), tactile objects (haptics), food (gustation), and odor objects (olfaction). We found no difference between blind and sighted individuals in their ratings of visual associations, suggesting that conceptual associations with vision can be learned indirectly via means beyond direct visual perception. However, blind participants did associate concepts more strongly with haptics than sighted participants for all semantic categories except animals. This is evidence for crossmodal compensation in conceptual representation, in line with enhanced tactile acuity reported elsewhere for blind individuals. Overall, the results point to a role for perceptual experience in conceptual representation, but suggest there are other strategies that can be recruited to learn about perception, supporting hybrid models of semantic representation.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"1786-1801"},"PeriodicalIF":0.0,"publicationDate":"2025-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12618011/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145542777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Missing Half of Language Learning in Current Developmental Language Models: Exogenous and Endogenous Linguistic Input
Open Mind 9: 1543-1549. Pub Date: 2025-09-17. eCollection Date: 2025-01-01. DOI: 10.1162/OPMI.a.33
Nan Zhao, Xufeng Duan, Zhenguang G Cai
Developmental language models (DLMs) aim to replicate the efficiency of child language acquisition but often focus solely on the estimation of exogenous linguistic input. We argue that a child's linguistic growth is also critically shaped by endogenous processes, including (1) co-opting language in non-linguistic perception and cognition, (2) engaging in private and inner speech, and (3) benefiting from neural replay of linguistic information during sleep. These endogenous processes amplify and refine exogenous linguistic input in ways that current DLMs do not replicate. To align DLMs with child language acquisition, we propose redefining "linguistic exposure" to encompass both exogenous and endogenous linguistic input. By integrating label feedback, self-generated speech, and sleep-like consolidation, researchers can narrow the gap between artificial and human learning. Collaborations across machine learning, psychology, and linguistics will be essential to ground models in empirical data on child behavior and build DLMs that truly reflect the marvel of language acquisition.
{"title":"The Missing Half of Language Learning in Current Developmental Language Models: Exogenous and Endogenous Linguistic Input.","authors":"Nan Zhao, Xufeng Duan, Zhenguang G Cai","doi":"10.1162/OPMI.a.33","DOIUrl":"10.1162/OPMI.a.33","url":null,"abstract":"<p><p>Developmental language models (DLMs) aim to replicate the efficiency of child language acquisition but often focus solely on the estimation of exogenous linguistic input. We argue that a child's linguistic growth is also critically shaped by endogenous processes, including (1) co-opting language in non-linguistic perception and cognition, (2) engaging in private and inner speech, and (3) benefiting from neural replay of linguistic information during sleep. These endogenous processes amplify and refine exogenous linguistic input in ways that current DLMs do not replicate. To align DLMs with child language acquisition, we propose redefining \"linguistic exposure\" to encompass both exogenous and endogenous linguistic input. By integrating label feedback, self-generated speech, and sleep-like consolidation, researchers can narrow the gap between artificial and human learning. Collaborations across machine learning, psychology, and linguistics will be essential to ground models in empirical data on child behavior and build DLMs that truly reflect the marvel of language acquisition.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"1543-1549"},"PeriodicalIF":0.0,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12506926/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145259319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Epistemic Curiosity in Kea Parrots and Human Children
Open Mind 9: 1528-1542. Pub Date: 2025-09-17. eCollection Date: 2025-01-01. DOI: 10.1162/OPMI.a.34
Gabriella E Smith, Megan L Lambert, Eliza Swindell, Jan M Engelmann, Christoph J Völter
Both human children and animals seek information following a violation-of-expectation event, but little research suggests that the latter do so for its own sake. In this preregistered experiment, we compared epistemic curiosity (the pursuit of information for its own sake) in kea parrots (Nestor notabilis) and three-year-old human children (Homo sapiens) following a violation-of-expectation event. Subjects were trained to push a tool into an apparatus that produced a reward, before the apparatus was surreptitiously made non-functional in subsequent trials. In both functional and non-functional trials, after solving the task, subjects were rewarded and allowed to explore the apparatus for thirty seconds, with the opportunity to peek into the side of the apparatus. We found that relatively more kea peeked than children, but the children, and not the kea, were significantly more likely to peek in non-functional than in functional trials, particularly when the researcher was absent. While both species showed markers of curiosity in the experiment, we found expectancy-violation-induced epistemic curiosity only in the children, and not the kea, in this context.
{"title":"Epistemic Curiosity in Kea Parrots and Human Children.","authors":"Gabriella E Smith, Megan L Lambert, Eliza Swindell, Jan M Engelmann, Christoph J Völter","doi":"10.1162/OPMI.a.34","DOIUrl":"10.1162/OPMI.a.34","url":null,"abstract":"<p><p>Both human children and animals seek information following a violation-of-expectation event, but little research suggests the latter do so for the sake of it. In this preregistered experiment, we compared epistemic curiosity-the pursuit of information for its own sake-in kea parrots (<i>Nestor notabilis</i>) and three-year-old human children (<i>Homo sapiens</i>) following a violation-of-expectation event. Subjects were trained to push a tool into an apparatus that produced a reward before the apparatus was surreptitiously made non-functional in following trials. In both functional and non-functional trials, after solving the task, subjects were rewarded and allowed to explore the apparatus for thirty seconds with the opportunity to peek into the side of the apparatus. We found that relatively more kea peeked than children, but the children and not the kea were significantly more likely to peek in the non-functional versus functional trials, particularly when the researcher was absent. While both species showed markers of curiosity in the experiment, we found expectancy-violation-induced epistemic curiosity only in the children and not the kea in this context.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"1528-1542"},"PeriodicalIF":0.0,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12506928/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145259367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semantic Anchors Facilitate Task Encoding in Continual Learning
Open Mind 9: 1467-1505. Pub Date: 2025-09-09. eCollection Date: 2025-01-01. DOI: 10.1162/OPMI.a.28
Mina Habibi, Pieter Verbeke, Mehdi Senoussi, Senne Braem
Humans are remarkably efficient at learning new tasks, in large part because they integrate previously learned knowledge. However, research on task learning typically focuses on the learning of abstract task rules over minimalist stimuli, so as to study behavior independently of the learning history that humans come equipped with (i.e., semantic knowledge). In contrast, several theories suggest that the use of semantic knowledge and labels may help the learning of new task information. Here, we tested whether providing existing, semantically rich task embeddings and response labels allowed for more robust task rule encoding and less (catastrophic) forgetting and interference. Our results show that providing semantically rich task settings and response labels resulted in less task forgetting (Experiment 1), whether the labels were pictorial symbols or words (Experiment 2), and when contrasted with visually matched shape labels without inherent meaning (Experiment 4). Using a subsequent value-based decision-making task and reinforcement learning modeling (Experiment 3), we demonstrate how the learned embedding of novel stimuli in semantically rich representations further allowed more efficient, feature-specific processing when learning new task information. Finally, using artificial recurrent neural networks fitted to our participants' task performance, we found that task separation during learning was more predictive of learning and task performance in the semantically rich conditions. Together, our findings show the benefit of using semantically rich task rules and response labels during novel task learning, offering important insights into why humans excel at continual learning and are less susceptible to catastrophic forgetting than most artificial agents.
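As a concrete, generic sketch of the kind of reinforcement learning model typically fit in value-based decision-making tasks like Experiment 3 (a delta-rule value update with a softmax choice policy; not necessarily the authors' exact model):

```python
# Generic delta-rule / softmax reinforcement learning model on a hypothetical
# two-option bandit; an illustration only, not the authors' fitted model.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.2, 3.0            # learning rate, inverse temperature
reward_prob = [0.8, 0.2]          # hypothetical reward probabilities
Q = np.zeros(2)                   # learned option values

for trial in range(500):
    p_choose = np.exp(beta * Q) / np.exp(beta * Q).sum()  # softmax policy
    choice = rng.choice(2, p=p_choose)
    reward = float(rng.random() < reward_prob[choice])
    Q[choice] += alpha * (reward - Q[choice])             # delta-rule update

print(f"learned values: {Q.round(2)}")  # should approach [0.8, 0.2]
```

Fitting alpha and beta to each participant's choices is what lets a model of this family quantify how value learning differs between semantically rich and neutral conditions.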
{"title":"Semantic Anchors Facilitate Task Encoding in Continual Learning.","authors":"Mina Habibi, Pieter Verbeke, Mehdi Senoussi, Senne Braem","doi":"10.1162/OPMI.a.28","DOIUrl":"10.1162/OPMI.a.28","url":null,"abstract":"<p><p>Humans are remarkably efficient at learning new tasks, in large part by relying on the integration of previously learned knowledge. However, research on task learning typically focuses on the learning of abstract task rules on minimalist stimuli, to study behavior independent of the learning history that humans come equipped with (i.e., semantic knowledge). In contrast, several theories suggest that the use of semantic knowledge and labels may help the learning of new task information. Here, we tested whether providing existing, semantically rich task embeddings and response labels allowed for more robust task rule encoding and less (catastrophic) forgetting and interference. Our results show that providing semantically rich task settings and response labels resulted in less task forgetting (Experiment 1), both when using pictorial symbols or words as labels (Experiment 2), or when contrasted with visually matched shape labels without inherent meaning (Experiment 4). Using a subsequent value-based decision-making task and reinforcement learning modeling (Experiment 3), we demonstrate how the learned embedding of novel stimuli in semantically rich, representations, further allowed for a more efficient, feature-specific processing when learning new task information. Finally, using artificial recurrent neural networks fitted to our participants' task performance, we found that task separation during learning was more predictive of learning and task performance in the semantically rich conditions. Together, our findings show the benefit of using semantically rich task rules and response labels during novel task learning, thereby offering important insights into why humans excel in continual learning and are less susceptible to catastrophic forgetting compared to most artificial agents.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"1467-1505"},"PeriodicalIF":0.0,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12483573/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145207745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predictive Structure Emerges During the Generalisation of Kin Terms to New Referents
Open Mind 9: 1431-1466. Pub Date: 2025-09-09. eCollection Date: 2025-01-01. DOI: 10.1162/OPMI.a.27
Maisy Hallam, Fiona M Jordan, Simon Kirby, Kenny Smith
Despite cross-linguistic diversity in how kin relations map to terminology, there are constraints on which kin may be categorised together. But what are the constraints on kin term variation, and where do they come from? One proposed constraint is internal co-selection, an evolutionary process in which terminological changes in one generation of kin co-occur with parallel changes in other generations. This results in kin categories that are predictable on the basis of other kin categories, a property we call predictive structure. To determine the strength of this constraint, we measured the predictive structure of kinship terminology systems from 731 languages. We found that kinship terminologies exhibit a significant degree of predictive structure, and we argue that its prevalence reflects a cognitive pressure for simplicity imposed during the generalisation of known kin categories to new referent types. We tested this claim using an artificial kin term generalisation task. Our results suggest that people do favour predictive structure when generalising from known kin categories to new referents, but that this preference faces interference from other pressures to distinguish kin by features like gender.
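One way to operationalize predictive structure, offered here as a hedged illustration rather than the paper's exact measure, is to score how well the grouping of kin types into terms in one generation predicts the grouping in an adjacent generation, for instance with normalized mutual information between the two categorizations:

```python
# Hedged illustration of scoring predictive structure across generations:
# NMI = 1 means one generation's kin categories fully predict the other's.
# The kin system below is hypothetical, and this may differ from the
# paper's actual measure.
from sklearn.metrics import normalized_mutual_info_score

# Terms for mother's sister, mother's brother, father's sister, father's
# brother, and for the children of those four kin types, respectively.
parent_gen = ["m_side", "m_side", "f_side", "f_side"]
cousin_gen = ["m_cousin", "m_cousin", "f_cousin", "f_cousin"]

score = normalized_mutual_info_score(parent_gen, cousin_gen)
print(f"predictive structure (NMI) = {score:.2f}")  # 1.00: fully parallel
```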
{"title":"Predictive Structure Emerges During the Generalisation of Kin Terms to New Referents.","authors":"Maisy Hallam, Fiona M Jordan, Simon Kirby, Kenny Smith","doi":"10.1162/OPMI.a.27","DOIUrl":"10.1162/OPMI.a.27","url":null,"abstract":"<p><p>Despite cross-linguistic diversity in how kin relations map to terminology, there are constraints on which kin may be categorised together. But what are the constraints on kin term variation, and where do they come from? One proposed constraint is internal co-selection-an evolutionary process where terminological changes in one generation of kin co-occur with parallel changes in other generations. This results in kin categories which are predictable on the basis of other kin categories, a property we call <i>predictive structure</i>. To determine the strength of this constraint, we measured the predictive structure of kinship terminology systems from 731 languages. We found that kinship terminologies exhibit a significant degree of predictive structure, and we argue that its prevalence reflects a cognitive pressure for simplicity imposed during the generalisation of known kin categories to new referent types. We tested this claim using an artificial kin term generalisation task. Our results suggest that people do favour predictive structure when generalising from known kin categories to new referents, but that this preference faces interference from other pressures to distinguish kin by features like gender.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"1431-1466"},"PeriodicalIF":0.0,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12483572/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145207810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Relative Contributions of Traits and Contexts on Social Network Learning
Open Mind 9: 1506-1527. Pub Date: 2025-09-09. eCollection Date: 2025-01-01. DOI: 10.1162/OPMI.a.31
Ameer Ghouse, Raphael Kaplan
Navigating the social world is guided by remembering which people know each other. Yet different factors might influence how social relationships are remembered; in particular, people's shared attributes could distort a social network's mnemonic representation. Here, we study whether dyadically shared contexts and personality traits impact how people remember relationships in social networks. Across varying levels of network topological complexity, we find that the contexts in which people know each other are the most memorable, and that better contextual retrieval predicts relationship recall. In contrast, shared personality traits affect relationship recall differently depending on social network complexity: shared negatively valenced traits relate to worse relationship recall in the simple network. Subsequent modeling revealed that as networks become more complex, relationships between more centrally positioned individuals who share negatively valenced traits are better recalled than those between less well-connected individuals. These results suggest that contextual memory can serve as a scaffold for remembering relationships in a social network, while the impact of affective traits on social network retrievability depends on emotional valence and the individuals involved. More generally, our findings give insight into how the same social network can be represented differently based on one's past experience.
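The centrality result lends itself to a small sketch: given a hypothetical social network, compute each member's centrality and score each relationship by how well connected the pair is. This is an assumed operationalization for illustration, not the authors' exact model.

```python
# Hedged sketch: rank relationships by the combined centrality of the two
# members, the kind of quantity the modeling result above points to.
# The network and the scoring rule are hypothetical.
import networkx as nx

G = nx.Graph([("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E")])
centrality = nx.degree_centrality(G)

for u, v in sorted(G.edges, key=lambda e: -(centrality[e[0]] + centrality[e[1]])):
    print(f"{u}-{v}: combined centrality = {centrality[u] + centrality[v]:.2f}")
```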
{"title":"The Relative Contributions of Traits and Contexts on Social Network Learning.","authors":"Ameer Ghouse, Raphael Kaplan","doi":"10.1162/OPMI.a.31","DOIUrl":"10.1162/OPMI.a.31","url":null,"abstract":"<p><p>Navigating the social world is guided by remembering which people know each other. Yet, different factors might influence how social relationships are remembered, where people's shared attributes could distort a social network's mnemonic representation. Here, we study whether dyadically shared contexts and personality traits impact how people remember relationships in social networks. Through varying levels of network topological complexity, we find the contexts where people know each other are most memorable and that better contextual retrieval predicts relationship recall. In contrast, shared personality traits affect relationship recall differently depending on social network complexity, where shared negatively valenced traits relate to worse relationship recall in the simple network. Subsequent modeling revealed that as networks become more complex, relationships between more centrally positioned individuals that share negatively valenced traits are better recalled compared to less well-connected individuals. These results suggest contextual memory can serve as a scaffold for remembering relationships in a social network, while affective traits' impact on social network retrievability depends on emotional valence and the individuals involved. More generally, our findings give insight into how the same social network can be represented differently based on one's past experience.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"1506-1527"},"PeriodicalIF":0.0,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12483571/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145207861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
People Evaluate Agents Based on the Algorithms That Drive Their Behavior
Open Mind 9: 1411-1430. Pub Date: 2025-08-29. eCollection Date: 2025-01-01. DOI: 10.1162/opmi.a.26
Eric Bigelow, Tomer Ullman
When people see an agent perform a task, do they care if the underlying algorithm driving it is 'intelligent' or not? More generally, when people intuitively evaluate the performance of others, do they value external performance metrics (intuitive behaviorism) or do they also take into account the underlying algorithm driving the agent's behavior (intuitive cognitivism)? We propose 3 dimensions for examining this distinction: Action Efficiency, Representation Efficiency, and Generalization. Across 3 tasks (N = 598), we showed people pairs of maze-solving agents, together with the programs driving the agents' behavior. Participants were asked to pick the 'better' of the two programs, based on a single example of the two programs, evaluated on the same maze. Each pair of programs varied along one of our 3 proposed dimensions. Our framework predicts people's choice of program across the tasks, and the results support the idea that people are intuitive cognitivists.
{"title":"People Evaluate Agents Based on the Algorithms That Drive Their Behavior.","authors":"Eric Bigelow, Tomer Ullman","doi":"10.1162/opmi.a.26","DOIUrl":"10.1162/opmi.a.26","url":null,"abstract":"<p><p>When people see an agent perform a task, do they care if the underlying algorithm driving it is 'intelligent' or not? More generally, when people intuitively evaluate the performance of others, do they value external performance metrics (intuitive behaviorism) or do they also take into account the underlying algorithm driving the agent's behavior (intuitive cognitivism)? We propose 3 dimensions for examining this distinction: Action Efficiency, Representation Efficiency, and Generalization. Across 3 tasks (<i>N</i> = 598), we showed people pairs of maze-solving agents, together with the programs driving the agents' behavior. Participants were asked to pick the 'better' of the two programs, based on a single example of the two programs, evaluated on the same maze. Each pair of programs varied along one of our 3 proposed dimensions. Our framework predicts people's choice of program across the tasks, and the results support the idea that people are intuitive cognitivists.</p>","PeriodicalId":32558,"journal":{"name":"Open Mind","volume":"9 ","pages":"1411-1430"},"PeriodicalIF":0.0,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12435988/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145076075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}