Visual working memory (VWM) refers to the temporary storage and manipulation of visual information. Although visually different, objects we view and remember can share the same higher-level category information, such as an apple, orange, and banana all being classified as fruit. We study the influence of category information on VWM, focusing on the question of whether stimulus category coherence (i.e., whether all to-be-remembered items belong to the same semantic category) influences VWM performance. This question is addressed in two behavioral experiments using a change-detection paradigm and a rational analysis using an ideal observer based on a Bayesian model. Both experimental participants and the ideal observer often, but not always, performed numerically better on coherent trials (i.e., when all stimuli belonged to the same category). We hypothesize that the influence of category coherence information on VWM may be task-dependent and/or stimulus-dependent. When category coherence information was highly valuable for task performance, as indicated by the ideal observer, participants tended to make use of it; when the ideal observer indicated that this information was not crucial to performance, they tended not to. In addition, both participants and the ideal observer showed a bias toward responding “same,” and often showed a stronger influence of category coherence on change trials. The consistencies between participant and ideal observer responses suggest that participants often behaved as they did because these behaviors are optimal (or approximately so) for maximizing task performance. This may help explain conflicting results reported in the scientific literature.
{"title":"Does Stimulus Category Coherence Influence Visual Working Memory? A Rational Analysis","authors":"Ruoyang Hu, Robert A. Jacobs","doi":"10.1111/cogs.13498","DOIUrl":"https://doi.org/10.1111/cogs.13498","url":null,"abstract":"<p>Visual working memory (VWM) refers to the temporary storage and manipulation of visual information. Although visually different, objects we view and remember can share the same higher-level category information, such as an apple, orange, and banana all being classified as fruit. We study the influence of category information on VWM, focusing on the question of whether stimulus category coherence (i.e., whether all to-be-remembered items belong to the same semantic category) influences VWM performance. This question is addressed in two behavioral experiments using a change-detection paradigm and a rational analysis using an ideal observer based on a Bayesian model. Both experimental participants and the ideal observer often, but not always, performed numerically better on coherent trials (i.e., when all stimuli belonged to the same category). We hypothesize that the influence of category coherence information on VWM may be task-dependent and/or stimulus-dependent. In conditions when category coherence information is highly valuable for task performance, as indicated by the ideal observer, then participants tended to make use of it. However, when the ideal observer suggested this information was not crucial to performance, participants did not. In addition, both participants and the ideal observer showed a bias toward responding “same,” and often showed a stronger influence of category coherence on change trials. The consistencies between participant and ideal observer responses suggest participants often behaved as they did because these behaviors are optimal (or approximately so) for maximizing task performance. This may help explain conflicting results reported in the scientific literature.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 9","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13498","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142234960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simon Devylder, Jennifer Hinnel, Joost van de Weier, Linea Brink Andersen, Lucie Laporte-Devylder, Heron Ken Tomaki Kulukul
When people talk about kinship systems, they often use co-speech gestures and other representations to elaborate. This paper investigates such polysemiotic (spoken, gestured, and drawn) descriptions of kinship relations, to see if they display recurring patterns of conventionalization that capture specific social structures. We present an exploratory hypothesis-generating study of descriptions produced by an ethnolinguistic community that is lesser known to the cognitive sciences: the Paamese people of Vanuatu. Forty Paamese speakers were asked to talk about their family in semi-guided kinship interviews. Analyses of the speech, gesture, and drawings produced during these interviews revealed that lineality (i.e., mother's side vs. father's side) is lateralized in the speaker's gesture space. In other words, kinship members of the speaker's matriline are placed on the left side of the speaker's body and those of the patriline are placed on their right side when they are mentioned in speech. Moreover, we find that the gestures produced by Paamese participants during verbal descriptions of marital relations are performed significantly more often along two diagonal directions of the sagittal axis. We show that these diagonals are also found in the few diagrams that participants drew on the ground to augment their verbo-gestural descriptions of marriage practices with drawing. We interpret this behavior as evidence of a spatial template, which Paamese speakers activate to think and communicate about family relations. We therefore argue that extending investigations of kinship structures beyond kinship terminologies alone can unveil additional key factors that shape kinship cognition and communication and thereby provide further insights into the diversity of social structures.
{"title":"Kin Cognition and Communication: What Talking, Gesturing, and Drawing About Family Can Tell us About the Way We Think About This Core Social Structure","authors":"Simon Devylder, Jennifer Hinnel, Joost van de Weier, Linea Brink Andersen, Lucie Laporte-Devylder, Heron Ken Tomaki Kulukul","doi":"10.1111/cogs.13484","DOIUrl":"10.1111/cogs.13484","url":null,"abstract":"<p>When people talk about kinship systems, they often use co-speech gestures and other representations to elaborate. This paper investigates such <i>polysemiotic</i> (spoken, gestured, and drawn) descriptions of kinship relations, to see if they display recurring patterns of conventionalization that capture specific social structures. We present an exploratory hypothesis-generating study of descriptions produced by a lesser-known ethnolinguistic community to the cognitive sciences: the Paamese people of Vanuatu. Forty Paamese speakers were asked to talk about their family in semi-guided kinship interviews. Analyses of the speech, gesture, and drawings produced during these interviews revealed that lineality (i.e., mother's side vs. father's side) is lateralized in the speaker's gesture space. In other words, kinship members of the speaker's matriline are placed on the left side of the speaker's body and those of the patriline are placed on their right side, when they are mentioned in speech. Moreover, we find that the gesture produced by Paamese participants during verbal descriptions of marital relations are performed significantly more often on two diagonal directions of the sagittal axis. We show that these diagonals are also found in the few diagrams that participants drew on the ground to augment their verbo-gestural descriptions of marriage practices with drawing. We interpret this behavior as evidence of a spatial template, which Paamese speakers activate to think and communicate about family relations. We therefore argue that extending investigations of kinship structures beyond kinship terminologies alone can unveil additional key factors that shape kinship cognition and communication and hereby provide further insights into the diversity of social structures.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 9","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13484","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142127040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In isolated English word reading, readers perform best when their initial eye fixation is directed to the area between the beginning and the center of the word, that is, the optimal viewing position (OVP). Thus, how well readers voluntarily direct their eye gaze to this OVP during isolated word reading may be associated with reading performance. Using Eye Movement analysis with Hidden Markov Models, we discovered through clustering two representative eye movement patterns during lexical decisions, which focused on the OVP and the word center, respectively. Higher eye movement similarity to the OVP-focusing pattern predicted faster lexical decision times over and above cognitive abilities and lexical knowledge. However, the OVP-focusing pattern was associated with longer isolated single-letter naming times, suggesting that identifying isolated letters and multi-letter words requires conflicting visual abilities. In contrast, in both word and pseudoword naming, although clustering did not reveal an OVP-focused pattern, higher consistency of the first fixation, as measured by entropy, predicted faster naming times over and above cognitive abilities and lexical knowledge. Thus, developing a consistent eye movement pattern focusing on the OVP is essential for word orthographic processing and reading fluency. This finding has important implications for interventions for reading difficulties.
{"title":"Understanding the Role of Eye Movement Pattern and Consistency in Isolated English Word Reading Through Hidden Markov Modeling","authors":"Weiyan Liao, Janet Hui-wen Hsiao","doi":"10.1111/cogs.13489","DOIUrl":"10.1111/cogs.13489","url":null,"abstract":"<p>In isolated English word reading, readers have the optimal performance when their initial eye fixation is directed to the area between the beginning and word center, that is, the optimal viewing position (OVP). Thus, how well readers voluntarily direct eye gaze to this OVP during isolated word reading may be associated with reading performance. Using Eye Movement analysis with Hidden Markov Models, we discovered two representative eye movement patterns during lexical decisions through clustering, which focused at the OVP and the word center, respectively. Higher eye movement similarity to the OVP-focusing pattern predicted faster lexical decision time in addition to cognitive abilities and lexical knowledge. However, the OVP-focusing pattern was associated with longer isolated single letter naming time, suggesting conflicting visual abilities required for identifying isolated letters and multi-letter words. In contrast, in both word and pseudoword naming, although clustering did not reveal an OVP-focused pattern, higher consistency of the first fixation as measured in entropy predicted faster naming time in addition to cognitive abilities and lexical knowledge. Thus, developing a consistent eye movement pattern focusing on the OVP is essential for word orthographic processing and reading fluency. This finding has important implications for interventions for reading difficulties.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 9","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13489","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142127042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Early number skills represent critical milestones in children's cognitive development and are shaped over years of interacting with quantities and numerals in various contexts. Several connectionist computational models have attempted to emulate how certain number concepts may be learned, represented, and processed in the brain. However, these models mainly used highly simplified inputs and focused on limited tasks. We expand on previous work in two directions: First, we train a model end-to-end on video demonstrations in a synthetic environment with multimodal visual and language inputs. Second, we use a more holistic dataset of 35 tasks, covering enumeration, set comparisons, symbolic digits, and seriation. The order in which the model acquires tasks reflects input length and variability, and the resulting trajectories mostly fit with findings from educational psychology. The trained model also displays symbolic and non-symbolic size and distance effects. Using techniques from interpretability research, we investigate how our attention-based model integrates cross-modal representations and binds them into context-specific associative networks to solve different tasks. We compare models trained with and without symbolic inputs and find that the purely non-symbolic model employs more processing-intensive strategies to determine set size.
{"title":"Exploring Early Number Abilities With Multimodal Transformers","authors":"Alice Hein, Klaus Diepold","doi":"10.1111/cogs.13492","DOIUrl":"10.1111/cogs.13492","url":null,"abstract":"<p>Early number skills represent critical milestones in children's cognitive development and are shaped over years of interacting with quantities and numerals in various contexts. Several connectionist computational models have attempted to emulate how certain number concepts may be learned, represented, and processed in the brain. However, these models mainly used highly simplified inputs and focused on limited tasks. We expand on previous work in two directions: First, we train a model end-to-end on video demonstrations in a synthetic environment with multimodal visual and language inputs. Second, we use a more holistic dataset of 35 tasks, covering enumeration, set comparisons, symbolic digits, and seriation. The order in which the model acquires tasks reflects input length and variability, and the resulting trajectories mostly fit with findings from educational psychology. The trained model also displays symbolic and non-symbolic size and distance effects. Using techniques from interpretability research, we investigate how our attention-based model integrates cross-modal representations and binds them into context-specific associative networks to solve different tasks. We compare models trained with and without symbolic inputs and find that the purely non-symbolic model employs more processing-intensive strategies to determine set size.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 9","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13492","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142127039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How situated embodied agents may achieve goals using knowledge is the classical question of natural and artificial intelligence. How organisms achieve this with their nervous systems is a central challenge for a neural theory of embodied cognition. To structure this challenge, we borrow terms from Searle's analysis of intentionality in its two directions of fit and six psychological modes (perception, memory, belief, intention-in-action, prior intention, desire). We postulate that intentional states are instantiated by neural activation patterns that are stabilized by neural interaction. Dynamic instabilities provide the neural mechanism for initiating and terminating intentional states and are critical to organizing sequences of intentional states. Beliefs represented by networks of concept nodes are autonomously learned and activated in response to desired outcomes. The neural dynamic principles of an intentional agent are demonstrated in a toy scenario in which a robotic agent explores an environment and paints objects in desired colors based on learned color transformation rules.
{"title":"Neural Dynamic Principles for an Intentional Embodied Agent","authors":"Jan Tekülve, Gregor Schöner","doi":"10.1111/cogs.13491","DOIUrl":"10.1111/cogs.13491","url":null,"abstract":"<p>How situated embodied agents may achieve goals using knowledge is the classical question of natural and artificial intelligence. How organisms achieve this with their nervous systems is a central challenge for a neural theory of embodied cognition. To structure this challenge, we borrow terms from Searle's analysis of intentionality in its two directions of fit and six psychological modes (perception, memory, belief, intention-in-action, prior intention, desire). We postulate that intentional states are instantiated by neural activation patterns that are stabilized by neural interaction. Dynamic instabilities provide the neural mechanism for initiating and terminating intentional states and are critical to organizing sequences of intentional states. Beliefs represented by networks of concept nodes are autonomously learned and activated in response to desired outcomes. The neural dynamic principles of an intentional agent are demonstrated in a toy scenario in which a robotic agent explores an environment and paints objects in desired colors based on learned color transformation rules.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 9","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13491","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142127041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ana Cristina Quelhas, Célia Rasga, P. N. Johnson-Laird
Quantified modal inferences interest logicians, linguists, and computer scientists, but no previous psychological study of them appears to be in the literature. Here is an example of one:
People tend to conclude: Paulo is possibly a businessman (Experiment 1). It seems plausible, and it follows from an intuitive mental model in which Paulo is one of a set of artists who are businessmen. Further deliberation can yield a model of an alternative possibility in which Paulo is not one of the artists, which confirms that the conclusion is only a possibility. The snag is that standard modal logics, which deal with possibilities, cannot yield a particular conclusion for any premises: infinitely many conclusions follow validly (from any premises), but they do not include the present conclusion. Yet further experiments corroborated a new mental model theory's predictions for various inferences (Experiment 2), for the occurrence of factual conclusions drawn from premises about possibilities (Experiment 3), and for inferences from premises of modal syllogisms (Experiment 4). The theory is therefore plausible, but we explore the feasibility of a cognitive theory based on modifications to modal logic.
{"title":"Reasoning From Quantified Modal Premises","authors":"Ana Cristina Quelhas, Célia Rasga, P. N. Johnson-Laird","doi":"10.1111/cogs.13485","DOIUrl":"10.1111/cogs.13485","url":null,"abstract":"<p>Quantified modal inferences interest logicians, linguists, and computer scientists, but no previous psychological study of them appears to be in the literature. Here is an example of one:\u0000\u0000 </p><p>People tend to conclude: <i>Paulo is possibly a businessman</i> (Experiment 1). It seems plausible, and it follows from an intuitive mental model in which Paulo is one of a set of artists who are businessmen. Further deliberation can yield a model of an alternative possibility in which Paulo is not one of the artists, which confirms that the conclusion is only a possibility. The snag is that standard modal logics, which deal with possibilities, cannot yield a particular conclusion to any premises: Infinitely many follow validly (from any premises) but they do not include the present conclusion. Yet, further experiments corroborated a new mental model theory's predictions for various inferences (Experiment 2), for the occurrence of factual conclusions drawn from premises about possibilities (Experiment 3) and for inferences from premises of modal syllogisms (Experiment 4). The theory is therefore plausible, but we explore the feasibility of a cognitive theory based on modifications to modal logic.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 8","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142005591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speakers tend to produce disfluencies when naming unexpected or complex items; in turn, when perceiving disfluency, listeners tend to expect upcoming reference to items that are unexpected or complex to name. In two experiments, we examined whether these disfluency-based expectations are routine or whether they adapt, in a talker-specific manner, to the way the speaker uses disfluency in the current context. Participants listened to instructions to look at objects in contexts with several images, some of which lacked conventional names. We manipulated the co-occurrence of disfluency and reference to novel versus familiar objects in a single-talker situation (Experiment 1) and in a multi-talker situation (Experiment 2). In the predictive condition, disfluent expressions referred to novel objects, and fluent expressions referred to familiar objects. In the nonpredictive condition, fluent and disfluent trials referred to either familiar or novel objects. Participants’ gaze revealed that listeners more readily predicted familiar images on fluent trials and novel images on disfluent trials in the predictive condition than in the nonpredictive condition. In sum, listeners adapted their expectations about upcoming words based on recent experience with disfluency. Disfluency is not processed in an invariant way; instead, it is a cue that is flexibly interpreted depending on the local context, even in a multi-talker setting.
{"title":"Partner-Specific Adaptation in Disfluency Processing","authors":"Si On Yoon, Sarah Brown-Schmidt","doi":"10.1111/cogs.13490","DOIUrl":"10.1111/cogs.13490","url":null,"abstract":"<p>Speakers tend to produce disfluencies when naming unexpected or complex items; in turn, when perceiving disfluency, listeners tend to expect upcoming reference to items that are unexpected or complex to name. In two experiments, we examined if these disfluency-based expectations are routine, or instead, if they adapt to the way the speaker uses disfluency in the current context in a talker-specific manner. Participants listened to instructions to look at objects in contexts with several images, some of which lacked conventional names. We manipulated the co-occurrence of disfluency and reference to novel versus familiar objects in a single talker situation (Experiment 1) and in a multi-talker situation (Experiment 2). In the predictive condition, disfluent expressions referred to novel objects, and fluent expressions referred to familiar objects. In the nonpredictive condition, fluent and disfluent trials referred to either familiar or novel objects. Participants’ gaze revealed that listeners more readily predicted familiar images for fluent trials and novel images for disfluent trials in the predictive condition than in the nonpredictive condition. In sum, listeners adapted their expectations about upcoming words based on recent experience with disfluency. Disfluency is not invariably processed, but instead a cue that is flexibly interpreted depending on the local context even in a multi-talker setting.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 8","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142005590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lari Vainio, Ida-Lotta Myllylä, Alexandra Wikström, Martti Vainio
Research shows that high- and low-pitch sounds can be associated with various meanings. For example, high-pitch sounds are associated with small concepts, whereas low-pitch sounds are associated with large concepts. This study presents three experiments revealing that high-pitch sounds are also associated with open concepts and opening hand actions, while low-pitch sounds are associated with closed concepts and closing hand actions. In Experiment 1, this sound-meaning correspondence effect was shown using a two-alternative forced-choice task, while Experiments 2 and 3 used reaction time tasks to show this interaction. In Experiment 2, high-pitch vocalizations were found to facilitate opening hand gestures, and low-pitch vocalizations were found to facilitate closing hand gestures, when the two were performed simultaneously. In Experiment 3, high-pitched vocalizations were produced particularly rapidly when the visual target stimulus presented an open object, and low-pitched vocalizations were produced particularly rapidly when the target presented a closed object. These findings are discussed in relation to the meaning of intonational cues and are suggested to be based on the cross-modal representation of conceptual spatial knowledge in sensory, motor, and affective systems. Additionally, this pitch-opening effect might share cognitive processes with other pitch-meaning effects.
{"title":"High-Pitched Sound is Open and Low-Pitched Sound is Closed: Representing the Spatial Meaning of Pitch Height","authors":"Lari Vainio, Ida-Lotta Myllylä, Alexandra Wikström, Martti Vainio","doi":"10.1111/cogs.13486","DOIUrl":"10.1111/cogs.13486","url":null,"abstract":"<p>Research shows that high- and low-pitch sounds can be associated with various meanings. For example, high-pitch sounds are associated with small concepts, whereas low-pitch sounds are associated with large concepts. This study presents three experiments revealing that high-pitch sounds are also associated with open concepts and opening hand actions, while low-pitch sounds are associated with closed concepts and closing hand actions. In Experiment 1, this sound-meaning correspondence effect was shown using the two-alternative forced-choice task, while Experiments 2 and 3 used reaction time tasks to show this interaction. In Experiment 2, high-pitch vocalizations were found to facilitate opening hand gestures, and low-pitch vocalizations were found to facilitate closing hand gestures, when performed simultaneously. In Experiment 3, high-pitched vocalizations were produced particularly rapidly when the visual target stimulus presented an open object, and low-pitched vocalizations were produced particularly rapidly when the target presented a closed object. These findings are discussed concerning the meaning of intonational cues. They are suggested to be based on cross-modally representing conceptual spatial knowledge in sensory, motor, and affective systems. Additionally, this pitch-opening effect might share cognitive processes with other pitch-meaning effects.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 8","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13486","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142001072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yuhua Yu, Lindsay Krebs, Mark Beeman, Vicky T. Lai
Metaphor generation is both a creative act and a means of learning. When learning a new concept, people often create a metaphor to connect the new concept to existing knowledge. Does the manner in which people generate a metaphor, via sudden insight (an Aha! moment) or deliberate analysis, influence the quality of the generated metaphor and subsequent learning outcomes? According to some research, deliberate processing enhances knowledge retention; hence, generation via analysis likely leads to better concept learning. However, other research has shown that solutions generated via insight are better remembered. In the current study, participants were presented with science concepts and descriptions, then generated metaphors for the concepts. They also indicated how they generated each metaphor and rated their metaphors for novelty and aptness. We assessed participants’ learning outcomes with a memory test and evaluated the creative quality of the metaphors based on self- and crowd-sourced ratings. Consistent with the deliberate processing benefit, participants became more familiar with the target science concept if they had previously generated a metaphor for the concept via analysis rather than via insight. We also found that metaphors generated via analysis did not differ from metaphors generated via insight in quality (aptness or novelty) or in how well they were remembered. However, participants’ self-evaluations of metaphors generated via insight showed more agreement with independent raters, suggesting a role of insight in modulating the creative ideation process. These preliminary findings have implications for understanding the nature of insight during idea generation and its impact on learning.
{"title":"Exploring How Generating Metaphor Via Insight Versus Analysis Affects Metaphor Quality and Learning Outcomes","authors":"Yuhua Yu, Lindsay Krebs, Mark Beeman, Vicky T. Lai","doi":"10.1111/cogs.13488","DOIUrl":"10.1111/cogs.13488","url":null,"abstract":"<p>Metaphor generation is both a creative act and a means of learning. When learning a new concept, people often create a metaphor to connect the new concept to existing knowledge. Does the manner in which people generate a metaphor, via sudden insight (Aha! moment) or deliberate analysis, influence the quality of generation and subsequent learning outcomes? According to some research, deliberate processing enhances knowledge retention; hence, generation via analysis likely leads to better concept learning. However, other research has shown that solutions generated via insight are better remembered. In the current study, participants were presented with science concepts and descriptions, then generated metaphors for the concepts. They also indicated how they generated each metaphor and rated their metaphor for novelty and aptness. We assessed participants’ learning outcomes with a memory test and evaluated the creative quality of the metaphors based on self- and crowd-sourced ratings. Consistent with the deliberate processing benefit, participants became more familiar with the target science concept if they previously generated a metaphor for the concept via analysis compared to via insight. We also found that metaphors generated via analysis did not differ from metaphors generated via insight in quality (aptness or novelty) nor in how well they were remembered. However, participants’ self-evaluations of metaphors generated via insight showed more agreement with independent raters, suggesting the role of insight in modulating the creative ideation process. These preliminary findings have implications for understanding the nature of insight during idea generation and its impact on learning.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 8","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13488","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142001071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Literacy is in decline in many parts of the world, accompanied by drops in associated cognitive skills (including IQ) and an increasing susceptibility to fake news. It is possible that the recent explosive growth and widespread deployment of Large Language Models (LLMs) might exacerbate this trend, but there is also a chance that LLMs can help turn things around. We argue that cognitive science is ideally suited to help steer future literacy development in the right direction by challenging and informing current educational practices and policy. Cognitive scientists have the right interdisciplinary skills to study, analyze, evaluate, and change LLMs to facilitate their critical use, to encourage turn-taking that promotes rather than hinders literacy, to support literacy acquisition in diverse and equitable ways, and to scaffold potential future changes in what it means to be literate. We urge cognitive scientists to take up this mantle—the future impact of LLMs on human literacy skills is too important to be left to the large, predominantly U.S.-based tech companies.
{"title":"Can Large Language Models Counter the Recent Decline in Literacy Levels? An Important Role for Cognitive Science","authors":"Falk Huettig, Morten H. Christiansen","doi":"10.1111/cogs.13487","DOIUrl":"10.1111/cogs.13487","url":null,"abstract":"<p>Literacy is in decline in many parts of the world, accompanied by drops in associated cognitive skills (including IQ) and an increasing susceptibility to fake news. It is possible that the recent explosive growth and widespread deployment of Large Language Models (LLMs) might exacerbate this trend, but there is also a chance that LLMs can help turn things around. We argue that cognitive science is ideally suited to help steer future literacy development in the right direction by challenging and informing current educational practices and policy. Cognitive scientists have the right interdisciplinary skills to study, analyze, evaluate, and change LLMs to facilitate their critical use, to encourage turn-taking that promotes rather than hinders literacy, to support literacy acquisition in diverse and equitable ways, and to scaffold potential future changes in what it means to be literate. We urge cognitive scientists to take up this mantle—the future impact of LLMs on human literacy skills is too important to be left to the large, predominately U.S.-based tech companies.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":"48 8","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142001070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}