Past research on people's moral judgments about moral dilemmas has revealed a connection between utilitarian judgment and reflective cognitive style. This has traditionally been interpreted as evidence that reflection is conducive to utilitarianism. However, recent research shows that the connection between reflective cognitive style and utilitarian judgments holds only when participants are asked whether the utilitarian option is permissible, and disappears when they are asked whether it is recommended. To explain this phenomenon, we propose that reflective cognitive style is associated with greater moral leniency, that is, a greater tendency to tolerate moral violations, and that moral leniency predicts utilitarian judgment when utilitarian judgment is measured through permissibility. In Study 1 (N = 192), we design a set of vignettes to assess moral leniency. In Studies 2 and 3 (N = 455, 428), we show that reflective cognitive style is indeed associated with greater moral leniency, and that moral leniency mediates the connection between cognitive style and utilitarian judgment. We discuss the implications of our results for the interpretation of the relationship between utilitarianism and reflective cognitive style.
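The mediation pattern described above can be illustrated with a short sketch. The code below is not the authors' analysis; it is a minimal percentile-bootstrap test of an indirect effect on simulated data, with hypothetical variables x (reflective cognitive style), m (moral leniency), and y (permissibility-based utilitarian judgment).

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b indirect effect: a from regressing m on x, b from regressing y on m while controlling for x."""
    a = np.polyfit(x, m, 1)[0]                         # slope of m on x
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]   # slope of y on m, controlling for x
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, seed=0):
    """Percentile-bootstrap 95% confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = [indirect_effect(x[idx], m[idx], y[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.percentile(estimates, [2.5, 97.5])

# Simulated data standing in for the real measures (hypothetical effect sizes).
rng = np.random.default_rng(1)
x = rng.normal(size=400)                  # reflective cognitive style
m = 0.5 * x + rng.normal(size=400)        # moral leniency
y = 0.5 * m + rng.normal(size=400)        # permissibility-based utilitarian judgment
print(bootstrap_ci(x, m, y))              # an interval excluding zero is evidence of mediation
```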
Manon D. Gouiran and Florian Cova (2024). Intellectually Rigorous but Morally Tolerant: Exploring Moral Leniency as a Mediator Between Cognitive Style and “Utilitarian” Judgment. Cognitive Science, 48(12). https://doi.org/10.1111/cogs.70024
Judgments of character traits tend to be overcorrelated, a bias known as the halo effect. We conducted two studies to test an explanation of the effect based on shared lexical context and connotation. Study 1 tested whether the context similarity of trait names could explain 39 participants’ ratings of the probability that two traits would co-occur. Over 126 trait pairs, cosine similarity between the word2vec vectors of the two words was a reliable predictor of the human judgments of trait co-occurrence probability (cross-validated r² = .19, p < .001). Two measures related to word similarity increased the variation accounted for in the human judgments to 45%, cross-validated (p < .001). In Experiment 2, 40 different participants judged similarity of word meaning within the pairs, confirming that the word pairs were not simply synonymous (Average [SD] = 40.8/100 [13.1/100]). Shared lexical context and word connotation play a role in shaping the halo effect.
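The core analysis, predicting co-occurrence ratings from word2vec cosine similarity with cross-validated r², can be sketched as follows. This is not the authors' code; the pretrained-vector file and the column names of trait_pairs.csv are hypothetical.

```python
import numpy as np
import pandas as pd
from gensim.models import KeyedVectors
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Pretrained vectors and trait-pair ratings (file names and columns are hypothetical).
vectors = KeyedVectors.load_word2vec_format("word2vec_vectors.bin", binary=True)
pairs = pd.read_csv("trait_pairs.csv")        # columns: trait1, trait2, mean_cooccurrence_rating

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

pairs["similarity"] = [cosine(vectors[w1], vectors[w2])
                       for w1, w2 in zip(pairs["trait1"], pairs["trait2"])]

X = pairs[["similarity"]].to_numpy()
y = pairs["mean_cooccurrence_rating"].to_numpy()
print(cross_val_score(LinearRegression(), X, y, cv=10, scoring="r2").mean())
```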
Chris Westbury and Daniel King (2024). A Constant Error, Revisited: A New Explanation of the Halo Effect. Cognitive Science, 48(12). https://doi.org/10.1111/cogs.70022
Masato Nakamura, Shota Momma, Hiromu Sakai, Colin Phillips
Comprehenders generate expectations about upcoming lexical items in language processing using various types of contextual information. However, a number of studies have shown that argument roles do not impact neural and behavioral prediction measures. Despite these robust findings, some prior studies have suggested that lexical prediction might be sensitive to argument roles in production tasks such as the cloze task or in comprehension tasks when additional time is available for prediction. This study demonstrates that both the task and additional time for prediction independently influence lexical prediction using argument roles, via evidence from closely matched electroencephalogram (EEG) and speeded cloze experiments. In order to investigate the timing effect, our EEG experiment used maximally simple Japanese stimuli such as Bee-nom/acc sting, and it manipulated the time for prediction by changing the temporal interval between the context noun and the target verb without adding any further linguistic content. In order to investigate the task effect, we conducted a speeded cloze study that was matched with our EEG study both in terms of stimuli and the time available for prediction. We found that both the EEG study with additional time for prediction and the speeded cloze study with matched timing showed clear sensitivity to argument roles, while the EEG conditions with less time for prediction replicated the standard pattern of argument role insensitivity. Based on these findings, we propose that lexical prediction is initially insensitive to argument roles but a monitoring mechanism serially inhibits role-inappropriate candidates. This monitoring process operates quickly in production tasks, where it is important to quickly select a single candidate to produce, whereas it may operate more slowly in comprehension tasks, where multiple candidates can be maintained until a continuation is perceived. Computational simulations demonstrate that this mechanism can successfully explain the task and timing effects observed in our experiments.
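The proposed mechanism, initial role-insensitive lexical activation followed by a slower monitoring process that inhibits role-inappropriate candidates, can be caricatured in a few lines. This toy sketch is not the simulation reported in the paper; all association strengths and rate constants are illustrative assumptions.

```python
import numpy as np

# Two predicted verbs given a context noun: association strengths and role fit are made-up numbers.
candidates = {
    "associated_but_role_inappropriate": {"assoc": 0.9, "role_ok": False},
    "less_associated_but_role_appropriate": {"assoc": 0.6, "role_ok": True},
}

def prediction_strengths(time_available, inhibition_rate=1.5):
    """Association-driven activation, with role-inappropriate candidates gradually inhibited by monitoring."""
    strengths = {}
    for word, props in candidates.items():
        activation = props["assoc"]
        if not props["role_ok"]:
            activation *= np.exp(-inhibition_rate * time_available)   # serial monitoring acts over time
        strengths[word] = activation
    return strengths

print(prediction_strengths(0.1))   # little time: the role-inappropriate candidate still dominates
print(prediction_strengths(1.0))   # more time (or a production task): the role-appropriate candidate wins
```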
Masato Nakamura, Shota Momma, Hiromu Sakai, and Colin Phillips (2024). Task and Timing Effects in Argument Role Sensitivity: Evidence From Production, EEG, and Computational Modeling. Cognitive Science, 48(12). https://doi.org/10.1111/cogs.70023
Iconicity is a relationship of resemblance between the form and meaning of a sign. Compelling evidence from diverse areas of the cognitive sciences suggests that iconicity plays a pivotal role in the processing, memory, learning, and evolution of both spoken and signed language, indicating that iconicity is a general property of language. However, the language-specific aspect of iconicity, illustrated by the fact that the meanings of ideophones in an unfamiliar language are hard to guess (e.g., shigeshige 'staring at something' in Japanese), remains to be fully investigated. In the present study, native speakers of Japanese and English rated the iconicity and familiarity of Japanese ideophones (e.g., gatagata 'rattling', butsubutsu 'murmuring') and their English equivalents (e.g., rattle, murmur). Two main findings emerged: (1) individuals generally perceived their native language as more iconic than their non-native language, replicating previous findings in signed language, and (2) the familiarity of words in their native language boosted their perceived iconicity. These findings shed light on the language-specific, subjective, and acquired nature of iconicity.
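One plausible way to test the reported pattern, that nativeness and familiarity jointly predict perceived iconicity, is a mixed-effects regression with random intercepts per rater. This is an assumed analysis for illustration, not the authors' code, and the column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant x word; columns are hypothetical.
ratings = pd.read_csv("iconicity_ratings.csv")   # participant, word, native (0/1), familiarity, iconicity
model = smf.mixedlm("iconicity ~ native * familiarity", ratings,
                    groups=ratings["participant"]).fit()
print(model.summary())
```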
Hinano Iida and Kimi Akita (2024). Iconicity Emerges From Language Experience: Evidence From Japanese Ideophones and Their English Equivalents. Cognitive Science, 48(12), e70031. https://doi.org/10.1111/cogs.70031
People are generally more accurate at categorizing objects at the basic level (e.g., dog) than at more general, superordinate categories (e.g., animal). Recent research has suggested that this basic-level advantage emerges from the linguistic-distributional and sensorimotor relationship between a category concept and object concept, but the proposed mechanisms have not been subject to a formal computational test. In this paper, we present a computational model of category verification that allows linguistic distributional information and sensorimotor experience to interact in a grounded implementation of a full-size adult conceptual system. In simulations across multiple datasets, we demonstrate that the model performs the task of category verification at a level comparable to human participants, and, critically, that its operation naturally gives rise to the basic-level-advantage phenomenon. That is, concepts are easier to categorize when there is a high degree of overlap in sensorimotor experience and/or linguistic distributional knowledge between category and member concepts, and the basic-level advantage emerges as an overall behavioral artifact of this linguistic and sensorimotor overlap. Findings support the linguistic-sensorimotor preparation account of the basic-level advantage and, more broadly, linguistic-sensorimotor theories of the conceptual system.
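The central idea, that verification is driven by the overlap between member and category concepts in both a linguistic-distributional space and a sensorimotor space, can be sketched with a toy overlap score. This is not the published model; the vectors, weights, and the dog/animal example are illustrative assumptions.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verification_strength(member, category, w_linguistic=0.5, w_sensorimotor=0.5):
    """Weighted overlap between a member concept and a category concept across the two spaces."""
    ling = cosine(member["linguistic"], category["linguistic"])
    sens = cosine(member["sensorimotor"], category["sensorimotor"])
    return w_linguistic * ling + w_sensorimotor * sens

rng = np.random.default_rng(0)

def concept(base, shared):
    """A concept vector sharing a proportion `shared` of its signal with `base` in both spaces."""
    return {space: shared * base[space] + (1 - shared) * rng.normal(size=300)
            for space in ("linguistic", "sensorimotor")}

dog = {"linguistic": rng.normal(size=300), "sensorimotor": rng.normal(size=300)}
animal = concept(dog, 0.5)     # superordinate: only partial overlap with "dog"
exemplar = concept(dog, 0.8)   # a particular dog: strong overlap with its basic-level concept

print(verification_strength(exemplar, dog))      # high overlap: easy basic-level verification
print(verification_strength(exemplar, animal))   # lower overlap: correct but harder superordinate verification
```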
Cai Wingfield, Rens van Hoef, and Louise Connell (2024). A Linguistic-Sensorimotor Model of the Basic-Level Advantage in Category Verification. Cognitive Science, 48(12), e70025. https://doi.org/10.1111/cogs.70025
Muhammad Umair, Julia B. Mertens, Lena Warnke, Jan P. de Ruiter
Transformer-based Large Language Models (LLMs) have recently increased in popularity, in part due to their impressive performance on a number of language tasks. While LLMs can produce human-like writing, the extent to which these models can learn to predict spoken language in natural interaction remains unclear. This is a nontrivial question, as spoken and written language differ in syntax, pragmatics, and norms that interlocutors follow. Previous work suggests that while LLMs may develop an understanding of linguistic rules based on statistical regularities, they fail to acquire the knowledge required for language use. This implies that LLMs may not learn the normative structure underlying interactive spoken language, but may instead only model superficial regularities in speech. In this paper, we aim to evaluate LLMs as models of spoken dialogue. Specifically, we investigate whether LLMs can learn that the identity of a speaker in spoken dialogue influences what is likely to be said. To answer this question, we first fine-tuned two variants of a specific LLM (GPT-2) on transcripts of natural spoken dialogue in English. Then, we used these models to compute surprisal values for two-turn sequences with the same first-turn but different second-turn speakers and compared the output to human behavioral data. While the predictability of words in all fine-tuned models was influenced by speaker identity information, the models did not replicate humans' use of this information. Our findings suggest that although LLMs may learn to generate text conforming to normative linguistic structure, they do not (yet) faithfully replicate human behavior in natural conversation.
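The surprisal comparison at the heart of the design, the same first turn continued by either the same or a different speaker, can be sketched with an off-the-shelf GPT-2 from Hugging Face transformers standing in for the fine-tuned models. The speaker-label transcript format below is a hypothetical illustration, not the authors' pipeline.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")       # the study fine-tuned GPT-2 on dialogue transcripts
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def continuation_surprisal(context: str, continuation: str) -> float:
    """Summed surprisal (in bits) of the continuation tokens given the context."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    cont_ids = tokenizer(continuation, return_tensors="pt").input_ids
    ids = torch.cat([ctx_ids, cont_ids], dim=1)
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits, dim=-1)
    total = 0.0
    for pos in range(ctx_ids.size(1), ids.size(1)):
        total -= log_probs[0, pos - 1, ids[0, pos]].item() / math.log(2)
    return total

# Same first turn, continued either by the other speaker or by the same speaker.
first_turn = "A: Are you coming to the party tonight?\n"
print(continuation_surprisal(first_turn + "B:", " Yes, I'll be there."))
print(continuation_surprisal(first_turn + "A:", " Yes, I'll be there."))
```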
Muhammad Umair, Julia B. Mertens, Lena Warnke, and Jan P. de Ruiter (2024). Can Language Models Trained on Written Monologue Learn to Predict Spoken Dialogue? Cognitive Science, 48(11). https://doi.org/10.1111/cogs.70013
Alexander Weigard, Takakuni Suzuki, Lena J. Skalaban, May Conley, Alexandra O. Cohen, Hugh Garavan, Mary M. Heitzeg, B. J. Casey, Chandra Sripada, Andrew Heathcote
Recent studies using the diffusion decision model find that performance across many cognitive control tasks can be largely attributed to a task-general efficiency of evidence accumulation (EEA) factor that reflects individuals’ ability to selectively gather evidence relevant to task goals. However, estimates of EEA from an n-back “conflict recognition” paradigm in the Adolescent Brain Cognitive DevelopmentSM (ABCD) Study, a large, diverse sample of youth, appear to contradict these findings. EEA estimates from “lure” trials—which present stimuli that are familiar (i.e., presented previously) but do not meet formal criteria for being a target—show inconsistent relations with EEA estimates from other trials and display atypical v-shaped bivariate distributions, suggesting many individuals are responding based largely on stimulus familiarity rather than goal-relevant stimulus features. We present a new formal model of evidence integration in conflict recognition tasks that distinguishes individuals’ EEA for goal-relevant evidence from their use of goal-irrelevant familiarity. We then investigate developmental, cognitive, and clinical correlates of these novel parameters. Parameters for EEA and goal-irrelevant familiarity-based processing showed strong correlations across levels of n-back load, suggesting they are task-general dimensions that influence individuals’ performance regardless of working memory demands. Only EEA showed large, robust developmental differences in the ABCD sample and an independent age-diverse sample. EEA also exhibited higher test-retest reliability and uniquely meaningful associations with clinically relevant dimensions. These findings establish a principled modeling framework for characterizing conflict recognition mechanisms and have several broader implications for research on individual and developmental differences in cognitive control.
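For readers unfamiliar with the diffusion decision model, a minimal simulation shows why EEA (here treated as the drift rate toward the correct boundary) jointly shapes speed and accuracy. This sketch is not the authors' model code; all parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_trial(drift, boundary=1.0, noise=1.0, dt=0.001, non_decision=0.3, max_time=3.0, rng=None):
    """One diffusion trial: evidence drifts toward +boundary (correct) or -boundary (error)."""
    rng = rng if rng is not None else np.random.default_rng()
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary and t < max_time:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t + non_decision, evidence >= boundary    # (response time in seconds, correct?)

rng = np.random.default_rng(0)
for label, drift in [("high EEA", 2.0), ("low EEA", 0.5)]:
    trials = [simulate_trial(drift, rng=rng) for _ in range(500)]
    rts, accuracy = zip(*trials)
    print(label, "mean RT:", round(float(np.mean(rts)), 3), "accuracy:", round(float(np.mean(accuracy)), 3))
```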
Alexander Weigard, Takakuni Suzuki, Lena J. Skalaban, May Conley, Alexandra O. Cohen, Hugh Garavan, Mary M. Heitzeg, B. J. Casey, Chandra Sripada, and Andrew Heathcote (2024). Dissociable Contributions of Goal-Relevant Evidence and Goal-Irrelevant Familiarity to Individual and Developmental Differences in Conflict Recognition. Cognitive Science, 48(11). https://doi.org/10.1111/cogs.70019
Face recognition is adapted to achieve the goals of social interaction, which rely on further processing of the semantic information of faces, beyond visual computations. Here, we explored the semantic content of face representations, apart from their visual component, and tested its relation to face recognition performance. Specifically, we propose that enhanced visual or semantic coding could underlie the advantage of familiar over unfamiliar face recognition, as well as the superior recognition of skilled face recognizers. We asked participants to freely describe familiar/unfamiliar faces using words or phrases and converted these descriptions into semantic vectors. Face semantics were transformed into quantifiable face vectors by aggregating these word/phrase vectors. We also extracted visual features from a deep convolutional neural network to obtain the visual representation of familiar/unfamiliar faces. Semantic and visual representations were used to predict the perceptual representation generated from a behavioral rating task, separately in different groups (bad/good face recognizers in familiar-face/unfamiliar-face conditions). Comparisons revealed that although long-term memory facilitated visual feature extraction for familiar faces compared to unfamiliar faces, good recognizers compensated for this disparity by incorporating more semantic information for unfamiliar faces, a strategy not observed in bad recognizers. This study highlights the significance of semantics in recognizing unfamiliar faces.
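The representational logic of the study can be sketched as follows: aggregate word vectors from free descriptions into a semantic vector per face, then relate the resulting semantic dissimilarities to behavioral (perceptual) dissimilarities. The use of word2vec, the file names, and the rank-correlation comparison are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
import pandas as pd
from gensim.models import KeyedVectors
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

vectors = KeyedVectors.load_word2vec_format("word_vectors.bin", binary=True)
descriptions = pd.read_csv("face_descriptions.csv")        # columns: face_id, description

def semantic_vector(text):
    """Average the vectors of all in-vocabulary words in a free description."""
    words = [w for w in text.lower().split() if w in vectors]
    return np.mean([vectors[w] for w in words], axis=0)

face_texts = descriptions.groupby("face_id")["description"].apply(" ".join)
semantic = np.vstack([semantic_vector(t) for t in face_texts])

# Behavioral (perceptual) dissimilarities between the same faces, in the same order (condensed form).
perceptual_rdm = np.load("perceptual_rdm.npy")
semantic_rdm = pdist(semantic, metric="cosine")
print(spearmanr(semantic_rdm, perceptual_rdm))
```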
Tong Jiang and Guomei Zhou (2024). Semantic Content in Face Representation: Essential for Proficient Recognition of Unfamiliar Faces by Good Recognizers. Cognitive Science, 48(11). https://doi.org/10.1111/cogs.70020
Josué García-Arch, Solenn Friedrich, Xiongbo Wu, David Cucurell, Lluís Fuentemilla
Our self-concept is constantly faced with self-relevant information. Prevailing research suggests that information's valence plays a central role in shaping our self-views. However, the need for stability within the self-concept structure and the inherent alignment of positive feedback with the pre-existing self-views of healthy individuals might mask valence and congruence effects. In this study (N = 30, undergraduates), we orthogonalized feedback valence and self-congruence effects to examine the behavioral and electrophysiological signatures of self-relevant feedback processing and self-concept updating. We found that participants had a preference for integrating self-congruent and dismissing self-incongruent feedback, regardless of its valence. Consistently, electroencephalography results revealed that feedback congruence, but not feedback valence, is rapidly detected during early processing stages. Our findings diverge from the accepted notion that self-concept updating is based on the selective incorporation of positive information. These findings offer novel insights into self-concept dynamics, with implications for the understanding of psychopathological conditions.
Josué García-Arch, Solenn Friedrich, Xiongbo Wu, David Cucurell, and Lluís Fuentemilla (2024). Beyond the Positivity Bias: The Processing and Integration of Self-Relevant Feedback Is Driven by Its Alignment With Pre-Existing Self-Views. Cognitive Science, 48(11). https://doi.org/10.1111/cogs.70017
Many consider the world to be morally better today than it was in the past and expect moral improvement to continue. How do people explain what drives this change? In this paper, we identify two ways people might think about how moral progress occurs: that it is driven by human action (i.e., if people did not actively work to make the world better, moral progress would not occur) or that it is driven by an unspecified mechanism (i.e., that our world is destined to morally improve, but without specifying a role for human action). In Study 1 (N = 147), we find that those who more strongly believe that the mechanism of moral progress is human action are more likely to believe their own intervention is warranted to correct a moral setback. In Study 2 (N = 145), we find that this translates to intended action: those who more strongly believe moral progress is driven by human action report that they would donate more money to correct a moral setback. In Study 3 (N = 297), participants generate their own explanations for why moral progress occurs. We find that participants’ donation intentions are predicted by whether their explanations state that human action drives moral progress. Together, these studies suggest that beliefs about the mechanisms of moral progress have important implications for engaging in social action.
Casey Lewry, Sana Asifriyaz, and Tania Lombrozo (2024). Lay Theories of Moral Progress. Cognitive Science, 48(11). https://doi.org/10.1111/cogs.70018