
Latest Publications from the Journal of Nonverbal Behavior

Intuitive Thinking is Associated with Stronger Belief in Physiognomy and Confidence in the Accuracy of Facial Impressions.
IF 1.7 CAS Tier 3 (Psychology) Q4 PSYCHOLOGY, SOCIAL Pub Date: 2025-01-01 Epub Date: 2025-10-04 DOI: 10.1007/s10919-025-00497-w
Bastian Jaeger, Anthony M Evans, Mariëlle Stel, Ilja van Beest

Physiognomy, the idea that a person's character is reflected in their facial features, has a long history in scholarly thought. Although now widely regarded as pseudoscience in academic circles, recent work suggests that laypeople hold physiognomic beliefs and that belief endorsement is associated with support for facial profiling technology and other outcomes. Here, we build on previous work and investigate who believes in physiognomy. In four studies (three preregistered), we (1) assess the prevalence of physiognomic beliefs across different sociodemographic groups, and (2) investigate its psychological correlates. In a large, representative sample of the Dutch population (Study 1, n = 2624), about 50% of participants at least somewhat endorsed physiognomic beliefs. Endorsement of physiognomic beliefs varied little as a function of gender, age, education, and income. Across different measures of thinking styles and other lay beliefs, we found that physiognomic beliefs were most strongly related to how much people trust their intuitions, an association that emerged consistently with British (Study 2, n = 224), Nigerian (Study 3, n = 147), and Dutch participants (Study 4, n = 388). Participants who scored higher on faith in intuition were also more confident in the accuracy of their face-based trustworthiness impressions. In sum, the present studies suggest that lay beliefs in physiognomy are (a) common, (b) similarly endorsed across various socio-demographic groups, and (c) associated with an intuitive thinking style.

Supplementary information: The online version contains supplementary material available at 10.1007/s10919-025-00497-w.
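The study's two headline quantities, the prevalence of endorsement and its correlation with faith in intuition, are simple to compute. A minimal Python sketch on synthetic data; the scores, the 7-point scale, and the "at least somewhat" cutoff at 5 are all assumptions for illustration, not the authors' materials:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2624                                      # Study 1 sample size
faith = rng.normal(size=n)                    # hypothetical faith-in-intuition scores
# hypothetical 7-point endorsement item, nudged upward by faith in intuition
belief = np.clip(np.round(4 + faith + rng.normal(size=n)), 1, 7)

prevalence = float((belief >= 5).mean())      # share at least somewhat endorsing
r = float(np.corrcoef(faith, belief)[0, 1])   # the key correlate in Studies 2-4
```

With real survey data, `belief` would come from a validated physiognomic-belief scale rather than a single simulated item.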

Citations: 0
Beyond Words: Speech Coordination Linked to Personality and Appraisals.
IF 1.2 CAS Tier 3 (Psychology) Q4 PSYCHOLOGY, SOCIAL Pub Date: 2025-01-01 Epub Date: 2025-03-08 DOI: 10.1007/s10919-025-00482-3
Nicol A Arellano-Véliz, Ramón D Castillo, Bertus F Jeronimus, E Saskia Kunnen, Ralf F A Cox

We studied how personality differences and conversation topics predict interpersonal speech coordination, leading/following dynamics, and nonverbal interactional dominance in dyadic conversations. In a laboratory, 100 undergraduate students (50 same-gender dyads) had a 15-min conversation on three topics (introduction/self-disclosure/argumentation). Their speech coordination and turn-taking (speech/silence) dynamics were assessed through nonlinear time-series analyses: Cross-Recurrence Quantification Analysis (CRQA), Diagonal Cross-Recurrence Profiles (DCRP), and Anisotropic CRQA. From the time series, we extracted five variables to operationalize speech coordination (global and at lag zero), leading/following dynamics, and asymmetries in the interacting partners' nonverbal interactional dominance. Interaction appraisals were also assessed. Associations between the personality traits Extraversion and Agreeableness, speech coordination, and nonverbal interactional dominance were tested using mixed-effects models. Speech coordination and nonverbal interactional dominance differed across conversational topics and peaked during argumentative conversations. Extraversion was associated with increased speech coordination and nonverbal interactional dominance, especially during the argumentative conversation. During the self-disclosure conversation, Extraversion concordance was associated with more symmetry in turn-taking dynamics. Speech coordination was generally associated with positive post-conversational appraisals, such as wanting to meet again or liking the conversation partner, especially in extraverted individuals, whereas introverts seemed to place less value on swift dynamics. High Agreeableness predicted less speech coordination during argumentative conversations, and increased speech coordination (at lag zero) predicted reduced perceived naturalness in agreeable individuals. This may suggest a trade-off between maintaining swift speech dynamics and the natural flow of conversation for individuals high in Agreeableness.

Supplementary information: The online version contains supplementary material available at 10.1007/s10919-025-00482-3.
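Cross-recurrence methods like those named above treat the two speakers' speech/silence series as trajectories and ask when they occupy the same state. A toy numpy sketch; the binary series and the simple diagonal profile are illustrative stand-ins for the full CRQA/DCRP toolchain, not the authors' pipeline:

```python
import numpy as np

def cross_recurrence(a, b):
    """Cross-recurrence matrix of two binary speech(1)/silence(0) series:
    CR[i, j] = 1 when speaker A's state at time i matches B's at time j."""
    return (a[:, None] == b[None, :]).astype(int)

def diagonal_profile(cr, max_lag):
    """Mean recurrence along each diagonal (a DCRP-style profile);
    a peak at a positive lag suggests A's behavior is echoed later by B."""
    return {lag: float(np.diagonal(cr, offset=lag).mean())
            for lag in range(-max_lag, max_lag + 1)}

rng = np.random.default_rng(0)
a = (rng.random(200) < 0.6).astype(int)   # speaker A talks ~60% of the time
b = np.roll(a, 3)                         # speaker B echoes A three steps later
cr = cross_recurrence(a, b)
rec_rate = cr.mean()                      # global recurrence rate (%REC)
profile = diagonal_profile(cr, max_lag=10)
```

Because B copies A with a three-step delay here, the diagonal profile peaks at lag +3, which is how leading/following asymmetries are read off a DCRP.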

Citations: 0
Elevated Facial Behavior Variability During Emotions Contributes to Better Functional Communication in Dyslexia.
IF 1.7 CAS Tier 3 (Psychology) Q4 PSYCHOLOGY, SOCIAL Pub Date: 2025-01-01 Epub Date: 2025-07-30 DOI: 10.1007/s10919-025-00490-3
Amie Wallman-Jones, Eleanor R Palser, Fate Noohi, Belinda Y Zhang, Christina R Veziris, Amanda K Gerenza, Alexis I Martinez-Arroyo, Marni Shabash, Ashlin R K Roy, Sarah R Holley, Maria Luisa Gorno-Tempini, Virginia E Sturm

Dyslexia is a neurodevelopmental condition characterized by reading difficulties, yet there is growing evidence for coinciding social and emotional strengths. In our previous work, we found children with dyslexia displayed greater emotional facial behavior to affective stimuli than their well-reading peers, an enhancement that related to better social skills. Traditional measures provide static "snapshots" of emotional facial behavior but overlook important dynamic information about the face's movements that may confer interpersonal advantages. Here, we examined whether variability in emotional facial behavior was heightened in children with dyslexia and associated with social communication benefits. We coded the second-by-second intensities of ten emotional facial behaviors in 54 children (ages 7-14) with (n = 33) and without (n = 21) dyslexia while they watched five emotion-inducing film clips. For each trial, we calculated two facial behavior variability scores: a within-emotion variability score (the second-by-second intensity changes within each category of emotional behavior) and a between-emotions variability score (the total number of changes between categories of emotional behavior). Parents also reported on children's real-world communication skills. Linear mixed-effects models (controlling for age, sex, and total facial behavior) revealed that children with dyslexia had higher within-emotion facial behavior variability but not higher between-emotions facial behavior variability than those without dyslexia. Across the sample, greater total within-emotion facial behavior variability correlated with higher parent-reported functional communication, the ability to express ideas in ways that others can easily understand. These findings suggest nuanced emotional facial behavior dynamics contribute to social strengths in dyslexia.

Supplementary information: The online version contains supplementary material available at 10.1007/s10919-025-00490-3.
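The two trial-level scores described above can be sketched in a few lines of numpy. The exact operationalization here (mean absolute second-to-second change within each category, and switches in the dominant category) is an assumption patterned on the description, not the authors' coding scheme:

```python
import numpy as np

def within_emotion_variability(intensity):
    """Mean absolute second-to-second intensity change per category, summed.
    `intensity` is a (seconds x emotion categories) array of coded intensities."""
    return float(np.abs(np.diff(intensity, axis=0)).mean(axis=0).sum())

def between_emotions_variability(intensity):
    """Number of switches in the dominant emotion category across seconds."""
    dominant = intensity.argmax(axis=1)
    return int((np.diff(dominant) != 0).sum())

# toy coding: six seconds, three emotion categories
trial = np.array([[2, 0, 0],
                  [3, 0, 0],
                  [0, 1, 0],
                  [0, 2, 0],
                  [0, 0, 1],
                  [0, 0, 1]], dtype=float)
within = within_emotion_variability(trial)     # intensity churn within categories
between = between_emotions_variability(trial)  # category-to-category switches
```

The toy trial drifts through all three categories, so it scores two between-emotions switches while the within-emotion score tracks the size of the second-by-second intensity changes.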

Citations: 0
Testing the Ingroup Advantage in Emotion Perception from Dynamic Posed and Spontaneous Expressions.
IF 1.7 CAS Tier 3 (Psychology) Q4 PSYCHOLOGY, SOCIAL Pub Date: 2025-01-01 Epub Date: 2025-08-22 DOI: 10.1007/s10919-025-00492-1
Yong-Qi Cong, Lidya Yurdum, Agneta Fischer, Disa Sauter

The ingroup advantage refers to the phenomenon of emotions being more accurately recognised when expressors and perceivers share the same cultural background. Though well-documented in studies using posed facial expressions, whether the ingroup advantage generalizes to spontaneous expressions remains unclear. This pre-registered study examined cross-cultural emotion recognition using dynamic posed and spontaneous expressions in a balanced design. Perceivers from the Netherlands (N = 762) and China (N = 738) judged emotions from facial expressions produced by Dutch and Chinese expressors in a forced-choice emotion recognition task. Contrary to our hypothesis, we did not find a mutual ingroup advantage. Instead, we found a Decoder and an Encoder effect for both posed and spontaneous expressions. Specifically, Dutch Perceivers outperformed Chinese Perceivers in recognising both Dutch and Chinese emotional expressions (the Decoder effect), while Chinese emotional expressions were better recognised than Dutch expressions by both Chinese and Dutch Perceivers (the Encoder effect). Bayesian analyses confirmed robust evidence for these effects. These findings challenge the robustness of the ingroup advantage and highlight the importance of using ecologically valid stimuli in the study of nonverbal emotional communication.

Supplementary information: The online version contains supplementary material available at 10.1007/s10919-025-00492-1.
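The Decoder and Encoder effects reported above are marginal accuracies: collapse trials by perceiver culture or by expressor culture, respectively, and compare the means (an ingroup advantage would instead show up as a perceiver-by-expressor interaction). A small Python sketch with hypothetical trial tuples:

```python
from collections import defaultdict

def group_accuracy(trials, key_index):
    """Mean recognition accuracy grouped by perceiver (0) or expressor (1)."""
    totals, hits = defaultdict(int), defaultdict(int)
    for trial in trials:
        group = trial[key_index]
        totals[group] += 1
        hits[group] += trial[2]
    return {g: hits[g] / totals[g] for g in totals}

# hypothetical trials: (perceiver culture, expressor culture, correct 0/1)
trials = [
    ("NL", "NL", 1), ("NL", "CN", 1), ("NL", "CN", 1), ("NL", "NL", 0),
    ("CN", "NL", 0), ("CN", "CN", 1), ("CN", "NL", 0), ("CN", "CN", 1),
]
decoder = group_accuracy(trials, 0)  # Decoder effect: accuracy by perceiver
encoder = group_accuracy(trials, 1)  # Encoder effect: accuracy by expressor
```

In this toy data the Dutch perceivers are more accurate overall and the Chinese expressions are recognised better overall, mirroring the pattern the abstract describes.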

Citations: 0
Machine Learning Predicts Accuracy in Eyewitnesses’ Voices
IF 2.1 CAS Tier 3 (Psychology) Q4 PSYCHOLOGY, SOCIAL Pub Date: 2024-09-09 DOI: 10.1007/s10919-024-00474-9
Philip U. Gustafsson, Tim Lachmann, Petri Laukka

An important task in criminal justice is to evaluate the accuracy of eyewitness testimony. In this study, we examined if machine learning could be used to detect accuracy. Specifically, we examined if support vector machines (SVMs) could accurately classify testimony statements as correct or incorrect based purely on the nonverbal aspects of the voice. We analyzed 3,337 statements (76.61% accurate) from 51 eyewitness testimonies across 94 acoustic variables. We also examined the relative importance of each of the acoustic variables, using Lasso regression. Results showed that the machine learning algorithms were able to predict accuracy 20 to 40% above chance level (where chance corresponds to AUC = 0.50). The most important predictors included acoustic variables related to the amplitude (loudness) of speech and the duration of pauses, with higher amplitude predicting correct recall and longer pauses predicting incorrect recall. Taken together, we find that machine learning methods are capable of predicting whether eyewitness testimonies are correct or incorrect with above-chance accuracy, comparable to human performance, but without detrimental human biases. This offers a proof-of-concept for machine learning in evaluations of eyewitness accuracy, and opens up new avenues of research that we hope might improve social justice.
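As a rough sketch of the two-step analysis (an SVM classifier scored by cross-validated AUC, plus a Lasso fit to rank predictors), here is a scikit-learn example on synthetic stand-ins for the 94 acoustic variables. The data, the amplitude-driven labels, and all model settings are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_statements, n_features = 300, 94            # mirrors the 94 acoustic variables
X = rng.normal(size=(n_statements, n_features))
# stand-in labels: 1 = correct recall, driven here by feature 0 (think "amplitude")
y = (X[:, 0] + rng.normal(size=n_statements) > 0).astype(int)

# SVM classifier, evaluated with cross-validated AUC (chance = 0.50)
svm = make_pipeline(StandardScaler(), SVC(kernel="linear"))
auc = cross_val_score(svm, X, y, cv=5, scoring="roc_auc").mean()

# Lasso shrinks uninformative coefficients to zero, so the surviving
# weights provide a sparse ranking of the acoustic predictors
lasso = LassoCV(cv=5).fit(X, y)
top_predictors = np.argsort(np.abs(lasso.coef_))[::-1][:5]
```

With only one informative feature, the Lasso ranking should place that feature among its top coefficients, illustrating how the relative importance of acoustic variables can be read off a sparse fit.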

Citations: 0
The Expression of Vocal Emotions in Cognitively Healthy Adult Speakers: Impact of Emotion Category, Gender, and Age
IF 2.1 CAS Tier 3 (Psychology) Q4 PSYCHOLOGY, SOCIAL Pub Date: 2024-09-04 DOI: 10.1007/s10919-024-00472-x
Valérie Coulombe, Vincent Martel-Sauvageau, Laura Monetta

This study examines the ability to express distinct emotions of negative and positive valences through voice modulations (i.e., affective prosody production) and how the speaker’s gender and age influence this ability. A sample of 31 neurologically healthy adults (17 women and 14 men, aged 41–76) were asked to say “papa” with six emotional tones—sadness, anger, fear, pleasant surprise, joy, and awe—in response to affect-evoking scenarios. The speakers’ vocal expressions were recorded and then assessed by five expert raters and 30 naive listeners using an emotion recognition task. Results showed that negative emotions were expressed more accurately than positive ones, highlighting a valence effect. In addition, female speakers showed higher recognition rates for their expressions of vocal emotions than male speakers. Furthermore, aging was associated with a moderate decline in the accuracy of prosodic emotional expression. Despite generally lower recognition rates from naive listeners compared to expert raters, recognition rates for all emotions, with the exception of awe, were not statistically different between listener groups. In conclusion, cognitively healthy adults can convey discrete emotions through prosody, including distinct positive emotions, but there are significant differences depending on the emotion expressed and individual speaker characteristics. These results highlight the complexity of affective prosody production and contribute to the understanding of individual differences in nonverbal emotional expression.

Citations: 0
The Effect of Face Masks and Sunglasses on Emotion Perception over Two Years of the COVID-19 Pandemic
IF 2.1 CAS Tier 3 (Psychology) Q4 PSYCHOLOGY, SOCIAL Pub Date: 2024-08-18 DOI: 10.1007/s10919-024-00471-y
Xia Fang, Kerry Kawakami

Since the beginning of the COVID-19 pandemic in early 2020, face masks have become commonplace as a means of reducing the spread of the disease. Although recent research has shown that face masks impair emotion recognition, it is unclear how this impairment differs from other familiar types of face covering, such as sunglasses. In the present study, participants identified six affective expressions (anger, disgust, fear, surprise, sadness, and happiness) on faces wearing masks or sunglasses, and rated their confidence in these judgments, at four time points during the pandemic (June 2020, March 2021, September 2021, June 2022). They also provided judgements of emotion intensity and genuineness. Overall, emotion identification of faces with masks was less accurate and received lower ratings of confidence and emotion intensity than faces with sunglasses. Faces with sunglasses, in contrast, were rated as less genuine than faces with masks. Furthermore, this pattern for both masks and sunglasses remained stable across two years of the pandemic. This study provides new insights into the differential effects of face masks and sunglasses on emotion perception and highlights the importance of face coverings for emotion communication and social interactions.

The Digital Witness: Exploring Gestural Misinformation in Tele-Forensic Interviews with 5-8-Year-Old Children
IF 2.1 · CAS Zone 3 (Psychology) · Q4 Psychology, Social · Pub Date: 2024-08-06 · DOI: 10.1007/s10919-024-00470-z
Kirsty L. Johnstone, Chris Martin, Mark Blades

Child abuse is a major concern worldwide. While live-link interviews have been successful in legal and medical contexts, their potential for eyewitness interviews remains insufficiently studied, particularly in terms of non-verbal misinformation. This study explored tele-forensic interviewing (tele-FI), in which video-conferencing software such as Zoom or Skype is used to conduct forensic interviews, as an alternative to face-to-face interviews. Focus was given to the susceptibility of eyewitness memory to the gestural misinformation effect (GME), in which post-event information in the form of gesture can distort recall of a witnessed incident. Forty-seven children were recruited, ranging in age from 5 to 8 years old (M = 6 years 11 months). Comparisons were made to face-to-face conditions from prior published work by the authors (N = 63, M = 7 years 2 months) using the same methodology, video, and question sets. The results support the GME during tele-FI, with 1.23 misinformation details recorded on average and tele-FI showing a response pattern similar to that of face-to-face interviews. Accuracy was comparable between tele-FI (M = 16.21) and face-to-face interviews (M = 14.02), with a notable increase in the amount of relevant information provided in the tele-FI condition. The quality and quantity of recalled information increased significantly with developmental age. This study provides evidence for tele-FI as a viable alternative to face-to-face interviews and represents, to the best of our knowledge, the first exploration of the GME in tele-FI. Discussion focuses on the benefits of tele-FI and the implications for police interview guidelines.

Perceptions of Mate Poaching Predict Jealousy Towards Higher-Pitched Women’s Voices
IF 2.1 · CAS Zone 3 (Psychology) · Q4 Psychology, Social · Pub Date: 2024-07-10 · DOI: 10.1007/s10919-024-00469-6
Jillian J. M. O’Connor

Previous research has found that higher-pitched female voices elicit jealousy among women. However, it is unknown whether jealousy towards higher-pitched female voices is driven by perceptions of the rival’s mating strategy or by beliefs about the speaker’s attractiveness to one’s romantic partner. In addition, the degree to which higher-pitched female voices elicit jealousy could be associated with variation in trait jealousy among women listeners. Here, I manipulated women’s voices to be higher or lower in pitch, and tested whether variation in jealousy towards female voices was more strongly associated with perceptions of mate poaching, beliefs about the speaker’s attractiveness to listeners’ romantic partner, or with individual differences in trait jealousy. I replicated findings that higher voice pitch elicits more jealousy from women, which was positively associated with perceptions of mate poaching. I found no evidence of an association between trait jealousy and any voice-based perception. The findings suggest that perceptions of a target’s proclivity to mate poach better explain the jealousy-inducing nature of higher-pitched female voices than do beliefs about the speaker’s attractiveness to one’s romantic partner.

Atheists and Christians can be Discerned from their Faces
IF 2.1 · CAS Zone 3 (Psychology) · Q4 Psychology, Social · Pub Date: 2024-06-26 · DOI: 10.1007/s10919-024-00467-8
G. Shane Pitts, Nicholas O. Rule

Whereas research has documented how atheists are perceived, none has considered their perceptibility. Atheists must first be identified as atheists in order to experience the stigma associated with them (i.e., as distrusted, disliked, and widely maligned). Although atheism is considered a concealable aspect of one’s identity, substantial research has found that a variety of ostensibly concealable attributes about a person are indeed legible from small and subtle cues. We merged these lines of inquiry here by considering the perceptibility of religious and spiritual (dis)belief. Studies 1A-1B showed that atheists could be reliably discerned from Christians based on brief glimpses of 100 standardized male faces. Experiment 2 replicated these results using female faces. Experiments 3A-E then interrogated the facial features that support perceivers’ detection of atheism, showing that various parts of faces suffice for independently conveying atheism. Experiment 4 investigated and showed a potential mechanism for atheism detection: expressive suppression. Thus, across nine studies (N = 677), these data show robust evidence that atheists can be categorized from facial cues.
