
Journal of Nonverbal Behavior: Latest Publications

Machine Learning Predicts Accuracy in Eyewitnesses’ Voices
IF 2.1 | CAS Tier 3 (Psychology) | Q4 PSYCHOLOGY, SOCIAL | Pub Date: 2024-09-09 | DOI: 10.1007/s10919-024-00474-9
Philip U. Gustafsson, Tim Lachmann, Petri Laukka

An important task in criminal justice is to evaluate the accuracy of eyewitness testimony. In this study, we examined whether machine learning could be used to detect accuracy. Specifically, we examined whether support vector machines (SVMs) could accurately classify testimony statements as correct or incorrect based purely on the nonverbal aspects of the voice. We analyzed 3,337 statements (76.61% accurate) from 51 eyewitness testimonies along 94 acoustic variables. We also examined the relative importance of each of the acoustic variables using Lasso regression. Results showed that the machine learning algorithms were able to predict accuracy between 20% and 40% above chance level (where chance corresponds to AUC = 0.50). The most important predictors included acoustic variables related to the amplitude (loudness) of speech and the duration of pauses, with higher amplitude predicting correct recall and longer pauses predicting incorrect recall. Taken together, we find that machine learning methods can predict whether eyewitness testimonies are correct or incorrect with above-chance accuracy comparable to human performance, but without detrimental human biases. This offers a proof of concept for machine learning in evaluations of eyewitness accuracy and opens up new avenues of research that we hope might improve social justice.
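
The pipeline the abstract describes (an SVM classifier scored against chance-level AUC, plus Lasso regression for feature importance) maps onto standard tooling. The sketch below is a minimal illustration under assumed data, not the authors' code: `X` stands in for the 3,337 × 94 matrix of acoustic variables and `y` for the correct/incorrect labels.

```python
# Minimal sketch of the described analysis; data shapes are assumed, not the
# authors' actual dataset or code.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(3337, 94))   # placeholder: 94 acoustic variables per statement
y = rng.random(3337) < 0.7661     # placeholder labels: True = correct recall

# SVM classifier scored by AUC; chance level corresponds to AUC = 0.50.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
auc_scores = cross_val_score(svm, X, y, cv=5, scoring="roc_auc")
print(f"Mean AUC: {auc_scores.mean():.3f} (chance = 0.500)")

# Lasso regression for relative feature importance: coefficients shrunk to
# exactly zero drop out, and larger |coefficient| means greater importance.
lasso = make_pipeline(StandardScaler(), Lasso(alpha=0.01))
lasso.fit(X, y.astype(float))
coefs = lasso.named_steps["lasso"].coef_
print("Most important variable indices:", np.argsort(-np.abs(coefs))[:5])
```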

Citations: 0
The Expression of Vocal Emotions in Cognitively Healthy Adult Speakers: Impact of Emotion Category, Gender, and Age
IF 2.1 | CAS Tier 3 (Psychology) | Q4 PSYCHOLOGY, SOCIAL | Pub Date: 2024-09-04 | DOI: 10.1007/s10919-024-00472-x
Valérie Coulombe, Vincent Martel-Sauvageau, Laura Monetta

This study examines the ability to express distinct emotions of negative and positive valences through voice modulations (i.e., affective prosody production) and how the speaker’s gender and age influence this ability. A sample of 31 neurologically healthy adults (17 women and 14 men, aged 41–76) was asked to say “papa” with six emotional tones (sadness, anger, fear, pleasant surprise, joy, and awe) in response to affect-evoking scenarios. The speakers’ vocal expressions were recorded and then assessed by five expert raters and 30 naive listeners using an emotion recognition task. Results showed that negative emotions were expressed more accurately than positive ones, highlighting a valence effect. In addition, female speakers showed higher recognition rates for their expressions of vocal emotions than male speakers. Furthermore, aging was associated with a moderate decline in the accuracy of prosodic emotional expression. Despite generally lower recognition rates from naive listeners compared to expert raters, recognition rates for all emotions, with the exception of awe, were not statistically different between listener groups. In conclusion, cognitively healthy adults can convey discrete emotions through prosody, including distinct positive emotions, but there are significant differences depending on the emotion expressed and individual speaker characteristics. These results highlight the complexity of affective prosody production and contribute to the understanding of individual differences in nonverbal emotional expression.
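
As a concrete illustration of how recognition rates in such a forced-choice task are typically computed, the sketch below assumes a hypothetical long-format table of listener judgements; the column names and toy rows are invented, not the study's materials.

```python
# Sketch: computing per-emotion recognition rates from a forced-choice task.
# The long-format table, column names, and toy rows are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "intended":  ["sadness", "sadness", "joy",   "awe",    "anger", "joy"],
    "perceived": ["sadness", "fear",    "joy",   "joy",    "anger", "awe"],
    "listener":  ["expert",  "naive",   "naive", "expert", "naive", "expert"],
})

# Confusion matrix: rows = emotion the speaker intended, columns = emotion heard.
print(pd.crosstab(df["intended"], df["perceived"]))

# Recognition rate per listener group and emotion: the proportion of trials on
# which the perceived emotion matched the intended one.
df["correct"] = df["intended"] == df["perceived"]
print(df.groupby(["listener", "intended"])["correct"].mean())
```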

Citations: 0
The Effect of Face Masks and Sunglasses on Emotion Perception over Two Years of the COVID-19 Pandemic
IF 2.1 | CAS Tier 3 (Psychology) | Q4 PSYCHOLOGY, SOCIAL | Pub Date: 2024-08-18 | DOI: 10.1007/s10919-024-00471-y
Xia Fang, Kerry Kawakami

Since the beginning of the COVID-19 pandemic in early 2020, face masks have become commonplace as a means of reducing the spread of the disease. Although recent research has shown that face masks impair emotion recognition, it is unclear how this impairment differs from other familiar types of face covering, such as sunglasses. In the present study, participants identified six affective expressions (anger, disgust, fear, surprise, sadness, and happiness) on faces wearing masks or sunglasses and rated their confidence in these judgements, at four different time points during the pandemic (June 2020, March 2021, September 2021, June 2022). They also provided judgements of emotion intensity and genuineness. Overall, emotion identification of faces with masks was less accurate and received lower ratings of confidence and emotion intensity than that of faces with sunglasses. Faces with sunglasses, in contrast, were rated as less genuine than faces with masks. Furthermore, this pattern for both masks and sunglasses remained stable across two years of the pandemic. This study provides new insights into the differential effects of face masks and sunglasses on emotion perception and highlights the importance of face coverings for emotion communication and social interactions.

Citations: 0
The Digital Witness: Exploring Gestural Misinformation in Tele-Forensic Interviews with 5-8-Year-Old Children
IF 2.1 | CAS Tier 3 (Psychology) | Q4 PSYCHOLOGY, SOCIAL | Pub Date: 2024-08-06 | DOI: 10.1007/s10919-024-00470-z
Kirsty L. Johnstone, Chris Martin, Mark Blades

Child abuse is a major concern worldwide. While live-link interviews have been successful in legal and medical contexts, their potential for eyewitness interviews remains insufficiently studied, particularly in terms of non-verbal misinformation. This study explored tele-forensic interviewing (tele-FI), in which video-conferencing software such as Zoom or Skype is used to conduct forensic interviews, as an alternative to face-to-face interviews. Focus was given to the susceptibility of eyewitness memory to the gestural misinformation effect (GME), where post-event information in the form of gesture can distort recall of a witnessed incident. Forty-seven children were recruited, ranging in age from 5 to 8 years (M = 6 years 11 months). Comparisons were made to face-to-face conditions from prior published work by the authors (N = 63, M = 7 years 2 months) using the same methodology, video, and question sets. Results support the GME during tele-FI, with 1.23 misinformation details recorded on average, and tele-FI showed a similar response pattern to face-to-face interviews. Accuracy was comparable between tele-FI (M = 16.21) and face-to-face interviews (M = 14.02), with a notable increase in the amount of relevant information provided in the tele-FI condition. The quality and quantity of recalled information increased significantly with developmental age. This study provides evidence for tele-FI as a viable alternative to face-to-face interviews and represents, to the best of our knowledge, the first exploration of the GME in tele-FI. Discussion focuses on the benefits of tele-FI and the implications for police interview guidelines.

Citations: 0
Perceptions of mate poaching predict jealousy towards higher-pitched women’s voices
IF 2.1 | CAS Tier 3 (Psychology) | Q4 PSYCHOLOGY, SOCIAL | Pub Date: 2024-07-10 | DOI: 10.1007/s10919-024-00469-6
Jillian J. M. O’Connor

Previous research has found that higher-pitched female voices elicit jealousy among women. However, it is unknown whether jealousy towards higher-pitched female voices is driven by perceptions of the rival’s mating strategy or by beliefs about the speaker’s attractiveness to one’s romantic partner. In addition, the degree to which higher-pitched female voices elicit jealousy could be associated with variation in trait jealousy among women listeners. Here, I manipulated women’s voices to be higher or lower in pitch and tested whether variation in jealousy towards female voices was more strongly associated with perceptions of mate poaching, with beliefs about the speaker’s attractiveness to listeners’ romantic partners, or with individual differences in trait jealousy. I replicated the finding that higher voice pitch elicits more jealousy from women, and this jealousy was positively associated with perceptions of mate poaching. I found no evidence of an association between trait jealousy and any voice-based perception. The findings suggest that perceptions of a target’s proclivity to mate poach explain the jealousy-inducing nature of higher-pitched female voices better than beliefs about the speaker’s attractiveness to one’s romantic partner.
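
For illustration, raising or lowering the pitch of a recording is commonly done with resampling-based pitch shifting. The study's actual manipulation tool and shift size are not stated here, so the librosa-based sketch below is only an assumption about how such stimuli can be built (`voice.wav` is a hypothetical input file).

```python
# Sketch: creating higher- and lower-pitched versions of a recording.
# librosa and the +/-2 semitone step are assumptions, not the study's method.
import librosa
import soundfile as sf

y, sr = librosa.load("voice.wav", sr=None)  # hypothetical input file

higher = librosa.effects.pitch_shift(y, sr=sr, n_steps=2.0)   # raise by 2 semitones
lower = librosa.effects.pitch_shift(y, sr=sr, n_steps=-2.0)   # lower by 2 semitones

sf.write("voice_higher.wav", higher, sr)
sf.write("voice_lower.wav", lower, sr)
```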

Citations: 0
Atheists and Christians can be Discerned from their Faces
IF 2.1 | CAS Tier 3 (Psychology) | Q4 PSYCHOLOGY, SOCIAL | Pub Date: 2024-06-26 | DOI: 10.1007/s10919-024-00467-8
G. Shane Pitts, Nicholas O. Rule

Whereas research has documented how atheists are perceived, none has considered their perceptibility. Atheists must first be identified as atheists in order to experience the stigma associated with them (i.e., as distrusted, disliked, and widely maligned). Although atheism is considered a concealable aspect of one’s identity, substantial research has found that a variety of ostensibly concealable attributes of a person are indeed legible from small and subtle cues. We merged these lines of inquiry here by considering the perceptibility of religious and spiritual (dis)belief. Studies 1A-1B showed that atheists could be reliably discerned from Christians based on brief glimpses of 100 standardized male faces. Experiment 2 replicated these results using female faces. Experiments 3A-E then interrogated the facial features that support perceivers’ detection of atheism, showing that various parts of faces suffice for independently conveying atheism. Experiment 4 investigated and demonstrated a potential mechanism for atheism detection: expressive suppression. Thus, across nine studies (N = 677), these data provide robust evidence that atheists can be categorized from facial cues.
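
Categorization "from facial cues" is meaningful only if accuracy exceeds the 50% chance level of a two-alternative judgement. The abstract does not report the statistics used, but a minimal, conventional check is a one-sided binomial test, sketched below with hypothetical trial counts.

```python
# Sketch: one conventional test of two-alternative categorization accuracy
# against the 50% chance level; the trial counts below are hypothetical.
from scipy.stats import binomtest

n_trials = 100    # e.g., one judgement per standardized face
n_correct = 58    # hypothetical number of correct atheist/Christian judgements

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2f}, one-sided p = {result.pvalue:.4f}")
```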

Citations: 0
Impact of Hearing Loss and Auditory Rehabilitation on Dyads: A Microsocial Perspective
IF 2.1 | CAS Tier 3 (Psychology) | Q4 PSYCHOLOGY, SOCIAL | Pub Date: 2024-06-22 | DOI: 10.1007/s10919-024-00468-7
Christiane Völter, Kirsten Oberländer, Martin Brüne, Fabian T. Ramseyer

Hearing loss severely hampers verbal exchange and thus social interaction, which places a high burden on hearing-impaired individuals and their close partners. Until now, nonverbal interaction in hearing-impaired dyads has not been addressed as a relevant factor for well-being or the quality of social relationships. Nonverbal synchrony of head and body movement was analysed in N = 30 dyads of persons with hearing impairment (PHI) and their significant others (SO). In a 10-minute conversation before (T1) and 6 months after cochlear implantation (T2), Motion Energy Analysis (MEA) automatically quantified head and body movement. Self-report measures from both dyad members were used to assess aspects of quality of life and closeness in the partnership. After cochlear implantation, nonverbal synchrony showed a downward trend and was less distinct from pseudosynchrony. Higher synchrony was associated with worse hearing-related quality of life, shorter duration of hearing impairment, and less closeness in the relationship. This negative association was interpreted as an indication of the effort one has to make to cope with difficulties in a dyad's relationship. Endorsing a holistic approach to auditory rehabilitation, we propose the assessment of nonverbal synchrony as a suitable tool to detect subtle imbalances in the interpersonal relation between PHI and SO outside conscious control, and to provide cues for possible therapeutic strategies.
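
Motion Energy Analysis quantifies movement as the amount of pixel change between consecutive video frames within a region of interest such as the head. The sketch below illustrates that core idea only, assuming frames are already available as grayscale numpy arrays; it is not the published MEA software.

```python
# Sketch of the core idea behind Motion Energy Analysis: movement within a
# region of interest (ROI) is approximated by the number of pixels that change
# between consecutive frames. An illustration, not the MEA tool itself.
import numpy as np

def motion_energy(frames, roi, threshold=10):
    """frames: iterable of 2-D grayscale arrays; roi: (top, bottom, left, right).
    Returns one motion-energy value per frame-to-frame transition."""
    top, bottom, left, right = roi
    energies, prev = [], None
    for frame in frames:
        patch = frame[top:bottom, left:right].astype(np.int16)
        if prev is not None:
            # Count pixels whose intensity changed by more than the threshold.
            energies.append(int((np.abs(patch - prev) > threshold).sum()))
        prev = patch
    return np.array(energies)

# Hypothetical usage: a head ROI in 480x640 video, with random stand-in frames.
rng = np.random.default_rng(1)
frames = [rng.integers(0, 256, size=(480, 640), dtype=np.uint8) for _ in range(5)]
print(motion_energy(frames, roi=(0, 200, 200, 440)))
```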

Citations: 0
People Attribute a Range of Highly-Varied and Socially-Bound Meanings to Naturalistic Sad Facial Expressions
IF 2.1 | CAS Tier 3 (Psychology) | Q4 PSYCHOLOGY, SOCIAL | Pub Date: 2024-06-12 | DOI: 10.1007/s10919-024-00463-y
Sarah de la Harpe, Romina Palermo, Emily Brown, Nicolas Fay, A. Dawel
{"title":"People Attribute a Range of Highly-Varied and Socially-Bound Meanings to Naturalistic Sad Facial Expressions","authors":"Sarah de la Harpe, Romina Palermo, Emily Brown, Nicolas Fay, A. Dawel","doi":"10.1007/s10919-024-00463-y","DOIUrl":"https://doi.org/10.1007/s10919-024-00463-y","url":null,"abstract":"","PeriodicalId":47747,"journal":{"name":"Journal of Nonverbal Behavior","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141350223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Assessment of Movement Synchrony and Alliance in Problem-Focused and Solution-Focused Counseling
IF 2.1 | CAS Tier 3 (Psychology) | Q1 Psychology | Pub Date: 2024-06-01 | DOI: 10.1007/s10919-024-00466-9
Christian Hoffmann, Magdalene Gürtler, Johannes Fendel, Claas Lahmann, Stefan Schmidt

The present study investigated differences in movement synchrony and therapeutic alliance between solution-focused and problem-focused counseling. Thirty-four participants each attended two counseling sessions with different counselors, one with a solution focus and one with a problem focus, in randomized order. The sessions consisted of three consecutive parts: problem description, standardized intervention, and free intervention. Movement synchrony, including leading and pacing synchrony, was measured using Motion Energy Analysis (MEA) and windowed cross-lagged correlation (WCLC) based on video recordings of the sessions. The Helping Alliance Questionnaire (HAQ) was used to assess therapeutic alliance. Results showed that movement synchrony was significantly higher in solution-focused than in problem-focused counseling, driven by differences in the problem-description part. This difference may be explained by the counselors' allegiance to the solution-focused approach, as we observed more leading synchrony during the problem-description part of solution-focused sessions. There was no significant difference in therapeutic alliance between the two conditions. This study expands the understanding of counseling approaches in the field of movement synchrony and contributes valuable insights for practitioners and researchers alike.
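
Windowed cross-lagged correlation slides a window along two motion-energy time series and, within each window, correlates one series with time-shifted copies of the other; the sign of the peak lag indicates who is leading. The numpy sketch below illustrates the computation with assumed window, step, and lag parameters, not the study's exact settings.

```python
# Sketch of windowed cross-lagged correlation (WCLC) between two movement
# series; window, step, and lag sizes are assumed, not the study's settings.
import numpy as np

def wclc(a, b, window=125, step=30, max_lag=50):
    """Peak correlation per window between series a and time-shifted series b.
    A positive peak lag means a leads b; a negative lag means b leads a."""
    peaks = []
    for start in range(0, len(a) - window - 2 * max_lag, step):
        wa = a[start + max_lag : start + max_lag + window]
        best_r, best_lag = 0.0, 0
        for lag in range(-max_lag, max_lag + 1):
            wb = b[start + max_lag + lag : start + max_lag + lag + window]
            r = np.corrcoef(wa, wb)[0, 1]
            if abs(r) > abs(best_r):
                best_r, best_lag = r, lag
        peaks.append((best_r, best_lag))
    return peaks

# Toy check: b is a noisy copy of a delayed by 10 frames, so a should lead b
# and the peak lag should come out near +10.
rng = np.random.default_rng(2)
a = rng.normal(size=1000)
b = np.roll(a, 10) + 0.5 * rng.normal(size=1000)
for r, lag in wclc(a, b)[:3]:
    print(f"peak r = {r:+.2f} at lag {lag:+d}")
```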

Citations: 0
The Functions of Human Touch: An Integrative Review
IF 2.1 | CAS Tier 3 (Psychology) | Q1 Psychology | Pub Date: 2024-05-28 | DOI: 10.1007/s10919-024-00464-x
Supreet Saluja, Ilona Croy, Richard J. Stevenson

There appears to be no attempt to categorize the specific classes of behavior that the tactile system underpins. Awareness of how an organism uses touch in its environment informs understanding of the sense's versatility in non-verbal communication and tactile perception. This review categorizes the behavioral functions underpinned by the tactile sense using three sources of data: (1) animal data, to assess whether an identified function is conserved across species; (2) human capacity data, indicating whether the tactile sense can support a proposed function; and (3) human impairment data, documenting the impacts of impaired tactile functioning (e.g., reduced tactile sensitivity) for humans. From these data, three main functions pertinent to the tactile sense were identified: Ingestive Behavior; Environmental Hazard Detection and Management; and Social Communication. These functions are reviewed in detail, and future directions are discussed with a focus on social psychology, non-verbal behavior, and multisensory perception.

Citations: 0