Face and voice identity matching accuracy is not improved by multimodal identity information.

British Journal of Psychology · IF 3.2 · JCR Q1 (Psychology, Multidisciplinary) · CAS Tier 2 (Psychology) · Pub Date: 2024-12-17 · DOI: 10.1111/bjop.12757
Harriet M J Smith, Kay L Ritchie, Thom S Baguley, Nadine Lavan
Citations: 0

Abstract

Face and voice identity matching accuracy is not improved by multimodal identity information.

Identity verification from both faces and voices can be error-prone. Previous research has shown that faces and voices signal concordant information and cross-modal unfamiliar face-to-voice matching is possible, albeit often with low accuracy. In the current study, we ask whether performance on a face or voice identity matching task can be improved by using multimodal stimuli which add a second modality (voice or face). We find that overall accuracy is higher for face matching than for voice matching. However, contrary to predictions, presenting one unimodal and one multimodal stimulus within a matching task did not improve face or voice matching compared to presenting two unimodal stimuli. Additionally, we find that presenting two multimodal stimuli does not improve accuracy compared to presenting two unimodal face stimuli. Thus, multimodal information does not improve accuracy. However, intriguingly, we find that cross-modal face-voice matching accuracy predicts voice matching accuracy but not face matching accuracy. This suggests cross-modal information can nonetheless play a role in identity matching, and face and voice information combine to inform matching decisions. We discuss our findings in light of current models of person perception, and consider the implications for identity verification in security and forensic settings.

Source journal: British Journal of Psychology (PSYCHOLOGY, MULTIDISCIPLINARY)
CiteScore: 7.60 · Self-citation rate: 2.50% · Articles per year: 67
Journal description: The British Journal of Psychology publishes original research on all aspects of general psychology, including cognition; health and clinical psychology; and developmental, social and occupational psychology. For information on specific requirements, please view the Notes for Contributors. We attract a large number of international submissions each year, which make major contributions across the range of psychology.
Latest articles in this journal:
- Emotion ensemble judgement: Cognitive training for a positive perspective.
- Deliberate memory display can enhance conveyed value.
- Close encounters: Interpersonal proximity amplifies social appraisals.
- Why moral judgements change across variations of trolley-like problems.
- Working memory capacity and self-cues: Consistent benefits in children and adults.