Meaningful Communication but not Superficial Anthropomorphism Facilitates Human-Automation Trust Calibration: The Human-Automation Trust Expectation Model (HATEM).

IF 2.9 | CAS Region 3 (Psychology) | Q1 BEHAVIORAL SCIENCES | Human Factors | Pub Date: 2024-11-01 | Epub Date: 2023-12-02 | DOI: 10.1177/00187208231218156
Owen B J Carter, Shayne Loft, Troy A W Visser
Citations: 0

Abstract


Objective: The objective was to demonstrate that anthropomorphism needs to communicate contextually useful information to increase user confidence and accurately calibrate human trust in automation.

Background: Anthropomorphism is believed to improve human-automation trust, but supporting evidence remains equivocal. We test the Human-Automation Trust Expectation Model (HATEM), which predicts that improvements to trust calibration and confidence in accepted advice arising from anthropomorphism will be weak unless anthropomorphism aids naturalistic communication of contextually useful information that facilitates prediction of automation failures.

Method: Ninety-eight undergraduates used a submarine periscope simulator to classify ships, aided by the Ship Automated Modelling (SAM) system that was 50% reliable. A between-subjects 2 × 3 design compared SAM appearance (anthropomorphic avatar vs. camera eye) and voice inflection (monotone vs. meaningless vs. meaningful), with the meaningful inflections communicating contextually useful information about automated advice regarding certainty and uncertainty.
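As a hypothetical illustration (not the authors' code or analysis), trust calibration in a paradigm like this is often operationalized as the alignment between a user's decisions to accept or reject advice and that advice's actual correctness. A minimal sketch, assuming a 50%-reliable aid like SAM and per-trial accept/reject responses:

```python
import random

def simulate_advice(n_trials=80, reliability=0.5, seed=1):
    """Simulate an automated aid whose advice is correct on a fixed
    proportion of trials (50% here, matching SAM's reliability)."""
    rng = random.Random(seed)
    return [rng.random() < reliability for _ in range(n_trials)]

def calibration_score(advice_correct, accepted):
    """Proportion of trials where the user's accept/reject decision
    matched the advice's actual correctness (1.0 = perfect calibration)."""
    matches = sum(c == a for c, a in zip(advice_correct, accepted))
    return matches / len(advice_correct)

advice = simulate_advice()

# A perfectly calibrated user accepts advice exactly when it is correct.
perfect_user = list(advice)
print(calibration_score(advice, perfect_user))  # 1.0

# An over-trusting user accepts everything; their calibration score
# collapses to roughly the aid's reliability (~0.5).
credulous_user = [True] * len(advice)
print(calibration_score(advice, credulous_user))
```

The function names and scoring rule are illustrative assumptions; the study's actual trust-calibration measure may differ. The sketch only shows why a 50%-reliable aid makes calibration diagnostic: blind acceptance cannot score above the aid's base reliability.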

Results: Avatar SAM appearance was rated as more anthropomorphic than camera eye, and meaningless and meaningful inflections were both rated more anthropomorphic than monotone. However, for subjective trust, trust calibration, and confidence in accepting SAM advice, there was no evidence of anthropomorphic appearance having any impact, while there was decisive evidence that meaningful inflections yielded better outcomes on these trust measures than monotone and meaningless inflections.

Conclusion: Anthropomorphism had negligible impact on human-automation trust unless its execution enhanced communication of relevant information that allowed participants to better calibrate expectations of automation performance.

Application: Designers using anthropomorphism to calibrate trust need to consider what contextually useful information will be communicated via anthropomorphic features.

Source journal: Human Factors (Management Science - Behavioral Sciences)
CiteScore: 10.60 | Self-citation rate: 6.10% | Articles per year: 99 | Review time: 6-12 weeks
Journal description: Human Factors: The Journal of the Human Factors and Ergonomics Society publishes peer-reviewed scientific studies in human factors/ergonomics that present theoretical and practical advances concerning the relationship between people and technologies, tools, environments, and systems. Papers published in Human Factors leverage fundamental knowledge of human capabilities and limitations – and the basic understanding of cognitive, physical, behavioral, physiological, social, developmental, affective, and motivational aspects of human performance – to yield design principles; enhance training, selection, and communication; and ultimately improve human-system interfaces and sociotechnical systems that lead to safer and more effective outcomes.