Transparent Automated Advice to Mitigate the Impact of Variation in Automation Reliability.

Human Factors · IF 2.9 · JCR Q1 (Behavioral Sciences) · CAS Tier 3 (Psychology) · Pub Date: 2024-08-01 · Epub Date: 2023-08-27 · DOI: 10.1177/00187208231196738
Isabella Gegoff, Monica Tatasciore, Vanessa Bowden, Jason McCarley, Shayne Loft
{"title":"减轻自动化可靠性差异影响的透明自动建议。","authors":"Isabella Gegoff, Monica Tatasciore, Vanessa Bowden, Jason McCarley, Shayne Loft","doi":"10.1177/00187208231196738","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>To examine the extent to which increased automation transparency can mitigate the potential negative effects of low and high automation reliability on disuse and misuse of automated advice, and perceived trust in automation.</p><p><strong>Background: </strong>Automated decision aids that vary in the reliability of their advice are increasingly used in workplaces. Low-reliability automation can increase disuse of automated advice, while high-reliability automation can increase misuse. These effects could be reduced if the rationale underlying automated advice is made more transparent.</p><p><strong>Methods: </strong>Participants selected the optimal UV to complete missions. The Recommender (automated decision aid) assisted participants by providing advice; however, it was not always reliable. Participants determined whether the Recommender provided accurate information and whether to accept or reject advice. The level of automation transparency (medium, high) and reliability (low: 65%, high: 90%) were manipulated between-subjects.</p><p><strong>Results: </strong>With high- compared to low-reliability automation, participants made more accurate (correctly accepted advice <i>and</i> identified whether information was accurate/inaccurate) and faster decisions, and reported increased trust in automation. Increased transparency led to more accurate and faster decisions, lower subjective workload, and higher usability ratings. It also eliminated the increased automation disuse associated with low-reliability automation. However, transparency did not mitigate the misuse associated with high-reliability automation.</p><p><strong>Conclusion: </strong>Transparency protected against low-reliability automation disuse, but not against the increased misuse potentially associated with the reduced monitoring and verification of high-reliability automation.</p><p><strong>Application: </strong>These outcomes can inform the design of transparent automation to improve human-automation teaming under conditions of varied automation reliability.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":null,"pages":null},"PeriodicalIF":2.9000,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141097/pdf/","citationCount":"0","resultStr":"{\"title\":\"Transparent Automated Advice to Mitigate the Impact of Variation in Automation Reliability.\",\"authors\":\"Isabella Gegoff, Monica Tatasciore, Vanessa Bowden, Jason McCarley, Shayne Loft\",\"doi\":\"10.1177/00187208231196738\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objective: </strong>To examine the extent to which increased automation transparency can mitigate the potential negative effects of low and high automation reliability on disuse and misuse of automated advice, and perceived trust in automation.</p><p><strong>Background: </strong>Automated decision aids that vary in the reliability of their advice are increasingly used in workplaces. Low-reliability automation can increase disuse of automated advice, while high-reliability automation can increase misuse. 
These effects could be reduced if the rationale underlying automated advice is made more transparent.</p><p><strong>Methods: </strong>Participants selected the optimal UV to complete missions. The Recommender (automated decision aid) assisted participants by providing advice; however, it was not always reliable. Participants determined whether the Recommender provided accurate information and whether to accept or reject advice. The level of automation transparency (medium, high) and reliability (low: 65%, high: 90%) were manipulated between-subjects.</p><p><strong>Results: </strong>With high- compared to low-reliability automation, participants made more accurate (correctly accepted advice <i>and</i> identified whether information was accurate/inaccurate) and faster decisions, and reported increased trust in automation. Increased transparency led to more accurate and faster decisions, lower subjective workload, and higher usability ratings. It also eliminated the increased automation disuse associated with low-reliability automation. However, transparency did not mitigate the misuse associated with high-reliability automation.</p><p><strong>Conclusion: </strong>Transparency protected against low-reliability automation disuse, but not against the increased misuse potentially associated with the reduced monitoring and verification of high-reliability automation.</p><p><strong>Application: </strong>These outcomes can inform the design of transparent automation to improve human-automation teaming under conditions of varied automation reliability.</p>\",\"PeriodicalId\":56333,\"journal\":{\"name\":\"Human Factors\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141097/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Human Factors\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1177/00187208231196738\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/8/27 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"BEHAVIORAL SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Human Factors","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1177/00187208231196738","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/8/27 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"BEHAVIORAL SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract


Objective: To examine the extent to which increased automation transparency can mitigate the potential negative effects of low and high automation reliability on disuse and misuse of automated advice, and perceived trust in automation.

Background: Automated decision aids that vary in the reliability of their advice are increasingly used in workplaces. Low-reliability automation can increase disuse of automated advice (rejecting advice that is in fact correct), while high-reliability automation can increase misuse (accepting advice that is in fact incorrect). These effects could be reduced if the rationale underlying automated advice is made more transparent.

Methods: Participants selected the optimal uninhabited vehicle (UV) to complete missions. The Recommender (an automated decision aid) assisted participants by providing advice; however, it was not always reliable. Participants determined whether the Recommender provided accurate information and whether to accept or reject its advice. Automation transparency (medium, high) and reliability (low: 65%, high: 90%) were manipulated between subjects.
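
To make the reliability manipulation concrete, the sketch below simulates a decision aid whose advice is correct on a fixed proportion of trials (65% in the low-reliability condition, 90% in the high-reliability condition). This is an illustration only, not the authors' task software; the names (`Trial`, `simulate_recommender`) and the assumption that errors pick a wrong option uniformly at random are mine.

```python
import random
from dataclasses import dataclass

@dataclass
class Trial:
    best_uv: str         # ground-truth optimal UV for the mission
    recommended_uv: str  # what the simulated Recommender advised
    advice_correct: bool

def simulate_recommender(reliability, uv_options, n_trials, seed=0):
    """Generate trials whose advice is correct with probability `reliability`."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        best = rng.choice(uv_options)
        if rng.random() < reliability:
            advice = best  # correct advice
        else:
            advice = rng.choice([u for u in uv_options if u != best])  # wrong advice
        trials.append(Trial(best, advice, advice == best))
    return trials

# One block per between-subjects reliability condition
low_rel = simulate_recommender(0.65, ["UV1", "UV2", "UV3"], n_trials=40)
high_rel = simulate_recommender(0.90, ["UV1", "UV2", "UV3"], n_trials=40)
```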

Results: With high- compared to low-reliability automation, participants made more accurate (correctly accepted advice and identified whether information was accurate/inaccurate) and faster decisions, and reported increased trust in automation. Increased transparency led to more accurate and faster decisions, lower subjective workload, and higher usability ratings. It also eliminated the increased automation disuse associated with low-reliability automation. However, transparency did not mitigate the misuse associated with high-reliability automation.
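
The disuse and misuse rates behind these comparisons can be scored from per-trial records, as in the minimal sketch below. It assumes the standard definitions (disuse = rejecting correct advice, misuse = accepting incorrect advice); the data layout and function name are hypothetical, not the paper's analysis code.

```python
def score_reliance(advice_correct, accepted):
    """Return disuse and misuse rates from per-trial records.

    disuse = proportion of correct-advice trials on which advice was rejected
    misuse = proportion of incorrect-advice trials on which advice was accepted
    """
    on_correct = [a for c, a in zip(advice_correct, accepted) if c]
    on_incorrect = [a for c, a in zip(advice_correct, accepted) if not c]
    disuse = 1 - sum(on_correct) / len(on_correct) if on_correct else 0.0
    misuse = sum(on_incorrect) / len(on_incorrect) if on_incorrect else 0.0
    return {"disuse": disuse, "misuse": misuse}

# Five trials: advice correct on four; the participant rejected one correct
# recommendation (disuse) and accepted the single incorrect one (misuse).
print(score_reliance([True, True, True, True, False],
                     [True, False, True, True, True]))
# -> {'disuse': 0.25, 'misuse': 1.0}
```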

Conclusion: Transparency protected against low-reliability automation disuse, but not against the increased misuse potentially associated with the reduced monitoring and verification of high-reliability automation.

Application: These outcomes can inform the design of transparent automation to improve human-automation teaming under conditions of varied automation reliability.

Source journal: Human Factors (Management Science – Behavioral Sciences)
CiteScore: 10.60
Self-citation rate: 6.10%
Articles published: 99
Review time: 6-12 weeks
Journal description: Human Factors: The Journal of the Human Factors and Ergonomics Society publishes peer-reviewed scientific studies in human factors/ergonomics that present theoretical and practical advances concerning the relationship between people and technologies, tools, environments, and systems. Papers published in Human Factors leverage fundamental knowledge of human capabilities and limitations – and the basic understanding of cognitive, physical, behavioral, physiological, social, developmental, affective, and motivational aspects of human performance – to yield design principles; enhance training, selection, and communication; and ultimately improve human-system interfaces and sociotechnical systems that lead to safer and more effective outcomes.