Mediating Agent Reliability with Human Trust, Situation Awareness, and Performance in Autonomously-Collaborative Human-Agent Teams

Journal of Cognitive Engineering and Decision Making (IF 2.2, Q3, Engineering, Industrial) · Pub Date: 2022-09-28 · DOI: 10.1177/15553434221129166
Sebastian S. Rodriguez, Erin G. Zaroukian, Jeff Hoye, Derrik E. Asher
Journal of Cognitive Engineering and Decision Making, Volume 17, Issue 1, pp. 3–25. Journal Article.
Citations: 1

Abstract

When teaming with humans, the reliability of intelligent agents may sporadically change due to failure or environmental constraints. Alternatively, an agent may be more reliable than a human because its performance is less likely to degrade (e.g., due to fatigue). Research often investigates human-agent interactions under little to no time constraints, such as discrete decision-making tasks where the automation is relegated to the role of an assistant. This paper conducts a quantitative investigation of varying reliability in human-agent teams in a time-pressured continuous pursuit task, and it interconnects individual differences, perceptual factors, and task performance through structural equation modeling. Results indicate that reducing reliability may generate a more effective agent that is imperceptibly different from a fully reliable agent, while contributing to overall team performance. The mediation analysis replicates factors studied in the trust and situation awareness literature while providing new insights: agents with an active stake in the task (i.e., whose success depends on team performance) offset loss of situation awareness, differing from the usual notion of overtrust. We conclude by generalizing implications from an abstract pursuit task, and we highlight challenges in conducting research in time-pressured continuous domains.
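The abstract's mediation analysis connects agent reliability, perceptual factors such as trust, and task performance through structural equation modeling. As a rough illustration of what a mediated effect means in this setting, the sketch below runs a minimal three-variable mediation decomposition (reliability → trust → performance) on synthetic data using ordinary least squares. This is not the paper's model: the variable names, effect sizes, and data are invented for illustration, and a full SEM would fit all paths simultaneously with latent constructs.

```python
# Illustrative sketch only: a simple three-variable mediation decomposition
# (X = reliability, M = trust, Y = performance) on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
reliability = rng.normal(size=n)                                   # X (synthetic)
trust = 0.6 * reliability + rng.normal(scale=0.8, size=n)          # M (synthetic)
performance = (0.5 * trust + 0.2 * reliability
               + rng.normal(scale=0.8, size=n))                    # Y (synthetic)

def ols_slopes(predictors, y):
    """Slopes from an OLS fit of y on an intercept plus each predictor."""
    X = np.column_stack([np.ones_like(y)] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

c_total, = ols_slopes([reliability], performance)            # total effect X -> Y
a, = ols_slopes([reliability], trust)                        # path X -> M
b, c_direct = ols_slopes([trust, reliability], performance)  # paths M -> Y, X -> Y
indirect = a * b                                             # mediated (indirect) effect

print(f"total={c_total:.3f} direct={c_direct:.3f} indirect={indirect:.3f}")
```

For linear OLS models the decomposition is exact: the total effect equals the direct effect plus the indirect effect (c = c' + a·b), which is what makes the mediated path interpretable as the portion of reliability's influence that flows through trust.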
Source journal metrics: CiteScore 4.60; self-citation rate 10.00%; articles published: 21.