Tobias Rieger, Luisa Kugler, Dietrich Manzey, Eileen Roesler
{"title":"The (Im)perfect Automation Schema: Who Is Trusted More, Automated or Human Decision Support?","authors":"Tobias Rieger, Luisa Kugler, Dietrich Manzey, Eileen Roesler","doi":"10.1177/00187208231197347","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>This study's purpose was to better understand the dynamics of trust attitude and behavior in human-agent interaction.</p><p><strong>Background: </strong>Whereas past research provided evidence for a perfect automation schema, more recent research has provided contradictory evidence.</p><p><strong>Method: </strong>To disentangle these conflicting findings, we conducted an online experiment using a simulated medical X-ray task. We manipulated the framing of support agents (i.e., artificial intelligence (AI) versus expert versus novice) between-subjects and failure experience (i.e., perfect support, imperfect support, back-to-perfect support) within subjects. Trust attitude and behavior as well as perceived reliability served as dependent variables.</p><p><strong>Results: </strong>Trust attitude and perceived reliability were higher for the human expert than for the AI than for the human novice. Moreover, the results showed the typical pattern of trust formation, dissolution, and restoration for trust attitude and behavior as well as perceived reliability. Forgiveness after failure experience did not differ between agents.</p><p><strong>Conclusion: </strong>The results strongly imply the existence of an imperfect automation schema. This illustrates the need to consider agent expertise for human-agent interaction.</p><p><strong>Application: </strong>When replacing human experts with AI as support agents, the challenge of lower trust attitude towards the novel agent might arise.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"1995-2007"},"PeriodicalIF":2.9000,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Human Factors","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1177/00187208231197347","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/8/26 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"BEHAVIORAL SCIENCES","Score":null,"Total":0}
引用次数: 0
Abstract
Objective: This study's purpose was to better understand the dynamics of trust attitude and behavior in human-agent interaction.
Background: Whereas past research provided evidence for a perfect automation schema, more recent research has provided contradictory evidence.
Method: To disentangle these conflicting findings, we conducted an online experiment using a simulated medical X-ray task. We manipulated the framing of support agents (i.e., artificial intelligence (AI) versus expert versus novice) between subjects and failure experience (i.e., perfect support, imperfect support, back-to-perfect support) within subjects. Trust attitude and behavior as well as perceived reliability served as dependent variables.
Results: Trust attitude and perceived reliability were higher for the human expert than for the AI, and higher for the AI than for the human novice. Moreover, the results showed the typical pattern of trust formation, dissolution, and restoration for trust attitude and behavior as well as for perceived reliability. Forgiveness after failure experience did not differ between agents.
Conclusion: The results strongly imply the existence of an imperfect automation schema. This illustrates the need to consider agent expertise for human-agent interaction.
Application: When replacing human experts with AI as support agents, the challenge of a lower trust attitude toward the novel agent may arise.
Journal Introduction:
Human Factors: The Journal of the Human Factors and Ergonomics Society publishes peer-reviewed scientific studies in human factors/ergonomics that present theoretical and practical advances concerning the relationship between people and technologies, tools, environments, and systems. Papers published in Human Factors leverage fundamental knowledge of human capabilities and limitations – and the basic understanding of cognitive, physical, behavioral, physiological, social, developmental, affective, and motivational aspects of human performance – to yield design principles; enhance training, selection, and communication; and ultimately improve human-system interfaces and sociotechnical systems that lead to safer and more effective outcomes.