Trust in an Autonomous Agent for Predictive Maintenance: How Agent Transparency Could Impact Compliance

Loïck Simon, Philippe Rauffet, Clément Guérin, Cédric Seguin
{"title":"Trust in an Autonomous Agent for Predictive Maintenance: How Agent Transparency Could Impact Compliance","authors":"Loïck Simon, Philippe Rauffet, Clément Guérin, Cédric Seguin","doi":"10.54941/ahfe1001602","DOIUrl":null,"url":null,"abstract":"In the context of Industry 4.0, human operators will increasingly cooperate with intelligent systems, considered as teammates in the joint activity. This human-autonomy teaming is particularly prevalent in the activity of predictive maintenance, where the system advises the operator to advance or postpone some operations on the machines according to the projection of their future state. Like in human-human cooperation, the effectiveness of cooperation with those autonomous agents especially depends on the notion of trust. The challenge is to calibrate an appropriate level of trust and avoid misuse, disuse or abuse of the recommending system. Compliance (i.e. positive response of the operator on advice from an autonomous agent) can be interpreted as an objective measure of trust as the operator relies on the advice from the autonomous agent. This compliance is also based on the risk perception of the situation as the operator assesses the risk and the benefits of advancing or postponing an operation. A way to calibrate the trust and enhance risk perception is to use the transparency concept. Transparency has been defined as an information during a human-machine interaction that is easy to use with the intent to promote the comprehension, the shared awareness, the intent, the role, the interaction, the performance, the future plans and the reasoning process. This research will focus on two aspects of the transparency concept : the reliability of the autonomous agent ; the outcomes linked to the advice of the autonomous agent. 
The objective of this research is to understand the effect of the autonomous agent transparency on human trust after an advice from an autonomous agent (here an AI for predictive maintenance) for a more or less risky situation. Our hypothesis is that transparency will impact compliance (H1: Risk transparency will decrease compliance ; H2: Reliability transparency will increase compliance ; H3: Full transparency will decrease compliance)For this experiment we recruited participants to complete decision situations (i.e. accept or deny a proposition, from a predictive maintenance algorithme, of advancing or postponing a CMMS maintenance). A software for predictive maintenance in maritime context was used to address those situations. During this experiment, agent transparency level is manipulated by displaying information related to agent reliability and to situation outcomes, separately or in combination. This agent transparency is mixed with situation complexity (high or low) and the type of advice (advancinc or postponing the maintenance interventions). Age, gender, profession and affinity for the use of technology are assessed for control variables. As the situation represents risk taking, a scale for propensity of risk taking is also used. Trust (subjective and objective), risk perception and mental workload are measured after each situation. 
As a final question, the participant gives the main information he used to make his choice for each experimental setting.","PeriodicalId":221615,"journal":{"name":"Industrial Cognitive Ergonomics and Engineering Psychology","volume":"119 51","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Industrial Cognitive Ergonomics and Engineering Psychology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54941/ahfe1001602","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In the context of Industry 4.0, human operators will increasingly cooperate with intelligent systems regarded as teammates in a joint activity. This human-autonomy teaming is particularly prevalent in predictive maintenance, where the system advises the operator to advance or postpone operations on machines according to projections of their future state. As in human-human cooperation, the effectiveness of cooperation with such autonomous agents depends especially on trust. The challenge is to calibrate an appropriate level of trust and to avoid misuse, disuse, or abuse of the recommending system. Compliance (i.e., a positive response by the operator to advice from an autonomous agent) can be interpreted as an objective measure of trust, since the operator relies on the agent's advice. Compliance also depends on risk perception, as the operator weighs the risks and benefits of advancing or postponing an operation. One way to calibrate trust and enhance risk perception is transparency. Transparency has been defined as information, provided during human-machine interaction, that is easy to use and intended to promote comprehension, shared awareness, intent, role, interaction, performance, future plans, and the reasoning process. This research focuses on two aspects of transparency: the reliability of the autonomous agent, and the outcomes linked to its advice. The objective is to understand the effect of agent transparency on human trust after advice from an autonomous agent (here, an AI for predictive maintenance) in situations of varying risk.
Our hypothesis is that transparency will impact compliance (H1: risk transparency will decrease compliance; H2: reliability transparency will increase compliance; H3: full transparency will decrease compliance). For this experiment, we recruited participants to complete decision situations, i.e., to accept or reject a proposition from a predictive maintenance algorithm to advance or postpone a CMMS maintenance intervention. Predictive-maintenance software for the maritime context was used to present these situations. Agent transparency level was manipulated by displaying information related to agent reliability and to situation outcomes, separately or in combination. This transparency factor was crossed with situation complexity (high or low) and the type of advice (advancing or postponing the maintenance intervention). Age, gender, profession, and affinity for technology use were assessed as control variables. Because the situations involve risk taking, a risk-taking propensity scale was also administered. Trust (subjective and objective), risk perception, and mental workload were measured after each situation. As a final question, participants reported the main information they used to make their choice in each experimental setting.
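The factorial design described above can be sketched as a condition enumeration. This is a minimal illustration, not the authors' materials: the four transparency levels follow from the abstract (no transparency, reliability only, outcome/risk only, full), while the factor names and level labels are assumptions introduced here.

```python
from itertools import product

# Hypothetical labels for the three manipulated factors described in the abstract.
TRANSPARENCY = ["none", "reliability", "risk", "full"]  # agent transparency level
COMPLEXITY = ["low", "high"]                            # situation complexity
ADVICE = ["advance", "postpone"]                        # type of maintenance advice

def build_conditions():
    """Enumerate every transparency x complexity x advice combination."""
    return [
        {"transparency": t, "complexity": c, "advice": a}
        for t, c, a in product(TRANSPARENCY, COMPLEXITY, ADVICE)
    ]

conditions = build_conditions()
print(len(conditions))  # 4 x 2 x 2 = 16 decision situations
```

Crossing the factors this way yields one decision situation per cell; each participant's compliance, trust, risk perception, and mental workload measures would then be recorded against the cell they responded to.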