Balancing the Scales of Explainable and Transparent AI Agents within Human-Agent Teams

Sarvesh Sawant, Rohit Mallick, Camden Brady, Kapil Chalil Madathil, Nathan McNeese, Jeffrey Bertrand, Nikhil Rangaraju
{"title":"Balancing the Scales of Explainable and Transparent AI Agents within Human-Agent Teams","authors":"Sarvesh Sawant, Rohit Mallick, Camden Brady, Kapil Chalil Madathil, Nathan McNeese, Jeffrey Bertrand, Nikhil Rangaraju","doi":"10.1177/21695067231192250","DOIUrl":null,"url":null,"abstract":"With the progressive nature of Human-Agent Teams becoming more and more useful for high-quality work output, there is a proportional need for bi-directional communication between teammates to increase efficient collaboration. This need is centered around the well-known issue of innate mistrust between humans and artificial intelligence, resulting in sub-optimal work. To combat this, computer scientists and humancomputer interaction researchers alike have presented and refined specific solutions to this issue through different methods of AI interpretability. These different methods include explicit AI explanations as well as implicit manipulations of the AI interface, otherwise known as AI transparency. Individually these solutions hold considerable merit in repairing the relationship of trust between teammates, but also have individual flaws. We posit that the combination of different interpretable mechanisms mitigates each other’s flaws and extenuates their strengths within human-agent teams.","PeriodicalId":20673,"journal":{"name":"Proceedings of the Human Factors and Ergonomics Society Annual Meeting","volume":"19 1","pages":"2082 - 2087"},"PeriodicalIF":0.0000,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Human Factors and Ergonomics Society Annual Meeting","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/21695067231192250","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

As Human-Agent Teams become increasingly useful for producing high-quality work, there is a proportional need for bi-directional communication between teammates to support efficient collaboration. This need centers on the well-known issue of innate mistrust between humans and artificial intelligence, which results in sub-optimal work. To combat this, computer scientists and human-computer interaction researchers alike have presented and refined solutions to this issue through different methods of AI interpretability. These methods include explicit AI explanations as well as implicit manipulations of the AI interface, otherwise known as AI transparency. Individually, these solutions hold considerable merit in repairing the relationship of trust between teammates, but each also has its own flaws. We posit that combining different interpretability mechanisms mitigates their individual flaws and accentuates their strengths within human-agent teams.
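The paper proposes no implementation, but the two mechanisms the abstract contrasts can be made concrete. The following minimal Python sketch is our illustration, not the authors' method; all names (e.g., AgentRecommendation, render) are hypothetical. It pairs an explicit explanation (the "why" behind an agent's action) with transparency metadata (confidence and the inputs consulted) surfaced implicitly through the interface:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: one recommendation object carrying both
# interpretability channels the abstract distinguishes. The explicit
# channel is a natural-language rationale; the transparent channel is
# interface-level disclosure of the agent's internal state.

@dataclass
class AgentRecommendation:
    action: str
    explanation: str  # explicit: why the agent recommends this action
    confidence: float  # transparent: how certain the agent is (0.0-1.0)
    inputs_used: list[str] = field(default_factory=list)  # transparent: data consulted

    def render(self) -> str:
        """Format both channels for the human teammate's display."""
        return (
            f"Recommendation: {self.action}\n"
            f"Why: {self.explanation}\n"
            f"Confidence: {self.confidence:.0%} "
            f"(based on: {', '.join(self.inputs_used)})"
        )

rec = AgentRecommendation(
    action="Reroute drone to waypoint B",
    explanation="Wind speeds at waypoint A exceed the safe operating threshold.",
    confidence=0.87,
    inputs_used=["weather feed", "drone telemetry"],
)
print(rec.render())
```

The design point this sketch illustrates is complementarity: the explanation answers "why this action," while the confidence and input disclosure let the human teammate calibrate trust without a verbal exchange, which is the kind of combination the abstract argues for.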