Positive and negative explanation effects in human–agent teams

Bryan Lavender, Sami Abuhaimed, Sandip Sen
Journal: AI and Ethics, Vol. 4, No. 1, pp. 47–56
DOI: 10.1007/s43681-023-00396-0
Published: 2024-01-10 (Journal Article)
URL: https://link.springer.com/article/10.1007/s43681-023-00396-0
Citation count: 0

Abstract



Improving agent capabilities and the increasing availability of computing platforms and internet connectivity allow more effective and diverse collaboration between human users and automated agents. To increase the viability and effectiveness of human–agent collaborative teams, there is a pressing need for research enabling such teams to maximally leverage the relative strengths of human and automated reasoners. We study virtual and ad hoc teams, each comprising a human and an agent, collaborating over a few episodes, where each episode requires them to complete a set of tasks chosen from given task types. Team members are initially unaware of their partner's capabilities, and the agent, acting as the task allocator, must adapt the allocation process to maximize team performance. The focus of the current paper is on analyzing how allocation decision explanations can affect both user performance and the human workers' outlook, including factors such as motivation and satisfaction. We investigate the effect of explanations provided by the agent allocator to the human on performance and on key factors reported by the human teammate in surveys. Survey factors include the effect of explanations on motivation, explanatory power, and understandability, as well as satisfaction with and trust/confidence in the teammate. We evaluated a set of hypotheses on these factors related to positive, negative, and no-explanation scenarios through experiments conducted with MTurk workers.
