Human-agent teaming and trust calibration: a theoretical framework, configurable testbed, empirical illustration, and implications for the development of adaptive systems

P. Bobko, Leanne M. Hirshfield, Lucca Eloy, Cara A. Spencer, Emily Doherty, Jack Driscoll, Hannah Obolsky
{"title":"人工智能体团队和信任校准:理论框架、可配置的试验台、经验说明以及对自适应系统开发的启示","authors":"P. Bobko, Leanne M. Hirshfield, Lucca Eloy, Cara A. Spencer, Emily Doherty, Jack Driscoll, Hannah Obolsky","doi":"10.1080/1463922X.2022.2086644","DOIUrl":null,"url":null,"abstract":"Abstract Given new technologies and algorithmic capabilities, human-agent teaming (HAT) is expected to dominate environments where complex problems are solved by heterogenous teams. In such teams, trust calibration is key; i.e. humans and agents working symbiotically, with humans trusting and relying on agents as appropriate. In this paper, we focus on understanding trust-calibration in HATs. We propose a theoretical framework of calibrated trust in HATs. Next, we provide a configurable testbed designed to investigate calibrated trust in HATs. To demonstrate the flexible testbed and our framework, we conduct a study investigating hypotheses about agent transparency and reliability. Results align with research to date, supporting the notion that transparency results in calibrated trust. Further, high transparency yielded more positive affect and lower workload than low transparency. We also found that increased agent reliability resulted in higher trust in the agent, as well as more positive valence. This suggests that participants experienced more engagement with the task when the agent was reliable and presumably trustworthy. We also build on our framework and testbed to outline a research agenda for the assessment of human trust dynamics in HATs and the development of subsequent real-time, intelligent adaptive systems.","PeriodicalId":22852,"journal":{"name":"Theoretical Issues in Ergonomics Science","volume":null,"pages":null},"PeriodicalIF":1.4000,"publicationDate":"2022-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Human-agent teaming and trust calibration: a theoretical framework, configurable testbed, empirical illustration, and implications for the development of adaptive systems\",\"authors\":\"P. Bobko, Leanne M. Hirshfield, Lucca Eloy, Cara A. Spencer, Emily Doherty, Jack Driscoll, Hannah Obolsky\",\"doi\":\"10.1080/1463922X.2022.2086644\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract Given new technologies and algorithmic capabilities, human-agent teaming (HAT) is expected to dominate environments where complex problems are solved by heterogenous teams. In such teams, trust calibration is key; i.e. humans and agents working symbiotically, with humans trusting and relying on agents as appropriate. In this paper, we focus on understanding trust-calibration in HATs. We propose a theoretical framework of calibrated trust in HATs. Next, we provide a configurable testbed designed to investigate calibrated trust in HATs. To demonstrate the flexible testbed and our framework, we conduct a study investigating hypotheses about agent transparency and reliability. Results align with research to date, supporting the notion that transparency results in calibrated trust. Further, high transparency yielded more positive affect and lower workload than low transparency. We also found that increased agent reliability resulted in higher trust in the agent, as well as more positive valence. This suggests that participants experienced more engagement with the task when the agent was reliable and presumably trustworthy. 
We also build on our framework and testbed to outline a research agenda for the assessment of human trust dynamics in HATs and the development of subsequent real-time, intelligent adaptive systems.\",\"PeriodicalId\":22852,\"journal\":{\"name\":\"Theoretical Issues in Ergonomics Science\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2022-06-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Theoretical Issues in Ergonomics Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/1463922X.2022.2086644\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ERGONOMICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Theoretical Issues in Ergonomics Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/1463922X.2022.2086644","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ERGONOMICS","Score":null,"Total":0}
Citations: 6

Abstract

Given new technologies and algorithmic capabilities, human-agent teaming (HAT) is expected to dominate environments where complex problems are solved by heterogeneous teams. In such teams, trust calibration is key: humans and agents work symbiotically, with humans trusting and relying on agents as appropriate. In this paper, we focus on understanding trust calibration in HATs. We propose a theoretical framework of calibrated trust in HATs. Next, we provide a configurable testbed designed to investigate calibrated trust in HATs. To demonstrate the flexible testbed and our framework, we conduct a study investigating hypotheses about agent transparency and reliability. Results align with research to date, supporting the notion that transparency results in calibrated trust. Further, high transparency yielded more positive affect and lower workload than low transparency. We also found that increased agent reliability resulted in higher trust in the agent, as well as more positive valence. This suggests that participants experienced more engagement with the task when the agent was reliable and presumably trustworthy. We also build on our framework and testbed to outline a research agenda for the assessment of human trust dynamics in HATs and the development of subsequent real-time, intelligent adaptive systems.
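The study manipulates agent transparency and reliability as independent variables and then asks whether human reliance tracks the agent's actual performance. As a loose illustration only, not the authors' actual testbed, the Python sketch below shows one way such experimental conditions might be parameterized and a naive trust-calibration score computed; all names (AgentConfig, calibration_error) and the 70% reliance figure are hypothetical.

```python
# Hypothetical sketch: NOT the paper's testbed, just an illustration of how
# transparency and reliability might be exposed as configurable conditions,
# and how "calibrated trust" could be scored against agent performance.
from dataclasses import dataclass
import random


@dataclass
class AgentConfig:
    """One experimental condition for a simulated agent (names illustrative)."""
    transparency: str   # "high" -> agent explains its recommendations; "low" -> it does not
    reliability: float  # probability that an agent recommendation is correct


def run_trial(cfg: AgentConfig, rng: random.Random) -> bool:
    """Simulate one agent recommendation; True means the agent was correct."""
    return rng.random() < cfg.reliability


def calibration_error(reliance_rate: float, cfg: AgentConfig) -> float:
    """Score calibration as the gap between how often the human relied on the
    agent and how often the agent was actually right.
    0 = perfectly calibrated; larger values = over- or under-trust."""
    return abs(reliance_rate - cfg.reliability)


if __name__ == "__main__":
    rng = random.Random(0)
    cfg = AgentConfig(transparency="high", reliability=0.85)
    outcomes = [run_trial(cfg, rng) for _ in range(200)]
    observed_accuracy = sum(outcomes) / len(outcomes)
    # Suppose behavioral logs showed the participant accepted 70% of recommendations:
    print(f"agent accuracy:    {observed_accuracy:.2f}")
    print(f"calibration error: {calibration_error(0.70, cfg):.2f}")
```

In this sketch, calibration is the absolute gap between observed reliance and agent reliability, so over-trust and under-trust are penalized symmetrically; richer measures of trust dynamics, such as those outlined in the paper's research agenda, would track this gap over time rather than as a single score.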