Human-agent teaming and trust calibration: a theoretical framework, configurable testbed, empirical illustration, and implications for the development of adaptive systems

P. Bobko, Leanne M. Hirshfield, Lucca Eloy, Cara A. Spencer, Emily Doherty, Jack Driscoll, Hannah Obolsky

Theoretical Issues in Ergonomics Science, 24(1), 310-334. Published 2022-06-25. DOI: https://doi.org/10.1080/1463922X.2022.2086644
Abstract
Given new technologies and algorithmic capabilities, human-agent teaming (HAT) is expected to dominate environments where complex problems are solved by heterogeneous teams. In such teams, trust calibration is key: humans and agents work symbiotically, with humans trusting and relying on agents as appropriate. In this paper, we focus on understanding trust calibration in HATs. We propose a theoretical framework of calibrated trust in HATs, and then provide a configurable testbed designed to investigate it. To demonstrate the flexibility of the testbed and our framework, we conduct a study testing hypotheses about agent transparency and reliability. Results align with research to date, supporting the notion that transparency yields calibrated trust. Further, high transparency produced more positive affect and lower workload than low transparency. We also found that increased agent reliability resulted in higher trust in the agent, as well as more positive valence, suggesting that participants were more engaged with the task when the agent was reliable and presumably trustworthy. Finally, we build on our framework and testbed to outline a research agenda for assessing human trust dynamics in HATs and for developing subsequent real-time, intelligent adaptive systems.
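As a rough illustration of the abstract's central construct, the sketch below shows one plausible way to quantify trust calibration from logged trials: compare how often the human relies on the agent with how often the agent is actually correct. This is a minimal sketch under stated assumptions, not the paper's testbed or measures; the names (`AgentConfig`, `TrialLog`, `calibration_gap`) and the reliance-vs-reliability gap score are hypothetical.

```python
"""Minimal sketch: quantifying trust calibration in a human-agent team.

Hypothetical and illustrative only; not the paper's actual testbed.
Assumes each trial logs whether the agent's recommendation was correct
and whether the human relied on it.
"""
from dataclasses import dataclass
from typing import List


@dataclass
class AgentConfig:
    """Hypothetical testbed knobs mirroring the study's two manipulations."""
    transparency: str   # e.g. "low" or "high": how much the agent explains itself
    reliability: float  # proportion of trials on which the agent is correct


@dataclass
class TrialLog:
    agent_correct: bool  # was the agent's recommendation right on this trial?
    human_relied: bool   # did the human accept the recommendation?


def calibration_gap(trials: List[TrialLog]) -> float:
    """Absolute gap between the human's reliance rate and the agent's
    observed reliability: 0.0 means reliance is perfectly calibrated,
    while larger values indicate over- or under-trust."""
    if not trials:
        raise ValueError("need at least one trial")
    reliance_rate = sum(t.human_relied for t in trials) / len(trials)
    observed_reliability = sum(t.agent_correct for t in trials) / len(trials)
    return abs(reliance_rate - observed_reliability)


if __name__ == "__main__":
    cfg = AgentConfig(transparency="high", reliability=0.7)
    # Toy session of 10 trials: the agent is correct on 7 (matching
    # cfg.reliability) while the human relies on 9, a 0.20 over-trust gap.
    logs = [TrialLog(agent_correct=i < 7, human_relied=i < 9) for i in range(10)]
    print(f"{cfg.transparency}-transparency agent: "
          f"calibration gap = {calibration_gap(logs):.2f}")  # prints 0.20
```

The `AgentConfig` fields mirror the two factors manipulated in the study (transparency and reliability); a real-time, intelligent adaptive system of the kind the abstract envisions could plausibly monitor such a gap during a session and intervene when reliance drifts away from the agent's demonstrated reliability.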