Special Issue on Human-AI Teaming and Special Issue on AI in Healthcare

M. Endsley, Nancy J. Cooke, Nathan J. McNeese, A. Bisantz, L. Militello, Emilie Roth

Journal of Cognitive Engineering and Decision Making, 16(1), 179–181 (2022). DOI: 10.1177/15553434221133288

Citations: 3
Abstract
Building upon advances in machine learning, software that depends on artificial intelligence (AI) is being introduced across a wide spectrum of systems, including healthcare, autonomous vehicles, advanced manufacturing, aviation, and military systems. Artificial intelligence systems may nevertheless be unreliable or insufficiently robust, owing to the challenges of developing reliable and robust AI algorithms from datasets that are noisy and incomplete, the lack of causal models needed for projecting future outcomes, the presence of undetected biases, and noisy or faulty sensor inputs. It is therefore anticipated that, for the foreseeable future, AI systems will need to operate in conjunction with humans in order to perform their tasks, often as part of a larger team of humans and AI systems. Further, AI systems may be instantiated with different levels of autonomy, at different times, and for different types of tasks or circumstances, creating a wide design space for consideration. The design and implementation of AI systems that work effectively in concert with human beings creates significant challenges, including providing sufficient levels of AI transparency and explainability to support human situation awareness (SA), trust, performance, and decision making, and supporting the need for collaboration and coordination between humans and AI systems.

This special issue covers new research designed to better integrate people with AI in ways that allow them to function effectively. Several articles explore the role of trust in mediating the interactions of the human-AI team. Dorton and Harper (2022) explore the factors leading intelligence analysts to trust AI systems, finding that both the performance of the system and its explainability were leading factors, along with its perceived utility for aiding analysts in doing their jobs. Textor et al. (2022) investigate how an AI system's conformance to ethical norms affects human trust in the system, showing that unethical recommendations played a nuanced role in the trust relationship and that typical human responses to such violations were ineffective at repairing trust. Appelganc et al. (2022) further explored the role of system reliability, specifically comparing the level of reliability humans require to perceive agents (human, AI, and decision support system (DSS)) as highly reliable. Their findings indicate that the reliability required to work with any of the agents was equally high regardless of agent type, but humans trusted the human more than the AI and the DSS. Ezenyilimba et al. (2023) studied the comparative effects of robot transparency and explainability on the SA and trust of human teammates in a search-and-rescue task. Although transparency regarding the autonomous robot's system status improved SA and trust, detailed explanations of evolving events and robot capabilities improved SA and trust over and above transparency alone.