Special Issue on Human-AI Teaming and Special Issue on AI in Healthcare

Journal of Cognitive Engineering and Decision Making · IF 2.2 · Q3 (Engineering, Industrial) · Pub Date: 2022-10-16 · DOI: 10.1177/15553434221133288
M. Endsley, Nancy J. Cooke, Nathan J. McNeese, A. Bisantz, L. Militello, Emilie Roth
{"title":"Special Issue on Human-AI Teaming and Special Issue on AI in Healthcare","authors":"M. Endsley, Nancy J. Cooke, Nathan J. Mcneese, A. Bisantz, L. Militello, Emilie Roth","doi":"10.1177/15553434221133288","DOIUrl":null,"url":null,"abstract":"Building upon advances in machine learning, software that depends on artificial intelligence (AI) is being introduced across a wide spectrum of systems, including healthcare, autonomous vehicles, advanced manufacturing, aviation, and military systems. Artificial intelligence systems may be unreliable or insufficiently robust; however, due to challenges in the development of reliable and robust AI algorithms based on datasets that are noisy and incomplete, the lack of causal models needed for projecting future outcomes, the presence of undetected biases, and noisy or faulty sensor inputs. Therefore, it is anticipated that for the foreseeable future, AI systems will need to operate in conjunction with humans in order to perform their tasks, and often as a part of a larger team of humans and AI systems. Further, AI systemsmay be instantiatedwith different levels of autonomy, at different times, and for different types of tasks or circumstances, creating a wide design space for consideration The design and implementation of AI systems that work effectively in concert with human beings creates significant challenges, including providing sufficient levels of AI transparency and explainability to support human situation awareness (SA), trust and performance, decision making, and supporting the need for collaboration and coordination between humans and AI systems. This special issue covers new research designed to better integrate people with AI in ways that will allow them to function effectively. Several articles explore the role of trust in mediating the interactions of the human-AI team. Dorton and Harper (2022) explore factors leading to trust of AI systems for intelligence analysts, finding that both the performance of the system and its explainability were leading factors, along with its perceived utility for aiding them in doing their jobs. Textor et al. (2022) investigate the role of AI conformance to ethical norms in affecting human trust in the system, showing that unethical recommendations had a nuanced role in the trust relationship, and that typical human responses to such violations were ineffective at repairing trust. Appelganc et al. (2022) further explored the role of system reliability, specifically comparing the reliability that is needed by humans to perceive agents (human, AI, and DSS) as being highly reliable. Findings indicate that the required reliability to work together with any of the agents was equally high regardless of agent type but humans trusted the humanmore than AI and DSS. Ezenyilimba et al. (2023) studied the comparative effects of robot transparency and explainability on the SA and trust of human teammates in a search and rescue task. 
Although transparency of the autonomous robot’s system status improved SA and trust, the provision of detailed explanations of evolving events and robot capabilities improved SA and trust over and above that of transparency alone.","PeriodicalId":46342,"journal":{"name":"Journal of Cognitive Engineering and Decision Making","volume":"16 1","pages":"179 - 181"},"PeriodicalIF":2.2000,"publicationDate":"2022-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Cognitive Engineering and Decision Making","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/15553434221133288","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, INDUSTRIAL","Score":null,"Total":0}
Citations: 3

Abstract

Building upon advances in machine learning, software that depends on artificial intelligence (AI) is being introduced across a wide spectrum of systems, including healthcare, autonomous vehicles, advanced manufacturing, aviation, and military systems. However, AI systems may be unreliable or insufficiently robust, owing to the challenges of developing reliable and robust AI algorithms from datasets that are noisy and incomplete, the lack of causal models needed for projecting future outcomes, the presence of undetected biases, and noisy or faulty sensor inputs. It is therefore anticipated that, for the foreseeable future, AI systems will need to operate in conjunction with humans in order to perform their tasks, often as part of a larger team of humans and AI systems. Further, AI systems may be instantiated with different levels of autonomy, at different times, and for different types of tasks or circumstances, creating a wide design space for consideration. The design and implementation of AI systems that work effectively in concert with human beings creates significant challenges, including providing sufficient levels of AI transparency and explainability to support human situation awareness (SA), trust, performance, and decision making, and supporting the need for collaboration and coordination between humans and AI systems.

This special issue covers new research designed to better integrate people with AI in ways that allow them to function effectively. Several articles explore the role of trust in mediating the interactions of the human-AI team. Dorton and Harper (2022) explore the factors that lead intelligence analysts to trust AI systems, finding that both the performance of the system and its explainability were leading factors, along with its perceived utility in aiding them in doing their jobs. Textor et al. (2022) investigate how AI conformance to ethical norms affects human trust in the system, showing that unethical recommendations played a nuanced role in the trust relationship, and that typical human responses to such violations were ineffective at repairing trust. Appelganc et al. (2022) further explored the role of system reliability, specifically comparing the reliability humans require to perceive agents (human, AI, and decision support system, DSS) as highly reliable. Their findings indicate that the reliability required to work with any of the agents was equally high regardless of agent type, but that humans trusted the human more than the AI or DSS. Ezenyilimba et al. (2023) studied the comparative effects of robot transparency and explainability on the SA and trust of human teammates in a search and rescue task. Although transparency about the autonomous robot's system status improved SA and trust, the provision of detailed explanations of evolving events and robot capabilities improved SA and trust over and above transparency alone.