
Journal of Cognitive Engineering and Decision Making — Latest Publications

A Sociotechnical Systems Framework for the Application of Artificial Intelligence in Health Care Delivery
IF 2 · Q1 Engineering · Pub Date: 2022-12-01 · Epub Date: 2022-05-11 · DOI: 10.1177/15553434221097357
Megan E Salwei, Pascale Carayon

In the coming years, artificial intelligence (AI) will pervade almost every aspect of the health care delivery system. AI has the potential to improve patient safety (e.g. diagnostic accuracy) as well as reduce the burden on clinicians (e.g. documentation-related workload); however, these benefits are yet to be realized. AI is only one element of a larger sociotechnical system that needs to be considered for effective AI application. In this paper, we describe the current challenges of integrating AI into clinical care and propose a sociotechnical systems (STS) approach for AI design and implementation. We demonstrate the importance of an STS approach through a case study on the design and implementation of a clinical decision support (CDS). In order for AI to reach its potential, the entire work system as well as clinical workflow must be systematically considered throughout the design of AI technology.

Citations: 8
Impact of Transparency and Explanations on Trust and Situation Awareness in Human–Robot Teams
IF 2 · Q1 Engineering · Pub Date: 2022-11-16 · DOI: 10.1177/15553434221136358
Akuadasuo Ezenyilimba, Margaret E. Wong, Alexander J. Hehr, Mustafa Demir, Alexandra T. Wolff, Erin K. Chiou, Nancy J. Cooke
Urban Search and Rescue (USAR) missions continue to benefit from the incorporation of human–robot teams (HRTs). USAR environments can be ambiguous, hazardous, and unstable. The integration of robot teammates into USAR missions has enabled human teammates to access areas of uncertainty, including hazardous locations. For HRTs to be effective, it is pertinent to understand the factors that influence team effectiveness, such as having shared goals, mutual understanding, and efficient communication. The purpose of our research is to determine how to (1) better establish human trust, (2) identify useful levels of robot transparency and robot explanations, (3) ensure situation awareness, and (4) encourage a bipartisan role amongst teammates. By implementing robot transparency and robot explanations, we found that the driving factors for effective HRTs rely on robot explanations that are context-driven and are readily available to the human teammate.
Citations: 7
Teaming with Your Car: Redefining the Driver–Automation Relationship in Highly Automated Vehicles
IF 2 · Q1 Engineering · Pub Date: 2022-11-11 · DOI: 10.1177/15553434221132636
Joonbum Lee, Hansol Rheem, John D. Lee, Joseph F. Szczerba, Omer Tsimhoni
Advances in automated driving systems (ADSs) have shifted the primary responsibility of controlling a vehicle from human drivers to automation. Framing driving a highly automated vehicle as teamwork can reveal practical requirements and design considerations to support the dynamic driver–ADS relationship. However, human–automation teaming is a relatively new concept in ADS research and requires further exploration. We conducted two literature reviews to identify concepts related to teaming and to define the driver–ADS relationship, requirements, and design considerations. The first literature review identified coordination, cooperation, and collaboration (3Cs) as core concepts to define driver–ADS teaming. Based on these findings, we propose the panarchy framework of 3Cs to understand drivers’ roles and relationships with automation in driver–ADS teaming. The second literature review identified main challenges for designing driver–ADS teams. The challenges include supporting mutual communication, enhancing observability and directability, developing a responsive ADS, and identifying and supporting the interdependent relationship between the driver and ADS. This study suggests that the teaming concept can promote a better understanding of the driver–ADS team where the driver and automation require interplay. Eventually, the driver–ADS teaming frame will lead to adequate expectations and mental models of partially automated vehicles.
Citations: 5
Special Issue on Human-AI Teaming and Special Issue on AI in Healthcare
IF 2 · Q1 Engineering · Pub Date: 2022-10-16 · DOI: 10.1177/15553434221133288
M. Endsley, Nancy J. Cooke, Nathan J. Mcneese, A. Bisantz, L. Militello, Emilie Roth
Building upon advances in machine learning, software that depends on artificial intelligence (AI) is being introduced across a wide spectrum of systems, including healthcare, autonomous vehicles, advanced manufacturing, aviation, and military systems. Artificial intelligence systems may, however, be unreliable or insufficiently robust due to challenges in the development of reliable and robust AI algorithms based on datasets that are noisy and incomplete, the lack of causal models needed for projecting future outcomes, the presence of undetected biases, and noisy or faulty sensor inputs. Therefore, it is anticipated that for the foreseeable future, AI systems will need to operate in conjunction with humans in order to perform their tasks, and often as a part of a larger team of humans and AI systems. Further, AI systems may be instantiated with different levels of autonomy, at different times, and for different types of tasks or circumstances, creating a wide design space for consideration. The design and implementation of AI systems that work effectively in concert with human beings creates significant challenges, including providing sufficient levels of AI transparency and explainability to support human situation awareness (SA), trust and performance, and decision making, and supporting the need for collaboration and coordination between humans and AI systems. This special issue covers new research designed to better integrate people with AI in ways that will allow them to function effectively. Several articles explore the role of trust in mediating the interactions of the human-AI team. Dorton and Harper (2022) explore factors leading to trust of AI systems for intelligence analysts, finding that both the performance of the system and its explainability were leading factors, along with its perceived utility for aiding them in doing their jobs. Textor et al. (2022) investigate the role of AI conformance to ethical norms in affecting human trust in the system, showing that unethical recommendations had a nuanced role in the trust relationship, and that typical human responses to such violations were ineffective at repairing trust. Appelganc et al. (2022) further explored the role of system reliability, specifically comparing the reliability that is needed by humans to perceive agents (human, AI, and DSS) as highly reliable. Findings indicate that the required reliability to work together with any of the agents was equally high regardless of agent type, but humans trusted the human more than the AI and DSS. Ezenyilimba et al. (2023) studied the comparative effects of robot transparency and explainability on the SA and trust of human teammates in a search and rescue task. Although transparency of the autonomous robot’s system status improved SA and trust, the provision of detailed explanations of evolving events and robot capabilities improved SA and trust over and above that of transparency alone.
Citations: 3
The Promise of Artificial Intelligence in Supporting an Aging Population
IF 2 · Q1 Engineering · Pub Date: 2022-10-16 · DOI: 10.1177/15553434221129914
S. Czaja, Marco Ceruso
The aging of the population is a great achievement but also poses challenges for society, families, and older adults. Because of age-related changes in abilities, many older adults encounter difficulties that threaten independence and well-being. Further, the likelihood of developing a disability or a chronic condition increases with age. Currently, family members provide a significant source of support for older adults. However, changes in family and social structures raise questions regarding how care will be provided to future cohorts of older adults. There is clearly a need for innovative strategies to address the care needs of future generations of aging individuals. Artificial Intelligence (AI) applications hold promise in terms of providing support for older adults. For example, applications are available that can track and monitor vital signs, health indicators, and cognition, or provide support for everyday activities. This paper highlights, with examples, the potential role of AI in providing support for aging adults to enhance independent living and the quality of life for both older adults and families. Challenges associated with the implementation of AI applications are also discussed, and recommendations for needed research are highlighted.
Citations: 8
Assessing Quality Goal Rankings as a Method for Communicating Operator Intent
IF 2 · Q1 Engineering · Pub Date: 2022-10-11 · DOI: 10.1177/15553434221131665
Michael F. Schneider, Michael E. Miller, J. McGuirl
Effective teammates coordinate their actions to achieve shared goals. In current human-Artificial Intelligent Agent (AIA) Teams, humans explicitly communicate task-oriented goals and how the goals are to be achieved to the AIAs as the AIAs do not support implicit communication. This research develops a construct for applying quality goals to improve coordination among human-AIA teams. This construct assumes that trained operators will exhibit similar priorities in similar situations and provides a shorthand communication mechanism to convey intentions. A study was designed and performed to assess situated operator priorities to provide insight into “how” operators desire a task to be performed. This assessment was performed episodically by trained and experienced Remotely Piloted Aircraft operators as they controlled an aircraft in a synthetic task environment through three challenging tactical scenarios. The results indicate that operator priorities change dynamically with situation changes. Further, the results are suitably cohesive across most trained operators to apply the data collected from the proposed method as training data to bootstrap development of an intent estimation agent. However, the data differed sufficiently among individual operators to justify the development of operator specific models, necessary for robust estimation of operator priorities to indicate “how” task-oriented goals should be pursued.
Citations: 0
Decision Support for Flexible Manufacturing Systems: The Evaluation of an Ecological Interface and Principles of Ecological Interface Design
IF 2 · Q1 Engineering · Pub Date: 2022-10-07 · DOI: 10.1177/15553434221118978
K. Bennett, Dylan G. Cravens, Natalie C. Jackson, Christopher Edman
The cognitive systems engineering (CSE)/ecological interface design (EID) approach was applied in developing decision support for the flexible manufacturing system (FMS) work domain. Four interfaces were designed via the factorial application/non-application of direct perception (DP) and direct manipulation (DM). The capability of these interfaces to support performance in a simulated FMS was evaluated using a variety of traditional and novel dependent variables. The ecological interface (with DP, DM and an intact perception-action loop) provided clearly superior decision support (32 favorable significant results) relative to the other three interfaces (a combined total of 28 favorable significant results). The novel dependent variables were very sensitive. The results are interpreted from three different perspectives: traditional EID, the quality of constraint matching between triadic system components and closed-loop, dynamical control systems. The rationale for an expanded theoretical framework which complements, but does not replace, the original principles of CSE/EID is discussed. The potential for both specific interface features and novel dependent variables to generalize to real-world FMS applications is addressed. The expanded theoretical framework is universally relevant for the development of decision making and problem solving support in all computer-mediated work domains.
引用次数: 1
Mediating Agent Reliability with Human Trust, Situation Awareness, and Performance in Autonomously-Collaborative Human-Agent Teams
IF 2 Q1 Engineering Pub Date: 2022-09-28 DOI: 10.1177/15553434221129166
Sebastian S. Rodriguez, Erin G. Zaroukian, Jeff Hoye, Derrik E. Asher
When teaming with humans, the reliability of intelligent agents may sporadically change due to failure or environmental constraints. Alternatively, an agent may be more reliable than a human because their performance is less likely to degrade (e.g., due to fatigue). Research often investigates human-agent interactions under little to no time constraints, such as discrete decision-making tasks where the automation is relegated to the role of an assistant. This paper conducts a quantitative investigation towards varying reliability in human-agent teams in a time-pressured continuous pursuit task, and it interconnects individual differences, perceptual factors, and task performance through structural equation modeling. Results indicate that reducing reliability may generate a more effective agent imperceptibly different from a fully reliable agent, while contributing to overall team performance. The mediation analysis shows replication of factors studied in the trust and situation awareness literature while providing new insights: agents with an active stake in the task (i.e., success is dependent on team performance) offset loss of situation awareness, differing from the usual notion of overtrust. We conclude with generalizing implications from an abstract pursuit task, and we highlight challenges when conducting research in time-pressured continuous domains.
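The mediation structure this abstract examines (agent reliability → human trust → team performance) can be illustrated with a minimal product-of-coefficients sketch on synthetic data. This is an illustration of mediation analysis in general, not the study's structural equation model; the variable names, effect sizes, and noise levels below are assumptions.

```python
# Hedged sketch (synthetic data, not the study's model): a simple
# product-of-coefficients mediation check of the kind structural
# equation models formalize -- does trust mediate the effect of
# agent reliability on team performance?
import numpy as np

rng = np.random.default_rng(1)
n = 500
reliability = rng.random(n)                              # X: agent reliability
trust = 0.8 * reliability + rng.normal(0, 0.1, n)        # M: trust in the agent
performance = (0.6 * trust + 0.1 * reliability
               + rng.normal(0, 0.1, n))                  # Y: team performance

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[0]

a = slope(reliability, trust)                 # X -> M path
A = np.column_stack([trust, reliability, np.ones(n)])
b = np.linalg.lstsq(A, performance, rcond=None)[0][0]  # M -> Y path, controlling for X
indirect = a * b                              # mediated (indirect) effect
print(round(indirect, 2))
```

With the generating coefficients above, the recovered indirect effect lands near 0.8 × 0.6 ≈ 0.48; in a real study the path estimates would come with standard errors and fit statistics.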
{"title":"Mediating Agent Reliability with Human Trust, Situation Awareness, and Performance in Autonomously-Collaborative Human-Agent Teams","authors":"Sebastian S. Rodriguez, Erin G. Zaroukian, Jeff Hoye, Derrik E. Asher","doi":"10.1177/15553434221129166","DOIUrl":"https://doi.org/10.1177/15553434221129166","url":null,"abstract":"When teaming with humans, the reliability of intelligent agents may sporadically change due to failure or environmental constraints. Alternatively, an agent may be more reliable than a human because their performance is less likely to degrade (e.g., due to fatigue). Research often investigates human-agent interactions under little to no time constraints, such as discrete decision-making tasks where the automation is relegated to the role of an assistant. This paper conducts a quantitative investigation towards varying reliability in human-agent teams in a time-pressured continuous pursuit task, and it interconnects individual differences, perceptual factors, and task performance through structural equation modeling. Results indicate that reducing reliability may generate a more effective agent imperceptibly different from a fully reliable agent, while contributing to overall team performance. The mediation analysis shows replication of factors studied in the trust and situation awareness literature while providing new insights: agents with an active stake in the task (i.e., success is dependent on team performance) offset loss of situation awareness, differing from the usual notion of overtrust. 
We conclude with generalizing implications from an abstract pursuit task, and we highlight challenges when conducting research in time-pressured continuous domains.","PeriodicalId":46342,"journal":{"name":"Journal of Cognitive Engineering and Decision Making","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2022-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46216008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Understanding Human Decision Processes: Inferring Decision Strategies From Behavioral Data
IF 2 Q1 Engineering Pub Date: 2022-09-26 DOI: 10.1177/15553434221122899
S. E. Walsh, K. Feigh
This work investigates a method to infer and classify decision strategies from human behavior, with the goal of improving human-agent team performance by providing AI-based decision support systems with knowledge about their human teammate. First, an experiment was designed to mimic a realistic emergency preparedness scenario in which the test participants were tasked with allocating resources into 1 of 100 possible locations based on a variety of dynamic visual heat maps. Simple participant behavioral data, such as the frequency and duration of information access, were recorded in real time for each participant. The data were examined using a partial least squares regression to identify the participants’ likely decision strategy, that is, which heat maps they relied upon the most. The behavioral data were then used to train a random forest classifier, which was shown to be highly accurate in classifying the decision strategy of new participants. This approach presents an opportunity to give AI systems the ability to accurately model the human decision-making process in real time, enabling the creation of proactive decision support systems and improving overall human-agent teaming.
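The classification step the abstract describes, training a random forest on simple behavioral features to infer which information source a participant relied on, can be sketched as follows. The features, labels, and data sizes here are illustrative assumptions standing in for the study's real behavioral data.

```python
# Hedged sketch (not the authors' code): classifying a decision strategy
# from simple behavioral features such as information-access frequency
# and dwell time, using a random forest as the abstract describes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
# Hypothetical features: access counts and dwell times for three heat maps
X = rng.random((n, 6))
# Hypothetical label: index of the heat map the participant relied on most
y = X[:, :3].argmax(axis=1)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(round(clf.score(X_test, y_test), 2))
```

In practice, the feature engineering (what to count, over which time windows) matters as much as the classifier choice, which is why the study pairs the classifier with a partial least squares analysis of the behavioral data.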
{"title":"Understanding Human Decision Processes: Inferring Decision Strategies From Behavioral Data","authors":"S. E. Walsh, K. Feigh","doi":"10.1177/15553434221122899","DOIUrl":"https://doi.org/10.1177/15553434221122899","url":null,"abstract":"This work investigates a method to infer and classify decision strategies from human behavior, with the goal of improving human-agent team performance by providing AI-based decision support systems with knowledge about their human teammate. First, an experiment was designed to mimic a realistic emergency preparedness scenario in which the test participants were tasked with allocating resources into 1 of 100 possible locations based on a variety of dynamic visual heat maps. Simple participant behavioral data, such as the frequency and duration of information access, were recorded in real time for each participant. The data were examined using a partial least squares regression to identify the participants’ likely decision strategy, that is, which heat maps they relied upon the most. The behavioral data were then used to train a random forest classifier, which was shown to be highly accurate in classifying the decision strategy of new participants. 
This approach presents an opportunity to give AI systems the ability to accurately model the human decision-making process in real time, enabling the creation of proactive decision support systems and improving overall human-agent teaming.","PeriodicalId":46342,"journal":{"name":"Journal of Cognitive Engineering and Decision Making","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2022-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44819330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Drivers’ Understanding of Artificial Intelligence in Automated Driving Systems: A Study of a Malicious Stop Sign
IF 2 Q1 Engineering Pub Date: 2022-09-08 DOI: 10.1177/15553434221117001
Katherine R. Garcia, S. Mishler, Y. Xiao, Congjiao Wang, B. Hu, J. Still, Jing Chen
Automated Driving Systems (ADS), like many other systems people use today, depend on successful Artificial Intelligence (AI) for safe roadway operations. In ADS, an essential function completed by AI is the computer vision techniques for detecting roadway signs by vehicles. The AI, though, is not always reliable and sometimes requires the human’s intelligence to complete a task. For the human to collaborate with the AI, it is critical to understand the human’s perception of AI. In the present study, we investigated how human drivers perceive the AI’s capabilities in a driving context where a stop sign is compromised and how knowledge, experience, and trust related to AI play a role. We found that participants with more knowledge of AI tended to trust AI more, and those who reported more experience with AI had a greater understanding of AI. Participants correctly deduced that a maliciously manipulated stop sign would be more difficult for AI to identify. Nevertheless, participants still overestimated the AI’s ability to recognize the malicious stop sign. Our findings suggest that the public do not yet have a sufficiently accurate understanding of specific AI systems, which leads them to over-trust the AI in certain conditions.
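Why a maliciously manipulated sign is harder for AI to identify can be shown with a toy adversarial example: nudging an input against the sign of a model's weights shrinks its score while the input itself changes very little. This is a generic FGSM-style illustration on a hand-made linear model, not the vision pipeline of any real Automated Driving System.

```python
# Hedged illustration (toy linear model, not an ADS vision stack): a small,
# targeted perturbation can flip a classifier's decision even though the
# input looks almost unchanged to a human.
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # weights of a toy "stop sign" detector
x = np.array([0.9, 0.2, 0.4])    # clean input: positive score => "stop sign"
score = w @ x

eps = 0.3                        # perturbation budget per feature
x_adv = x - eps * np.sign(w)     # FGSM-style step against the score gradient
adv_score = w @ x_adv

print(score > 0, adv_score < 0)
```

The gap between the small input change and the large score change is exactly what makes such manipulations hard for people to anticipate, and why drivers in the study overestimated the AI's robustness.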
{"title":"Drivers’ Understanding of Artificial Intelligence in Automated Driving Systems: A Study of a Malicious Stop Sign","authors":"Katherine R. Garcia, S. Mishler, Y. Xiao, Congjiao Wang, B. Hu, J. Still, Jing Chen","doi":"10.1177/15553434221117001","DOIUrl":"https://doi.org/10.1177/15553434221117001","url":null,"abstract":"Automated Driving Systems (ADS), like many other systems people use today, depend on successful Artificial Intelligence (AI) for safe roadway operations. In ADS, an essential function completed by AI is the computer vision techniques for detecting roadway signs by vehicles. The AI, though, is not always reliable and sometimes requires the human’s intelligence to complete a task. For the human to collaborate with the AI, it is critical to understand the human’s perception of AI. In the present study, we investigated how human drivers perceive the AI’s capabilities in a driving context where a stop sign is compromised and how knowledge, experience, and trust related to AI play a role. We found that participants with more knowledge of AI tended to trust AI more, and those who reported more experience with AI had a greater understanding of AI. Participants correctly deduced that a maliciously manipulated stop sign would be more difficult for AI to identify. Nevertheless, participants still overestimated the AI’s ability to recognize the malicious stop sign. 
Our findings suggest that the public do not yet have a sufficiently accurate understanding of specific AI systems, which leads them to over-trust the AI in certain conditions.","PeriodicalId":46342,"journal":{"name":"Journal of Cognitive Engineering and Decision Making","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2022-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48135380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3