Pub Date: 2022-12-01 | Epub Date: 2022-05-11 | DOI: 10.1177/15553434221097357
Megan E Salwei, Pascale Carayon
In the coming years, artificial intelligence (AI) will pervade almost every aspect of the health care delivery system. AI has the potential to improve patient safety (e.g., diagnostic accuracy) as well as reduce the burden on clinicians (e.g., documentation-related workload); however, these benefits are yet to be realized. AI is only one element of a larger sociotechnical system that needs to be considered for effective AI application. In this paper, we describe the current challenges of integrating AI into clinical care and propose a sociotechnical systems (STS) approach for AI design and implementation. We demonstrate the importance of an STS approach through a case study on the design and implementation of a clinical decision support (CDS) tool. In order for AI to reach its potential, the entire work system as well as clinical workflow must be systematically considered throughout the design of AI technology.
{"title":"A Sociotechnical Systems Framework for the Application of Artificial Intelligence in Health Care Delivery.","authors":"Megan E Salwei, Pascale Carayon","doi":"10.1177/15553434221097357","DOIUrl":"10.1177/15553434221097357","url":null,"abstract":"<p><p>In the coming years, artificial intelligence (AI) will pervade almost every aspect of the health care delivery system. AI has the potential to improve patient safety (e.g. diagnostic accuracy) as well as reduce the burden on clinicians (e.g. documentation-related workload); however, these benefits are yet to be realized. AI is only one element of a larger sociotechnical system that needs to be considered for effective AI application. In this paper, we describe the current challenges of integrating AI into clinical care and propose a sociotechnical systems (STS) approach for AI design and implementation. We demonstrate the importance of an STS approach through a case study on the design and implementation of a clinical decision support (CDS). In order for AI to reach its potential, the entire work system as well as clinical workflow must be systematically considered throughout the design of AI technology.</p>","PeriodicalId":46342,"journal":{"name":"Journal of Cognitive Engineering and Decision Making","volume":"16 4","pages":"194-206"},"PeriodicalIF":2.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9873227/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10583415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-11-16 | DOI: 10.1177/15553434221136358
Akuadasuo Ezenyilimba, Margaret E. Wong, Alexander J. Hehr, Mustafa Demir, Alexandra T. Wolff, Erin K. Chiou, Nancy J. Cooke
Urban Search and Rescue (USAR) missions continue to benefit from the incorporation of human–robot teams (HRTs). USAR environments can be ambiguous, hazardous, and unstable. The integration of robot teammates into USAR missions has enabled human teammates to access areas of uncertainty, including hazardous locations. For HRTs to be effective, it is pertinent to understand the factors that influence team effectiveness, such as having shared goals, mutual understanding, and efficient communication. The purpose of our research is to determine how to (1) better establish human trust, (2) identify useful levels of robot transparency and robot explanations, (3) ensure situation awareness, and (4) encourage a bipartisan role amongst teammates. By implementing robot transparency and robot explanations, we found that effective HRTs depend on robot explanations that are context-driven and readily available to the human teammate.
{"title":"Impact of Transparency and Explanations on Trust and Situation Awareness in Human–Robot Teams","authors":"Akuadasuo Ezenyilimba, Margaret E. Wong, Alexander J. Hehr, Mustafa Demir, Alexandra T. Wolff, Erin K. Chiou, Nancy J. Cooke","doi":"10.1177/15553434221136358","DOIUrl":"https://doi.org/10.1177/15553434221136358","url":null,"abstract":"Urban Search and Rescue (USAR) missions continue to benefit from the incorporation of human–robot teams (HRTs). USAR environments can be ambiguous, hazardous, and unstable. The integration of robot teammates into USAR missions has enabled human teammates to access areas of uncertainty, including hazardous locations. For HRTs to be effective, it is pertinent to understand the factors that influence team effectiveness, such as having shared goals, mutual understanding, and efficient communication. The purpose of our research is to determine how to (1) better establish human trust, (2) identify useful levels of robot transparency and robot explanations, (3) ensure situation awareness, and (4) encourage a bipartisan role amongst teammates. By implementing robot transparency and robot explanations, we found that the driving factors for effective HRTs rely on robot explanations that are context-driven and are readily available to the human teammate.","PeriodicalId":46342,"journal":{"name":"Journal of Cognitive Engineering and Decision Making","volume":"17 1","pages":"75 - 93"},"PeriodicalIF":2.0,"publicationDate":"2022-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47820931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-11-11 | DOI: 10.1177/15553434221132636
Joonbum Lee, Hansol Rheem, John D. Lee, Joseph F. Szczerba, Omer Tsimhoni
Advances in automated driving systems (ADSs) have shifted the primary responsibility of controlling a vehicle from human drivers to automation. Framing driving a highly automated vehicle as teamwork can reveal practical requirements and design considerations to support the dynamic driver–ADS relationship. However, human–automation teaming is a relatively new concept in ADS research and requires further exploration. We conducted two literature reviews to identify concepts related to teaming and to define the driver–ADS relationship, requirements, and design considerations. The first literature review identified coordination, cooperation, and collaboration (3Cs) as core concepts to define driver–ADS teaming. Based on these findings, we propose the panarchy framework of 3Cs to understand drivers’ roles and relationships with automation in driver–ADS teaming. The second literature review identified the main challenges for designing driver–ADS teams. The challenges include supporting mutual communication, enhancing observability and directability, developing a responsive ADS, and identifying and supporting the interdependent relationship between the driver and ADS. This study suggests that the teaming concept can promote a better understanding of the driver–ADS team, where the driver and automation require interplay. Eventually, the driver–ADS teaming frame will lead to adequate expectations and mental models of partially automated vehicles.
{"title":"Teaming with Your Car: Redefining the Driver–Automation Relationship in Highly Automated Vehicles","authors":"Joonbum Lee, Hansol Rheem, John D. Lee, Joseph F. Szczerba, Omer Tsimhoni","doi":"10.1177/15553434221132636","DOIUrl":"https://doi.org/10.1177/15553434221132636","url":null,"abstract":"Advances in automated driving systems (ADSs) have shifted the primary responsibility of controlling a vehicle from human drivers to automation. Framing driving a highly automated vehicle as teamwork can reveal practical requirements and design considerations to support the dynamic driver–ADS relationship. However, human–automation teaming is a relatively new concept in ADS research and requires further exploration. We conducted two literature reviews to identify concepts related to teaming and to define the driver–ADS relationship, requirements, and design considerations. The first literature review identified coordination, cooperation, and collaboration (3Cs) as core concepts to define driver–ADS teaming. Based on these findings, we propose the panarchy framework of 3Cs to understand drivers’ roles and relationships with automation in driver–ADS teaming. The second literature review identified main challenges for designing driver–ADS teams. The challenges include supporting mutual communication, enhancing observability and directability, developing a responsive ADS, and identifying and supporting the interdependent relationship between the driver and ADS. This study suggests that the teaming concept can promote a better understanding of the driver–ADS team where the driver and automation require interplay. Eventually, the driver–ADS teaming frame will lead to adequate expectations and mental models of partially automated vehicles.","PeriodicalId":46342,"journal":{"name":"Journal of Cognitive Engineering and Decision Making","volume":"17 1","pages":"49 - 74"},"PeriodicalIF":2.0,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43148422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-10-16 | DOI: 10.1177/15553434221133288
M. Endsley, Nancy J. Cooke, Nathan J. Mcneese, A. Bisantz, L. Militello, Emilie Roth
Building upon advances in machine learning, software that depends on artificial intelligence (AI) is being introduced across a wide spectrum of systems, including healthcare, autonomous vehicles, advanced manufacturing, aviation, and military systems. Artificial intelligence systems may, however, be unreliable or insufficiently robust due to challenges in developing reliable and robust AI algorithms from datasets that are noisy and incomplete, the lack of causal models needed for projecting future outcomes, the presence of undetected biases, and noisy or faulty sensor inputs. Therefore, it is anticipated that for the foreseeable future, AI systems will need to operate in conjunction with humans in order to perform their tasks, and often as part of a larger team of humans and AI systems. Further, AI systems may be instantiated with different levels of autonomy, at different times, and for different types of tasks or circumstances, creating a wide design space for consideration. The design and implementation of AI systems that work effectively in concert with human beings creates significant challenges, including providing sufficient levels of AI transparency and explainability to support human situation awareness (SA), trust, performance, and decision making, and supporting the need for collaboration and coordination between humans and AI systems.

This special issue covers new research designed to better integrate people with AI in ways that will allow them to function effectively. Several articles explore the role of trust in mediating the interactions of the human-AI team. Dorton and Harper (2022) explore factors leading to trust of AI systems for intelligence analysts, finding that both the performance of the system and its explainability were leading factors, along with its perceived utility for aiding them in doing their jobs. Textor et al. (2022) investigate the role of AI conformance to ethical norms in affecting human trust in the system, showing that unethical recommendations had a nuanced role in the trust relationship, and that typical human responses to such violations were ineffective at repairing trust. Appelganc et al. (2022) further explored the role of system reliability, specifically comparing the reliability that humans require to perceive agents (human, AI, and DSS) as highly reliable. Findings indicate that the required reliability to work together with any of the agents was equally high regardless of agent type, but humans trusted the human more than the AI and DSS. Ezenyilimba et al. (2023) studied the comparative effects of robot transparency and explainability on the SA and trust of human teammates in a search and rescue task. Although transparency of the autonomous robot’s system status improved SA and trust, the provision of detailed explanations of evolving events and robot capabilities improved SA and trust over and above that of transparency alone.
{"title":"Special Issue on Human-AI Teaming and Special Issue on AI in Healthcare","authors":"M. Endsley, Nancy J. Cooke, Nathan J. Mcneese, A. Bisantz, L. Militello, Emilie Roth","doi":"10.1177/15553434221133288","DOIUrl":"https://doi.org/10.1177/15553434221133288","url":null,"abstract":"Building upon advances in machine learning, software that depends on artificial intelligence (AI) is being introduced across a wide spectrum of systems, including healthcare, autonomous vehicles, advanced manufacturing, aviation, and military systems. Artificial intelligence systems may be unreliable or insufficiently robust; however, due to challenges in the development of reliable and robust AI algorithms based on datasets that are noisy and incomplete, the lack of causal models needed for projecting future outcomes, the presence of undetected biases, and noisy or faulty sensor inputs. Therefore, it is anticipated that for the foreseeable future, AI systems will need to operate in conjunction with humans in order to perform their tasks, and often as a part of a larger team of humans and AI systems. Further, AI systemsmay be instantiatedwith different levels of autonomy, at different times, and for different types of tasks or circumstances, creating a wide design space for consideration The design and implementation of AI systems that work effectively in concert with human beings creates significant challenges, including providing sufficient levels of AI transparency and explainability to support human situation awareness (SA), trust and performance, decision making, and supporting the need for collaboration and coordination between humans and AI systems. This special issue covers new research designed to better integrate people with AI in ways that will allow them to function effectively. Several articles explore the role of trust in mediating the interactions of the human-AI team. Dorton and Harper (2022) explore factors leading to trust of AI systems for intelligence analysts, finding that both the performance of the system and its explainability were leading factors, along with its perceived utility for aiding them in doing their jobs. Textor et al. (2022) investigate the role of AI conformance to ethical norms in affecting human trust in the system, showing that unethical recommendations had a nuanced role in the trust relationship, and that typical human responses to such violations were ineffective at repairing trust. Appelganc et al. (2022) further explored the role of system reliability, specifically comparing the reliability that is needed by humans to perceive agents (human, AI, and DSS) as being highly reliable. Findings indicate that the required reliability to work together with any of the agents was equally high regardless of agent type but humans trusted the humanmore than AI and DSS. Ezenyilimba et al. (2023) studied the comparative effects of robot transparency and explainability on the SA and trust of human teammates in a search and rescue task. 
Although transparency of the autonomous robot’s system status improved SA and trust, the provision of detailed explanations of evolving events and robot capabilities improved SA and trust over and above that of transparency alone.","PeriodicalId":46342,"journal":{"name":"Journal of Cognitive Engineering and Decision Making","volume":"16 1","pages":"179 - 181"},"PeriodicalIF":2.0,"publicationDate":"2022-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46927393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-10-16 | DOI: 10.1177/15553434221129914
S. Czaja, Marco Ceruso
The aging of the population is a great achievement but also poses challenges for society, families, and older adults. Because of age-related changes in abilities, many older adults encounter difficulties that threaten independence and well-being. Further, the likelihood of developing a disability or a chronic condition increases with age. Currently, family members provide a significant source of support for older adults. However, changes in family and social structures raise questions regarding how care will be provided to future cohorts of older adults. There is clearly a need for innovative strategies to address the care needs of future generations of aging individuals. Artificial Intelligence (AI) applications hold promise in terms of providing support for older adults. For example, applications are available that can track and monitor vital signs, health indicators, and cognition, or provide support for everyday activities. This paper highlights, with examples, the potential role of AI in providing support for aging adults to enhance independent living and the quality of life for both older adults and families. Challenges associated with the implementation of AI applications are also discussed, and recommendations for needed research are highlighted.
{"title":"The Promise of Artificial Intelligence in Supporting an Aging Population","authors":"S. Czaja, Marco Ceruso","doi":"10.1177/15553434221129914","DOIUrl":"https://doi.org/10.1177/15553434221129914","url":null,"abstract":"The aging of the population is a great achievement but also poses challenges for society, families, and older adults. Because of age-related changes in abilities, many older adults encounter difficulties that threaten independence and well-being. Further, the likelihood of developing a disability or a chronic condition increases with age. Currently, family members provide a significant source of support for older adults. However, changes in family and social structures raises questions regarding how care will be provided to future cohorts of older adults. There is clearly a need for innovative strategies to address care needs of future generations of aging individuals. Artificial Intelligence (AI) applications hold promise in terms of providing support for older adults. For example, applications are available that can track and monitor vital signs, health indicators, and cognition; or provide support for everyday activities. This paper highlights, with examples, the potential role of AI in providing support for aging adults to enhance independent living and the quality of life for both older adults and families. Challenges associated with the implementation of AI applications are also discussed and recommendations for needed research are highlighted.","PeriodicalId":46342,"journal":{"name":"Journal of Cognitive Engineering and Decision Making","volume":"16 1","pages":"182 - 193"},"PeriodicalIF":2.0,"publicationDate":"2022-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43388060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-10-11 | DOI: 10.1177/15553434221131665
Michael F. Schneider, Michael E. Miller, J. McGuirl
Effective teammates coordinate their actions to achieve shared goals. In current human-Artificial Intelligent Agent (AIA) teams, humans explicitly communicate task-oriented goals and how the goals are to be achieved to the AIAs, as the AIAs do not support implicit communication. This research develops a construct for applying quality goals to improve coordination in human-AIA teams. This construct assumes that trained operators will exhibit similar priorities in similar situations and provides a shorthand communication mechanism to convey intentions. A study was designed and performed to assess situated operator priorities to provide insight into “how” operators desire a task to be performed. This assessment was performed episodically by trained and experienced Remotely Piloted Aircraft operators as they controlled an aircraft in a synthetic task environment through three challenging tactical scenarios. The results indicate that operator priorities change dynamically with situation changes. Further, the results are suitably cohesive across most trained operators to apply the data collected from the proposed method as training data to bootstrap development of an intent estimation agent. However, the data differed sufficiently among individual operators to justify the development of operator-specific models, necessary for robust estimation of operator priorities to indicate “how” task-oriented goals should be pursued.
{"title":"Assessing Quality Goal Rankings as a Method for Communicating Operator Intent","authors":"Michael F. Schneider, Michael E. Miller, J. McGuirl","doi":"10.1177/15553434221131665","DOIUrl":"https://doi.org/10.1177/15553434221131665","url":null,"abstract":"Effective teammates coordinate their actions to achieve shared goals. In current human-Artificial Intelligent Agent (AIA) Teams, humans explicitly communicate task-oriented goals and how the goals are to be achieved to the AIAs as the AIAs do not support implicit communication. This research develops a construct for applying quality goals to improve coordination among human-AIA teams. This construct assumes that trained operators will exhibit similar priorities in similar situations and provides a shorthand communication mechanism to convey intentions. A study was designed and performed to assess situated operator priorities to provide insight into “how” operators desire a task to be performed. This assessment was performed episodically by trained and experienced Remotely Piloted Aircraft operators as they controlled an aircraft in a synthetic task environment through three challenging tactical scenarios. The results indicate that operator priorities change dynamically with situation changes. Further, the results are suitably cohesive across most trained operators to apply the data collected from the proposed method as training data to bootstrap development of an intent estimation agent. However, the data differed sufficiently among individual operators to justify the development of operator specific models, necessary for robust estimation of operator priorities to indicate “how” task-oriented goals should be pursued.","PeriodicalId":46342,"journal":{"name":"Journal of Cognitive Engineering and Decision Making","volume":"17 1","pages":"26 - 48"},"PeriodicalIF":2.0,"publicationDate":"2022-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48342787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-10-07 | DOI: 10.1177/15553434221118978
K. Bennett, Dylan G. Cravens, Natalie C. Jackson, Christopher Edman
The cognitive systems engineering (CSE)/ecological interface design (EID) approach was applied in developing decision support for the flexible manufacturing system (FMS) work domain. Four interfaces were designed via the factorial application/non-application of direct perception (DP) and direct manipulation (DM). The capability of these interfaces to support performance in a simulated FMS was evaluated using a variety of traditional and novel dependent variables. The ecological interface (with DP, DM and an intact perception-action loop) provided clearly superior decision support (32 favorable significant results) relative to the other three interfaces (a combined total of 28 favorable significant results). The novel dependent variables were very sensitive. The results are interpreted from three different perspectives: traditional EID, the quality of constraint matching between triadic system components and closed-loop, dynamical control systems. The rationale for an expanded theoretical framework which complements, but does not replace, the original principles of CSE/EID is discussed. The potential for both specific interface features and novel dependent variables to generalize to real-world FMS applications is addressed. The expanded theoretical framework is universally relevant for the development of decision making and problem solving support in all computer-mediated work domains.
{"title":"Decision Support for Flexible Manufacturing Systems: The Evaluation of an Ecological Interface and Principles of Ecological Interface Design","authors":"K. Bennett, Dylan G. Cravens, Natalie C. Jackson, Christopher Edman","doi":"10.1177/15553434221118978","DOIUrl":"https://doi.org/10.1177/15553434221118978","url":null,"abstract":"The cognitive systems engineering (CSE)/ecological interface design (EID) approach was applied in developing decision support for the flexible manufacturing system (FMS) work domain. Four interfaces were designed via the factorial application/non-application of direct perception (DP) and direct manipulation (DM). The capability of these interfaces to support performance in a simulated FMS was evaluated using a variety of traditional and novel dependent variables. The ecological interface (with DP, DM and an intact perception-action loop) provided clearly superior decision support (32 favorable significant results) relative to the other three interfaces (a combined total of 28 favorable significant results). The novel dependent variables were very sensitive. The results are interpreted from three different perspectives: traditional EID, the quality of constraint matching between triadic system components and closed-loop, dynamical control systems. The rationale for an expanded theoretical framework which complements, but does not replace, the original principles of CSE/EID is discussed. The potential for both specific interface features and novel dependent variables to generalize to real-world FMS applications is addressed. The expanded theoretical framework is universally relevant for the development of decision making and problem solving support in all computer-mediated work domains.","PeriodicalId":46342,"journal":{"name":"Journal of Cognitive Engineering and Decision Making","volume":"17 1","pages":"120 - 146"},"PeriodicalIF":2.0,"publicationDate":"2022-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49086144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-09-28 | DOI: 10.1177/15553434221129166
Sebastian S. Rodriguez, Erin G. Zaroukian, Jeff Hoye, Derrik E. Asher
When teaming with humans, the reliability of intelligent agents may sporadically change due to failure or environmental constraints. Alternatively, an agent may be more reliable than a human because its performance is less likely to degrade (e.g., due to fatigue). Research often investigates human-agent interactions under little to no time constraints, such as discrete decision-making tasks where the automation is relegated to the role of an assistant. This paper presents a quantitative investigation of varying reliability in human-agent teams in a time-pressured continuous pursuit task, and it interconnects individual differences, perceptual factors, and task performance through structural equation modeling. Results indicate that reducing reliability may generate a more effective agent imperceptibly different from a fully reliable agent, while contributing to overall team performance. The mediation analysis shows replication of factors studied in the trust and situation awareness literature while providing new insights: agents with an active stake in the task (i.e., success is dependent on team performance) offset loss of situation awareness, differing from the usual notion of overtrust. We conclude with generalizing implications from an abstract pursuit task, and we highlight challenges when conducting research in time-pressured continuous domains.
{"title":"Mediating Agent Reliability with Human Trust, Situation Awareness, and Performance in Autonomously-Collaborative Human-Agent Teams","authors":"Sebastian S. Rodriguez, Erin G. Zaroukian, Jeff Hoye, Derrik E. Asher","doi":"10.1177/15553434221129166","DOIUrl":"https://doi.org/10.1177/15553434221129166","url":null,"abstract":"When teaming with humans, the reliability of intelligent agents may sporadically change due to failure or environmental constraints. Alternatively, an agent may be more reliable than a human because their performance is less likely to degrade (e.g., due to fatigue). Research often investigates human-agent interactions under little to no time constraints, such as discrete decision-making tasks where the automation is relegated to the role of an assistant. This paper conducts a quantitative investigation towards varying reliability in human-agent teams in a time-pressured continuous pursuit task, and it interconnects individual differences, perceptual factors, and task performance through structural equation modeling. Results indicate that reducing reliability may generate a more effective agent imperceptibly different from a fully reliable agent, while contributing to overall team performance. The mediation analysis shows replication of factors studied in the trust and situation awareness literature while providing new insights: agents with an active stake in the task (i.e., success is dependent on team performance) offset loss of situation awareness, differing from the usual notion of overtrust. We conclude with generalizing implications from an abstract pursuit task, and we highlight challenges when conducting research in time-pressured continuous domains.","PeriodicalId":46342,"journal":{"name":"Journal of Cognitive Engineering and Decision Making","volume":"17 1","pages":"3 - 25"},"PeriodicalIF":2.0,"publicationDate":"2022-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46216008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-09-26 | DOI: 10.1177/15553434221122899
S. E. Walsh, K. Feigh
This work investigates a method to infer and classify decision strategies from human behavior, with the goal of improving human-agent team performance by providing AI-based decision support systems with knowledge about their human teammate. First, an experiment was designed to mimic a realistic emergency preparedness scenario in which the test participants were tasked with allocating resources to 1 of 100 possible locations based on a variety of dynamic visual heat maps. Simple participant behavioral data, such as the frequency and duration of information access, were recorded in real time for each participant. The data were examined using a partial least squares regression to identify the participants’ likely decision strategy, that is, which heat maps they relied upon the most. The behavioral data were then used to train a random forest classifier, which was shown to be highly accurate in classifying the decision strategy of new participants. This approach presents an opportunity to give AI systems the ability to accurately model the human decision-making process in real time, enabling the creation of proactive decision support systems and improving overall human-agent teaming.
{"title":"Understanding Human Decision Processes: Inferring Decision Strategies From Behavioral Data","authors":"S. E. Walsh, K. Feigh","doi":"10.1177/15553434221122899","DOIUrl":"https://doi.org/10.1177/15553434221122899","url":null,"abstract":"This work investigates a method to infer and classify decision strategies from human behavior, with the goal of improving human-agent team performance by providing AI-based decision support systems with knowledge about their human teammate. First, an experiment was designed to mimic a realistic emergency preparedness scenario in which the test participants were tasked with allocating resources into 1 of 100 possible locations based on a variety of dynamic visual heat maps. Simple participant behavioral data, such as the frequency and duration of information access, were recorded in real time for each participant. The data were examined using a partial least squares regression to identify the participants’ likely decision strategy, that is, which heat maps they relied upon the most. The behavioral data were then used to train a random forest classifier, which was shown to be highly accurate in classifying the decision strategy of new participants. This approach presents an opportunity to give AI systems the ability to accurately model the human decision-making process in real time, enabling the creation of proactive decision support systems and improving overall human-agent teaming.","PeriodicalId":46342,"journal":{"name":"Journal of Cognitive Engineering and Decision Making","volume":"16 1","pages":"301 - 325"},"PeriodicalIF":2.0,"publicationDate":"2022-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44819330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-09-08 | DOI: 10.1177/15553434221117001
Katherine R. Garcia, S. Mishler, Y. Xiao, Congjiao Wang, B. Hu, J. Still, Jing Chen
Automated Driving Systems (ADS), like many other systems people use today, depend on successful Artificial Intelligence (AI) for safe roadway operations. In ADS, an essential function performed by AI is computer vision for detecting roadway signs. The AI, though, is not always reliable and sometimes requires the human’s intelligence to complete a task. For the human to collaborate with the AI, it is critical to understand the human’s perception of AI. In the present study, we investigated how human drivers perceive the AI’s capabilities in a driving context where a stop sign is compromised and how knowledge, experience, and trust related to AI play a role. We found that participants with more knowledge of AI tended to trust AI more, and those who reported more experience with AI had a greater understanding of AI. Participants correctly deduced that a maliciously manipulated stop sign would be more difficult for AI to identify. Nevertheless, participants still overestimated the AI’s ability to recognize the malicious stop sign. Our findings suggest that the public do not yet have a sufficiently accurate understanding of specific AI systems, which leads them to over-trust the AI in certain conditions.
{"title":"Drivers’ Understanding of Artificial Intelligence in Automated Driving Systems: A Study of a Malicious Stop Sign","authors":"Katherine R. Garcia, S. Mishler, Y. Xiao, Congjiao Wang, B. Hu, J. Still, Jing Chen","doi":"10.1177/15553434221117001","DOIUrl":"https://doi.org/10.1177/15553434221117001","url":null,"abstract":"Automated Driving Systems (ADS), like many other systems people use today, depend on successful Artificial Intelligence (AI) for safe roadway operations. In ADS, an essential function completed by AI is the computer vision techniques for detecting roadway signs by vehicles. The AI, though, is not always reliable and sometimes requires the human’s intelligence to complete a task. For the human to collaborate with the AI, it is critical to understand the human’s perception of AI. In the present study, we investigated how human drivers perceive the AI’s capabilities in a driving context where a stop sign is compromised and how knowledge, experience, and trust related to AI play a role. We found that participants with more knowledge of AI tended to trust AI more, and those who reported more experience with AI had a greater understanding of AI. Participants correctly deduced that a maliciously manipulated stop sign would be more difficult for AI to identify. Nevertheless, participants still overestimated the AI’s ability to recognize the malicious stop sign. Our findings suggest that the public do not yet have a sufficiently accurate understanding of specific AI systems, which leads them to over-trust the AI in certain conditions.","PeriodicalId":46342,"journal":{"name":"Journal of Cognitive Engineering and Decision Making","volume":"16 1","pages":"237 - 251"},"PeriodicalIF":2.0,"publicationDate":"2022-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48135380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}