Pub Date: 2019-12-01 · DOI: 10.1177/1555343419869465
N. Hertz, Tyler H. Shaw, E. D. de Visser, E. Wiese
This study examines to what extent mixed groups of computers and humans are able to produce conformity effects in human interaction partners. Previous studies reveal that nonhuman groups can induce conformity under certain circumstances, but it is unknown to what extent mixed groups of human and nonhuman agents can produce similar effects, or how varying the number of human agents per group affects conformity. Participants were assigned to one of five groups varying in their proportion of human to nonhuman agents and completed a social and an analytical task with the assigned group. These task types were chosen to represent tasks in which humans (i.e., the social task) or computers (i.e., the analytical task) may be perceived as having greater expertise, and to roughly approximate real-world tasks humans may complete. A mixed analysis of variance (ANOVA) revealed higher conformity with the group opinion (i.e., the percentage of critical trials on which participants answered in line with their group) for the analytical task than for the social task. In addition, the ratio of human to nonhuman agents per group affected conformity on the social task: conformity with the group opinion rose as the number of humans in the group increased. No such effect was observed for the analytical task. The findings suggest that mixed groups produce different levels of conformity depending on group composition and task type. Designers of systems should be aware that group composition and task type may influence compliance and should design systems accordingly.
"Mixing It Up: How Mixed Groups of Humans and Machines Modulate Conformity." Journal of Cognitive Engineering and Decision Making, 13(1), 242–257.
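The dependent measure described in the abstract, conformity as the percentage of critical trials on which a participant answered in line with the group, can be sketched as follows. This is an illustrative reconstruction, not the authors' code, and all trial data below are hypothetical.

```python
# Illustrative sketch (not from the paper): computing the conformity measure
# described in the abstract -- the percentage of critical trials on which a
# participant's answer matched the group's answer. All data are hypothetical.

def conformity_rate(participant_answers, group_answers):
    """Percentage of critical trials where the participant agreed with the group."""
    assert len(participant_answers) == len(group_answers)
    matches = sum(p == g for p, g in zip(participant_answers, group_answers))
    return 100.0 * matches / len(group_answers)

# Hypothetical critical-trial responses (1/0 codes) for one participant
# in each task type, against the group's answers on the same trials.
analytical = conformity_rate([1, 0, 1, 1, 0, 1, 1, 1], [1, 1, 1, 1, 0, 1, 1, 1])
social     = conformity_rate([0, 0, 1, 0, 1, 0, 1, 0], [1, 1, 1, 1, 1, 1, 1, 1])

print(analytical)  # 87.5
print(social)      # 37.5
```

Rates computed this way per participant, task, and group composition would then feed the mixed ANOVA the abstract reports.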
Pub Date: 2019-11-04 · DOI: 10.1177/1555343419878038
E. Roth, Christen E. Sushereba, L. Militello, Julie Diiulio, Katie Ernst
Function allocation refers to strategies for distributing system functions and tasks across people and technology. We review approaches to function allocation in the context of human–machine teaming with technology that exhibits high levels of autonomy (e.g., unmanned aerial systems). Although most function allocation projects documented in the literature have employed a single method, we advocate for an integrated approach that leverages four key activities: (1) analyzing operational demands and work requirements; (2) exploring alternative distributions of work across the person and machine agents that make up a human–machine team (HMT); (3) examining interdependencies between human and autonomous technologies required for effective HMT performance under routine and off-nominal (unexpected) conditions; and (4) exploring the trade-space of alternative HMT options. Our literature review identified methods to support each of these activities. In combination, they enable system designers to uncover, explore, and weigh a range of critical design considerations beyond those emphasized by the MABA–MABA ("Men are better at, Machines are better at") and Levels of Automation function allocation traditions. Example applications are used to illustrate the value of these methods to the design of HMTs that include autonomous machine agents.
"Function Allocation Considerations in the Era of Human Autonomy Teaming." Journal of Cognitive Engineering and Decision Making, 13(1), 199–220.
Pub Date: 2019-09-25 · DOI: 10.1177/1555343419874569
Ronald H. Stevens, Trysha Galloway
We describe efforts to make humans more transparent to machines by focusing on uncertainty, a concept with roots in neuronal populations that scales through social interactions. To be effective team partners, machines will need to learn why uncertainty happens, how it happens, how long it will last, and possible mitigations the machine can supply. Electroencephalography-derived measures of team neurodynamic organization were used to identify times of uncertainty in military, health care, and high school problem-solving teams. A set of neurodynamic sequences was assembled that differed in the magnitudes and durations of uncertainty, with the goal of training machines to detect the onset of prolonged periods of high-level uncertainty, that is, when a team might require support. Variations in uncertainty onset were identified by classifying the first 70 s of the exemplars using self-organizing maps (SOM), a machine architecture that develops a topology during training that separates closely related from disparate data. Clusters developed during training that distinguished patterns of no uncertainty, low-level and quickly resolved uncertainty, and prolonged high-level uncertainty, creating opportunities for neurodynamic-based systems that can interpret the ebbs and flows in team uncertainty and provide recommendations to the trainer or team in near real time when needed.
"Teaching Machines to Recognize Neurodynamic Correlates of Team and Team Member Uncertainty." Journal of Cognitive Engineering and Decision Making, 13(1), 310–327.
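The core SOM idea the abstract relies on, nodes competing for inputs so that similar inputs cluster at nearby map locations, can be shown with a toy example. This is a minimal sketch, not the authors' model: the feature vectors standing in for neurodynamic sequences are hypothetical, and the map is a tiny 1-D version.

```python
# Minimal 1-D self-organizing map sketch (illustrative, not the paper's model).
# Nodes compete for each input; the winner and its neighbors move toward the
# sample, so similar inputs end up mapped to nearby nodes.
import numpy as np

rng = np.random.default_rng(0)

class TinySOM:
    def __init__(self, n_nodes, dim):
        self.w = rng.normal(size=(n_nodes, dim))  # node weight vectors

    def winner(self, x):
        # Index of the node closest to input x (best matching unit).
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def train(self, data, epochs=50, lr=0.5, radius=0.5):
        for _ in range(epochs):
            for x in data:
                b = self.winner(x)
                for i in range(len(self.w)):
                    # Gaussian neighborhood: nearby nodes move more.
                    h = np.exp(-((i - b) ** 2) / (2 * radius ** 2))
                    self.w[i] += lr * h * (x - self.w[i])

# Hypothetical feature vectors standing in for "low uncertainty" and
# "prolonged high uncertainty" neurodynamic sequences.
low = rng.normal(loc=0.0, scale=0.1, size=(20, 3))
high = rng.normal(loc=3.0, scale=0.1, size=(20, 3))

som = TinySOM(n_nodes=5, dim=3)
som.train(np.vstack([low, high]))

# After training, the two uncertainty regimes win at different map nodes.
print(som.winner(low[0]) != som.winner(high[0]))  # True
```

In the study's setting, the separated map regions are what would let a monitoring system flag the onset of prolonged high-level uncertainty.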
Pub Date: 2019-08-30 · DOI: 10.1177/1555343419869484
Martijn Ijtsma, L. Ma, A. Pritchett, K. Feigh
This paper presents a three-phase computational methodology for making informed design decisions when determining the allocation of work and the interaction modes for human-robot teams. The methodology highlights the necessity to consider constraints and dependencies in the work and the work environment as a basis for team design, particularly those dependencies that arise within the dynamics of the team’s collective activities. These constraints and dependencies form natural clusters in the team’s work, which drive the team’s performance and behavior. The proposed methodology employs network visualization and computational simulation of work models to identify dependencies resulting from the interplay of taskwork distributed between teammates, teamwork, and the work environment. Results from these analyses provide insight into not only team efficiency and performance, but also quantified measures of required teamwork, communication, and physical interaction. The paper describes each phase of the methodology in detail and demonstrates each phase with a case study examining the allocation of work in a human-robot team for space operations.
"Computational Methodology for the Allocation of Work and Interaction in Human-Robot Teams." Journal of Cognitive Engineering and Decision Making, 13(1), 221–241.
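The notion of "natural clusters" formed by dependencies in the team's work can be illustrated with a small graph sketch. This is a hypothetical example, not the paper's method: tasks and dependencies are invented, and clusters are taken simply as connected components of the dependency graph.

```python
# Illustrative sketch (hypothetical tasks, not from the paper): treating
# dependencies between tasks as edges of a graph, "natural clusters" in the
# team's work fall out as connected components.
from collections import defaultdict

# Hypothetical task dependencies for a small human-robot team scenario.
dependencies = [
    ("inspect_panel", "report_status"),
    ("report_status", "update_log"),
    ("fetch_tool", "tighten_bolt"),
]
tasks = {"inspect_panel", "report_status", "update_log",
         "fetch_tool", "tighten_bolt", "recharge"}

def clusters(tasks, edges):
    """Connected components of the undirected task-dependency graph."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), []
    for t in sorted(tasks):
        if t in seen:
            continue
        stack, comp = [t], set()
        while stack:              # depth-first traversal of one component
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components

for c in clusters(tasks, dependencies):
    print(sorted(c))
```

Tasks within one component must be coordinated closely (and are costly to split across teammates), while independent components, like the isolated `recharge` task here, can be allocated freely.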
Pub Date: 2019-08-12 · DOI: 10.1177/1555343419868917
M. Cummings, Lixiao Huang, Haibei Zhu, D. Finkelstein, Ran Wei
A common assumption across many industries is that inserting advanced autonomy can often replace humans for low-level tasks, with cost reduction benefits. However, humans are often only partially replaced and moved into a supervisory capacity with reduced training. It is not clear how this shift from human to automation control and subsequent training reduction influences human performance, errors, and a tendency toward automation bias. To this end, a study was conducted to determine whether adding autonomy and skipping skill-based training could influence performance in a supervisory control task. In the human-in-the-loop experiment, operators performed unmanned aerial vehicle (UAV) search tasks with varying degrees of autonomy and training. At the lowest level of autonomy, operators searched images and, at the highest level, an automated target recognition algorithm presented its best estimate of a possible target, occasionally incorrectly. Results were mixed, with search time not affected by skill-based training. However, novices with skill-based training and automated target search misclassified more targets, suggesting a propensity toward automation bias. More experienced operators had significantly fewer misclassifications when the autonomy erred. A descriptive machine learning model in the form of a hidden Markov model also provided new insights for improved training protocols and interventional technologies.
"The Impact of Increasing Autonomy on Training Requirements in a UAV Supervisory Control Task." Journal of Cognitive Engineering and Decision Making, 13(1), 295–309.
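A hidden Markov model of the kind the abstract mentions treats the operator's unobservable state as hidden and the observed classifications as emissions. The sketch below is purely illustrative, not the paper's model: the two states ("attentive" vs. "bias-prone"), the probabilities, and the observation coding are all hypothetical, and only the standard forward algorithm is shown.

```python
# Illustrative HMM sketch (not the paper's model): a hidden operator state
# emits observable classification outcomes; the forward algorithm gives the
# likelihood of an observed sequence. All probabilities are hypothetical.

states = ["attentive", "biased"]

start = {"attentive": 0.8, "biased": 0.2}
trans = {
    "attentive": {"attentive": 0.9, "biased": 0.1},
    "biased":    {"attentive": 0.3, "biased": 0.7},
}
emit = {
    "attentive": {"correct": 0.95, "misclassified": 0.05},
    "biased":    {"correct": 0.40, "misclassified": 0.60},
}

def forward(observations):
    """Total probability of the observation sequence under the HMM."""
    # Initialize with the start distribution times the first emission.
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    # Propagate: sum over predecessor states, then emit the next observation.
    for o in observations[1:]:
        alpha = {s: emit[s][o] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())

p = forward(["correct", "correct", "misclassified"])
print(round(p, 4))  # 0.0936
```

Fitted to real operator logs, such a model is descriptive: comparing sequence likelihoods (or decoded state paths) across training conditions is what can suggest when an intervention or extra training is warranted.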
Pub Date: 2019-08-09 · DOI: 10.1177/1555343419867563
James C. Walliser, E. D. de Visser, E. Wiese, Tyler H. Shaw
Research suggests that humans and autonomous agents can be more effective when working together as a combined unit rather than as individual entities. However, most research has focused on autonomous agent design characteristics while ignoring the importance of social interactions and team dynamics. Two experiments examined how the perception of teamwork among human–human and human–autonomous agent teams and the application of team building interventions could enhance teamwork outcomes. Participants collaborated with either a human or an autonomous agent. The first experiment revealed that manipulating team structure, by framing a human or autonomous partner as a teammate rather than a tool, can improve affect and behavior but does not benefit performance. In the second experiment, participants completed goal setting and role clarification (team building) with their teammate prior to task performance. Team building interventions led to significant improvements for all teamwork outcomes, including performance. Across both studies, participants communicated more substantially with human partners than they did with autonomous partners. Taken together, these findings suggest that social interactions between humans and autonomous teammates should be an important design consideration and that particular attention should be given to team building interventions to improve affect, behavior, and performance.
"Team Structure and Team Building Improve Human–Machine Teaming With Autonomous Agents." Journal of Cognitive Engineering and Decision Making, 13(1), 258–278.
Pub Date: 2019-08-06 · DOI: 10.1177/1555343419869083
N. Tenhundfeld, E. D. de Visser, Kerstin S Haring, Anthony J. Ries, V. Finomore, Chad C. Tossell
Because familiarity with a system is one of the largest influences on trust in automation, we sought to examine the effects of familiarity on driver interventions while using the autoparking feature of a Tesla Model X. Participants were either told or shown how the autoparking feature worked. Results showed a significantly higher initial driver intervention rate when participants were only told how to employ the autoparking feature than when they were shown. However, the intervention rate quickly leveled off, and differences between conditions disappeared. The number of interventions and the distances from the parking anchoring point (a trashcan) were used to create a new measure of distrust in autonomy. Eye-tracking measures revealed that participants disengaged from monitoring the center display as the experiment progressed, which could be a further indication of lowering distrust in the system. Combined, these results have important implications for the development and design of explainable artificial intelligence and autonomous systems. Finally, we detail the substantial hurdles encountered while trying to evaluate "autonomy in the wild." Our research highlights the need to re-evaluate trust concepts in high-risk, high-consequence environments.
"Calibrating Trust in Automation Through Familiarity With the Autoparking Feature of a Tesla Model X." Journal of Cognitive Engineering and Decision Making, 13(1), 279–294.
Pub Date: 2019-06-25 · DOI: 10.1177/1555343419855850
Chloe Barrett-Pink, L. Alison, S. Maskell, N. Shortland
This paper explores the current state of automated systems in the Royal Navy (RN) and where personnel believe such systems would most benefit their operations in the future. In addition, personnel's views on the current consultation process for new systems are presented. Currently serving RN personnel (n = 46) completed a questionnaire distributed at the Maritime Warfare School, and thematic analysis was conducted on the 5,125 words they generated. Results show that RN personnel understand the requirement to utilize automated systems to maintain capability in the increasingly complex environments they face, a requirement that will grow as future warfare continues to change and increasingly sophisticated threats emerge. However, personnel highlighted that current consultation and procurement procedures often result in new automated systems that are not fit for purpose at time of release. This has negative consequences for operator tasks, for example by increasing workload and reducing appropriate system use, as well as increasing the financial costs associated with the new systems. It is recommended that increased communication and collaboration between currently serving personnel and system designers could prevent the release of systems that are not fit for purpose.
"On the Bridges: Insight Into the Current and Future Use of Automated Systems as Seen by Royal Navy Personnel." Journal of Cognitive Engineering and Decision Making, 13(1), 127–145.
Pub Date: 2019-06-01 · Epub Date: 2019-02-04 · DOI: 10.1177/1555343418825429
Austin F Mount-Campbell, Kevin D Evans, David D Woods, Esther M Chipps, Susan D Moffatt-Bruce, Emily S Patterson
We identify the value and usage of a cognitive artifact used by hospital nurses. By analyzing the value and usage of workaround artifacts, unmet needs in the use of intended systems can be uncovered. A descriptive study employed direct observations of registered nurses at two hospitals using a paper workaround ("brains") alongside the Electronic Health Record. Field notes and photographs were taken; the format, size, layout, permanence, and content of the artifact were analyzed. Thirty-nine observations, spanning 156 hr, were conducted with 20 nurses across four clinical units, and a total of 322 photographs of paper-based artifacts for 161 patients were collected. All participants used and updated "brains" during report and throughout the shift, and most were self-generated. These artifacts contained patient identifiers in a header (room number, last name, age, code status, and physician); clinical data were recorded in the body, including historical chronic issues, detailed assessment information, and planned activities for the shift. Updates made continuously during the shift highlighted important information, updated values, and tracked the completion of activities. The primary functional uses of "brains" are to support nurses' needs for clinical immediacy through personally generated snapshot overviews for clinical summaries and updates to the status of planned activities.
"Value and Usage of a Workaround Artifact: A Cognitive Work Analysis of 'Brains' Use by Hospital Nurses." Journal of Cognitive Engineering and Decision Making, 13(2), 67–80.