A Multimodal Approach to Investigate the Role of Cognitive Workload and User Interfaces in Human-robot Collaboration
Apostolos Kalatzis, Saidur Rahman, Vishnunarayan Girishan Prabhu, Laura Stanley, Mike Wittie
DOI: 10.1145/3577190.3614112 (https://doi.org/10.1145/3577190.3614112)
Companion Publication of the 2020 International Conference on Multimodal Interaction, published 2023-10-09
Citations: 0
Abstract
One of the primary aims of Industry 5.0 is to refine the interaction between humans, machines, and robots by developing human-centered design solutions that enhance Human-Robot Collaboration, performance, trust, and safety. This research investigated how deploying a user interface with a 2-D or 3-D display affects participants’ cognitive effort, task performance, trust, and situational awareness while performing a collaborative task with a robot. The study used a within-subject design in which fifteen participants experienced three conditions: no interface, a display-based user interface, and a mixed reality user interface that provided vision assistance. Participants performed a pick-and-place task with a robot in each condition under two levels of cognitive workload (i.e., high and low). Cognitive workload was measured using subjective (NASA TLX) and objective (heart rate variability) measures. Additionally, task performance, situational awareness, and trust when using these interfaces were measured to understand the impact of different user interfaces during a Human-Robot Collaboration task. Findings indicated that cognitive workload and user interface impacted task performance, with a significant decrease in efficiency and accuracy observed when using the mixed reality interface. Additionally, irrespective of condition, all participants perceived the task as more cognitively demanding during the high cognitive workload session; however, no significant differences across the interfaces were observed. Finally, cognitive workload impacted situational awareness and trust, with lower levels reported in the high cognitive workload session and the lowest levels observed under the mixed reality user interface condition.
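The abstract names the workload measures (NASA TLX and heart rate variability) but does not specify how they were computed. As an illustrative sketch only, not the authors' pipeline, the snippet below shows two standard quantities often used for these measures: RMSSD, a common time-domain heart rate variability metric, and the raw (unweighted) NASA TLX score. The variable names and example values are hypothetical.

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences (RMSSD) of RR intervals,
    a common time-domain HRV metric; lower RMSSD is often associated with
    higher cognitive workload."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)
    return np.sqrt(np.mean(diffs ** 2))

def nasa_tlx_raw(ratings):
    """Raw (unweighted) NASA TLX score: the mean of the six subscale ratings
    (mental, physical, and temporal demand, performance, effort, frustration),
    each on a 0-100 scale."""
    if len(ratings) != 6:
        raise ValueError("NASA TLX requires six subscale ratings")
    return sum(ratings) / 6.0

# Hypothetical example values, not data from the study.
rr = [812, 798, 825, 840, 803, 790, 818]   # RR intervals in milliseconds
tlx = [70, 25, 60, 40, 65, 55]             # six subscale ratings
print(f"RMSSD: {rmssd(rr):.1f} ms, raw TLX: {nasa_tlx_raw(tlx):.1f}")
```

Whether the study used raw or weighted TLX scores, or a different HRV feature (e.g., SDNN or frequency-domain measures), is not stated in the abstract.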