Hongwei Niu, Jia Hao, Zhiyuan Ming, Xiaonan Yang, Lu Wang
The past two decades have witnessed dramatic advances in computer-aided design (CAD). However, the development of human–computer interfaces (HCIs) for CAD has not kept pace with these advances. The Windows, Icons, Menus, Pointer (WIMP) paradigm is still the dominant interface for CAD applications, which limits the naturalness and intuitiveness of the CAD modeling process. As a novel interface, brain–computer interfaces (BCIs) have great potential for CAD modeling: in principle, a user could create CAD models just by thinking about them, because BCIs provide an end-to-end interaction channel between users and CAD models. However, related studies to date have mainly been limited to existing BCI paradigms and have ignored the relationship between electroencephalogram (EEG) signals and CAD models, which greatly increases the cognitive load on users. In this study, we explored the potential of using a BCI to create CAD models directly, independent of the classical BCI paradigms. For this purpose, EEG signals evoked by six basic CAD models (i.e., point, square, trapezoid, line, triangle, and circle) were collected from 28 participants. After preprocessing and sub-trial principal component analysis (st-PCA) of the recorded data, peak, mean, and time-frequency energy features were extracted from the EEG signals. One-way repeated-measures analysis of variance showed significant differences among the EEG features evoked by the different CAD models. Features from EEG electrode channels ranked by mutual information were then used to train a discriminant classifier, a genetic algorithm-based support vector machine. The empirical results showed that this classifier can discriminate the CAD models with an average accuracy of about 72%, which demonstrates that EEG-based model generation is feasible and provides a technical and theoretical basis for building a novel BCI for CAD modeling.
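The classification pipeline described above (channel-wise features, mutual-information channel ranking, then an SVM classifier) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code: the feature values, channel count, and class structure are invented, and a plain grid search stands in for the paper's genetic-algorithm hyperparameter tuning.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in: 6 classes (CAD models) x 40 trials, 32 channels,
# one "peak" and one "mean" feature per channel.
n_classes, n_trials, n_channels = 6, 40, 32
y = np.repeat(np.arange(n_classes), n_trials)
X = rng.normal(size=(n_classes * n_trials, n_channels, 2))
X[..., 0] += y[:, None] * 0.15          # inject weak class structure

# Rank channels by mutual information between their features and the label.
flat = X.reshape(len(y), -1)
mi = mutual_info_classif(flat, y, random_state=0)
chan_score = mi.reshape(n_channels, 2).sum(axis=1)
top = np.argsort(chan_score)[::-1][:8]  # keep the 8 most informative channels

# RBF-SVM with grid-searched C/gamma (stand-in for the paper's GA tuning).
X_sel = X[:, top, :].reshape(len(y), -1)
clf = GridSearchCV(SVC(kernel="rbf"),
                   {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]},
                   cv=3)
acc = cross_val_score(clf, X_sel, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

On real EEG data, the feature extraction step (peak, mean, and time-frequency energy per channel) would replace the synthetic `X`.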
Characterization and classification of EEG signals evoked by different CAD models. Hongwei Niu, Jia Hao, Zhiyuan Ming, Xiaonan Yang, Lu Wang. Human Factors and Ergonomics in Manufacturing & Service Industries, 34(4), 292–308. Published 2024-02-13. DOI: 10.1002/hfm.21027.
Owing to the increasing amount of information presented in the cockpit, the visual and auditory channels alone cannot adequately transmit information, which may increase pilots' mental load. This quasi-experimental study explores the benefits of multimodal alarms under high and low residual capacities during take-off in civil aircraft. The performance of two multimodal alarm modes (visual and auditory [VA]; and visual, auditory, and tactile [VAT]) was tested. The results showed that VAT alarms were superior to VA alarms in terms of choice response times (CRTs) when participants had low residual capacities of vision and hearing. This effect was not observed when participants had high residual capacities for vision and hearing. We therefore conclude that an additional tactile alarm can significantly shorten CRTs when visual and auditory resources are consumed. There was no significant difference in the number of response errors among the multimodal alarm modes. This study provides a key comparison of the two multimodal alarm modes, indicating that VAT alarms are well suited to alarm design strategies for next-generation civil cockpits.
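The core comparison above — CRTs for the same participants under VA versus VAT alarms, separately at low and high residual capacity — can be illustrated with a within-subject test. The data below are entirely hypothetical (the paper does not report raw CRTs), and the paired t-test is only one plausible choice of analysis, shown for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 24  # hypothetical number of participants

# Hypothetical choice response times (seconds) under LOW residual capacity:
# VAT simulated as somewhat faster than VA, mirroring the reported effect.
crt_va_low = rng.normal(1.60, 0.25, n)
crt_vat_low = rng.normal(1.40, 0.25, n)

# Same participants under HIGH residual capacity: no simulated difference.
crt_va_high = rng.normal(1.10, 0.20, n)
crt_vat_high = rng.normal(1.10, 0.20, n)

# Paired (within-subject) comparisons of VA vs. VAT in each condition.
t_low, p_low = stats.ttest_rel(crt_va_low, crt_vat_low)
t_high, p_high = stats.ttest_rel(crt_va_high, crt_vat_high)
print(f"low residual capacity:  t={t_low:.2f}, p={p_low:.3f}")
print(f"high residual capacity: t={t_high:.2f}, p={p_high:.3f}")
```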
Exploration of multimodal alarms for civil aircraft flying task: A laboratory study. Wenzhe Cun, Suihuai Yu, Jianjie Chu, Yanhao Chen, Jianhua Sun, Hao Fan. Human Factors and Ergonomics in Manufacturing & Service Industries, 34(4), 279–291. Published 2024-02-06. DOI: 10.1002/hfm.21026.
Dan Pan, Di Zhao, Youchen Pu, Liang Wang, Yijing Zhang
Human–robot collaboration is widely used in postdisaster investigation and rescue. Human–robot team training is an effective way to improve team rescue efficiency and safety; this study explores two common training methods, procedural training and cross-training. Relatively few studies have examined the impact of cross-training on human–robot collaboration in rescue tasks. Cross-training will be novel to most rescuers, so an evaluation of cross-training against the more conventional procedural training is warranted. This study investigated the effects of the two training methods on rescue performance, situation awareness, and workload. Forty-two participants completed a path-planning task and a photo-taking task in an unfamiliar simulated postdisaster environment. The rescue performance results showed that cross-training had significant advantages over procedural training for human–robot collaborative rescue tasks. During training, participants were more likely to achieve excellent photo-taking performance after cross-training than after procedural training; after training, the routes planned by the cross-training group were significantly shorter than those of the procedural-training group. In addition, procedural training marginally significantly increased emotional demand, which suggests that cross-training helps operators regulate their emotions and become more involved in the rescue task. The study also found that arousal increased significantly after the first cross-training session and decreased to the same level as procedural training after multiple sessions. These results support the application of cross-training in human–robot collaborative rescue teams.
Use of cross-training in human–robot collaborative rescue. Dan Pan, Di Zhao, Youchen Pu, Liang Wang, Yijing Zhang. Human Factors and Ergonomics in Manufacturing & Service Industries, 34(3), 261–276. Published 2024-02-05. DOI: 10.1002/hfm.21025.
Collaborative robots (cobots) are an essential component of intelligent manufacturing. However, employees working alongside cobots often hold negative attitudes toward them. To address this industrial human–robot interaction problem, this study adopted a cognitive ergonomics approach and conducted an empirical study with 323 participants using an experimental vignette methodology. The study found that (1) perceived intelligence mediates the relationship between cobot anthropomorphism and negative attitudes toward cobots; (2) perceived intelligence and perceived threat serially mediate the relationship between cobot anthropomorphism and negative attitudes toward cobots; and (3) robot-use self-efficacy moderates the relationship between perceived threat and negative attitudes toward cobots. The results provide a mechanistic explanation of, and related measures to reduce, negative attitudes toward cobots.
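The serial mediation claim (anthropomorphism → perceived intelligence → perceived threat → negative attitudes) is typically quantified as the product of the three path coefficients, with a bootstrap confidence interval. The sketch below simulates such a structure and estimates the serial indirect effect with plain least-squares regressions; the effect sizes are invented and the analysis is a generic illustration, not a reproduction of the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 323  # matches the study's sample size; the data here are simulated

# Simulated serial-mediation structure:
# anthropomorphism (X) -> perceived intelligence (M1)
#                      -> perceived threat (M2) -> negative attitudes (Y).
X = rng.normal(size=n)
M1 = 0.5 * X + rng.normal(size=n)
M2 = -0.4 * M1 + rng.normal(size=n)
Y = 0.6 * M2 + rng.normal(size=n)

def ols_slopes(y, *preds):
    """Coefficients of y ~ 1 + preds via least squares (intercept dropped)."""
    A = np.column_stack([np.ones(len(y)), *preds])
    return np.linalg.lstsq(A, y, rcond=None)[0][1:]

def serial_indirect(idx):
    a1 = ols_slopes(M1[idx], X[idx])[0]                   # X  -> M1
    d21 = ols_slopes(M2[idx], X[idx], M1[idx])[1]         # M1 -> M2 (X held)
    b2 = ols_slopes(Y[idx], X[idx], M1[idx], M2[idx])[2]  # M2 -> Y
    return a1 * d21 * b2

# Percentile-bootstrap 95% CI for the serial indirect effect a1 * d21 * b2.
boot = np.array([serial_indirect(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"serial indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```

A CI excluding zero is the usual criterion for a significant indirect effect in this kind of analysis.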
Why not work with anthropomorphic collaborative robots? The mediation effect of perceived intelligence and the moderation effect of self-efficacy. Shilong Liao, Long Lin, Qin Chen, Hairun Pei. Human Factors and Ergonomics in Manufacturing & Service Industries, 34(3), 241–260. Published 2024-01-08. DOI: 10.1002/hfm.21024.
This study aimed to examine the influence of elbow and forearm postures, as well as sex, on the perception of grip force in individuals without any known health conditions. A total of 21 healthy college participants (10 women and 11 men) completed a force-reproduction task in four elbow and forearm positions (full pronation, full supination, full extension, and 90° of flexion) at three force levels (10%, 30%, and 50% of maximal voluntary isometric contraction [MVIC]). Our results show that participants were more sensitive in detecting variations in their grip force when the elbow was in full supination (14.1 ± 8.5% MVIC, p < .05) and full extension (13.8 ± 10.1% MVIC, p < .01) than at 90° of flexion (19.9 ± 20.1% MVIC). The normalized absolute error exhibited comparable patterns in male and female participants. Specifically, as the working range of the muscles increased (indicated by higher MVIC values in males), accuracy decreased (reflected in the larger absolute error in men). Moreover, men exhibited greater constant and variable error than women. Recent research indicates that the prevalence of musculoskeletal disorders is higher in women than in men. Our results may contribute to developing strategies to reduce injury risk.
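The three error measures reported above are standard in force-reproduction work: absolute error captures overall accuracy, constant error captures directional bias (systematic over- or undershoot), and variable error captures trial-to-trial consistency. A minimal sketch with hypothetical data (the paper does not publish raw trials):

```python
import numpy as np

# Hypothetical force-reproduction trials for one participant in one posture:
# target and reproduced grip forces, both expressed in %MVIC.
target = np.array([10.0, 10.0, 30.0, 30.0, 50.0, 50.0])
reproduced = np.array([12.5, 8.0, 33.0, 27.5, 44.0, 55.5])

errors = reproduced - target
absolute_error = np.mean(np.abs(errors))  # overall accuracy (unsigned)
constant_error = np.mean(errors)          # directional bias
variable_error = np.std(errors, ddof=1)   # trial-to-trial consistency

print(f"absolute error: {absolute_error:.2f} %MVIC")
print(f"constant error: {constant_error:.2f} %MVIC")
print(f"variable error: {variable_error:.2f} %MVIC")
```

Expressing the errors in %MVIC, as in the study, normalizes for strength differences between participants and between sexes.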
The influence of elbow and forearm posture on grip force perception in healthy individuals. Huihui Wang, Shengkou Wu, Lin Li. Human Factors and Ergonomics in Manufacturing & Service Industries, 34(3), 231–240. Published 2024-01-04. DOI: 10.1002/hfm.21022.