A methodology to detect pilot perception of warning information by eye movement data and deep residual shrinkage networks

C.-Q. Yan, Y.-C. Sun, X. Zhang, H. Mao, J.-Y. Jiang

The Aeronautical Journal, pp. 1219–1233, published 2023-01-03. DOI: 10.1017/aer.2022.101
Abstract: This paper studied the use of eye movement data to form criteria for judging whether pilots perceive emergency information such as cockpit warnings. In the experiment, 12 subjects encountered randomly presented warning information while flying a simulated helicopter, and their eye movement data were collected synchronously. First, the importance of each eye movement feature was evaluated by ANOVA (analysis of variance). Based on the importance ranking and the Euclidean distance of each eye movement feature, warning-information samples with different combinations of eye movement features were obtained. Second, residual shrinkage modules were added to a CNN (convolutional neural network) to construct a DRSN (deep residual shrinkage network) model. Finally, the processed warning-information samples were used to train and test the DRSN model. To verify the superiority of this method, the DRSN model was compared with three machine learning models: SVM (support vector machine), RF (random forest) and BPNN (backpropagation neural network). Among the four models, the DRSN performed best. When all eye movement features were selected, it detected pilot perception of warning information with an average accuracy of 90.4%, and a highest accuracy of 96.4%. The experiments showed that the DRSN model has advantages in detecting pilot perception of warning information.
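The distinguishing element of a residual shrinkage module, compared with a plain residual block, is soft thresholding applied to the residual branch to suppress noise-like small activations. A minimal NumPy sketch of that operation follows; note this is an illustration only: the paper's DRSN learns the threshold with a small attention sub-network and uses convolutional layers in the residual branch, both of which are omitted here, and the fixed threshold `tau` is a made-up value.

```python
import numpy as np

def soft_threshold(x, tau):
    # Shrink each value toward zero and zero out entries with |x| <= tau.
    # This is the denoising step inside a residual shrinkage module.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def residual_shrinkage_block(x, tau):
    # Identity shortcut plus a soft-thresholded residual branch.
    # A real DRSN block would compute the branch with conv/BN/ReLU layers
    # and derive tau per channel from an attention sub-network.
    return x + soft_threshold(x, tau)

features = np.array([-1.5, -0.2, 0.0, 0.3, 2.0])
print(soft_threshold(features, 0.5))       # small-magnitude entries go to 0
print(residual_shrinkage_block(features, 0.5))
```

Because the shortcut passes the input through unchanged, the block can suppress weak (presumably noisy) components of the residual signal without ever blocking gradient flow, which is why stacking such modules on a CNN backbone suits noisy physiological signals like eye movement data.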