Authors: Wei Wei, Sijin Chen, Cen Chen, Heshi Wang, Jing Liu, Zhongyao Cheng, Xiaofeng Zou
DOI: 10.1007/s11432-023-4067-x
Science China Information Sciences (Q1, Computer Science, Information Systems; impact factor 7.3), Journal Article, published 2024-06-28
HEN: a novel hybrid explainable neural network based framework for robust network intrusion detection
With the rapid development of network technology and the push toward 5G automation, cyber-attacks have become increasingly complex and threatening. In response, researchers have developed various network intrusion detection systems (NIDS) to monitor network traffic. However, the constant emergence of new attack techniques and the lack of system interpretability pose challenges to improving the detection performance of NIDS. To address these issues, this paper proposes a hybrid explainable neural network-based framework that improves both the interpretability of the model and its performance in detecting new attacks through the innovative application of explainable artificial intelligence (XAI) methods. We introduce the Shapley additive explanations (SHAP) method to explain a light gradient boosting machine (LightGBM) model. Additionally, we propose an autoencoder long short-term memory (AE-LSTM) network to reconstruct the SHAP values generated in the previous step. Furthermore, we define a threshold based on the reconstruction errors observed during the training phase: any network flow whose reconstruction error surpasses this threshold is classified as an attack flow, which enhances the framework's ability to accurately identify attacks. We achieve an accuracy of 92.65%, a recall of 95.26%, a precision of 92.57%, and an F1-score of 93.90% on the NSL-KDD dataset. Experimental results demonstrate that our approach delivers detection performance on par with state-of-the-art methods.
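To make the SHAP step concrete: Shapley additive explanations assign each feature a contribution equal to its average marginal effect on the model's output over all feature subsets. The following is a minimal pure-Python sketch of exact Shapley values for a toy model, not the paper's actual SHAP/LightGBM pipeline; the linear value function, the zero baseline, and the function names are illustrative assumptions (in practice one would use the `shap` library's TreeExplainer on a trained LightGBM model, which the exponential enumeration below does not scale to).

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for f at point x, masking absent
    features to the given baseline values (toy-scale only:
    cost grows exponentially in the number of features)."""
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                # Evaluate the model with only `subset` present...
                masked = list(baseline)
                for j in subset:
                    masked[j] = x[j]
                without_i = f(masked)
                # ...then additionally reveal feature i.
                masked[i] = x[i]
                with_i = f(masked)
                # Standard Shapley weight |S|!(n-|S|-1)!/n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += weight * (with_i - without_i)
        phis.append(phi)
    return phis

# For a linear model, each feature's Shapley value is exactly
# its weight times its deviation from the baseline.
model = lambda z: 2 * z[0] + 3 * z[1] - z[2]
phis = shapley_values(model, [1, 1, 1], [0, 0, 0])
```

A useful sanity check is the efficiency property: the values sum to `f(x) - f(baseline)`, which is what makes the explanation "additive".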
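The detection decision itself reduces to thresholding reconstruction errors. A minimal stdlib sketch of that last step follows; the mean-plus-k-standard-deviations rule and the `k=3.0` default are common heuristics assumed here for illustration, since the abstract does not specify how the paper derives its threshold, and the reconstruction errors would in practice come from the AE-LSTM applied to SHAP value sequences.

```python
from statistics import mean, stdev

def fit_threshold(train_errors, k=3.0):
    """Set the anomaly threshold from reconstruction errors seen on
    (presumed benign) training flows: mean + k standard deviations."""
    return mean(train_errors) + k * stdev(train_errors)

def classify_flows(errors, threshold):
    """Flag any flow whose reconstruction error exceeds the threshold."""
    return ["attack" if e > threshold else "benign" for e in errors]

# Errors on training flows cluster tightly; a test flow the
# autoencoder cannot reconstruct well is flagged as an attack.
train_errors = [0.10, 0.12, 0.11, 0.09, 0.10]
threshold = fit_threshold(train_errors)
labels = classify_flows([0.11, 0.95], threshold)
```

The design intuition is that the autoencoder only learns to reconstruct patterns present in training, so novel attack flows yield large errors even for attack types never seen before.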
Journal introduction:
Science China Information Sciences is a dedicated journal that showcases high-quality, original research across various domains of information sciences. It encompasses Computer Science & Technologies, Control Science & Engineering, Information & Communication Engineering, Microelectronics & Solid-State Electronics, and Quantum Information, providing a platform for the dissemination of significant contributions in these fields.