Enhancing CAN security with ML-based IDS: Strategies and efficacies against adversarial attacks
Ying-Dar Lin, Wei-Hsiang Chan, Yuan-Cheng Lai, Chia-Mu Yu, Yu-Sung Wu, Wei-Bin Lee
Computers & Security, Volume 151, Article 104322 (published 2025-01-23). DOI: 10.1016/j.cose.2025.104322
https://www.sciencedirect.com/science/article/pii/S0167404825000112
Citations: 0
Abstract
Controller Area Networks (CAN) have recently faced serious security threats due to their inherent vulnerabilities and the increasing sophistication of cyberattacks targeting automotive and industrial systems. This paper focuses on enhancing the security of CAN, which currently lacks adequate defense mechanisms. We propose integrating Machine Learning-based Intrusion Detection Systems (ML-based IDS) into the network to address this vulnerability. However, ML systems are susceptible to adversarial attacks, which cause misclassification of data. We introduce three combined defense methods to mitigate this risk: adversarial training, ensemble learning, and distance-based optimization. Additionally, we employ a simulated annealing algorithm in distance-based optimization to optimize the distance moved in feature space, aiming to minimize intra-class distance and maximize inter-class distance. Our results show that the ZOO attack is the most potent adversarial attack, significantly degrading model performance. Among the base models, all achieve an F1 score of 0.99, with the CNN being the most robust against adversarial attacks. Under known adversarial attacks, the average F1 score decreases to 0.56. Adversarial training with triplet loss does not perform well, achieving only 0.64, while our defense method attains the highest F1 score of 0.97. For unknown adversarial attacks, the F1 score drops to 0.24, with adversarial training with triplet loss scoring 0.47; our defense method still achieves the highest score, 0.61. These results demonstrate our method's strong performance against both known and unknown adversarial attacks.
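The distance-based optimization step can be pictured with a short sketch. The Python snippet below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes a labeled feature matrix, and uses simulated annealing to search for a per-feature offset that shrinks intra-class spread while growing the inter-class gap, mirroring the objective described in the abstract. All names (`intra_inter_score`, `anneal_offset`, the cooling schedule and step size) are hypothetical.

```python
import numpy as np

def intra_inter_score(X, y, offset):
    """Objective: intra-class distance minus inter-class distance.

    Lower is better: tight classes (small intra-class spread) that
    sit far apart (large inter-class gap). `offset` is added only to
    class-1 samples, modeling a move in feature space. This is a
    hypothetical formulation for illustration only.
    """
    Xs = X.copy()
    Xs[y == 1] += offset  # shift one class in feature space
    centroids = [Xs[y == c].mean(axis=0) for c in (0, 1)]
    intra = sum(np.linalg.norm(Xs[y == c] - centroids[c], axis=1).mean()
                for c in (0, 1))
    inter = np.linalg.norm(centroids[0] - centroids[1])
    return intra - inter

def anneal_offset(X, y, steps=2000, temp=1.0, cooling=0.995, seed=0):
    """Simulated annealing over a per-feature offset vector."""
    rng = np.random.default_rng(seed)
    offset = np.zeros(X.shape[1])
    cur = best = intra_inter_score(X, y, offset)
    best_offset = offset.copy()
    for _ in range(steps):
        cand = offset + rng.normal(scale=0.05, size=offset.shape)
        cand_score = intra_inter_score(X, y, cand)
        # Accept improvements outright; accept worse moves with
        # temperature-scaled probability so the search can escape
        # local minima early on.
        if cand_score < cur or rng.random() < np.exp((cur - cand_score) / temp):
            offset, cur = cand, cand_score
            if cur < best:
                best, best_offset = cur, offset.copy()
        temp *= cooling  # cool down over time
    return best_offset

# Toy usage: two Gaussian blobs standing in for benign vs. attack
# CAN traffic features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 8)),
               rng.normal(0.5, 1.0, (100, 8))])
y = np.array([0] * 100 + [1] * 100)
print(anneal_offset(X, y)[:4])
```

Simulated annealing suits this objective because the score is cheap to evaluate but non-differentiable with respect to discrete feature moves; the geometric cooling schedule trades early exploration for late exploitation.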
Journal introduction:
Computers & Security is the most respected technical journal in the IT security field. With its high-profile editorial board and informative regular features and columns, the journal is essential reading for IT security professionals around the world.
Computers & Security provides you with a unique blend of leading-edge research and sound practical management advice. It is aimed at the professional involved with computer security, audit, control and data integrity in all sectors: industry, commerce and academia. Recognized worldwide as THE primary source of reference for applied research and technical expertise, it is your first step to fully secure systems.