Adversarial Attacks and Defenses in Automated Control Systems: A Comprehensive Benchmark

Vitaliy Pozdnyakov, Aleksandr Kovalenko, Ilya Makarov, Mikhail Drobyshevskiy, Kirill Lukyanov

arXiv:2403.13502 · arXiv - CS - Systems and Control · 2024-03-20
Abstract
Integrating machine learning into Automated Control Systems (ACS) enhances decision-making in industrial process management. One obstacle to the widespread adoption of these technologies in industry is the vulnerability of neural networks to adversarial attacks. This study explores the threats that arise when deploying deep learning models for fault diagnosis in ACS, using the Tennessee Eastman Process dataset. We evaluate three neural networks with different architectures, subjecting each to six types of adversarial attacks, and explore five defense methods. Our results highlight the strong vulnerability of the models to adversarial samples and the varying effectiveness of the defense strategies. We also propose a novel protection approach that combines multiple defense methods and demonstrate its efficacy. This research contributes several insights into securing machine learning within ACS, ensuring robust fault diagnosis in industrial processes.
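To make the threat concrete, below is a minimal sketch of one standard gradient-based attack, FGSM (Goodfellow et al., 2015), applied to a fault-diagnosis classifier. The abstract does not list which six attacks the paper uses, so this is a generic illustration rather than the paper's code; `model`, `sensors`, `labels`, and the perturbation budget `epsilon` are hypothetical placeholders.

```python
# Illustrative FGSM attack on a fault-diagnosis classifier (PyTorch).
# Generic sketch, not the paper's implementation; names are hypothetical.
import torch
import torch.nn as nn


def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.01) -> torch.Tensor:
    """Perturb sensor readings x by one signed-gradient step that
    increases the classification loss for true fault labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximizes the loss, then detach.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()


# Usage (assuming `model` is a trained classifier over sensor windows):
#   adv_sensors = fgsm_attack(model, sensors, labels, epsilon=0.01)
#   # Comparing accuracy on `sensors` vs. `adv_sensors` exposes the
#   # vulnerability the abstract describes.
```

Even small, imperceptible perturbations of this kind can flip fault-class predictions, which is why the paper benchmarks defenses such as (in general practice) adversarial training and ensembles of protection methods.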