Data-free stealing attack and defense strategy for industrial fault diagnosis system
Tianyuan Jia, Ying Tian, Zhong Yin, Wei Zhang, Zhanquan Sun
Chemical Engineering Research & Design, Volume 216, Pages 200-215, 2025
DOI: 10.1016/j.cherd.2025.02.031
Citations: 0
Abstract
In modern industry, data-driven fault diagnosis models are widely deployed and play a key role in keeping processes safe. However, these models are vulnerable to adversarial attacks, in which small perturbations to input samples lead to incorrect predictions. To secure fault diagnosis systems, it is therefore necessary to design adversarial attack models and corresponding defense strategies and to apply them to such systems. Assuming the attacker has access to neither the training data nor the internal structure of the fault diagnosis model, this paper designs a data-free model stealing attack strategy. Specifically, the strategy synthesizes training data with a data generator and uses knowledge distillation to train a substitute model that mimics the attacked model; the substitute model's output in turn guides updates to the generator. By iteratively training the generator and the substitute model, a satisfactory substitute is obtained. A white-box attack method is then applied to the substitute model to generate adversarial samples, which realize a black-box attack on the target model. To counter this data-free stealing attack, the paper also proposes a corresponding adversarial training defense strategy, which uses the original model to generate adversarial samples for adversarial training. The effectiveness of the proposed attack strategy and defense method is validated through experiments on the Tennessee Eastman process dataset. This research offers several insights into securing machine learning in fault diagnosis systems, helping to ensure robust fault diagnosis in industrial processes.
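As a concrete illustration of the stealing loop the abstract describes, the PyTorch sketch below alternates between a substitute (student) update that distills the black-box victim's soft outputs on generator-synthesized samples, and a generator update that seeks samples on which the substitute still disagrees with the victim. The network sizes, optimizers, loss choices, and the Tennessee-Eastman-like dimensions (52 variables, 21 classes) are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a data-free model stealing loop, assuming `victim` is a
# black-box callable returning logits. Architectures and hyperparameters are
# illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_VARS, N_CLASSES, NOISE_DIM = 52, 21, 64  # assumed TE-process dimensions


class Generator(nn.Module):
    """Maps random noise to synthetic process samples (assumed small MLP)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 128), nn.ReLU(),
            nn.Linear(128, N_VARS))

    def forward(self, z):
        return self.net(z)


class Substitute(nn.Module):
    """Substitute classifier trained to mimic the victim model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_VARS, 128), nn.ReLU(),
            nn.Linear(128, N_CLASSES))

    def forward(self, x):
        return self.net(x)


def steal(victim, steps=1000, batch=64):
    """Alternately update the substitute (match victim) and generator (disagree)."""
    gen, sub = Generator(), Substitute()
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
    opt_s = torch.optim.Adam(sub.parameters(), lr=1e-3)
    for _ in range(steps):
        # Substitute step: distill the victim's soft labels on synthetic data.
        z = torch.randn(batch, NOISE_DIM)
        x = gen(z).detach()
        with torch.no_grad():
            t = victim(x)                      # black-box query
        loss_s = F.kl_div(F.log_softmax(sub(x), dim=1),
                          F.softmax(t, dim=1), reduction="batchmean")
        opt_s.zero_grad(); loss_s.backward(); opt_s.step()

        # Generator step: synthesize samples where the substitute still
        # disagrees with the victim (maximize the distillation loss; the
        # gradient flows only through the substitute, keeping the victim
        # black-box).
        z = torch.randn(batch, NOISE_DIM)
        x = gen(z)
        with torch.no_grad():
            t = victim(x)
        loss_g = -F.kl_div(F.log_softmax(sub(x), dim=1),
                           F.softmax(t, dim=1), reduction="batchmean")
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return sub
```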
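The remaining two steps, attacking the substitute with a white-box method and defending via adversarial training, can be sketched in the same style. FGSM stands in here for whatever white-box attack the paper actually uses; the epsilon, the 0.5 mixing weight, and the data loader are assumptions.

```python
def fgsm(model, x, y, eps=0.05):
    """One-step FGSM perturbation of x under cross-entropy loss (assumed attack)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Samples crafted against the white-box substitute transfer to the victim.
    return (x + eps * x.grad.sign()).detach()


def adversarial_train(model, loader, epochs=10, eps=0.05):
    """Defense sketch: augment each batch with adversarial samples generated
    from the original model itself, as the abstract describes."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = fgsm(model, x, y, eps)   # self-generated adversarial samples
            loss = (0.5 * F.cross_entropy(model(x), y)
                    + 0.5 * F.cross_entropy(model(x_adv), y))
            opt.zero_grad(); loss.backward(); opt.step()
```

In a typical evaluation, one would steal a substitute with `steal(victim)`, craft samples with `fgsm(substitute, x, y)`, and measure how often they also fool the victim, before and after `adversarial_train` is applied to it.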
Journal Introduction:
ChERD aims to be the principal international journal for publication of high-quality, original papers in chemical engineering.
Papers showing how research results can be used in chemical engineering design, and accounts of experimental or theoretical research work bringing new perspectives to established principles, highlighting unsolved problems or indicating directions for future research, are particularly welcome. Contributions that deal with new developments in plant or processes and that can be given quantitative expression are encouraged. The journal is especially interested in papers that extend the boundaries of traditional chemical engineering.