Data-free stealing attack and defense strategy for industrial fault diagnosis system

IF 3.7 · CAS Tier 3 (Engineering & Technology) · Q2 (ENGINEERING, CHEMICAL) · Chemical Engineering Research & Design · Pub Date: 2025-02-27 · DOI: 10.1016/j.cherd.2025.02.031
Tianyuan Jia, Ying Tian, Zhong Yin, Wei Zhang, Zhanquan Sun
{"title":"Data-free stealing attack and defense strategy for industrial fault diagnosis system","authors":"Tianyuan Jia,&nbsp;Ying Tian,&nbsp;Zhong Yin,&nbsp;Wei Zhang,&nbsp;Zhanquan Sun","doi":"10.1016/j.cherd.2025.02.031","DOIUrl":null,"url":null,"abstract":"<div><div>In modern industry, data-driven fault diagnosis models have been widely applied, which greatly ensures the safety of the system. However, these data-driven fault diagnosis models are vulnerable to adversarial attacks, where small perturbations on samples can lead to incorrect prediction results. To ensure the security of fault diagnosis systems, it is necessary to design adversarial attack models and corresponding defense strategies, and apply them to the fault diagnosis system. Considering that the training data and internal structure of the fault diagnosis model are not available to the attacker, this paper designs a data-free model stealing attack strategy. Specifically, the strategy generates training data by designing a data generator and uses knowledge distillation to train an substitute model to the attacked model. Then, the output of the substitute model guides the update of the generator. By iteratively training the generator and the substitute model, a satisfactory substitute model is obtained. Next, a white-box attack method is used to attack the substitute model and generate adversarial samples, achieving black-box attacks on the model to be attacked. To counter this data-free stealing attack, this paper proposes a corresponding adversarial training defense strategy, which utilizes the original model to generate adversarial samples for adversarial training. The effectiveness of the proposed attack strategy and defense method is validated through experiments on the Tennessee Eastman process dataset. This research contributes several insights into securing machine learning within fault diagnosis systems, ensuring robust fault diagnosis in industrial processes.</div></div>","PeriodicalId":10019,"journal":{"name":"Chemical Engineering Research & Design","volume":"216 ","pages":"Pages 200-215"},"PeriodicalIF":3.7000,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Chemical Engineering Research & Design","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0263876225000875","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, CHEMICAL","Score":null,"Total":0}
Citations: 0

Abstract

In modern industry, data-driven fault diagnosis models are widely applied and play a central role in keeping processes safe. However, these models are vulnerable to adversarial attacks, in which small perturbations of input samples lead to incorrect predictions. To secure fault diagnosis systems, it is therefore necessary to design adversarial attack models and corresponding defense strategies and to apply them to the fault diagnosis system. Considering that neither the training data nor the internal structure of the fault diagnosis model is available to the attacker, this paper designs a data-free model stealing attack strategy. Specifically, the strategy generates training data with a purpose-built data generator and uses knowledge distillation to train a substitute model of the attacked model; the output of the substitute model in turn guides the update of the generator. By iteratively training the generator and the substitute model, a satisfactory substitute model is obtained. A white-box attack method is then applied to the substitute model to generate adversarial samples, thereby realizing a black-box attack on the target model. To counter this data-free stealing attack, the paper proposes a corresponding adversarial training defense strategy, which uses the original model to generate adversarial samples for adversarial training. The effectiveness of the proposed attack strategy and defense method is validated through experiments on the Tennessee Eastman process dataset. This research contributes several insights into securing machine learning within fault diagnosis systems, supporting robust fault diagnosis in industrial processes.
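The pipeline described in the abstract has three stages: (1) data-free model stealing, where a generator synthesizes query samples and knowledge distillation fits a substitute to the victim's outputs; (2) a white-box attack on the substitute, whose adversarial samples transfer to the black-box victim; and (3) adversarial training as the defense. The sketch below illustrates stage (1). It is a minimal reconstruction under stated assumptions, not the paper's actual design: the architectures, the disagreement loss for the generator, the dimensions (52 process variables and 21 classes, as in common Tennessee Eastman setups), and all hyperparameters are illustrative.

```python
# Minimal sketch of a data-free model stealing loop, assuming PyTorch.
# Architectures, losses, and hyperparameters are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 64
N_FEATURES = 52   # e.g., TE process variables (assumed)
N_CLASSES = 21    # e.g., normal mode + 20 fault classes (assumed)

class Generator(nn.Module):
    """Maps random noise to synthetic process samples used to query the victim."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, N_FEATURES),
        )

    def forward(self, z):
        return self.net(z)

class Substitute(nn.Module):
    """Stand-in classifier distilled from the black-box victim's outputs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, 128), nn.ReLU(),
            nn.Linear(128, N_CLASSES),
        )

    def forward(self, x):
        return self.net(x)

def steal(victim, steps=10_000, batch=128):
    """Alternately update the substitute and the generator; `victim` is
    queried as a black box (outputs only, no gradients, no training data)."""
    gen, sub = Generator(), Substitute()
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
    opt_s = torch.optim.Adam(sub.parameters(), lr=1e-3)

    for _ in range(steps):
        # Substitute step: knowledge distillation on synthetic queries.
        x = gen(torch.randn(batch, LATENT_DIM)).detach()
        with torch.no_grad():
            t = F.softmax(victim(x), dim=1)          # black-box query
        loss_s = F.kl_div(F.log_softmax(sub(x), dim=1), t,
                          reduction="batchmean")
        opt_s.zero_grad(); loss_s.backward(); opt_s.step()

        # Generator step: the substitute's output guides the generator,
        # pushing it toward samples where substitute and victim disagree
        # (a common data-free distillation objective, assumed here).
        x = gen(torch.randn(batch, LATENT_DIM))
        with torch.no_grad():
            t = F.softmax(victim(x), dim=1)
        loss_g = -F.l1_loss(F.softmax(sub(x), dim=1), t)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    return sub
```

Once the substitute is trained, any white-box method can attack it. The abstract does not name the method, so FGSM stands in below as a hypothetical choice; the defense loop follows the abstract's description, retraining the original model on adversarial samples it generates against itself.

```python
def fgsm(model, x, y, eps=0.05):
    """White-box FGSM on a model we hold; samples crafted against the
    substitute are then sent to the victim (black-box transfer)."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def adversarial_training(model, loader, epochs=10, eps=0.05):
    """Defense sketch: the original model generates adversarial samples
    against itself and is retrained on clean + adversarial batches."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = fgsm(model, x, y, eps)
            opt.zero_grad()
            loss = F.cross_entropy(model(torch.cat([x, x_adv])),
                                   torch.cat([y, y]))
            loss.backward()
            opt.step()
```

In this sketch the attacker would call `fgsm(sub, x, y)` on the stolen substitute and feed the result to the victim, while the defender runs `adversarial_training` on the diagnosis model before deployment; both functions are assumptions for illustration, not the paper's implementation.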
Source Journal

Chemical Engineering Research & Design (Engineering: Chemical)
CiteScore: 6.10
Self-citation rate: 7.70%
Articles per year: 623
Review time: 42 days

About the journal: ChERD aims to be the principal international journal for the publication of high-quality, original papers in chemical engineering. Papers showing how research results can be used in chemical engineering design, and accounts of experimental or theoretical research work that bring new perspectives to established principles, highlight unsolved problems, or indicate directions for future research, are particularly welcome. Contributions that deal with new developments in plants or processes and that can be given quantitative expression are encouraged. The journal is especially interested in papers that extend the boundaries of traditional chemical engineering.