{"title":"基于深度神经网络的信息物理生产系统状态监测系统的对抗示例生成以防止误分类","authors":"Felix Specht, J. Otto, O. Niggemann, B. Hammer","doi":"10.1109/INDIN.2018.8472060","DOIUrl":null,"url":null,"abstract":"Deep neural network based condition monitoring systems are used to detect system failures of cyber-physical production systems. However, a vulnerability of deep neural networks are adversarial examples. They are manipulated inputs, e.g. process data, with the ability to mislead a deep neural network into misclassification. Adversarial example attacks can manipulate the physical production process of a cyber-physical production system without being recognized by the condition monitoring system. Manipulation of the physical process poses a serious threat for production systems and employees. This paper introduces CyberProtect, a novel approach to prevent misclassification caused by adversarial example attacks. CyberProtect generates adversarial examples and uses them to retrain deep neural networks. This results in a hardened deep neural network with a significant reduced misclassification rate. The proposed countermeasure increases the classification rate from 20% to 82%, as proved by empirical results.","PeriodicalId":6467,"journal":{"name":"2018 IEEE 16th International Conference on Industrial Informatics (INDIN)","volume":"16 1","pages":"760-765"},"PeriodicalIF":0.0000,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":"{\"title\":\"Generation of Adversarial Examples to Prevent Misclassification of Deep Neural Network based Condition Monitoring Systems for Cyber-Physical Production Systems\",\"authors\":\"Felix Specht, J. Otto, O. Niggemann, B. Hammer\",\"doi\":\"10.1109/INDIN.2018.8472060\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep neural network based condition monitoring systems are used to detect system failures of cyber-physical production systems. However, a vulnerability of deep neural networks are adversarial examples. They are manipulated inputs, e.g. process data, with the ability to mislead a deep neural network into misclassification. Adversarial example attacks can manipulate the physical production process of a cyber-physical production system without being recognized by the condition monitoring system. Manipulation of the physical process poses a serious threat for production systems and employees. This paper introduces CyberProtect, a novel approach to prevent misclassification caused by adversarial example attacks. CyberProtect generates adversarial examples and uses them to retrain deep neural networks. This results in a hardened deep neural network with a significant reduced misclassification rate. 
The proposed countermeasure increases the classification rate from 20% to 82%, as proved by empirical results.\",\"PeriodicalId\":6467,\"journal\":{\"name\":\"2018 IEEE 16th International Conference on Industrial Informatics (INDIN)\",\"volume\":\"16 1\",\"pages\":\"760-765\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"11\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE 16th International Conference on Industrial Informatics (INDIN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/INDIN.2018.8472060\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE 16th International Conference on Industrial Informatics (INDIN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INDIN.2018.8472060","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Generation of Adversarial Examples to Prevent Misclassification of Deep Neural Network based Condition Monitoring Systems for Cyber-Physical Production Systems
Deep neural network based condition monitoring systems are used to detect system failures in cyber-physical production systems. However, a known vulnerability of deep neural networks is adversarial examples: manipulated inputs, e.g. process data, crafted to mislead a deep neural network into misclassification. Adversarial example attacks can manipulate the physical production process of a cyber-physical production system without being recognized by the condition monitoring system. Such manipulation of the physical process poses a serious threat to production systems and employees. This paper introduces CyberProtect, a novel approach to prevent misclassification caused by adversarial example attacks. CyberProtect generates adversarial examples and uses them to retrain deep neural networks, yielding a hardened deep neural network with a significantly reduced misclassification rate. Empirical results show that the proposed countermeasure increases the classification rate from 20% to 82%.
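
The abstract describes CyberProtect's core loop, generating adversarial examples and retraining the network on them, but does not specify the generation method. The following is a minimal sketch of that loop, assuming FGSM-style (fast gradient sign method) perturbation of process-data inputs; the model class ConditionNet, the feature count, and all hyperparameters are hypothetical and serve only to illustrate the retraining idea, not the paper's actual implementation.

```python
# Hedged sketch: FGSM is assumed here as the adversarial-example
# generator; the abstract does not name CyberProtect's method.
import torch
import torch.nn as nn

class ConditionNet(nn.Module):
    """Toy condition-monitoring classifier:
    process-data vector -> {normal, failure}. Hypothetical."""
    def __init__(self, n_features: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 2),
        )

    def forward(self, x):
        return self.net(x)

def fgsm_examples(model, x, y, eps=0.05):
    """Generate adversarial examples with one signed-gradient step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_retrain(model, loader, epochs=5, lr=1e-3, eps=0.05):
    """Harden the model by retraining on adversarial examples,
    mirroring the retraining step described in the abstract."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = fgsm_examples(model, x, y, eps)
            opt.zero_grad()  # clear gradients left by FGSM's backward pass
            # Train on clean and adversarial batches together.
            loss = (nn.functional.cross_entropy(model(x), y)
                    + nn.functional.cross_entropy(model(x_adv), y))
            loss.backward()
            opt.step()
    return model
```

Mixing clean and adversarial batches in each step is one common design choice for adversarial retraining; the reported improvement from 20% to 82% comes from the paper's own experiments, not from this sketch.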