PyTorchFI: A Runtime Perturbation Tool for DNNs
2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)
Pub Date: 2020-06-01 | DOI: 10.1109/DSN-W50199.2020.00014
Abdulrahman Mahmoud, Neeraj Aggarwal, Alex Nobbe, Jose Rodrigo Sanchez Vicarte, S. Adve, Christopher W. Fletcher, I. Frosio, S. Hari
PyTorchFI is a runtime perturbation tool for deep neural networks (DNNs), implemented for the popular PyTorch deep learning platform. PyTorchFI enables users to perform perturbations on the weights or neurons of DNNs at runtime. It is designed with the programmer in mind, providing a simple and easy-to-use API that requires as little as three lines of code to use. It also provides an extensible interface, enabling researchers to choose from various perturbation models (or design their own custom models), which allows for the study of how hardware errors (or general perturbations) propagate to the software layer of the DNN output. Additionally, PyTorchFI is extremely versatile: we demonstrate how it can be applied to five different use cases for dependability and reliability research, including resiliency analysis of classification networks, resiliency analysis of object detection networks, analysis of models robust to adversarial attacks, training resilient models, and DNN interpretability. This paper discusses the technical underpinnings and design decisions of PyTorchFI that make it an easy-to-use, extensible, fast, and versatile research tool. PyTorchFI is open-sourced and available for download via pip or GitHub at: https://github.com/pytorchfi
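To illustrate the kind of runtime neuron perturbation the abstract describes, the sketch below uses a standard PyTorch forward hook to overwrite a single activation of a convolutional layer at inference time. This is a minimal illustration of the mechanism only, not the PyTorchFI API; the model, layer index, coordinates, and injected value are arbitrary choices for the example.

```python
import torch
import torchvision.models as models

# Illustrative sketch of runtime neuron perturbation (not the PyTorchFI API):
# a forward hook overwrites one activation of a chosen convolutional layer.
model = models.alexnet(weights=None).eval()

def make_neuron_perturbation(batch=0, channel=0, h=0, w=0, value=1e4):
    def hook(module, inputs, output):
        out = output.clone()
        out[batch, channel, h, w] = value  # inject the faulty activation value
        return out                         # returned tensor replaces the layer output
    return hook

# Attach the hook to the first conv layer (the choice of layer is arbitrary here).
handle = model.features[0].register_forward_hook(make_neuron_perturbation())

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    perturbed_logits = model(x)

handle.remove()  # detach the hook to restore the unperturbed model
```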
{"title":"PyTorchFI: A Runtime Perturbation Tool for DNNs","authors":"Abdulrahman Mahmoud, Neeraj Aggarwal, Alex Nobbe, Jose Rodrigo Sanchez Vicarte, S. Adve, Christopher W. Fletcher, I. Frosio, S. Hari","doi":"10.1109/DSN-W50199.2020.00014","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00014","url":null,"abstract":"PyTorchFI is a runtime perturbation tool for deep neural networks (DNNs), implemented for the popular PyTorch deep learning platform. PyTorchFI enables users to perform perturbations on weights or neurons of DNNs at runtime. It is designed with the programmer in mind, providing a simple and easy-to-use API, requiring as little as three lines of code for use. It also provides an extensible interface, enabling researchers to choose from various perturbation models (or design their own custom models), which allows for the study of hardware error (or general perturbation) propagation to the software layer of the DNN output. Additionally, PyTorchFI is extremely versatile: we demonstrate how it can be applied to five different use cases for dependability and reliability research, including resiliency analysis of classification networks, resiliency analysis of object detection networks, analysis of models robust to adversarial attacks, training resilient models, and for DNN interpertability. This paper discusses the technical underpinnings and design decisions of PyTorchFI which make it an easy-to-use, extensible, fast, and versatile research tool. PyTorchFI is open-sourced and available for download via pip or github at: https://github.com/pytorchfi","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"8 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114036621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conceptual Design of Human-Drone Communication in Collaborative Environments
2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)
Pub Date: 2020-04-30 | DOI: 10.1109/DSN-W50199.2020.00030
H. D. Doran, Monika Reif, Marco Oehler, Curdin Stoehr, Pierluigi Capone
Autonomous robots and drones will work collaboratively and cooperatively in tomorrow’s industry and agriculture. Before this becomes a reality, some form of standardised communication between man and machine must be established, one that specifically facilitates communication between autonomous machines and both trained and untrained human actors in the working environment. We present preliminary results on a human-drone and a drone-human language situated in the agricultural industry, where interactions with trained and untrained workers and visitors can be expected. We present basic visual indicators enhanced with flight patterns for drone-human interaction, and human signaling based on aircraft marshalling for human-drone interaction. We discuss preliminary results on image recognition and future work.
{"title":"Conceptual Design of Human-Drone Communication in Collaborative Environments","authors":"H. D. Doran, Monika Reif, Marco Oehler, Curdin Stoehr, Pierluigi Capone","doi":"10.1109/DSN-W50199.2020.00030","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00030","url":null,"abstract":"Autonomous robots and drones will work collaboratively and cooperatively in tomorrow’s industry and agriculture. Before this becomes a reality, some form of standardised communication between man and machine must be established that specifically facilitates communication between autonomous machines and both trained and un-trained human actors in the working environment. We present preliminary results on a human-drone and a drone-human language situated in the agricultural industry where interactions with trained and untrained workers and visitors can be expected. We present basic visual indicators enhanced with flight patterns for drone-human interaction and human signaling based on aircraft marshalling for humane-drone interaction. We discuss preliminary results on image recognition and future work.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116984802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pelican: A Deep Residual Network for Network Intrusion Detection
2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)
Pub Date: 2020-01-19 | DOI: 10.1109/DSN-W50199.2020.00018
Peilun Wu, Hui Guo
One challenge in building a secure network communication environment is how to effectively detect and prevent malicious network behaviours. Abnormal network activities threaten users’ privacy and potentially damage the function and infrastructure of the whole network. To address this problem, network intrusion detection systems (NIDS) have been used. By continuously monitoring network activities, such a system can identify attacks in a timely manner and prompt counter-attack actions. NIDS has been evolving over the years. Current-generation NIDS incorporates machine learning (ML) as the core technology in order to improve detection performance on novel attacks. However, the high detection rate achieved by a traditional ML-based detection method is often accompanied by a large number of false alarms, which greatly affects its overall performance. In this paper, we propose a deep neural network, Pelican, that is built upon specially-designed residual blocks. We evaluated Pelican on two network traffic datasets, NSL-KDD and UNSW-NB15. Our experiments show that Pelican can achieve high attack detection performance while keeping a much lower false alarm rate compared with a set of up-to-date machine-learning-based designs.
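The abstract states that Pelican is built from specially-designed residual blocks but does not specify their layout. The following is a generic sketch of a fully-connected residual block over flow-feature vectors such as those in NSL-KDD; the dimensions, layer choices, and class count are assumptions for illustration, not Pelican’s actual design.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Generic fully-connected residual block for tabular intrusion-detection
    features. Illustrative only; Pelican's actual block design may differ."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim),
            nn.BatchNorm1d(dim),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))  # identity skip connection around the block

# Example stack on 41-dimensional NSL-KDD-style features with 5 output classes
# (both sizes assumed for the example).
model = nn.Sequential(
    nn.Linear(41, 128),
    ResidualBlock(128, 256),
    ResidualBlock(128, 256),
    nn.Linear(128, 5),
)
logits = model(torch.randn(32, 41))
```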
{"title":"Pelican: A Deep Residual Network for Network Intrusion Detection","authors":"Peilun Wu, Hui Guo","doi":"10.1109/DSN-W50199.2020.00018","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00018","url":null,"abstract":"One challenge for building a secure network communication environment is how to effectively detect and prevent malicious network behaviours. The abnormal network activities threaten users’ privacy and potentially damage the function and infrastructure of the whole network. To address this problem, the network intrusion detection system (NIDS) has been used. By continuously monitoring network activities, the system can timely identify attacks and prompt counter-attack actions. NIDS has been evolving over years. The current-generation NIDS incorporates machine learning (ML) as the core technology in order to improve the detection performance on novel attacks. However, the high detection rate achieved by a traditional ML-based detection method is often accompanied by large false-alarms, which greatly affects its overall performance. In this paper, we propose a deep neural network, Pelican, that is built upon specially-designed residual blocks. We evaluated Pelican on two network traffic datasets, NSL-KDD and UNSW-NB15. Our experiments show that Pelican can achieve a high attack detection performance while keeping a much low false alarm rate when compared with a set of up-to-date machine learning based designs.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126277900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information
2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)
Pub Date: 2019-09-06 | DOI: 10.1109/DSN-W50199.2020.00013
Yiren Zhao, Ilia Shumailov, Han Cui, Xitong Gao, R. Mullins, Ross Anderson
Recent research on reinforcement learning (RL) has suggested that trained agents are vulnerable to maliciously crafted adversarial samples. In this work, we show how such samples can be generalised from White-box and Grey-box attacks to a strong Black-box case, where the attacker has no knowledge of the agents, their training parameters or their training methods. We use sequence-to-sequence models to predict a single action or a sequence of future actions that a trained agent will make. First, we show that our approximation model, based on time-series information from the agent, consistently predicts RL agents’ future actions with high accuracy in a Black-box setup on a wide range of games and RL algorithms. Second, we find that although adversarial samples are transferable from the sequence-to-sequence model to our RL agents, they often outperform random Gaussian noise only marginally. Third, we propose a novel use for adversarial samples in Black-box attacks on RL agents: they can be used to trigger a trained agent to misbehave after a specific time delay. This potentially enables an attacker to use devices controlled by RL agents as time bombs.
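The attack rests on an approximation model that maps a window of the agent’s recent observations to its next action(s). A minimal sketch of such a sequence predictor (a GRU encoder feeding an action classifier) is shown below; the architecture, dimensions, and action space are assumptions for illustration rather than the authors’ exact sequence-to-sequence model.

```python
import torch
import torch.nn as nn

class ActionPredictor(nn.Module):
    """Predicts a trained RL agent's next action from a window of past
    observations. Illustrative only; the paper's sequence-to-sequence
    architecture is not specified in the abstract."""
    def __init__(self, obs_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_actions)

    def forward(self, obs_window):            # (batch, time, obs_dim)
        _, h_last = self.encoder(obs_window)  # h_last: (1, batch, hidden)
        return self.head(h_last.squeeze(0))   # logits over the action space

# Trained by observing the black-box agent act, then used as a substitute model
# for crafting transferable adversarial observations.
predictor = ActionPredictor(obs_dim=64, num_actions=6)
logits = predictor(torch.randn(8, 20, 64))
```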
{"title":"Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information","authors":"Yiren Zhao, Ilia Shumailov, Han Cui, Xitong Gao, R. Mullins, Ross Anderson","doi":"10.1109/DSN-W50199.2020.00013","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00013","url":null,"abstract":"Recent research on reinforcement learning (RL) has suggested that trained agents are vulnerable to maliciously-crafted adversarial samples. In this work, we show how such samples can be generalised from White-box and Grey-box attacks to a strong Black-box case, where the attacker has no knowledge of the agents, their training parameters or their training methods. We use sequence-to-sequence models to predict a single action or a sequence of future actions that a trained agent will make. First, we show that our approximation model, based on time-series information from the agent, consistently predicts RL agents’ future actions with high accuracy in a Black-box setup on a wide range of games and RL algorithms. Second, we find that although adversarial samples are transferable from the sequence-to-sequence model to our RL agents, they often outperform Random Gaussian Noise only marginally. Third, we propose a novel use for adversarial samples in Black-box attacks of RL agents: they can be used to trigger a trained agent to misbehave after a specific time delay. This potentially enables an attacker to use devices controlled by RL agents as time bombs.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121390813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BlurNet: Defense by Filtering the Feature Maps
2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)
Pub Date: 2019-08-06 | DOI: 10.1109/DSN-W50199.2020.00016
Ravi Raju, Mikko H. Lipasti
Recently, the field of adversarial machine learning has been garnering attention by showing that state-of-the-art deep neural networks are vulnerable to adversarial examples, stemming from small perturbations being added to the input image. Adversarial examples are generated by a malicious adversary either by obtaining access to the model parameters, such as gradient information, to alter the input, or by attacking a substitute model and transferring those malicious examples over to attack the victim model. Specifically, one of these attack algorithms, Robust Physical Perturbations ($RP_2$), generates adversarial images of stop signs with black and white stickers to achieve high targeted misclassification rates against standard-architecture traffic sign classifiers. In this paper, we propose BlurNet, a defense against the $RP_2$ attack. First, we motivate the defense with a frequency analysis of the first-layer feature maps of the network on the LISA dataset, which shows that high-frequency noise is introduced into the input image by the $RP_2$ algorithm. To remove the high-frequency noise, we introduce a depthwise convolution layer of standard blur kernels after the first layer. We perform a black-box transfer attack to show that low-pass filtering the feature maps is more beneficial than filtering the input. We then present various regularization schemes to incorporate this low-pass filtering behavior into the training regime of the network and perform white-box attacks. We conclude with an adaptive attack evaluation to show that the success rate of the attack drops from 90% to 20% with total variation regularization, one of the proposed defenses.
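The core of the defense is a fixed depthwise convolution with standard blur kernels placed after the first layer, so each feature map is low-pass filtered independently. The sketch below shows one way to build such a layer in PyTorch; the 3x3 normalized box kernel, the stem it is attached to, and the input size are illustrative assumptions, and the paper’s regularized training variants are not reproduced.

```python
import torch
import torch.nn as nn

def depthwise_blur(channels: int, kernel_size: int = 3) -> nn.Conv2d:
    """Fixed (non-trainable) depthwise convolution applying the same
    normalized box-blur kernel to every feature map independently."""
    blur = nn.Conv2d(channels, channels, kernel_size,
                     padding=kernel_size // 2, groups=channels, bias=False)
    kernel = torch.ones(channels, 1, kernel_size, kernel_size) / (kernel_size ** 2)
    blur.weight.data.copy_(kernel)
    blur.weight.requires_grad_(False)  # keep the blur kernels fixed during training
    return blur

# Illustrative placement: low-pass filter the feature maps of a first conv layer.
first_conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
filtered_stem = nn.Sequential(first_conv, depthwise_blur(64), nn.ReLU())
feature_maps = filtered_stem(torch.randn(1, 3, 32, 32))
```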
{"title":"BlurNet: Defense by Filtering the Feature Maps","authors":"Ravi Raju, Mikko H. Lipasti","doi":"10.1109/DSN-W50199.2020.00016","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00016","url":null,"abstract":"Recently, the field of adversarial machine learning has been garnering attention by showing that state-of-the-art deep neural networks are vulnerable to adversarial examples, stemming from small perturbations being added to the input image. Adversarial examples are generated by a malicious adversary by obtaining access to the model parameters, such as gradient information, to alter the input or by attacking a substitute model and transferring those malicious examples over to attack the victim model. Specifically, one of these attack algorithms, Robust Physical Perturbations $(RP_{2})$, generates adversarial images of stop signs with black and white stickers to achieve high targeted misclassification rates against standard-architecture traffic sign classifiers. In this paper, we propose BlurNet, a defense against the RP2 attack. First, we motivate the defense with a frequency analysis of the first layer feature maps of the network on the LISA dataset, which shows that high frequency noise is introduced into the input image by the RP2 algorithm. To remove the high frequency noise, we introduce a depthwise convolution layer of standard blur kernels after the first layer. We perform a blackbox transfer attack to show that low-pass filtering the feature maps is more beneficial than filtering the input. We then present various regularization schemes to incorporate this low-pass filtering behavior into the training regime of the network and perform white-box attacks. We conclude with an adaptive attack evaluation to show that the success rate of the attack drops from 90% to 20% with total variation regularization, one of the proposed defenses.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122750575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}