
2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W): Latest Publications

PyTorchFI: A Runtime Perturbation Tool for DNNs
Abdulrahman Mahmoud, Neeraj Aggarwal, Alex Nobbe, Jose Rodrigo Sanchez Vicarte, S. Adve, Christopher W. Fletcher, I. Frosio, S. Hari
PyTorchFI is a runtime perturbation tool for deep neural networks (DNNs), implemented for the popular PyTorch deep learning platform. PyTorchFI enables users to perform perturbations on weights or neurons of DNNs at runtime. It is designed with the programmer in mind, providing a simple and easy-to-use API that requires as few as three lines of code to use. It also provides an extensible interface, enabling researchers to choose from various perturbation models (or design their own custom models), which allows for the study of how hardware errors (or general perturbations) propagate to the software layer of the DNN output. Additionally, PyTorchFI is extremely versatile: we demonstrate how it can be applied to five different use cases for dependability and reliability research, including resiliency analysis of classification networks, resiliency analysis of object detection networks, analysis of models robust to adversarial attacks, training resilient models, and DNN interpretability. This paper discusses the technical underpinnings and design decisions of PyTorchFI which make it an easy-to-use, extensible, fast, and versatile research tool. PyTorchFI is open-sourced and available for download via pip or GitHub at: https://github.com/pytorchfi
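The runtime-perturbation idea can be illustrated in plain PyTorch with a forward hook that corrupts one activation during inference. The sketch below shows only that underlying mechanism, not PyTorchFI's actual API; the model, layer index, and injected value are arbitrary demonstration choices, and the real three-line interface should be taken from the GitHub repository above.

```python
# Minimal sketch of runtime neuron perturbation in plain PyTorch, the kind of
# mechanism a tool like PyTorchFI builds on. This is NOT PyTorchFI's API; the
# layer index and injected value below are arbitrary illustration choices.
import torch
import torchvision.models as models

model = models.alexnet().eval()
x = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    golden = model(x).argmax(dim=1)          # fault-free ("golden") prediction

def perturb_activation(module, inputs, output):
    # Overwrite a single activation with a large erroneous value,
    # emulating, say, a corrupted value in an accelerator's output buffer.
    out = output.clone()
    out[0, 0, 0, 0] = 1e4
    return out                               # returned tensor replaces the layer output

# Attach the perturbation to the first convolution layer only.
handle = model.features[0].register_forward_hook(perturb_activation)
with torch.no_grad():
    faulty = model(x).argmax(dim=1)          # prediction under the injected fault
handle.remove()                              # restore the unperturbed model

print("golden:", golden.item(), "faulty:", faulty.item(),
      "mismatch:", golden.item() != faulty.item())
```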
{"title":"PyTorchFI: A Runtime Perturbation Tool for DNNs","authors":"Abdulrahman Mahmoud, Neeraj Aggarwal, Alex Nobbe, Jose Rodrigo Sanchez Vicarte, S. Adve, Christopher W. Fletcher, I. Frosio, S. Hari","doi":"10.1109/DSN-W50199.2020.00014","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00014","url":null,"abstract":"PyTorchFI is a runtime perturbation tool for deep neural networks (DNNs), implemented for the popular PyTorch deep learning platform. PyTorchFI enables users to perform perturbations on weights or neurons of DNNs at runtime. It is designed with the programmer in mind, providing a simple and easy-to-use API, requiring as little as three lines of code for use. It also provides an extensible interface, enabling researchers to choose from various perturbation models (or design their own custom models), which allows for the study of hardware error (or general perturbation) propagation to the software layer of the DNN output. Additionally, PyTorchFI is extremely versatile: we demonstrate how it can be applied to five different use cases for dependability and reliability research, including resiliency analysis of classification networks, resiliency analysis of object detection networks, analysis of models robust to adversarial attacks, training resilient models, and for DNN interpertability. This paper discusses the technical underpinnings and design decisions of PyTorchFI which make it an easy-to-use, extensible, fast, and versatile research tool. PyTorchFI is open-sourced and available for download via pip or github at: https://github.com/pytorchfi","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"8 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114036621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 59
Conceptual Design of Human-Drone Communication in Collaborative Environments
H. D. Doran, Monika Reif, Marco Oehler, Curdin Stoehr, Pierluigi Capone
Autonomous robots and drones will work collaboratively and cooperatively in tomorrow’s industry and agriculture. Before this becomes a reality, some form of standardised communication between man and machine must be established, one that specifically facilitates communication between autonomous machines and both trained and untrained human actors in the working environment. We present preliminary results on a human-drone and a drone-human language situated in the agricultural industry, where interactions with trained and untrained workers and visitors can be expected. We present basic visual indicators enhanced with flight patterns for drone-human interaction, and human signaling based on aircraft marshalling for human-drone interaction. We discuss preliminary results on image recognition and future work.
{"title":"Conceptual Design of Human-Drone Communication in Collaborative Environments","authors":"H. D. Doran, Monika Reif, Marco Oehler, Curdin Stoehr, Pierluigi Capone","doi":"10.1109/DSN-W50199.2020.00030","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00030","url":null,"abstract":"Autonomous robots and drones will work collaboratively and cooperatively in tomorrow’s industry and agriculture. Before this becomes a reality, some form of standardised communication between man and machine must be established that specifically facilitates communication between autonomous machines and both trained and un-trained human actors in the working environment. We present preliminary results on a human-drone and a drone-human language situated in the agricultural industry where interactions with trained and untrained workers and visitors can be expected. We present basic visual indicators enhanced with flight patterns for drone-human interaction and human signaling based on aircraft marshalling for humane-drone interaction. We discuss preliminary results on image recognition and future work.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116984802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Pelican: A Deep Residual Network for Network Intrusion Detection
Peilun Wu, Hui Guo
One challenge for building a secure network communication environment is how to effectively detect and prevent malicious network behaviours. Abnormal network activities threaten users’ privacy and potentially damage the function and infrastructure of the whole network. To address this problem, the network intrusion detection system (NIDS) has been used. By continuously monitoring network activities, the system can identify attacks in a timely manner and prompt counter-attack actions. NIDS has been evolving over the years. The current-generation NIDS incorporates machine learning (ML) as the core technology in order to improve the detection performance on novel attacks. However, the high detection rate achieved by a traditional ML-based detection method is often accompanied by a high false-alarm rate, which greatly degrades its overall performance. In this paper, we propose a deep neural network, Pelican, that is built upon specially-designed residual blocks. We evaluated Pelican on two network traffic datasets, NSL-KDD and UNSW-NB15. Our experiments show that Pelican can achieve high attack detection performance while keeping a much lower false alarm rate when compared with a set of up-to-date machine learning based designs.
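The abstract does not spell out the design of Pelican's residual blocks, so the following is only a generic sketch of a residual block over flat NIDS feature vectors (e.g. preprocessed NSL-KDD records). The layer sizes, depth, and class count are illustrative assumptions, not the paper's architecture.

```python
# Generic residual-block sketch for tabular NIDS features. The structure and
# dimensions are assumptions for illustration; Pelican's actual blocks differ.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim),
            nn.BatchNorm1d(dim),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        # Skip connection: the block learns a residual on top of x,
        # which eases optimisation as network depth grows.
        return self.act(x + self.body(x))

class TinyResNetIDS(nn.Module):
    def __init__(self, in_features=122, num_classes=5, width=128, depth=4):
        super().__init__()
        self.stem = nn.Linear(in_features, width)
        self.blocks = nn.Sequential(*[ResidualBlock(width, width) for _ in range(depth)])
        self.head = nn.Linear(width, num_classes)

    def forward(self, x):
        return self.head(self.blocks(torch.relu(self.stem(x))))

# 122 features is one common NSL-KDD preprocessing choice (hypothetical here).
logits = TinyResNetIDS()(torch.randn(32, 122))
print(logits.shape)  # torch.Size([32, 5])
```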
{"title":"Pelican: A Deep Residual Network for Network Intrusion Detection","authors":"Peilun Wu, Hui Guo","doi":"10.1109/DSN-W50199.2020.00018","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00018","url":null,"abstract":"One challenge for building a secure network communication environment is how to effectively detect and prevent malicious network behaviours. The abnormal network activities threaten users’ privacy and potentially damage the function and infrastructure of the whole network. To address this problem, the network intrusion detection system (NIDS) has been used. By continuously monitoring network activities, the system can timely identify attacks and prompt counter-attack actions. NIDS has been evolving over years. The current-generation NIDS incorporates machine learning (ML) as the core technology in order to improve the detection performance on novel attacks. However, the high detection rate achieved by a traditional ML-based detection method is often accompanied by large false-alarms, which greatly affects its overall performance. In this paper, we propose a deep neural network, Pelican, that is built upon specially-designed residual blocks. We evaluated Pelican on two network traffic datasets, NSL-KDD and UNSW-NB15. Our experiments show that Pelican can achieve a high attack detection performance while keeping a much low false alarm rate when compared with a set of up-to-date machine learning based designs.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126277900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information
Yiren Zhao, Ilia Shumailov, Han Cui, Xitong Gao, R. Mullins, Ross Anderson
Recent research on reinforcement learning (RL) has suggested that trained agents are vulnerable to maliciously-crafted adversarial samples. In this work, we show how such samples can be generalised from White-box and Grey-box attacks to a strong Black-box case, where the attacker has no knowledge of the agents, their training parameters, or their training methods. We use sequence-to-sequence models to predict a single action or a sequence of future actions that a trained agent will make. First, we show that our approximation model, based on time-series information from the agent, consistently predicts RL agents’ future actions with high accuracy in a Black-box setup on a wide range of games and RL algorithms. Second, we find that although adversarial samples are transferable from the sequence-to-sequence model to our RL agents, they often outperform Random Gaussian Noise only marginally. Third, we propose a novel use for adversarial samples in Black-box attacks on RL agents: they can be used to trigger a trained agent to misbehave after a specific time delay. This potentially enables an attacker to use devices controlled by RL agents as time bombs.
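The attack rests on an approximation model that predicts the victim agent's future actions from its observed time series. As a rough illustration of that idea only, the sketch below shows the shape of such a predictor in PyTorch; the LSTM architecture, observation dimension, and action count are hypothetical and are not the sequence-to-sequence model used in the paper.

```python
# Sketch of the approximation-model idea: predict a victim agent's next action
# from its observed state history. Sizes and architecture are illustrative.
import torch
import torch.nn as nn

class ActionPredictor(nn.Module):
    def __init__(self, obs_dim=8, n_actions=4, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq):
        # obs_seq: (batch, time, obs_dim), the victim's recent observations
        _, (h, _) = self.encoder(obs_seq)
        return self.head(h[-1])              # logits over the victim's next action

model = ActionPredictor()
obs_history = torch.randn(16, 20, 8)         # 16 rollouts, 20 timesteps each
logits = model(obs_history)
predicted_actions = logits.argmax(dim=-1)    # proxy labels for crafting attacks
print(predicted_actions.shape)               # torch.Size([16])
```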
{"title":"Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information","authors":"Yiren Zhao, Ilia Shumailov, Han Cui, Xitong Gao, R. Mullins, Ross Anderson","doi":"10.1109/DSN-W50199.2020.00013","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00013","url":null,"abstract":"Recent research on reinforcement learning (RL) has suggested that trained agents are vulnerable to maliciously-crafted adversarial samples. In this work, we show how such samples can be generalised from White-box and Grey-box attacks to a strong Black-box case, where the attacker has no knowledge of the agents, their training parameters or their training methods. We use sequence-to-sequence models to predict a single action or a sequence of future actions that a trained agent will make. First, we show that our approximation model, based on time-series information from the agent, consistently predicts RL agents’ future actions with high accuracy in a Black-box setup on a wide range of games and RL algorithms. Second, we find that although adversarial samples are transferable from the sequence-to-sequence model to our RL agents, they often outperform Random Gaussian Noise only marginally. Third, we propose a novel use for adversarial samples in Black-box attacks of RL agents: they can be used to trigger a trained agent to misbehave after a specific time delay. This potentially enables an attacker to use devices controlled by RL agents as time bombs.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121390813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
BlurNet: Defense by Filtering the Feature Maps
Ravi Raju, Mikko H. Lipasti
Recently, the field of adversarial machine learning has been garnering attention by showing that state-of-the-art deep neural networks are vulnerable to adversarial examples, stemming from small perturbations being added to the input image. Adversarial examples are generated by a malicious adversary either by obtaining access to the model parameters, such as gradient information, to alter the input, or by attacking a substitute model and transferring those malicious examples over to attack the victim model. Specifically, one of these attack algorithms, Robust Physical Perturbations $(RP_{2})$, generates adversarial images of stop signs with black and white stickers to achieve high targeted misclassification rates against standard-architecture traffic sign classifiers. In this paper, we propose BlurNet, a defense against the $RP_{2}$ attack. First, we motivate the defense with a frequency analysis of the first-layer feature maps of the network on the LISA dataset, which shows that high-frequency noise is introduced into the input image by the $RP_{2}$ algorithm. To remove the high-frequency noise, we introduce a depthwise convolution layer of standard blur kernels after the first layer. We perform a black-box transfer attack to show that low-pass filtering the feature maps is more beneficial than filtering the input. We then present various regularization schemes to incorporate this low-pass filtering behavior into the training regime of the network and perform white-box attacks. We conclude with an adaptive attack evaluation to show that the success rate of the attack drops from 90% to 20% with total variation regularization, one of the proposed defenses.
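The defense's core component, a depthwise convolution of fixed blur kernels applied to the first layer's feature maps, can be sketched in PyTorch as follows. The 3x3 mean filter is just one choice of "standard blur kernel", and the surrounding first-conv example is illustrative; neither is taken from the paper.

```python
# Sketch of a fixed depthwise blur applied to feature maps, the low-pass
# filtering idea behind BlurNet. The 3x3 mean kernel is an assumption; the
# paper's exact kernels are not reproduced here.
import torch
import torch.nn as nn

class DepthwiseBlur(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.blur = nn.Conv2d(
            channels, channels, kernel_size,
            padding=kernel_size // 2,
            groups=channels,      # depthwise: one filter per feature map
            bias=False,
        )
        # Fixed (non-trainable) mean filter: every tap equals 1 / k^2.
        weight = torch.full_like(self.blur.weight, 1.0 / kernel_size ** 2)
        self.blur.weight = nn.Parameter(weight, requires_grad=False)

    def forward(self, x):
        return self.blur(x)

# Example: low-pass filter the output of a first convolution layer.
first_conv = nn.Conv2d(3, 32, 3, padding=1)
blur = DepthwiseBlur(32)
feature_maps = first_conv(torch.randn(1, 3, 32, 32))
smoothed = blur(feature_maps)
print(smoothed.shape)  # torch.Size([1, 32, 32, 32])
```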
{"title":"BlurNet: Defense by Filtering the Feature Maps","authors":"Ravi Raju, Mikko H. Lipasti","doi":"10.1109/DSN-W50199.2020.00016","DOIUrl":"https://doi.org/10.1109/DSN-W50199.2020.00016","url":null,"abstract":"Recently, the field of adversarial machine learning has been garnering attention by showing that state-of-the-art deep neural networks are vulnerable to adversarial examples, stemming from small perturbations being added to the input image. Adversarial examples are generated by a malicious adversary by obtaining access to the model parameters, such as gradient information, to alter the input or by attacking a substitute model and transferring those malicious examples over to attack the victim model. Specifically, one of these attack algorithms, Robust Physical Perturbations $(RP_{2})$, generates adversarial images of stop signs with black and white stickers to achieve high targeted misclassification rates against standard-architecture traffic sign classifiers. In this paper, we propose BlurNet, a defense against the RP2 attack. First, we motivate the defense with a frequency analysis of the first layer feature maps of the network on the LISA dataset, which shows that high frequency noise is introduced into the input image by the RP2 algorithm. To remove the high frequency noise, we introduce a depthwise convolution layer of standard blur kernels after the first layer. We perform a blackbox transfer attack to show that low-pass filtering the feature maps is more beneficial than filtering the input. We then present various regularization schemes to incorporate this low-pass filtering behavior into the training regime of the network and perform white-box attacks. We conclude with an adaptive attack evaluation to show that the success rate of the attack drops from 90% to 20% with total variation regularization, one of the proposed defenses.","PeriodicalId":427687,"journal":{"name":"2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122750575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14