Towards Secure Multi-Agent Deep Reinforcement Learning: Adversarial Attacks and Countermeasures

Changgang Zheng, Chen Zhen, Haiyong Xie, Shufan Yang

2022 IEEE Conference on Dependable and Secure Computing (DSC), published 2022-06-22. DOI: 10.1109/DSC54232.2022.9888828
Abstract
Reinforcement Learning (RL) is one of the most popular methods for solving complex sequential decision-making problems. Deep RL requires careful sensing of the environment, selection of algorithms and hyper-parameters via software agents, and simultaneous prediction of the best actions to take. The RL computing paradigm is steadily becoming a popular solution in numerous fields. However, many deployment decisions, such as the security of distributed computing, the defence of network communication, and algorithmic details such as the frequency of batch updates and the number of time steps, are typically not treated as an integrated system. This makes appropriate vulnerability management difficult when applying deep RL to real-life problems. For these reasons, we propose a framework that allows users to focus on reasoning, trust, and explainability in accordance with human perception, and then to explore potential threats, especially adversarial attacks and their countermeasures.
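To make the threat model concrete, the following is a minimal, hypothetical sketch of one adversarial attack class the abstract alludes to: an FGSM-style perturbation of an RL agent's observation, which nudges each observed feature against the gradient of the agent's preferred action so that the policy switches to a different action. The linear policy and all names here are illustrative assumptions, not details from the paper.

```python
def policy_logits(obs, W):
    # Linear policy for the sketch: one logit per action,
    # logit[a] = dot(W[a], obs).
    return [sum(w * o for w, o in zip(row, obs)) for row in W]

def fgsm_observation_attack(obs, W, eps=0.1):
    """Perturb each observation feature by +/- eps (sign method)
    so as to suppress the agent's currently preferred action."""
    logits = policy_logits(obs, W)
    best = max(range(len(logits)), key=lambda a: logits[a])
    # For a linear policy, the gradient of the best action's logit
    # with respect to the observation is that action's weight row.
    grad = W[best]
    # Step against the gradient's sign to lower the preferred logit.
    sign = lambda g: 1 if g > 0 else -1 if g < 0 else 0
    return [o - eps * sign(g) for o, g in zip(obs, grad)]
```

With a tiny two-action policy, a perturbation of magnitude `eps` can be enough to flip the selected action, which is exactly why the paper argues that observation channels need the same vulnerability management as the rest of the deployed system.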