Reinforcement Learning Environment for Tactical Networks
Thies Möhlenhof, N. Jansen, Wiam Rachid
2021 International Conference on Military Communication and Information Systems (ICMCIS), 2021-05-04. DOI: 10.1109/ICMCIS52405.2021.9486411
Providing situational awareness is a crucial requirement and a challenging task in the tactical domain. Tactical networks can be characterized as Disconnected, Intermittent and Limited (DIL) networks. Cross-layer approaches in DIL networks can help to make better use of tactical communications resources and thus improve the overall situational awareness perceived by the user. Specifying suitable cross-layer strategies (heuristics), which describe the rules for optimizing the applications, remains a challenging task. We introduce an architectural concept that proposes the use of decentralized, machine-learning-based reinforcement agents to improve the use of network resources in DIL networks. This approach should lead to more sophisticated strategies that the agents learn autonomously. As a basis for training such reinforcement learning (RL) agents, an architecture for a learning environment is introduced. Since training these agents requires a large number of scenarios, an additional tactical model is defined. The purpose of the tactical model is to generate scenarios with dynamically changing network conditions and dynamic information exchanges between the applications, and thus to provide the basis for training the RL agents. The tactical model itself is also based on RL agents, which simulate military units in a war-gaming environment.
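To make the described setup concrete, the sketch below shows what a minimal RL environment for a DIL network might look like: the state is the current link capacity (standing in for disconnected, intermittent and limited conditions produced by a tactical model), the action is an application-level adaptation (here, a reporting rate), and the reward is a proxy for situational awareness penalized by network overload. All class names, action values, and reward terms are illustrative assumptions, not the paper's actual interface.

```python
import random


class DILNetworkEnv:
    """Hypothetical sketch of an RL environment for a DIL tactical network.

    State: current link capacity in kbit/s. Action: how many situational
    reports per minute the application sends. Reward: awareness gained,
    minus a penalty when the offered load exceeds the link capacity.
    Numbers and names are assumptions for illustration only.
    """

    ACTIONS = [1, 5, 10]  # candidate reporting rates (reports per minute)

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.capacity = 10.0

    def reset(self):
        self.capacity = 10.0
        return self.capacity

    def step(self, action_idx):
        rate = self.ACTIONS[action_idx]
        load = rate * 1.0  # assume ~1 kbit/s sustained load per report rate unit
        # More reports improve awareness up to what the link can carry;
        # traffic beyond capacity is wasted and penalized.
        reward = min(rate, self.capacity) - max(0.0, load - self.capacity)
        # Stand-in for the tactical model: capacity drifts between steps,
        # with occasional total outages (the "Disconnected" in DIL).
        if self.rng.random() < 0.1:
            self.capacity = 0.0
        else:
            self.capacity = self.rng.uniform(1.0, 20.0)
        return self.capacity, reward


if __name__ == "__main__":
    env = DILNetworkEnv(seed=0)
    state = env.reset()
    total = 0.0
    for _ in range(100):
        action = 2 if state >= 10 else 0  # trivial fixed heuristic policy
        state, reward = env.step(action)
        total += reward
    print(f"return over 100 steps: {total:.1f}")
```

A learned agent would replace the fixed heuristic in the loop with a policy trained against many such dynamically generated scenarios, which is precisely the role the paper assigns to the tactical model.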