Reinforcement Learning Environment for Tactical Networks

Thies Möhlenhof, N. Jansen, Wiam Rachid
{"title":"Reinforcement Learning Environment for Tactical Networks","authors":"Thies Möhlenhof, N. Jansen, Wiam Rachid","doi":"10.1109/ICMCIS52405.2021.9486411","DOIUrl":null,"url":null,"abstract":"Providing situational awareness is a crucial requirement and a challenging task in the tactical domain. Tactical networks can be characterized as Disconnected, Intermittent and Limited (DIL) networks. The use of cross-layer approaches in DIL networks can help to better utilize the tactical communications resources and thus improve the overall situational awareness perceived by the user. The specification of suitable cross-layer strategies (heuristics) which describe the rules for optimizing the applications remains a challenging task. We introduce an architectural concept which proposes the use of decentralized, machine learning based reinforcement agents to improve the use of network resources in DIL networks. This approach shall lead to more sophisticated strategies which are learned autonomously by the agents. As basis for the training of such reinforcement learning (RL) agents, an architecture for a learning environment is introduced. Since for the training of these agents a large number of scenarios is needed, an additional tactical model is defined. The purpose of the tactical model is to generate scenarios with dynamically changing network conditions and dynamic information exchanges between the applications and thus build the basis for training the RL agents. The tactical model itself is also based on RL agents, which simulate military units in a war gaming environment.","PeriodicalId":246290,"journal":{"name":"2021 International Conference on Military Communication and Information Systems (ICMCIS)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Military Communication and Information Systems (ICMCIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMCIS52405.2021.9486411","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Providing situational awareness is a crucial requirement and a challenging task in the tactical domain. Tactical networks can be characterized as Disconnected, Intermittent and Limited (DIL) networks. Cross-layer approaches in DIL networks can help to better utilize tactical communications resources and thus improve the overall situational awareness perceived by the user. However, specifying suitable cross-layer strategies (heuristics) that describe the rules for optimizing the applications remains difficult. We introduce an architectural concept that proposes decentralized, machine-learning-based reinforcement agents to improve the use of network resources in DIL networks. This approach is intended to yield more sophisticated strategies that the agents learn autonomously. As a basis for training such reinforcement learning (RL) agents, an architecture for a learning environment is introduced. Because training these agents requires a large number of scenarios, an additional tactical model is defined. Its purpose is to generate scenarios with dynamically changing network conditions and dynamic information exchanges between the applications, thereby providing the basis for training the RL agents. The tactical model itself is also based on RL agents, which simulate military units in a war-gaming environment.
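The paper itself publishes no code, so the following is only a minimal sketch of what a learning environment for a decentralized cross-layer agent could look like, assuming a Gymnasium-style interface. The class name `DILNetworkEnv`, the observed state (link capacity and message backlog), the discrete send-rate action set, and the reward shaping are all illustrative assumptions, not the authors' actual design.

```python
# Hypothetical sketch of a DIL-network learning environment with a
# Gymnasium-style interface. All names, state variables and reward
# terms are illustrative assumptions, not the paper's actual API.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class DILNetworkEnv(gym.Env):
    """Toy cross-layer optimization environment.

    Observation: current link capacity (kbit/s) and application queue
    backlog (messages). Action: choose one of three application send
    rates. Reward: delivered messages minus a penalty for drops.
    """

    SEND_RATES = [1.0, 4.0, 16.0]  # messages per step (assumed levels)

    def __init__(self):
        self.observation_space = spaces.Box(
            low=np.array([0.0, 0.0]), high=np.array([64.0, 100.0]),
            dtype=np.float32)
        self.action_space = spaces.Discrete(len(self.SEND_RATES))

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)  # seeds self.np_random
        self.capacity = 16.0      # link capacity in kbit/s
        self.backlog = 10.0       # queued messages
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        rate = self.SEND_RATES[action]
        # Stand-in for DIL dynamics: capacity drifts randomly and can
        # collapse to zero (disconnected / intermittent / limited).
        self.capacity = float(np.clip(
            self.capacity + self.np_random.normal(0.0, 2.0), 0.0, 64.0))
        deliverable = self.capacity / 2.0      # messages the link carries
        sent = min(rate, self.backlog + 1.0)   # one new message per step
        delivered = min(sent, deliverable)
        dropped = sent - delivered
        self.backlog = float(np.clip(
            self.backlog + 1.0 - delivered, 0.0, 100.0))
        reward = delivered - 2.0 * dropped     # value timely delivery
        self.steps += 1
        truncated = self.steps >= 200
        return self._obs(), reward, False, truncated, {}

    def _obs(self):
        return np.array([self.capacity, self.backlog], dtype=np.float32)
```

In the architecture the paper describes, the scenario dynamics stubbed out above with random capacity drift would instead be driven by the RL-based tactical model, which simulates military units in a war game and thereby produces the changing network conditions and information exchanges used to train the agents.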