An Adaptive Markov Game Model for Threat Intent Inference
Dan Shen, Genshe Chen, Jose B. Cruz, C. Kwan, M. Kruger
2007 IEEE Aerospace Conference, pp. 1-13, published 2007-03-03
DOI: 10.1109/AERO.2007.352800
Citations: 9
Abstract
In an adversarial military environment, it is important to predict the enemy's tactical intent efficiently and promptly from lower-level spatial and temporal information. In this paper, we propose a decentralized Markov game (MG) theoretic approach to estimate the belief in each possible enemy course of action (ECOA), which is used to model adversary intent. The approach has the following advantages: (1) It is decentralized: each cluster or team makes decisions mostly from local information, giving each group more autonomy and flexibility. (2) A Markov decision process (MDP) can effectively model the uncertainties in a noisy military environment. (3) It is a game model with three players: the red force (enemies), the blue force (friendly forces), and the white force (neutral objects). (4) Correlated-Q reinforcement learning is integrated: because the actual value functions are not normally known and must be estimated, we incorporate the correlated-Q learning concept into our game approach to dynamically adjust each player's payoff function. A simulation software package has been developed to demonstrate the performance of the proposed algorithms. Simulations have verified that the proposed algorithms are scalable, stable, and satisfactory in performance.
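To make the correlated-Q idea in the abstract concrete, the minimal Python sketch below performs one tabular Q-backup over joint actions for a two-player (red/blue) Markov game. The state names, action names, and the uniform correlating device used to value the next state are hypothetical simplifications introduced here for illustration; the paper's actual method solves for a correlated equilibrium at each step rather than averaging uniformly.

```python
# Hypothetical joint-action Q-tables for a two-player Markov game.
# All state and action names are illustrative, not taken from the paper.
states = ["s0", "s1"]
red_actions = ["advance", "hold"]
blue_actions = ["intercept", "observe"]

# Q[player][(state, a_red, a_blue)] -> expected return for that player
Q = {p: {(s, ar, ab): 0.0
         for s in states
         for ar in red_actions
         for ab in blue_actions}
     for p in ("red", "blue")}

def joint_value(player, state):
    """Value of a state under a uniform correlating device
    (a stand-in for the correlated-equilibrium solve)."""
    cells = [Q[player][(state, ar, ab)]
             for ar in red_actions for ab in blue_actions]
    return sum(cells) / len(cells)

def cq_update(player, state, a_red, a_blue, reward, next_state,
              alpha=0.1, gamma=0.9):
    """One correlated-Q-style backup for one player."""
    key = (state, a_red, a_blue)
    target = reward + gamma * joint_value(player, next_state)
    Q[player][key] += alpha * (target - Q[player][key])

# One illustrative transition: red advances, blue intercepts.
cq_update("red", "s0", "advance", "intercept", reward=1.0, next_state="s1")
print(round(Q["red"][("s0", "advance", "intercept")], 3))  # 0.1
```

With all Q-values initialized to zero, the next-state value is 0, so the backup moves the entry toward the immediate reward by the learning rate: 0.1 × 1.0 = 0.1. Repeating such backups across players while re-solving the equilibrium is what lets the payoff estimates adapt online.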