Monte Carlo Methods for Sensor Management in Target Tracking
C. Kreucher, A. Hero
2006 IEEE Nonlinear Statistical Signal Processing Workshop, September 2006
DOI: 10.1109/NSSPW.2006.4378862
Citations: 6
Abstract
Surveillance for multi-target detection, identification, and tracking is one of the natural problem domains in which particle filtering approaches have been gainfully applied. Sequential importance sampling is used to generate and update estimates of the joint multi-target probability density for the number of targets, their dynamical model, and their state vector. In many cases there are a large number of degrees of freedom in sensor deployment, e.g., choice of waveform or modality. This gives rise to a resource allocation problem that can be formulated as determining an optimal policy for a partially observable Markov decision process (POMDP). In this paper we summarize approaches to solving this problem, which involve using particle filtering to estimate both posterior state probabilities and the expected reward for myopic and multistage policies.
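The combination the abstract describes — a particle approximation of the target posterior, plus a greedy (myopic) policy that scores each candidate sensor action by its expected reward under that posterior — can be illustrated with a minimal sketch. This is not the paper's joint multi-target probability density (JMPD) filter; it is a single-target, one-dimensional toy with hypothetical helpers (`update`, `myopic_dwell`), a random-walk dynamical model, and posterior probability mass in the sensor's field of view standing in for the expected reward:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500  # number of particles

def update(particles, weights, meas, meas_std):
    """Reweight particles by a Gaussian measurement likelihood; resample on ESS collapse."""
    weights = weights * np.exp(-0.5 * ((meas - particles) / meas_std) ** 2)
    weights = weights / weights.sum()
    if 1.0 / np.sum(weights ** 2) < 0.5 * N:  # effective sample size below half N
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx].copy(), np.full(N, 1.0 / N)
    return particles, weights

def myopic_dwell(particles, weights, centers, fov):
    """Greedy sensor choice: point at the dwell capturing the most posterior mass."""
    rewards = [weights[np.abs(particles - c) < fov].sum() for c in centers]
    return centers[int(np.argmax(rewards))]

true_pos, centers, fov = 0.0, [-5.0, 0.0, 5.0], 3.0
particles = rng.normal(0.0, 2.0, size=N)   # diffuse prior over target position
weights = np.full(N, 1.0 / N)

for _ in range(20):
    true_pos += rng.normal(0.0, 0.1)                       # target drifts
    particles = particles + rng.normal(0.0, 0.2, size=N)   # propagate via the prior proposal
    c = myopic_dwell(particles, weights, centers, fov)     # choose where to point the sensor
    if abs(true_pos - c) < fov:                            # measurement only inside the chosen FOV
        meas = true_pos + rng.normal(0.0, 0.5)
        particles, weights = update(particles, weights, meas, meas_std=0.5)

estimate = float(np.sum(weights * particles))
```

The multistage policies the paper summarizes would replace the one-step reward in `myopic_dwell` with an expected reward rolled out over future actions, which is where the POMDP formulation and Monte Carlo evaluation of the value function enter.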