{"title":"具有Kullback-Leibler控制成本的在线马尔可夫决策过程","authors":"Peng Guan, M. Raginsky, R. Willett","doi":"10.1109/ACC.2012.6314926","DOIUrl":null,"url":null,"abstract":"We consider an online (real-time) control problem that involves an agent performing a discrete-time random walk over a finite state space. The agent's action at each time step is to specify the probability distribution for the next state given the current state. Following the set-up of Todorov (2007, 2009), the state-action cost at each time step is a sum of a nonnegative state cost and a control cost given by the Kullback-Leibler divergence between the agent's next-state distribution and that determined by some fixed passive dynamics. The online aspect of the problem is due to the fact that the state cost functions are generated by a dynamic environment, and the agent learns the current state cost only after having selected the corresponding action. We give an explicit construction of an efficient strategy that has small regret (i.e., the difference between the total state-action cost incurred causally and the smallest cost attainable using noncausal knowledge of the state costs) under mild regularity conditions on the passive dynamics. We demonstrate the performance of our proposed strategy on a simulated target tracking problem.","PeriodicalId":74510,"journal":{"name":"Proceedings of the ... American Control Conference. American Control Conference","volume":"2 1","pages":"1388-1393"},"PeriodicalIF":0.0000,"publicationDate":"2012-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":"{\"title\":\"Online Markov decision processes with Kullback-Leibler control cost\",\"authors\":\"Peng Guan, M. Raginsky, R. Willett\",\"doi\":\"10.1109/ACC.2012.6314926\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We consider an online (real-time) control problem that involves an agent performing a discrete-time random walk over a finite state space. The agent's action at each time step is to specify the probability distribution for the next state given the current state. Following the set-up of Todorov (2007, 2009), the state-action cost at each time step is a sum of a nonnegative state cost and a control cost given by the Kullback-Leibler divergence between the agent's next-state distribution and that determined by some fixed passive dynamics. The online aspect of the problem is due to the fact that the state cost functions are generated by a dynamic environment, and the agent learns the current state cost only after having selected the corresponding action. We give an explicit construction of an efficient strategy that has small regret (i.e., the difference between the total state-action cost incurred causally and the smallest cost attainable using noncausal knowledge of the state costs) under mild regularity conditions on the passive dynamics. We demonstrate the performance of our proposed strategy on a simulated target tracking problem.\",\"PeriodicalId\":74510,\"journal\":{\"name\":\"Proceedings of the ... American Control Conference. American Control Conference\",\"volume\":\"2 1\",\"pages\":\"1388-1393\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-06-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"13\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the ... American Control Conference. 
American Control Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ACC.2012.6314926\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ... American Control Conference. American Control Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ACC.2012.6314926","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Online Markov decision processes with Kullback-Leibler control cost
We consider an online (real-time) control problem that involves an agent performing a discrete-time random walk over a finite state space. The agent's action at each time step is to specify the probability distribution for the next state given the current state. Following the set-up of Todorov (2007, 2009), the state-action cost at each time step is a sum of a nonnegative state cost and a control cost given by the Kullback-Leibler divergence between the agent's next-state distribution and that determined by some fixed passive dynamics. The online aspect of the problem is due to the fact that the state cost functions are generated by a dynamic environment, and the agent learns the current state cost only after having selected the corresponding action. We give an explicit construction of an efficient strategy that has small regret (i.e., the difference between the total state-action cost incurred causally and the smallest cost attainable using noncausal knowledge of the state costs) under mild regularity conditions on the passive dynamics. We demonstrate the performance of our proposed strategy on a simulated target tracking problem.
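As a rough illustration of the cost structure described in the abstract (not the authors' implementation), the sketch below computes the per-step state-action cost over a finite state space: a nonnegative state cost plus the Kullback-Leibler divergence between the agent's chosen next-state distribution and the one given by the fixed passive dynamics. Function and variable names such as `state_action_cost`, `passive`, and `chosen` are hypothetical.

```python
import numpy as np

def kl_divergence(u, p, eps=1e-12):
    """KL divergence D(u || p) between two distributions on a finite state space.
    Uses the convention 0 * log(0/p) = 0 and assumes u is absolutely
    continuous with respect to p (u > 0 implies p > 0)."""
    u = np.asarray(u, dtype=float)
    p = np.asarray(p, dtype=float)
    mask = u > 0
    return float(np.sum(u[mask] * np.log(u[mask] / (p[mask] + eps))))

def state_action_cost(state_cost, u_next, p_next):
    """Per-step cost: nonnegative state cost plus the KL control cost between
    the agent's next-state distribution u_next and the passive dynamics p_next."""
    return state_cost + kl_divergence(u_next, p_next)

# Example: 3-state chain; the agent biases transitions away from the passive dynamics.
passive = np.array([0.5, 0.3, 0.2])  # passive next-state distribution from the current state
chosen = np.array([0.2, 0.3, 0.5])   # agent's chosen next-state distribution (the action)
f_t = 1.0                            # state cost revealed by the environment after acting
print(state_action_cost(f_t, chosen, passive))
```

In the online setting, the environment reveals the state cost `f_t` only after the agent has committed to `chosen`, so the total of these per-step costs is compared against the best cost achievable with noncausal knowledge of the state costs, and that gap is the regret the paper bounds.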