Model Approximation in MDPs With Unbounded Per-Step Cost
Berk Bozkurt; Aditya Mahajan; Ashutosh Nayyar; Yi Ouyang
IEEE Transactions on Automatic Control, vol. 70, no. 7, pp. 4624-4639
DOI: 10.1109/TAC.2025.3532181
Published: 2025-01-20
Citation count: 0
Abstract
In this article, we consider the problem of designing a control policy for an infinite-horizon discounted cost Markov decision process $\mathcal {M}$ when we only have access to an approximate model $\hat{\mathcal {M}}$. How well does an optimal policy $\hat{\pi }^{\star }$ of the approximate model perform when used in the original model $\mathcal {M}$? We answer this question by bounding a weighted norm of the difference between the value function of $\hat{\pi }^\star$ when used in $\mathcal {M}$ and the optimal value function of $\mathcal {M}$. We then extend our results and obtain potentially tighter upper bounds by considering affine transformations of the per-step cost. We further provide upper bounds that explicitly depend on the weighted distance between cost functions and weighted distance between transition kernels of the original and approximate models. We present examples to illustrate our results.
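The central question in the abstract, how well the optimal policy $\hat{\pi}^{\star}$ of an approximate model performs in the original model $\mathcal{M}$, can be made concrete numerically on a small finite MDP. The sketch below is not the paper's method; it is a minimal illustration, with hypothetical randomly generated models, of the quantity being bounded: the gap between the value of $\hat{\pi}^{\star}$ when used in $\mathcal{M}$ and the optimal value of $\mathcal{M}$ (the paper bounds a weighted norm of this difference; here a plain sup-norm is shown).

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 4, 2, 0.9

# Hypothetical original model M: random per-step costs and transition kernels.
c = rng.uniform(0.0, 1.0, size=(S, A))
P = rng.dirichlet(np.ones(S), size=(A, S))        # P[a, s, :] is a distribution

# Approximate model M_hat: perturbed, renormalized transition kernels.
P_hat = P + 0.05 * rng.dirichlet(np.ones(S), size=(A, S))
P_hat /= P_hat.sum(axis=-1, keepdims=True)

def value_iteration(P, c, gamma, tol=1e-12):
    """Return the optimal value function and a greedy optimal policy."""
    Sn, _ = c.shape
    V = np.zeros(Sn)
    while True:
        # Q[s, a] = c[s, a] + gamma * sum_j P[a, s, j] * V[j]
        Q = c + gamma * np.einsum('asj,j->sa', P, V)
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=1)
        V = V_new

def evaluate(P, c, gamma, pi):
    """Exact policy evaluation: solve (I - gamma * P_pi) V = c_pi."""
    idx = np.arange(c.shape[0])
    P_pi = P[pi, idx, :]
    return np.linalg.solve(np.eye(len(idx)) - gamma * P_pi, c[idx, pi])

V_star, _ = value_iteration(P, c, gamma)        # optimal value function of M
_, pi_hat = value_iteration(P_hat, c, gamma)    # optimal policy of M_hat
V_pi_hat = evaluate(P, c, gamma, pi_hat)        # its value when used in M

# Sup-norm model-approximation gap; nonnegative since V_star is optimal in M.
gap = np.max(V_pi_hat - V_star)
print(gap)
```

The gap is nonnegative by optimality of $V^{\star}_{\mathcal{M}}$ and shrinks as $\hat{\mathcal{M}}$ approaches $\mathcal{M}$; the paper's contribution is to upper-bound it (in a weighted norm, for possibly unbounded costs) in terms of weighted distances between the cost functions and transition kernels of the two models.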
Journal introduction:
In the IEEE Transactions on Automatic Control, the IEEE Control Systems Society publishes high-quality papers on the theory, design, and applications of control engineering. Two types of contributions are regularly considered:
1) Papers: Presentation of significant research, development, or application of control concepts.
2) Technical Notes and Correspondence: Brief technical notes, comments on published areas or established control topics, corrections to papers and notes published in the Transactions.
In addition, special papers (tutorials, surveys, and perspectives on the theory and applications of control systems topics) are solicited.