{"title":"On asymptotic value for dynamic games with saddle point","authors":"D. Khlopin","doi":"10.1137/1.9781611974072.39","DOIUrl":null,"url":null,"abstract":"The paper is concerned with two-person games with saddle point. We investigate the limits of value functions for long-time-average payoff, discounted average payoff, and the payoff that follows a probability density. \nMost of our assumptions restrict the dynamics of games. In particular, we assume the closedness of strategies under concatenation. It is also necessary for the value function to satisfy Bellman's optimality principle, even if in a weakened, asymptotic sense. \nWe provide two results. The first one is a uniform Tauber result for games: if the value functions for long-time-average payoff converge uniformly, then there exists the uniform limit for probability densities from a sufficiently broad set; moreover, these limits coincide. The second one is the uniform Abel result: if a uniform limit for self-similar densities exists, then the uniform limit for long-time average payoff also exists, and they coincide.","PeriodicalId":193106,"journal":{"name":"SIAM Conf. on Control and its Applications","volume":"57 3","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIAM Conf. on Control and its Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1137/1.9781611974072.39","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 8
Abstract
The paper is concerned with two-person games that admit a saddle point. We investigate the limits of the value functions for the long-time-average payoff, the discounted average payoff, and payoffs weighted by a probability density.
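For orientation, here is a minimal sketch of the three averaging schemes in notation of our own (the symbols $g$, $v_T$, $v_\lambda$, $v_\mu$ are not the paper's): writing $g(t)$ for the running payoff along a play,
$$
v_T \;=\; \frac{1}{T}\int_0^T g(t)\,dt,
\qquad
v_\lambda \;=\; \lambda\int_0^\infty e^{-\lambda t}\,g(t)\,dt,
\qquad
v_\mu \;=\; \int_0^\infty \mu(t)\,g(t)\,dt,
$$
where $\mu$ is a probability density on $[0,\infty)$. The discounted average is the special case $\mu(t) = \lambda e^{-\lambda t}$, and the long-time average corresponds to the uniform density $\mu = \frac{1}{T}\mathbf{1}_{[0,T]}$.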
Most of our assumptions restrict the dynamics of the games. In particular, we assume that the set of strategies is closed under concatenation. It is also necessary that the value function satisfy Bellman's optimality principle, if only in a weakened, asymptotic sense.
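To indicate what the optimality principle looks like in this setting, consider the unnormalized value $U_T(x)$ of the total payoff $\int_0^T g(t)\,dt$ for the game of length $T$ started at state $x$ (again a sketch in our own notation, not the paper's exact condition). The exact dynamic programming principle reads
$$
U_{T}(x) \;=\; \operatorname{val}\Big[\int_0^{s} g(t)\,dt \;+\; U_{T-s}\big(x(s)\big)\Big],
\qquad 0 \le s \le T,
$$
where $\operatorname{val}$ denotes the saddle-point value over the players' strategies on $[0,s]$; the paper requires such an identity only in a weakened, asymptotic form.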
We provide two results. The first is a uniform Tauberian theorem for games: if the value functions for the long-time-average payoff converge uniformly, then the value functions for payoffs weighted by probability densities from a sufficiently broad set also converge uniformly, and the limits coincide. The second is a uniform Abelian theorem: if a uniform limit exists for self-similar densities, then a uniform limit for the long-time-average payoff also exists, and the two limits coincide.
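In the notation introduced above, the two statements can be paraphrased, roughly, as follows (our summary, not the paper's precise formulation):
$$
\text{(Tauberian)}\quad v_T \to v^* \ \text{uniformly as } T \to \infty
\;\;\Longrightarrow\;\;
v_{\mu} \to v^* \ \text{uniformly along a sufficiently broad set of densities } \mu;
$$
$$
\text{(Abelian)}\quad v_{\mu_\lambda} \to v^* \ \text{uniformly as } \lambda \downarrow 0
\;\;\Longrightarrow\;\;
v_T \to v^* \ \text{uniformly as } T \to \infty,
$$
where $\mu_\lambda(t) = \lambda\,\mu_1(\lambda t)$ is a self-similar family of densities; the exponential densities $\mu_\lambda(t) = \lambda e^{-\lambda t}$ of the discounted average are the model example.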