Markov Chain Variance Estimation: A Stochastic Approximation Approach
Shubhada Agrawal, Prashanth L. A., Siva Theja Maguluri
arXiv - STAT - Statistics Theory, published 2024-09-09. DOI: arxiv-2409.05733 (https://doi.org/arxiv-2409.05733)
Abstract
We consider the problem of estimating the asymptotic variance of a function defined on a Markov chain, an important step for statistical inference of the stationary mean.
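For context, and in standard notation that may differ from the paper's: for an ergodic chain $(X_t)_{t\ge 0}$ with stationary distribution $\pi$ and a real-valued function $f$, the asymptotic variance is the constant appearing in the central limit theorem for the empirical mean,

$$
\sigma^2_f \;=\; \lim_{n\to\infty} n\,\operatorname{Var}\!\left(\frac{1}{n}\sum_{t=1}^{n} f(X_t)\right)
\;=\; \operatorname{Var}_\pi\bigl(f(X_0)\bigr) \;+\; 2\sum_{k\ge 1}\operatorname{Cov}_\pi\bigl(f(X_0),\, f(X_k)\bigr),
$$

so that $\sqrt{n}\,\bigl(\tfrac{1}{n}\sum_{t} f(X_t) - \pi(f)\bigr) \Rightarrow \mathcal{N}(0, \sigma^2_f)$ under suitable ergodicity conditions; confidence intervals for the stationary mean therefore require an estimate of $\sigma^2_f$.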
We design the first recursive estimator that requires $O(1)$ computation at each step, does not require storing any historical samples or any prior knowledge of the run length, and has the optimal $O(\frac{1}{n})$ rate of convergence for the mean-squared error (MSE), with provable finite-sample guarantees. Here, $n$ refers to the total number of samples generated.
The previously best-known rate of convergence in MSE was $O(\frac{\log n}{n})$, achieved by jackknifed estimators, which also do not enjoy these other desirable properties. Our estimator is based on linear stochastic approximation of an equivalent formulation of the asymptotic variance in terms of the solution of the Poisson equation.
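As standard background (our notation, which may differ from the paper's): if $V$ solves the Poisson equation $V - PV = \bar f$ with $\bar f := f - \pi(f)$, where $P$ is the transition kernel and $\pi$ the stationary distribution, then the asymptotic variance admits the equivalent form

$$
\sigma^2_f \;=\; \mathbb{E}_\pi\!\left[\bar f(X)\,\bigl(2V(X) - \bar f(X)\bigr)\right]
\;=\; \mathbb{E}_\pi\!\left[V(X)^2 - (PV)(X)^2\right],
$$

which involves only expectations under $\pi$ of quantities observable along the trajectory once $V$ is (approximately) known.

To make the flavour of such a recursion concrete, here is a minimal, illustrative sketch in Python; it is not the paper's algorithm. It assumes a small tabular chain, uses a TD(0)-style stochastic-approximation update for a (shift-invariant) estimate of $V$, and plugs that estimate into the identity above; the chain, the function $f$, and the step-size choices are all arbitrary placeholders for the demo.

```python
import numpy as np

# Illustrative sketch only (NOT the paper's algorithm): a tabular recursion that
# tracks (i) a running estimate of the stationary mean pi(f), (ii) a TD(0)-style
# estimate of the Poisson-equation solution V (up to an additive constant), and
# (iii) a running estimate of the asymptotic variance via
#     sigma^2 = E_pi[(f - pi f) * (2 V - (f - pi f))].

rng = np.random.default_rng(0)

# Small ergodic chain and scalar function f (arbitrary demo choices).
P = np.array([[0.90, 0.10],
              [0.20, 0.80]])
f = np.array([1.0, -1.0])

n_steps = 200_000
mu = 0.0                   # running estimate of the stationary mean pi(f)
V = np.zeros(len(f))       # estimate of the Poisson solution, up to a constant
sigma2 = 0.0               # running estimate of the asymptotic variance

x = 0
for t in range(1, n_steps + 1):
    x_next = rng.choice(len(f), p=P[x])

    beta = 1.0 / t         # averaging step size for mu and sigma2
    alpha = 1.0 / t**0.7   # larger step size for the TD-style V update

    mu += beta * (f[x] - mu)

    # TD(0)-style fixed-point iteration for V - PV = f - pi(f)
    td_error = (f[x] - mu) + V[x_next] - V[x]
    V[x] += alpha * td_error

    # Plug-in recursion for sigma^2 using the Poisson-equation identity
    fbar = f[x] - mu
    sigma2 += beta * (fbar * (2.0 * V[x] - fbar) - sigma2)

    x = x_next

print(f"estimated stationary mean ~ {mu:.3f}")
print(f"estimated asymptotic variance ~ {sigma2:.3f}")
```

Because the identity only involves $\bar f$, which has zero mean under $\pi$, the unknown additive constant in the estimate of $V$ washes out asymptotically; this is what makes a purely recursive, $O(1)$-per-step scheme of this general shape plausible.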
We generalize our estimator in several directions, including estimating the covariance matrix for vector-valued functions, estimating the stationary variance of a Markov chain, and approximately estimating the asymptotic variance in settings where the state space of the underlying Markov chain is large.
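As a point of reference for the first two generalizations (standard definitions, not notation lifted from the paper): for a vector-valued $f:\mathcal{X}\to\mathbb{R}^d$, the asymptotic covariance matrix is

$$
\Sigma_f \;=\; \operatorname{Cov}_\pi\bigl(f(X_0)\bigr) \;+\; \sum_{k\ge 1}\Bigl[\operatorname{Cov}_\pi\bigl(f(X_0), f(X_k)\bigr) + \operatorname{Cov}_\pi\bigl(f(X_0), f(X_k)\bigr)^{\top}\Bigr],
$$

while the stationary variance is simply $\operatorname{Var}_\pi\bigl(f(X)\bigr)$, i.e., the variance of $f$ under $\pi$ alone, without the correlation correction terms.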
We also show applications of our estimator in average-reward reinforcement learning (RL), where we use the asymptotic variance as a risk measure to model safety-critical applications. We design a temporal-difference-type algorithm tailored for policy evaluation in this context. We consider both the tabular and the linear function approximation settings. Our work paves the way for developing actor-critic-style algorithms for variance-constrained RL.
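One way to make the risk-sensitive objective concrete (an illustrative formulation on our part; the paper's exact constraint may differ): writing $\rho(\pi)$ for the long-run average reward of the chain induced by a policy $\pi$ and $\sigma^2(\pi)$ for the asymptotic variance of the reward along that chain, a variance-constrained problem reads

$$
\max_{\pi} \ \rho(\pi) \quad \text{subject to} \quad \sigma^2(\pi) \le c,
$$

for a user-chosen risk budget $c$. Policy evaluation in this setting amounts to estimating the pair $(\rho(\pi), \sigma^2(\pi))$ for a fixed $\pi$, which is where a recursive variance estimator of the kind above fits in.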