{"title":"Optimal Risk-Sensitive Scheduling Policies for Remote Estimation of Autoregressive Markov Processes","authors":"Manali Dutta;Rahul Singh","doi":"10.1109/LCSYS.2024.3522196","DOIUrl":null,"url":null,"abstract":"We consider a remote estimation setup, where data packets containing sensor observations are transmitted over a Gilbert-Elliot channel to a remote estimator, and design scheduling policies that minimize a risk-sensitive cost, which is equal to the expected value of the exponential of the cumulative cost incurred during a finite horizon, that is the sum of the cumulative transmission power consumed, and the cumulative squared estimation error. More specifically, consider a sensor that observes a discrete-time autoregressive Markov process, and at each time decides whether or not to transmit its observations to a remote estimator using an unreliable wireless communication channel after encoding these observations into data packets. Modeling the communication channel as a Gilbert-Elliot channel allows us to take into account the temporal correlations in its fading. We pose this dynamic optimization problem as a Markov decision process (MDP), and show that there exists an optimal policy that has a threshold structure, i.e., at each time t it transmits only when the current channel state is good, and the magnitude of the current “error” exceeds a certain threshold.","PeriodicalId":37235,"journal":{"name":"IEEE Control Systems Letters","volume":"8 ","pages":"3099-3104"},"PeriodicalIF":2.4000,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Control Systems Letters","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10812990/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Abstract
We consider a remote estimation setup in which data packets containing sensor observations are transmitted over a Gilbert-Elliot channel to a remote estimator, and we design scheduling policies that minimize a risk-sensitive cost, defined as the expected value of the exponential of the cumulative cost incurred over a finite horizon, where the cumulative cost is the sum of the cumulative transmission power consumed and the cumulative squared estimation error. More specifically, we consider a sensor that observes a discrete-time autoregressive Markov process and, at each time, decides whether or not to encode its observations into data packets and transmit them to a remote estimator over an unreliable wireless communication channel. Modeling the communication channel as a Gilbert-Elliot channel allows us to account for the temporal correlations in its fading. We pose this dynamic optimization problem as a Markov decision process (MDP) and show that there exists an optimal policy with a threshold structure, i.e., at each time t the sensor transmits only when the current channel state is good and the magnitude of the current “error” exceeds a certain threshold.
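
The sketch below is a minimal Monte-Carlo illustration of the threshold policy structure described in the abstract: an AR(1) process is tracked over a two-state Gilbert-Elliot channel, the sensor transmits only when the channel is good and the error magnitude exceeds a threshold, and the risk-sensitive objective (expected exponential of the cumulative cost) is estimated by simulation. All numerical values (AR coefficient, noise variance, channel transition probabilities, power weight, risk parameter, threshold) and the explicit risk parameter theta are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 50             # finite horizon
a = 0.9            # AR(1) coefficient: x_{t+1} = a*x_t + w_t
sigma_w = 1.0      # process-noise standard deviation
p_gb = 0.2         # P(good -> bad) channel transition (assumed)
p_bg = 0.4         # P(bad -> good) channel transition (assumed)
p_drop_good = 0.1  # packet-drop probability in the good state (assumed)
lam = 2.0          # per-transmission power cost (assumed)
theta = 0.01       # risk-sensitivity parameter (assumed)
threshold = 1.5    # error threshold of the policy (assumed)

def run_episode():
    """Simulate one horizon under the threshold policy; return the cumulative cost."""
    err = 0.0    # estimation error e_t at the remote estimator
    good = True  # Gilbert-Elliot channel state
    cost = 0.0
    for _ in range(T):
        # Threshold policy: transmit only if the channel is good and |e_t| > threshold.
        transmit = good and abs(err) > threshold
        delivered = transmit and rng.random() > p_drop_good
        # Stage cost: transmission power plus squared estimation error.
        cost += lam * float(transmit) + err ** 2
        # Error recursion: reset on delivery, otherwise propagate through the AR dynamics.
        w = rng.normal(0.0, sigma_w)
        err = w if delivered else a * err + w
        # Gilbert-Elliot channel evolves as a two-state Markov chain.
        good = rng.random() > p_gb if good else rng.random() < p_bg
    return cost

# Risk-sensitive objective: expected exponential of the cumulative cost,
# estimated by Monte Carlo (scaled by theta to keep exp() well-behaved).
costs = np.array([run_episode() for _ in range(5000)])
risk_sensitive_cost = np.mean(np.exp(theta * costs))
print(f"estimated E[exp(theta * cumulative cost)] = {risk_sensitive_cost:.4f}")
```

Sweeping the assumed threshold (or the channel parameters) in this sketch shows how the trade-off between transmission power and squared error enters the exponential cost; it is intended only as an illustration of the policy class, not as a reproduction of the paper's optimality analysis.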