Authors: Qingsi Wang, M. Liu
Venue: 2013 Proceedings IEEE INFOCOM
Published: 2013-04-14
DOI: 10.1109/INFCOM.2013.6566839
Citations: 44
When simplicity meets optimality: Efficient transmission power control with stochastic energy harvesting
We consider the optimal transmission power control of a single wireless node with stochastic energy harvesting and an infinite/saturated queue, with the objective of maximizing a certain reward function, e.g., the total data rate. We develop simple control policies that achieve near-optimal performance in the finite-horizon case with finite energy storage. The same policies are shown to be asymptotically optimal in the infinite-horizon case for sufficiently large energy storage. Such policies are typically difficult to obtain directly from a Markov Decision Process (MDP) formulation or through a dynamic programming framework due to the computational complexity. We relate our results to those obtained in the unsaturated regime, and highlight a class of threshold-based policies that are universally optimal.
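To make the setting concrete, the following is a minimal simulation sketch of a generic threshold-based transmission policy for a node with stochastic energy harvesting and a finite battery. It is an illustration under assumed dynamics (exponential energy arrivals, fixed transmit power, log-rate reward), not the specific policy or model analyzed in the paper; all parameter names and values are hypothetical.

```python
import math
import random

def simulate_threshold_policy(horizon=10_000, capacity=100.0,
                              threshold=20.0, tx_power=10.0,
                              harvest_mean=5.0, seed=0):
    """Simulate a battery-limited node under a simple threshold policy:
    transmit at a fixed power only when stored energy exceeds `threshold`.
    Per-slot reward is log(1 + power), a common proxy for data rate."""
    rng = random.Random(seed)
    battery = 0.0
    total_reward = 0.0
    for _ in range(horizon):
        # Stochastic energy arrival (exponential with mean `harvest_mean`),
        # clipped by the finite battery capacity.
        battery = min(capacity, battery + rng.expovariate(1.0 / harvest_mean))
        if battery >= threshold:
            spend = min(tx_power, battery)
            battery -= spend
            total_reward += math.log(1.0 + spend)
    return total_reward / horizon  # average reward per slot

avg = simulate_threshold_policy()
print(f"average reward per slot: {avg:.3f}")
```

Because the queue is saturated (data is always available), the only state that matters is the battery level, which is what makes such single-threshold rules attractive compared with solving the full MDP.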