Compact representation of coordinated sampling policies for Body Sensor Networks
Shuping Liu, A. Panangadan, A. Talukder, C. Raghavendra
2010 IEEE Globecom Workshops, December 2010
DOI: 10.1109/GLOCOMW.2010.5700304
Citations: 9
Abstract
Embedded sensors of a Body Sensor Network need to utilize their energy resources efficiently to operate for an extended period of time. A Markov Decision Process (MDP) framework has been used to obtain a globally optimal policy that coordinates the sampling of multiple sensors to achieve high efficiency in such sensor networks. However, storing the coordinated sampling policy table requires a large amount of memory, which may not be available at the embedded sensors. Computing a compact representation of the MDP global policy would therefore be useful for such sensor nodes. In this paper we show that learning a compact decision-tree representation of the policy is feasible with little loss in performance. The globally optimal policy is computed offline using the MDP framework and then used as training data for a decision tree learner. Our simulation results show that both unpruned and high-confidence-pruned decision trees achieve an error rate of less than 1% while significantly reducing the memory requirements. Ensembles of lower-confidence trees are capable of perfect representation with only a small increase in classifier size compared to individual pruned trees.
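The approach described in the abstract can be illustrated with a minimal sketch: compute (here, synthesize) a tabular state-to-action policy, then train a decision tree on the table and compare the tree's predictions and size against the original table. The state features (battery level, activity level, time slot) and the policy rule below are hypothetical stand-ins, not the paper's actual MDP formulation.

```python
# Illustrative sketch (not the paper's code): approximate a tabular
# policy with a decision tree, as the abstract describes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic "global policy table": one row per discretized state
# (battery level, activity level, time slot). These features are
# assumptions for illustration only.
states = np.array([(b, a, t) for b in range(10)
                             for a in range(5)
                             for t in range(24)])

# Hypothetical policy: sample at a high rate (action 1) only when
# battery or activity is high; otherwise sample at a low rate (0).
actions = np.where((states[:, 0] > 5) | (states[:, 1] > 3), 1, 0)

# Learn a compact tree representation of the policy table. An
# unpruned tree can reproduce a deterministic table exactly.
tree = DecisionTreeClassifier()
tree.fit(states, actions)

error_rate = np.mean(tree.predict(states) != actions)
print(f"error rate: {error_rate:.4f}, tree nodes: {tree.tree_.node_count}, "
      f"table rows: {len(states)}")
```

For a simple policy like this one, the tree needs only a handful of decision nodes to represent a table with 1200 entries, which is the memory-saving effect the paper quantifies; pruning trades a small error rate for an even smaller tree.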