Title: A comparison of reinforcement learning based approaches to appliance scheduling
Authors: Namit Chauhan, Neha Choudhary, K. George
Venue: 2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)
Publication date: 2016-12-01
DOI: 10.1109/IC3I.2016.7917970 (https://doi.org/10.1109/IC3I.2016.7917970)
Citations: 11
Abstract
Reinforcement learning is often proposed as a technique for intelligent control in a smart-home setup with dynamic real-time energy pricing and advanced sub-metering infrastructure. In this paper, we introduce a variation of State-Action-Reward-State-Action (SARSA) as an optimization algorithm for appliance scheduling in smart homes with multiple appliances, and compare it with the popular reinforcement learning method Q-learning. A simple, intuitive, and unique tree-like Markov decision process (MDP) structure of appliances is proposed that takes into account the states, such as on/off/runtime status, of all schedulable appliances but does not require knowledge of the state-to-state transition probabilities.
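The two methods being compared differ only in their bootstrap target: SARSA (on-policy) updates toward the value of the action actually taken next, while Q-learning (off-policy) updates toward the greedy action's value. A minimal generic sketch of the two tabular update rules follows; it is a textbook illustration, not the paper's specific scheduling formulation, and the state/action encodings (appliance status tuples, `ALPHA`, `GAMMA`) are placeholder assumptions:

```python
from collections import defaultdict

# Textbook tabular updates for SARSA (on-policy) vs. Q-learning
# (off-policy). States and actions here are opaque hashable keys,
# e.g. a tuple of appliance on/off/runtime statuses in a scheduling
# setting; ALPHA and GAMMA are illustrative hyperparameters.
ALPHA, GAMMA = 0.1, 0.9

def sarsa_update(Q, s, a, r, s_next, a_next):
    """SARSA bootstraps on the action a_next actually chosen next."""
    Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s_next, a_next)] - Q[(s, a)])

def q_learning_update(Q, s, a, r, s_next, actions):
    """Q-learning bootstraps on the greedy action over `actions`."""
    best = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (r + GAMMA * best - Q[(s, a)])

# Example: a single transition with reward 1.0 from state 0 to state 1.
Q = defaultdict(float)
sarsa_update(Q, 0, "on", 1.0, 1, "off")        # Q[(0,"on")] becomes 0.1
q_learning_update(Q, 0, "off", 0.5, 1, ["on", "off"])  # Q[(0,"off")] becomes 0.05
```

Note that neither update references transition probabilities, which is why such model-free methods pair naturally with the paper's MDP structure that omits them.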