{"title":"从人类演示中持续强化学习,集成自动驾驶经验回放","authors":"Sixiang Zuo, Zhiyang Wang, Xiaorui Zhu, Y. Ou","doi":"10.1109/ROBIO.2017.8324787","DOIUrl":null,"url":null,"abstract":"As a promising subfield of machine learning, Reinforcement Learning (RL) has drawn increasing attention among the academia as well as the public. However, the practical application of RL is still restricted by a variety of reasons. The two most significant challenges of RL are the large exploration domain and the difficulty to converge. Integrating RL with human expertise is technically an interesting way to accelerate the exploration and increase the stability. In this work, we propose a continuous reinforcement learning method which integrates Deep Deterministic Policy Gradient (DDPG) with human demonstrations. The proposed method uses a combined loss function for updating the actor and critic networks. In addition, the experience replay buffer is also drawn from different transition data samples to make the learning more stable. The proposed method is tested with a popular RL task, i.e. the autonomous driving, by simulations with TORCS environment. Experimental results not only show the effectiveness of our method in improving the learning stability, but also manifest the potential capability of our method in mastering human preferences.","PeriodicalId":197159,"journal":{"name":"2017 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":"{\"title\":\"Continuous reinforcement learning from human demonstrations with integrated experience replay for autonomous driving\",\"authors\":\"Sixiang Zuo, Zhiyang Wang, Xiaorui Zhu, Y. 
Ou\",\"doi\":\"10.1109/ROBIO.2017.8324787\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As a promising subfield of machine learning, Reinforcement Learning (RL) has drawn increasing attention among the academia as well as the public. However, the practical application of RL is still restricted by a variety of reasons. The two most significant challenges of RL are the large exploration domain and the difficulty to converge. Integrating RL with human expertise is technically an interesting way to accelerate the exploration and increase the stability. In this work, we propose a continuous reinforcement learning method which integrates Deep Deterministic Policy Gradient (DDPG) with human demonstrations. The proposed method uses a combined loss function for updating the actor and critic networks. In addition, the experience replay buffer is also drawn from different transition data samples to make the learning more stable. The proposed method is tested with a popular RL task, i.e. the autonomous driving, by simulations with TORCS environment. 
Experimental results not only show the effectiveness of our method in improving the learning stability, but also manifest the potential capability of our method in mastering human preferences.\",\"PeriodicalId\":197159,\"journal\":{\"name\":\"2017 IEEE International Conference on Robotics and Biomimetics (ROBIO)\",\"volume\":\"43 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"16\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 IEEE International Conference on Robotics and Biomimetics (ROBIO)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ROBIO.2017.8324787\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE International Conference on Robotics and Biomimetics (ROBIO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ROBIO.2017.8324787","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Continuous reinforcement learning from human demonstrations with integrated experience replay for autonomous driving
As a promising subfield of machine learning, Reinforcement Learning (RL) has drawn increasing attention in academia and among the public. However, the practical application of RL is still restricted by a variety of factors; the two most significant challenges are the large exploration domain and the difficulty of convergence. Integrating RL with human expertise is an appealing way to accelerate exploration and improve stability. In this work, we propose a continuous reinforcement learning method that integrates Deep Deterministic Policy Gradient (DDPG) with human demonstrations. The proposed method uses a combined loss function to update the actor and critic networks. In addition, the experience replay buffer draws from different sources of transition data to make learning more stable. The proposed method is tested on a popular RL task, autonomous driving, through simulations in the TORCS environment. Experimental results not only show the effectiveness of our method in improving learning stability, but also demonstrate its potential for capturing human preferences.
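The replay mechanism described above, where training batches are drawn from different sources of transition data, can be sketched as a buffer that mixes a fixed set of human-demonstration transitions with the agent's own experience. This is a minimal illustrative sketch only: the class name, the 50/50 mixing fraction, and the transition format are assumptions for the example, not details taken from the paper.

```python
import random

class MixedReplayBuffer:
    """Sketch of a replay buffer that samples batches from two sources:
    fixed human-demonstration transitions and agent-collected transitions.
    The mixing fraction below is a hypothetical choice, not the paper's."""

    def __init__(self, capacity, demo_transitions, demo_fraction=0.5):
        self.capacity = capacity                 # max agent transitions kept
        self.demo = list(demo_transitions)       # fixed human demonstrations
        self.agent = []                          # agent-collected transitions
        self.demo_fraction = demo_fraction

    def add(self, transition):
        # Drop the oldest agent transition once capacity is reached.
        if len(self.agent) >= self.capacity:
            self.agent.pop(0)
        self.agent.append(transition)

    def sample(self, batch_size):
        # Draw part of the batch from demonstrations and the rest from
        # agent experience, so learning is anchored by the demos early on.
        n_demo = min(int(batch_size * self.demo_fraction), len(self.demo))
        n_agent = min(batch_size - n_demo, len(self.agent))
        return (random.sample(self.demo, n_demo)
                + random.sample(self.agent, n_agent))

# Hypothetical usage: transitions are (state, action, reward, next_state).
demos = [("s_demo", "a_demo", 1.0, "s_demo'")] * 3
buf = MixedReplayBuffer(capacity=4, demo_transitions=demos)
for i in range(6):
    buf.add((f"s{i}", "a", 0.0, f"s{i + 1}"))
batch = buf.sample(4)  # 2 demo transitions + 2 agent transitions
```

In a full DDPG-style setup, each sampled batch would then feed the combined actor/critic loss update; keeping the demonstration share of the batch fixed is one simple way to stabilize early training before the agent's own buffer fills up.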