{"title":"四足机器人平稳全向运动的学习","authors":"Jiaxi Wu, Chenan Wang, Dianmin Zhang, Shanlin Zhong, Boxing Wang, Hong Qiao","doi":"10.1109/ICARM52023.2021.9536204","DOIUrl":null,"url":null,"abstract":"It often takes a lot of trial and error to get a quadruped robot to learn a proper and natural gait directly through reinforcement learning. Moreover, it requires plenty of attempts and clever reward settings to learn appropriate locomotion. However, the success rate of network convergence is still relatively low. In this paper, the referred trajectory, inverse kinematics, and transformation loss are integrated into the training process of reinforcement learning as prior knowledge. Therefore reinforcement learning only needs to search for the optimal solution around the referred trajectory, making it easier to find the appropriate locomotion and guarantee convergence. When testing, a PD controller is fused into the trained model to reduce the velocity following error. Based on the above ideas, we propose two control framework - single closed-loop and double closed-loop. And their effectiveness is proved through experiments. It can efficiently help quadruped robots learn appropriate gait and realize smooth and omnidirectional locomotion, which all learned in one model.","PeriodicalId":367307,"journal":{"name":"2021 6th IEEE International Conference on Advanced Robotics and Mechatronics (ICARM)","volume":"151 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Learning Smooth and Omnidirectional Locomotion for Quadruped Robots\",\"authors\":\"Jiaxi Wu, Chenan Wang, Dianmin Zhang, Shanlin Zhong, Boxing Wang, Hong Qiao\",\"doi\":\"10.1109/ICARM52023.2021.9536204\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"It often takes a lot of trial and error to get a quadruped robot to learn a proper and natural gait directly through reinforcement learning. Moreover, it requires plenty of attempts and clever reward settings to learn appropriate locomotion. However, the success rate of network convergence is still relatively low. In this paper, the referred trajectory, inverse kinematics, and transformation loss are integrated into the training process of reinforcement learning as prior knowledge. Therefore reinforcement learning only needs to search for the optimal solution around the referred trajectory, making it easier to find the appropriate locomotion and guarantee convergence. When testing, a PD controller is fused into the trained model to reduce the velocity following error. Based on the above ideas, we propose two control framework - single closed-loop and double closed-loop. And their effectiveness is proved through experiments. 
It can efficiently help quadruped robots learn appropriate gait and realize smooth and omnidirectional locomotion, which all learned in one model.\",\"PeriodicalId\":367307,\"journal\":{\"name\":\"2021 6th IEEE International Conference on Advanced Robotics and Mechatronics (ICARM)\",\"volume\":\"151 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 6th IEEE International Conference on Advanced Robotics and Mechatronics (ICARM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICARM52023.2021.9536204\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 6th IEEE International Conference on Advanced Robotics and Mechatronics (ICARM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICARM52023.2021.9536204","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Learning Smooth and Omnidirectional Locomotion for Quadruped Robots
Abstract: Getting a quadruped robot to learn a proper and natural gait directly through reinforcement learning often takes a great deal of trial and error, and learning appropriate locomotion requires many attempts and carefully designed rewards; even then, the success rate of network convergence remains relatively low. In this paper, a reference trajectory, inverse kinematics, and a transformation loss are integrated into the reinforcement learning training process as prior knowledge. Reinforcement learning therefore only needs to search for the optimal solution in the neighborhood of the reference trajectory, which makes it easier to find appropriate locomotion and to guarantee convergence. At test time, a PD controller is fused with the trained model to reduce the velocity-following error. Based on these ideas, we propose two control frameworks, single closed-loop and double closed-loop, and demonstrate their effectiveness through experiments. The approach efficiently helps quadruped robots learn an appropriate gait and realize smooth, omnidirectional locomotion, all learned within a single model.
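To make the control structure concrete, below is a minimal, hypothetical Python sketch (not the authors' code): the reference foot trajectory, the two-link inverse kinematics, the policy interface, and all gains and dimensions are assumptions introduced purely for illustration. It shows the general pattern the abstract describes: the learned policy only searches for a small residual around a reference trajectory, inverse kinematics converts the corrected foot target into joint commands, and at test time an outer PD loop adjusts the commanded velocity to reduce the velocity-following error.

# Hypothetical sketch, not the paper's implementation: residual RL action around a
# reference foot trajectory with an outer PD velocity loop (double closed-loop idea).
import numpy as np

def reference_foot_trajectory(phase):
    # Toy periodic swing trajectory for one foot (x, y, z in the hip frame).
    x = 0.05 * np.cos(2.0 * np.pi * phase)
    z = -0.30 + 0.04 * max(0.0, np.sin(2.0 * np.pi * phase))
    return np.array([x, 0.0, z])

def inverse_kinematics(foot_pos, l1=0.2, l2=0.2):
    # Planar 2-link IK for hip pitch and knee angles (abduction ignored for brevity).
    x, _, z = foot_pos
    d = np.clip(np.hypot(x, z), 1e-6, l1 + l2 - 1e-6)
    knee = np.arccos((d**2 - l1**2 - l2**2) / (2.0 * l1 * l2))
    hip = np.arctan2(x, -z) - np.arctan2(l2 * np.sin(knee), l1 + l2 * np.cos(knee))
    return np.array([0.0, hip, knee])   # [abduction, hip pitch, knee]

class VelocityPD:
    # Outer-loop PD corrector on the commanded body velocity (used at test time only).
    def __init__(self, kp=0.8, kd=0.1):
        self.kp, self.kd, self.prev_err = kp, kd, 0.0

    def correct(self, v_cmd, v_meas):
        err = v_cmd - v_meas
        out = v_cmd + self.kp * err + self.kd * (err - self.prev_err)
        self.prev_err = err
        return out

def control_step(policy, obs, phase, v_cmd, v_meas, vel_pd):
    # Outer loop: PD correction of the velocity command based on the measured velocity.
    v_corrected = vel_pd.correct(v_cmd, v_meas)
    # Inner loop: the policy outputs a small bounded residual around the reference
    # trajectory rather than a full joint command, so RL searches near the prior.
    residual = policy(obs, v_corrected)                      # assumed shape (3,)
    foot_target = reference_foot_trajectory(phase) + residual
    return inverse_kinematics(foot_target)                   # joint targets, one leg

# Example usage with a placeholder policy that outputs zero residual:
# q = control_step(lambda obs, v: np.zeros(3), obs=None, phase=0.25,
#                  v_cmd=0.3, v_meas=0.25, vel_pd=VelocityPD())

In this sketch the single closed-loop case would correspond to dropping the VelocityPD correction and feeding v_cmd to the policy directly; the double closed-loop case adds the outer velocity loop around the learned inner loop, which is the rough distinction the abstract draws between the two frameworks.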