Model-free control for soft manipulators based on reinforcement learning
Xuanke You, Yixiao Zhang, Xiaotong Chen, Xinghua Liu, Zhanchi Wang, Hao Jiang, Xiaoping Chen
2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2909-2915, September 2017. DOI: 10.1109/IROS.2017.8206123
Most control methods for soft manipulators are developed from physical models derived through mathematical analysis or learning methods. However, because of internal nonlinearities and uncertain external disturbances, building an accurate model is difficult; moreover, such model-based methods lack robustness and portability across different prototypes. In this work, we propose a model-free control method based on reinforcement learning and implement it on a multi-segment soft manipulator operating in the 2D plane; the method focuses on learning a control strategy rather than a physical model. Prototype experiments validate that the control strategy is effective and robust, and we design a simulation method to speed up the training process.
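The abstract does not specify which reinforcement learning algorithm, state/action representation, or reward the authors use, so the following is only a hypothetical sketch of the general idea: learn a reaching policy for a discretized planar multi-segment arm in simulation, with no analytical model of the arm exposed to the learner. The toy two-segment kinematics, the target, and the choice of tabular Q-learning are all illustrative assumptions, not the paper's method.

```python
import math
import random

# Hypothetical sketch only: the paper's prototype, algorithm, and reward
# are not given in the abstract. Here a rigid two-segment planar arm with
# discretized joint angles stands in for the soft manipulator, and the
# "simulation to speed up training" idea is mimicked by training entirely
# against the step() function below.

N_BINS = 9                                    # discrete positions per joint
ANGLES = [math.radians(-60 + 15 * i) for i in range(N_BINS)]
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # inc/dec each joint index
L1 = L2 = 1.0                                 # segment lengths (arbitrary)

def tip(s):
    """Forward kinematics: planar tip position for joint-index state s."""
    a1, a2 = ANGLES[s[0]], ANGLES[s[1]]
    return (L1 * math.cos(a1) + L2 * math.cos(a1 + a2),
            L1 * math.sin(a1) + L2 * math.sin(a1 + a2))

TARGET = tip((6, 5))  # target picked as a reachable configuration

def step(s, a):
    """Simulated transition: clamp joint indices to range; the reward is
    the negative tip-to-target distance, done when within 0.1 units."""
    ns = (min(N_BINS - 1, max(0, s[0] + a[0])),
          min(N_BINS - 1, max(0, s[1] + a[1])))
    x, y = tip(ns)
    dist = math.hypot(x - TARGET[0], y - TARGET[1])
    return ns, -dist, dist < 0.1

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Tabular Q-learning with an epsilon-greedy policy; the learner only
    queries step(), never the kinematic model, hence 'model-free'."""
    random.seed(seed)
    Q = {}  # (state, action_index) -> value, missing entries default to 0
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            if random.random() < eps:
                ai = random.randrange(len(ACTIONS))
            else:
                ai = max(range(len(ACTIONS)),
                         key=lambda i: Q.get((s, i), 0.0))
            ns, r, done = step(s, ACTIONS[ai])
            td_target = r if done else r + gamma * max(
                Q.get((ns, i), 0.0) for i in range(len(ACTIONS)))
            q = Q.get((s, ai), 0.0)
            Q[(s, ai)] = q + alpha * (td_target - q)
            s = ns
            if done:
                break
    return Q

def greedy_rollout(Q, max_steps=50):
    """Run the learned greedy policy; return (reached_target, final_dist)."""
    s = (0, 0)
    dist = math.hypot(tip(s)[0] - TARGET[0], tip(s)[1] - TARGET[1])
    for _ in range(max_steps):
        ai = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
        s, r, done = step(s, ACTIONS[ai])
        dist = -r
        if done:
            return True, dist
    return False, dist
```

In this sketch the learned Q-table plays the role of the control strategy: after training in the cheap simulated environment, the greedy policy could in principle be transferred to hardware, which mirrors the abstract's point of learning the strategy rather than the physical model.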