Authors: Yoshihisa Tsurumine, Yunduan Cui, Kimitoshi Yamazaki, Takamitsu Matsubara
DOI: 10.1109/Humanoids43949.2019.9034991
Published in: 2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids), October 2019
Generative Adversarial Imitation Learning with Deep P-Network for Robotic Cloth Manipulation
Although deep Reinforcement Learning (RL) has been successfully applied to a variety of tasks, manually designing appropriate reward functions for complex tasks such as robotic cloth manipulation remains challenging and costly. In this paper, we explore a Generative Adversarial Imitation Learning (GAIL) approach for robotic cloth manipulation that allows an agent to learn near-optimal behaviors from expert demonstrations and self-exploration without explicit reward function design. Building on the recent success of value-function-based RL with a discrete action set for robotic cloth manipulation tasks [1], we develop a novel value-function-based imitation learning framework, P-GAIL. P-GAIL employs a modified value-function-based deep RL method, the Entropy-Maximizing Deep P-Network, which accounts for both smoothness and causal entropy in policy updates. After investigating its effectiveness on a toy problem in simulation, we apply P-GAIL to a dual-arm humanoid robot tasked with flipping a handkerchief; it successfully learns a policy close to the human demonstration from limited exploration and demonstrations. Experimental results suggest that P-GAIL achieves fast, stable imitation learning with high sample efficiency in robotic cloth manipulation.
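The core loop the abstract describes — a discriminator learned from expert demonstrations supplying a surrogate reward to a value-function-based, entropy-maximizing learner with discrete actions — can be sketched in miniature. The sketch below is illustrative only and is not the paper's method: the Entropy-Maximizing Deep P-Network is approximated by tabular soft Q-learning on a hypothetical toy chain MDP, the discriminator is a logistic model over one-hot state-action features, and all names and hyperparameters are assumptions.

```python
import math
import random

# Illustrative GAIL-style loop (hypothetical toy setup, not the paper's code):
# a discriminator D(s, a) is trained to separate expert from agent pairs, and
# its log-odds serve as the reward for a maximum-entropy value-based learner.

random.seed(0)
N_STATES, N_ACTIONS, HORIZON = 5, 2, 8

def step(s, a):
    # Chain dynamics: action 1 moves right (capped), action 0 resets to 0.
    return min(s + 1, N_STATES - 1) if a == 1 else 0

# Expert demonstration: always choose action 1 ("move right").
expert_traj, s = [], 0
for _ in range(HORIZON):
    expert_traj.append((s, 1))
    s = step(s, 1)

# Logistic discriminator: D(s, a) = probability the pair came from the expert.
w = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
def d_prob(s, a):
    return 1.0 / (1.0 + math.exp(-w[s][a]))

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
ALPHA, GAMMA, TAU, LR_D = 0.5, 0.9, 0.1, 0.1

def policy(s):
    # Maximum-entropy (softmax) policy over Q-values, numerically stabilized.
    m = max(Q[s])
    e = [math.exp((q - m) / TAU) for q in Q[s]]
    z = sum(e)
    return [p / z for p in e]

for _ in range(400):
    # 1) Roll out the current stochastic policy.
    s, traj = 0, []
    for _ in range(HORIZON):
        a = random.choices(range(N_ACTIONS), weights=policy(s))[0]
        traj.append((s, a))
        s = step(s, a)
    # 2) Discriminator step: push expert pairs toward 1, agent pairs toward 0.
    for (se, ae) in expert_traj:
        w[se][ae] += LR_D * (1.0 - d_prob(se, ae))
    for (sa, aa) in traj:
        w[sa][aa] -= LR_D * d_prob(sa, aa)
    # 3) Soft Q-learning on the surrogate reward log D - log(1 - D) = w[s][a].
    for (s0, a0) in traj:
        r = w[s0][a0]
        s1 = step(s0, a0)
        m = max(Q[s1])
        v1 = m + TAU * math.log(sum(math.exp((q - m) / TAU) for q in Q[s1]))
        Q[s0][a0] += ALPHA * (r + GAMMA * v1 - Q[s0][a0])

greedy = [max(range(N_ACTIONS), key=lambda a, s=s: Q[s][a]) for s in range(N_STATES)]
print("greedy policy:", greedy)
```

In this sketch the softmax temperature TAU plays the role of the causal-entropy term the abstract mentions (it keeps the policy stochastic during learning), while the actual P-GAIL additionally regularizes for smoothness in the policy update; that part is omitted here.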