Deep Reinforcement Learning of Robotic Precision Insertion Skill Accelerated by Demonstrations

Xiapeng Wu, Dapeng Zhang, Fangbo Qin, De Xu
{"title":"机器人精密插入技能的深度强化学习","authors":"Xiapeng Wu, Dapeng Zhang, Fangbo Qin, De Xu","doi":"10.1109/COASE.2019.8842940","DOIUrl":null,"url":null,"abstract":"Automatic high precision assembly of millimeter sized objects is a challenging task. Traditional methods for precision assembly rely on explicit programming with real robot system, and require complex parameter-tuning work. In this paper, we realize deep reinforcement learning of precision insertion skill learning, based on prioritized dueling deep Q-network (DQN). The Q-function is represented by the long short term memory (LSTM) neural network, whose input and output are the raw 6D force-torque feedback and the Q-value, respectively. According to the Q values conditioned on the current state, the skill model selects a 6 degree-of-freedom action from the predefined action set. To accelerate the learning process, the data from demonstrations is used to pre-train the model before the DQN starts. In order to improve the insertion efficiency and safety, insertion step length is modulated based on the instant reward. Our proposed method is validated with the peg-in-hole insertion experiments on a precision assembly robot.","PeriodicalId":6695,"journal":{"name":"2019 IEEE 15th International Conference on Automation Science and Engineering (CASE)","volume":"2010 1","pages":"1651-1656"},"PeriodicalIF":0.0000,"publicationDate":"2019-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":"{\"title\":\"Deep Reinforcement Learning of Robotic Precision Insertion Skill Accelerated by Demonstrations\",\"authors\":\"Xiapeng Wu, Dapeng Zhang, Fangbo Qin, De Xu\",\"doi\":\"10.1109/COASE.2019.8842940\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Automatic high precision assembly of millimeter sized objects is a challenging task. Traditional methods for precision assembly rely on explicit programming with real robot system, and require complex parameter-tuning work. In this paper, we realize deep reinforcement learning of precision insertion skill learning, based on prioritized dueling deep Q-network (DQN). The Q-function is represented by the long short term memory (LSTM) neural network, whose input and output are the raw 6D force-torque feedback and the Q-value, respectively. According to the Q values conditioned on the current state, the skill model selects a 6 degree-of-freedom action from the predefined action set. To accelerate the learning process, the data from demonstrations is used to pre-train the model before the DQN starts. In order to improve the insertion efficiency and safety, insertion step length is modulated based on the instant reward. 
Our proposed method is validated with the peg-in-hole insertion experiments on a precision assembly robot.\",\"PeriodicalId\":6695,\"journal\":{\"name\":\"2019 IEEE 15th International Conference on Automation Science and Engineering (CASE)\",\"volume\":\"2010 1\",\"pages\":\"1651-1656\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-08-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"14\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE 15th International Conference on Automation Science and Engineering (CASE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/COASE.2019.8842940\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 15th International Conference on Automation Science and Engineering (CASE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/COASE.2019.8842940","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 14

Abstract

Automatic high-precision assembly of millimeter-sized objects is a challenging task. Traditional methods for precision assembly rely on explicit programming of a real robot system and require complex parameter tuning. In this paper, we realize deep reinforcement learning of a precision insertion skill based on a prioritized dueling deep Q-network (DQN). The Q-function is represented by a long short-term memory (LSTM) neural network, whose input and output are the raw 6D force-torque feedback and the Q-values, respectively. According to the Q-values conditioned on the current state, the skill model selects a 6 degree-of-freedom action from a predefined action set. To accelerate the learning process, demonstration data is used to pre-train the model before DQN training starts. To improve insertion efficiency and safety, the insertion step length is modulated based on the instant reward. The proposed method is validated with peg-in-hole insertion experiments on a precision assembly robot.
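The abstract names three concrete ingredients: an LSTM-based dueling Q-network fed raw 6D force-torque feedback, greedy selection of a 6-DOF action from a predefined discrete set, and modulation of the insertion step length by the instant reward. The sketch below is a minimal illustration of those pieces, not the authors' implementation: the action discretization, network sizes, and the step-length rule are all assumptions made for the example.

```python
# Minimal sketch (PyTorch) of the components described in the abstract.
# All names, dimensions, and the action set are illustrative assumptions.
import torch
import torch.nn as nn

N_ACTIONS = 12  # assumed: +/- steps along/about x, y, z (6 DOF, two directions each)


class DuelingLSTMQNet(nn.Module):
    """Q-function over a short history of raw 6D force-torque readings."""

    def __init__(self, ft_dim: int = 6, hidden_dim: int = 64, n_actions: int = N_ACTIONS):
        super().__init__()
        self.lstm = nn.LSTM(input_size=ft_dim, hidden_size=hidden_dim, batch_first=True)
        # Dueling heads: state value V(s) and per-action advantages A(s, a).
        self.value_head = nn.Linear(hidden_dim, 1)
        self.advantage_head = nn.Linear(hidden_dim, n_actions)

    def forward(self, ft_seq: torch.Tensor) -> torch.Tensor:
        # ft_seq: (batch, seq_len, 6) force-torque feedback.
        _, (h_n, _) = self.lstm(ft_seq)
        h = h_n[-1]                                   # hidden state of the last LSTM layer
        value = self.value_head(h)                    # (batch, 1)
        adv = self.advantage_head(h)                  # (batch, n_actions)
        # Standard dueling aggregation: Q = V + (A - mean(A)).
        return value + adv - adv.mean(dim=1, keepdim=True)


def select_action(q_net: nn.Module, ft_seq: torch.Tensor, epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice of an index into the predefined 6-DOF action set."""
    if torch.rand(1).item() < epsilon:
        return int(torch.randint(N_ACTIONS, (1,)).item())
    with torch.no_grad():
        q_values = q_net(ft_seq.unsqueeze(0))         # add batch dimension
    return int(q_values.argmax(dim=1).item())


def modulate_step_length(base_step_mm: float, instant_reward: float,
                         min_step_mm: float = 0.02, max_step_mm: float = 0.5) -> float:
    """Assumed rule: shrink the insertion step when the instant reward is low
    (e.g., contact forces penalized) and enlarge it when progress is rewarded."""
    scale = 1.0 + max(-0.9, min(1.0, instant_reward))  # clamp the reward's influence
    return max(min_step_mm, min(max_step_mm, base_step_mm * scale))
```

A full prioritized dueling DQN as described in the paper would additionally use a prioritized experience replay buffer, a target network, and pre-training on demonstration transitions before DQN training starts; those parts are omitted from this sketch.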