Simultaneously Learning of Motion, Stiffness, and Force From Human Demonstration Based on Riemannian DMP and QP Optimization

IF 6.4 · JCR Q1 (AUTOMATION & CONTROL SYSTEMS) · CAS Region 2, Computer Science · IEEE Transactions on Automation Science and Engineering, vol. 22, pp. 7773-7785 · Pub Date: 2024-10-15 · DOI: 10.1109/TASE.2024.3469961
Zhiwei Liao;Francesco Tassi;Chenwei Gong;Mattia Leonori;Fei Zhao;Gedong Jiang;Arash Ajoudani
Citations: 0

Abstract

In this paper, we propose a motion, stiffness, and force learning framework based on an extended dynamic movement primitive (DMP) and quadratic programming (QP) optimization. The objective is to learn kinematic and dynamic operational parameters from a one-shot human demonstration by measuring and estimating the motion, 3-dimensional (3-D) endpoint stiffness, and applied forces of the human arm during manipulation tasks. To this end, the framework first features an extended DMP that models the motion, stiffness, and force variations in Cartesian space and on the 2-D sphere manifold. Second, to account for data-collection errors and human-robot operation gaps, a QP optimization fine-tunes the desired position of the controller. Finally, we validate the framework through two real-scenario experiments on the Franka Emika Panda robot. Experimental results show that the robot not only inherits the variation patterns of motion, stiffness, and force in the human demonstration, but also exhibits a degree of generalization to other situations. The framework provides a reference for robots learning multiple skills from a one-shot human demonstration, with great potential in human-robot cooperation, contact-rich scenarios, and skillful operations, where motion, stiffness, and applied forces must be considered simultaneously. Note to Practitioners—Fast robot programming through skill transfer plays a critical role in bringing next-generation robots into everyday life. Existing research focuses mostly on skill learning at the kinematic level and largely neglects the dynamic level, such as stiffness and contact force. The goal of this paper is to propose a novel framework in which robots learn motion, stiffness, and force variations from a one-shot human demonstration simultaneously.
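The paper's extended DMP additionally encodes stiffness and force and operates on the sphere manifold; as a rough illustration of the underlying one-shot imitation idea only, here is a minimal standard 1-D discrete DMP sketch (the class name, basis count, and gain values are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

# Minimal 1-D discrete DMP: fit a forcing term from a single demonstration,
# then roll it out toward the original or a shifted goal. All names and
# parameter values here are illustrative assumptions.
class DMP1D:
    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, ax=2.0):
        self.n, self.alpha, self.beta, self.ax = n_basis, alpha, beta, ax
        self.c = np.exp(-ax * np.linspace(0, 1, n_basis))  # basis centres in phase
        d = np.diff(self.c) ** 2
        self.h = 1.0 / np.append(d, d[-1])                 # basis widths
        self.w = np.zeros(n_basis)

    def fit(self, y, dt):
        # One-shot imitation: recover the forcing term from a single demo y(t).
        yd = np.gradient(y, dt)
        ydd = np.gradient(yd, dt)
        self.y0, self.g = y[0], y[-1]
        x = np.exp(-self.ax * np.arange(len(y)) * dt)      # canonical phase
        f_target = ydd - self.alpha * (self.beta * (self.g - y) - yd)
        s = x * (self.g - self.y0)                         # goal-scaling term
        for i in range(self.n):                            # locally weighted regression
            psi = np.exp(-self.h[i] * (x - self.c[i]) ** 2)
            self.w[i] = (s * psi) @ f_target / ((s * psi) @ s + 1e-10)

    def rollout(self, g=None, T=1.5, dt=0.001):
        g = self.g if g is None else g
        y, yd, x, traj = self.y0, 0.0, 1.0, []
        for _ in range(int(T / dt)):
            psi = np.exp(-self.h * (x - self.c) ** 2)
            f = (psi @ self.w) / (psi.sum() + 1e-10) * x * (g - self.y0)
            ydd = self.alpha * (self.beta * (g - y) - yd) + f
            yd, y, x = yd + ydd * dt, y + yd * dt, x - self.ax * x * dt
            traj.append(y)
        return np.array(traj)
```

Fitting once on a demonstration and rolling out toward a new goal `g` illustrates the generalization property the abstract refers to: the learned movement shape is preserved while the endpoint shifts, with no re-teaching.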
To this end, a Riemannian DMP method is employed to model the variation patterns of motion, stiffness, and force in Cartesian space and on the 2-D sphere manifold, respectively. In this way, the learning module needs to run only once, and the learned patterns can be generalized to other targets without repeated robot teaching or additional time-consuming processes. To accurately reproduce the learned skills, a human-like motion/stiffness/force controller combined with QP optimization is investigated. Rather than identifying real environmental parameters, we directly use the interaction forces recorded during the human demonstration to represent environmental effects, and employ QP to update the desired position within a limited range to account for data-collection errors and human-robot operation gaps. Experiments on button-pressing and polishing tasks with the Panda robot achieved very good results. This work lays a foundation for learning multiple skills from human demonstration (LfHD).
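The QP step that nudges the controller's desired position within a limited range can be illustrated with a toy version (the weight, stiffness value, and bound below are invented for illustration; the paper's actual formulation is richer). With a diagonal weighting and a box constraint, the problem separates per axis and admits a closed-form solution:

```python
import numpy as np

def refine_desired_position(x_des, f_meas, f_dem, k, w=1.0, bound=0.005):
    """Toy per-axis box-constrained QP (hypothetical formulation):
         min_dx  w*dx^2 + (f_err - k*dx)^2   s.t.  |dx| <= bound
    Shifts the desired position so the stiffness-scaled correction reduces
    the gap between measured and demonstrated contact force."""
    f_err = f_dem - f_meas
    # Stationarity: 2*w*dx - 2*k*(f_err - k*dx) = 0  ->  dx = k*f_err/(w + k^2)
    dx = k * f_err / (w + k ** 2)
    # Axes decouple, so projecting onto the box is the exact constrained optimum.
    dx = np.clip(dx, -bound, bound)
    return x_des + dx
```

Large force errors saturate at the bound, which mirrors the abstract's point that the desired position is updated only "in a limited range" so the correction cannot drift far from the demonstrated trajectory.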
Source Journal
IEEE Transactions on Automation Science and Engineering (Engineering & Technology - Automation & Control Systems)
CiteScore: 12.50
Self-citation rate: 14.30%
Annual publications: 404
Review time: 3.0 months
About the journal: The IEEE Transactions on Automation Science and Engineering (T-ASE) publishes fundamental papers on Automation, emphasizing scientific results that advance efficiency, quality, productivity, and reliability. T-ASE encourages interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, operations research, and other fields. T-ASE welcomes results relevant to industries such as agriculture, biotechnology, healthcare, home automation, maintenance, manufacturing, pharmaceuticals, retail, security, service, supply chains, and transportation. T-ASE addresses a research community willing to integrate knowledge across disciplines and industries. For this purpose, each paper includes a Note to Practitioners that summarizes how its results can be applied or how they might be extended to apply in practice.