Label-Free Adaptive Gaussian Sample Consensus Framework for Learning From Perfect and Imperfect Demonstrations

IEEE Transactions on Medical Robotics and Bionics · Impact Factor: 3.4 (Q2, Engineering, Biomedical) · Publication date: 2024-07-03 · DOI: 10.1109/TMRB.2024.3422652
Yi Hu, Zahra Samadikhoshkho, Jun Jin, Mahdi Tavakoli
{"title":"Label-Free Adaptive Gaussian Sample Consensus Framework for Learning From Perfect and Imperfect Demonstrations","authors":"Yi Hu;Zahra Samadikhoshkho;Jun Jin;Mahdi Tavakoli","doi":"10.1109/TMRB.2024.3422652","DOIUrl":null,"url":null,"abstract":"Autonomous robotic surgery represents one of the most groundbreaking advancements in medical technology. Learning from human demonstrations is promising in this domain, which facilitates the transfer of skills from humans to robots. However, the practical application of this method is challenged by the difficulty of acquiring high-quality demonstrations. Surgical tasks often involve complex manipulations and stringent precision requirements, leading to frequent errors in the demonstrations. These imperfect demonstrations adversely affect the performance of controller policies learned from the data. Unlike existing methods that rely on extensive human labeling of demonstrated trajectories, we present a novel label-free adaptive Gaussian sample consensus framework to progressively refine the control policy. We demonstrate the efficacy and practicality of our approach through two experimental studies: a handwriting classification task, providing reproducible ground-truth labels for evaluation, and an endoscopy scanning task, demonstrating the feasibility of our method in a real-world clinical context. Both experiments highlight our method’s capacity to efficiently adapt to and learn from an ongoing stream of imperfect demonstrations.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":"6 3","pages":"1093-1103"},"PeriodicalIF":3.4000,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on medical robotics and bionics","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10582909/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Autonomous robotic surgery represents one of the most groundbreaking advancements in medical technology. Learning from human demonstrations is promising in this domain, as it facilitates the transfer of skills from humans to robots. However, the practical application of this method is challenged by the difficulty of acquiring high-quality demonstrations: surgical tasks often involve complex manipulations and stringent precision requirements, leading to frequent errors in the demonstrations. These imperfect demonstrations adversely affect the performance of controller policies learned from the data. Unlike existing methods that rely on extensive human labeling of demonstrated trajectories, we present a novel label-free adaptive Gaussian sample consensus framework that progressively refines the control policy. We demonstrate the efficacy and practicality of our approach through two experimental studies: a handwriting classification task, which provides reproducible ground-truth labels for evaluation, and an endoscopy scanning task, which demonstrates the feasibility of our method in a real-world clinical context. Both experiments highlight our method’s capacity to efficiently adapt to and learn from an ongoing stream of imperfect demonstrations.
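The abstract names the core mechanism only at a high level: a sample-consensus step over demonstrations that requires no human labels, followed by progressive refinement of the control policy. As a rough illustration of that general idea (not the authors' algorithm), the sketch below fits a Gaussian to a per-demonstration residual feature, iteratively discards demonstrations whose Mahalanobis distance exceeds an assumed cutoff, and refits a simple linear policy on the consensus set. Every function name, feature choice, threshold, and the linear policy class here is a hypothetical placeholder.

# Hedged sketch, NOT the method from the paper: a generic Gaussian sample-consensus
# filter over a batch of demonstrations. Fit a Gaussian to per-demonstration residual
# features, drop demonstrations that are unlikely under it, and refit a simple policy
# on the consensus (inlier) set. All names, features, and thresholds are assumptions.
import numpy as np


def gaussian_consensus_filter(features, n_iters=10, maha_thresh=6.0):
    """Iteratively fit a Gaussian to per-demonstration features and keep only the
    demonstrations whose squared Mahalanobis distance stays below an assumed cutoff."""
    inliers = np.ones(len(features), dtype=bool)
    for _ in range(n_iters):
        mu = features[inliers].mean(axis=0)
        cov = np.atleast_2d(np.cov(features[inliers], rowvar=False))
        cov += 1e-6 * np.eye(features.shape[1])            # jitter for numerical stability
        diff = features - mu
        d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
        new_inliers = d2 < maha_thresh
        if np.array_equal(new_inliers, inliers):           # consensus set stopped changing
            break
        inliers = new_inliers
    return inliers


def fit_linear_policy(states, actions):
    """Least-squares linear policy a = s @ W, a stand-in for any policy class."""
    W, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return W


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic demonstration stream: most demos follow a = 2*s, every tenth is corrupted.
    demos = []
    for i in range(30):
        s = rng.normal(size=(50, 2))
        a = s @ np.array([[2.0], [2.0]])
        if i % 10 == 0:
            a += rng.normal(scale=10.0, size=a.shape)      # "imperfect" demonstration
        demos.append((s, a))

    # First pass: fit a policy on everything, score each demo by its mean squared residual.
    S = np.vstack([s for s, _ in demos])
    A = np.vstack([a for _, a in demos])
    W = fit_linear_policy(S, A)
    feats = np.array([[np.mean((a - s @ W) ** 2)] for s, a in demos])

    # Second pass: keep the consensus set and refit the policy on it.
    keep = gaussian_consensus_filter(feats)
    W_refined = fit_linear_policy(np.vstack([demos[i][0] for i in np.where(keep)[0]]),
                                  np.vstack([demos[i][1] for i in np.where(keep)[0]]))
    print(f"kept {keep.sum()}/{len(demos)} demonstrations; refined policy:\n{W_refined}")

On this synthetic stream, the filter typically rejects the corrupted demonstrations and the refined policy recovers the clean state-to-action mapping; the actual framework would differ in its policy class, features, and refinement schedule.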
Source journal: IEEE Transactions on Medical Robotics and Bionics · CiteScore: 6.80 · Self-citation rate: 0.00% · Articles published: 0