Transferring Dexterous Surgical Skill Knowledge between Robots for Semi-autonomous Teleoperation

Md Masudur Rahman, Natalia Sanchez-Tamayo, Glebys T. Gonzalez, Mridul Agarwal, V. Aggarwal, R. Voyles, Yexiang Xue, J. Wachs
{"title":"Transferring Dexterous Surgical Skill Knowledge between Robots for Semi-autonomous Teleoperation","authors":"Md Masudur Rahman, Natalia Sanchez-Tamayo, Glebys T. Gonzalez, Mridul Agarwal, V. Aggarwal, R. Voyles, Yexiang Xue, J. Wachs","doi":"10.1109/RO-MAN46459.2019.8956396","DOIUrl":null,"url":null,"abstract":"In the future, deployable, teleoperated surgical robots can save the lives of critically injured patients in battlefield environments. These robotic systems will need to have autonomous capabilities to take over during communication delays and unexpected environmental conditions during critical phases of the procedure. Understanding and predicting the next surgical actions (referred as “surgemes”) is essential for autonomous surgery. Most approaches for surgeme recognition cannot cope with the high variability associated with austere environments and thereby cannot “transfer” well to field robotics. We propose a methodology that uses compact image representations with kinematic features for surgeme recognition in the DESK dataset. This dataset offers samples for surgical procedures over different robotic platforms with a high variability in the setup. We performed surgeme classification in two setups: 1) No transfer, 2) Transfer from a simulated scenario to two real deployable robots. Then, the results were compared with recognition accuracies using only kinematic data with the same experimental setup. The results show that our approach improves the recognition performance over kinematic data across different domains. The proposed approach produced a transfer accuracy gain up to 20% between the simulated and the real robot, and up to 31% between the simulated robot and a different robot. 
A transfer accuracy gain was observed for all cases, even those already above 90%.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"18","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RO-MAN46459.2019.8956396","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 18

Abstract

In the future, deployable, teleoperated surgical robots could save the lives of critically injured patients in battlefield environments. These robotic systems will need autonomous capabilities to take over during communication delays and unexpected environmental conditions in critical phases of the procedure. Understanding and predicting the next surgical actions (referred to as "surgemes") is essential for autonomous surgery. Most approaches for surgeme recognition cannot cope with the high variability associated with austere environments and therefore do not "transfer" well to field robotics. We propose a methodology that combines compact image representations with kinematic features for surgeme recognition in the DESK dataset. This dataset offers samples of surgical procedures across different robotic platforms with high variability in the setup. We performed surgeme classification in two setups: 1) no transfer, and 2) transfer from a simulated scenario to two real deployable robots. The results were then compared with recognition accuracies obtained using only kinematic data under the same experimental setup. They show that our approach improves recognition performance over kinematic data alone across different domains. The proposed approach produced a transfer accuracy gain of up to 20% between the simulated and the real robot, and up to 31% between the simulated robot and a different robot. A transfer accuracy gain was observed in all cases, even those already above 90%.
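The core idea of the abstract, classifying surgemes from a feature vector that concatenates a compact image representation with kinematic features, can be sketched with a toy classifier. This is a hypothetical illustration, not the paper's method: the feature values, surgeme labels, and the nearest-centroid rule are all assumptions introduced for the example, and real image features would come from a learned compact representation rather than hand-set numbers.

```python
# Hypothetical sketch: surgeme classification from concatenated
# compact image features and kinematic features, using a simple
# nearest-centroid classifier. Feature values and surgeme labels
# are illustrative, not from the DESK dataset.

from collections import defaultdict
from math import dist

def build_centroids(samples):
    """samples: list of (feature_vector, surgeme_label) pairs."""
    grouped = defaultdict(list)
    for features, label in samples:
        grouped[label].append(features)
    # Per-class centroid: component-wise mean of the feature vectors.
    return {
        label: tuple(sum(col) / len(vecs) for col in zip(*vecs))
        for label, vecs in grouped.items()
    }

def classify(centroids, image_feats, kinematic_feats):
    """Concatenate the two modalities and return the nearest centroid's label."""
    query = tuple(image_feats) + tuple(kinematic_feats)
    return min(centroids, key=lambda label: dist(centroids[label], query))

# Toy training data: (image_feats + kinematic_feats, surgeme label).
train = [
    ((0.9, 0.1, 0.2, 0.0), "approach"),
    ((0.8, 0.2, 0.1, 0.1), "approach"),
    ((0.1, 0.9, 0.8, 0.9), "grasp"),
    ((0.2, 0.8, 0.9, 0.8), "grasp"),
]
centroids = build_centroids(train)
print(classify(centroids, (0.85, 0.15), (0.15, 0.05)))  # → approach
```

In this framing, the "transfer" experiments correspond to building the centroids on one platform (e.g. the simulated robot) and querying with features from another; the paper's gains suggest the image modality makes those feature spaces align better across platforms than kinematics alone.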