Autonomously Navigating a Surgical Tool Inside the Eye by Learning from Demonstration

Ji Woong Kim, Changyan He, Muller Urias, Peter Gehlbach, Gregory D Hager, Iulian Iordachita, Marin Kobilarov
{"title":"Autonomously Navigating a Surgical Tool Inside the Eye by Learning from Demonstration.","authors":"Ji Woong Kim,&nbsp;Changyan He,&nbsp;Muller Urias,&nbsp;Peter Gehlbach,&nbsp;Gregory D Hager,&nbsp;Iulian Iordachita,&nbsp;Marin Kobilarov","doi":"10.1109/icra40945.2020.9196537","DOIUrl":null,"url":null,"abstract":"<p><p>A fundamental challenge in retinal surgery is safely navigating a surgical tool to a desired goal position on the retinal surface while avoiding damage to surrounding tissues, a procedure that typically requires tens-of-microns accuracy. In practice, the surgeon relies on depth-estimation skills to localize the tool-tip with respect to the retina in order to perform the tool-navigation task, which can be prone to human error. To alleviate such uncertainty, prior work has introduced ways to assist the surgeon by estimating the tool-tip distance to the retina and providing haptic or auditory feedback. However, automating the tool-navigation task itself remains unsolved and largely unexplored. Such a capability, if reliably automated, could serve as a building block to streamline complex procedures and reduce the chance for tissue damage. Towards this end, we propose to automate the tool-navigation task by learning to mimic expert demonstrations of the task. Specifically, a deep network is trained to imitate expert trajectories toward various locations on the retina based on recorded visual servoing to a given goal specified by the user. The proposed autonomous navigation system is evaluated in simulation and in physical experiments using a silicone eye phantom. We show that the network can reliably navigate a needle surgical tool to various desired locations within 137 <i>μm</i> accuracy in physical experiments and 94 <i>μm</i> in simulation on average, and generalizes well to unseen situations such as in the presence of auxiliary surgical tools, variable eye backgrounds, and brightness conditions.</p>","PeriodicalId":73286,"journal":{"name":"IEEE International Conference on Robotics and Automation : ICRA : [proceedings]. IEEE International Conference on Robotics and Automation","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/icra40945.2020.9196537","citationCount":"20","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE International Conference on Robotics and Automation : ICRA : [proceedings]. IEEE International Conference on Robotics and Automation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icra40945.2020.9196537","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2020/9/15 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 20

Abstract

A fundamental challenge in retinal surgery is safely navigating a surgical tool to a desired goal position on the retinal surface while avoiding damage to surrounding tissues, a procedure that typically requires tens-of-microns accuracy. In practice, the surgeon relies on depth-estimation skills to localize the tool-tip with respect to the retina in order to perform the tool-navigation task, a process prone to human error. To alleviate such uncertainty, prior work has introduced ways to assist the surgeon by estimating the tool-tip distance to the retina and providing haptic or auditory feedback. However, automating the tool-navigation task itself remains unsolved and largely unexplored. Such a capability, if reliably automated, could serve as a building block to streamline complex procedures and reduce the chance of tissue damage. Towards this end, we propose to automate the tool-navigation task by learning to mimic expert demonstrations of the task. Specifically, a deep network is trained to imitate recorded expert trajectories that visually servo the tool toward various goal locations on the retina specified by the user. The proposed autonomous navigation system is evaluated in simulation and in physical experiments using a silicone eye phantom. We show that the network reliably navigates a needle surgical tool to various desired locations within 137 μm accuracy in physical experiments and 94 μm in simulation, on average, and generalizes well to unseen situations such as the presence of auxiliary surgical tools, variable eye backgrounds, and varying brightness conditions.
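
The approach the abstract describes is a form of behavioral cloning: a deep network is trained by supervised regression to reproduce the motions an expert produced while servoing the tool toward a goal. The following is a minimal sketch of that idea in PyTorch; the network architecture, the encoding of the goal as an extra image channel, and the motion-regression loss are all illustrative assumptions, not the paper's actual implementation.

```python
# Minimal behavioral-cloning sketch (illustrative; not the paper's exact model).
# Assumption: the policy maps a microscope image plus a user-specified goal
# (encoded here as an extra heatmap channel) to an incremental tool-tip motion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NavigationPolicy(nn.Module):
    """Maps a microscope image plus a goal heatmap to a tool-tip motion command."""

    def __init__(self):
        super().__init__()
        # 4 input channels: 3 (RGB microscope image) + 1 (goal-location heatmap).
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Output: a 3-DoF incremental tool-tip motion (x, y, z) in the robot frame.
        self.head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, image, goal_heatmap):
        x = torch.cat([image, goal_heatmap], dim=1)  # (B, 4, H, W)
        return self.head(self.encoder(x))            # (B, 3)

def bc_training_step(policy, optimizer, image, goal_heatmap, expert_motion):
    """One behavioral-cloning step: regress the expert's recorded motion."""
    optimizer.zero_grad()
    loss = F.mse_loss(policy(image, goal_heatmap), expert_motion)
    loss.backward()
    optimizer.step()
    return loss.item()
```

At deployment, such a policy would run in closed loop, consistent with the visual-servoing framing in the abstract: after each predicted motion is executed, a new image is acquired and fed back to the network, and navigation terminates once the tool-tip reaches the user-specified goal.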
