
2021 International Symposium on Medical Robotics (ISMR): Latest Publications

A Pneumatic Optical Soft Sensor for Fingertip Force Sensing
Pub Date : 2021-11-17 DOI: 10.1109/ismr48346.2021.9661559
Le Chen, Boshen Qi, Jun Sheng
This paper presents the design and development of a pneumatic optical soft sensor with a potential application to fingertip force sensing. To enable safe and successful interaction with delicate objects, it is important to measure contact force. In particular, prosthetic hands require force sensing at the fingertips so that they can apply appropriate force on objects when performing tasks in daily life. Emerging artificial skins usually feature delicate electronics that require special packaging to survive in an unstructured environment. In this project, we present a robust soft force sensor with a low profile and high compliance. It consists of a soft silicone base, an inflatable chamber, a hyperelastic membrane, and a photo interrupter. External force applied on the inflated membrane changes the light reflection inside the chamber and thus the signal output of the photo interrupter. The working principle of the developed sensor is modeled, and experimental studies are performed to evaluate its performance and calibrate the measurements.
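The abstract describes the sensing chain (membrane deformation alters light reflection, which alters the photo-interrupter output) but not the calibration model. A minimal sketch of such a voltage-to-force calibration, assuming hypothetical calibration pairs and a quadratic fit rather than the paper's actual model, could look like this:

```python
import numpy as np

# Hypothetical calibration pairs: photo-interrupter voltage (V) vs. reference force (N).
# In practice these would come from pressing known loads on the inflated membrane.
voltage = np.array([0.40, 0.55, 0.72, 0.90, 1.10, 1.32])   # assumed readings
force = np.array([0.00, 0.25, 0.50, 0.75, 1.00, 1.25])     # assumed reference forces

# Fit a low-order polynomial as the calibration curve (the order is an assumption).
calib = np.polynomial.Polynomial.fit(voltage, force, deg=2)

def estimate_force(v_reading: float) -> float:
    """Map a photo-interrupter voltage to an estimated fingertip force in newtons."""
    return float(calib(v_reading))

print(f"Estimated force at 0.8 V: {estimate_force(0.8):.2f} N")
```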
Citations: 0
Prototyping a sensorized tool wristband for objective skill assessment and feedback during training in minimally invasive surgery
Pub Date : 2021-11-17 DOI: 10.1109/ismr48346.2021.9661567
A. Mariani, Matteo Conti, S. Gandah, C. G. D. Paratesi, A. Menciassi
Skill assessment is a key component of practical surgical training. Towards an objective, automatic and cost-effective skill evaluation, this work introduces a preliminary sensorized wristband as a training add-on for standard minimally invasive surgical tools. The prototype presented here makes it possible to classify whether the tool is in the camera field of view and to provide feedback accordingly. A usability study with 14 non-medical participants was carried out using the da Vinci Research Kit. Results demonstrated the classification accuracy of the method and the usefulness of the feedback in minimizing the time spent with the tool out of the field of view. Embedding additional sensors and testing usability with surgical residents will pave the way for evolving this proof of concept into an advanced prototype for use in a real training setting.
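The abstract does not specify the wristband's sensing modality or classifier; as a loose illustration of the classify-then-feedback loop it describes, a hedged sketch with a hypothetical normalized sensor signal and a simple threshold rule might be:

```python
import numpy as np

def tool_in_view(signal_window: np.ndarray, threshold: float = 0.5) -> bool:
    """Hypothetical rule: declare the tool inside the camera field of view when the
    mean of a normalized wristband sensor window exceeds a threshold."""
    return float(np.mean(signal_window)) > threshold

def feedback(in_view: bool) -> str:
    # Placeholder for whatever cue (visual, haptic, audio) the wristband would drive.
    return "no cue" if in_view else "alert: tool out of the field of view"

window = np.array([0.2, 0.3, 0.25, 0.4])   # assumed normalized readings
print(feedback(tool_in_view(window)))
```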
Citations: 0
Robot Force Estimation with Learned Intraoperative Correction
Pub Date : 2021-11-17 DOI: 10.1109/ismr48346.2021.9661568
J. Wu, Nural Yilmaz, U. Tumerdem, P. Kazanzides
Measurement of environment interaction forces during robotic minimally-invasive surgery would enable haptic feedback to the surgeon, thereby addressing a long-standing limitation. Estimating this force from existing sensor data avoids the challenge of retrofitting systems with force sensors, but is difficult due to mechanical effects such as friction and compliance in the robot mechanism. We have previously shown that neural networks can be trained to estimate the internal robot joint torques, thereby enabling estimation of external forces on the da Vinci Research Kit (dVRK). In this work, we extend the method to estimate external Cartesian forces and torques, and also present a two-step approach to adapt to the specific surgical setup by compensating for forces due to the interactions between the instrument shaft and cannula seal and between the trocar and patient body. Experiments show that this approach provides estimates of external forces and torques within a mean root-mean-square error (RMSE) of 1.8 N and 0.1 Nm, respectively. Furthermore, the two-step approach can add as little as 5 minutes to the surgery setup time, with about 4 minutes to collect intraoperative training data and 1 minute to train the second-step network.
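The abstract's core idea, subtracting a learned prediction of internal joint torques from the measured torques and mapping the residual to a Cartesian wrench, can be illustrated with a short sketch. The Jacobian, torque values, and 7-DOF arm below are assumptions for illustration; the trained network itself is not reproduced here.

```python
import numpy as np

def external_wrench(tau_measured, tau_predicted, jacobian):
    """Residual principle (sketch, not the paper's network): subtract the learned
    prediction of internal joint torques (friction, compliance, gravity) from the
    measured torques, then map the residual through the pseudo-inverse of the
    Jacobian transpose to obtain the external Cartesian force/torque."""
    tau_residual = np.asarray(tau_measured) - np.asarray(tau_predicted)
    return np.linalg.pinv(np.asarray(jacobian).T) @ tau_residual

J = np.random.default_rng(0).standard_normal((6, 7))             # assumed 7-DOF arm
tau_meas = np.array([0.10, -0.20, 0.05, 0.30, -0.10, 0.02, 0.0])
tau_pred = np.array([0.08, -0.18, 0.04, 0.25, -0.09, 0.01, 0.0]) # e.g. network output
print(external_wrench(tau_meas, tau_pred, J))                    # 6-vector wrench
```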
Citations: 3
Homography-based Visual Servoing with Remote Center of Motion for Semi-autonomous Robotic Endoscope Manipulation
Pub Date : 2021-10-25 DOI: 10.1109/ISMR48346.2021.9661563
M. Huber, John Bason Mitchell, Ross Henry, S. Ourselin, Tom Kamiel Magda Vercauteren, C. Bergeles
The dominant visual servoing approaches in Minimally Invasive Surgery (MIS) follow single points or adapt the endoscope’s field of view based on the surgical tools’ distance. These methods rely on point positions with respect to the camera frame to infer a control policy. Deviating from the dominant methods, we formulate a robotic controller that allows for image-based visual servoing and requires neither explicit tool and camera positions nor any explicit image depth information. The proposed method relies on homography-based image registration, which changes the automation paradigm from a point-centric to a surgical-scene-centric approach. It simultaneously respects a programmable Remote Center of Motion (RCM). Our approach allows a surgeon to build a graph of desired views; once built, views can be manually selected and automatically servoed to, irrespective of changes in the robot-patient frame transformation. We evaluate our method on an abdominal phantom and provide an open-source ROS Moveit integration for use with any serial manipulator. A video is provided.
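To make the homography-based error concrete, the following hedged sketch estimates the homography between matched image points from the current and desired endoscope views with OpenCV and uses its deviation from the identity as an error signal. The point data and the note about the RCM projection are illustrative assumptions, not the authors' controller.

```python
import numpy as np
import cv2

def homography_error(pts_current: np.ndarray, pts_desired: np.ndarray) -> np.ndarray:
    """Estimate the homography mapping the current view onto the desired view and
    return its deviation from the identity; no tool poses or depth are required,
    only matched image points."""
    H, _ = cv2.findHomography(pts_current, pts_desired, cv2.RANSAC)
    H = H / H[2, 2]                 # fix the scale ambiguity
    return H - np.eye(3)            # zero when the two views coincide

rng = np.random.default_rng(1)
desired = rng.uniform(0, 640, size=(20, 2)).astype(np.float32)
current = desired + np.float32([5.0, -3.0])    # assumed small image shift
E = homography_error(current, desired)
# A proportional law would turn E into an endoscope velocity command, projected
# onto motions compatible with the programmable remote center of motion (RCM).
print(np.round(E, 3))
```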
Citations: 3
Toward Learning Context-Dependent Tasks from Demonstration for Tendon-Driven Surgical Robots
Pub Date : 2021-10-15 DOI: 10.1109/ismr48346.2021.9661534
Yixuan Huang, Michael Bentley, Tucker Hermans, A. Kuntz
Tendon-driven robots, a type of continuum robot, have the potential to reduce the invasiveness of surgery by enabling access to difficult-to-reach anatomical targets. In the future, the automation of surgical tasks for these robots may help reduce surgeon strain in the face of a rapidly growing population. However, directly encoding surgical tasks and their associated context for these robots is infeasible. In this work we take steps toward a system that learns to successfully perform context-dependent surgical tasks directly from a set of expert demonstrations. We present three models trained on the demonstrations, conditioned on a vector encoding the context of each demonstration. We then use these models to plan and execute motions for the tendon-driven robot that resemble the demonstrations, for novel contexts not seen in the training set. We demonstrate the efficacy of our method on three surgery-inspired tasks.
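The conditioning mechanism, feeding a context vector alongside the robot state, can be sketched as below. The concatenation-based architecture, dimensions, and tanh output are assumptions for illustration; the paper's three trained models are not reproduced.

```python
import numpy as np

class ContextConditionedPolicy:
    """Minimal sketch of conditioning a learned model on a context vector: the
    vector encoding the task context is concatenated with the robot state before
    being mapped to a motion command (a single linear layer here, as a stand-in)."""

    def __init__(self, state_dim: int, context_dim: int, action_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = 0.1 * rng.standard_normal((action_dim, state_dim + context_dim))
        self.b = np.zeros(action_dim)

    def act(self, state: np.ndarray, context: np.ndarray) -> np.ndarray:
        x = np.concatenate([state, context])
        return np.tanh(self.W @ x + self.b)    # bounded tendon command

policy = ContextConditionedPolicy(state_dim=6, context_dim=3, action_dim=4)
print(policy.act(np.zeros(6), np.array([1.0, 0.0, 0.0])))   # assumed one-hot context
```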
Citations: 2
Planning Sensing Sequences for Subsurface 3D Tumor Mapping
Pub Date : 2021-10-12 DOI: 10.1109/ismr48346.2021.9661488
Brian Y. Cho, Tucker Hermans, A. Kuntz
Surgical automation has the potential to enable increased precision and reduce the per-patient workload of overburdened human surgeons. An effective automation system must be able to sense and map subsurface anatomy, such as tumors, efficiently and accurately. In this work, we present a method that plans a sequence of sensing actions to map the 3D geometry of subsurface tumors. We leverage a sequential Bayesian Hilbert map to create a 3D probabilistic occupancy model that represents the likelihood that any given point in the anatomy is occupied by a tumor, conditioned on sensor readings. We iteratively update the map, utilizing Bayesian optimization to determine sensing poses that explore unsensed regions of anatomy and exploit the knowledge gained by previous sensing actions. We demonstrate our method’s efficiency and accuracy in three anatomical scenarios including a liver tumor scenario generated from a real patient’s CT scan. The results show that our proposed method significantly outperforms comparison methods in terms of efficiency while detecting subsurface tumors with high accuracy.
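The paper builds its occupancy model with a sequential Bayesian Hilbert map; as a much simpler stand-in for the same idea of conditioning occupancy on sensor readings, a log-odds update over a discretized volume might look like the following (grid resolution and sensor probabilities are assumptions):

```python
import numpy as np

def update_occupancy(log_odds, sensed_idx, hit, p_hit=0.8, p_miss=0.3):
    """Bayesian log-odds update for one sensed cell: raise the occupancy belief
    when the sensor detects tumor tissue (hit) and lower it otherwise."""
    p = p_hit if hit else p_miss
    log_odds[sensed_idx] += np.log(p / (1.0 - p))
    return log_odds

def occupancy_probability(log_odds):
    return 1.0 / (1.0 + np.exp(-log_odds))

grid = np.zeros((10, 10, 10))                            # assumed discretized anatomy
grid = update_occupancy(grid, (4, 5, 2), hit=True)       # sensing action found tumor
grid = update_occupancy(grid, (4, 5, 3), hit=False)      # sensing action found none
print(occupancy_probability(grid)[4, 5, 2], occupancy_probability(grid)[4, 5, 3])
```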
Citations: 1
Learning from Demonstrations for Autonomous Soft-tissue Retraction
Pub Date : 2021-10-01 DOI: 10.1109/ismr48346.2021.9661514
Ameya Pore, E. Tagliabue, M. Piccinelli, D. Dall’Alba, A. Casals, P. Fiorini
The current research focus in Robot-Assisted Minimally Invasive Surgery (RAMIS) is directed towards increasing the level of robot autonomy, to place surgeons in a supervisory position. Although Learning from Demonstrations (LfD) approaches are among the preferred ways for an autonomous surgical system to learn expert gestures, they require a high number of demonstrations and show poor generalization to the variable conditions of the surgical environment. In this work, we propose an LfD methodology based on Generative Adversarial Imitation Learning (GAIL) that is built on a Deep Reinforcement Learning (DRL) setting. GAIL combines generative adversarial networks, which learn the distribution of expert trajectories, with a DRL setting to ensure that the generated trajectories generalize while providing human-like behaviour. We consider automation of tissue retraction, a common RAMIS task that involves manipulating soft tissues to expose a region of interest. In our proposed methodology, a small set of expert trajectories can be acquired through the da Vinci Research Kit (dVRK) and used to train the proposed LfD method inside a simulated environment. Results indicate that our methodology can accomplish the tissue retraction task with human-like behaviour while being more sample-efficient than the baseline DRL method. Finally, we show that the learnt policies can be successfully transferred to the real robotic platform and deployed for soft tissue retraction on a synthetic phantom.
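The key GAIL ingredient, a discriminator whose output is turned into a reward that pushes the policy toward expert-like trajectories, can be sketched in a few lines. The -log(1 - D) surrogate and the example logits are standard GAIL conventions and illustrative values, not details taken from the paper:

```python
import numpy as np

def gail_reward(disc_logit: float) -> float:
    """Surrogate reward used in GAIL-style training: the policy is rewarded for
    state-action pairs the discriminator believes came from the expert.
    Here r = -log(1 - D(s, a)), with D the discriminator's expert probability."""
    d = 1.0 / (1.0 + np.exp(-disc_logit))   # sigmoid of the discriminator logit
    return float(-np.log(1.0 - d + 1e-8))

# Assumed discriminator logits for two policy rollout samples.
for logit in (-2.0, 1.5):
    print(f"logit {logit:+.1f} -> reward {gail_reward(logit):.3f}")
```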
Citations: 17
From Bench to Bedside: The First Live Robotic Surgery on the dVRK to Enable Remote Telesurgery with Motion Scaling
Pub Date : 2021-09-24 DOI: 10.1109/ismr48346.2021.9661536
Florian Richter, E. Funk, Won Seo Park, R. Orosco, Michael C. Yip
Innovations from surgical robotics research rarely translate to live surgery due to the significant difference between the lab and a live environment. Live environments require considerations that are often overlooked during the early stages of research, such as surgical staff, surgical procedure, and the challenges of working with live tissue. One such example is the da Vinci Research Kit (dVRK), which is used by over 40 robotics research groups and represents an open-sourced version of the da Vinci ® Surgical System. Despite the dVRK being available for nearly a decade and being the ideal candidate for translating research to practice on the more than 5,000 da Vinci ® Systems used in hospitals around the world, not one live surgery has been conducted with it. In this paper, we address the challenges, considerations, and solutions for translating surgical robotic research from bench to bedside. This is explained from the perspective of a remote telesurgery scenario in which motion scaling solutions previously experimented with in a lab setting are translated to a live pig surgery. This study presents results from the first ever use of a dVRK in a live animal and discusses how the surgical robotics community can approach translating its research to practice.
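Motion scaling itself is simple to state: the patient-side instrument moves by a scaled copy of the surgeon-side hand displacement, which damps tremor and the effect of jumps introduced by the network link. A minimal sketch, with an assumed scale factor rather than the value used in the reported surgery:

```python
import numpy as np

def scaled_increment(master_prev, master_now, scale=0.3):
    """Command the patient-side instrument with a scaled copy of the surgeon-side
    hand displacement (the 0.3 scale factor is an assumption for illustration)."""
    return scale * (np.asarray(master_now) - np.asarray(master_prev))

prev = np.array([0.10, 0.05, 0.20])     # assumed master positions, metres
now = np.array([0.11, 0.05, 0.19])
print(scaled_increment(prev, now))      # commanded slave displacement
```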
Citations: 4
Autonomous tissue retraction with a biomechanically informed logic based framework
Pub Date : 2021-09-06 DOI: 10.1109/ismr48346.2021.9661573
D. Meli, E. Tagliabue, D. Dall’Alba, P. Fiorini
Autonomy in robot-assisted surgery is essential to reduce surgeons’ cognitive load and eventually improve the overall surgical outcome. A key requirement for autonomy in a safety-critical scenario such as surgery lies in the generation of interpretable plans that rely on expert knowledge. Moreover, the Autonomous Robotic Surgical System (ARSS) must be able to reason about the dynamic and unpredictable anatomical environment and quickly adapt the surgical plan in case of unexpected situations. In this paper, we present a modular Framework for Robot-Assisted Surgery (FRAS) in deformable anatomical environments. Our framework integrates a logic module for task-level interpretable reasoning, a biomechanical simulation that complements data from real sensors, and a situation-awareness module for context interpretation. The framework’s performance is evaluated on simulated soft tissue retraction, a common surgical task performed to remove the tissue hiding a region of interest. Results show that the framework has the adaptability required to successfully accomplish the task, handling dynamic environmental conditions and possible failures, while guaranteeing the computational efficiency required in a real surgical scenario. The framework is made publicly available.
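As a toy stand-in for the task-level reasoning the framework performs, the sketch below maps an interpreted situation (in the paper, produced by the situation-awareness module with support from the biomechanical simulation) to the next interpretable action. The rules are illustrative assumptions, not the paper's logic program:

```python
from enum import Enum, auto

class Situation(Enum):
    TISSUE_GRASPED = auto()
    GRASP_LOST = auto()
    TARGET_EXPOSED = auto()

def next_action(situation: Situation) -> str:
    """Map an interpreted situation to the next interpretable action."""
    rules = {
        Situation.TISSUE_GRASPED: "retract tissue along the planned path",
        Situation.GRASP_LOST: "re-grasp the tissue and resume retraction",
        Situation.TARGET_EXPOSED: "hold the retraction and report success",
    }
    return rules[situation]

print(next_action(Situation.GRASP_LOST))
```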
Citations: 9
Toward Robotically Automated Femoral Vascular Access
Pub Date : 2021-07-06 DOI: 10.1109/ismr48346.2021.9661560
N. Zevallos, Evan Harber, Abhimanyu, K. Patel, Yizhu Gu, Kenny Sladick, F. Guyette, L. Weiss, M. Pinsky, H. Gómez, J. Galeotti, H. Choset
Advanced resuscitative technologies, such as Extra Corporeal Membrane Oxygenation (ECMO) cannulation or Resuscitative Endovascular Balloon Occlusion of the Aorta (REBOA), are technically difficult even for skilled medical personnel. This paper describes the core technologies that comprise a teleoperated system capable of obtaining femoral vascular access, an essential step in these procedures and a significant roadblock to their broader use in the field. These technologies include a kinematic manipulator, various sensing modalities, and a user interface. In addition, we evaluate our system on a surgical phantom as well as in in-vivo porcine experiments. To the best of our knowledge, these resulted in the first robot-assisted arterial catheterizations, a significant step towards our eventual goal of automatic catheter insertion through the Seldinger technique.
Citations: 7