
Latest publications in IEEE International Conference on Robotics and Automation : ICRA : [proceedings]. IEEE International Conference on Robotics and Automation

Cylinder Diameter Measurement for Rail Tankers Using 3D Laser Scanning Technology
Qi Chao, Shao Xuejun, Pan Qing, Wu Huijie
Rail tankers are the main means of transport for liquid goods and serve as measuring instruments for trade settlement. The cylinder diameter of a rail tanker must be measured for process quality control and to calculate its volume. In this paper, an automated system for non-tactile diameter measurement is presented, aiming to improve working efficiency and reduce the manual labor such measurements require. A 3D laser scanner is therefore selected and combined with a computing system. The scanner collects many points together with their coordinate information; these points constitute a point cloud that accurately reflects the tanker's shape. The computing system then processes the point cloud by establishing a digital model, calculating initial values for fitting, fitting the curved surface, etc., and displays the diameter value in a 3D diagram. To verify the performance of this method, the results are compared with those of the manual method.
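The core of the diameter computation, fitting a circular cross-section to scanned points, can be sketched with an algebraic least-squares circle fit. This is an illustrative stand-in, not the authors' exact surface-fitting pipeline; the `fit_circle` helper and the synthetic 1.4 m cross-section are assumptions for the example.

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit to 2-D cross-section points.

    Solves x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c), then recovers the
    center (-a/2, -b/2) and radius sqrt(a^2/4 + b^2/4 - c).
    """
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2, -b / 2
    radius = np.sqrt(cx**2 + cy**2 - c)
    return (cx, cy), radius

# Synthetic cross-section of a tanker shell: circle of radius 1.4 m
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.column_stack([2.0 + 1.4 * np.cos(theta), 0.5 + 1.4 * np.sin(theta)])
center, radius = fit_circle(pts)
print(round(2 * radius, 3))  # diameter: 2.8
```

In a full pipeline this fit would run per axial slice of the point cloud, and its result would seed a nonlinear cylinder fit.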
Citations: 0
Towards Bimanual Vein Cannulation: Preliminary Study of a Bimanual Robotic System With a Dual Force Constraint Controller.
Changyan He, Ali Ebrahimi, Emily Yang, Muller Urias, Yang Yang, Peter Gehlbach, Iulian Iordachita

Retinal vein cannulation is a promising approach for treating retinal vein occlusion that involves injecting medicine into the occluded vessel to dissolve the clot. The approach remains largely unexploited clinically due to surgeon limitations in detecting interaction forces between surgical tools and retinal tissue. In this paper, a dual force constraint controller for robot-assisted retinal surgery was presented to keep the tool-to-vessel forces and tool-to-sclera forces below prescribed thresholds. A cannulation tool and forceps with dual force-sensing capability were developed and used to measure force information fed into the robot controller, which was implemented on existing Steady Hand Eye Robot platforms. The robotic system facilitates retinal vein cannulation by allowing a user to grasp the target vessel with the forceps and then enter the vessel with the cannula. The system was evaluated on an eye phantom. The results showed that, while the eyeball was subjected to rotational disturbances, the proposed controller actuates the robotic manipulators to maintain the average tool-to-vessel force at 10.9 mN and 13.1 mN and the average tool-to-sclera force at 38.1 mN and 41.2 mN for the cannula and the forceps, respectively. Such small tool-to-tissue forces are acceptable to avoid retinal tissue injury. Additionally, two clinicians participated in a preliminary user study of the bimanual cannulation demonstrating that the operation time and tool-to-tissue forces are significantly decreased when using the bimanual robotic system as compared to freehand performance.
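The idea of keeping tool forces below prescribed thresholds can be illustrated with a minimal velocity-scaling rule: commanded motion slows as either measured force approaches its limit and stops at the threshold. This is a simplified one-dimensional sketch, not the paper's dual force constraint controller; the threshold values are illustrative, loosely based on the tens-of-mN range reported above.

```python
def constrain_velocity(v_cmd, f_vessel, f_sclera,
                       f_vessel_max=0.015, f_sclera_max=0.045):
    """Scale a commanded tool velocity by the smallest remaining force
    margin, so motion stops when either force reaches its threshold.

    Forces in newtons; thresholds are illustrative, not the paper's.
    """
    margin = min(1.0 - f_vessel / f_vessel_max,
                 1.0 - f_sclera / f_sclera_max)
    return v_cmd * max(0.0, margin)

full = constrain_velocity(1.0, 0.0, 0.0)        # no contact: full speed
stopped = constrain_velocity(1.0, 0.015, 0.0)   # vessel force at threshold
print(full, stopped)  # 1.0 0.0
```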

Citations: 2
A Fully Actuated Body-Mounted Robotic Assistant for MRI-Guided Low Back Pain Injection.
Gang Li, Niravkumar A Patel, Weiqiang Liu, Di Wu, Karun Sharma, Kevin Cleary, Jan Fritz, Iulian Iordachita

This paper reports the development of a fully actuated body-mounted robotic assistant for MRI-guided low back pain injection. The robot is designed with a 4-DOF needle alignment module and a 2-DOF remotely actuated needle driver module. The 6-DOF fully actuated robot can operate inside the scanner bore during imaging; hence, minimizing the need of moving the patient in or out of the scanner during the procedure, and thus potentially reducing the procedure time and streamlining the workflow. The robot is built with a lightweight and compact structure that can be attached directly to the patient's lower back using straps; therefore, attenuating the effect of patient motion by moving with the patient. The novel remote actuation design of the needle driver module with beaded chain transmission can reduce the weight and profile on the patient, as well as minimize the imaging degradation caused by the actuation electronics. The free space positioning accuracy of the system was evaluated with an optical tracking system, demonstrating the mean absolute errors (MAE) of the tip position to be 0.99±0.46 mm and orientation to be 0.99±0.65°. Qualitative imaging quality evaluation was performed on a human volunteer, revealing minimal visible image degradation that should not affect the procedure. The mounting stability of the system was assessed on a human volunteer, indicating the 3D position variation of target movement with respect to the robot frame to be less than 0.7 mm.
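The reported accuracy metric, mean absolute error (MAE) of the tip position, is the mean Euclidean distance between tracked and commanded tip positions. A minimal sketch with made-up sample data (not the paper's measurements):

```python
import numpy as np

def tip_mae(measured, commanded):
    """Mean absolute error of tip position: the mean Euclidean distance
    between tracked and commanded positions, in the same units (e.g. mm)."""
    return float(np.mean(np.linalg.norm(measured - commanded, axis=1)))

# Hypothetical sample: four tracked tips, each 1 mm from its command
commanded = np.zeros((4, 3))
tracked = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [-1.0, 0, 0]])
mae = tip_mae(tracked, commanded)
print(mae)  # 1.0
```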

Citations: 10
Large-Scale Volumetric Scene Reconstruction using LiDAR
Tilman Kuhner, Julius Kummerle
Large-scale 3D scene reconstruction is an important task in autonomous driving and other robotics applications, as an accurate representation of the environment is necessary to interact with it safely. Reconstructions are used for numerous tasks ranging from localization and mapping to planning. In robotics, volumetric depth fusion has been the method of choice for indoor applications since the emergence of commodity RGB-D cameras, owing to its robustness and high reconstruction quality. In this work we present an approach to volumetric depth fusion using LiDAR sensors, which are common on most autonomous cars, and a framework for large-scale mapping of urban areas that accounts for loop closures. Our method creates a highly detailed meshed representation of an urban area from recordings covering a distance of 3.7 km, in several minutes on consumer graphics hardware. The whole process is fully automated and needs no user intervention. We quantitatively evaluate our results on a real-world application, and we use synthetic data to investigate how the assumed sensor model affects reconstruction quality.
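The weighted-average update at the heart of volumetric depth fusion can be sketched for a single ray of voxels: each voxel accumulates a truncated signed distance to the measured surface. This is a generic TSDF-style illustration, not the authors' implementation; voxel spacing, truncation distance, and depth value are assumptions.

```python
import numpy as np

def fuse_ray(tsdf, weight, voxel_z, depth, trunc=0.2):
    """One depth-fusion update along a single ray of voxels: each voxel keeps
    a truncated signed distance to the measured surface, averaged over
    observations with a running weight (generic TSDF-style update)."""
    sdf = depth - voxel_z                 # positive in front of the surface
    valid = sdf >= -trunc                 # skip voxels far behind the surface
    d = np.clip(sdf, -trunc, trunc)
    tsdf[valid] = (tsdf[valid] * weight[valid] + d[valid]) / (weight[valid] + 1)
    weight[valid] += 1
    return tsdf, weight

voxel_z = np.array([0.0, 0.5, 1.0, 1.5])  # voxel centers along the ray (m)
tsdf, w = np.zeros(4), np.zeros(4)
tsdf, w = fuse_ray(tsdf, w, voxel_z, depth=1.3)
print(tsdf)  # sign change between z=1.0 and z=1.5 brackets the surface at 1.3
```

The surface mesh is then extracted at the zero crossing of the fused field, e.g. with marching cubes.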
Citations: 1
Mechanism and Model of a Soft Robot for Head Stabilization in Cancer Radiation Therapy.
Olalekan Ogunmolu, Xinmin Liu, Nicholas Gans, Rodney D Wiersma

We present a parallel robot mechanism and the constitutive laws that govern the deformation of its constituent soft actuators. Our ultimate goal is the real-time motion-correction of a patient's head deviation from a target pose where the soft actuators control the position of the patient's cranial region on a treatment machine. We describe the mechanism, derive the stress-strain constitutive laws for the individual actuators and the inverse kinematics that prescribes a given deformation, and then present simulation results that validate our mathematical formulation. Our results demonstrate deformations consistent with our radially symmetric displacement formulation under a finite elastic deformation framework.

Citations: 1
Unified Intrinsic and Extrinsic Camera and LiDAR Calibration under Uncertainties
Julius Kummerle, Tilman Kuhner
Many approaches for camera and LiDAR calibration have been presented in the literature, but none estimates all intrinsic and extrinsic parameters simultaneously, and therefore optimally in a probabilistic sense. In this work, we present a method to simultaneously estimate the intrinsic and extrinsic parameters of cameras and LiDARs in a unified problem. We derive a probabilistic formulation that enables seamless integration of different measurement types without hand-tuned weights. An arbitrary number of cameras and LiDARs can be calibrated simultaneously, measurements are not required to be time-synchronized, and the method is designed to work with any camera model. In evaluation, we show that additional LiDAR measurements significantly improve intrinsic camera calibration. Further, we show on real data that our method achieves state-of-the-art calibration precision with high reliability.
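The "no hand-tuned weights" point comes down to whitening: each residual is divided by its measurement noise standard deviation, so pixel-unit camera residuals and metric LiDAR residuals become dimensionless and directly comparable in one least-squares cost. A minimal sketch with illustrative sigmas (not values from the paper):

```python
import numpy as np

def whitened_residuals(r_cam_px, r_lidar_m, sigma_px=0.5, sigma_m=0.02):
    """Stack camera residuals (pixels) and LiDAR residuals (meters) into one
    least-squares residual vector by whitening each with its noise sigma,
    the probabilistic alternative to hand-tuned weights."""
    return np.concatenate([r_cam_px / sigma_px, r_lidar_m / sigma_m])

r = whitened_residuals(np.array([1.0]), np.array([0.04]))
print(r)  # [2. 2.] -> a 1 px error and a 4 cm error carry equal cost
```

An optimizer then minimizes the squared norm of this stacked vector over all intrinsic and extrinsic parameters jointly.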
Citations: 22
Contact Stability Analysis of Magnetically-Actuated Robotic Catheter Under Surface Motion.
Ran Hao, Tipakorn Greigarn, M Cenk Çavuşoğlu

Contact force quality is one of the most critical factors for safe and effective lesion formation during cardiac ablation. The contact force and contact stability play important roles in determining lesion size and creating a gap-free lesion. In this paper, the contact stability of a novel magnetic resonance imaging (MRI)-actuated robotic catheter under tissue surface motion is studied. The robotic catheter is modeled using a pseudo-rigid-body model, and a contact model under the surface constraint is provided. Two contact force control schemes are proposed to improve the contact stability of the catheter under heart surface motion, and their performance is evaluated in simulation.

Citations: 0
Design and Analysis of a Synergy-Inspired Three-Fingered Hand
Chen Wenrui, Xia Zhilan, Lu Jingwen, Zhao Zilong, W. Yao-nan
Hand synergy from neuroscience provides an effective tool for anthropomorphic hands to realize versatile grasping with simple planning and control. This paper aims to extend synergy-inspired design from anthropomorphic hands to multi-fingered robot hands. Synergy-inspired hands are not necessarily humanoid in morphology, but they exhibit primary characteristics and functions similar to the human hand. First, the biomechanics of hand synergy is investigated: three biomechanical characteristics of human hand synergy are explored as a basis for the mechanical simplification of robot hands. Second, according to these synergy characteristics, a three-fingered hand is designed, and its kinematic model is developed for the analysis of typical grasping and manipulation functions. Finally, a prototype is developed, and preliminary grasping experiments validate the effectiveness of the design and analysis.
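The classic way to quantify hand synergies, which this line of work builds on, is principal component analysis of recorded grasp postures: a few leading components explain most of the joint-angle variance. A sketch on synthetic postures (the data and the PCA route are illustrative, not the paper's biomechanical analysis):

```python
import numpy as np

def extract_synergies(joint_angles, k=2):
    """Extract the first k postural synergies (principal components) from a
    matrix of grasp postures (rows: grasps, columns: joint angles).
    Returns the synergy vectors and the variance fraction they explain."""
    centered = joint_angles - joint_angles.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = float((s[:k] ** 2).sum() / (s ** 2).sum())
    return vt[:k], explained

rng = np.random.default_rng(0)
# 50 synthetic 8-joint postures dominated by one coupled open/close pattern
base = np.outer(rng.standard_normal(50), np.linspace(0.2, 1.0, 8))
postures = base + 0.01 * rng.standard_normal((50, 8))
syn, frac = extract_synergies(postures, k=1)
print(frac > 0.95)  # True: one synergy explains almost all posture variance
```

A hand mechanism can then couple its joints along such dominant directions, trading some dexterity for far fewer actuators.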
Citations: 4
High-Resolution Optical Fiber Shape Sensing of Continuum Robots: A Comparative Study.
Frederic Monet, Shahriar Sefati, Pierre Lorre, Arthur Poiffaut, Samuel Kadoury, Mehran Armand, Iulian Iordachita, Raman Kashyap

Flexible medical instruments, such as Continuum Dexterous Manipulators (CDM), constitute an important class of tools for minimally invasive surgery. Accurate CDM shape reconstruction during surgery is of great importance, yet a challenging task. Fiber Bragg grating (FBG) sensors have demonstrated great potential in shape sensing and consequently tip position estimation of CDMs. However, due to the limited number of sensing locations, these sensors can only accurately recover basic shapes, and become unreliable in the presence of obstacles or many inflection points such as s-bends. Optical Frequency Domain Reflectometry (OFDR), on the other hand, can achieve much higher spatial resolution, and can therefore accurately reconstruct more complex shapes. Additionally, Random Optical Gratings by Ultraviolet laser Exposure (ROGUEs) can be written in the fibers to increase the signal to noise ratio of the sensors. In this comparison study, the tip position error is used as a metric to compare both FBG and OFDR shape reconstructions for a 35 mm long CDM developed for orthopedic surgeries, using a pair of stereo cameras as ground truth. Three sets of experiments were conducted to measure the accuracy of each technique in various surgical scenarios. The tip position error for the OFDR (and FBG) technique was found to be 0.32 (0.83) mm in a free-bending environment, 0.41 (0.80) mm when interacting with obstacles, and 0.45 (2.27) mm in s-bending. Moreover, the maximum tip position error remains sub-millimeter for the OFDR reconstruction, while it reaches 3.40 mm for FBG reconstruction. These results suggest that OFDR sensing is a cost-effective, robust, and more accurate alternative to FBG sensors for reconstructing complex CDM shapes.
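Both FBG and OFDR sensing ultimately yield curvature samples along the fiber, from which shape is recovered by integration. Here is a planar (2-D) sketch of that reconstruction step on a constant-curvature test case; the paper's actual processing (3-D, multi-core fibers) is more involved.

```python
import numpy as np

def reconstruct_shape(curvature, ds):
    """Planar shape reconstruction from curvature sampled along arc length:
    integrate curvature to bending angle, then angle to (x, y) positions
    (forward-Euler integration; denser sampling gives a more exact shape)."""
    theta = np.concatenate([[0.0], np.cumsum(curvature) * ds])
    x = np.concatenate([[0.0], np.cumsum(np.cos(theta[:-1])) * ds])
    y = np.concatenate([[0.0], np.cumsum(np.sin(theta[:-1])) * ds])
    return x, y

# Constant curvature 1/R with R = 35 mm, over a quarter circle
R, n = 35.0, 1000
ds = (np.pi / 2) * R / n                  # arc-length step (mm)
x, y = reconstruct_shape(np.full(n, 1.0 / R), ds)
print(round(x[-1], 1), round(y[-1], 1))   # tip near (R, R): 35.0 35.0
```

The OFDR advantage in the abstract maps directly onto this sketch: many more curvature samples per unit length means the integration tracks complex bends far more faithfully.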

High-Resolution Optical Fiber Shape Sensing of Continuum Robots: A Comparative Study. Frederic Monet, Shahriar Sefati, Pierre Lorre, Arthur Poiffaut, Samuel Kadoury, Mehran Armand, Iulian Iordachita, Raman Kashyap. doi: 10.1109/icra40945.2020.9197454
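Both FBG and OFDR shape sensing recover the tool shape by integrating a measured curvature profile along the fiber's arc length, and the abstract's point that a limited number of sensing locations fails on s-bends can be illustrated with a minimal planar sketch. The curvature profile, fiber length partition, and sampling counts below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def reconstruct_shape(s, kappa):
    """Integrate a planar curvature profile kappa(s) over arc length s:
    theta(s) = integral of kappa, then (x, y) = integral of (cos theta, sin theta).
    Uses trapezoidal cumulative sums; returns the reconstructed x, y arrays."""
    theta = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(s))))
    x = np.concatenate(([0.0], np.cumsum(0.5 * (np.cos(theta[1:]) + np.cos(theta[:-1])) * np.diff(s))))
    y = np.concatenate(([0.0], np.cumsum(0.5 * (np.sin(theta[1:]) + np.sin(theta[:-1])) * np.diff(s))))
    return x, y

# S-shaped curvature profile over a 35 mm fiber (units: mm and 1/mm);
# one sign change in kappa gives a single inflection, i.e. an s-bend.
L = 35.0
s_true = np.linspace(0.0, L, 1000)
kappa_true = 0.05 * np.sin(2 * np.pi * s_true / L)

# Dense reconstruction taken as ground truth (OFDR-like spatial resolution).
x_true, y_true = reconstruct_shape(s_true, kappa_true)

# Sparse sampling at a handful of locations (FBG-like: e.g. 4 gratings).
s_fbg = np.linspace(0.0, L, 4)
kappa_fbg = np.interp(s_fbg, s_true, kappa_true)
x_fbg, y_fbg = reconstruct_shape(s_fbg, kappa_fbg)

tip_err_fbg = float(np.hypot(x_fbg[-1] - x_true[-1], y_fbg[-1] - y_true[-1]))
print(f"sparse-sampling tip error: {tip_err_fbg:.3f} mm")
```

With only a few curvature samples, the integration misses most of the s-bend and the tip estimate drifts by several millimetres, whereas denser sampling tracks the true shape closely, which is the qualitative effect the abstract quantifies.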
Citations: 22
Autonomously Navigating a Surgical Tool Inside the Eye by Learning from Demonstration.
Ji Woong Kim, Changyan He, Muller Urias, Peter Gehlbach, Gregory D Hager, Iulian Iordachita, Marin Kobilarov

A fundamental challenge in retinal surgery is safely navigating a surgical tool to a desired goal position on the retinal surface while avoiding damage to surrounding tissues, a procedure that typically requires tens-of-microns accuracy. In practice, the surgeon relies on depth-estimation skills to localize the tool-tip with respect to the retina in order to perform the tool-navigation task, which can be prone to human error. To alleviate such uncertainty, prior work has introduced ways to assist the surgeon by estimating the tool-tip distance to the retina and providing haptic or auditory feedback. However, automating the tool-navigation task itself remains unsolved and largely unexplored. Such a capability, if reliably automated, could serve as a building block to streamline complex procedures and reduce the chance for tissue damage. Towards this end, we propose to automate the tool-navigation task by learning to mimic expert demonstrations of the task. Specifically, a deep network is trained to imitate expert trajectories toward various locations on the retina based on recorded visual servoing to a given goal specified by the user. The proposed autonomous navigation system is evaluated in simulation and in physical experiments using a silicone eye phantom. We show that the network can reliably navigate a needle surgical tool to various desired locations within 137 μm accuracy in physical experiments and 94 μm in simulation on average, and generalizes well to unseen situations such as in the presence of auxiliary surgical tools, variable eye backgrounds, and brightness conditions.

doi: 10.1109/icra40945.2020.9196537
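The learning-from-demonstration idea described in the abstract — fit a policy to recorded expert demonstrations, then roll it out toward a user-specified goal — can be sketched with a deliberately simplified setup. A linear least-squares policy stands in for the paper's deep network, and the expert model, state encoding, and step count are illustrative assumptions, not the authors' system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert: given state = (tool-tip position, goal position),
# move a fixed fraction of the way toward the goal each step.
def expert_action(state):
    tip, goal = state[:2], state[2:]
    return 0.1 * (goal - tip)

# Collect (state, action) demonstration pairs over random states.
states = rng.uniform(-1.0, 1.0, size=(500, 4))
actions = np.array([expert_action(s) for s in states])

# Behavior cloning with a linear policy: fit W minimizing ||states @ W - actions||^2.
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

# Roll out the learned policy from a start position toward a goal.
tip, goal = np.array([0.8, -0.5]), np.array([-0.3, 0.6])
for _ in range(100):
    tip = tip + np.concatenate([tip, goal]) @ W
print(f"final distance to goal: {np.linalg.norm(tip - goal):.4f}")
```

Because the expert here is itself linear, least squares recovers it and the rollout converges to the goal; the paper's contribution is doing the analogous imitation from visual input with a deep network, where the navigation accuracy figures above apply.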
Citations: 20
Journal
IEEE International Conference on Robotics and Automation : ICRA : [proceedings]. IEEE International Conference on Robotics and Automation
Copyright © 2023 Book学术 All rights reserved.