Pub Date: 2024-05-01. Epub Date: 2024-08-08. DOI: 10.1109/icra57147.2024.10610071
Albert J Lee, Curt A Laubscher, T Kevin Best, Robert D Gregg
Research in powered prosthesis control has explored the use of impedance-based control algorithms due to their biomimetic capabilities and intuitive structure. Modern impedance controllers feature parameters that smoothly vary over gait phase and task according to a data-driven model. However, these recent efforts only use continuous impedance control during stance and instead utilize discrete transition logic to switch to kinematic control during swing, necessitating two separate models for the different parts of the stride. In contrast, this paper presents a controller that uses smooth impedance parameter trajectories throughout the gait cycle, unifying the stance and swing periods under a single, continuous model. Furthermore, this paper proposes a basis model to represent intertask relationships in the impedance parameters, a strategy that has previously been shown to improve model accuracy over classic linear interpolation methods. In the proposed controller, a weighted sum of Fourier series is used to model the impedance parameters of each joint as continuous functions of gait cycle progression and task. Fourier series coefficients are determined via convex optimization such that the controller best reproduces the joint torques and kinematics in a reference able-bodied dataset. Experiments with a powered knee-ankle prosthesis show that this simpler, unified model produces competitive results when compared to a more complex hybrid impedance-kinematic model over varying walking speeds and inclines.
{"title":"Towards a Unified Approach for Continuously-Variable Impedance Control of Powered Prosthetic Legs over Walking Speeds and Inclines.","authors":"Albert J Lee, Curt A Laubscher, T Kevin Best, Robert D Gregg","doi":"10.1109/icra57147.2024.10610071","DOIUrl":"10.1109/icra57147.2024.10610071","url":null,"abstract":"<p><p>Research in powered prosthesis control has explored the use of impedance-based control algorithms due to their biomimetic capabilities and intuitive structure. Modern impedance controllers feature parameters that smoothly vary over gait phase and task according to a data-driven model. However, these recent efforts only use continuous impedance control during stance and instead utilize discrete transition logic to switch to kinematic control during swing, necessitating two separate models for the different parts of the stride. In contrast, this paper presents a controller that uses smooth impedance parameter trajectories throughout the gait, unifying the stance and swing periods under a single, continuous model. Furthermore, this paper proposes a basis model to represent intertask relationships in the impedance parameters-a strategy that has previously been shown to improve model accuracy over classic linear interpolation methods. In the proposed controller, a weighted sum of Fourier series is used to model the impedance parameters of each joint as continuous functions of gait cycle progression and task. Fourier series coefficients are determined via convex optimization such that the controller best reproduces the joint torques and kinematics in a reference able-bodied dataset. Experiments with a powered knee-ankle prosthesis show that this simpler, unified model produces competitive results when compared to a more complex hybrid impedance-kinematic model over varying walking speeds and inclines.</p>","PeriodicalId":73286,"journal":{"name":"IEEE International Conference on Robotics and Automation : ICRA : [proceedings]. IEEE International Conference on Robotics and Automation","volume":"2024 ","pages":"944-950"},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11426229/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142333821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In minimally invasive procedures such as biopsies and prostate cancer brachytherapy, accurate needle placement remains challenging due to limitations in current tracking methods related to interference, reliability, resolution or image contrast. This often leads to frequent needle adjustments and reinsertions. To address these shortcomings, we introduce an optimized needle shape-sensing method using a fully distributed grating-based sensor. The proposed method uses simple trigonometric and geometric modeling of the fiber using optical frequency domain reflectometry (OFDR), without requiring prior knowledge of tissue properties or needle deflection shape and amplitude. Our optimization process includes a reproducible calibration process and a novel tip curvature compensation method. We validate our approach through experiments in artificial isotropic and inhomogeneous animal tissues, establishing ground truth using 3D stereo vision and cone beam computed tomography (CBCT) acquisitions, respectively. Our results yield an average RMSE ranging from 0.58 ± 0.21 mm to 0.66 ± 0.20 mm depending on the chosen spatial resolution, achieving the submillimeter accuracy required for interventional procedures.
{"title":"Fully Distributed Shape Sensing of a Flexible Surgical Needle Using Optical Frequency Domain Reflectometry for Prostate Interventions.","authors":"Jacynthe Francoeur, Dimitri Lezcano, Yernar Zhetpissov, Raman Kashyap, Iulian Iordachita, Samuel Kadoury","doi":"10.1109/icra57147.2024.10610256","DOIUrl":"10.1109/icra57147.2024.10610256","url":null,"abstract":"<p><p>In minimally invasive procedures such as biopsies and prostate cancer brachytherapy, accurate needle placement remains challenging due to limitations in current tracking methods related to interference, reliability, resolution or image contrast. This often leads to frequent needle adjustments and reinsertions. To address these shortcomings, we introduce an optimized needle shape-sensing method using a fully distributed grating-based sensor. The proposed method uses simple trigonometric and geometric modeling of the fiber using optical frequency domain reflectometry (OFDR), without requiring prior knowledge of tissue properties or needle deflection shape and amplitude. Our optimization process includes a reproducible calibration process and a novel tip curvature compensation method. We validate our approach through experiments in artificial isotropic and inhomogeneous animal tissues, establishing ground truth using 3D stereo vision and cone beam computed tomography (CBCT) acquisitions, respectively. Our results yield an average RMSE ranging from 0.58 ± 0.21 mm to 0.66 ± 0.20 mm depending on the chosen spatial resolution, achieving the submillimeter accuracy required for interventional procedures.</p>","PeriodicalId":73286,"journal":{"name":"IEEE International Conference on Robotics and Automation : ICRA : [proceedings]. IEEE International Conference on Robotics and Automation","volume":"2024 ","pages":"17594-17601"},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11507468/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142514165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-01. Epub Date: 2024-08-08. DOI: 10.1109/icra57147.2024.10610807
Simon Pannek, Shervin Dehghani, Michael Sommersperger, Peiyao Zhang, Peter Gehlbach, M Ali Nasseri, Iulian Iordachita, Nassir Navab
Recent advancements in age-related macular degeneration treatments necessitate precision delivery into the subretinal space, emphasizing minimally invasive procedures targeting the retinal pigment epithelium (RPE)-Bruch's membrane complex without causing trauma. Even for skilled surgeons, the inherent hand tremors during manual surgery can jeopardize the safety of these critical interventions. This has fostered the evolution of robotic systems designed to prevent such tremors. These robots are enhanced by fiber Bragg grating (FBG) sensors, which sense the small force interactions between the surgical instruments and retinal tissue. To enable the community to design algorithms taking advantage of such force feedback data, this paper focuses on the need to provide a specialized dataset, integrating optical coherence tomography (OCT) imaging together with the aforementioned force data. We introduce a unique dataset, integrating force sensing data synchronized with OCT B-scan images, derived from a sophisticated setup involving robotic assistance and OCT-integrated microscopes. Furthermore, we present a neural network model for image-based force estimation to demonstrate the dataset's applicability.
{"title":"Exploring the Needle Tip Interaction Force with Retinal Tissue Deformation in Vitreoretinal Surgery.","authors":"Simon Pannek, Shervin Dehghani, Michael Sommersperger, Peiyao Zhang, Peter Gehlbach, M Ali Nasseri, Iulian Iordachita, Nassir Navab","doi":"10.1109/icra57147.2024.10610807","DOIUrl":"https://doi.org/10.1109/icra57147.2024.10610807","url":null,"abstract":"<p><p>Recent advancements in age-related macular degeneration treatments necessitate precision delivery into the subretinal space, emphasizing minimally invasive procedures targeting the retinal pigment epithelium (RPE)-Bruch's membrane complex without causing trauma. Even for skilled surgeons, the inherent hand tremors during manual surgery can jeopardize the safety of these critical interventions. This has fostered the evolution of robotic systems designed to prevent such tremors. These robots are enhanced by FBG sensors, which sense the small force interactions between the surgical instruments and retinal tissue. To enable the community to design algorithms taking advantage of such force feedback data, this paper focuses on the need to provide a specialized dataset, integrating optical coherence tomography (OCT) imaging together with the aforementioned force data. We introduce a unique dataset, integrating force sensing data synchronized with OCT B-scan images, derived from a sophisticated setup involving robotic assistance and OCT integrated microscopes. Furthermore, we present a neural network model for image-based force estimation to demonstrate the dataset's applicability.</p>","PeriodicalId":73286,"journal":{"name":"IEEE International Conference on Robotics and Automation : ICRA : [proceedings]. IEEE International Conference on Robotics and Automation","volume":"2024 ","pages":"16999-17005"},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11501085/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142514164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-01. Epub Date: 2024-08-08. DOI: 10.1109/icra57147.2024.10611084
Mojtaba Esfandiari, Ji Woong Kim, Botao Zhao, Golchehr Amirkhani, Muhammad Hadi, Peter Gehlbach, Russell H Taylor, Iulian Iordachita
A surgeon's physiological hand tremor can significantly impact the outcome of delicate and precise retinal surgery, such as retinal vein cannulation (RVC) and epiretinal membrane peeling. Robot-assisted eye surgery technology provides ophthalmologists with advanced capabilities such as hand tremor cancellation, hand motion scaling, and safety constraints that enable them to perform these otherwise challenging and high-risk surgeries with high precision and safety. The Steady-Hand Eye Robot (SHER) in cooperative control mode can filter out the surgeon's hand tremor; however, another important safety feature, minimizing the contact force between the surgical instrument and the sclera surface to avoid tissue damage, cannot be met in this control mode. Also, other capabilities, such as hand motion scaling and haptic feedback, require a teleoperation control framework. In this work, for the first time, we implemented a teleoperation control mode incorporating an adaptive sclera force control algorithm, using a PHANTOM Omni haptic device and a force-sensing surgical instrument equipped with Fiber Bragg Grating (FBG) sensors attached to the SHER 2.1 end-effector. This adaptive sclera force control algorithm allows the robot to dynamically minimize the tool-sclera contact force. Moreover, for the first time, we compared the performance of the proposed adaptive teleoperation mode with the cooperative mode by conducting a vessel-following experiment inside an eye phantom under a microscope.
{"title":"Cooperative vs. Teleoperation Control of the Steady Hand Eye Robot with Adaptive Sclera Force Control: A Comparative Study.","authors":"Mojtaba Esfandiari, Ji Woong Kim, Botao Zhao, Golchehr Amirkhani, Muhammad Hadi, Peter Gehlbach, Russell H Taylor, Iulian Iordachita","doi":"10.1109/icra57147.2024.10611084","DOIUrl":"10.1109/icra57147.2024.10611084","url":null,"abstract":"<p><p>A surgeon's physiological hand tremor can significantly impact the outcome of delicate and precise retinal surgery, such as retinal vein cannulation (RVC) and epiretinal membrane peeling. Robot-assisted eye surgery technology provides ophthalmologists with advanced capabilities such as hand tremor cancellation, hand motion scaling, and safety constraints that enable them to perform these otherwise challenging and high-risk surgeries with high precision and safety. Steady-Hand Eye Robot (SHER) with cooperative control mode can filter out surgeon's hand tremor, yet another important safety feature, that is, minimizing the contact force between the surgical instrument and sclera surface for avoiding tissue damage cannot be met in this control mode. Also, other capabilities, such as hand motion scaling and haptic feedback, require a teleoperation control framework. In this work, for the first time, we implemented a teleoperation control mode incorporated with an adaptive sclera force control algorithm using a PHANTOM Omni haptic device and a force-sensing surgical instrument equipped with Fiber Bragg Grating (FBG) sensors attached to the SHER 2.1 end-effector. This adaptive sclera force control algorithm allows the robot to dynamically minimize the tool-sclera contact force. Moreover, for the first time, we compared the performance of the proposed adaptive teleoperation mode with the cooperative mode by conducting a vessel-following experiment inside an eye phantom under a microscope.</p>","PeriodicalId":73286,"journal":{"name":"IEEE International Conference on Robotics and Automation : ICRA : [proceedings]. IEEE International Conference on Robotics and Automation","volume":"2024 ","pages":"8209-8215"},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11486505/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142482356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Percutaneous needle insertions are commonly performed for diagnostic and therapeutic purposes as an effective alternative to more invasive surgical procedures. However, the outcome of needle-based approaches relies heavily on the accuracy of needle placement, which remains a challenge even with robot assistance and medical imaging guidance due to needle deflection caused by contact with soft tissues. In this paper, we present a novel mechanics-based 2D bevel-tip needle model that can account for the effect of nonlinear strain-dependent behavior of biological soft tissues under compression. Real-time finite element simulation allows multiple control inputs along the length of the needle with full three-degree-of-freedom (DOF) planar needle motions. Cross-validation studies using custom-designed multi-layer tissue phantoms as well as heterogeneous chicken breast tissues result in less than 1 mm in-plane errors for insertions reaching depths of up to 61 mm, demonstrating the validity and generalizability of the proposed method.
{"title":"Bevel-Tip Needle Deflection Modeling, Simulation, and Validation in Multi-Layer Tissues.","authors":"Yanzhou Wang, Lidia Al-Zogbi, Guanyun Liu, Jiawei Liu, Junichi Tokuda, Axel Krieger, Iulian Iordachita","doi":"10.1109/icra57147.2024.10610110","DOIUrl":"https://doi.org/10.1109/icra57147.2024.10610110","url":null,"abstract":"<p><p>Percutaneous needle insertions are commonly performed for diagnostic and therapeutic purposes as an effective alternative to more invasive surgical procedures. However, the outcome of needle-based approaches relies heavily on the accuracy of needle placement, which remains a challenge even with robot assistance and medical imaging guidance due to needle deflection caused by contact with soft tissues. In this paper, we present a novel mechanics-based 2D bevel-tip needle model that can account for the effect of nonlinear strain-dependent behavior of biological soft tissues under compression. Real-time finite element simulation allows multiple control inputs along the length of the needle with full three-degree-of-freedom (DOF) planar needle motions. Cross-validation studies using custom-designed multi-layer tissue phantoms as well as heterogeneous chicken breast tissues result in less than 1mm in-plane errors for insertions reaching depths of up to 61 mm, demonstrating the validity and generalizability of the proposed method.</p>","PeriodicalId":73286,"journal":{"name":"IEEE International Conference on Robotics and Automation : ICRA : [proceedings]. IEEE International Conference on Robotics and Automation","volume":"2024 ","pages":"11598-11604"},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11494283/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142514163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-05-01. Epub Date: 2023-07-04. DOI: 10.1109/icra48891.2023.10161102
Emily G Keller, Curt A Laubscher, Robert D Gregg
Many powered prosthetic devices use load cells to detect ground interaction forces and gait events. These sensors introduce additional weight and cost in the device. Recent proprioceptive actuators enable an algebraic relationship between actuator torques and ground contact forces. This paper presents a proprioceptive force sensing paradigm that estimates ground reaction forces to detect gait events without a load cell. A floating body dynamic model is obtained with constraints at the center of pressure representing foot-ground interaction. Constraint forces are derived to estimate ground reaction forces and, subsequently, the timing of gait events. A treadmill experiment is conducted with a powered knee-ankle prosthesis used by an able-bodied subject walking at various speeds and slopes. Results show accurate gait event timing, with pooled data showing heel strike detection lagging by only 6.7 ± 7.2 ms and toe off detection leading by 30.4 ± 11.0 ms compared to values obtained from the load cell. These results establish proof of concept for predicting gait events without a load cell in powered prostheses with proprioceptive actuators.
{"title":"Gait Event Detection with Proprioceptive Force Sensing in a Powered Knee-Ankle Prosthesis: Validation over Walking Speeds and Slopes.","authors":"Emily G Keller, Curt A Laubscher, Robert D Gregg","doi":"10.1109/icra48891.2023.10161102","DOIUrl":"10.1109/icra48891.2023.10161102","url":null,"abstract":"<p><p>Many powered prosthetic devices use load cells to detect ground interaction forces and gait events. These sensors introduce additional weight and cost in the device. Recent proprioceptive actuators enable an algebraic relationship between actuator torques and ground contact forces. This paper presents a proprioceptive force sensing paradigm which estimates ground reaction forces as a solution to detect gait events without a load cell. A floating body dynamic model is obtained with constraints at the center of pressure representing foot-ground interaction. Constraint forces are derived to estimate ground reaction forces and subsequently timing of gait events. A treadmill experiment is conducted with a powered knee-ankle prosthesis used by an able-bodied subject walking at various speeds and slopes. Results show accurate gait event timing, with pooled data showing heel strike detection lagging by only 6.7 ± 7.2 ms and toe off detection leading by 30.4 ± 11.0 ms compared to values obtained from the load cell. These results establish proof of concept for predicting gait events without a load cell in powered prostheses with proprioceptive actuators.</p>","PeriodicalId":73286,"journal":{"name":"IEEE International Conference on Robotics and Automation : ICRA : [proceedings]. IEEE International Conference on Robotics and Automation","volume":"2023 ","pages":"10464-10470"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10414786/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10006237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-05-01. Epub Date: 2022-07-12. DOI: 10.1109/icra46639.2022.9811364
Shervin Dehghani, Michael Sommersperger, Junjie Yang, Mehrdad Salehi, Benjamin Busam, Kai Huang, Peter Gehlbach, Iulian Iordachita, Nassir Navab, M Ali Nasseri
Retinal surgery is a complex medical procedure that requires exceptional expertise and dexterity. For this purpose, several robotic platforms are currently under development to enable or improve the outcome of microsurgical tasks. Since the control of such robots is often designed for navigation inside the eye in proximity to the retina, successful trocar docking and insertion of the instrument into the eye represents an additional cognitive effort, and is therefore one of the open challenges in robotic retinal surgery. To address this challenge, we present a platform for autonomous trocar docking that combines computer vision and a robotic setup. Inspired by the Cuban Colibri (hummingbird) aligning its beak to a flower using only vision, we mount a camera onto the end-effector of a robotic system. By estimating the position and pose of the trocar, the robot is able to autonomously align and navigate the instrument towards the Trocar Entry Point (TEP) and finally perform the insertion. Our experiments show that the proposed method is able to accurately estimate the position and pose of the trocar and achieve repeatable autonomous docking. The aim of this work is to reduce the complexity of the robotic setup prior to the surgical task and therefore increase the intuitiveness of the system integration into clinical workflow.
{"title":"ColibriDoc: An Eye-in-Hand Autonomous Trocar Docking System.","authors":"Shervin Dehghani, Michael Sommersperger, Junjie Yang, Mehrdad Salehi, Benjamin Busam, Kai Huang, Peter Gehlbach, Iulian Iordachita, Nassir Navab, M Ali Nasseri","doi":"10.1109/icra46639.2022.9811364","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9811364","url":null,"abstract":"<p><p>Retinal surgery is a complex medical procedure that requires exceptional expertise and dexterity. For this purpose, several robotic platforms are currently under development to enable or improve the outcome of microsurgical tasks. Since the control of such robots is often designed for navigation inside the eye in proximity to the retina, successful trocar docking and insertion of the instrument into the eye represents an additional cognitive effort, and is therefore one of the open challenges in robotic retinal surgery. For this purpose, we present a platform for autonomous trocar docking that combines computer vision and a robotic setup. Inspired by the Cuban Colibri (hummingbird) aligning its beak to a flower using only vision, we mount a camera onto the endeffector of a robotic system. By estimating the position and pose of the trocar, the robot is able to autonomously align and navigate the instrument towards the Trocar Entry Point (TEP) and finally perform the insertion. Our experiments show that the proposed method is able to accurately estimate the position and pose of the trocar and achieve repeatable autonomous docking. The aim of this work is to reduce the complexity of the robotic setup prior to the surgical task and therefore, increase the intuitiveness of the system integration into clinical workflow.</p>","PeriodicalId":73286,"journal":{"name":"IEEE International Conference on Robotics and Automation : ICRA : [proceedings]. IEEE International Conference on Robotics and Automation","volume":" ","pages":"7717-7723"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9484558/pdf/nihms-1836539.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40372293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-05-01. DOI: 10.1109/icra46639.2022.9811578
Ross J Cortino, Edgar Bolívar-Nieto, T Kevin Best, Robert D Gregg
Passive prostheses cannot provide the net positive work required at the knee and ankle for step-over stair ascent. Powered prostheses can provide this net positive work, but user synchronization of joint motion and power input are critical to enabling natural stair ascent gaits. In this work, we build on previous phase variable-based control methods for walking and propose a stair ascent controller driven by the motion of the user's residual thigh. We use reference kinematics from an able-bodied dataset to produce knee and ankle joint trajectories parameterized by gait phase. We redefine the gait cycle to begin at the point of maximum hip flexion instead of heel strike to improve the phase estimate. Able-bodied bypass adapter experiments demonstrate that the phase variable controller replicates normative able-bodied kinematic trajectories with a root mean squared error of 12.66° and 2.64° for the knee and ankle, respectively. The knee and ankle joints provided on average 0.39 J/kg and 0.21 J/kg per stride, compared to the normative averages of 0.34 J/kg and 0.21 J/kg, respectively. Thus, this controller allows powered knee-ankle prostheses to perform net positive mechanical work to assist stair ascent.
{"title":"Stair Ascent Phase-Variable Control of a Powered Knee-Ankle Prosthesis.","authors":"Ross J Cortino, Edgar Bolívar-Nieto, T Kevin Best, Robert D Gregg","doi":"10.1109/icra46639.2022.9811578","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9811578","url":null,"abstract":"<p><p>Passive prostheses cannot provide the net positive work required at the knee and ankle for step-over stair ascent. Powered prostheses can provide this net positive work, but user synchronization of joint motion and power input are critical to enabling natural stair ascent gaits. In this work, we build on previous phase variable-based control methods for walking and propose a stair ascent controller driven by the motion of the user's residual thigh. We use reference kinematics from an able-bodied dataset to produce knee and ankle joint trajectories parameterized by gait phase. We redefine the gait cycle to begin at the point of maximum hip flexion instead of heel strike to improve the phase estimate. Able-bodied bypass adapter experiments demonstrate that the phase variable controller replicates normative able-bodied kinematic trajectories with a root mean squared error of 12.66° and 2.64° for the knee and ankle, respectively. The knee and ankle joints provided on average 0.39 J/kg and 0.21 J/kg per stride, compared to the normative averages of 0.34 J/kg and 0.21 J/kg, respectively. Thus, this controller allows powered knee-ankle prostheses to perform net positive mechanical work to assist stair ascent.</p>","PeriodicalId":73286,"journal":{"name":"IEEE International Conference on Robotics and Automation : ICRA : [proceedings]. IEEE International Conference on Robotics and Automation","volume":"2022 ","pages":"5673-5678"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9432737/pdf/nihms-1785127.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9771788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-05-01. Epub Date: 2022-07-12. DOI: 10.1109/icra46639.2022.9812257
Xingtong Liu, Zhaoshuo Li, Masaru Ishii, Gregory D Hager, Russell H Taylor, Mathias Unberath
In endoscopy, many applications (e.g., surgical navigation) would benefit from a real-time method that can simultaneously track the endoscope and reconstruct the dense 3D geometry of the observed anatomy from a monocular endoscopic video. To this end, we develop a Simultaneous Localization and Mapping system by combining learning-based appearance priors, optimizable geometry priors, and factor graph optimization. The appearance and geometry priors are explicitly learned in an end-to-end differentiable training pipeline to master the task of pair-wise image alignment, one of the core components of the SLAM system. In our experiments, the proposed SLAM system is shown to robustly handle the challenges of texture scarceness and illumination variation that are commonly seen in endoscopy. The system generalizes well to unseen endoscopes and subjects and performs favorably compared with a state-of-the-art feature-based SLAM system. The code repository is available at https://github.com/lppllppl920/SAGE-SLAM.git.
{"title":"SAGE: SLAM with Appearance and Geometry Prior for Endoscopy.","authors":"Xingtong Liu, Zhaoshuo Li, Masaru Ishii, Gregory D Hager, Russell H Taylor, Mathias Unberath","doi":"10.1109/icra46639.2022.9812257","DOIUrl":"10.1109/icra46639.2022.9812257","url":null,"abstract":"<p><p>In endoscopy, many applications (<i>e.g</i>., surgical navigation) would benefit from a real-time method that can simultaneously track the endoscope and reconstruct the dense 3D geometry of the observed anatomy from a monocular endoscopic video. To this end, we develop a Simultaneous Localization and Mapping system by combining the learning-based appearance and optimizable geometry priors and factor graph optimization. The appearance and geometry priors are explicitly learned in an end-to-end differentiable training pipeline to master the task of pair-wise image alignment, one of the core components of the SLAM system. In our experiments, the proposed SLAM system is shown to robustly handle the challenges of texture scarceness and illumination variation that are commonly seen in endoscopy. The system generalizes well to unseen endoscopes and subjects and performs favorably compared with a state-of-the-art feature-based SLAM system. The code repository is available at https://github.com/lppllppl920/SAGE-SLAM.git.</p>","PeriodicalId":73286,"journal":{"name":"IEEE International Conference on Robotics and Automation : ICRA : [proceedings]. IEEE International Conference on Robotics and Automation","volume":"2022 ","pages":"5587-5593"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10018746/pdf/nihms-1873358.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9156195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-05-01. Epub Date: 2022-07-12. DOI: 10.1109/icra46639.2022.9811932
Jingxi Xu, Cassie Meeker, Ava Chen, Lauren Winterbottom, Michaela Fraser, Sangwoo Park, Lynne M Weber, Mitchell Miya, Dawn Nilsen, Joel Stein, Matei Ciocarlie
In order to provide therapy in a functional context, controls for wearable robotic orthoses need to be robust and intuitive. We have previously introduced an intuitive, user-driven, EMG-based method to operate a robotic hand orthosis, but the process of training a control that is robust to concept drift (changes in the input signal) places a substantial burden on the user. In this paper, we explore semi-supervised learning as a paradigm for controlling a powered hand orthosis for stroke subjects. To the best of our knowledge, this is the first use of semi-supervised learning for an orthotic application. Specifically, we propose a disagreement-based semi-supervision algorithm for handling intrasession concept drift based on multimodal ipsilateral sensing. We evaluate the performance of our algorithm on data collected from five stroke subjects. Our results show that the proposed algorithm helps the device adapt to intrasession drift using unlabeled data and reduces the training burden placed on the user. We also validate the feasibility of our proposed algorithm with a functional task; in these experiments, two subjects successfully completed multiple instances of a pick-and-handover task.
{"title":"Adaptive Semi-Supervised Intent Inferral to Control a Powered Hand Orthosis for Stroke.","authors":"Jingxi Xu, Cassie Meeker, Ava Chen, Lauren Winterbottom, Michaela Fraser, Sangwoo Park, Lynne M Weber, Mitchell Miya, Dawn Nilsen, Joel Stein, Matei Ciocarlie","doi":"10.1109/icra46639.2022.9811932","DOIUrl":"10.1109/icra46639.2022.9811932","url":null,"abstract":"<p><p>In order to provide therapy in a functional context, controls for wearable robotic orthoses need to be robust and intuitive. We have previously introduced an intuitive, user-driven, EMG-based method to operate a robotic hand orthosis, but the process of training a control that is robust to concept drift (changes in the input signal) places a substantial burden on the user. In this paper, we explore semi-supervised learning as a paradigm for controlling a powered hand orthosis for stroke subjects. To the best of our knowledge, this is the first use of semi-supervised learning for an orthotic application. Specifically, we propose a disagreement-based semi-supervision algorithm for handling intrasession concept drift based on multimodal ipsilateral sensing. We evaluate the performance of our algorithm on data collected from five stroke subjects. Our results show that the proposed algorithm helps the device adapt to intrasession drift using unlabeled data and reduces the training burden placed on the user. We also validate the feasibility of our proposed algorithm with a functional task; in these experiments, two subjects successfully completed multiple instances of a pick-and-handover task.</p>","PeriodicalId":73286,"journal":{"name":"IEEE International Conference on Robotics and Automation : ICRA : [proceedings]. IEEE International Conference on Robotics and Automation","volume":"2022 ","pages":"8097-8103"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10181849/pdf/nihms-1847263.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9470406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}