A Reinforcement Learning Approach for Real-Time Articulated Surgical Instrument 3-D Pose Reconstruction
Pub Date: 2024-09-19 | DOI: 10.1109/TMRB.2024.3464089
Ke Fan;Ziyang Chen;Qiaoling Liu;Giancarlo Ferrigno;Elena De Momi
3D pose reconstruction of surgical instruments from images is a critical component of environment perception in robotic minimally invasive surgery (RMIS). Current deep learning methods rely on complex networks to improve accuracy, making real-time implementation difficult. Moreover, unlike a single rigid body, surgical instruments have an articulated structure, which makes annotating 3D poses more challenging. In this paper, we present a novel approach that formulates the 3D pose reconstruction of articulated surgical instruments as a Markov Decision Process (MDP). A Reinforcement Learning (RL) agent uses 2D image labels to control a virtual articulated skeleton so that it reproduces the 3D pose of the real surgical instrument. First, a convolutional neural network estimates the 2D pixel positions of the joint nodes of the surgical instrument skeleton. The agent then controls the 3D virtual articulated skeleton to align the projections of its joint nodes on the image plane with those detected in the real image. The proposed method is validated on a semi-synthetic dataset with precise 3D pose labels and on two real datasets, demonstrating its accuracy and efficacy. The results indicate the potential of our method for real-time 3D pose reconstruction of articulated surgical instruments in RMIS, addressing the challenges posed by low-texture surfaces and articulated structures.
{"title":"A Reinforcement Learning Approach for Real-Time Articulated Surgical Instrument 3-D Pose Reconstruction","authors":"Ke Fan;Ziyang Chen;Qiaoling Liu;Giancarlo Ferrigno;Elena De Momi","doi":"10.1109/TMRB.2024.3464089","DOIUrl":"https://doi.org/10.1109/TMRB.2024.3464089","url":null,"abstract":"3D pose reconstruction of surgical instruments from images stands as a critical component in environment perception within robotic minimally invasive surgery (RMIS). The current deep learning methods rely on complex networks to enhance accuracy, making real-time implementation difficult. Moreover, diverging from a singular rigid body, surgical instruments exhibit an articulation structure, making the annotation of 3D poses more challenging. In this paper, we present a novel approach to formulate the 3D pose reconstruction of articulated surgical instruments as a Markov Decision Process (MDP). A Reinforcement Learning (RL) agent employs 2D image labels to control a virtual articulated skeleton to reproduce the 3D pose of the real surgical instrument. Firstly, a convolutional neural network is used to estimate the 2D pixel positions of joint nodes of the surgical instrument skeleton. Subsequently, the agent controls the 3D virtual articulated skeleton to align its joint nodes’ projections on the image plane with those in the real image. Validation of our proposed method is conducted using a semi-synthetic dataset with precise 3D pose labels and two real datasets, demonstrating the accuracy and efficacy of our approach. The results indicate the potential of our method in achieving real-time 3D pose reconstruction for articulated surgical instruments in the context of RMIS, addressing the challenges posed by low-texture surfaces and articulated structures.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":"6 4","pages":"1458-1467"},"PeriodicalIF":3.4,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142600437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a High-Precision and Large-Range FBG-Based Sensor Inspired by a Crank-Slider Mechanism for Wearable Measurement of Human Knee Joint Angles
Pub Date: 2024-09-19 | DOI: 10.1109/TMRB.2024.3464096
Kaifeng Wang;Aofei Tian;Yupeng Hao;Chengzhi Hu;Chaoyang Shi
This article proposes a fiber Bragg grating (FBG)-based angle sensor with a wide measurement range and high precision for human knee joint measurement. The sensor mainly comprises an angle-to-linear-displacement conversion cam, a conversion flexure inspired by a crank-slider mechanism, an optical fiber embedded with an FBG element, and a sensor package. The cam transforms the wide-range knee angle input into a vertical linear displacement output. The conversion flexure further converts this vertical displacement into a reduced horizontal displacement/stretch applied to the optical fiber, with a motion scale ratio of 6:1. The flexure features a symmetrical structure to improve stability and suppress hysteresis. The fiber is suspended on the flexure's output beams in a two-point bonding configuration. Both theoretical analysis and finite element method (FEM) simulations confirmed a linear relationship between the input angle and the fiber strain. Static and dynamic experiments verified the performance of the proposed sensor, demonstrating a sensitivity of 62.03 pm/° with a small linearity error of 1.36% over [0°, 140°]. The root mean square errors (RMSE) were 0.72° and 0.84° for angular velocities of 80°/s and 350°/s, respectively. Wearable experiments during sitting and walking validated the effectiveness of the proposed sensor.
{"title":"Development of a High-Precision and Large-Range FBG-Based Sensor Inspired by a Crank-Slider Mechanism for Wearable Measurement of Human Knee Joint Angles","authors":"Kaifeng Wang;Aofei Tian;Yupeng Hao;Chengzhi Hu;Chaoyang Shi","doi":"10.1109/TMRB.2024.3464096","DOIUrl":"https://doi.org/10.1109/TMRB.2024.3464096","url":null,"abstract":"This article proposes a fiber Bragg grating (FBG) based angle sensor with an extensive measurement range and high precision for human knee joint measurement. The proposed sensor mainly comprises an angle-linear displacement conversion cam, a crank-slider mechanism-inspired conversion flexure, an optical fiber embedded with an FBG element, and a sensor package. The cam transforms the wide-range knee angle input into vertical linear displacement output. The conversion flexure further converts such vertical displacement into a reduced horizontal displacement/stretching applied to the optical fiber with a motion scale ratio of 6:1. The flexure design features a symmetrical structure to improve stability and depress hysteresis. The fiber is suspended on the flexure’s output beams with a two-point pasting configuration. Both theory analysis and finite element method (FEM)-based simulations revealed the linear relationship between the input angle and the fiber strain. Static and dynamic experiments have verified the performance of the proposed sensor, demonstrating a sensitivity of 62.03 pm/° with a small linearity error of 1.36% within [0, 140°]. The root mean square errors (RMSE) were 0.72° and 0.84° for angle velocities of 80°/s and 350°/s, respectively. Wearable experiments during sitting and walking have been performed to validate the effectiveness of the proposed sensor.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":"6 4","pages":"1688-1698"},"PeriodicalIF":3.4,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142600324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-19 | DOI: 10.1109/TMRB.2024.3464095
R. C. Vijayan;N. M. Sheth;J. Wei;K. Venkataraman;D. Ghanem;B. Shafiq;J. H. Siewerdsen;W. Zbijewski;G. Li;K. Cleary;A. Uneri
Robot-assisted orthopaedic joint reduction offers enhanced precision and control across multiple axes of motion, enabling precise realignment according to predefined plans. However, the high forces encountered may induce unintended anatomical motion and flex mechanical components. To address this, this work presents an approach that uses 2D fluoroscopic imaging to verify and readjust the 3D reduction path by tracking deviations from the planned trajectory. The proposed method involves a 3D-2D registration algorithm that uses a pair of fluoroscopic images together with prior models of each body in the radiographic scene. The registration objective is formulated to couple and constrain multiple object poses (fibula, tibia, talus, and robot end effector) and incorporates novel methods for automatic view and hyperparameter selection to improve robustness. The algorithms were refined through cadaver studies and evaluated in a preclinical trial employing a robotic system to manipulate a dislocated fibula. Studies with cadaveric specimens highlighted the joint-specific formulation's high registration accuracy ($\Delta_{x} = 0.3 \pm 1$
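The coupled multi-body formulation the abstract describes can be illustrated with a simple two-view reprojection cost. The sketch below is a hedged approximation: it scores landmark reprojection error against known 2D observations, whereas the paper registers prior models to fluoroscopic image content, and it omits the automatic view and hyperparameter selection. All function names and the small-angle pose parameterization are assumptions for illustration.

```python
# Hypothetical two-view, multi-body 3D-2D cost: the poses of several rigid
# bodies (e.g., fibula, tibia, talus, end effector) are scored jointly so
# one optimizer step moves all coupled poses together.
import numpy as np

def se3(params):
    """6-vector (rx, ry, rz, tx, ty, tz) -> 4x4 pose; small-angle rotation."""
    rx, ry, rz, tx, ty, tz = params
    R = np.eye(3) + np.array([[0.0, -rz,  ry],
                              [ rz, 0.0, -rx],
                              [-ry,  rx, 0.0]])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = (tx, ty, tz)
    return T

def reproject(P, pose, pts):
    """Project Nx3 model landmarks through a 3x4 fluoroscopic matrix P."""
    pts_h = np.c_[pts, np.ones(len(pts))]
    uvw = (P @ (pose @ pts_h.T)).T
    return uvw[:, :2] / uvw[:, 2:3]

def coupled_cost(all_params, bodies, P1, P2, obs1, obs2):
    """Sum of reprojection errors over all bodies in both views; body i
    takes parameters all_params[6*i : 6*i+6]."""
    cost = 0.0
    for i, pts in enumerate(bodies):
        pose = se3(all_params[6 * i: 6 * i + 6])
        cost += np.sum((reproject(P1, pose, pts) - obs1[i]) ** 2)
        cost += np.sum((reproject(P2, pose, pts) - obs2[i]) ** 2)
    return cost
```

A generic optimizer (e.g., scipy.optimize.minimize over the stacked per-body parameter vector) can then descend this cost, updating all coupled poses in a single solve, which is the constraint-sharing idea behind the joint-specific formulation.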