Pub Date: 2025-04-01 | DOI: 10.1109/TMRB.2025.3556539
Towards Autonomous Cardiac Ultrasound Scanning: Combining Physician Expertise and Machine Intelligence
Mingrui Hao;Pengcheng Zhang;Xilong Hou;Xiaolin Gu;Xiao-Hu Zhou;Zeng-Guang Hou;Chen Chen;Shuangyi Wang
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 2, pp. 782-792
Echocardiography is a prevalent modality for both heart disease diagnosis and procedural guidance. However, conventional echocardiographic examination relies heavily on the manual dexterity of the sonographer, leading to suboptimal repeatability. Despite extensive exploration of robot-assisted ultrasound systems, raising the level of automation in examinations and making these robotic platforms practical for primary use remain formidable challenges in the field. In this study, we introduce an automatic acquisition method for cardiac views using a novel ultrasound robot. The method autonomously traverses and scans target positions and angular ranges to search for and identify the target cardiac views. First, the target positions and angular ranges were derived from a professional sonographer’s practice on 14 cases. Then, an automatic traversal scanning method was designed that integrates visual guidance, human-machine collaboration, and path planning within the framework of a novel parallel-mechanism-based ultrasound robot. Finally, we explored deep metric learning to search for the target ultrasound images in the traversed ultrasound video. On the test set, the target-view search algorithm achieved a mAP of 98.8% and a Rank-1 accuracy of 98.23%. The method was validated on data from five subjects, acquiring the standard parasternal long-axis and short-axis cardiac views essential for diagnosis and demonstrating its effectiveness.
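As a concrete illustration of the Rank-1 and mAP metrics reported above, a minimal retrieval evaluation might look like the following (the 2-D embeddings, labels, and similarity choice are invented for illustration; the paper's deep metric learning network is not reproduced here):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def rank1_and_map(queries, gallery):
    """queries, gallery: lists of (embedding, label) pairs.
    Returns (Rank-1 accuracy, mean average precision)."""
    hits, ap_sum = 0, 0.0
    for q_emb, q_lbl in queries:
        # Rank the gallery by descending similarity to the query
        ranked = sorted(gallery, key=lambda g: -cosine(q_emb, g[0]))
        labels = [lbl for _, lbl in ranked]
        hits += labels[0] == q_lbl
        # Average precision over all relevant gallery items
        rel, precisions = 0, []
        for i, lbl in enumerate(labels, start=1):
            if lbl == q_lbl:
                rel += 1
                precisions.append(rel / i)
        ap_sum += sum(precisions) / max(rel, 1)
    return hits / len(queries), ap_sum / len(queries)

# Toy 2-D "embeddings" for two cardiac views (PLAX, PSAX)
gallery = [([1.0, 0.0], "PLAX"), ([0.9, 0.1], "PLAX"), ([0.0, 1.0], "PSAX")]
queries = [([1.0, 0.05], "PLAX"), ([0.8, 0.6], "PSAX")]
rank1, mean_ap = rank1_and_map(queries, gallery)
```

Here the second query is deliberately ambiguous, so it is missed at rank 1 and its average precision drops to 1/3, giving Rank-1 = 0.5 and mAP = 2/3 on this toy set.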
Pub Date: 2025-04-01 | DOI: 10.1109/TMRB.2025.3556549
A Calibration Procedure for Robotic Ultrasound Systems
João Oliveira;Rui Moura Coelho;Herculano Carvalho;Jorge Martins
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 2, pp. 793-801
With the rise of collaborative robots, there has been growing interest in integrating ultrasound imaging with robotic systems in medical applications. This integration provides the system with real-time visual information, enabling the robot to move based on relevant anatomical features. For this visual information to be used accurately, however, the pose of the ultrasound image must be calibrated in the robot’s coordinate frame. This paper presents a calibration technique that eliminates the need for external trackers and minimizes sources of error. The proposed method scans a straight-wire phantom of unknown arrangement to constrain the set of possible solutions. By optimizing a cost function based on the wires’ straightness, we estimate the pose of the B-scan in the robot flange coordinate frame precisely and reliably, without significant limitations such as complex scanning trajectories. The technique achieves an average precision of 0.8 mm and an accuracy of 1.72 mm at a scaling factor of 0.2778 mm/pixel.
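The wire-straightness cost at the heart of this kind of calibration can be sketched simply: points reconstructed from many B-scans of one straight wire should be collinear in 3D, so a natural cost is the sum of squared perpendicular distances to the wire's best-fit line. The sketch below is our own illustration of that idea, not the authors' implementation:

```python
import numpy as np

def straightness_cost(points):
    """Sum of squared perpendicular distances of 3D points to their
    total-least-squares best-fit line (smaller = straighter)."""
    P = np.asarray(points, dtype=float)
    centroid = P.mean(axis=0)
    Q = P - centroid
    # Dominant direction of the point cloud = first right-singular vector
    _, _, vt = np.linalg.svd(Q, full_matrices=False)
    d = vt[0]
    # Residual = component of each centered point orthogonal to the line
    proj = Q @ d
    residuals = Q - np.outer(proj, d)
    return float((residuals ** 2).sum())
```

In a full calibration, a candidate image-to-flange pose maps segmented wire pixels into 3D; summing this cost over all wires and minimizing it over the pose parameters yields the calibration.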
Pub Date: 2025-03-28 | DOI: 10.1109/TMRB.2025.3573397
Development of an Exosuit Knee Brace for Anterior Cruciate Ligament Injury
Jing-Sheng Li;Elaine Lowinger;Morgan E. Leslie;Geoffrey S. Balkman;Roy Kornbluh;William D. Lack;Patrick M. Aubin;Thomas Libby
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 3, pp. 1164-1174
The anterior cruciate ligament (ACL) is a crucial passive stabilizer of the knee joint, and ACL injuries compromise the knee’s anteroposterior stability. Many rigid knee braces with hinge designs claim to provide stabilization forces for the knee; however, current clinical practice guidelines recommend against using a functional knee brace during the return-to-activity and return-to-play phases. In this study, we aimed to design a hinge-less exosuit knee brace that combines flexible materials to improve comfort at the brace-body interface while providing dynamic support forces during walking tasks via an offboard actuation system. We recruited 11 participants: 5 for design tests and 6 for walking tests. The active exosuit knee brace produced up to 88.9 N of cable force during a portion of the stance phase, with a rise time of 111-150 ms across load settings, and under 10 N during the swing phase. Comfort scores were high (>7) during most walking tests. The range of knee flexion was reduced by about 5 degrees when the exosuit knee brace was activated during walking, and brace migration was within 10 mm in most cases.
Pub Date: 2025-03-26 | DOI: 10.1109/TMRB.2025.3573411
fNIRS-Based Action Detection for Lower Limb Amputees in Sit-to-Stand Tasks
Ruisen Huang;Wenze Shang;Yongchen Li;Guanglin Li;Xinyu Wu;Fei Gao
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 3, pp. 1248-1262
Traditional transfemoral lower-limb prostheses often overlook the intuitive neuronal connections between the brain and prosthetic actuators. This study bridges this gap by integrating functional near-infrared spectroscopy (fNIRS) into real-time lower-limb prosthesis control, with preliminary clinical tests on an above-knee amputee, enabling more reliable volitional control of the prosthesis. Cerebral hemodynamic responses were measured using a 56-channel fNIRS headset, and lower-limb kinematics were recorded with an optical motion capture system. Artifacts in the fNIRS signals were mitigated using short-separation regression, and eight features were extracted from the fNIRS data. ANOVA identified the mean, slope, and entropy as the top-performing features across all subjects. Among eight classifiers tested, k-nearest neighbor (KNN) emerged as the most accurate. We recruited eleven healthy subjects and one unilateral transfemoral amputee. Classification rates surpassed 97% for all classes, with an average accuracy of $99.86 \pm 0.01$%. Notably, the amputee exhibited higher precision, sensitivity, and F1 scores than the healthy subjects. Maximum temporal latencies for healthy subjects were $120.00 \pm 49.40$ ms during sit-down and $119.09 \pm 45.71$ ms during stand-up, while the amputee showed maximum temporal latencies of 90 ms and 190 ms, respectively. This study marks the first application of fNIRS-based action detection in sit-to-stand tasks for transfemoral amputees, underscoring the potential of fNIRS in neuroprosthesis control.
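A minimal sketch of the KNN classification stage, assuming hand-picked per-channel features such as mean, slope, and entropy (the feature values below are invented for illustration; the paper's 56-channel pipeline is not reproduced):

```python
import math
from collections import Counter

def knn_predict(train, x, k=3):
    """Hand-rolled k-nearest-neighbour vote.
    train: list of (feature_vector, label); x: feature_vector to classify."""
    nearest = sorted(train, key=lambda t: math.dist(t[0], x))[:k]
    votes = Counter(lbl for _, lbl in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D feature vectors (e.g., mean and slope of a hemodynamic response)
train = [
    ([0.0, 0.0], "sit-down"), ([0.1, 0.0], "sit-down"), ([0.0, 0.1], "sit-down"),
    ([1.0, 1.0], "stand-up"), ([0.9, 1.1], "stand-up"), ([1.1, 0.9], "stand-up"),
]
```

For example, `knn_predict(train, [0.05, 0.05])` votes among the three nearest training points and returns "sit-down".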
Pub Date: 2025-03-26 | DOI: 10.1109/TMRB.2025.3573436
Shape Sensing for Continuum Robots Based on MWCNTs-PDMS Flexible Resistive Strain Sensors
Lizhi Pan;Tianze Zhang;Yiding Cheng;Zhikang Ma;Jianmin Li
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 3, pp. 1286-1296
Continuum robots show great potential in the medical field owing to their theoretically infinite degrees of freedom, but they still face challenges in shape sensing. This study addresses shape sensing for continuum robots with a low-cost flexible resistive strain sensor based on multi-walled carbon nanotubes (MWCNTs) and polydimethylsiloxane (PDMS). The sensor exhibits high linearity over a bending range of 0°-65°, offers 100% elongation at break and excellent mechanical properties, and shows good biocompatibility and environmental adaptability. A $3 \times 3$ array of these sensors is attached to the continuum surgical robot to realize shape sensing. The bending angle of the continuum at each position is determined from each sensor’s resistance change, the positions of five key points are obtained from these angles, and the shape is reconstructed by fitting through the points. Experimental results show that the proposed sensor accurately senses various bending shapes of the continuum within the stable linear bending range, with the distal-end position error fluctuating around 2% of the overall shape length. This study provides a new solution for shape sensing of continuum surgical robots and demonstrates strong application potential.
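The reconstruction step — chaining per-segment bend angles into key-point positions — can be illustrated in the plane with a rigid-link simplification (the segment length, planarity, and link model are our assumptions; the paper fits a smooth shape through the recovered points):

```python
import math

def keypoints_from_angles(angles_deg, seg_len=10.0):
    """Chain rigid links in the plane: each segment rotates the heading by
    its sensed bend angle, and key points accumulate segment by segment."""
    x, y, heading = 0.0, 0.0, 0.0  # start at the base, pointing along +x
    pts = [(0.0, 0.0)]
    for a in angles_deg:
        heading += math.radians(a)
        x += seg_len * math.cos(heading)
        y += seg_len * math.sin(heading)
        pts.append((x, y))
    return pts
```

With all angles zero the chain is straight (tip at `(n * seg_len, 0)`); a single 90° bend at the first segment sends the remaining links along +y.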
Pub Date: 2025-03-26 | DOI: 10.1109/TMRB.2025.3573412
A Novel Robotic Guiding Sheath With Variable Stiffness Capability Based on Conductive Graphene and Thermoplastic Polymer
Yuesheng Qu;Chengyu Zhang;Chi Zhang;Siyang Zuo
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 3, pp. 1275-1285
In robot-assisted endoscopic procedures, a guiding sheath must be advanced flexibly through anatomic paths via natural orifices while maintaining sufficient rigidity to serve as a base for the surgical instruments used in dexterous diagnostic and therapeutic tasks. Developing a guiding sheath with both flexible access and variable stiffness is therefore imperative and challenging. To address these challenges, we developed a novel robotic guiding sheath whose stiffness is managed by a newly fabricated variable stiffness coating layer (VSCL), only 1 mm thick and composed of polycaprolactone (PCL) and conductive graphene. An active water heating and cooling mechanism regulates the temperature of the VSCL and thereby the stiffness of the guiding sheath. In detailed performance evaluations, the guiding sheath achieved a fixed-end bending stiffness gain of 25.76 and a mid-span bending stiffness gain of 25.01, reaching a fixed-end bending stiffness of 739.63 N/m and a mid-span bending stiffness of 5779.33 N/m. Fast switching between the rigid and flexible states was realized, with switching times of 6.5 s (rigid to flexible) and 10.0 s (flexible to rigid). The sheath was also validated in phantom and ex-vivo experiments, which demonstrated its capability to traverse the tortuous digestive tract in the flexible state; in the rigid state, it significantly improved instrument manipulation stability during the ex-vivo trials. These results demonstrate the potential clinical value of the system.
Pub Date: 2025-03-26 | DOI: 10.1109/TMRB.2025.3573420
LCNet: A Robust and Accurate Non-Rigid 3-D Point Set Registration Approach for Image-Guided Liver Surgery
Mingyang Liu;Geng Li;Hao Yu;Rui Song;Yibin Li;Max Q.-H. Meng;Zhe Min
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 3, pp. 1073-1086
In this paper, we propose a novel unsupervised learning-based non-rigid 3D point set registration method, the Learning Coherent Point Drift Network (LCNet), for image-guided liver surgery. We reformulate the classical probabilistic registration approach, Coherent Point Drift (CPD), into a learning-based paradigm. We first utilise a feature extraction module (FEM) to extract features of the two original point sets that are robust to rigid transformation. Subsequently, we establish reliable correspondences between the point sets using an optimal transport (OT) module that leverages both the original points and the learned features. Then, rather than directly regressing displacement vectors, we compute the displacements by solving the associated matrix equation in the transformation module, where point localization noise is explicitly considered. In addition, we present three variants of the proposed approach: LCNet, LCNet-ED, and LCNet-WD. Among these, LCNet outperforms the other two, demonstrating the superiority of the Chamfer loss. We have extensively evaluated LCNet on simulated and real datasets. With the rotation angle in the range $[-45^{\circ}, 45^{\circ}]$ and the translation in the range $[-30~\mathrm{mm}, 30~\mathrm{mm}]$, LCNet achieves a root-mean-square error (RMSE) of 3.46 mm on the MedShapeNet dataset, versus 7.65 mm for CPD $(p < 0.001)$ and 6.71 mm for RoITr $(p < 0.001)$. Experimental results show that LCNet significantly improves over existing state-of-the-art registration methods and point to its promising use in image-guided liver surgery.
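The reported RMSE metric itself is straightforward once correspondences are fixed; a minimal sketch (our own helper, not the authors' evaluation code):

```python
import math

def rmse(moved, target):
    """Root-mean-square error between corresponding 3D point sets:
    sqrt(mean over points of the squared Euclidean residual)."""
    assert len(moved) == len(target), "point sets must correspond pairwise"
    se = sum(
        sum((a - b) ** 2 for a, b in zip(p, q))
        for p, q in zip(moved, target)
    )
    return math.sqrt(se / len(moved))
```

For instance, two points each displaced by 3 mm along one axis give an RMSE of exactly 3 mm.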
Pub Date: 2025-03-26 | DOI: 10.1109/TMRB.2025.3573408
Robotic Palpation of Fractures Using Bioinspired Tactile Sensor and Neuromorphic Encoding Algorithm
Samuel Bello;Mark M. Iskarous;Sriramana Sankar;Nitish V. Thakor
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 3, pp. 1175-1185
Palpation is a relatively safe, rapid, and low-cost method used by clinicians to examine diseased tissue. However, depending on the scanning speed and the physician’s experience, physical features in the body can be miscategorized or overlooked entirely. By designing tactile sensors and signal processing algorithms that mimic the body’s ability to account for variations in speed when scanning an object, we can address this problem in an artificial system. We used a piezoresistive tactile sensor attached to a robotic arm to palpate fractures at different speeds. The analog signals generated by the tactile sensor are converted into spike trains, which are then scaled in time to encode the sensor data invariant to the speed of palpation. With a few principal components, the scaled dataset achieves higher classification accuracy than the original dataset. The scaled data were also more robust to both spike-timing noise and untrained speed conditions. Finally, we demonstrated that this system could be applied in a medical setting by discriminating among three fracture conditions (none, transverse, and comminuted) in the ulna of a chicken wing with 99.8% accuracy at three different speeds.
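The speed-invariant encoding idea — spike generation from the analog signal followed by time scaling — can be sketched as below. This is our simplified reading of the abstract (threshold-crossing encoder, linear rescaling), not the authors' algorithm:

```python
def to_spike_times(signal, dt, threshold):
    """Threshold-crossing spike encoder: emit a spike time at each sample
    where the signal crosses the threshold from below."""
    spikes = []
    for i in range(1, len(signal)):
        if signal[i - 1] < threshold <= signal[i]:
            spikes.append(i * dt)
    return spikes

def scale_spike_times(spike_times, speed, ref_speed=1.0):
    """Map spikes recorded at `speed` onto a common reference timeline, so
    the same feature palpated at different speeds yields overlapping trains."""
    return [t * speed / ref_speed for t in spike_times]
```

Scanning twice as fast compresses the spike train by half; multiplying the timestamps by the speed ratio undoes that compression before classification.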
Pub Date: 2025-03-26 | DOI: 10.1109/TMRB.2025.3573390
A Low-Cost Articulated Arm Navigation System for External Ventricular Drain Placement
Alexander D. Smith;Anant Naik;Suguna Pappu;Paul M. Arnold;Kris Hauser
Objective: This paper proposes a low-cost real-time navigation system to assist a surgeon in placing external ventricular drains. Methods: The base of an articulated-arm coordinate measuring machine is bolted to the patient’s skull, and a graphical user interface quickly guides the operator through image registration and 3D navigation to place an external ventricular drain at a desired target specified relative to preoperative imaging. The method can be employed in workflows with and without fiducials embedded in the preoperative imaging. Results: The proposed system is evaluated using precise registration instruments, human phantom models, and ex vivo ovine models, demonstrating less than 2 mm of error with fiducials and less than 4 mm of error without fiducials. Conclusion: The registration procedure takes less than one minute and can be performed intuitively by a single operator without an assistant. Significance: The proposed system enables real-time image-guided navigation for bedside external ventricular drain placement, with the potential to expand access to this procedure.
Significance: Our proposed system enables real-time image-guided navigation to be used in bedside external ventricular drain placement, with potential to expand access to this procedure.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":"7 3","pages":"1087-1098"},"PeriodicalIF":3.8,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11015586","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144887828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
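The fiducial-based registration step described in the abstract above is, in its general form, a least-squares rigid alignment of corresponding point sets. A minimal sketch using the standard Kabsch/Horn SVD solution follows; it is not the authors' code, and the function names and RMS error metric are illustrative.

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (Kabsch/Horn) mapping points P onto Q.
    P, Q: (N, 3) arrays of corresponding fiducial positions."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

def fiducial_error(P, Q, R, t):
    """RMS residual after registration (fiducial registration error)."""
    resid = Q - (P @ R.T + t)
    return np.sqrt((resid ** 2).sum(axis=1).mean())
```

With noise-free fiducials the residual is numerically zero; in practice the measured fiducial positions carry digitization noise, which is what the sub-2 mm figure reported above reflects.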
Pub Date : 2025-03-26 DOI: 10.1109/TMRB.2025.3573416
Tzu-Cheng Hsu;Ming-Chih Ho;Cheng-Wei Chen
This study introduces the Laparoscopic Assistive Robotic Manipulator (LapARM), designed to address limitations in current robotic laparoscope holders. LapARM features a compact 4-degree-of-freedom (4-DoF) design achieved through parallel and telescopic mechanisms, synthesized to avoid spatial interference with the surgeon during surgical operations. Additionally, it improves image stability when zooming with an oblique-viewing scope: a custom laparoscope with an embedded distance sensor allows the LapARM to dynamically adjust the scope’s orientation during zooming, keeping the imaged object centered throughout the zooming process. Furthermore, an eye tracker provides surgeons with a contactless interface for intuitive solo control of the laparoscope via head movements. Experimental results validate LapARM’s effective scope maneuvering for laparoscopic procedures. User studies show that its head movement-based control significantly reduces completion time and user workload, contributing to the success of minimally invasive surgeries.
{"title":"Laparoscopic Assistive Robotic Manipulator (LapARM): Mechanical Design and Contactless Interface for Oblique-Viewing Scope Maneuvers","authors":"Tzu-Cheng Hsu;Ming-Chih Ho;Cheng-Wei Chen","doi":"10.1109/TMRB.2025.3573416","DOIUrl":"https://doi.org/10.1109/TMRB.2025.3573416","url":null,"abstract":"This study introduces the Laparoscopic Assistive Robotic Manipulator (LapARM), designed to address limitations in current robotic laparoscope holders. LapARM features a compact 4-degree-of-freedom (4-DoF) design achieved through parallel and telescopic mechanisms, synthesized to avoid spatial interference with the surgeon during surgical operations. Additionally, it improves image stability when zooming with an oblique-viewing scope: a custom laparoscope with an embedded distance sensor allows the LapARM to dynamically adjust the scope’s orientation during zooming, keeping the imaged object centered throughout the zooming process. Furthermore, an eye tracker provides surgeons with a contactless interface for intuitive solo control of the laparoscope via head movements. Experimental results validate LapARM’s effective scope maneuvering for laparoscopic procedures. 
User studies show that its head movement-based control significantly reduces completion time and user workload, contributing to the success of minimally invasive surgeries.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":"7 3","pages":"1062-1072"},"PeriodicalIF":3.8,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144887831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
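The zoom-recentering behavior described in the abstract above can be approximated with a simple planar model: the view axis of an oblique scope is tilted by a fixed angle from the shaft, and the embedded distance sensor reports the range to the target along that axis. The sketch below is purely illustrative and is not the LapARM control law; the geometry, the function name, and the parameters are all assumptions.

```python
import math

def recenter_pitch(d, advance, oblique_deg):
    """Pitch correction (rad) that keeps a target centered after the scope
    advances by `advance` along its shaft (zoom-in).

    Simplified planar model: the view axis is tilted `oblique_deg` from
    the shaft axis, and `d` is the sensed range to the target along the
    view axis before the advance.
    """
    phi = math.radians(oblique_deg)
    # Target position relative to the scope tip before advancing
    # (x lateral, z along the shaft).
    x, z = d * math.sin(phi), d * math.cos(phi)
    # After advancing, the target's angle from the shaft axis grows;
    # tilting the shaft by the difference re-centers it.
    theta_new = math.atan2(x, z - advance)
    return theta_new - phi
```

In this model a zero advance needs no correction, and larger advances need progressively larger pitch corrections, which is consistent with the need for continuous sensor-driven adjustment during zooming.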