Pub Date: 2024-09-12 · DOI: 10.1007/s11548-024-03267-z
Ivan Vogt, Marcel Eisenmann, Anton Schlünz, Robert Kowal, Daniel Düx, Maximilian Thormann, Julian Glandorf, Seben Sena Yerdelen, Marilena Georgiades, Robert Odenbach, Bennet Hensen, Marcel Gutberlet, Frank Wacker, Frank Fischbach, Georg Rose
Purpose
Surgical robots have demonstrated their value in assisting physicians during minimally invasive surgery. In particular, integrating haptic and tactile feedback technologies can enhance the surgeon’s performance and overall patient outcomes. However, current state-of-the-art systems lack such interaction feedback, especially in robotic-assisted interventional magnetic resonance imaging (iMRI), which is gaining importance in clinical practice, specifically for percutaneous needle punctures.
Methods
The cable-driven ‘Micropositioning Robotics for Image-Guided Surgery’ (µRIGS) system utilized the back-electromotive force (back-EMF) effect of the stepper motor under load to measure cable tensile forces without external sensors, employing the TMC5160 motor driver. The aim was to generate sensorless haptic feedback (SHF) for remote needle advancement, incorporating collision detection and homing capabilities for internal automation processes. Three phantoms mimicking soft tissue were used to evaluate the difference in force feedback between manual needle puncture and the SHF, both technically and in terms of user experience.
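The abstract does not give the mapping from the driver's motor-load reading to cable tension. As a minimal illustrative sketch (not the authors' code), one could assume a linear calibration from a StallGuard-style load register, where on Trinamic drivers a lower raw reading indicates a higher mechanical load; the gain and offset below are hypothetical placeholders.

```python
# Hypothetical sketch: estimating cable tension from a StallGuard-style
# load reading on a TMC driver, assuming a linear calibration.

def force_from_load(sg_result: int, k: float, f_offset: float) -> float:
    """Map a raw 10-bit motor-load reading (0..1023, lower = higher load)
    to an estimated cable tensile force in newtons."""
    load = 1023 - sg_result          # invert so a larger value means a larger load
    return k * load + f_offset       # assumed linear calibration (hypothetical)

# Example: with an assumed gain of 0.02 N/count and zero offset,
# a reading of 523 corresponds to (1023 - 523) * 0.02 = 10.0 N.
print(force_from_load(523, k=0.02, f_offset=0.0))  # prints 10.0
```

In a real system the calibration would be determined per axis against a reference force sensor, since load readings depend on motor current and speed, as the Results section notes.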
Results
The SHF achieved a sampling rate of 800 Hz and a mean force resolution of 0.26 ± 0.22 N, primarily dependent on motor current and rotation speed, with a mean maximum force of 15 N. In most cases, the SHF data aligned with the intended phantom-related force progression. The evaluation of the user study demonstrated no significant differences between the SHF technology and manual puncturing.
Conclusion
The presented SHF of the µRIGS system introduced a novel MR-compatible technique to bridge the gap between medical robotics and interaction during real-time needle-based interventions.
Title: MRI-compatible and sensorless haptic feedback for cable-driven medical robotics to perform teleoperated needle-based interventions
Journal: International Journal of Computer Assisted Radiology and Surgery
Purpose
Accurate segmentation of tubular structures is crucial for clinical diagnosis and treatment but is challenging due to their complex branching structures and volume imbalance. The purpose of this study is to propose a 3D deep learning network that incorporates skeleton information to enhance segmentation accuracy in these tubular structures.
Methods
Our approach employs a 3D convolutional network to extract 3D tubular structures from medical images such as CT volumetric images. We introduce a skeleton-guided module that operates on extracted features to capture and preserve the skeleton information in the segmentation results. Additionally, to train our deep model to leverage skeleton information effectively, we propose a sigmoid-adaptive Tversky loss function specifically designed for skeleton segmentation.
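The abstract does not specify the "sigmoid-adaptive" mechanism, so as a sketch, here is only the standard base form: a Tversky loss with sigmoid activation, whose asymmetric weighting of false negatives versus false positives is what makes it attractive for thin, volume-imbalanced tubular structures.

```python
import numpy as np

# Sketch of a plain Tversky loss with sigmoid activation (binary case).
# The paper's sigmoid-adaptive variant adds an adaptation mechanism not
# described in the abstract; this shows only the standard base loss.

def tversky_loss(logits, target, alpha=0.7, beta=0.3, eps=1e-6):
    """Tversky loss: alpha weights false negatives, beta weights false
    positives. alpha > beta biases training toward recall, which helps
    recover thin branches that voxel-balanced losses tend to miss."""
    p = 1.0 / (1.0 + np.exp(-logits))           # sigmoid probabilities
    tp = np.sum(p * target)                     # soft true positives
    fn = np.sum((1.0 - p) * target)             # soft false negatives
    fp = np.sum(p * (1.0 - target))             # soft false positives
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return 1.0 - tversky
```

With alpha = beta = 0.5 this reduces to the Dice loss; the asymmetric default above penalizes missed foreground voxels more heavily.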
Results
We conducted experiments on two distinct 3D medical image datasets. The first dataset consisted of 90 cases of chest CT volumetric images, while the second dataset comprised 35 cases of abdominal CT volumetric images. Comparative analysis with previous segmentation approaches demonstrated the superior performance of our method. For the airway segmentation task, our method achieved an average tree length rate of 93.0%, a branch detection rate of 91.5%, and a precision rate of 90.0%. In the case of abdominal artery segmentation, our method attained an average precision rate of 97.7%, a recall rate of 91.7%, and an F-measure of 94.6%.
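The reported abdominal-artery F-measure can be checked directly from the stated precision and recall, since the F-measure is their harmonic mean:

```python
# Quick consistency check (not from the paper's code):
# F = 2PR / (P + R), the harmonic mean of precision and recall.

def f_measure(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

# The abdominal-artery numbers in the abstract are self-consistent:
print(round(f_measure(0.977, 0.917), 3))  # prints 0.946, i.e. 94.6%
```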
Conclusion
We present a skeleton-guided 3D convolutional network to segment tubular structures from 3D medical images. It effectively segments even small tubular structures, outperforming previous methods.
Title: Skeleton-guided 3D convolutional neural network for tubular structure segmentation
Authors: Ruiyun Zhu, Masahiro Oda, Yuichiro Hayashi, Takayuki Kitasaka, Kazunari Misawa, Michitaka Fujiwara, Kensaku Mori
Pub Date: 2024-09-12 · DOI: 10.1007/s11548-024-03215-x
Journal: International Journal of Computer Assisted Radiology and Surgery