Proximity Servoed Minimally Invasive Continuum Robot for Endoscopic Interventions
Pub Date: 2024-09-23 | DOI: 10.1109/TMRB.2024.3464127
Christian Marzi; Maximilian Themistocli; Björn Hein; Franziska Mathis-Ullrich
Minimally invasive continuum robots face limitations in accessing environmental and spatial information on the situs. However, such information is often necessary for control and automation features in surgical use. Centering an endoscopic system within a hollow organ is one such feature, reducing the risk of injury and assisting navigation. To enable such an application, this work investigates a proximity-servoed continuum robot. A sensorized tip combines capacitive electrodes, a camera, and illumination, and uses capacitive proximity sensing to determine the enclosing environment's center point. A controller is presented that uses this information to center the robot's tip. The system is evaluated in a dynamic phantom, where an average accuracy of 10.0 mm was demonstrated and contact with the phantom's wall was avoided during 98% of the experiment time. In a second phantom experiment, it is demonstrated how this controller can be applied to follow the center line of a bent anatomical structure. Future work should focus on improving the accuracy and versatility of the system, aiming at application in more challenging and irregular environments, such as ex vivo or in vivo organs.
{"title":"Proximity Servoed Minimally Invasive Continuum Robot for Endoscopic Interventions","authors":"Christian Marzi;Maximilian Themistocli;Björn Hein;Franziska Mathis-Ullrich","doi":"10.1109/TMRB.2024.3464127","DOIUrl":"https://doi.org/10.1109/TMRB.2024.3464127","url":null,"abstract":"Minimally invasive continuum robots face limitations in accessing environmental and spatial information on the situs. However, such information would often be necessary for control and automation features in surgical use. Centering an endoscopic system within a hollow organ can be such a feature, providing the benefit of reduced risk of injury and assistance for navigation. To leverage such an application, this work investigates a proximity servoed continuum robot. A sensorized tip combines capacitive electrodes, a camera, and illumination and uses capacitive proximity sensing to determine the enclosing environment’s center point. A controller is presented that uses this information to center the robot’s tip. The system is evaluated in a dynamic phantom, where an average accuracy of 10.0 mm could be demonstrated and contact to the phantom’s wall was avoided during 98% of the experiment time. In a second phantom experiment, it is demonstrated how this controller can be applied to follow the center line of a bent anatomical structure. Future work should focus on improving accuracy and versatility of the system, aiming for application in more challenging and irregular environments, such as ex vivo or in vivo organs.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":"6 4","pages":"1738-1747"},"PeriodicalIF":3.4,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142600298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Dual-Branch Fusion Network for Surgical Instrument Segmentation
Pub Date: 2024-09-23 | DOI: 10.1109/TMRB.2024.3464748
Lei Yang; Chenxu Zhai; Hongyong Wang; Yanhong Liu; Guibin Bian
Surgical robots have become integral to contemporary surgical procedures, and precise segmentation of surgical instruments is a crucial prerequisite for their stable functioning. However, numerous factors still degrade segmentation outcomes, including intricate surgical environments, varying viewpoints, low contrast between surgical instruments and surroundings, divergent sizes and shapes of instruments, and imbalanced categories. In this paper, a novel dual-branch fusion network, designated DBF-Net, is presented, which integrates convolutional neural network (CNN) and Transformer architectures for automatic segmentation of surgical instruments. To address the limited feature extraction capacity of either CNNs or Transformers alone, a dual-path encoding unit is introduced to represent both local detail features and global context. To enhance the fusion of features extracted from the dual paths, a CNN-Transformer fusion (CTF) module is proposed that efficiently merges features from the CNN and Transformer structures, contributing to the effective representation of both local detail and global context. Local contextual features at each layer are further refined through a multi-scale feature aggregation (MFAG) module and a local feature enhancement (LFE) module. In addition, an attention-guided enhancement (AGE) module is incorporated to refine local feature maps. Finally, a multi-scale global feature representation (MGFR) module extracts and aggregates multi-scale features, and a progressive fusion module (PFM) aggregates full-scale features from the decoder. Experimental results underscore the superior segmentation performance of the proposed network compared with other state-of-the-art (SOTA) segmentation models for surgical instruments, validating the efficacy of the proposed architecture for surgical instrument segmentation.
{"title":"A Dual-Branch Fusion Network for Surgical Instrument Segmentation","authors":"Lei Yang;Chenxu Zhai;Hongyong Wang;Yanhong Liu;Guibin Bian","doi":"10.1109/TMRB.2024.3464748","DOIUrl":"https://doi.org/10.1109/TMRB.2024.3464748","url":null,"abstract":"Surgical robots have become integral to contemporary surgical procedures, with the precise segmentation of surgical instruments constituting a crucial prerequisite for ensuring their stable functionality. However, numerous factors continue to influence segmentation outcomes, including intricate surgical environments, varying viewpoints, diminished contrast between surgical instruments and surroundings, divergent sizes and shapes of instruments, and imbalanced categories. In this paper, a novel dual-branch fusion network, designated DBF-Net, is presented, which integrates both convolutional neural network (CNN) and Transformer architectures to facilitate automatic segmentation of surgical instruments. For addressing the deficiencies in feature extraction capacity in CNNs or Transformer architectures, a dual-path encoding unit is introduced to proficiently represent local detail features and global context. Meanwhile, to enhance the fusion of features extracted from the dual paths, a CNN-Transformer fusion (CTF) module is proposed, to efficiently merge features from the CNN and Transformer structures, contributing to the effective representation of both local detail features and global contextual features. Further refinement is pursued through an multi-scale feature aggregation (MFAG) module and a local feature enhancement (LFE) module, to refine local contextual features at each layer. In addition, an attention-guided enhancement (AGE) module is incorporated for feature refinement of local feature maps. Finally, an multi-scale global feature representation (MGFR) module is introduced, facilitating the extraction and aggregation of multi-scale features, and a progressive fusion module (PFM) culminates in the aggregation of full-scale features from the decoder. Experimental results underscore the superior segmentation performance of proposed network compared to other state-of-the-art (SOTA) segmentation models for surgical instruments, which have well validated the efficacy of proposed network architecture in advancing the field of surgical instrument segmentation.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":"6 4","pages":"1542-1554"},"PeriodicalIF":3.4,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142600396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this work, we propose a framework enabling upper-limb rehabilitation exoskeletons to mimic the personalised haptic guidance of therapists. Current exoskeletons face acceptability issues, as they limit physical interaction between clinicians and patients and offer only predefined levels of support that cannot be tuned during the movements when needed. To increase acceptance, we first developed a method to estimate the therapist's force contribution while manipulating a patient's arm through an upper-limb exoskeleton. We achieved a precision of 0.31 Nm.
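This excerpt does not state how the therapist's contribution is estimated. One common approach to external-torque estimation on a robot joint is an inverse-dynamics residual, sketched below for a single degree of freedom; the model structure, parameter values, and function names are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Generic inverse-dynamics-residual sketch for estimating an external
# (therapist) torque on one exoskeleton joint: whatever the measured torque
# cannot be explained by the rigid-body model plus the exoskeleton's own
# actuation is attributed to the therapist's hands-on contribution.
# All parameters below are made up for illustration.

def model_torque(q, dq, ddq, inertia=0.05, damping=0.02, gravity_coeff=1.2):
    """1-DoF rigid-body model: I*ddq + b*dq + g*sin(q).
    Parameters are illustrative, not identified from a real device."""
    return inertia * ddq + damping * dq + gravity_coeff * np.sin(q)

def therapist_torque(tau_measured, tau_commanded, q, dq, ddq):
    """External-torque residual for one joint, in Nm."""
    return tau_measured - tau_commanded - model_torque(q, dq, ddq)

# Example: at q = 0.3 rad the model predicts about 0.36 Nm of dynamics and
# gravity; the surplus in the measurement is read as therapist assistance.
print(therapist_torque(tau_measured=0.95, tau_commanded=0.25,
                       q=0.3, dq=0.1, ddq=0.0))  # approx. 0.34 Nm
```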