Latest publications: International Journal of Computer Assisted Radiology and Surgery
MRI-compatible and sensorless haptic feedback for cable-driven medical robotics to perform teleoperated needle-based interventions
IF 3 | Medicine, Tier 3 | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-09-12 | DOI: 10.1007/s11548-024-03267-z
Ivan Vogt, Marcel Eisenmann, Anton Schlünz, Robert Kowal, Daniel Düx, Maximilian Thormann, Julian Glandorf, Seben Sena Yerdelen, Marilena Georgiades, Robert Odenbach, Bennet Hensen, Marcel Gutberlet, Frank Wacker, Frank Fischbach, Georg Rose

Purpose

Surgical robotics have demonstrated their significance in assisting physicians during minimally invasive surgery. In particular, integrating haptic and tactile feedback technologies can enhance the surgeon’s performance and overall patient outcomes. However, the current state of the art lacks such interaction feedback, especially in robotic-assisted interventional magnetic resonance imaging (iMRI), which is gaining importance in clinical practice, specifically for percutaneous needle punctures.

Methods

The cable-driven ‘Micropositioning Robotics for Image-Guided Surgery’ (µRIGS) system utilized the back-electromotive-force effect of the stepper-motor load to measure cable tensile forces without external sensors, employing the TMC5160 motor driver. The aim was to generate sensorless haptic feedback (SHF) for remote needle advancement, incorporating collision-detection and homing capabilities for internal automation processes. Three different phantoms mimicking soft tissue were used to evaluate the difference in force feedback between manual needle puncture and the SHF, both technically and in terms of user experience.
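The back-EMF-based force sensing described above can be pictured as a calibration from a driver load reading to an estimated cable force. The sketch below is illustrative only, not the authors' implementation; the load range and the linear mapping are assumptions (StallGuard-style readings on TMC-family drivers run high when unloaded and drop toward zero near stall).

```python
# Illustrative sketch (not the paper's code): map a raw stepper-driver load
# reading to an estimated cable tensile force via a linear calibration.
# load_free, load_stall, and f_max are hypothetical calibration constants.

def load_to_force(load_value, load_free=1023, load_stall=0, f_max=15.0):
    """Map a load reading (high = unloaded, low = stalled) to an estimated
    tensile force in newtons, clamped to [0, f_max]."""
    span = load_free - load_stall
    frac = (load_free - load_value) / span  # 0 = no load, 1 = full stall
    return max(0.0, min(f_max, frac * f_max))

print(load_to_force(1023))  # unloaded -> 0.0 N
print(load_to_force(0))     # full stall -> 15.0 N (the reported mean maximum)
```

In practice such a mapping would be calibrated per motor current and rotation speed, which the paper identifies as the main factors behind the force resolution.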

Results

The SHF achieved a sampling rate of 800 Hz and a mean force resolution of 0.26 ± 0.22 N, primarily dependent on motor current and rotation speed, with a mean maximum force of 15 N. In most cases, the SHF data aligned with the intended phantom-related force progression. The evaluation of the user study demonstrated no significant differences between the SHF technology and manual puncturing.

Conclusion

The presented SHF of the µRIGS system introduced a novel MR-compatible technique to bridge the gap between medical robotics and interaction during real-time needle-based interventions.

Citations: 0
Skeleton-guided 3D convolutional neural network for tubular structure segmentation
IF 3 | Medicine, Tier 3 | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-09-12 | DOI: 10.1007/s11548-024-03215-x
Ruiyun Zhu, Masahiro Oda, Yuichiro Hayashi, Takayuki Kitasaka, Kazunari Misawa, Michitaka Fujiwara, Kensaku Mori

Purpose

Accurate segmentation of tubular structures is crucial for clinical diagnosis and treatment but is challenging due to their complex branching structures and volume imbalance. The purpose of this study is to propose a 3D deep learning network that incorporates skeleton information to enhance segmentation accuracy in these tubular structures.

Methods

Our approach employs a 3D convolutional network to extract 3D tubular structures from medical images such as CT volumetric images. We introduce a skeleton-guided module that operates on the extracted features to capture and preserve skeleton information in the segmentation results. Additionally, to effectively train our deep model to leverage skeleton information, we propose a sigmoid-adaptive Tversky loss function specifically designed for skeleton segmentation.
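The paper's exact sigmoid-adaptive weighting is not reproduced here; as a point of reference, a plain Tversky loss applied to sigmoid outputs looks like the following NumPy sketch (`alpha`/`beta` are the standard false-positive/false-negative weights; the adaptive term is omitted).

```python
import numpy as np

# Generic Tversky loss on sigmoid probabilities (illustrative sketch only,
# not the paper's sigmoid-adaptive variant).

def tversky_loss(logits, target, alpha=0.3, beta=0.7, eps=1e-6):
    p = 1.0 / (1.0 + np.exp(-logits))        # sigmoid probabilities
    tp = np.sum(p * target)                  # soft true positives
    fp = np.sum(p * (1.0 - target))          # soft false positives
    fn = np.sum((1.0 - p) * target)          # soft false negatives
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky
```

With `alpha = beta = 0.5` this reduces to a soft Dice loss; choosing `beta > alpha` penalizes missed (false-negative) skeleton voxels more heavily, which matters for thin tubular structures.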

Results

We conducted experiments on two distinct 3D medical image datasets. The first dataset consisted of 90 cases of chest CT volumetric images, while the second dataset comprised 35 cases of abdominal CT volumetric images. Comparative analysis with previous segmentation approaches demonstrated the superior performance of our method. For the airway segmentation task, our method achieved an average tree length rate of 93.0%, a branch detection rate of 91.5%, and a precision rate of 90.0%. In the case of abdominal artery segmentation, our method attained an average precision rate of 97.7%, a recall rate of 91.7%, and an F-measure of 94.6%.
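The reported F-measure follows from the precision and recall figures as their harmonic mean; a quick check:

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall (F1 score)."""
    return 2 * precision * recall / (precision + recall)

# Reported abdominal-artery figures: precision 97.7%, recall 91.7%
print(round(f_measure(0.977, 0.917), 3))  # -> 0.946, matching the 94.6% F-measure
```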

Conclusion

We present a skeleton-guided 3D convolutional network to segment tubular structures from 3D medical images. Our skeleton-guided 3D convolutional network could effectively segment small tubular structures, outperforming previous methods.

Citations: 0
Enhancing surgical navigation: a robust hand–eye calibration method for the Microsoft HoloLens 2
IF 3 | Medicine, Tier 3 | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-09-11 | DOI: 10.1007/s11548-024-03250-8
Daniel Allen, Terry Peters, Elvis C. S. Chen

Purpose

Optical-see-through head-mounted displays have the ability to seamlessly integrate virtual content with the real world through a transparent lens and an optical combiner. Although their potential for use in surgical settings has been explored, their clinical translation is sparse in the current literature, largely due to their limited tracking capabilities and the need for manual alignment of virtual representations of objects with their real-world counterparts.

Methods

We propose a simple and robust hand–eye calibration process for the depth camera of the Microsoft HoloLens 2, utilizing a tracked surgical stylus fitted with infrared reflective spheres as the calibration tool.

Results

Using a Monte Carlo simulation and a paired-fiducial registration algorithm, we show that a calibration accuracy of 1.65 mm can be achieved with as few as six fiducial points. We also present heuristics for optimizing the accuracy of the calibration. The ability to use our calibration method in a clinical setting is validated through a user study, in which users achieved a mean calibration accuracy of 1.67 mm in an average time of 42 s.
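Paired-fiducial registration of the kind used here is classically solved in closed form with an SVD (the Kabsch/Arun method). The following is a generic sketch under that assumption, not the authors' code:

```python
import numpy as np

# Minimal paired-point (fiducial) rigid registration via SVD, the classic
# closed-form building block behind hand-eye calibrations. Illustrative only.

def rigid_register(src, dst):
    """Return rotation R and translation t minimizing ||src @ R.T + t - dst||."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Fiducial registration error (FRE) for a known transform of 6 points:
rng = np.random.default_rng(0)
src = rng.normal(size=(6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_register(src, dst)
fre = np.linalg.norm(src @ R.T + t - dst, axis=1).mean()
print(fre)  # near zero for noise-free points
```

With noisy fiducials, FRE becomes nonzero and heuristics such as fiducial placement and count (as studied in the paper) govern the resulting target accuracy.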

Conclusion

This work enables real-time hand–eye calibration for the Microsoft HoloLens 2, without any need for a manual alignment process. Using this framework, existing surgical navigation systems employing optical or electromagnetic tracking can easily be incorporated into an augmented reality environment with a high degree of accuracy.

Citations: 0
Improving preparation in the emergency trauma room: the development and impact of real-time data transfer and dashboard visualization system
IF 3 | Medicine, Tier 3 | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-09-11 | DOI: 10.1007/s11548-024-03256-2
Anna Schatz, Georg Osterhoff, Christoph Georgi, Fabian Joeres, Thomas Neumuth, Max Rockstroh

Purpose

This study examines, with clinical end users, the features of a visualization system for transmitting real-time patient data from the ambulance to the emergency trauma room (ETR), to determine whether the real-time data provides the basis for more informed and timely interventions in the ETR before and after patient arrival.

Methods

We conducted a qualitative in-depth interview study with 32 physicians in six German and Swiss hospitals. A visualization system was developed as a prototype to display the transfer of patient data, and it served as the basis for evaluation by the participating physicians.

Results

The prototype demonstrated the potential benefits of improving workflow within the ETR by providing critical patient information in real-time. Physicians highlighted the importance of features such as the ABCDE scheme and vital signs that directly impact patient care. Configurable and mobile versions of the prototype were suggested to meet the specific needs of each clinic or specialist, allowing for the transfer of only essential information.

Conclusion

The results highlight, on the one hand, the potential need for adaptable interfaces in medical communication technologies that balance efficiency with minimizing additional workload for emergency medical services; on the other hand, they show that pre-notification systems for communication between ambulance and hospital can be supportive. Further research is recommended to assess practical application and support in clinical practice, including a re-evaluation of the enhanced prototype by professionals.

Citations: 0
6G in medical robotics: development of network allocation strategies for a telerobotic examination system.
IF 2.3 | Medicine, Tier 3 | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-09-09 | DOI: 10.1007/s11548-024-03260-6
Sven Kolb, Andrew Madden, Nicolai Kröger, Fidan Mehmeti, Franziska Jurosch, Lukas Bernhard, Wolfgang Kellerer, Dirk Wilhelm

Purpose: Healthcare systems around the world increasingly face severe challenges due to staff shortages, changing demographics, and reliance on an often strongly human-dependent environment. One approach aiming to address these issues is the development of new telemedicine applications. The currently researched 6G network standard promises many new features that could help leverage the full potential of emerging telemedical solutions and overcome the limitations of current network standards.

Methods: We developed a telerobotic examination system with a distributed robot control infrastructure to investigate the benefits and challenges of distributed computing scenarios, such as fog computing, in medical applications. We investigated different software configurations, characterized their network traffic and computational loads, and subsequently established network allocation strategies for different types of modular application functions (MAFs).

Results: The results indicate a high variability in the usage profiles of these MAFs, both in terms of computational load and networking behavior, which in turn allows the development of allocation strategies for different types of MAFs according to their requirements. Furthermore, the results provide a strong basis for further exploration of distributed computing scenarios in medical robotics.

Conclusion: This work lays the foundation for the development of medical robotic applications using 6G network architectures and distributed computing scenarios, such as fog computing. In the future, we plan to investigate the capability to dynamically shift MAFs within the network based on current situational demand, which could help to further optimize the performance of network-based medical applications and play a role in addressing the increasingly critical challenges in healthcare.

Citations: 0
Global registration of kidneys in 3D ultrasound and CT images.
IF 2.3 | Medicine, Tier 3 | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-09-06 | DOI: 10.1007/s11548-024-03255-3
William Ndzimbong, Nicolas Thome, Cyril Fourniol, Yvonne Keeza, Benoît Sauer, Jacques Marescaux, Daniel George, Alexandre Hostettler, Toby Collins

Purpose: Automatic registration between abdominal ultrasound (US) and computed tomography (CT) images is needed to enhance interventional guidance of renal procedures, but it remains an open research challenge. We propose a novel method that doesn't require an initial registration estimate (a global method) and also handles registration ambiguity caused by the organ's natural symmetry. Combined with a registration refinement algorithm, this method achieves robust and accurate kidney registration while avoiding manual initialization.

Methods: We propose solving global registration in a three-step approach: (1) automatic anatomical landmark localization, where two deep neural networks (DNNs) localize a set of landmarks in each modality; (2) registration hypothesis generation, where potential registrations are computed from the landmarks with a deterministic variant of RANSAC. Due to the kidney's strong bilateral symmetry, there are usually two compatible solutions. Finally, in step (3), the correct solution is determined automatically, using a DNN classifier that resolves the geometric ambiguity. The registration may then be iteratively improved with a registration refinement method. Results are presented with state-of-the-art surface-based refinement: Bayesian coherent point drift (BCPD).

Results: This automatic global registration approach gives better results than various competitive state-of-the-art methods, which, additionally, require organ segmentation. The results obtained on 59 pairs of 3D US/CT kidney images show that the proposed method, combined with BCPD refinement, achieves a target registration error (TRE) of an internal kidney landmark (the renal pelvis) of 5.78 mm and an average nearest neighbor surface distance (nndist) of 2.42 mm.
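The nndist metric reported above can be sketched as a symmetrized mean nearest-neighbor distance between two surface point clouds (brute-force NumPy version, not the authors' code):

```python
import numpy as np

# Illustrative sketch of the average nearest-neighbor surface distance
# (nndist) between two surface point clouds, symmetrized over both directions.

def nn_dist(a, b):
    """Mean distance from each point in a to its nearest neighbor in b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    return d.min(axis=1).mean()

def avg_nn_surface_distance(a, b):
    return 0.5 * (nn_dist(a, b) + nn_dist(b, a))

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, 0.0]])
print(avg_nn_surface_distance(a, b))  # -> 0.25
```

For dense meshes a KD-tree would replace the O(n·m) pairwise distance matrix, but the metric itself is the same.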

Conclusion: This work presents the first approach for automatic kidney registration in US and CT images, which doesn't require an initial manual registration estimate to be known a priori. The results show a fully automatic registration approach with performances comparable to manual methods is feasible.

Improving assessment of lesions in longitudinal CT scans: a bi-institutional reader study on an AI-assisted registration and volumetric segmentation workflow.
IF 2.3 Medicine (CAS Tier 3) Q3 ENGINEERING, BIOMEDICAL Pub Date : 2024-09-01 Epub Date : 2024-05-30 DOI: 10.1007/s11548-024-03181-4
Alessa Hering, Max Westphal, Annika Gerken, Haidara Almansour, Michael Maurer, Benjamin Geisler, Temke Kohlbrandt, Thomas Eigentler, Teresa Amaral, Nikolas Lessmann, Sergios Gatidis, Horst Hahn, Konstantin Nikolaou, Ahmed Othman, Jan Moltz, Felix Peisen

Purpose: AI-assisted techniques for lesion registration and segmentation have the potential to make CT-based tumor follow-up assessment faster and less reader-dependent. However, empirical evidence on the advantages of AI-assisted volumetric segmentation for lymph node and soft tissue metastases in follow-up CT scans is lacking. The aim of this study was to assess the efficiency, quality, and inter-reader variability of an AI-assisted workflow for volumetric segmentation of lymph node and soft tissue metastases in follow-up CT scans. Three hypotheses were tested: (H1) Assessment time for follow-up lesion segmentation is reduced using an AI-assisted workflow. (H2) The quality of the AI-assisted segmentation is non-inferior to the quality of fully manual segmentation. (H3) The inter-reader variability of the resulting segmentations is reduced with AI assistance.

Materials and methods: The study retrospectively analyzed 126 lymph nodes and 135 soft tissue metastases from 55 patients with stage IV melanoma. Three radiologists from two institutions performed both AI-assisted and manual segmentation, and the results were statistically analyzed and compared to a manual segmentation reference standard.

Results: AI-assisted segmentation reduced user interaction time significantly by 33% (222 s vs. 336 s), achieved similar Dice scores (0.80-0.84 vs. 0.81-0.82) and decreased inter-reader variability (median Dice 0.85-1.0 vs. 0.80-0.82; ICC 0.84 vs. 0.80), compared to manual segmentation.

Conclusion: The findings of this study support the use of AI-assisted registration and volumetric segmentation for lymph node and soft tissue metastases in follow-up CT scans. The AI-assisted workflow achieved significant time savings, similar segmentation quality, and reduced inter-reader variability compared to manual segmentation.
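The Dice scores compared above are the standard overlap measure on binary segmentation masks. A minimal sketch (illustrative, not the study's evaluation code; the function name is hypothetical):

```python
import numpy as np

def dice_score(mask_a, mask_b):
    """Dice overlap of two boolean segmentation masks of equal shape."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    total = a.sum() + b.sum()
    if total == 0:  # both masks empty: define as perfect agreement
        return 1.0
    return float(2.0 * np.logical_and(a, b).sum() / total)
```

A score of 1.0 means perfect voxel-wise agreement, 0.0 means no overlap; the reported reader scores of 0.80 to 0.84 sit in the range typically considered good for soft tissue lesions.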

Assessment of resectability of pancreatic cancer using novel immersive high-performance virtual reality rendering of abdominal computed tomography and magnetic resonance imaging.
IF 2.3 Medicine (CAS Tier 3) Q3 ENGINEERING, BIOMEDICAL Pub Date : 2024-09-01 Epub Date : 2024-01-22 DOI: 10.1007/s11548-023-03048-0
Julia Madlaina Kunz, Peter Maloca, Andreas Allemann, David Fasler, Savas Soysal, Silvio Däster, Marko Kraljević, Gulbahar Syeda, Benjamin Weixler, Christian Nebiker, Vincent Ochs, Raoul Droeser, Harriet Louise Walker, Martin Bolli, Beat Müller, Philippe Cattin, Sebastian Manuel Staubli

Purpose: Virtual reality (VR) allows for an immersive and interactive analysis of imaging data such as computed tomography (CT) and magnetic resonance imaging (MRI). The aim of this study is to assess the comprehensibility of VR anatomy and its value in assessing resectability of pancreatic ductal adenocarcinoma (PDAC).

Methods: This study assesses exposure to VR anatomy and evaluates the potential role of VR in assessing resectability of PDAC. Firstly, volumetric abdominal CT and MRI data were displayed in an immersive VR environment. Volunteering physicians were asked to identify anatomical landmarks in VR. In the second stage, experienced clinicians were asked to identify vascular involvement in a total of 12 CT and MRI scans displaying PDAC (2 resectable, 2 borderline resectable, and 2 locally advanced tumours per modality). Results were compared to 2D standard PACS viewing.

Results: In VR visualisation of CT and MRI, all abdominal anatomical landmarks were recognised by every participant, except for the pancreas (30/34) in VR CT and the splenic artery (31/34) and common hepatic artery (18/34) in VR MRI. In VR CT, resectable, borderline resectable, and locally advanced PDAC were correctly identified in 22/24, 20/24 and 19/24 scans, respectively; in VR MRI, the corresponding figures were 19/24, 19/24 and 21/24 scans. Interobserver agreement as measured by Fleiss κ was 0.7 for CT and 0.4 for MRI (p < 0.001). Scans were assessed significantly more accurately in VR CT than in standard 2D PACS CT, with a median of 5.5 (IQR 4.75-6) versus 3 (IQR 2-3) correctly assessed out of 6 scans (p < 0.001).

Conclusion: VR enhanced visualisation of abdominal CT and MRI scan data provides intuitive handling and understanding of anatomy and might allow for more accurate staging of PDAC and could thus become a valuable adjunct in PDAC resectability assessment in the future.
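Fleiss' κ, used above to quantify interobserver agreement, is computed from a subjects x categories matrix of rating counts. A minimal sketch under the usual assumption that every subject (here, every scan) is rated by the same number of raters; this is a generic implementation, not the study's statistics code:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a (subjects x categories) matrix of rating counts."""
    counts = np.asarray(counts, float)
    n_sub = counts.shape[0]
    n_rat = counts[0].sum()  # raters per subject, assumed constant across rows
    # Overall proportion of ratings falling into each category.
    p_cat = counts.sum(axis=0) / (n_sub * n_rat)
    # Per-subject agreement: fraction of concordant rater pairs.
    p_sub = ((counts ** 2).sum(axis=1) - n_rat) / (n_rat * (n_rat - 1))
    p_bar = p_sub.mean()          # observed agreement
    p_e = (p_cat ** 2).sum()      # chance agreement
    return float((p_bar - p_e) / (1.0 - p_e))
```

κ = 1 indicates perfect agreement beyond chance; values around 0.7 (as reported for CT) are conventionally read as substantial agreement, around 0.4 (MRI) as moderate.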

A method for accurate and reproducible specimen alignment for insertion tests of cochlear implant electrode arrays.
IF 2.3 Medicine (CAS Tier 3) Q3 ENGINEERING, BIOMEDICAL Pub Date : 2024-09-01 Epub Date : 2023-05-19 DOI: 10.1007/s11548-023-02930-1
Jakob Cramer, Georg Böttcher-Rebmann, Thomas Lenarz, Thomas S Rau

Purpose: The trajectory along which the cochlear implant electrode array is inserted influences the insertion forces and the probability of intracochlear trauma. Controlling the trajectory is especially relevant for reproducible conditions in electrode insertion tests. With ex vivo cochlear specimens, manual alignment of the invisibly embedded cochlea is imprecise and hardly reproducible. The aim of this study was to develop a method for creating a 3D-printable pose setting adapter to align a specimen along a desired trajectory toward an insertion axis.

Methods: Planning points of the desired trajectory into the cochlea were set using CBCT images. A new custom-made algorithm processed these points for automated calculation of a pose setting adapter. Its shape ensures coaxial positioning of the planned trajectory to both the force sensor measuring direction and the insertion axis. The performance of the approach was evaluated by dissecting and aligning 15 porcine cochlear specimens of which four were subsequently used for automated electrode insertions.

Results: The pose setting adapter could easily be integrated into an insertion force test setup. Its calculation and 3D printing were possible in all 15 cases. Compared to planning data, a mean positioning accuracy of 0.21 ± 0.10 mm at the level of the round window and a mean angular accuracy of 0.43° ± 0.21° were measured. After alignment, four specimens were used for electrode insertions, demonstrating the practical applicability of our method.

Conclusion: In this work, we present a new method, which enables automated calculation and creation of a ready-to-print pose setting adapter for alignment of cochlear specimens in insertion test setups. The approach is characterized by a high level of accuracy and reproducibility in controlling the insertion trajectory. Therefore, it enables a higher degree of standardization in force measurement when performing ex vivo insertion tests and thereby improves reliability in electrode testing.
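The reported positioning and angular accuracies compare the planned insertion trajectory against the one actually achieved. A hedged sketch of how such errors could be computed from entry points and direction vectors; the function and parameter names are illustrative, not taken from the paper:

```python
import numpy as np

def trajectory_errors(planned_entry, planned_dir, actual_entry, actual_dir):
    """Positional error at the entry point (e.g. the round window) in mm and
    angular error between planned and achieved insertion axes in degrees."""
    pos_err = float(np.linalg.norm(np.asarray(actual_entry, float)
                                   - np.asarray(planned_entry, float)))
    u = np.asarray(planned_dir, float)
    v = np.asarray(actual_dir, float)
    u = u / np.linalg.norm(u)  # normalise both axes before the dot product
    v = v / np.linalg.norm(v)
    ang_err = float(np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))))
    return pos_err, ang_err
```

Clipping the dot product guards against arccos domain errors from floating-point round-off when the two axes are nearly identical.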

Development and validation of a collaborative robotic platform based on monocular vision for oral surgery: an in vitro study.
IF 2.3 Medicine (CAS Tier 3) Q3 ENGINEERING, BIOMEDICAL Pub Date : 2024-09-01 Epub Date : 2024-06-01 DOI: 10.1007/s11548-024-03161-8
Jingyang Huang, Jiahao Bao, Zongcai Tan, Shunyao Shen, Hongbo Yu

Purpose: Surgical robots effectively improve the accuracy and safety of surgical procedures. Current optically navigated oral surgical robots are typically developed based on binocular vision positioning systems, which are susceptible to factors including obscured visibility, limited workspace, and ambient light interference. Hence, the purpose of this study was to develop a lightweight robotic platform based on monocular vision for oral surgery that enhances the precision and efficiency of surgical procedures.

Methods: A monocular optical positioning system (MOPS) was applied to oral surgical robots, and a semi-autonomous robotic platform was developed utilizing monocular vision. A series of in vitro experiments was designed to simulate dental implant procedures, evaluating the performance of the optical positioning system and the accuracy of the robotic system. A singular-configuration detection and avoidance test, a collision detection and handling test, and a drilling test under slight movement were conducted to validate the safety of the robotic system.

Results: The position error and rotation error of MOPS were 0.0906 ± 0.0762 mm and 0.0158 ± 0.0069 degrees, respectively. The attitude angles of the robotic arm calculated by the forward and inverse kinematic solutions were accurate. Additionally, the robot's surgical calibration point exhibited an average error of 0.42 mm, with a maximum error of 0.57 mm. The robot system was also capable of effectively avoiding singularities and demonstrated robust safety behaviour in the presence of minor patient movements and collisions during the in vitro experiments.

Conclusion: The results of this in vitro study demonstrate that the accuracy of MOPS meets clinical requirements, making it a promising alternative in the field of oral surgical robots. Further studies will be planned to make the monocular vision oral robot suitable for clinical application.
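A rotation error of a tracking system such as MOPS is commonly expressed as the geodesic angle between an estimated and a reference rotation matrix. The following is a sketch of that standard metric, an assumption about how the error could be measured rather than the paper's own evaluation code:

```python
import numpy as np

def rotation_error_deg(r_est, r_ref):
    """Geodesic angle between two 3x3 rotation matrices, in degrees."""
    # Relative rotation taking the estimate onto the reference.
    r_rel = np.asarray(r_est, float).T @ np.asarray(r_ref, float)
    # trace(R) = 1 + 2*cos(angle) for a rotation matrix R; clip for round-off.
    cos_a = np.clip((np.trace(r_rel) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_a)))
```

The metric is symmetric in its arguments and zero exactly when the two rotations coincide.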
