
International Journal of Computer Assisted Radiology and Surgery: Latest Publications

A position-enhanced sequential feature encoding model for lung infections and lymphoma classification on CT images.
IF 2.3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-10-01 | Epub Date: 2024-07-14 | DOI: 10.1007/s11548-024-03230-y
Rui Zhao, Wenhao Li, Xilai Chen, Yuchong Li, Baochun He, Yucong Zhang, Yu Deng, Chunyan Wang, Fucang Jia

Purpose: Differentiating pulmonary lymphoma from lung infections using CT images is challenging. Existing deep neural network-based lung CT classification models rely on 2D slices, lacking comprehensive information and requiring manual selection. 3D models that involve chunking compromise image information and struggle with parameter reduction, limiting performance. These limitations must be addressed to improve accuracy and practicality.

Methods: We propose a transformer sequential feature encoding structure to integrate multi-level information from complete CT images, inspired by the clinical practice of using a sequence of cross-sectional slices for diagnosis. We incorporate position encoding and cross-level long-range information fusion modules into the feature extraction CNN network for cross-sectional slices, ensuring high-precision feature extraction.
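
As a rough illustration of this design, the following PyTorch sketch extracts per-slice CNN features, adds a learned position encoding, and fuses the slice sequence with a transformer encoder. Every module choice and size here is an assumption for illustration, not the authors' PTSFE implementation (see their repository for that).

```python
import torch
import torch.nn as nn
import torchvision.models as models

class SliceSequenceClassifier(nn.Module):
    """Illustrative sketch: per-slice CNN features + position-encoded
    transformer over the slice sequence. Not the authors' PTSFE code."""
    def __init__(self, num_slices=64, d_model=512, num_classes=2):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()               # 512-d feature per slice
        self.backbone = backbone
        self.pos_embed = nn.Parameter(torch.zeros(1, num_slices, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):                         # x: (B, S, 1, H, W) CT slices
        b, s = x.shape[:2]
        x = x.expand(-1, -1, 3, -1, -1)           # grayscale -> 3 channels
        feats = self.backbone(x.flatten(0, 1)).view(b, s, -1)
        feats = self.encoder(feats + self.pos_embed[:, :s])
        return self.head(feats.mean(dim=1))       # pool over the slice sequence

logits = SliceSequenceClassifier()(torch.randn(2, 64, 1, 224, 224))
```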

Results: We conducted comprehensive experiments on a dataset of 124 patients, with respective sizes of 64, 20 and 40 for training, validation and testing. The results of ablation experiments and comparative experiments demonstrated the effectiveness of our approach. Our method outperforms existing state-of-the-art methods in the 3D CT image classification problem of distinguishing between lung infections and pulmonary lymphoma, achieving an accuracy of 0.875, AUC of 0.953 and F1 score of 0.889.
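
The reported metrics can be computed from model outputs with scikit-learn; a minimal sketch, with made-up labels and scores standing in for real predictions:

```python
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Hypothetical data: 0 = lung infection, 1 = pulmonary lymphoma
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_prob = [0.1, 0.9, 0.8, 0.3, 0.6, 0.2, 0.7, 0.4]   # model scores for class 1
y_pred = [int(p >= 0.5) for p in y_prob]

print(accuracy_score(y_true, y_pred))   # fraction of correct predictions
print(roc_auc_score(y_true, y_prob))    # threshold-free ranking quality
print(f1_score(y_true, y_pred))         # harmonic mean of precision and recall
```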

Conclusion: The experiments verified that our proposed position-enhanced transformer-based sequential feature encoding model effectively performs high-precision feature extraction and contextual feature fusion in the lungs. It enhances the ability of a standalone CNN or transformer to extract features, thereby improving classification performance. The source code is accessible at https://github.com/imchuyu/PTSFE.

Citations: 0
Automatic robotic Doppler sonography of leg arteries.
IF 2.3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-10-01 | Epub Date: 2024-07-25 | DOI: 10.1007/s11548-024-03235-7
Jonas Osburg, Alexandra Scheibert, Marco Horn, Ravn Pater, Floris Ernst

Purpose: Robot-assisted systems offer an opportunity to support the diagnostic and therapeutic treatment of vascular diseases, reducing radiation exposure and supporting the limited medical staff in vascular medicine. In the diagnosis and follow-up care of vascular pathologies, Doppler ultrasound has become the preferred diagnostic tool. This study presents a robotic system for automatic Doppler ultrasound examinations of patients' leg vessels.

Methods: The robotic system consists of a redundant 7 DoF serial manipulator, to which a 3D ultrasound probe is attached. A compliant control was employed, whereby the transducer was guided along the vessel with a defined contact force. Visual servoing was used to correct the position of the probe during the scan so that the vessel can always be properly visualized. To track the vessel's position, methods based on template matching and Doppler sonography were used.
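
A minimal OpenCV sketch of the template-matching half of this tracking idea, assuming a grayscale ultrasound frame and a previously captured vessel patch; the Doppler-based tracker and the closed-loop probe control are not shown:

```python
import cv2
import numpy as np

def vessel_offset(frame, template):
    """Locate the vessel template in the current frame and return its
    horizontal offset from the image centre in pixels (illustrative only)."""
    scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(scores)          # best-match top-left corner
    vessel_x = max_loc[0] + template.shape[1] // 2    # template centre
    return vessel_x - frame.shape[1] // 2             # > 0: vessel right of centre

frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
template = frame[200:260, 300:360].copy()    # stand-in for a prior vessel patch
dx = vessel_offset(frame, template)          # would feed the probe position loop
```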

Results: Our system successfully scanned the femoral artery of seven volunteers automatically over a distance of 20 cm. In particular, our approach using Doppler ultrasound data showed high robustness and an accuracy of 10.7 (±3.1) px in determining the vessel's position, outperforming our template matching approach, which achieved an accuracy of 13.9 (±6.4) px.

Conclusions: The developed system enables automated robotic ultrasound examinations of vessels and thus represents an opportunity to reduce radiation exposure and staff workload. The integration of Doppler ultrasound improves the accuracy and robustness of vessel tracking, and could thus contribute to the realization of routine robotic vascular examinations and potential endovascular interventions.

Citations: 0
Quantitative in-vitro assessment of a novel robot-assisted system for cochlear implant electrode insertion.
IF 2.3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-10-01 | DOI: 10.1007/s11548-024-03276-y
Philipp Aebischer, Lukas Anschuetz, Marco Caversaccio, Georgios Mantokoudis, Stefan Weder

Purpose: As an increasing number of cochlear implant candidates exhibit residual inner ear function, hearing preservation strategies during implant insertion are gaining importance. Manual implantation is known to induce traumatic force and pressure peaks. In this study, we use a validated in-vitro model to comprehensively evaluate a novel surgical tool that addresses these challenges through motorized movement of a forceps.

Methods: Using lateral wall electrodes, we examined two subgroups of insertions: 30 insertions were performed manually by experienced surgeons, and another 30 insertions were conducted with a robot-assisted system under the same surgeons' supervision. We utilized a realistic, validated model of the temporal bone. This model accurately reproduces intracochlear frictional conditions and allows for the synchronous recording of forces on intracochlear structures, intracochlear pressure, and the position and deformation of the electrode array within the scala tympani.

Results: We identified a significant reduction in force variation during robot-assisted insertions compared to the conventional procedure, with average values of 12 mN/s and 32 mN/s, respectively. Robotic assistance was also associated with a significant reduction of strong pressure peaks and a 17 dB reduction in intracochlear pressure levels. Furthermore, our study highlights that the release of the insertion tool represents a critical phase requiring surgical training.
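
The force-variation figures suggest a mean rate of change of insertion force; a plausible NumPy reconstruction of such a metric, assuming a uniformly sampled force trace (not the authors' actual computation):

```python
import numpy as np

def force_variation(force_mN, fs_hz):
    """Mean absolute rate of change of insertion force in mN/s,
    estimated from finite differences of a sampled trace."""
    return np.mean(np.abs(np.diff(force_mN))) * fs_hz

t = np.linspace(0, 30, 30 * 100)             # hypothetical 30 s insertion at 100 Hz
force = 20 + 5 * np.sin(0.5 * t) + np.random.normal(0, 0.05, t.size)
print(f"{force_variation(force, fs_hz=100):.1f} mN/s")
```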

Conclusion: Robotic assistance demonstrated more consistent insertion speeds compared to manual techniques. Its use can significantly reduce factors associated with intracochlear trauma, highlighting its potential for improved hearing preservation. Finally, the system does not mitigate the impact of subsequent surgical steps like electrode cable routing and cochlear access sealing, pointing to areas in need of further research.

Citations: 0
Ladies and Gentlemen! This is no humbug. Why Model-Guided Medicine will become a main pillar for the future healthcare system.
IF 2.3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-10-01 | Epub Date: 2024-09-23 | DOI: 10.1007/s11548-024-03269-x
Mario A Cypko, Dirk Wilhelm

Purpose: Model-Guided Medicine (MGM) is a transformative approach to health care that offers a comprehensive and integrative perspective that goes far beyond our current concepts. In this editorial, we want to take a closer look at this innovative concept and how health care could benefit from its further development and application.

Methods: The information presented here is primarily the opinion of the authors and is based on their knowledge in the fields of information technology, computer science, and medicine. The contents are also the result of numerous discussions and scientific meetings within the CARS Society and the CARS Workshop on Model-Guided Medicine and are substantially stimulated by the available literature on the subject.

Results: The current healthcare landscape, with its reliance on isolated data points and broad population-based recommendations, often fails to integrate the dynamic and patient-specific factors necessary for truly personalised care. MGM addresses these limitations by integrating recent advancements in data processing, artificial intelligence, and human-computer interaction to create individual models that integrate the available information and knowledge of patients, healthcare providers, devices, environment, etc. Based on a holistic concept, MGM will become an effective tool for modern medicine, with a unique ability to assess and analyse interconnected relations and the combined impact of multiple factors on the individual. MGM emphasises transparency, reproducibility, and interpretability, ensuring that models are not black boxes but tools that healthcare professionals can fully understand, validate, and apply in clinical practice.

Conclusion: The practical applications of MGM are vast, ranging from optimising individual treatment plans to enhancing the efficiency of entire healthcare systems. The research community is called upon to pioneer new projects that demonstrate MGM's potential, establishing it as a central pillar of future health care, where more personalised, predictive, and effective medical practices will hopefully become the standard.

Citations: 0
Robotic navigation with deep reinforcement learning in transthoracic echocardiography.
IF 2.3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-09-20 | DOI: 10.1007/s11548-024-03275-z
Yuuki Shida, Souto Kumagai, Hiroyasu Iwata

Purpose: The search for heart components in robotic transthoracic echocardiography is a time-consuming process. This paper proposes an optimized robotic navigation system for heart components using deep reinforcement learning to achieve an efficient and effective search technique for heart components.

Method: The proposed method introduces (i) an optimized search behavior generation algorithm that avoids multiple local solutions and searches for the optimal solution and (ii) an optimized path generation algorithm that minimizes the search path, thereby realizing short search times.
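
As a toy illustration of why avoiding local solutions matters in such a search, the sketch below contrasts greedy local steps with occasional exploratory jumps over a mock visibility score. It is a deliberately simplified stand-in, not the paper's deep reinforcement learning policy:

```python
import random

def confidence(x):
    """Mock valve-visibility score over a 1-D probe position with a local
    optimum near x = 2 and the global optimum near x = 8 (arbitrary units)."""
    return max(0.0, 1 - 0.1 * (x - 8) ** 2) + 0.5 * max(0.0, 1 - 0.2 * (x - 2) ** 2)

def search(steps=200, eps=0.2):
    x = 5.0
    best = (confidence(x), x)
    for _ in range(steps):
        if random.random() < eps:
            x = random.uniform(0.0, 10.0)                  # exploratory jump
        else:
            x = max((x - 0.2, x + 0.2), key=confidence)    # greedy local step
        x = min(max(x, 0.0), 10.0)
        best = max(best, (confidence(x), x))
    return best

print(search())   # (score, position); the position should approach 8
```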

Results: The mitral valve search with the proposed method reaches the optimal solution with a probability of 74.4%, the mitral valve confidence loss rate when the local solution stops is 16.3% on average, and the inspection time with the generated path is 48.6 s on average, which is 56.6% of the time cost of the conventional method.

Conclusion: The results indicate that the proposed method improves search efficiency: the optimal location was found in many cases, and the loss of mitral valve confidence remained low even when a local solution rather than the optimal solution was reached. This suggests that the proposed method enables accurate and quick robotic navigation to find heart components.

Citations: 0
Zero-shot prompt-based video encoder for surgical gesture recognition
IF 3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-09-17 | DOI: 10.1007/s11548-024-03257-1
Mingxing Rao, Yinhong Qin, Soheil Kolouri, Jie Ying Wu, Daniel Moyer

Purpose

In order to produce a surgical gesture recognition system that can support a wide variety of procedures, either a very large annotated dataset must be acquired, or fitted models must generalize to new labels (so-called zero-shot capability). In this paper we investigate the feasibility of the latter option.

Methods

Leveraging the bridge-prompt framework, we prompt-tune a pre-trained vision-text model (CLIP) for gesture recognition in surgical videos. This allows the encoder to draw on extensive outside video and text data, while also making use of label meta-data and weakly supervised contrastive losses.
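
A minimal sketch of the zero-shot ingredient using the Hugging Face CLIP API: gesture labels are turned into text prompts and matched to a frame by feature similarity, so unseen labels need no retraining. The prompt wording and gesture names are illustrative, and the bridge-prompt temporal machinery is not shown:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical gesture vocabulary; new labels can be added at test time.
prompts = [f"a surgeon performing {g}" for g in
           ("needle positioning", "needle pushing", "suture pulling")]
frame = Image.new("RGB", (224, 224))         # stand-in for a video frame

inputs = processor(text=prompts, images=frame, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image    # (1, num_prompts) similarities
print(prompts[logits.argmax().item()])           # best-matching gesture prompt
```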

Results

Our experiments show that the prompt-based video encoder outperforms standard encoders in surgical gesture recognition tasks. Notably, it displays strong performance in zero-shot scenarios, where gestures/tasks that were not provided during the encoder training phase are included in the prediction phase. Additionally, we measure the benefit of including text descriptions in the feature extractor training schema.

Conclusion

Bridge-prompt and similar pre-trained + prompt-tuned video encoder models provide strong visual representations for surgical robotics, especially in gesture recognition tasks. Given the diverse range of surgical tasks (gestures), the ability of these models to transfer zero-shot, without any task (gesture) specific retraining, makes them invaluable.

Citations: 0
Robust unsupervised texture segmentation for motion analysis in ultrasound images
IF 3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-09-17 | DOI: 10.1007/s11548-024-03249-1
Arnaud Brignol, Farida Cheriet, Jean-François Aubin-Fournier, Carole Fortin, Catherine Laporte

Purpose

Ultrasound imaging has emerged as a promising, cost-effective, and portable non-irradiant modality for the diagnosis and follow-up of diseases. Motion analysis can be performed by segmenting anatomical structures of interest and then tracking them over time. However, doing so in a robust way is challenging, as ultrasound images often display low contrast and blurry boundaries.

Methods

In this paper, a robust descriptor inspired by the fractal dimension is presented to locally characterize the gray-level variations of an image. This descriptor is an adaptive grid pattern whose scale varies locally with the gray-level variations of the image. Robust features are then located based on the gray-level variations, which are more likely to be consistently tracked over time despite the presence of noise.
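
For intuition, a generic box-counting estimate of a patch's fractal dimension is sketched below; the paper's descriptor is an adaptive grid pattern rather than this plain estimator:

```python
import numpy as np

def box_counting_dimension(patch, threshold=128):
    """Generic box-counting fractal-dimension estimate for a square
    grayscale patch (illustrative stand-in, not the paper's descriptor)."""
    binary = patch > threshold
    size = binary.shape[0]
    scales, counts = [], []
    box = size // 2
    while box >= 1:
        n = sum(binary[i:i + box, j:j + box].any()
                for i in range(0, size, box)
                for j in range(0, size, box))
        scales.append(box)
        counts.append(n)
        box //= 2
    # Slope of log(count) against log(1/box size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return slope

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(box_counting_dimension(patch))    # near 2 for a dense random patch
```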

Results

The method was validated on three datasets: segmentation of the left ventricle on simulated echocardiography (Dice coefficient, DC), and accuracy of diaphragm motion tracking for healthy subjects (mean sum of distances, MSD) and for a scoliosis patient (root mean square error, RMSE). Results show that the method segments the left ventricle accurately (DC = 0.84) and robustly tracks the diaphragm motion for healthy subjects (MSD = 1.10 mm) and for the scoliosis patient (RMSE = 1.22 mm).

Conclusions

This method has the potential to segment structures of interest according to their texture in an unsupervised fashion, as well as to help analyze the deformation of tissues. Possible applications are not limited to ultrasound images: the same principle could also be applied to other medical imaging modalities such as MRI or CT scans.

Citations: 0
Autonomous countertraction for secure field of view in laparoscopic surgery using deep reinforcement learning
IF 3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-09-16 | DOI: 10.1007/s11548-024-03264-2
Yuriko Iyama, Yudai Takahashi, Jiahe Chen, Takumi Noda, Kazuaki Hara, Etsuko Kobayashi, Ichiro Sakuma, Naoki Tomii

Purpose

Countertraction is a vital technique in laparoscopic surgery, stretching the tissue surface for incision and dissection. Due to the technical challenges and frequency of countertraction, autonomous countertraction has the potential to significantly reduce surgeons’ workload. Despite several methods proposed for automation, achieving optimal tissue visibility and tension for incision remains unrealized. Therefore, we propose a method for autonomous countertraction that enhances tissue surface planarity and visibility.

Methods

We constructed a neural network that integrates a point cloud convolutional neural network (CNN) with a deep reinforcement learning (RL) model. This network continuously controls the forceps position based on the surface shape observed by a camera and the forceps position. RL is conducted in a physical simulation environment, with verification experiments performed in both simulation and phantom environments. The evaluation was performed based on plane error, representing the average distance between the tissue surface and its least-squares plane, and angle error, indicating the angle between the tissue surface vector and the camera’s optical axis vector.
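
Both metrics follow directly from these definitions; a NumPy sketch, assuming a point cloud of the observed tissue surface and a camera optical-axis vector:

```python
import numpy as np

def plane_and_angle_error(points, optical_axis):
    """Plane error: mean distance of surface points to their least-squares
    plane. Angle error: angle between the plane normal and the camera's
    optical axis. A reconstruction of the stated metrics, not the authors' code."""
    centered = points - points.mean(axis=0)
    normal = np.linalg.svd(centered)[2][-1]      # direction of least variance
    plane_err = np.mean(np.abs(centered @ normal))
    cos_ang = abs(normal @ optical_axis) / np.linalg.norm(optical_axis)
    return plane_err, np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

points = np.random.rand(200, 3) * [50.0, 50.0, 2.0]   # roughly flat surface, mm
print(plane_and_angle_error(points, np.array([0.0, 0.0, 1.0])))
```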

Results

The plane error decreased under all conditions in both simulation and phantom environments, with 93.3% of cases showing a reduction in angle error. In simulations, the plane error decreased from 3.6 ± 1.5 mm to 1.1 ± 1.8 mm, and the angle error from 29 ± 19° to 14 ± 13°. In the phantom environment, the plane error decreased from 0.96 ± 0.24 mm to 0.39 ± 0.23 mm, and the angle error from 32 ± 29° to 17 ± 20°.

Conclusion

The proposed neural network was validated in both simulation and phantom experimental settings, confirming that traction control improved tissue planarity and visibility. These results demonstrate the feasibility of automating countertraction using the proposed model.

Citations: 0
Multimodal registration network with multi-scale feature-crossing
IF 3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-09-16 | DOI: 10.1007/s11548-024-03258-0
Shuting Liu, Guoliang Wei, Yi Fan, Lei Chen, Zhaodong Zhang

Purpose

A critical piece of information for prostate intervention and cancer treatment is provided by the complementary medical imaging modalities of ultrasound (US) and magnetic resonance imaging (MRI). Therefore, MRI–US image fusion is often required during prostate examination to provide contrast-enhanced TRUS, in which image registration is a key step in multimodal image fusion.

Methods

We propose a novel multi-scale feature-crossing network for the prostate MRI–US image registration task. We designed a feature-crossing module to enhance information flow in the hidden layer by integrating intermediate features between adjacent scales. Additionally, an attention block utilizing three-dimensional convolution interacts information between channels, improving the correlation between different modal features. We used 100 cases randomly selected from The Cancer Imaging Archive (TCIA) for our experiments. A fivefold cross-validation method was applied, dividing the dataset into five subsets. Four subsets were used for training, and one for testing, repeating this process five times to ensure each subset served as the test set once.
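
A minimal scikit-learn sketch of this splitting scheme:

```python
from sklearn.model_selection import KFold

cases = list(range(100))                 # the 100 TCIA cases
kfold = KFold(n_splits=5, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(kfold.split(cases)):
    # Four subsets (80 cases) train the network, one subset (20 cases) tests it;
    # each subset serves as the test set exactly once across the five folds.
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")
```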

Results

We tested and evaluated our technique using fivefold cross-validation. The cross-validation trials resulted in a median target registration error of 2.20 mm on landmark centroids and a median Dice of 0.87 on prostate glands, both better than the baseline model. In addition, the standard deviation of the Dice similarity coefficient is 0.06, which suggests that the model is stable.

Conclusion

We propose a novel multi-scale feature-crossing network for the prostate MRI–US image registration task. A random selection of 100 cases from The Cancer Imaging Archive (TCIA) was used to test and evaluate our approach using fivefold cross-validation. The experimental results showed that our method improves registration accuracy. After registration, MRI and TRUS images were more similar in structure and morphology, and the location and morphology of cancer were more accurately reflected in the images.

Citations: 0
An automated framework for pediatric hip surveillance and severity assessment using radiographs
IF 3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-09-16 | DOI: 10.1007/s11548-024-03254-4
Van Khanh Lam, Elizabeth Fischer, Kochai Jawad, Sean Tabaie, Kevin Cleary, Syed Muhammad Anwar

Purpose

Hip dysplasia is the second most common orthopedic condition in children with cerebral palsy (CP) and may result in disability and pain. The migration percentage (MP) is a widely used metric in hip surveillance, calculated from an anterior-posterior pelvis radiograph. However, manual quantification of MP values from hip X-ray scans, the current standard practice, is time-intensive, requires expert knowledge, and does not account for human bias. The purpose of this study is to develop a machine learning algorithm to automatically quantify MP values from a hip X-ray scan and hence provide a severity assessment, which can then be used for surveillance, treatment planning, and management.

Methods

X-ray scans from 210 patients were curated, pre-processed, and manually annotated at our clinical center. Several machine learning models were trained using pre-trained weights from Inception ResNet-V2, VGG-16, and VGG-19, with different strategies (pre-processing, with and without region of interest (ROI) detection, with and without data augmentation) to find an optimal model for automatic hip landmarking. The predicted landmarks were then used by our geometric algorithm to quantify the MP value for the input hip X-ray scan.
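
A sketch of a geometric step of this kind, assuming Reimers' migration percentage computed from three landmark x-coordinates (the medial and lateral femoral-head edges and Perkin's line); the coordinate values and the exact landmark set are hypothetical:

```python
def migration_percentage(medial_x, lateral_x, perkins_x):
    """Reimers' migration percentage: the fraction of femoral-head width
    lying lateral to Perkin's line, in percent. Assumes lateral is +x."""
    head_width = lateral_x - medial_x
    uncovered = max(0.0, lateral_x - perkins_x)   # head portion past Perkin's line
    return 100.0 * uncovered / head_width

# Hypothetical pixel coordinates from a landmarked AP pelvis radiograph
print(f"MP = {migration_percentage(310.0, 420.0, 385.0):.1f}%")   # about 31.8%
```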

Results

The pre-trained VGG-19 model, fine-tuned with additional custom layers, yielded the lowest mean squared error values for both training and test data when ROI-cropped images were used along with data augmentation for model training. For benchmarking, the MP values calculated by the algorithm were compared to manual ground truth labels produced by our orthopedic fellows using the hip screen application.

Conclusion

The results showed the feasibility of the machine learning model in automatic hip landmark detection for reliably quantifying MP values from hip X-ray scans. The algorithm could be used as an accurate and reliable tool in orthopedic care for diagnosis, severity assessment, and hence treatment and surgical planning for hip displacement.

Citations: 0