
Latest publications: International Journal of Computer Assisted Radiology and Surgery

Towards multimodal visualization of esophageal motility: fusion of manometry, impedance, and videofluoroscopic image sequences.
IF 2.3 | Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-10-08 | DOI: 10.1007/s11548-024-03265-1
Alexander Geiger, Lukas Bernhard, Florian Gassert, Hubertus Feußner, Dirk Wilhelm, Helmut Friess, Alissa Jell

Purpose: Dysphagia is the inability or difficulty to swallow normally. Standard procedures for diagnosing the underlying disease include, among others, X-ray videofluoroscopy, manometry, and impedance examinations, usually performed consecutively. To gain more insight, ongoing research aims to acquire these different modalities at the same time and present them in a joint visualization. One approach to creating a combined view is to project the manometry and impedance values onto the correct locations in the X-ray images. This requires identifying the exact sensor locations in the images.

Methods: This work gives an overview of the challenges associated with the sensor detection task and proposes a robust approach to detecting the sensors in X-ray image sequences, ultimately allowing the manometry and impedance values to be projected onto the correct locations in the images.

Results: The developed sensor detection approach is evaluated on a total of 14 sequences from different patients, achieving an F1-score of 86.36%. To demonstrate the robustness of the approach, a further study adds different levels of noise to the images; the performance of our sensor detection method decreases only slightly in these scenarios. This robust sensor detection provides the basis for accurately projecting manometry and impedance values onto the images, making it possible to create a multimodal visualization of the swallowing process. The resulting visualizations are evaluated qualitatively by domain experts, who report a clear benefit of the proposed fused visualization approach.

Conclusion: Using our preprocessing and sensor detection method, we show that the sensor detection task can be solved with high accuracy. This enables a novel, multimodal visualization of esophageal motility, providing deeper insight into patients' swallowing disorders.
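The projection step described above — painting each sensor's manometry or impedance reading onto its detected pixel location in the videofluoroscopy frame — can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation; the function name, value range, and red-channel encoding are assumptions.

```python
import numpy as np

def project_sensor_values(image, sensor_px, values, vmin=0.0, vmax=100.0, radius=3):
    """Overlay scalar sensor readings (e.g. manometry, in mmHg) onto a
    grayscale X-ray frame at the detected sensor pixel locations.

    image     : (H, W) float array, the videofluoroscopy frame
    sensor_px : iterable of (row, col) sensor detections
    values    : per-sensor readings to visualize
    Returns an (H, W, 3) RGB array whose red channel encodes the reading.
    """
    h, w = image.shape
    rgb = np.stack([image] * 3, axis=-1).astype(float)
    norm = np.clip((np.asarray(values, float) - vmin) / (vmax - vmin), 0.0, 1.0)
    for (r, c), v in zip(sensor_px, norm):
        # Paint a small square patch around each detected sensor.
        r0, r1 = max(r - radius, 0), min(r + radius + 1, h)
        c0, c1 = max(c - radius, 0), min(c + radius + 1, w)
        rgb[r0:r1, c0:c1, 0] = v
        rgb[r0:r1, c0:c1, 1:] = 0.0
    return rgb

frame = np.zeros((64, 64))
overlay = project_sensor_values(frame, [(10, 10), (40, 50)], [25.0, 75.0])
```

In a full pipeline, `overlay` would be recomputed per frame as the detected sensor positions move during the swallow.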

Citations: 0
A framework for three-dimensional statistical shape modeling of the proximal femur in Legg-Calvé-Perthes disease.
IF 2.3 | Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-10-08 | DOI: 10.1007/s11548-024-03272-2
Luke G Johnson, Joseph D Mozingo, Penny R Atkins, Seaton Schwab, Alan Morris, Shireen Y Elhabian, David R Wilson, Harry K W Kim, Andrew E Anderson

Purpose: The pathomorphology of Legg-Calvé-Perthes disease (LCPD) is a key contributor to poor long-term outcomes such as hip pain, femoroacetabular impingement, and early-onset osteoarthritis. Plain radiographs, commonly used for research and in the clinic, cannot accurately represent the full extent of LCPD deformity. The purpose of this study was to develop and evaluate a methodological framework for three-dimensional (3D) statistical shape modeling (SSM) of the proximal femur in LCPD.

Methods: We developed a framework consisting of three core steps: segmentation, surface mesh preparation, and particle-based correspondence. The framework aims to address challenges in modeling this rare condition, characterized by highly heterogeneous deformities across a wide age range and small sample sizes. We evaluated this framework by producing an SSM from clinical magnetic resonance images of 13 proximal femurs with LCPD deformity from 11 patients between six and 12 years of age.

Results: After removing differences in scale and pose, the dominant shape modes described morphological features characteristic of LCPD, including a broad and flat femoral head, high-riding greater trochanter, and reduced neck-shaft angle. The first four shape modes were chosen for the evaluation of the model's performance, together describing 87.5% of the overall cohort variance. The SSM was generalizable to unfamiliar examples with an average point-to-point reconstruction error below 1 mm. We observed strong Spearman rank correlations (up to 0.79) between some shape modes, 3D measurements of femoral head asphericity, and clinical radiographic metrics.

Conclusion: In this study, we present a framework, based on SSM, for the objective description of LCPD deformity in three dimensions. Our methods can accurately describe overall shape variation using a small number of parameters, and are a step toward a widely accepted, objective 3D quantification of LCPD deformity.
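The core of such a statistical shape model — a mean shape plus PCA modes computed over corresponded point sets — can be illustrated with a toy example. This is a minimal NumPy sketch using synthetic data (random perturbations standing in for corresponded femur surfaces), not the authors' particle-based correspondence pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cohort: 13 shapes, each 50 corresponded 3D points, flattened to 150-D.
n_shapes, n_points = 13, 50
base = rng.normal(size=n_points * 3)
shapes = base + 0.1 * rng.normal(size=(n_shapes, n_points * 3))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# PCA via SVD: rows of vt are the shape modes; s**2 are the mode variances.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = (s**2) / (s**2).sum()

# Reconstruct every shape from its first k mode coefficients only.
k = 4
coeffs = centered @ vt[:k].T          # (n_shapes, k) mode scores
recon = mean_shape + coeffs @ vt[:k]
err = np.linalg.norm(recon - shapes)  # residual captured by the dropped modes
```

The fraction `explained[:k].sum()` plays the same role as the 87.5% cohort-variance figure reported above: how much shape variation a small number of parameters describes.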

Citations: 0
An intuitive guidewire control mechanism for robotic intervention.
IF 2.3 | Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-10-07 | DOI: 10.1007/s11548-024-03279-9
Rohit Dey, Yichen Guo, Yang Liu, Ajit Puri, Luis Savastano, Yihao Zheng

Purpose: Teleoperated Interventional Robotic systems (TIRs) are developed to reduce physicians' radiation exposure and physical stress and to enhance the accuracy and stability of device manipulation. Nevertheless, TIRs are not widely adopted, partly due to the lack of intuitive control interfaces. Current TIR interfaces such as joysticks, keyboards, and touchscreens differ significantly from traditional manual techniques, resulting in a shallow, prolonged learning curve. To this end, this research introduces a novel control mechanism for intuitive operation and seamless adoption of TIRs.

Methods: An off-the-shelf medical torque device augmented with a micro-electromagnetic tracker was proposed as the control interface, preserving the tactile sensation and muscle memory integral to interventionalists' proficiency. The control inputs to drive the TIR were extracted via real-time motion mapping of the interface. To verify that the proposed control mechanism can operate the TIR accurately, evaluation experiments were conducted using industrial-grade encoders.

Results: Mean tracking errors of 0.32 ± 0.12 mm in the linear and 0.54 ± 0.07° in the angular direction were achieved. The tracking time lag was found to be 125 ms on average using Padé approximation. Ergonomically, the developed control interface is 3.5 mm larger in diameter and 4.5 g heavier than traditional torque devices.

Conclusion: Closely resembling traditional torque devices while achieving results comparable to state-of-the-art commercially available TIRs, this research provides an intuitive control interface that could support wider clinical adoption of robot-assisted interventions.
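The motion-mapping idea above — turning consecutive tracker poses of the hand-held torque device into insertion and twist commands for the robot — can be sketched as below. The function names and the axis convention (device axis taken as the third column of the rotation matrix) are assumptions for illustration; the paper's actual mapping may differ.

```python
import numpy as np

def motion_command(prev_pose, curr_pose):
    """Map two consecutive tracker samples to (advance_mm, twist_deg).

    A pose is (position, R): position in mm, R a 3x3 rotation matrix whose
    third column is assumed to be the torque-device axis. Advance is the
    displacement projected onto that axis; twist is the relative rotation
    angle between the two samples.
    """
    (p0, r0), (p1, r1) = prev_pose, curr_pose
    axis = r0[:, 2]
    advance = float((p1 - p0) @ axis)
    r_rel = r0.T @ r1                                  # relative rotation
    cos_a = np.clip((np.trace(r_rel) - 1.0) / 2.0, -1.0, 1.0)
    twist = float(np.degrees(np.arccos(cos_a)))
    return advance, twist

def rot_z(deg):
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# The operator advanced 2.5 mm along the device axis and twisted 30 degrees.
pose0 = (np.zeros(3), np.eye(3))
pose1 = (np.array([0.0, 0.0, 2.5]), rot_z(30.0))
adv, tw = motion_command(pose0, pose1)
```

Streaming such (advance, twist) pairs at the tracker's sample rate would drive the TIR's linear and rotary actuators.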

Citations: 0
Graph neural networks in multi-stained pathological imaging: extended comparative analysis of Radiomic features.
IF 2.3 | Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-10-07 | DOI: 10.1007/s11548-024-03277-x
Luis Carlos Rivera Monroy, Leonhard Rist, Christian Ostalecki, Andreas Bauer, Julio Vera, Katharina Breininger, Andreas Maier

Purpose: This study investigates the application of Radiomic features within graph neural networks (GNNs) for the classification of multiple-epitope-ligand cartography (MELC) pathology samples. It aims to enhance the diagnosis of often misdiagnosed skin diseases such as eczema, lymphoma, and melanoma. The novel contribution lies in integrating Radiomic features with GNNs and comparing their efficacy against traditional multi-stain profiles.

Methods: We utilized GNNs to process multiple pathological slides as cell-level graphs, comparing their performance with XGBoost and Random Forest classifiers. The analysis included two feature types: multi-stain profiles and Radiomic features. Dimensionality reduction techniques such as UMAP and t-SNE were applied to optimize the feature space, and graph connectivity was based on spatial and feature closeness.

Results: Integrating Radiomic features into spatially connected graphs significantly improved classification accuracy over traditional models. The application of UMAP further enhanced the performance of GNNs, particularly in classifying diseases with similar pathological features. The GNN model outperformed baseline methods, demonstrating its robustness in handling complex histopathological data.

Conclusion: Radiomic features processed through GNNs show significant promise for multi-disease classification, improving diagnostic accuracy. This study's findings suggest that integrating advanced imaging analysis with graph-based modeling can lead to better diagnostic tools. Future research should expand these methods to a wider range of diseases to validate their generalizability and effectiveness.
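The graph-construction step described in the Methods — connecting cells by spatial closeness and by feature-space closeness — can be sketched as below. This is a hedged sketch with synthetic cell centroids and features; the k value and the union of the two edge sets are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Toy slide: 200 cells with 2D centroids and 8-D per-cell feature vectors
# (stand-ins for multi-stain or Radiomic descriptors).
xy = rng.uniform(0, 100, size=(200, 2))
feats = rng.normal(size=(200, 8))

def knn_edges(points, k):
    """Undirected edge set linking each node to its k nearest neighbours."""
    dist, idx = cKDTree(points).query(points, k=k + 1)  # first hit is self
    edges = {tuple(sorted((i, j))) for i, row in enumerate(idx) for j in row[1:]}
    return np.array(sorted(edges))

spatial_edges = knn_edges(xy, k=5)      # spatial closeness
feature_edges = knn_edges(feats, k=5)   # feature-space closeness
graph_edges = np.unique(np.vstack([spatial_edges, feature_edges]), axis=0)
```

A GNN would then consume `graph_edges` together with the per-cell feature matrix as its input graph.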

Citations: 0
A usability analysis of augmented reality and haptics for surgical planning.
IF 2.3 | Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-10-01 | Epub Date: 2024-06-28 | DOI: 10.1007/s11548-024-03207-x
Negar Kazemipour, Amir Hooshiar, Marta Kersten-Oertel

Purpose: Proper visualization of and interaction with complex anatomical data can improve understanding, allowing for more intuitive surgical planning. The goal of our work was to determine which platforms for interacting with 3D medical data are the most intuitive yet practical in the context of surgical planning.

Methods: We compared planning using a monitor and mouse, a monitor with a haptic device, and an augmented reality (AR) head-mounted display which uses a gesture-based interaction. To determine the most intuitive system, two user studies, one with novices and one with experts, were conducted. The studies involved planning of three scenarios: (1) heart valve repair, (2) hip tumor resection, and (3) pedicle screw placement. Task completion time, NASA Task Load Index and system-specific questionnaires were used for the evaluation.

Results: Both novices and experts preferred the AR system for pedicle screw placement. Novices preferred the haptic system for hip tumor planning, while experts preferred the mouse and keyboard. In the case of heart valve planning, novices preferred the AR system but there was no clear preference for experts. Both groups reported that AR provides the best spatial depth perception.

Conclusion: The results of the user studies suggest that different surgical cases may benefit from varying interaction and visualization methods. For example, for planning surgeries with implants and instrumentations, mixed reality could provide better 3D spatial perception, whereas using landmarks for delineating specific targets may be more effective using a traditional 2D interface.
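For the workload measure used in the evaluation, the unweighted ("Raw TLX") variant of the NASA Task Load Index is simply the mean of the six subscale ratings. A minimal sketch with hypothetical ratings (the numbers are invented, not the study's data):

```python
import numpy as np

# The six NASA-TLX subscales, each rated 0-100; Raw TLX is their unweighted mean.
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings):
    """Raw NASA-TLX workload score: unweighted mean of the six subscale ratings."""
    return float(np.mean([ratings[s] for s in SUBSCALES]))

# Hypothetical ratings for one participant planning pedicle screws with AR.
ratings = {"mental": 40, "physical": 20, "temporal": 30,
           "performance": 25, "effort": 35, "frustration": 10}
score = raw_tlx(ratings)
```

Lower scores indicate lower perceived workload, which is how the three interfaces would be compared per task.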

Citations: 0
Needle tracking in low-resolution ultrasound volumes using deep learning.
IF 2.3 | Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-10-01 | Epub Date: 2024-07-13 | DOI: 10.1007/s11548-024-03234-8
Sarah Grube, Sarah Latus, Finn Behrendt, Oleksandra Riabova, Maximilian Neidhardt, Alexander Schlaefer

Purpose: Clinical needle insertion into tissue, commonly assisted by 2D ultrasound imaging for real-time navigation, faces the challenge of precisely aligning the needle and probe to reduce out-of-plane movement. Recent studies investigate 3D ultrasound imaging together with deep learning to overcome this problem, focusing on acquiring high-resolution images to create optimal conditions for needle tip detection. However, high resolution also requires considerable time for image acquisition and processing, which limits real-time capability. Therefore, we aim to maximize the US volume rate, accepting low image resolution as the trade-off. We propose a deep learning approach to directly extract the 3D needle tip position from sparsely sampled US volumes.

Methods: We design an experimental setup with a robot inserting a needle into water and chicken liver tissue. In contrast to manual annotation, we assess the needle tip position from the known robot pose. During insertion, we acquire a large data set of low-resolution volumes using a 16  ×  16 element matrix transducer with a volume rate of 4 Hz. We compare the performance of our deep learning approach with conventional needle segmentation.

Results: Our experiments in water and liver show that deep learning outperforms the conventional approach while achieving sub-millimeter accuracy. We achieve mean position errors of 0.54 mm in water and 1.54 mm in liver for deep learning.

Conclusion: Our study underlines the strengths of deep learning to predict the 3D needle positions from low-resolution ultrasound volumes. This is an important milestone for real-time needle navigation, simplifying the alignment of needle and ultrasound probe and enabling a 3D motion analysis.
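The position errors reported above are presumably mean Euclidean distances between predicted and ground-truth 3D tip positions; that standard metric can be sketched as follows, with hypothetical values (not the authors' data):

```python
import numpy as np

def mean_position_error(pred, gt):
    """Mean Euclidean distance (in the input units, e.g. mm) between
    predicted and ground-truth 3D needle-tip positions."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    return float(np.linalg.norm(pred - gt, axis=1).mean())

# Hypothetical tips: predictions offset from ground truth by known amounts,
# giving per-sample errors of 0.5 mm and 1.0 mm.
gt = np.array([[0.0, 0.0, 10.0], [1.0, 2.0, 12.0]])
pred = gt + np.array([[0.3, 0.0, 0.4], [0.0, 0.6, 0.8]])
err_mm = mean_position_error(pred, gt)
```

In the study's setup, `gt` would come from the known robot pose rather than manual annotation.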

Citations: 0
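The sub-millimeter accuracies reported above (0.54 mm in water, 1.54 mm in liver) are mean 3D position errors between predicted needle tips and the robot-pose ground truth. A minimal sketch of that metric (illustrative only; function name and toy data are assumptions, not the authors' code):

```python
import numpy as np

def mean_position_error(pred_mm: np.ndarray, gt_mm: np.ndarray) -> float:
    """Mean Euclidean distance (mm) between predicted and ground-truth 3D tip positions."""
    return float(np.linalg.norm(pred_mm - gt_mm, axis=1).mean())

# Toy check: four predictions, each offset by 0.5 mm along x from ground truth.
gt = np.zeros((4, 3))
pred = gt + np.array([0.5, 0.0, 0.0])
print(mean_position_error(pred, gt))  # 0.5
```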
A bronchoscopic navigation method based on neural radiation fields.
IF 2.3 Tier 3 (Medicine) Q3 ENGINEERING, BIOMEDICAL Pub Date: 2024-10-01 Epub Date: 2024-08-07 DOI: 10.1007/s11548-024-03243-7
Lifeng Zhu, Jianwei Zheng, Cheng Wang, Junhong Jiang, Aiguo Song

Purpose: We introduce a novel approach for bronchoscopic navigation that leverages neural radiance fields (NeRF) to passively locate the endoscope solely from bronchoscopic images. This approach aims to overcome the limitations and challenges of current bronchoscopic navigation tools that rely on external infrastructures or require active adjustment of the bronchoscope.

Methods: To address the challenges, we leverage NeRF for bronchoscopic navigation, enabling passive endoscope localization from bronchoscopic images. We develop a two-stage pipeline: offline training using preoperative data and online passive pose estimation during surgery. To enhance performance, we employ Anderson acceleration and incorporate semantic appearance transfer to deal with the sim-to-real gap between training and inference stages.

Results: We assessed the viability of our approach by conducting tests on virtual bronchoscopic images and a physical phantom against SLAM-based methods. The average rotation error on our virtual dataset is about 3.18° and the translation error is around 4.95 mm. On the physical phantom test, the average rotation and translation errors are approximately 5.14° and 13.12 mm.

Conclusion: Our NeRF-based bronchoscopic navigation method eliminates reliance on external infrastructures and active adjustments, offering promising advancements in bronchoscopic navigation. Experimental validation on simulation and real-world phantom models demonstrates its efficacy in addressing challenges like low texture and challenging lighting conditions.
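The methods above describe offline NeRF training followed by online pose estimation sped up with Anderson acceleration. A minimal, generic sketch of Anderson(m) mixing for a fixed-point iteration (illustrative only; the paper's actual pose optimizer, its objective, and its parameters are not given here):

```python
import numpy as np

def anderson_step(X, G, m=5):
    """One Anderson(m) mixing update from iterate history X and map outputs G = g(X)."""
    F = G - X                        # residuals f_k = g(x_k) - x_k
    k = min(m, X.shape[0] - 1)
    dF = F[-k:] - F[-k - 1:-1]       # residual differences
    dG = G[-k:] - G[-k - 1:-1]
    # Least-squares mixing coefficients gamma minimizing ||f_n - dF^T gamma||
    gamma = np.linalg.lstsq(dF.T, F[-1], rcond=None)[0]
    return G[-1] - gamma @ dG

def fixed_point_anderson(g, x0, iters=50, m=5, tol=1e-12):
    """Accelerated fixed-point iteration for x = g(x)."""
    X, G = [np.asarray(x0, dtype=float)], []
    for _ in range(iters):
        G.append(np.asarray(g(X[-1]), dtype=float))
        if np.linalg.norm(G[-1] - X[-1]) < tol:
            return G[-1]
        x_next = G[-1] if len(X) < 2 else anderson_step(np.array(X), np.array(G), m)
        X.append(x_next)
    return X[-1]

# Toy example: the fixed point of cos(x) is the Dottie number, ≈ 0.7390851.
print(fixed_point_anderson(np.cos, np.array([0.0])))
```

Anderson mixing reuses the last m residuals to extrapolate the next iterate, which is why it can cut down the number of rendering-and-compare steps in an iterative pose refinement loop.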

{"title":"A bronchoscopic navigation method based on neural radiation fields.","authors":"Lifeng Zhu, Jianwei Zheng, Cheng Wang, Junhong Jiang, Aiguo Song","doi":"10.1007/s11548-024-03243-7","DOIUrl":"10.1007/s11548-024-03243-7","url":null,"abstract":"<p><strong>Purpose: </strong>We introduce a novel approach for bronchoscopic navigation that leverages neural radiance fields (NeRF) to passively locate the endoscope solely from bronchoscopic images. This approach aims to overcome the limitations and challenges of current bronchoscopic navigation tools that rely on external infrastructures or require active adjustment of the bronchoscope.</p><p><strong>Methods: </strong>To address the challenges, we leverage NeRF for bronchoscopic navigation, enabling passive endoscope localization from bronchoscopic images. We develop a two-stage pipeline: offline training using preoperative data and online passive pose estimation during surgery. To enhance performance, we employ Anderson acceleration and incorporate semantic appearance transfer to deal with the sim-to-real gap between training and inference stages.</p><p><strong>Results: </strong>We assessed the viability of our approach by conducting tests on virtual bronchscopic images and a physical phantom against the SLAM-based methods. The average rotation error in our virtual dataset is about 3.18 <math><mmultiscripts><mrow></mrow> <mrow></mrow> <mo>∘</mo></mmultiscripts> </math> and the translation error is around 4.95 mm. On the physical phantom test, the average rotation and translation error are approximately 5.14 <math><mmultiscripts><mrow></mrow> <mrow></mrow> <mo>∘</mo></mmultiscripts> </math> and 13.12 mm.</p><p><strong>Conclusion: </strong>Our NeRF-based bronchoscopic navigation method eliminates reliance on external infrastructures and active adjustments, offering promising advancements in bronchoscopic navigation. 
Experimental validation on simulation and real-world phantom models demonstrates its efficacy in addressing challenges like low texture and challenging lighting conditions.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2011-2021"},"PeriodicalIF":2.3,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141903518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Transformers for colorectal cancer segmentation in CT imaging.
IF 2.3 Tier 3 (Medicine) Q3 ENGINEERING, BIOMEDICAL Pub Date: 2024-10-01 Epub Date: 2024-07-04 DOI: 10.1007/s11548-024-03217-9
Georg Hille, Pavan Tummala, Lena Spitz, Sylvia Saalfeld

Purpose: Recently, transformer models have become the state of the art in various medical image segmentation tasks and challenges, outperforming most conventional deep learning approaches. Picking up on that trend, this study aims at applying various transformer models to the highly challenging task of colorectal cancer (CRC) segmentation in CT imaging and assessing how they hold up against the current state-of-the-art convolutional neural network (CNN), the nnUnet. Furthermore, we wanted to investigate the impact of network size on the resulting accuracies, since transformer models tend to be significantly larger than conventional network architectures.

Methods: For this purpose, six different transformer models, with specific architectural advancements and network sizes were implemented alongside the aforementioned nnUnet and were applied to the CRC segmentation task of the medical segmentation decathlon.

Results: The best results were achieved with the Swin-UNETR, D-Former, and VT-Unet, all transformer models, with Dice similarity coefficients (DSC) of 0.60, 0.59, and 0.59, respectively. The current state-of-the-art CNN, the nnUnet, was therefore outperformed by transformer architectures on this task. Furthermore, a comparison with the inter-observer variability (IOV) of approx. 0.64 DSC indicates almost expert-level accuracy. The comparatively low IOV emphasizes the complexity and challenge of CRC segmentation, as well as indicating limits on the achievable segmentation accuracy.

Conclusion: As a result of this study, transformer models underline their current upward trend in producing state-of-the-art results, also for the challenging task of CRC segmentation. However, with ever smaller gains in total accuracy, as demonstrated in this study by the on-par performance of multiple network variants, other advantages like efficiency, low computational demands, or ease of adaptation to new tasks become more and more relevant.
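The DSC values quoted in this abstract are, for binary masks, twice the intersection over the sum of the mask sizes. A minimal sketch of that metric (illustrative helper, not the study's evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2*|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))

# Toy masks: 2 overlapping voxels out of 3 positives each -> DSC = 2*2/(3+3).
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(a, b), 3))  # 0.667
```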

{"title":"Transformers for colorectal cancer segmentation in CT imaging.","authors":"Georg Hille, Pavan Tummala, Lena Spitz, Sylvia Saalfeld","doi":"10.1007/s11548-024-03217-9","DOIUrl":"10.1007/s11548-024-03217-9","url":null,"abstract":"<p><strong>Purpose: </strong>Most recently transformer models became the state of the art in various medical image segmentation tasks and challenges, outperforming most of the conventional deep learning approaches. Picking up on that trend, this study aims at applying various transformer models to the highly challenging task of colorectal cancer (CRC) segmentation in CT imaging and assessing how they hold up to the current state-of-the-art convolutional neural network (CNN), the nnUnet. Furthermore, we wanted to investigate the impact of the network size on the resulting accuracies, since transformer models tend to be significantly larger than conventional network architectures.</p><p><strong>Methods: </strong>For this purpose, six different transformer models, with specific architectural advancements and network sizes were implemented alongside the aforementioned nnUnet and were applied to the CRC segmentation task of the medical segmentation decathlon.</p><p><strong>Results: </strong>The best results were achieved with the Swin-UNETR, D-Former, and VT-Unet, each transformer models, with a Dice similarity coefficient (DSC) of 0.60, 0.59 and 0.59, respectively. Therefore, the current state-of-the-art CNN, the nnUnet could be outperformed by transformer architectures regarding this task. Furthermore, a comparison with the inter-observer variability (IOV) of approx. 0.64 DSC indicates almost expert-level accuracy. 
The comparatively low IOV emphasizes the complexity and challenge of CRC segmentation, as well as indicating limitations regarding the achievable segmentation accuracy.</p><p><strong>Conclusion: </strong>As a result of this study, transformer models underline their current upward trend in producing state-of-the-art results also for the challenging task of CRC segmentation. However, with ever smaller advances in total accuracies, as demonstrated in this study by the on par performances of multiple network variants, other advantages like efficiency, low computation demands, or ease of adaption to new tasks become more and more relevant.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2079-2087"},"PeriodicalIF":2.3,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141535882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Extra-abdominal trocar and instrument detection for enhanced surgical workflow understanding.
IF 2.3 Tier 3 (Medicine) Q3 ENGINEERING, BIOMEDICAL Pub Date: 2024-10-01 Epub Date: 2024-07-15 DOI: 10.1007/s11548-024-03220-0
Franziska Jurosch, Lars Wagner, Alissa Jell, Esra Islertas, Dirk Wilhelm, Maximilian Berlet

Purpose: Video-based intra-abdominal instrument tracking for laparoscopic surgeries is a common research area. However, tracking is only possible for instruments that are actually visible in the laparoscopic image. By using extra-abdominal cameras to detect trocars and classify their occupancy state, additional information about the instrument location (whether an instrument is still inside the abdomen or not) can be obtained. This can enhance laparoscopic workflow understanding and enrich existing intra-abdominal solutions.

Methods: A data set of four laparoscopic surgeries recorded with two time-synchronized extra-abdominal 2D cameras was generated. The preprocessed and annotated data were used to train a deep learning-based network architecture consisting of a trocar detection, a centroid tracker and a temporal model to provide the occupancy state of all trocars during the surgery.

Results: The trocar detection model achieves an F1 score of 95.06 ± 0.88%. The prediction of the occupancy state yields an F1 score of 89.29 ± 5.29%, providing a first step towards enhanced surgical workflow understanding.

Conclusion: The current method shows promising results for the extra-abdominal tracking of trocars and their occupancy state. Future advancements include the enlargement of the data set and incorporation of intra-abdominal imaging to facilitate accurate assignment of instruments to trocars.
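The pipeline above couples a trocar detector with a centroid tracker before the temporal occupancy model. A minimal sketch of greedy nearest-centroid matching between consecutive frames (the distance threshold and greedy strategy are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def match_centroids(tracked, detected, max_dist=50.0):
    """Greedily pair previous trocar centroids with new detections by pixel distance."""
    tracked, detected = np.asarray(tracked, float), np.asarray(detected, float)
    dist = np.linalg.norm(tracked[:, None] - detected[None, :], axis=2)
    pairs, used_t, used_d = [], set(), set()
    # Visit candidate pairs from closest to farthest; each index is matched once.
    for cost, t, j in sorted((dist[t, j], t, j)
                             for t in range(len(tracked))
                             for j in range(len(detected))):
        if t not in used_t and j not in used_d and cost <= max_dist:
            pairs.append((t, j))
            used_t.add(t)
            used_d.add(j)
    return pairs

prev = [(100, 100), (300, 120)]   # trocar centroids in the previous frame
new = [(305, 118), (98, 103)]     # detections in the current frame
print(match_centroids(prev, new))  # [(0, 1), (1, 0)]
```

Unmatched detections would start new tracks and unmatched tracks would age out; those bookkeeping steps are omitted here.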

{"title":"Extra-abdominal trocar and instrument detection for enhanced surgical workflow understanding.","authors":"Franziska Jurosch, Lars Wagner, Alissa Jell, Esra Islertas, Dirk Wilhelm, Maximilian Berlet","doi":"10.1007/s11548-024-03220-0","DOIUrl":"10.1007/s11548-024-03220-0","url":null,"abstract":"<p><strong>Purpose: </strong>Video-based intra-abdominal instrument tracking for laparoscopic surgeries is a common research area. However, the tracking can only be done with instruments that are actually visible in the laparoscopic image. By using extra-abdominal cameras to detect trocars and classify their occupancy state, additional information about the instrument location, whether an instrument is still in the abdomen or not, can be obtained. This can enhance laparoscopic workflow understanding and enrich already existing intra-abdominal solutions.</p><p><strong>Methods: </strong>A data set of four laparoscopic surgeries recorded with two time-synchronized extra-abdominal 2D cameras was generated. The preprocessed and annotated data were used to train a deep learning-based network architecture consisting of a trocar detection, a centroid tracker and a temporal model to provide the occupancy state of all trocars during the surgery.</p><p><strong>Results: </strong>The trocar detection model achieves an F1 score of <math><mrow><mn>95.06</mn> <mo>±</mo> <mn>0.88</mn> <mo>%</mo></mrow> </math> . The prediction of the occupancy state yields an F1 score of <math><mrow><mn>89.29</mn> <mo>±</mo> <mn>5.29</mn> <mo>%</mo></mrow> </math> , providing a first step towards enhanced surgical workflow understanding.</p><p><strong>Conclusion: </strong>The current method shows promising results for the extra-abdominal tracking of trocars and their occupancy state. 
Future advancements include the enlargement of the data set and incorporation of intra-abdominal imaging to facilitate accurate assignment of instruments to trocars.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1939-1945"},"PeriodicalIF":2.3,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11442558/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141617575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving lung nodule segmentation in thoracic CT scans through the ensemble of 3D U-Net models.
IF 2.3 Tier 3 (Medicine) Q3 ENGINEERING, BIOMEDICAL Pub Date: 2024-10-01 Epub Date: 2024-07-23 DOI: 10.1007/s11548-024-03222-y
Himanshu Rikhari, Esha Baidya Kayal, Shuvadeep Ganguly, Archana Sasi, Swetambri Sharma, Ajith Antony, Krithika Rangarajan, Sameer Bakhshi, Devasenathipathy Kandasamy, Amit Mehndiratta

Purpose: The current study explores the application of 3D U-Net architectures combined with Inception and ResNet modules for precise lung nodule detection through deep learning-based segmentation technique. This investigation is motivated by the objective of developing a Computer-Aided Diagnosis (CAD) system for effective diagnosis and prognostication of lung nodules in clinical settings.

Methods: The proposed method trained four different 3D U-Net models on a retrospective dataset obtained from AIIMS Delhi. To augment the training dataset, affine transformations and intensity transforms were applied. Preprocessing steps included CT scan voxel resampling, intensity normalization, and lung parenchyma segmentation. Model optimization used a hybrid loss function that combined Dice Loss and Focal Loss. The performance of all four 3D U-Nets was evaluated patient-wise using the Dice coefficient and the Jaccard coefficient, then averaged to obtain the average volumetric Dice coefficient (DSCavg) and average Jaccard coefficient (IoUavg) on a test dataset comprising 53 CT scans. Additionally, an ensemble approach (Model-V) featuring 3D U-Net (Model-I), ResNet (Model-II), and Inception (Model-III) 3D U-Net architectures, combined with two distinct patch sizes, was investigated.

Results: The ensemble of models obtained the highest DSCavg of 0.84 ± 0.05 and IoUavg of 0.74 ± 0.06 on the test dataset, compared against individual models. It mitigated false positives, overestimations, and underestimations observed in individual U-Net models. Moreover, the ensemble of models reduced average false positives per scan in the test dataset (1.57 nodules/scan) compared to individual models (2.69-3.39 nodules/scan).

Conclusions: The suggested ensemble approach presents a strong and effective strategy for automatically detecting and delineating lung nodules, potentially aiding CAD systems in clinical settings. This approach could assist radiologists in laborious and meticulous lung nodule detection tasks in CT scans, improving lung cancer diagnosis and treatment planning.
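The methods name a hybrid of Dice Loss and Focal Loss for model optimization. A minimal NumPy sketch of such a hybrid objective (the equal weights and gamma = 2 are illustrative assumptions; the study's actual weighting is not stated in the abstract):

```python
import numpy as np

def dice_loss(p, y, eps=1e-7):
    """Soft Dice loss for predicted probabilities p and binary targets y."""
    inter = (p * y).sum()
    return float(1.0 - (2.0 * inter + eps) / (p.sum() + y.sum() + eps))

def focal_loss(p, y, gamma=2.0, eps=1e-7):
    """Binary focal loss: down-weights easy voxels by (1 - p_t)^gamma."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)
    return float((-((1.0 - pt) ** gamma) * np.log(pt)).mean())

def hybrid_loss(p, y, w_dice=0.5, w_focal=0.5):
    """Weighted combination of Dice and Focal losses."""
    return w_dice * dice_loss(p, y) + w_focal * focal_loss(p, y)

# Toy example: confident predictions on 4 voxels (2 foreground, 2 background).
p = np.array([0.9, 0.8, 0.2, 0.1])
y = np.array([1.0, 1.0, 0.0, 0.0])
print(round(hybrid_loss(p, y), 4))  # ≈ 0.0775
```

Combining the two terms pairs Dice's robustness to foreground/background imbalance with Focal's emphasis on hard voxels, which is a common choice for small structures like nodules.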

{"title":"Improving lung nodule segmentation in thoracic CT scans through the ensemble of 3D U-Net models.","authors":"Himanshu Rikhari, Esha Baidya Kayal, Shuvadeep Ganguly, Archana Sasi, Swetambri Sharma, Ajith Antony, Krithika Rangarajan, Sameer Bakhshi, Devasenathipathy Kandasamy, Amit Mehndiratta","doi":"10.1007/s11548-024-03222-y","DOIUrl":"10.1007/s11548-024-03222-y","url":null,"abstract":"<p><strong>Purpose: </strong>The current study explores the application of 3D U-Net architectures combined with Inception and ResNet modules for precise lung nodule detection through deep learning-based segmentation technique. This investigation is motivated by the objective of developing a Computer-Aided Diagnosis (CAD) system for effective diagnosis and prognostication of lung nodules in clinical settings.</p><p><strong>Methods: </strong>The proposed method trained four different 3D U-Net models on the retrospective dataset obtained from AIIMS Delhi. To augment the training dataset, affine transformations and intensity transforms were utilized. Preprocessing steps included CT scan voxel resampling, intensity normalization, and lung parenchyma segmentation. Model optimization utilized a hybrid loss function that combined Dice Loss and Focal Loss. The model performance of all four 3D U-Nets was evaluated patient-wise using dice coefficient and Jaccard coefficient, then averaged to obtain the average volumetric dice coefficient (DSC<sub>avg</sub>) and average Jaccard coefficient (IoU<sub>avg</sub>) on a test dataset comprising 53 CT scans. Additionally, an ensemble approach (Model-V) was utilized featuring 3D U-Net (Model-I), ResNet (Model-II), and Inception (Model-III) 3D U-Net architectures, combined with two distinct patch sizes for further investigation.</p><p><strong>Results: </strong>The ensemble of models obtained the highest DSC<sub>avg</sub> of 0.84 ± 0.05 and IoU<sub>avg</sub> of 0.74 ± 0.06 on the test dataset, compared against individual models. 
It mitigated false positives, overestimations, and underestimations observed in individual U-Net models. Moreover, the ensemble of models reduced average false positives per scan in the test dataset (1.57 nodules/scan) compared to individual models (2.69-3.39 nodules/scan).</p><p><strong>Conclusions: </strong>The suggested ensemble approach presents a strong and effective strategy for automatically detecting and delineating lung nodules, potentially aiding CAD systems in clinical settings. This approach could assist radiologists in laborious and meticulous lung nodule detection tasks in CT scans, improving lung cancer diagnosis and treatment planning.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2089-2099"},"PeriodicalIF":2.3,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141753311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0