
Latest publications from the International Journal of Computer Assisted Radiology and Surgery

Sparse keypoint segmentation of lung fissures: efficient geometric deep learning for abstracting volumetric images.
IF 2.3 · CAS Tier 3 (Medicine) · Q3 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-07 · DOI: 10.1007/s11548-024-03310-z
Paul Kaftan, Mattias P Heinrich, Lasse Hansen, Volker Rasche, Hans A Kestler, Alexander Bigalke

Purpose: Lung fissure segmentation on CT images often relies on 3D convolutional neural networks (CNNs). However, 3D-CNNs are inefficient for detecting thin structures like the fissures, which make up a tiny fraction of the entire image volume. We propose to make lung fissure segmentation more efficient by using geometric deep learning (GDL) on sparse point clouds.

Methods: We abstract image data with sparse keypoint (KP) clouds. We train GDL models to segment the point cloud, comparing three major paradigms of models (PointNets, graph convolutional networks (GCNs), and PointTransformers). From the sparse point segmentations, 3D meshes of the objects are reconstructed to obtain a dense surface. The state-of-the-art Poisson surface reconstruction (PSR) accounts for most of the runtime of our pipeline. Therefore, we propose an efficient point cloud to mesh autoencoder (PC-AE) that deforms a template mesh to fit a point cloud in a single forward pass. Our pipeline is evaluated extensively and compared to the 3D-CNN gold standard nnU-Net on diverse clinical and pathological data.
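
To make the point-cloud paradigm concrete, the following is a minimal sketch of a PointNet-style per-point segmentation network of the kind compared above; the layer sizes, the two-class (fissure vs. background) setup, and all names are illustrative assumptions, not the authors' implementation.

# Minimal PointNet-style per-point segmentation sketch (illustrative only;
# layer sizes and the two-class fissure/background setup are assumptions).
import torch
import torch.nn as nn

class PointSegNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Shared per-point MLP (applied to every keypoint independently).
        self.local_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        # Per-point classifier on [local feature, global max-pooled feature].
        self.head = nn.Sequential(
            nn.Conv1d(128 + 128, 128, 1), nn.ReLU(),
            nn.Conv1d(128, num_classes, 1),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (batch, 3, num_points) keypoint coordinates.
        local = self.local_mlp(xyz)                           # (B, 128, N)
        global_feat = local.max(dim=2, keepdim=True).values   # (B, 128, 1)
        global_feat = global_feat.expand(-1, -1, local.shape[2])
        return self.head(torch.cat([local, global_feat], dim=1))  # (B, C, N)

if __name__ == "__main__":
    model = PointSegNet()
    points = torch.randn(1, 3, 2048)   # one cloud of 2048 keypoints
    logits = model(points)             # per-point class logits
    print(logits.shape)                # torch.Size([1, 2, 2048])

A GCN or PointTransformer variant would replace the shared per-point MLP with neighborhood-aware message passing or attention over the keypoints.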

Results: GCNs yield the best trade-off between inference time and accuracy, being 21× faster with only 1.4× increased error over the nnU-Net. Our PC-AE also achieves a favorable trade-off, being 3× faster at 1.5× the error compared to the PSR.

Conclusion: We present a KP-based fissure segmentation pipeline that is more efficient than 3D-CNNs and can greatly speed up large-scale analyses. A novel PC-AE for efficient mesh reconstruction from sparse point clouds is introduced, showing promise beyond fissure segmentation alone. Source code is available at https://github.com/kaftanski/fissure-segmentation-IJCARS.

Citations: 0
3D CT to 2D X-ray image registration for improved visualization of tibial vessels in endovascular procedures.
IF 2.3 · CAS Tier 3 (Medicine) · Q3 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-05 · DOI: 10.1007/s11548-024-03302-z
Moujan Saderi, Jaykumar H Patel, Calder D Sheagren, Judit Csőre, Trisha L Roy, Graham A Wright

Purpose: During endovascular revascularization interventions for peripheral arterial disease, the standard modality of X-ray fluoroscopy (XRF) used for image guidance is limited in visualizing distal segments of infrapopliteal vessels. To enhance visualization of arteries, an image registration technique was developed to align pre-acquired computed tomography (CT) angiography images and to create fusion images highlighting arteries of interest.

Methods: X-ray image metadata capturing the position of the X-ray gantry initializes a multiscale iterative optimization process, which uses a local-variance masked normalized cross-correlation loss to rigidly align a digitally reconstructed radiograph (DRR) of the CT dataset with the target X-ray, using the edges of the fibula and tibia as the basis for alignment. A precomputed library of DRRs is used to improve run-time, and the six-degree-of-freedom optimization problem of rigid registration is divided into three smaller sub-problems to improve convergence. The method was tested on a dataset of paired cone-beam CT (CBCT) and XRF images of ex vivo limbs, and registration accuracy at the midline of the artery was evaluated.
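
As a concrete illustration of the similarity term, here is a generic masked normalized cross-correlation between a DRR and a target X-ray; it is a plain global NCC over a binary mask, not the authors' local-variance-masked formulation, and the images and mask below are synthetic.

# Generic masked normalized cross-correlation (NCC) between a DRR and a target
# X-ray; a sketch of the similarity term only, not the authors' exact
# local-variance masking or optimizer.
import numpy as np

def masked_ncc(drr: np.ndarray, xray: np.ndarray, mask: np.ndarray) -> float:
    """NCC computed only over pixels where mask is True (higher = more similar)."""
    a = drr[mask].astype(np.float64)
    b = xray[mask].astype(np.float64)
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.random((256, 256))
    rendering = target + 0.05 * rng.standard_normal((256, 256))
    bone_mask = np.zeros((256, 256), dtype=bool)
    bone_mask[64:192, 64:192] = True   # e.g. region around the tibia/fibula edges
    print(masked_ncc(rendering, target, bone_mask))   # close to 1.0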

Results: On a dataset of CBCTs from 4 different limbs and a total of 17 XRF images, successful registration was achieved in 13 cases, with the remainder suffering from input image quality issues. The method produced average misalignments of less than 1 mm in horizontal projection distance along the artery midline, with an average run-time of 16 s.

Conclusion: The sub-mm spatial accuracy of artery overlays is sufficient for the clinical use case of identifying guidewire deviations from the path of the artery, for early detection of guidewire-induced perforations. The semiautomatic workflow and average run-time of the algorithm make it feasible for integration into clinical workflows.

Citations: 0
Toward structured abdominal examination training using augmented reality.
IF 2.3 · CAS Tier 3 (Medicine) · Q3 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-04 · DOI: 10.1007/s11548-024-03311-y
Lovis Schwenderling, Laura Isabel Hanke, Undine Holst, Florentine Huettl, Fabian Joeres, Tobias Huber, Christian Hansen

Purpose: Structured abdominal examination is an essential part of the medical curriculum and surgical training, requiring a blend of theory and practice from trainees. Current training methods, however, often do not provide adequate engagement, fail to address individual learning needs or do not cover rare diseases.

Methods: In this work, an application for structured Abdominal Examination Training using Augmented Reality (AETAR) is presented. Required theoretical knowledge is displayed step by step via virtual indicators directly on the associated body regions. Exercises help trainees build up a routine for performing the examination. AETAR was evaluated in an exploratory user study with medical students (n=12) and teaching surgeons (n=2).

Results: Learning with AETAR was described as fun and beneficial. Usability (SUS=73) and rated suitability for teaching were promising. All students improved in a knowledge test and felt more confident with the abdominal examination. Shortcomings were identified in the area of interaction, especially in teaching examination-specific movements.
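
For context, a System Usability Scale value such as the SUS=73 reported above is conventionally computed from ten 1-5 Likert items as sketched below; the example responses are invented.

# Conventional SUS scoring from ten 1-5 Likert items: odd items contribute
# (score - 1), even items contribute (5 - score), and the sum is scaled by 2.5
# to a 0-100 range. Example responses are made up for illustration.
def sus_score(responses):
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 5, 2, 4, 3]))  # 75.0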

Conclusion: AETAR represents a first approach to structured abdominal examination training using augmented reality. The application demonstrates the potential to improve educational outcomes for medical students and provides an important foundation for future research and development in digital medical education.

Citations: 0
Robotic navigation with deep reinforcement learning in transthoracic echocardiography.
IF 2.3 · CAS Tier 3 (Medicine) · Q3 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-01 · Epub Date: 2024-09-20 · DOI: 10.1007/s11548-024-03275-z
Yuuki Shida, Souto Kumagai, Hiroyasu Iwata

Purpose: The search for heart components in robotic transthoracic echocardiography is a time-consuming process. This paper proposes an optimized robotic navigation system that uses deep reinforcement learning to achieve an efficient and effective search for heart components.

Method: The proposed method introduces (i) an optimized search behavior generation algorithm that avoids multiple local solutions and searches for the optimal solution and (ii) an optimized path generation algorithm that minimizes the search path, thereby realizing short search times.
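
The abstract does not specify the underlying reinforcement learning algorithm, so the following is only a generic tabular Q-learning sketch of a discretized probe-positioning search; the state/action discretization, reward, and goal pose are invented for illustration and are not the paper's method.

# Minimal tabular Q-learning sketch for a discretized probe-positioning search.
# Generic RL illustration only; the paper uses deep RL and its exact state,
# action, and reward definitions are not given in the abstract.
import numpy as np

n_positions, n_actions = 20, 4          # assumed discretization of probe poses/moves
goal = 13                               # hypothetical pose with the best mitral-valve view
q = np.zeros((n_positions, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.2
rng = np.random.default_rng(0)

def step(state, action):
    """Toy transition: actions 0/1 move -1/+1, 2/3 jump -3/+3, reward 1 at goal."""
    delta = {0: -1, 1: 1, 2: -3, 3: 3}[action]
    next_state = int(np.clip(state + delta, 0, n_positions - 1))
    reward = 1.0 if next_state == goal else -0.01   # small step cost favors short paths
    return next_state, reward, next_state == goal

for _ in range(2000):                   # training episodes
    s = int(rng.integers(n_positions))
    for _ in range(50):
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(q[s].argmax())
        s2, r, done = step(s, a)
        q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])
        s = s2
        if done:
            break

print(int(q[0].argmax()))               # preferred first move from position 0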

Results: The mitral valve search with the proposed method reaches the optimal solution with a probability of 74.4%, the average loss of mitral valve confidence when the search stops at a local solution is 16.3%, and the inspection time with the generated path is 48.6 s on average, which is 56.6% of the time cost of the conventional method.

Conclusion: The results indicate that the proposed method improves search efficiency and in many cases finds the optimal location, and that the loss of confidence in the mitral valve remains low even when a local rather than the optimal solution is reached. This suggests that the proposed method enables accurate and quick robotic navigation to find heart components.

Citations: 0
Beyond the visible: preliminary evaluation of the first wearable augmented reality assistance system for pancreatic surgery.
IF 2.3 · CAS Tier 3 (Medicine) · Q3 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-01 · Epub Date: 2024-06-07 · DOI: 10.1007/s11548-024-03131-0
Hamraz Javaheri, Omid Ghamarnejad, Ragnar Bade, Paul Lukowicz, Jakob Karolus, Gregor Alexander Stavrou

Purpose: The retroperitoneal nature of the pancreas, marked by minimal intraoperative organ shifts and deformations, makes augmented reality (AR)-based systems highly promising for pancreatic surgery. This study presents preliminary data from a prospective study aiming to develop the first wearable AR assistance system, ARAS, for pancreatic surgery and to evaluate its usability, accuracy, and effectiveness in enhancing the perioperative outcomes of patients.

Methods: We developed ARAS as a two-phase system for a wearable AR device to aid surgeons in planning and operation. This system was used to visualize and register patient-specific 3D anatomical models during the surgery. The location and precision of the registered 3D anatomy were evaluated by assessing the arterial pulse and employing Doppler and duplex ultrasonography. The usability, accuracy, and effectiveness of ARAS were assessed using a five-point Likert scale questionnaire.

Results: Perioperative outcomes of five patients who underwent various pancreatic resections with ARAS are presented. Surgeons rated ARAS as excellent for preoperative planning. All structures were accurately identified without any noteworthy errors. Only tumor identification decreased after the preparation phase, especially in patients who underwent pancreaticoduodenectomy because of the extensive mobilization of peripancreatic structures. No perioperative complications related to ARAS were observed.

Conclusions: ARAS shows promise in enhancing surgical precision during pancreatic procedures. Its efficacy in preoperative planning and intraoperative vascular identification positions it as a valuable tool for pancreatic surgery and a potential educational resource for future surgical residents.

Citations: 0
Global registration of kidneys in 3D ultrasound and CT images.
IF 2.3 · CAS Tier 3 (Medicine) · Q3 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-01 · Epub Date: 2024-09-06 · DOI: 10.1007/s11548-024-03255-3
William Ndzimbong, Nicolas Thome, Cyril Fourniol, Yvonne Keeza, Benoît Sauer, Jacques Marescaux, Daniel George, Alexandre Hostettler, Toby Collins

Purpose: Automatic registration between abdominal ultrasound (US) and computed tomography (CT) images is needed to enhance interventional guidance of renal procedures, but it remains an open research challenge. We propose a novel method that doesn't require an initial registration estimate (a global method) and also handles registration ambiguity caused by the organ's natural symmetry. Combined with a registration refinement algorithm, this method achieves robust and accurate kidney registration while avoiding manual initialization.

Methods: We propose solving global registration in a three-step approach: (1) Automatic anatomical landmark localization, where 2 deep neural networks (DNNs) localize a set of landmarks in each modality. (2) Registration hypothesis generation, where potential registrations are computed from the landmarks with a deterministic variant of RANSAC. Due to the kidney's strong bilateral symmetry, there are usually 2 compatible solutions. Finally, in Step (3), the correct solution is determined automatically, using a DNN classifier that resolves the geometric ambiguity. The registration may then be iteratively improved with a registration refinement method. Results are presented with a state-of-the-art surface-based refinement, Bayesian coherent point drift (BCPD).
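
As a sketch of the geometry behind landmark-based hypothesis generation, the snippet below recovers a rigid transform from a handful of corresponding 3D landmarks using the standard closed-form SVD (Kabsch) solution; the landmark values are synthetic and this is not the authors' code.

# Standard least-squares rigid alignment (Kabsch/SVD) of corresponding 3D
# landmarks, the basic ingredient behind landmark-based hypothesis generation.
import numpy as np

def rigid_from_landmarks(src: np.ndarray, dst: np.ndarray):
    """Return R (3x3) and t (3,) minimizing ||R @ src_i + t - dst_i||^2."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst_c - r @ src_c
    return r, t

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    us_landmarks = rng.random((6, 3)) * 100.0            # e.g. keypoints in US space (mm)
    true_r, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    true_r *= np.sign(np.linalg.det(true_r))             # make it a proper rotation
    ct_landmarks = us_landmarks @ true_r.T + np.array([10.0, -5.0, 2.0])
    r, t = rigid_from_landmarks(us_landmarks, ct_landmarks)
    print(np.abs(r @ us_landmarks.T + t[:, None] - ct_landmarks.T).max())  # ~0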

Results: This automatic global registration approach gives better results than various competitive state-of-the-art methods, which, additionally, require organ segmentation. The results obtained on 59 pairs of 3D US/CT kidney images show that the proposed method, combined with BCPD refinement, achieves a target registration error (TRE) of an internal kidney landmark (the renal pelvis) of 5.78 mm and an average nearest neighbor surface distance (nndist) of 2.42 mm.

Conclusion: This work presents the first approach for automatic kidney registration in US and CT images, which doesn't require an initial manual registration estimate to be known a priori. The results show that a fully automatic registration approach with performance comparable to manual methods is feasible.

Citations: 0
Robust prostate disease classification using transformers with discrete representations.
IF 2.3 · CAS Tier 3 (Medicine) · Q3 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-01 · Epub Date: 2024-05-13 · DOI: 10.1007/s11548-024-03153-8
Ainkaran Santhirasekaram, Mathias Winkler, Andrea Rockall, Ben Glocker

Purpose: Automated prostate disease classification on multi-parametric MRI has recently shown promising results with the use of convolutional neural networks (CNNs). The vision transformer (ViT) is a convolution-free architecture which only exploits the self-attention mechanism and has surpassed CNNs in some natural imaging classification tasks. However, these models are not very robust to textural shifts in the input space. In MRI, we often have to deal with textural shift arising from varying acquisition protocols. Here, we focus on the ability of models to generalise well to new magnet strengths for MRI.

Method: We propose a new framework to improve the robustness of vision transformer-based models for disease classification by constructing discrete representations of the data using vector quantisation. We sample a subset of the discrete representations to form the input into a transformer-based model. We use cross-attention in our transformer model to combine the discrete representations of T2-weighted and apparent diffusion coefficient (ADC) images.
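
The core of the discrete-representation step is vector quantisation: each continuous feature vector is replaced by its nearest entry in a learned codebook. The sketch below shows that lookup only; the codebook size and feature dimension are assumptions, and the paper's full model adds the transformer and cross-attention on top.

# Core of vector quantisation: map each continuous feature vector to its nearest
# codebook entry, yielding a discrete representation. Codebook size and feature
# dimension here are illustrative assumptions, not the paper's values.
import torch

def vector_quantise(features: torch.Tensor, codebook: torch.Tensor):
    """features: (N, D) continuous vectors; codebook: (K, D) learned entries.
    Returns (indices (N,), quantised (N, D))."""
    # Squared Euclidean distance between every feature and every codebook entry.
    d = (features.pow(2).sum(1, keepdim=True)
         - 2 * features @ codebook.t()
         + codebook.pow(2).sum(1))
    indices = d.argmin(dim=1)
    return indices, codebook[indices]

if __name__ == "__main__":
    torch.manual_seed(0)
    feats = torch.randn(16, 64)        # e.g. patch features from T2w/ADC encoders
    book = torch.randn(512, 64)        # assumed codebook of 512 entries
    idx, quantised = vector_quantise(feats, book)
    print(idx.shape, quantised.shape)  # torch.Size([16]) torch.Size([16, 64])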

Results: We analyse the robustness of our model by training on a 1.5 T scanner and testing on a 3 T scanner, and vice versa. Our approach achieves SOTA performance for classification of lesions on prostate MRI and outperforms various other CNN and transformer-based models in terms of robustness to domain shift and perturbations in the input space.

Conclusion: We develop a method to improve the robustness of transformer-based disease classification of prostate lesions on MRI using discrete representations of the T2-weighted and ADC images.

Citations: 0
6G in medical robotics: development of network allocation strategies for a telerobotic examination system.
IF 2.3 · CAS Tier 3 (Medicine) · Q3 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-01 · Epub Date: 2024-09-09 · DOI: 10.1007/s11548-024-03260-6
Sven Kolb, Andrew Madden, Nicolai Kröger, Fidan Mehmeti, Franziska Jurosch, Lukas Bernhard, Wolfgang Kellerer, Dirk Wilhelm

Purpose: Healthcare systems around the world are increasingly facing severe challenges due to problems such as staff shortages, changing demographics and the reliance on an often strongly human-dependent environment. One approach aiming to address these issues is the development of new telemedicine applications. The currently researched network standard 6G promises to deliver many new features which could help leverage the full potential of emerging telemedical solutions and overcome the limitations of current network standards.

Methods: We developed a telerobotic examination system with a distributed robot control infrastructure to investigate the benefits and challenges of distributed computing scenarios, such as fog computing, in medical applications. We investigate different software configurations for which we characterize the network traffic and computational loads and subsequently establish network allocation strategies for different types of modular application functions (MAFs).
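
Purely as a hypothetical illustration of what a rule-based allocation strategy for MAFs could look like, the snippet below maps each function to an execution tier from its latency and compute demands; the MAF names, thresholds, and tiers are invented and are not the strategies derived in the paper, which are based on measured traffic and load profiles.

# Hypothetical illustration of a rule-based allocation of modular application
# functions (MAFs) to execution tiers based on latency and compute demands.
# All names, thresholds, and tiers are invented for illustration.
from dataclasses import dataclass

@dataclass
class MAF:
    name: str
    max_latency_ms: float     # tightest round-trip latency the function tolerates
    compute_load: float       # relative CPU/GPU demand (arbitrary units)

def allocate(maf: MAF) -> str:
    if maf.max_latency_ms < 10:
        return "edge"                      # hard real-time: co-locate with the robot
    if maf.compute_load > 5 and maf.max_latency_ms >= 50:
        return "cloud"                     # heavy but latency-tolerant processing
    return "fog"                           # everything in between

if __name__ == "__main__":
    for maf in [MAF("motion_control", 5, 1.0),
                MAF("ultrasound_streaming", 30, 2.0),
                MAF("image_reconstruction", 200, 8.0)]:
        print(f"{maf.name} -> {allocate(maf)}")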

Results: The results indicate a high variability in the usage profiles of these MAFs, both in terms of computational load and networking behavior, which in turn allows the development of allocation strategies for different types of MAFs according to their requirements. Furthermore, the results provide a strong basis for further exploration of distributed computing scenarios in medical robotics.

Conclusion: This work lays the foundation for the development of medical robotic applications using 6G network architectures and distributed computing scenarios, such as fog computing. In the future, we plan to investigate the capability to dynamically shift MAFs within the network based on current situational demand, which could help to further optimize the performance of network-based medical applications and play a role in addressing the increasingly critical challenges in healthcare.

Citations: 0
Normscan: open-source Python software to create average models from CT scans.
IF 2.3 · CAS Tier 3 (Medicine) · Q3 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-01 · Epub Date: 2024-05-20 · DOI: 10.1007/s11548-024-03185-0
George R Nahass, Mitchell A Marques, Naji Bou Zeid, Linping Zhao, Lee W T Alkureishi

Purpose: Age-matched average 3D models facilitate both surgical planning and intraoperative guidance of cranial birth defects such as craniosynostosis. We aimed to develop an algorithm that accepts any number of CT scans as input and generates highly accurate, average models with minimal user input that are ready for 3D printing and clinical use.

Methods: Using a compiled database of 'normal' pediatric computed tomography (CT) scans, we report Normscan, an open-source platform built in Python that allows users to generate normative models of CT scans through user-defined landmarks. We use the basion, nasion, and left and right porions as anatomical landmarks for initial correspondence and then register the models using the iterative closest points algorithm before downstream averaging.
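
To illustrate the registration step mentioned above, here is a minimal iterative closest point (ICP) loop that alternates nearest-neighbour matching with a closed-form rigid solve; the synthetic point sets and parameters are illustrative and this is not the Normscan source code.

# Minimal iterative closest point (ICP) sketch of the registration step that
# follows landmark-based initial correspondence: match nearest neighbours,
# solve the rigid transform in closed form, repeat.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """Closed-form rigid transform (R, t) aligning paired points src -> dst."""
    sc, dc = src.mean(0), dst.mean(0)
    u, _, vt = np.linalg.svd((src - sc).T @ (dst - dc))
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, dc - r @ sc

def icp(moving, fixed, iterations=20):
    tree = cKDTree(fixed)
    current = moving.copy()
    for _ in range(iterations):
        _, nn = tree.query(current)            # nearest fixed vertex for each point
        r, t = best_rigid(current, fixed[nn])
        current = current @ r.T + t
    return current

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fixed = rng.random((300, 3)) * 100.0            # e.g. vertices of a reference skull
    moving = fixed + np.array([2.0, -1.0, 0.5])     # same surface, slightly offset
    aligned = icp(moving, fixed)
    print(np.abs(aligned - fixed).max())            # residual close to zero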

Results: Normscan is fast and easy to use via our user interface and also creates highly accurate average models of any number of input models. Additionally, it is highly repeatable, with coefficients of variation for the surface area and volume of the average model being less than 3% across ten independent trials. Average models can then be 3D printed and/or visualized in augmented reality.

Conclusions: Normscan provides an end-to-end pipeline for the creation of average models of skulls. These models can be used for the generation of databases of specific demographic anatomical models as well as for intraoperative guidance and surgical planning. While Normscan was designed for craniosynostosis repair, due to the modular nature of the algorithm, Normscan has many applications in other areas of surgical planning and research.

Citations: 0
Automated segmentation and deep learning classification of ductopenic parotid salivary glands in sialo cone-beam CT images.
IF 2.3 · CAS Tier 3 (Medicine) · Q3 ENGINEERING, BIOMEDICAL · Pub Date: 2025-01-01 · Epub Date: 2024-07-31 · DOI: 10.1007/s11548-024-03240-w
Elia Halle, Tevel Amiel, Doron J Aframian, Tal Malik, Avital Rozenthal, Oren Shauly, Leo Joskowicz, Chen Nadler, Talia Yeshua

Purpose: This study addressed the challenge of detecting and classifying the severity of ductopenia in parotid glands, a structural abnormality characterized by a reduced number of salivary ducts, previously shown to be associated with salivary gland impairment. The aim of the study was to develop an automatic algorithm designed to improve diagnostic accuracy and efficiency in analyzing ductopenic parotid glands using sialo cone-beam CT (sialo-CBCT) images.

Methods: We developed an end-to-end automatic pipeline consisting of three main steps: (1) region of interest (ROI) computation, (2) parotid gland segmentation using the Frangi filter, and (3) ductopenia case classification with a residual neural network (RNN) augmented by multidirectional maximum intensity projection (MIP) images. To explore the impact of the first two steps, the RNN was trained on three datasets: (1) original MIP images, (2) MIP images with predefined ROIs, and (3) MIP images after segmentation.
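
Two of the named building blocks, maximum intensity projection (MIP) and the Frangi filter, can be illustrated with scikit-image as below; the toy volume and filter parameters are assumptions, and the actual pipeline applies these steps to sialo-CBCT data rather than this synthetic example.

# Sketch of two preprocessing ingredients named in the pipeline: a maximum
# intensity projection (MIP) along one axis and the Frangi filter, which
# enhances elongated bright (tubular) structures such as ducts.
import numpy as np
from skimage.filters import frangi

# Synthetic volume with one bright slab that projects to a line in the MIP
# (a stand-in for a duct).
volume = np.zeros((64, 64, 64), dtype=np.float32)
volume[:, 30:33, 10:54] = 1.0

# Maximum intensity projection along the first axis (one of several directions
# a multidirectional MIP would use).
mip = volume.max(axis=0)                                  # (64, 64) projection image

# Frangi response highlights the elongated bright structure in the projection.
duct_response = frangi(mip, sigmas=(1, 2, 3), black_ridges=False)
print(mip.shape, duct_response.shape, float(duct_response.max()))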

Results: Evaluation was conducted on 126 parotid sialo-CBCT scans of normal, moderate, and severe ductopenic cases, yielding a high performance of 100% for the ROI computation and 89% for the gland segmentation. Improvements in accuracy and F1 score were noted among the original MIP images (accuracy: 0.73, F1 score: 0.53), ROI-predefined images (accuracy: 0.78, F1 score: 0.56), and segmented images (accuracy: 0.95, F1 score: 0.90). Notably, ductopenic detection sensitivity was 0.99 in the segmented dataset, highlighting the capabilities of the algorithm in detecting ductopenic cases.

Conclusions: Our method, which combines classical image processing and deep learning techniques, offers a promising solution for automatic detection of parotid glands ductopenia in sialo-CBCT scans. This may be used for further research aimed at understanding the role of presence and severity of ductopenia in salivary gland dysfunction.

Citations: 0