Home > Latest Publications

Journal of Medical Imaging: Latest Publications

HarmonyTM: multi-center data harmonization applied to distributed learning for Parkinson's disease classification.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-09-01 Epub Date: 2024-09-20 DOI: 10.1117/1.JMI.11.5.054502
Raissa Souza, Emma A M Stanley, Vedant Gulve, Jasmine Moore, Chris Kang, Richard Camicioli, Oury Monchi, Zahinoor Ismail, Matthias Wilms, Nils D Forkert

Purpose: Distributed learning is widely used to comply with data-sharing regulations and access diverse datasets for training machine learning (ML) models. The traveling model (TM) is a distributed learning approach that sequentially trains with data from one center at a time, which is especially advantageous when dealing with limited local datasets. However, a critical concern emerges when centers utilize different scanners for data acquisition, which could potentially lead models to exploit these differences as shortcuts. Although data harmonization can mitigate this issue, current methods typically rely on large or paired datasets, which can be impractical to obtain in distributed setups.

Approach: We introduced HarmonyTM, a data harmonization method tailored for the TM. HarmonyTM effectively mitigates bias in the model's feature representation while retaining crucial disease-related information, all without requiring extensive datasets. Specifically, we employed adversarial training to "unlearn" bias from the features used in the model for classifying Parkinson's disease (PD). We evaluated HarmonyTM on multi-center three-dimensional (3D) neuroimaging datasets from 83 centers acquired with 23 different scanners.
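The adversarial "unlearning" step can be sketched with a toy gradient-reversal setup in plain NumPy. The two-feature data, scalar heads, and hyperparameters below are illustrative assumptions, not the paper's 3D architecture: feature dimension 0 carries the disease signal and dimension 1 carries a scanner-specific offset (the shortcut to be removed).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the TM setting: dim 0 encodes disease, dim 1 encodes
# the acquisition scanner (the "shortcut" feature to be unlearned).
n = 400
disease = rng.integers(0, 2, n)                 # PD label (0/1)
scanner = rng.integers(0, 2, n)                 # acquisition site (0/1)
x = np.stack([(2 * disease - 1) + 0.3 * rng.standard_normal(n),
              (2 * scanner - 1) + 0.3 * rng.standard_normal(n)], axis=1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Shared linear encoder with a task head and an adversarial scanner head.
w_enc = np.array([0.1, 0.1])
w_task, w_adv = 0.1, 0.1
lam, lr = 1.5, 0.2                              # reversal strength, step size

for _ in range(500):
    h = x @ w_enc                               # encoder output, shape (n,)
    p_task = sigmoid(w_task * h)
    p_adv = sigmoid(w_adv * h)
    # Both heads descend their own binary cross-entropy.
    w_task -= lr * np.mean((p_task - disease) * h)
    w_adv -= lr * np.mean((p_adv - scanner) * h)
    # Gradient reversal: the encoder descends the task loss but
    # ASCENDS the adversary's loss, squeezing scanner info out of h.
    g_h = (p_task - disease) * w_task - lam * (p_adv - scanner) * w_adv
    w_enc -= lr * (g_h[:, None] * x).mean(axis=0)

acc_task = np.mean((sigmoid(w_task * (x @ w_enc)) > 0.5) == disease)
acc_adv = np.mean((sigmoid(w_adv * (x @ w_enc)) > 0.5) == scanner)
```

After training, the encoder weight on the disease dimension dominates while the scanner dimension is driven toward zero, so the task head stays accurate and the scanner head hovers near chance, mirroring the accuracy shifts reported in the Results.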

Results: Our results show that HarmonyTM improved PD classification accuracy from 72% to 76% and reduced (unwanted) scanner classification accuracy from 53% to 30% in the TM setup.

Conclusion: HarmonyTM is a method tailored for harmonizing 3D neuroimaging data within the TM approach, aiming to minimize shortcut learning in distributed setups. This prevents the disease classifier from leveraging scanner-specific details to classify patients with or without PD, a key aspect for deploying ML models in clinical applications.

Citations: 0
Expanding generalized contrast-to-noise ratio into a clinically relevant measure of lesion detectability by considering size and spatial resolution.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-09-01 Epub Date: 2024-10-23 DOI: 10.1117/1.JMI.11.5.057001
Siegfried Schlunk, Brett Byram

Purpose: Early image quality metrics were often designed with clinicians in mind, and ideal metrics would correlate with the subjective opinion of practitioners. Over time, adaptive beamformers and other post-processing methods have become more common, and these newer methods often violate assumptions of earlier image quality metrics, invalidating the meaning of those metrics. The result is that beamformers may "manipulate" metrics without producing more clinical information.

Approach: In this work, Smith et al.'s signal-to-noise ratio (SNR) metric for lesion detectability is considered, and a more robust version, here called generalized SNR (gSNR), is proposed that uses generalized contrast-to-noise ratio (gCNR) as a core. It is analytically shown that for Rayleigh distributed data, gCNR is a function of Smith et al.'s C_ψ (and therefore can be used as a substitution). More robust methods for estimating the resolution cell size are considered. Simulated lesions are included to verify the equations and demonstrate behavior, and the approach is shown to apply equally well to in vivo data.
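Since gCNR is the core of the proposed gSNR, a minimal implementation may help: gCNR is one minus the overlap of the amplitude histograms inside and outside the lesion. The Rayleigh-distributed samples below mimic speckle around an anechoic lesion; the bin count and scale parameters are illustrative choices.

```python
import numpy as np

def gcnr(inside, outside, bins=256):
    """Generalized CNR: 1 minus the overlap of the amplitude
    histograms of the two regions (0 = indistinguishable, 1 = fully
    separable)."""
    lo = min(inside.min(), outside.min())
    hi = max(inside.max(), outside.max())
    edges = np.linspace(lo, hi, bins + 1)       # shared bin edges
    p, _ = np.histogram(inside, bins=edges)
    q, _ = np.histogram(outside, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    return 1.0 - np.minimum(p, q).sum()

rng = np.random.default_rng(1)
background = rng.rayleigh(scale=1.0, size=100_000)   # speckle background
anechoic = rng.rayleigh(scale=0.3, size=100_000)     # darker lesion
same = rng.rayleigh(scale=1.0, size=100_000)         # identical statistics

g_lesion = gcnr(background, anechoic)   # clearly detectable lesion
g_none = gcnr(background, same)         # near zero: no real contrast
```

Because gCNR depends only on histogram overlap, a monotonic transformation of the image (such as squaring the DAS output) leaves it essentially unchanged, which is the robustness property the abstract exploits.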

Results: gSNR is shown to be equivalent to SNR for delay-and-sum (DAS) beamformed data, as intended. However, it is shown to be more robust against transformations and report lesion detectability more accurately for non-Rayleigh distributed data. In the simulation included, the SNR of DAS was 4.4 ± 0.8, and minimum variance (MV) was 6.4 ± 1.9, but the gSNR of DAS was 4.5 ± 0.9, and MV was 3.0 ± 0.9, which agrees with the subjective assessment of the image. Likewise, the DAS² transformation (which is clinically identical to DAS) had an incorrect SNR of 9.4 ± 1.0 and a correct gSNR of 4.4 ± 0.9. Similar results are shown in vivo.

Conclusions: Using gCNR as a component to estimate gSNR creates a robust measure of lesion detectability. Like SNR, gSNR can be compared with the Rose criterion and may better correlate with clinical assessments of image quality for modern beamformers.

Citations: 0
Automated echocardiography view classification and quality assessment with recognition of unknown views.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-09-01 Epub Date: 2024-08-30 DOI: 10.1117/1.JMI.11.5.054002
Gino E Jansen, Bob D de Vos, Mitchel A Molenaar, Mark J Schuuring, Berto J Bouma, Ivana Išgum

Purpose: Interpreting echocardiographic exams requires substantial manual interaction as videos lack scan-plane information and have inconsistent image quality, ranging from clinically relevant to unrecognizable. Thus, a manual prerequisite step for analysis is to select the appropriate views that showcase both the target anatomy and optimal image quality. To automate this selection process, we present a method for automatic classification of routine views, recognition of unknown views, and quality assessment of detected views.

Approach: We train a neural network for view classification and employ the logit activations from the neural network for unknown view recognition. Subsequently, we train a linear regression algorithm that uses feature embeddings from the neural network to predict view quality scores. We evaluate the method on a clinical test set of 2466 echocardiography videos with expert-annotated view labels and a subset of 438 videos with expert-rated view quality scores. A second observer annotated a subset of 894 videos, including all quality-rated videos.
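One simple way to use logit activations for unknown-view recognition is to reject a clip whenever its maximum logit falls below a threshold; the sketch below shows that mechanism with made-up logits and threshold (the paper's exact rejection rule and values are not specified here).

```python
import numpy as np

def classify_with_rejection(logits, threshold):
    """Predict the view class per clip, or -1 ("unknown view") when
    the maximum logit activation falls below the rejection threshold."""
    pred = logits.argmax(axis=1)
    known = logits.max(axis=1) >= threshold
    return np.where(known, pred, -1)

# Hypothetical logits for 3 clips over 4 routine views.
logits = np.array([
    [8.2, 1.0, 0.3, -2.0],   # confidently view 0
    [1.1, 0.9, 1.0, 0.8],    # no view dominates -> likely unknown
    [0.2, 6.5, 0.1, 0.0],    # confidently view 1
])
preds = classify_with_rejection(logits, threshold=3.0)  # [0, -1, 1]
```

Raw logits (rather than softmax probabilities) are a natural signal here because softmax normalization can make an out-of-distribution clip look confidently in-distribution.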

Results: The proposed method achieved an accuracy of 84.9% ± 0.67 for the joint objective of routine view classification and unknown view recognition, whereas a second observer reached an accuracy of 87.6%. For view quality assessment, the method achieved a Spearman's rank correlation coefficient of 0.71, whereas a second observer reached a correlation coefficient of 0.62.

Conclusion: The proposed method approaches expert-level performance, enabling fully automatic selection of the most appropriate views for manual or automatic downstream analysis.

Citations: 0
Deep network and multi-atlas segmentation fusion for delineation of thigh muscle groups in three-dimensional water-fat separated MRI.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-09-01 Epub Date: 2024-09-03 DOI: 10.1117/1.JMI.11.5.054003
Nagasoujanya V Annasamudram, Azubuike M Okorie, Richard G Spencer, Rita R Kalyani, Qi Yang, Bennett A Landman, Luigi Ferrucci, Sokratis Makrogiannis

Purpose: Segmentation is essential for tissue quantification and characterization in studies of aging and age-related and metabolic diseases and the development of imaging biomarkers. We propose a multi-method and multi-atlas methodology for automated segmentation of functional muscle groups in three-dimensional (3D) thigh magnetic resonance images. These groups lie anatomically adjacent to each other, rendering their manual delineation a challenging and time-consuming task.

Approach: We introduce a framework for automated segmentation of the four main functional muscle groups of the thigh, gracilis, hamstring, quadriceps femoris, and sartorius, using chemical shift encoded water-fat magnetic resonance imaging (CSE-MRI). We propose fusing anatomical mappings from multiple deformable models with 3D deep learning model-based segmentation. This approach leverages the generalizability of multi-atlas segmentation (MAS) and accuracy of deep networks, hence enabling accurate assessment of volume and fat content of muscle groups.
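A standard baseline for fusing several candidate segmentations, such as those produced by multiple registered atlases, is a per-voxel majority vote. The sketch below shows only that baseline on tiny 2x3 label maps; the paper's actual fusion of MAS mappings with deep-network decisions is more elaborate.

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse candidate segmentations by per-voxel majority vote.
    label_maps: sequence of integer label arrays with identical shape."""
    stacked = np.asarray(label_maps)
    n_labels = stacked.max() + 1
    # Count votes per label at every voxel, then take the winning label.
    votes = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Three candidate 2x3 segmentations (0 = background, 1 = muscle).
a = np.array([[0, 1, 1], [0, 1, 0]])
b = np.array([[0, 1, 0], [0, 1, 0]])
c = np.array([[1, 1, 1], [0, 0, 0]])
fused = majority_vote([a, b, c])   # [[0, 1, 1], [0, 1, 0]]
```

Majority voting treats every atlas equally; weighting votes by registration quality, or by a deep network's per-voxel confidence as in the proposed framework, is the natural refinement.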

Results: For segmentation performance evaluation, we calculated the Dice similarity coefficient (DSC) and Hausdorff distance 95th percentile (HD-95). We evaluated the proposed framework, its variants, and baseline methods on 15 healthy subjects by threefold cross-validation and tested on four patients. Fusion of multiple atlases, deformable registration models, and deep learning segmentation produced the top performance with an average DSC of 0.859 and HD-95 of 8.34 over all muscles.
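The DSC reported above has a compact definition: twice the overlap of the two masks divided by the sum of their sizes. A minimal implementation on toy binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1 = perfect
    overlap, 0 = disjoint)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0   # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
# |A ∩ B| = 2, |A| = 3, |B| = 3  ->  DSC = 4/6
d_val = dice(pred, truth)
```

HD-95, the other metric used, is complementary: it measures boundary distance (the 95th percentile of surface-to-surface distances) rather than volumetric overlap, so the two together catch both missing volume and stray outliers.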

Conclusions: Fusion of multiple anatomical mappings from multiple MAS techniques enriches the template set and improves the segmentation accuracy. Additional fusion with deep network decisions applied to the subject space offers complementary information. The proposed approach can produce accurate segmentation of individual muscle groups in 3D thigh MRI scans.

Citations: 0
Radiomics and quantitative multi-parametric MRI for predicting uterine fibroid growth.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-09-01 Epub Date: 2024-09-12 DOI: 10.1117/1.JMI.11.5.054501
Karen Drukker, Milica Medved, Carla B Harmath, Maryellen L Giger, Obianuju S Madueke-Laveaux

Significance: Uterine fibroids (UFs) can pose a serious health risk to women. UFs are benign tumors that vary in clinical presentation from asymptomatic to causing debilitating symptoms. UF management is limited by our inability to predict UF growth rate and future morbidity.

Aim: We aim to develop a predictive model to identify UFs with increased growth rates and possible resultant morbidity.

Approach: We retrospectively analyzed 44 expertly outlined UFs from 20 patients who underwent two multi-parametric MR imaging exams as part of a prospective study over an average of 16 months. We identified 44 initial features by extracting quantitative magnetic resonance imaging (MRI) features plus morphological and textural radiomics features from DCE, T2, and apparent diffusion coefficient sequences. Principal component analysis reduced dimensionality, with the smallest number of components explaining over 97.5% of the variance selected. Employing a leave-one-fibroid-out scheme, a linear discriminant analysis classifier utilized these components to output a growth risk score.
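The component-selection rule, keeping the smallest number of principal components whose cumulative explained variance reaches 97.5%, can be sketched via the SVD. The synthetic low-rank "fibroid feature" matrix below is illustrative, not the study's data.

```python
import numpy as np

def pca_components_for_variance(X, target=0.975):
    """Smallest number of principal components whose cumulative
    explained-variance ratio reaches `target`."""
    Xc = X - X.mean(axis=0)                      # center features
    s = np.linalg.svd(Xc, compute_uv=False)      # singular values
    ratios = s**2 / np.sum(s**2)                 # per-component variance share
    cumulative = np.cumsum(ratios)
    k = int(np.searchsorted(cumulative, target) + 1)
    return k, cumulative

rng = np.random.default_rng(2)
# 44 synthetic "fibroids" x 44 features with a dominant rank-3 structure
# plus a small amount of noise.
latent = rng.standard_normal((44, 3))
X = latent @ rng.standard_normal((3, 44)) + 0.01 * rng.standard_normal((44, 44))
k, cum = pca_components_for_variance(X)          # k recovers the latent rank
```

With 44 features from only 44 fibroids, this kind of dimensionality reduction is what makes the downstream linear discriminant analysis feasible without overfitting.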

Results: The classifier incorporated the first three principal components and achieved an area under the receiver operating characteristic curve of 0.80 (95% confidence interval [0.69; 0.91]), effectively distinguishing UFs growing faster than the median growth rate of 0.93 cm³/year/fibroid from slower-growing ones within the cohort. Time-to-event analysis, dividing the cohort based on the median growth risk score, yielded a hazard ratio of 0.33 [0.15; 0.76], demonstrating potential clinical utility.

Conclusion: We developed a promising predictive model utilizing quantitative MRI features and principal component analysis to identify UFs with increased growth rates. Furthermore, the model's discrimination ability supports its potential clinical utility in developing tailored patient and fibroid-specific management once validated on a larger cohort.

Citations: 0
Photon-counting computed tomography versus energy-integrating computed tomography for detection of small liver lesions: comparison using a virtual imaging framework.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-09-01 Epub Date: 2024-10-17 DOI: 10.1117/1.JMI.11.5.053502
Nicholas Felice, Benjamin Wildman-Tobriner, William Paul Segars, Mustafa R Bashir, Daniele Marin, Ehsan Samei, Ehsan Abadi

Purpose: Photon-counting computed tomography (PCCT) has the potential to provide superior image quality to energy-integrating CT (EICT). We objectively compare PCCT to EICT for liver lesion detection.

Approach: Fifty anthropomorphic, computational phantoms with inserted liver lesions were generated. Contrast-enhanced scans of each phantom were simulated at the portal venous phase. The acquisitions were done using DukeSim, a validated CT simulation platform. Scans were simulated at two dose levels (CTDIvol 1.5 to 6.0 mGy) modeling PCCT (NAEOTOM Alpha, Siemens, Erlangen, Germany) and EICT (SOMATOM Flash, Siemens). Images were reconstructed with varying levels of kernel sharpness (soft, medium, sharp). To provide a quantitative estimate of image quality, the modulation transfer function (MTF), the frequency at 50% of the MTF (f50), noise magnitude, contrast-to-noise ratio (CNR, per lesion), and detectability index (d′, per lesion) were measured.
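As a concrete example of the per-lesion CNR, one common convention divides the absolute mean difference between lesion and background by their pooled standard deviation; the convention and the toy attenuation values below are assumptions, and the paper's d′ is a separate task-based index not reproduced here.

```python
import numpy as np

def cnr(lesion, background):
    """Contrast-to-noise ratio (one common convention): absolute mean
    difference over the pooled standard deviation of the two regions."""
    contrast = abs(lesion.mean() - background.mean())
    noise = np.sqrt((lesion.var() + background.var()) / 2.0)
    return contrast / noise

rng = np.random.default_rng(3)
bg = rng.normal(100.0, 10.0, 10_000)       # toy liver-parenchyma values
lesion = rng.normal(80.0, 10.0, 10_000)    # hypodense lesion, same noise

value = cnr(lesion, bg)                    # ~2.0 for these parameters
```

This captures why quieter reconstructions (soft kernels, higher dose, PCCT's noise advantage) raise lesion CNR: the contrast numerator is fixed by the anatomy while the noise denominator shrinks.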

Results: Across all studied conditions, the best detection performance, measured by d′, was for PCCT images with the highest dose level and softest kernel. With soft kernel reconstruction, PCCT demonstrated improved lesion CNR and d′ compared with EICT, with a mean increase in CNR of 35.0% (p < 0.001) and 21% (p < 0.001) and a mean increase in d′ of 41.0% (p < 0.001) and 23.3% (p = 0.007) for the 1.5 and 6.0 mGy acquisitions, respectively. The improvements were greatest for larger phantoms, low-contrast lesions, and low-dose scans.

Conclusions: PCCT demonstrated objective improvement in liver lesion detection and image quality metrics compared with EICT. These advances may lead to earlier and more accurate liver lesion detection, thus improving patient care.

"Photon-counting computed tomography versus energy-integrating computed tomography for detection of small liver lesions: comparison using a virtual imaging framework." Nicholas Felice, Benjamin Wildman-Tobriner, William Paul Segars, Mustafa R Bashir, Daniele Marin, Ehsan Samei, Ehsan Abadi. Journal of Medical Imaging 11(5), 053502. DOI: 10.1117/1.JMI.11.5.053502

Purpose: Photon-counting computed tomography (PCCT) has the potential to provide superior image quality to energy-integrating CT (EICT). We objectively compare PCCT to EICT for liver lesion detection.
ChatGP-Me?
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-09-01 Epub Date: 2024-10-28 DOI: 10.1117/1.JMI.11.5.050101
Elias Levy, Bennett Landman

The editorial evaluates how the GenAI technologies available in 2024 (without specific coding) could impact scientific processes, exploring two AI tools to demonstrate what happens when custom LLMs are used in five research lab workflows.

Exploring single-shot propagation and speckle based phase recovery techniques for object thickness estimation by using a polychromatic X-ray laboratory source.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-07-01 Epub Date: 2024-07-25 DOI: 10.1117/1.JMI.11.4.043501
Diego Rosich, Margarita Chevalier, Adrián Belarra, Tatiana Alieva

Purpose: Propagation and speckle-based techniques allow reconstruction of the phase of an X-ray beam with a simple experimental setup. Furthermore, their implementation is feasible using low-coherence laboratory X-ray sources. We investigate different approaches to include X-ray polychromaticity for sample thickness recovery using such techniques.

Approach: Single-shot Paganin (PT) and Arhatari (AT) propagation-based and speckle-based (ST) techniques are considered. The radiation beam polychromaticity is addressed using three different averaging approaches. The emission-detection process is considered for modulating the X-ray beam spectrum. The reconstructed thicknesses of three nylon-6 fibers with millimeter-range diameters, placed at various object-detector distances, are analyzed. In addition, the thickness of an in-house made breast phantom is recovered by using the multi-material Paganin technique (MPT) and compared with micro-CT data.
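As a rough illustration of the propagation-based branch, here is a minimal numpy sketch of single-material Paganin phase retrieval in its monochromatic form. The function name, the synthetic slab, and the material constants are assumptions for illustration only; the study's polychromatic averaging (e.g., per-spectral-component thickness recovery) is omitted here:

```python
import numpy as np

def paganin_thickness(projection, flat, pixel_size, distance, delta, mu):
    """Single-material Paganin phase retrieval (monochromatic form).

    projection : intensity image recorded at propagation distance `distance` [m]
    flat       : flat-field image (I0)
    pixel_size : detector pixel size [m]
    delta, mu  : refractive index decrement and linear attenuation coeff. [1/m]
    Returns the projected sample thickness map [m].
    """
    contact = projection / flat
    ny, nx = contact.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    low_pass = 1.0 + distance * delta * k2 / mu   # Paganin low-pass filter
    filtered = np.real(np.fft.ifft2(np.fft.fft2(contact) / low_pass))
    return -np.log(np.clip(filtered, 1e-12, None)) / mu

# Sanity check: a uniform slab of thickness t0 (pure absorption, no edges)
# is recovered exactly, since only the DC frequency carries signal.
t0, mu, delta = 2e-3, 100.0, 1e-7           # assumed material constants
projection = np.full((32, 32), np.exp(-mu * t0))
thickness = paganin_thickness(projection, np.ones_like(projection),
                              50e-6, 0.5, delta, mu)
```

A thickness-averaging (TA) style extension would call this per spectral component with energy-dependent delta and mu and average the resulting maps.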

Results: The best quantitative result is obtained for the PT and ST combined with the sample thickness averaging (TA) approach, which involves individual thickness recovery for each X-ray spectral component, and the smallest considered object-detector distance. The error in the recovered fiber diameters for both techniques is below 4%, despite the higher noise level in ST images. All cases provide estimates of fiber diameter ratios with an error of 3% with respect to the nominal diameter ratios. The breast phantom thickness difference between MPT-TA and micro-CT is about 10%.

Conclusions: We demonstrate the single-shot PT-TA and ST-TA techniques feasibility for thickness recovery of millimeter-sized samples using polychromatic microfocus X-ray sources. The application of MPT-TA for thicker and multi-material samples is promising.

Projected pooling loss for red nucleus segmentation with soft topology constraints.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-07-01 Epub Date: 2024-07-09 DOI: 10.1117/1.JMI.11.4.044002
Guanghui Fu, Rosana El Jurdi, Lydia Chougar, Didier Dormont, Romain Valabregue, Stéphane Lehéricy, Olivier Colliot

Purpose: Deep learning is the standard for medical image segmentation. However, it may encounter difficulties when the training set is small. Also, it may generate anatomically aberrant segmentations. Anatomical knowledge can be potentially useful as a constraint in deep learning segmentation methods. We propose a loss function based on projected pooling to introduce soft topological constraints. Our main application is the segmentation of the red nucleus from quantitative susceptibility mapping (QSM) which is of interest in parkinsonian syndromes.

Approach: This new loss function introduces soft constraints on the topology by magnifying small parts of the structure to be segmented so that they are not discarded during the segmentation process. To that purpose, we project the structure onto the three planes and then apply a series of MaxPooling operations with increasing kernel sizes. These operations are performed for both the ground truth and the prediction, and the difference is computed to obtain the loss function. As a result, it can reduce topological errors as well as defects in the structure boundary. The approach is easy to implement and computationally efficient.
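The projection-plus-pooling idea can be sketched in a few lines of numpy. This is an illustrative, non-differentiable reimplementation with assumed kernel sizes, not the authors' loss (which operates on soft network outputs inside a deep learning framework):

```python
import numpy as np

def max_pool2d(img, k):
    """Max-pooling with kernel size and stride k (non-overlapping windows)."""
    h, w = img.shape
    h, w = h - h % k, w - w % k
    return img[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

def projected_pooling_loss(gt, pred, kernels=(2, 4, 8)):
    """Project 3D masks onto the three planes, then compare multi-scale
    max-pooled projections of ground truth and prediction."""
    loss = 0.0
    for axis in range(3):                       # three orthogonal projections
        g, p = gt.max(axis=axis), pred.max(axis=axis)
        for k in kernels:                       # increasing kernel sizes
            loss += np.abs(max_pool2d(g, k) - max_pool2d(p, k)).mean()
    return loss / (3 * len(kernels))

# A small component dropped from the prediction is "magnified" by pooling:
gt = np.zeros((16, 16, 16))
gt[2:6, 2:6, 2:6] = 1.0          # main structure
gt[12:14, 12:14, 12:14] = 1.0    # small part that must not be discarded
pred = gt.copy()
pred[12:14, 12:14, 12:14] = 0.0  # prediction misses the small part

loss_identical = projected_pooling_loss(gt, gt)
loss_missing = projected_pooling_loss(gt, pred)
```

Because max-pooling propagates a single foreground voxel across an entire window, even a tiny missed component contributes to the pooled difference at every scale, which is what makes the constraint "soft" yet hard to ignore.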

Results: When applied to the segmentation of the red nucleus from QSM data, the approach led to a very high accuracy (Dice 89.9%) and no topological errors. Moreover, the proposed loss function improved the Dice accuracy over the baseline when the training set was small. We also studied three tasks from the medical segmentation decathlon challenge (MSD) (heart, spleen, and hippocampus). For the MSD tasks, the Dice accuracies were similar for both approaches but the topological errors were reduced.

Conclusions: We propose an effective method to automatically segment the red nucleus which is based on a new loss for introducing topology constraints in deep learning segmentation.

Pulmonary nodule detection in low dose computed tomography using a medical-to-medical transfer learning approach.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-07-01 Epub Date: 2024-07-09 DOI: 10.1117/1.JMI.11.4.044502
Jenita Manokaran, Richa Mittal, Eranga Ukwatta
Purpose: Lung cancer is the second most common cancer and the leading cause of cancer death globally. Low dose computed tomography (LDCT) is the recommended imaging screening tool for the early detection of lung cancer. A fully automated computer-aided detection method for LDCT will greatly improve the existing clinical workflow. Most of the existing methods for lung nodule detection are designed for high-dose CTs (HDCTs), and those methods cannot be directly applied to LDCTs due to domain shifts and the inferior quality of LDCT images. In this work, we describe a semi-automated transfer learning-based approach for the early detection of lung nodules using LDCTs.

Approach: In this work, we developed an algorithm based on the object detection model, you only look once (YOLO), to detect lung nodules. The YOLO model was first trained on CTs, and the pre-trained weights were used as initial weights during the retraining of the model on LDCTs using a medical-to-medical transfer learning approach. The dataset for this study was from a screening trial consisting of LDCTs acquired from 50 biopsy-confirmed lung cancer patients obtained over 3 consecutive years (T1, T2, and T3). About 60 lung cancer patients' HDCTs were obtained from a public dataset. The developed model was evaluated using a hold-out test set comprising 15 patient cases (93 slices with cancerous nodules) using precision, specificity, recall, and F1-score. The evaluation metrics were reported patient-wise on a per-year basis and averaged over the 3 years. For comparative analysis, the proposed detection model was also trained using pre-trained weights from the COCO dataset as the initial weights. A paired t-test and chi-squared test with an alpha value of 0.05 were used for statistical significance testing.

Results: The results were reported by comparing the proposed model developed using HDCT pre-trained weights with the model using COCO pre-trained weights. The former approach versus the latter obtained a precision of 0.982 versus 0.93 in detecting cancerous nodules, a specificity of 0.923 versus 0.849 in identifying slices with no cancerous nodules, a recall of 0.87 versus 0.886, and an F1-score of 0.924 versus 0.903. As the nodules progressed, the former approach achieved a precision of 1, a specificity of 0.92, and a sensitivity of 0.930. The statistical analysis performed in the comparative study resulted in a p-value of 0.0054 for precision and a p-value of 0.00034 for specificity.

Conclusions: In this study, a semi-automated method was developed to detect lung nodules in LDCTs, using HDCT pre-trained weights as the initial weights and retraining the model. The results were then compared by replacing the HDCT pre-trained weights with COCO pre-trained weights. The proposed method may identify early lung nodules during the screening program, reduce overdiagnosis and follow-ups caused by misdiagnosis in LDCT, provide treatment options for affected patients, and reduce mortality rates.
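The four reported metrics follow directly from slice-level confusion counts. A small sketch with made-up counts (not the study's data) shows the definitions used:

```python
def detection_metrics(tp, fp, tn, fn):
    """Precision, recall (sensitivity), specificity, and F1 from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, specificity, f1

# Illustrative counts only -- not taken from the paper's test set.
precision, recall, specificity, f1 = detection_metrics(tp=90, fp=10, tn=80, fn=20)
```

Precision and specificity are the two metrics for which the paper reports significant differences between the HDCT- and COCO-initialized models, which is why both appear alongside recall and F1 in the evaluation.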